++, but you could go one step further.
Not only might the same second pass make this useful, but pairing different front ends with different back ends that share the same intermediate format makes sense, too.
Lots of data formatting tool chains go, for instance, from TeX, POD, PostScript, or SVG to some annotated intermediate format. Then, other programs read that standardized (standard within the tool chain, anyway) intermediate format and produce HTML 3, HTML 4, XHTML 1, DocBook, info, man, PostScript, PDF, and so on.
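A minimal sketch in C of that shape, assuming a made-up line-oriented intermediate format ("TYPE|text" records); the same reader serves as either an HTML back end or a man back end depending on how it's invoked:

    /* Hypothetical back end for the "one intermediate format, many
     * outputs" idea.  The TYPE|text format is invented for this
     * sketch; real tool chains use richer formats, but the shape is
     * the same: every front end writes it, every back end reads it. */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        int html = (argc > 1 && strcmp(argv[1], "html") == 0);
        char line[1024];

        while (fgets(line, sizeof line, stdin)) {
            line[strcspn(line, "\n")] = '\0';   /* trim the newline */
            char *text = strchr(line, '|');     /* split TYPE|text  */
            if (!text)
                continue;
            *text++ = '\0';
            if (strcmp(line, "H1") == 0)
                printf(html ? "<h1>%s</h1>\n" : ".SH %s\n", text);
            else if (strcmp(line, "P") == 0)
                printf(html ? "<p>%s</p>\n" : "%s\n.PP\n", text);
        }
        return 0;
    }

With a (hypothetical) pod2mid on the front, "pod2mid foo.pod | mid2any html" and "pod2mid foo.pod | mid2any man" get two output formats out of one parser, and adding a format on either side never multiplies the work on the other.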
Some compilers do the same sort of thing, actually. Things such as P-code, Java bytecode, Parrot code, or C are produced by any number of front-end translators. Then the Java bytecode can be run or recompiled to a static executable, the C can be compiled for any number of architectures (so long as it was written with portability in mind), and so on.
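Here's a toy version of that, assuming nothing beyond a C compiler: a "front end" whose entire source language is one arithmetic expression, translated to C and handed off to the real compiler (the way cfront once did for C++ and f2c still does for Fortran):

    /* Toy front end that compiles to C.  The source language is just
     * the expression in argv[1]; everything past this file is the C
     * compiler's problem, which is the whole point of the scheme. */
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const char *expr = (argc > 1) ? argv[1] : "1 + 2";

        printf("#include <stdio.h>\n");
        printf("int main(void)\n{\n");
        printf("    printf(\"%%d\\n\", %s);\n", expr);
        printf("    return 0;\n}\n");
        return 0;
    }

Pipe its output straight into the C compiler (gcc and clang both accept "-x c -" to compile from stdin): ./toyfront '2 * 21' | cc -x c - -o answer && ./answer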
In theory one could, if demented enough, write parts of a program in Ada, Awk, some BASIC dialects, C, C++, Eiffel, Euphoria, Fortran, Haskell, INTERCAL, Java, Pascal, Prolog, Scheme, and Simula, then translate them all to C, since every one of those languages has a translator available that targets C. Changes can then be made at the C level to tie the pieces closer together or to optimize a bit, the compiler can optimize the C, and all of the languages can work together without worrying about one another.

Many C compilers actually do produce assembly and then call the assembler, or have an option to produce assembly for the target platform instead of making an object file. The C can even be translated to other languages, like Fortran, Java, Java bytecode, C#, or Ada. Don't expect Ada-to-C-to-Fortran-to-C-to-Ada to look much like the original program, of course.
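With gcc and most Unix compilers that option is -S: "cc -S prog.c" stops before the assembler and leaves prog.s next to your source. As for meeting at the C level, here's a hedged sketch of the Fortran case; ADDONE is a made-up routine, and the conventions noted in the comments (trailing underscore, arguments by reference, INTEGER as C long) are the usual f2c ones, assuming f2c and its runtime library are installed:

    /* main.c: calls a hypothetical Fortran subroutine after f2c has
     * translated it to C.  The Fortran side (addone.f) would be:
     *
     *       SUBROUTINE ADDONE(N)
     *       INTEGER N
     *       N = N + 1
     *       END
     *
     * f2c appends an underscore to external names, passes arguments
     * by reference, and declares subroutines as returning int. */
    #include <stdio.h>

    extern int addone_(long *n);    /* f2c maps INTEGER to C's long */

    int main(void)
    {
        long n = 41;
        addone_(&n);                /* the Fortran side increments n */
        printf("%ld\n", n);         /* prints 42 */
        return 0;
    }

Build it with something like "f2c addone.f && cc main.c addone.c -lf2c && ./a.out"; once everything is C, the linker neither knows nor cares which language each object file started out in.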