Problems with C and C++ Separate Compilation

11 11 2008

After graduation, a couple of months of watching television, a cross-country drive (if you get the chance, you should drive across northern Wyoming), settling in at Microsoft, and getting situated in Seattle, I’m back.  And I’m annoyed at C.

C is a fantastic language in many ways.  It is essentially an abstract assembly language.  Almost any general-purpose operation which can be done in assembly can be done in C, and it makes building large, relatively portable systems much easier.  The only things which can’t be done directly in C are operations on specific registers (and it’s easy enough to link in short assembly routines when that’s necessary).

Most of my early interest in programming languages, and most of my problems when I first started doing systems work, were related to basic typing issues: the ugliness of casting things to void pointers and back, the conversions between various integer types, and other relatively mundane C errors which are easy to make and hard to debug.  I came to believe that, beyond type system and memory safety improvements, the additional features of other languages, while extremely useful, were mostly conveniences rather than fundamental improvements.

But the past several months have changed my mind.  While the ease of reinterpreting a pointer to one type of object as a pointer to another type at the same address is certainly a bane as often as it is a boon, a reasonably experienced C programmer learns to recognize the common symptoms of such problems.  A more serious, though less frequently encountered, problem has to do with type identity and versioning.

Consider the case where you write an application (in C) to use an external library.  Your application interfaces with this library through two means: #include-ing its public header, and being linked against the library’s object file.  Initially these two interfaces will probably agree (if, for example, you just installed the library).  Now move forward a couple of months and update the library.  Did your update include both the object file and the header file?  If not, then any size or layout changes to the library’s data types may cause non-obvious errors: your application will happily compile and link, but the results you get back from the library may not be what you expect.
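
To make that concrete, here is a minimal sketch of the failure mode (the Widget structure and widget_init function are hypothetical stand-ins for the library’s types):

/* widget.h as the application last saw it (version 1) */
struct Widget {
    int id;
};
void widget_init(struct Widget *w);

/* widget.h as the freshly updated library was built (version 2) */
struct Widget {
    int id;
    int flags;    /* new field: the size and layout of Widget have changed */
};
void widget_init(struct Widget *w);

/* app.c, still compiled against the version 1 header */
#include "widget.h"
int main(void) {
    struct Widget w;    /* allocated with the old, smaller size */
    widget_init(&w);    /* the version 2 library writes w.flags past the end of w */
    return 0;
}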

What if it’s your own library, or just an object file within your project?  These tend to have a fair amount of churn.  Most moderately-sized projects use separate compilation to isolate code changes and avoid recompiling code that hasn’t changed.  But when these object files are tied together, there are no checks to ensure that the data structures exchanged between them are consistent; the C compilation model assumes that your data structure definitions are stable, or that you recompile from scratch every time.  It also makes the reasonable assumption that the same compiler is used for every object file.  On the off chance you violate that expectation (perhaps with a compiler update), the memory layout of the same structure definition may differ between object files.

It’s possible to work around this problem with a build system, if you track every header file dependency explicitly.  For large projects, this can be difficult.  Especially on fast-moving projects, it’s easy to add an include to a .c file without remembering to add the dependency to the build system configuration.  Once this missing dependency has gone unnoticed for a while it becomes considerably harder to track down, and developers end up either spending their time debugging the build system or giving up on the broken incremental build and rebuilding from scratch every time.

Another permutation of the same problem is that of unrelated structures with the same name.  It’s easy to imagine a large system with two subsystems defining structures named CALLBACK_ARGS.  What happens when one section of code needs to interact with both of these systems?  If all appropriate headers are included, then the name collision will be detected.  If only one of the conflicting headers is included, then depending on how the headers are organized it becomes trivially easy to pass the wrong structure to a function.  Especially when working on a new system, it usually seems reasonable to assume that structures of the same name are the same semantic (and in-memory) structure.
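
A minimal sketch of how this plays out (the subsystem headers and functions are hypothetical); only one of the conflicting headers is included, and a hand-written declaration stands in for the other:

/* net/callbacks.h */
struct CALLBACK_ARGS {
    int socket_fd;
    int bytes_ready;
};
void net_dispatch(struct CALLBACK_ARGS *args);

/* ui/callbacks.h */
struct CALLBACK_ARGS {
    void *window;
    int event_code;
};
void ui_dispatch(struct CALLBACK_ARGS *args);

/* glue.c -- only the UI header is included */
#include "ui/callbacks.h"
void net_dispatch(struct CALLBACK_ARGS *args);    /* declared by hand "to save an include" */

void handle_event(struct CALLBACK_ARGS *args) {
    ui_dispatch(args);
    net_dispatch(args);    /* compiles and links; the network code reinterprets UI fields */
}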

Namespaces can help alleviate the same-name problem: including only one structure’s header and trying to pass that to another function will result in an error complaining about passing an argument of type Subsystem1::CALLBACK_ARGS* to a function expecting a Subsystem2::CALLBACK_ARGS*.  This doesn’t actually prevent you from declaring two structures of the same name in the same namespace in separate header files, but if namespaces are used judiciously to separate subsystems then the likelihood of doing so accidentally is greatly reduced.
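
A rough sketch of the same situation with namespaces (again with hypothetical headers and functions):

// subsystem1/callbacks.h
namespace Subsystem1 {
    struct CALLBACK_ARGS { int socket_fd; };
    void dispatch(CALLBACK_ARGS *args);
}

// subsystem2/callbacks.h
namespace Subsystem2 {
    struct CALLBACK_ARGS { void *window; };
    void dispatch(CALLBACK_ARGS *args);
}

// glue.cpp
#include "subsystem1/callbacks.h"
#include "subsystem2/callbacks.h"

void handle(Subsystem1::CALLBACK_ARGS *args) {
    Subsystem2::dispatch(args);    // error: cannot convert Subsystem1::CALLBACK_ARGS* to Subsystem2::CALLBACK_ARGS*
}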

The versioning problem is a direct result of how #include works in C.  Rather than being a real part of the language, #include is a preprocessor directive meaning “take the text of the specified file and pretend I typed it in place right here, then pass the result to the actual compiler.”  At its core, a C compiler handles only one file at a time, so it doesn’t actually know anything about other object files (or at least, it doesn’t directly use information about them).  That’s the linker’s job, and the linker knows nothing about structures per se – only about matching symbolic references.
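
A small illustration of what that means in practice (file names hypothetical); running only the preprocessor, e.g. cc -E app.c, shows what the compiler proper actually receives:

/* widget.h */
struct Widget {
    int id;
    int flags;
};

/* app.c */
#include "widget.h"
struct Widget w;

/* After preprocessing, the compiler sees roughly:

       struct Widget {
           int id;
           int flags;
       };
       struct Widget w;

   Nothing about the library's object file, or its notion of Widget's layout,
   ever enters the picture; only the pasted text does. */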

One solution is to store all structure layout information in object files, and generate code for accessing those structures once at link time.  This slows the linking process, but prevents the mismatched definition problem; all code for accessing the structure is generated at the same time from the same definition.  This blurs the distinction between compiler and linker, but adds great value.

Doing this at compile time for static linking is relatively cheap and straightforward.  Doing it at load-link time is a bit trickier.  While compilers and static linkers can play any tricks they want for code which only interacts directly with itself, dynamically linked executable formats must be defined in standard ways, limiting what can be done.  I don’t know of any major executable format which supports this (most were designed in the heyday of C and C++, when they were still the best languages around), but that is a matter of format standards rather than a technical limitation.  This would be more expensive than current dynamic linking, but doable.  A compiler could choose to use a richer format for its own object files and then fall back to standard formats when asked to generate a standard library or executable.  OCaml does this; given a Test.cmx and a Mod.cmx compiled against differing interface files for a Test module data structure, the linker reports:

Yggdrasil:caml colin$ ocamlopt Test.cmx Mod.cmx
Files Mod.cmx and Test.cmx make inconsistent assumptions over interface Test
Yggdrasil:caml colin$ 

Unfortunately, C and C++ have a compilation and linking model which is now so well-established that I suspect any proposal to fix this in those languages’ standards would meet with significant resistance.  At the same time, I can’t think of any desired C/C++ semantics that this would break, so maybe it could happen.





Working with Large ML Code Bases

11 02 2008

It’s interesting how tool prevalence and standardization (or lack thereof) for a programming language are very strong indicators of how many people have used it to build significant projects.

My honors thesis has me hacking up ML implementations – I started with OCaml, but later switched to MLkit after realizing that it already had support for some features I was working to implement in OCaml. As you might imagine, the source for the compiler and runtime of a reasonably mature language (the ML family of languages started in the mid-70s) is a hefty chunk of code. The layout of the MLkit source tree is fairly clear (there is a conveniently-named “Compiler” directory, containing such subdirectories as “Backend/X86/”). The layout of the OCaml source is slightly less intuitive, but it comes with an informative README which points out where various parts of the system reside. Yet even within these more focused segments of the source trees, there is a lot of code. And it’s not boilerplate – when you need functions and types for spitting out most of the relevant instructions on the x86 architecture, or for handling every abstract syntax tree node of a very expressive language, you’re going to have a lot of code. But there are no good tools for managing a code base of this size in any ML descendant.

I ran into this to some degree with the class project for my software engineering class last year (CS190 – 2007’s page is missing… I need to look into that). We wrote a frontend to GDB using OCaml. The idea was to display separate, independently-controlled panes for each thread of a multithreaded program (obviously, this didn’t scale well past, oh, 5 threads). It worked reasonably well by the end of the semester, but not well enough that we would ever release it. There were a few times, in a project which totaled only a few hundred lines of code, that I wanted some mechanized help working through other modules quickly to find what I needed. Several times I resorted to emailing a peer – hardly a terrible hardship, but not something which should have been necessary simply to find a function in a small code base. The only tools we had for managing the code base were… Subversion and Vim.

MLkit is nearly 240K lines of code in over 1000 files (that count includes comments and the odd blank line, though as it is primarily a research project, comments are few and far between). Even cutting it down to just the compiler (ignoring its dependencies elsewhere in the tree), we’re looking at almost 56K lines of code in over 150 files.

C, C++ and Java programmers have a plethora of tools at their disposal for working with code bases of this size. When I was at Sun this past summer, between cscope and having OpenGrok set up on the kernel code, finding definitions, declarations, types, etc. was usually a breeze. Both have their limitations, and their bugs, but they’re still great. And that’s not even using a lot of the IDE support present in major IDEs like Visual Studio or Eclipse.

ML programmers have no reliable systems which I can find.

A quick search for OCaml IDEs yields a few results. A page listing a few of what look to be mostly-dead OCaml plugins for Eclipse; of these, ODT has apparently seen activity this year, but seems to offer little beyond syntax highlighting and a build tool. A from-scratch IDE, Camelia, built by my friend Nate when we TAed CS017 (now unfortunately renamed CSCI0170, thanks to the abomination that is Sungard’s Banner…). Searching Google for any of OCaml, SML, ML, or Haskell together with “cscope” yields nothing of use. Jane Street Capital uses OCaml; I wonder what tools they use.

Oddly enough, it seems the winner in this case may be the F# plugin for Visual Studio. It has support for following references around large code bases, and it is actively maintained (as MS intends F# to be a mainstream .NET language in the near future). Unfortunately, it can also only deal with F# files, which are close to, but not quite close enough to, Standard ML files…

Perhaps I’ll build a cscope-alike for ML myself.

After I finish my thesis.

EDIT: A commenter on reddit suggested taking a look at ocamlbrowser. It seems to be at least mostly what I’m looking for (for OCaml), though I can’t seem to get it to do anything other than display the types of functions in the standard library modules, so I can’t say for sure. There’s a setting to change the module search path, but it doesn’t seem to change anything for me. I also find it odd that, despite the fact that it is included with the OCaml base distribution (at least in Debian), no web search for any query I can think of expressing the sort of task ocamlbrowser is supposed to facilitate yields any results referencing it. This suggests that very few people use it – maybe I’m not the only one who can’t get it to work on an arbitrary code base easily. I might play with it some more once I have more time.

EDIT2: Another reddit commenter mentioned otags (in the vein of ctags), which looks good.  If only I were still working in OCaml :-p  On the other hand, looking up the link for ctags made me take another look at it (I had foolishly assumed that it, like cscope, supported only C and C++).  In particular, someone has reimplemented ctags with broader language support (and called it Exuberant Ctags).  The language support list claims that it in fact works on SML!  A download and compile later, I’m quite the happy camper!  It works just like the standard ctags.  Many thanks to zem on reddit for inspiring a very fortuitous Google search.







