Problems with C and C++ Separate Compilation

11 11 2008

After graduation, a couple months of watching television, driving cross-country (if you get the chance you should drive across northern Wyoming), settling in at Microsoft and living in Seattle, I’m back.  And I’m annoyed at C.

C is a fantastic language in many ways.  It is essentially an abstract assembly language.  Almost any general-purpose operation which can be done in assembly can be done in C, and it makes building large, relatively portable systems much easier.  The only things which can’t be done directly in C are operations on specific registers (and it’s easy enough to link in short assembly routines when that’s necessary).

Most of my early interest in programming languages and most of my problems when I first started doing systems work were related to basic typing issues: the ugliness of casting things to void pointers and back, the conversions between various integer types, and other relatively mundane C errors which are easy to make and hard to debug.  I came to believe that the features other languages offer beyond type system and memory safety improvements, while extremely useful, were mostly great conveniences rather than fundamental improvements.

But the past several months have changed my mind.  While the ease of turning a pointer to one type of object into a pointer to another type at the same location is certainly a bane as often as it is a boon, a reasonably experienced C programmer begins to recognize the common symptoms of such problems.  A more serious, though less frequently encountered, problem has to do with type identity and versioning.

Consider the case where you write an application (in C) to use an external library.  Your application interfaces with this library through two means: #include-ing its public header, and being linked to the library’s object file.  Initially these two interfaces will probably be consistent (if, for example, you just installed the library).  Now move forward a couple of months.  Update your library.  Did your update include both the object file and the header file?  If not, then any size or layout changes to the library’s data types can cause non-obvious errors; your application will happily compile and link, but the results you get back from the library may not be what you expect.
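
As a contrived sketch of this failure mode (file, type, and function names are all invented for illustration), consider a library object compiled against the old structure definition and an application compiled against the new one.  Both translation units compile cleanly, the program links cleanly, and it quietly returns the wrong field:

/* lib.c -- compiled months ago, when config had only one field */
struct config { int timeout; };

int get_timeout(struct config *c) { return c->timeout; }

/* app.c -- compiled against an updated header that added a field first */
#include <stdio.h>

struct config { int retries; int timeout; };   /* new layout */
int get_timeout(struct config *c);

int main(void)
{
    struct config c = { 3, 30 };               /* retries = 3, timeout = 30 */
    printf("%d\n", get_timeout(&c));           /* prints 3: lib.c reads offset 0 */
    return 0;
}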

What if it’s your library, or just an object file in your project?  These tend to have a fair amount of turnover.  Most moderately-sized projects use separate compilation to isolate code changes and avoid recompiling code that hasn’t changed.  But when tying these object files together, there are no checks to ensure that data structures exchanged between object files are consistent; the C compilation model assumes that your data structure definitions are stable, or that you recompile from scratch every time.  It also makes the reasonable assumption that the same compiler is used for every object file.  On the off chance you violate that expectation (perhaps with a compiler update), memory layouts of the same structure definition may differ between object files.

It’s possible to work around this problem with a build system if you track every header file dependency explicitly.  For large projects, this can be difficult.  Especially with fast-moving projects, it’s easy to add an include to a .c file without remembering to add the dependency to the build system configuration.  Once this missing dependency goes unnoticed for some time it becomes considerably more difficult to track down, and developers end up either spending their time debugging the build system or giving up on the broken incremental build and rebuilding from scratch every time.
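
One common mitigation (assuming GCC and a make-style build; not a fix for the underlying problem) is to have the compiler emit the dependency list itself instead of maintaining it by hand, and pull the generated .d files into the build system:

$ gcc -MMD -c app.c    # also writes app.d: a make rule listing every header app.c includes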

Another permutation of the same problem is that of unrelated structures with the same name.  It’s easy to imagine a large system with two subsystems defining structures named CALLBACK_ARGS.  What happens when one section of code needs to interact with both of these systems?  If all appropriate headers are included, then the name collision will be detected.  If only one of the conflicting headers is included, then depending on how the headers are organized it becomes trivially easy to pass the wrong structure to a function.  Especially when working on a new system, it usually seems reasonable to assume that structures of the same name are the same semantic (and in-memory) structure.
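
A contrived sketch (all names invented): the networking subsystem defines its own CALLBACK_ARGS, but a file which has only the UI subsystem’s definition in scope registers a callback with it.  Because the structure tag matches the hand-copied prototype, this compiles and links without complaint and simply misreads the structure at runtime:

/* net.c -- the networking subsystem */
struct CALLBACK_ARGS { int socket; void *buf; };

void net_register(void (*cb)(struct CALLBACK_ARGS *))
{
    struct CALLBACK_ARGS args = { 42, 0 };
    cb(&args);
}

/* ui_handlers.c -- only the UI subsystem's definition is in scope */
#include <stdio.h>

struct CALLBACK_ARGS { int widget_id; int event; };      /* from ui.h */
void net_register(void (*cb)(struct CALLBACK_ARGS *));   /* prototype copied by hand */

static void on_packet(struct CALLBACK_ARGS *args)
{
    /* The socket descriptor comes out as a "widget id". */
    printf("widget %d, event %d\n", args->widget_id, args->event);
}

int main(void) { net_register(on_packet); return 0; }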

Namespaces can help alleviate the same-name problem: including only one structure’s header and trying to pass that to another function will result in an error complaining about passing an argument of type Subsystem1::CALLBACK_ARGS* to a function expecting a Subsystem2::CALLBACK_ARGS*.  This doesn’t actually prevent you from declaring two structures of the same name in the same namespace in separate header files, but if namespaces are used judiciously to separate subsystems then the likelihood of doing so accidentally is greatly reduced.
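
A minimal sketch of the same situation with namespaces (again, names invented); here the mistake is caught at compile time rather than silently accepted:

namespace Subsystem1 { struct CALLBACK_ARGS { int widget_id; int event; }; }

namespace Subsystem2 {
    struct CALLBACK_ARGS { int socket; void *buf; };
    void register_callback(void (*cb)(CALLBACK_ARGS *));
}

static void on_event(Subsystem1::CALLBACK_ARGS *) { }

void setup()
{
    // error: cannot convert 'void (*)(Subsystem1::CALLBACK_ARGS*)'
    //        to 'void (*)(Subsystem2::CALLBACK_ARGS*)'
    Subsystem2::register_callback(on_event);
}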

The versioning problem is a direct result of how #include works in C.  Rather than being a direct part of the language, #include is a preprocessor directive equivalent to “take the text of the specified file and pretend I typed it in place right here, then pass the result to the actual compiler.”  At its core, a C compiler handles only a single file at a time, so it doesn’t actually know anything about other object files (or at least, it doesn’t directly use information about other object files).  That’s the linker’s job, and the linker knows nothing about structures per se – only matching symbolic references.

One solution is to store all structure layout information in object files, and generate code for accessing those structures once at link time.  This slows the linking process, but prevents the mismatched definition problem; all code for accessing the structure is generated at the same time from the same definition.  This blurs the distinction between compiler and linker, but adds great value.

Doing this for static linking is relatively cheap and straightforward.  Doing it at load-link time is a bit trickier.  While compilers and static linkers can play any tricks they want for code which only interacts directly with itself, dynamically linked executable formats must be defined in standard ways, limiting what can be done.  I don’t know of any major executable formats which support this (most were designed in the heyday of C and C++, when they were still the best languages around), but that is a matter of format standards rather than a technical limitation.  This would be more expensive than current dynamic linking, but doable.  A compiler could choose to use a richer format for its own object files and then fall back to standard formats when asked to generate a standard library or executable.  OCaml does this; if Test.cmx and Mod.cmx were compiled against differing interface files for a Test module data structure, the link step fails:

Yggdrasil:caml colin$ ocamlopt Test.cmx Mod.cmx
Files Mod.cmx and Test.cmx make inconsistent assumptions over interface Test
Yggdrasil:caml colin$ 

Unfortunately C and C++ have a compilation and linking model which is now so well-established that any proposal to fix this in the standards for those languages would likely meet with significant resistance.  Though at the same time, I can’t think of any desired C/C++ semantics that this would break, so maybe it could happen.





Mercurial + MLKit + OS X = Failure, plus workaround

21 03 2008

My thesis, as I’ve mentioned several times before, has me hacking up the source to the ML Kit compiler. Because the current build arrangement I’ve been using requires a great deal of RAM, I’ve been using my desktop machine and CS department machines exclusively for building. However, I’m going home for part of spring break this coming week, there’s no computer with appropriate specs at home, and editing code over SSH is rather painful because of the high latency involved. The obvious solution, since I’m using Mercurial to manage my thesis, was to do a checkout on my laptop (a G4 Powerbook lacking enough RAM to build, and lacking the correct processor architecture to test), edit locally, and just push intermediate versions back to the CS systems to build there. This would result in some useless crap in the commit logs, but it beats the alternatives. I had an existing checkout from back in January, when I was using OCaml, so I went to update and saw the following:

Yggdrasil:thesis colin$ hg up
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
Yggdrasil:thesis colin$ hg pull
remote: ####################
pulling from ssh://cs/thesis/
searching for changes
adding changesets
adding manifests
adding file changes
added 44 changesets with 18762 changes to 17825 files (run 'hg update' to get a working copy)
remote:
remote: Forwarding you to cslab0a
remote:
remote: ####################
remote:
Yggdrasil:thesis colin$ hg up
abort: case-folding collision between mlkit/kit/basis/MLB/RI_PROF/Splaymap.sml.d and mlkit/kit/basis/MLB/RI_PROF/SPLAYMAP.sml.d
Yggdrasil:thesis colin$

I saw this, noticed it was a temp file (ML Kit makes an MLB directory for temp files), hg rm’d all such files in the CS copy, committed, and tried again. Same result. Poked around online, and after a bit found a message from someone who had hit a similar case-folding collision between two distinct files, and a response pointing out the culprit: Mac OS X’s case-insensitive filesystem.

When I installed the latest OS X, I intentionally opted for case-insensitive because many Mac programs have a habit of breaking on case-sensitive filesystems. And now I can’t have a copy of the ML Kit source on my machine. It’s common practice in SML programs to use all-caps names for files containing signatures (akin to interfaces, for the non-ML folk) and upper camel case for implementations. So I can’t just rename a couple of files; it seems you simply can’t have the ML Kit source on most OS X installations’ local disks.

So was it time to give up on this? No, of course not – I need to work on my damn thesis while I’m home. Screw Apple and its crappy filesystem design choice.

Fortunately Apple did make one excellent, excellent design choice with OS X, which was making disk images a standard, common object. So I used the Disk Utility tool to make a disk image containing a case-sensitive filesystem, mounted it, and did a checkout inside that. It’s pretty stupid that I needed to do this, but at least I can work on my thesis over break without running Vim over SSH.
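
For anyone hitting the same wall, the whole workaround amounts to something like the following (image size, image name, and volume name are whatever you like), after which you clone inside the mounted volume as usual:

$ hdiutil create -size 2g -fs "Case-sensitive HFS+" -volname thesis thesis-cs.dmg
$ hdiutil attach thesis-cs.dmg
$ cd /Volumes/thesis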

Now if you’ll excuse me, my friend Owen is heading off to Apple’s filesystem team next year, and I need to go give him grief for his group’s terrible design choice.





Working with Large ML Code Bases

11 02 2008

It’s interesting how tool prevalence and standardization (or lack thereof) for a programming language are very strong indicators of how many people have used it to build significant projects.

My honors thesis has me hacking up ML implementations – I started with OCaml, but later switched to MLkit after realizing that it already had support for some features I was working to implement in OCaml. As you might imagine, the source for a compiler and runtime for a reasonably mature language family (ML started in the mid-70s) is a hefty chunk of code. The layout of the MLkit source tree is fairly clear (there is a conveniently-named “Compiler” directory, containing such subdirectories as “Backend/X86/”). The layout of the OCaml source is slightly less intuitive, but it comes with an informative README which points out where various parts of the system reside. Yet even within these more focused segments of the source trees, there is a lot of code. And it’s not boilerplate – when you need functions and types for spitting out most of the relevant instructions on the x86 architecture, or handling every abstract syntax tree node of a very expressive language, you’re going to have a lot of code. But there are no good tools for managing a code base of this size in any ML descendant.

I ran into this to some degree with the class project for my software engineering class last year (CS190 – 2007’s page is missing… I need to look into that). We wrote a frontend to GDB using OCaml. The idea was to display separate, independently-controlled panes for each thread of a multithreaded program (obviously, this didn’t scale well past, oh, 5 threads). It worked reasonably well at the end of the semester, but not well enough that we would ever release it. There were a few times, in a project which totaled only a few hundred lines of code, that I wanted some mechanized help working through other modules quickly to find what I needed. Several times I resorted to emailing a peer – hardly a terrible horror, but not something which should have been necessary for simply finding a function in a small code base. The only tools we had for managing the code base were… Subversion and Vim.

MLkit is nearly 240K lines of code in over 1000 files (that count includes comments – though as a primarily research project it has few of those – and a few odd linebreaks). Even cutting it down to just the compiler (ignoring its dependencies elsewhere in the tree), we’re looking at almost 56K lines of code in over 150 files.

C, C++ and Java programmers have a plethora of tools at their disposal for working with code bases of this size. When I was at Sun this past summer, between cscope and having OpenGrok set up on the kernel code, finding definitions, declarations, types, etc. was usually a breeze. Both have their limitations, and their bugs, but they’re still great. And that’s not even using a lot of the IDE support present in major IDEs like Visual Studio or Eclipse.

ML programmers have no reliable equivalents that I can find.

A quick search for OCaml IDEs yields a few results. A page listing a few of what look to be mostly-dead OCaml plugins for Eclipse. Of these, ODT has apparently seen activity this year, but seems to offer little beyond syntax highlighting and a build tool. A from-scratch IDE, Camelia, built by my friend Nate when we TAed CS017 (now unfortunately renamed CSCI0170, thanks to the abomination that is Sungard’s Banner…). Searching for any of OCaml, SML, ML, or Haskell with “cscope” on Google yields nothing of use. Jane Street Capital uses OCaml. I wonder what tools they use.

Oddly enough, it seems the winner in this case may be the F# plugin for Visual Studio. It has support for following references around large code bases, and is actively maintained (as MS intends it to be a mainstream .NET language in the near future). Unfortunately, it can also only deal with F# files, which are close to, but not quite close enough to, Standard ML files…

Perhaps I’ll build a cscope-alike for ML myself.

After I finish my thesis.

EDIT: A commenter on reddit suggested taking a look at ocamlbrowser. It seems to be at least mostly what I’m looking for (for OCaml), though I can’t seem to get it to do anything other than display the types for functions in the standard module library, so I can’t say for sure. There’s a setting to change the module search path, but it doesn’t seem to change anything for me. I also find it odd that, despite the fact that it is included with the OCaml base distribution (at least in Debian), no web search for any query I can think of which expresses the sort of task ocamlbrowser is supposed to facilitate yields any results referencing it. This suggests that very few people use it – maybe I’m not the only one who can’t get it to work on an arbitrary code base easily. Might play with it some more once I have the time.

EDIT2: Another reddit commenter mentioned otags (in the vein of ctags), which looks good.  If only I were still working in OCaml :-p On the other hand, looking up the link for ctags made me look at it again (I had foolishly assumed that it, like cscope, supported only C and C++).  In particular, apparently someone reimplemented ctags for broader language support (and called it Exuberant Ctags).  The language support list claims that it in fact works on SML!  A download and compile later, I’m quite the happy camper!  It works just like the standard ctags.  Many thanks to zem on reddit for inspiring a very fortuitous Google search.
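
For anyone else trying this, the usage is exactly what you’d expect from ctags (assuming the checkout lives in a directory named mlkit): generate the tags file over the tree, then jump around from Vim with Ctrl-] on an identifier, or :tag followed by a name.

$ cd mlkit
$ ctags -R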





Variant Interfaces

1 01 2008

Today I finished my first pass reading of Colimits for Concurrent Collectors, and was struck with an interesting idea. The paper is a demonstration of an approach to refining specifications using the notion of colimits. I don’t fully understand the refinement process yet (hence the “first pass reading”). The paper contains many example specifications written in a variant of Specware, specifying a safe interface between a mutator and garbage collector without stop-the-world semantics – this is the case study for the refinement.

An interesting pattern of sorts is used. A number of the specifications for portions of the system are parameterized by other specifications. A common idiom in the examples is a line similar to:

MORPHISM Cleaning-View = ONLY nodes,roots,free,marked,workset,unmark

What this line means is that the current specification, when it refers to an instance of the Cleaning-View specification, can only access the mentioned observers. This is interesting if we translate it to interfaces in an object-oriented environment. What this amounts to is essentially defining an interface (here, a specification) and then multiple subsets of this interface (here, imported subsets of observer functions). One application of this (and I think the most interesting) is using the sub-interfaces as restricted capabilities on objects. This is a reasonably common use of interfaces in OO languages.

You can already mimic this in normal object-oriented languages by defining minimal subsets, and larger subsets as interfaces extending the smaller ones. But what about keeping siblings with overlap in sync? A nice alternative would be to be able to specify all of the siblings together, as a set of variant interfaces (similar to the notion of variant types). For example, in pseudo-Java:

public interface Primary {
    void func1();
    void func2(int);
    void func3(double);

    subset Takeargs {func2, func3};
    subset Noargs {func1};
    subset Nofloatingpoint {func1, func2};
}

This example would of course need to be cleaned up a bit to deal with method overloading.

The problem with doing the equivalent of this in normal Java is keeping these “sub-interfaces” in sync with the main interface, and keeping overlapping siblings in sync without just having an arbitrarily large hierarchy of interfaces. Allowing joint specifications, and then referring to variants through names like Primary.Takeargs (to say that a variable should be of type Primary, but the current scope should only have access to the Takeargs subset) would ease this difficulty, making the use of interfaces for restricted control slightly easier to deal with as a programmer.

I also wonder what could be done with this as an extension to a language such as Spec#, with pre- and post-conditions on methods. You could then deduce that objects using only a restricted subset of an interface might maintain certain invariants (such as making no modifications to an object), more easily than before. More importantly, you could guarantee it statically at compile time – the interface variants would be subtypes of the main interface type. Without the explicit sub- and sibling-interface relationships it would still be possible to statically verify properties of interest, as the compiler can see what methods are called on various objects, but this approach lets the programmer encode this restriction easily.

Ultimately this is just syntactic sugar for a construct which can already be used, but it encourages the programmer to let the type checker do more of the verification work. It’s also sort of a static version of something similar to the E caretaker for capability delegation, or more generally the object capability model (lacking revocation, because all of this happens at compile time). I should read some of the EROS and Coyotos papers…

It would be interesting to hack up a prototype implementation of this, but I don’t feel like mucking about right in the internals of another compiler (thesis work…), and unfortunately I don’t think any of the good meta-programming languages are really appropriate for this. Scheme is dynamically-typed, Ruby is duck-typed (so really, dynamically typed), and you need static typing to get the programming benefits of this. Perhaps it’s finally time for me to learn MetaOCaml.





Typing the Stack, and Why Tag-Free Garbage Collection Doesn’t Quite Work

28 11 2007

One of the most significant barriers to implementing a completely safe garbage collector lies with collecting the stack. Heap data structures are relatively easy to walk — that is to say, one can generate a walk function for each variant of heap data structure in the language being collected. The stack is more difficult because it is not an explicit structure in the language. The other work I’ve read towards this (described in the post linked above) made rather awkward contortions to get around this fact. Wang and Appel CPS’ed and closure-converted their entire programs. Wang and Fluet side-step the issue by writing a safe collector for the internals of an interpreter – which isn’t really quite garbage collection in the normal sense, since they just have a Scheme stack structure defined in Cyclone which they then traverse. The harder problem is to actually type the machine-native stack.

  • The Essence of Compiling with Continuations: Towards this, my partners and I read the work defining A-Normal Form, or ANF. The authors examined a number of CPS-based compilers to see what work they actually performed (many compilers use CPS as an intermediate representation to support some transformations). They found that to mitigate the code expansion of CPS transformations, and to get rid of some of the execution overhead from relatively trivial continuations, the compilers performed a number of additional reductions, pulling the continuation out of argument lists and leaving it implicit in a register, and a few other tricks. Adding these transformations together, it turns out that the sum total of these actions is essentially CPSing the source program, performing the normal reductions, and then un-CPSing the program. The authors then present an equivalent transformation directly on the source language, which is simpler to implement. This essentially accomplishes the same thing as using SSA – giving a name to every intermediate result of the computation.

This may help with typing the stack – we might be able to generate each frame’s type from the ANF of the program. Obviously some more work needs to be done here; how this might be done isn’t immediately obvious. I complained that we would have to mark the stack frames in some way, and to show this was not as terrible as I thought, our advisor suggested reading a couple of papers by Benjamin Goldberg which pretty much wrapped up work on garbage collection without tags. In a nutshell, it really only works for languages lacking polymorphic operators.

  • Tag-Free Garbage Collection for Strongly Typed Programming Languages: This paper describes a way to sidestep tagging pointers on the stack into the heap (the notion was mentioned earlier by Appel, but implementation details were omitted). It does this basically by embedding frame-specific garbage collection routine pointers right after function calls. It then changes compilation so that returning from a routine goes an extra instruction later. Goldberg says that this works fine because the retl instruction on SPARC is only a pseudo-instruction which compiles to a jump with a hard-coded offset – I wonder how portable this is. This of course only works in a straightforward way for monomorphically typed languages. To extend the scheme to support polymorphic functions, they begin passing type-specific traversal functions to the frame-specific collection functions. This is then extended yet again to support higher-order functions – the overall strategy is to stash location-specific pointers in select locations to provide ready access to correctly-typed collector procedures in an efficient manner. There is also an addendum at the end about implementing collection for parallel languages. One interesting idea presented which isn’t explored in much depth is that there can be different collection procedures for different stages of each procedure. This allows for optimization such as not traversing uninitialized pointers early in the procedure, or not traversing dead local variables during collections after their last access in the function.
  • Polymorphic Type Reconstruction for Garbage Collection without Tags: This second paper revisits much of the content of the first paper, more clearly presenting the type reconstruction algorithm. It then addresses the fact that it is possible to have objects reachable from the garbage collection roots for which it is impossible to reconstruct the type, and proves that in the absence of polymorphic primitive operators, it is possible to reconstruct the type of any useful reachable object, because any object which matters will be passed to a primitive operation at some point, and types can be reconstructed from there. He later touches briefly on the issue of having a polymorphic equality operator (or any polymorphic primitive), and the problem of the operator needing to use the objects’ data to do its job without knowing their type, and mentions offhand that this is dealt with by implicitly passing a method dictionary to the operator, which amounts to a tag. This is still tag-free in the sense that there are no extra bits used up in the pointer, but not fully tag-free because the compiler still needs to pass explicit runtime hints about types.




JIT VMs and a Compiler Book

30 07 2007

Well, a few friends and I are doing a group independent study this semester where we’ll be writing something at least similar to a JIT JavaScript virtual machine. At least, the original goal was to do this, but we’re now being sidetracked (perhaps productively so) by OpenMoko, an all-open-source smart phone intended to have a feature set comparable to (or exceeding) the iPhone. It’s wide open. Rather than being one-upped by new developments to Tamarin, or by the new/upcoming efforts of a company we’ve heard rumors might be doing their own JS VM (who would almost certainly beat our performance if they attempt this), we’ll just be the first on a new platform. And rather than focusing on the main goal we had in mind before (doing a JIT compiler for purely performance reasons), we can consider other issues related to targeting an embedded platform (memory usage being more important, power consumption, etc.). It’s not very mature. But I think it will be interesting, and fun. And educational.

As a result of this, our professor asked us each to get a copy of one of Andrew W. Appel’s compiler books and finish it by the end of August. Since many (/most/all) of us want to use either Haskell or OCaml, we decided to standardize on Modern Compiler Implementation in ML. I got my copy yesterday, and read 40 pages (I read none today, discarding it temporarily in favor of wine tasting in Napa and Sonoma). So far, the book is well written, concise, and explains both the theory behind, and practical aspects of, the topics it addresses. I’m partway through parsing, which has been a nice refresher on FSMs (deterministic and nondeterministic) and regular expressions. One thing which has bothered me slightly is that he sort of plays fast and loose with models of computation. The example that sticks with me is that he gives a pretty good explanation of FSMs, and then begins to talk about arbitrarily deeply nested comments. The main issue is that you can’t use a finite state automaton to recognize them, because you’d need an infinite number of states to handle an arbitrarily large nesting depth. He gives the (accurate) practical solution that when specifying this in ML-lex (used in the book), states can be explicitly specified, and hitting the open and close comment markers can run ML code which increments and decrements a counter. This works fine, but he continues referring to the resulting implementation as a finite state automaton – which it isn’t. Adding the counter transforms it into a pushdown automaton with a single-letter (unspecified) stack alphabet.
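
To make the counter point concrete, here is a minimal sketch (in C++ rather than ML-lex, and not taken from the book) of skipping ML-style nested comments. The single integer counter is exactly the single-letter stack: drop it, and no finite number of states can track an unbounded nesting depth.

#include <string>

// Returns the index just past the comment starting at `pos` (which must point
// at "(*"), or std::string::npos if the comment is unterminated.
std::size_t skip_comment(const std::string& src, std::size_t pos)
{
    int depth = 0;                              // the "stack" of the pushdown automaton
    while (pos + 1 < src.size()) {
        if (src[pos] == '(' && src[pos + 1] == '*') { ++depth; pos += 2; }
        else if (src[pos] == '*' && src[pos + 1] == ')') {
            --depth; pos += 2;
            if (depth == 0) return pos;         // matching close for the outermost open
        }
        else ++pos;
    }
    return std::string::npos;
}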

I haven’t finished the whole group of parsing chapters – maybe he doubles back to this. Another consideration is that there are also C and Java versions of this text, both of which have been updated since initial publication, while the ML version has not. It’s possible this was changed in one of those newer versions.

I’m not even sure why this bothers me so much. I used to be all about getting the intuition to work, and leaving formalism and particulars to be secondary concerns. Perhaps the years of hard work from my sometimes overly-pedantic discrete math, models of computation, and algorithms TAs finally sunk in. Or maybe it was writing a theory paper. I’m turning into a stickler for formalism.

Regardless of this silly irritation, I’m still excited for the rest of the book.