Variant Interfaces

1 01 2008

Today I finished my first pass reading of Colimits for Concurrent Collectors, and was struck with an interesting idea. The paper demonstrates an approach to refining specifications using the notion of colimits. I don’t fully understand the refinement process yet (hence the “first pass reading”). The paper contains many example specifications, written in a variant of Specware, that specify a safe interface between a mutator and a garbage collector without stop-the-world semantics – this is the case study for the refinement.

An interesting pattern of sorts is used: a number of the specifications for portions of the system are parameterized by other specifications. A common idiom in the examples is a line similar to:

MORPHISM Cleaning-View = ONLY nodes,roots,free,marked,workset,unmark

What this line means is that the current specification, when it refers to an instance of the Cleaning-View specification, can only access the mentioned observers. This is interesting if we translate it to interfaces in an object-oriented environment. What this amounts to is essentially defining an interface (here, a specification) and then multiple subsets of this interface (here, imported subsets of observer functions). One application of this (and I think the most interesting) is using the sub-interfaces as restricted capabilities on objects. This is a reasonably common use of interfaces in OO languages.

You can already mimic this in normal object-oriented languages by defining minimal subsets, with larger subsets extending the smaller interfaces. But what about keeping siblings with overlap in sync? A nice alternative would be to be able to specify all of the siblings together, as a set of variant interfaces (similar to the notion of variant types). For example, in pseudo-Java:

public interface Primary {
    void func1();
    void func2(int x);
    void func3(double x);

    subset Takeargs {func2, func3};
    subset Noargs {func1};
    subset Nofloatingpoint {func1, func2};
}

This example would of course need to be cleaned up a bit to deal with method overloading.

The problem with doing the equivalent of this in normal Java is keeping these “sub-interfaces” in sync with the main interface, and keeping overlapping siblings in sync without just having an arbitrarily large hierarchy of interfaces. Allowing joint specifications, and then referring to variants through names like Primary.Takeargs (to say that a variable should be of type Primary, but the current scope should only have access to the Takeargs subset) would ease this difficulty, making the use of interfaces for restricted control slightly easier to deal with as a programmer.
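
For comparison, here is a rough sketch of what that encoding looks like in today's Java, using the hypothetical names from the example above: each subset becomes its own interface, the overlapping sibling restates declarations by hand, and nothing checks that the pieces stay consistent with Primary.

// Each interface would live in its own file; shown together here for brevity.
public interface Noargs {
    void func1();
}

public interface Takeargs {
    void func2(int x);
    void func3(double x);
}

// The overlapping sibling has to repeat func1 and func2 itself (or pull in
// yet more single-method interfaces), and must be updated by hand whenever
// Primary changes.
public interface Nofloatingpoint {
    void func1();
    void func2(int x);
}

// The full interface just extends all of the variants.
public interface Primary extends Noargs, Takeargs, Nofloatingpoint {
}

Every change to Primary then has to be mirrored by hand across whichever of these interfaces it touches, which is exactly the synchronization problem the subset syntax would absorb.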

I also wonder what could be done with this as an extension to a language such as Spec#, with pre- and post-conditions on methods. You could then deduce more easily than before that code using only a restricted subset of an interface maintains certain invariants (such as making no modifications to an object). More importantly, you could guarantee it statically at compile time – the interface variants would be subtypes of the main interface type. Without the explicit sub- and sibling-interface relationships it would still be possible to statically verify properties of interest, since the compiler can see what methods are called on various objects, but this approach lets the programmer encode the restriction directly.
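
As a small, contrived illustration of that point in plain Java (all of the names here are mine, not from any of the work above): code that only receives the restricted view cannot call the mutators, so the “makes no modifications through this interface” property is checked by the compiler, ignoring explicit downcasts.

// Each type would live in its own file; shown together here for brevity.
// A read-only variant, written by hand today; the proposed syntax would
// let it be declared as a subset of Counter instead.
public interface CounterView {
    int value();
}

public interface Counter extends CounterView {
    void increment();
}

public class Auditor {
    // Typed against the restricted variant, this method cannot call increment():
    // the restriction is enforced statically, at compile time.
    public static void audit(CounterView c) {
        System.out.println("current value: " + c.value());
    }
}

The proposed variant syntax would just make declaring CounterView-style views cheap enough to use routinely.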

Ultimately this is just syntactic sugar for a construct which can already be used, but it encourages the programmer to make the type checker do some more of the verification work. It’s also sort of a static version of something like the E caretaker for capability delegation, or more generally the object capability model (lacking revocation, because all of this happens at compile time). I should read some of the EROS and Coyotos papers…

It would be interesting to hack up a prototype implementation of this, but I don’t feel like mucking about in the internals of another compiler (thesis work…), and unfortunately I don’t think any of the good meta-programming languages are really appropriate for this. Scheme is dynamically typed, Ruby is duck-typed (so really, dynamically typed), and you need static typing to get the benefits described here. Perhaps it’s finally time for me to learn MetaOCaml.





Provably Correct Garbage Collection

25 10 2007

Working towards my honors thesis, my partners and I have read several interesting papers on using type systems to prove correctness (safety) of garbage collection implementations:

  • Type-Preserving Garbage Collectors: In this paper, Wang and Appel present a way of writing garbage collectors in the language being collected, relying on a statically checked type system to prove the safety of the collector. They implement a copying collector as a function written in the ML-like language being collected, essentially by parameterizing all types by memory regions: each data type and each function is parameterized over a single region, used as a type. The garbage collection routines are a basic collector plus per-datatype collection routines parameterized over a from-space and a to-space. Each copy function deep-copies a structure in the from-space and returns a copy located in the to-space (a rough sketch of this shape appears after this list). They address pointer sharing by providing an implementation with forwarding pointers (for which they perform an odd version of subtyping to guarantee the user program can’t modify the forwarding pointers in objects, since the collector is written in the language being collected). To guarantee that the collector returns no pointers to data in the from-space, they introduce a language construct ‘only’, which takes one or more regions and a body, and executes the body with only the specified regions in scope. The problem is that if you forget to use the construct, the collector could actually return a pointer into the original from-space. The paper is fairly well-written, and the work is presented as a series of progressively more complete garbage collector functions. A fair amount of time is spent addressing particulars of their implementation, because they make a number of simplifying assumptions: the language supported is first-order only, and programs are written in CPS with closures converted to data structures. Also presented are performance results, which are interesting but largely inconclusive, because the system is intended only to show that it is possible to write a safe garbage collector, not to be practical. Many optimizations are mentioned throughout the paper as future work.
  • Implementation and Performance Evaluation of a Safe Runtime System in Cyclone: This paper by Fluet and Wang aims to show that a similar approach can actually be done with reasonable performance. They implement a Scheme interpreter in Cyclone (a safe dialect of C), along with a copying collector built on Cyclone’s region primitives. The approach is relatively straightforward (and largely similar to the approach above), with one exception: instead of a safe upcast with a construct that prevents later downcasts, they use linear types (unique pointers), along with some under-specified functions that create a sequence of region types, to support forwarding pointers. They compare their interpreter against a couple of other implementations on some standard Scheme benchmarks. Their interpreter running their garbage collector consistently, if slightly, outperforms their interpreter using the standard Cyclone conservative collector. That is fairly impressive, and shows the overhead of safe collection alone is not a significant bottleneck. Comparisons to MzScheme, however, are not so great – with the exception of the continuation-based benchmark, MzScheme far outperforms their implementation, by a factor of 2 to 3. And this was in 2004, before MzScheme had a JIT compiler. This may be a result of the structure of their interpreter: it is a very basic interpreter which performs a predefined number of steps on an abstract machine structure, checks whether a collection is necessary, and repeats. They also compare the total amount of unsafe code in each implementation, and come in at about 1/6th the unsafe code of the next ‘cleanest’ implementation; most of what remains apparently comes from a highly optimized version of malloc() used to implement the region primitives.
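
To make the from-space/to-space parameterization from the first paper a bit more concrete, here is a minimal sketch of the shape of such a copy function. This is my own rendering in Java, using generics as phantom region parameters and names of my own invention; the paper works in an ML-like language with a real region type system, and Java can only express the shape, not enforce the ‘only’-style guarantee that no from-space pointer escapes.

// Phantom types standing in for memory regions.
interface Region {}
final class FromSpace implements Region {}
final class ToSpace implements Region {}

// A cons cell whose type records the region it lives in.
final class Cons<R extends Region> {
    final int head;
    final Cons<R> tail;        // the tail lives in the same region
    Cons<ToSpace> forward;     // forwarding pointer, set once the cell is copied

    Cons(int head, Cons<R> tail) { this.head = head; this.tail = tail; }
}

final class Collector {
    // Deep-copy an (acyclic) list out of from-space; the return type makes the
    // "only to-space pointers come back" property visible in the signature.
    static Cons<ToSpace> copy(Cons<FromSpace> cell) {
        if (cell == null) return null;
        if (cell.forward != null) return cell.forward;  // shared structure already moved
        Cons<ToSpace> copied = new Cons<>(cell.head, copy(cell.tail));
        cell.forward = copied;
        return copied;
    }
}

The forwarding-pointer check is the sharing mechanism both papers rely on; what this sketch cannot show is the subtyping trick of the first paper, or the unique-pointer sequence of region types in the second, that keep the mutator from touching those forwarding pointers.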

Both papers write their main programs in slightly odd ways – the Wang/Appel paper does CPS and closure conversion to make stacks and environments safely traversable, and the Fluet/Wang paper sort of cheats by writing a straight interpreter in a separate language. Not to diminish the latter – they still write a garbage collector which is (itself) free of unsafe code, which is wonderful. But the constructions as state machines and the CPS and closure conversions definitely hurt performance. They’re necessary, however, in order to safely collect stacks and environments. My thesis adviser told my partners and me that he may be aware of a way to walk these structures without the efficiency loss of the explicit structures used in these papers. So next on the block: Continuations from Generalized Stack Inspection. That, and figuring out just how the two mysteriously underspecified primitives in the Cyclone paper really work (time to read the corresponding technical report), and contemplating the problems of forgetting to use the ‘only’ construct in the first paper.