Done with Undergrad, Procedural Generation, Model Checking

17 05 2008

I’m finally done with undergrad.  It feels like 4 years just blew by in the blink of an eye.  And come fall, I’ll be working, but also applying to grad schools.  What a ride.

Incidentally, I did finish my thesis.  It’s here: Type-Safe Stack Inspection for Garbage Collector Implementation.  Every time I write that title now I wish I’d cut it off as “Type-Safe Stack Inspection.”  Oh well, too late now.  I meant to post about it right after I finished, but the final edits ran into finals period, which consumed the past two weeks of my life.

On to other projects.

Currently, Brendan, Rob and I are working on building a fairly ambitious mapping tool.  We decided to use procedural modeling to implement a multiple-level-of-detail, real-time procedural generator for, well, planets.  The idea is that we start by rendering a planet with continents and oceans on screen, and the user can zoom in to nearly any level of detail.  Moving closer to the planet gradually adds more and more detail: first rivers, mountains and valleys, then cities and roads, then city streets and buildings, and possibly even building interiors.  We think we can do it quickly and without using unreasonable amounts of memory by taking advantage of the fact that procedural generation is deterministic if you reuse the same random number generator seed.  We can cache the seed for each area, along with constraints on its edges, so a negligible amount of memory stores enough information that you can back away from (or cross) the planet, have the old region freed from memory, then return and see exactly the same things.  We're aiming pretty high, so we'll see what happens.
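To make the seed-caching idea concrete, here is a minimal sketch in Java. Everything in it (the class name, the coordinate-based mixing function, the flat array of heights) is made up for illustration; the only point is that a region's content is a pure function of the planet seed and the region's coordinates, so a region can be thrown away and later regenerated identically.

import java.util.Random;

final class RegionSeedSketch {
    private final long planetSeed;

    RegionSeedSketch(long planetSeed) { this.planetSeed = planetSeed; }

    // Mix the planet seed with the region's coordinates and detail level into
    // a per-region seed. Any deterministic mixing function would do; this one
    // is purely illustrative.
    long regionSeed(int x, int y, int levelOfDetail) {
        long h = planetSeed;
        h = h * 31 + x;
        h = h * 31 + y;
        h = h * 31 + levelOfDetail;
        return h;
    }

    // Regenerating a region always produces the same heights, so the region
    // itself never needs to stay in memory; only edge constraints (and, if we
    // like, the cached seed) need to be kept around.
    double[] generateHeights(int x, int y, int levelOfDetail, int samples) {
        Random rng = new Random(regionSeed(x, y, levelOfDetail));
        double[] heights = new double[samples];
        for (int i = 0; i < samples; i++) {
            heights[i] = rng.nextDouble();
        }
        return heights;
    }
}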

I also picked up a copy of a book my advisor told me about, Principles of Model Checking.  It was just released earlier this month, and it’s great so far.  I read a couple of the original LTL and CTL model checking papers (LTL: Model Checking Using Automata Theory, CTL: Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic Specifications) a couple of years ago in a seminar, but I didn’t have enough CS background at the time for all the details to really click (those papers were actually my first encounter with automata in CS, before I took a theory of computation class).  This book is shaping up to be a good refresher for me, filling in the gaps from my first pass through this material, and it seems like it would be a good introduction for those new to model checking as well.  Sadly there is not yet a way to look inside it online.





Variant Interfaces

1 01 2008

Today I finished my first-pass reading of Colimits for Concurrent Collectors, and was struck by an interesting idea. The paper demonstrates an approach to refining specifications using the notion of colimits. I don’t fully understand the refinement process yet (hence the “first-pass reading”). The paper contains many example specifications, written in a variant of Specware, which specify a safe interface between a mutator and a garbage collector without stop-the-world semantics; this interface is the case study for the refinement.

An interesting pattern of sorts is used: a number of the specifications for portions of the system are parameterized by other specifications. A common idiom in the examples is a line similar to:

MORPHISM Cleaning-View = ONLY nodes,roots,free,marked,workset,unmark

What this line means is that the current specification, when it refers to an instance of the Cleaning-View specification, can only access the mentioned observers. This is interesting if we translate it to interfaces in an object-oriented environment. What this amounts to is essentially defining an interface (here, a specification) and then multiple subsets of this interface (here, imported subsets of observer functions). One application of this (and I think the most interesting) is using the sub-interfaces as restricted capabilities on objects. This is a reasonably common use of interfaces in OO languages.

You can already mimic this in normal object-oriented languages by defining minimal interfaces and having larger interfaces extend the smaller ones. But what about keeping siblings with overlap in sync? A nice alternative would be to be able to specify all of the siblings together, as a set of variant interfaces (similar to the notion of variant types). For example, in pseudo-Java:

public interface Primary {
    void func1();
    void func2(int);
    void func3(double);

    subset Takeargs {func2, func3};
    subset Noargs {func1};
    subset Nofloatingpoint {func1, func2};
}

This example would of course need to be cleaned up a bit to deal with method overloading.

The problem with doing the equivalent of this in normal Java is keeping these “sub-interfaces” in sync with the main interface, and keeping overlapping siblings in sync without just having an arbitrarily large hierarchy of interfaces. Allowing joint specifications, and then referring to variants through names like Primary.Takeargs (to say that a variable should be of type Primary, but the current scope should only have access to the Takeargs subset) would ease this difficulty, making the use of interfaces for restricted control slightly easier to deal with as a programmer.
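For contrast, here is roughly what the Primary example forces you to write in today's Java: each subset becomes its own interface, overlapping siblings repeat their shared declarations (or get factored into still more interfaces), and nothing checks that the subsets stay consistent with Primary. The declarations are package-private only so that this sketch fits in one file.

// Each "subset" is a separate interface today.
interface Noargs {
    void func1();
}

interface Takeargs {
    void func2(int x);
    void func3(double y);
}

// Overlaps with both siblings, so func1 and func2 are declared again here
// (or pushed into yet more shared ancestor interfaces).
interface Nofloatingpoint {
    void func1();
    void func2(int x);
}

// The "main" interface pulls all of the subsets together. Nothing stops the
// subsets from drifting if a method is later added only to Primary.
interface Primary extends Noargs, Takeargs, Nofloatingpoint {
}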

I also wonder what could be done with this as an extension to a language such as Spec#, with pre- and post-conditions on methods. You could then deduce, more easily than before, that code holding only a restricted subset of an interface maintains certain invariants (such as making no modifications to an object). More importantly, you could guarantee it statically at compile time: each interface variant would be a supertype of the main interface type, so any Primary could be handed out as just a Primary.Takeargs. Without the explicit sub- and sibling-interface relationships it would still be possible to statically verify properties of interest, since the compiler can see which methods are called on which objects, but this approach lets the programmer encode the restriction easily.

Ultimately this is just syntactic sugar for a construct which can already be expressed, but it encourages the programmer to let the type checker do some more of the verification work. It’s also sort of a static version of something like the E caretaker for capability delegation, or more generally the object capability model (lacking revocation, because all of this happens at compile time). I should read some of the EROS and Coyotos papers…

It would be interesting to hack up a prototype implementation of this, but I don’t feel like mucking about in the internals of another compiler right now (thesis work…), and unfortunately I don’t think any of the good meta-programming languages are really appropriate for this. Scheme is dynamically typed, Ruby is duck-typed (so really, dynamically typed), and you need static typing to get the programming benefits of this. Perhaps it’s finally time for me to learn MetaOCaml.





Provably Correct Garbage Collection

25 10 2007

Working towards my honors thesis, my partners and I have read several interesting papers on using type systems to prove correctness (safety) of garbage collection implementations:

  • Type-Preserving Garbage Collectors: In this paper, Wang and Appel present a way of writing garbage collectors in the language being collected, relying on a statically checked type system to prove the safety of their collector. They implement a copying collector as a function written in the ML-like language they are collecting, essentially by parameterizing all types by memory regions. Each data type and language function is parameterized over a single region as a type. The garbage collection routines consist of a basic collector plus per-datatype collection routines parameterized over a from-space and a to-space; each copy function deep-copies a structure in the from-space and returns a copy located in the to-space (a rough sketch of this region-typing idea follows this list). They address pointer sharing by providing an implementation with forwarding pointers (for which they perform an odd version of subtyping to guarantee the user program can’t modify the forwarding pointers in objects, since the collector is written in the language being collected). To guarantee that the collector returns no pointers to data in the from-space, they introduce a language construct ‘only’, which takes one or more regions and a body, and executes the body with only the specified regions in scope. The problem with this is that if you forget to use the ‘only’ construct, you could actually return the original from-space object. The paper is fairly well written, and the work is presented as a series of progressively more complete garbage collector functions. A fair amount of time is spent addressing the particulars of their implementation, because they make a number of simplifying assumptions: the language supported is first-order only, and programs are written in CPS with closures converted to data structures. Also presented are performance results, which are interesting but largely inconclusive, because the system is intended only to show that it is possible to write a safe garbage collector, not to be practical. Many optimizations are mentioned throughout the paper as future work.
  • Implementation and Performance Evaluation of a Safe Runtime System in Cyclone: This paper by Fluet and Wang aims to show that an approach similar to the one above can actually be done with reasonable performance. They implement a Scheme interpreter in Cyclone (a safe dialect of C), and they build a copying collector using Cyclone’s region primitives. The approach is relatively straightforward (and largely similar to the approach above) with one exception: instead of a safe upcast with a construct that prevents later downcasts, they use linear types (unique pointers), along with some under-specified functions that create a sequence of region types, to support forwarding pointers. They compare their interpreter running some standard Scheme benchmarks against a couple of other implementations. Their interpreter running their garbage collector consistently (if slightly) outperforms their interpreter using the standard Cyclone conservative collector. That is fairly impressive, and it shows that the overhead of safe collection alone is not a significant bottleneck. Comparisons to MzScheme, however, are not so great: with the exception of the continuation-based benchmark, MzScheme far outperforms their implementation, by a factor of 2 to 3. And this was in 2004, before MzScheme had a JIT compiler. This may be a result of the structure of their interpreter; it is a very basic interpreter which performs a predefined number of steps on an abstract machine structure, checks to see whether a collection is necessary, and repeats. They also compare the total amount of unsafe code in each implementation, and come in at about 1/6th the amount of the next ‘cleanest’ implementation. Most of that unsafe code apparently comes from a highly optimized version of malloc() used to implement the region primitives.
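As mentioned in the first summary above, here is a rough sketch of the region-typing idea, written in Java purely for illustration (the paper uses an ML-like language, and Java has no real notion of regions). Phantom type parameters stand in for regions, so the type of the copy function records that its argument lives in the from-space and its result lives in the to-space. All names here are hypothetical.

final class GcSketch {
    // Marker types standing in for memory regions (hypothetical).
    static final class FromSpace {}
    static final class ToSpace {}

    // A cons cell whose type records the region R it notionally lives in.
    static final class Cell<R> {
        final int head;
        final Cell<R> tail;
        Cell(int head, Cell<R> tail) { this.head = head; this.tail = tail; }
    }

    // Per-datatype copy routine: deep-copies a list from one region and
    // returns a structurally identical list typed as living in another.
    // Usage (with explicit type witnesses):
    //   Cell<ToSpace> copy = GcSketch.<FromSpace, ToSpace>copyList(oldList);
    static <From, To> Cell<To> copyList(Cell<From> xs) {
        if (xs == null) {
            return null;
        }
        Cell<To> copiedTail = copyList(xs.tail);
        return new Cell<>(xs.head, copiedTail);
    }
}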

Both papers write their main programs in slightly odd ways: the Wang/Appel paper does CPS and closure conversion to make stacks and environments safely traversable, and the Fluet/Wang paper sort of cheats by writing a straight interpreter in a separate language. Not to denigrate the latter; they still write a garbage collector which is (itself) free of unsafe code, which is wonderful. But the constructions as state machines and the CPS and closure conversions definitely hurt performance. They’re necessary, however, in order to safely collect stacks and environments.  My thesis advisor told my partners and me that he may be aware of a way to walk these structures without the efficiency loss of the explicit structures used in these papers.  So next on the block: Continuations from Generalized Stack Inspection.  That, and figuring out just how the two mysteriously under-specified primitives in the Cyclone paper really work (time to read the corresponding technical report), and contemplating the problems of forgetting to use the ‘only’ construct in the first paper.





STM and Haskell Attention

12 08 2007

Haskell has been garnering a fair bit of attention of late, largely as a result of Simon Peyton-Jones’s presence at OSCON (and the ensuing flurry of Reddit postings of related blog posts). He gave a keynote on Software Transactional Memory (STM), which is an alternative to using explicit locks and condition variables when writing parallel programs. He also gave an introduction to Haskell, and a smaller talk on data parallelism.

I’m quite pleased by this recent attention, for a couple reasons.

The first is that I really agree with Simon that STM is going to be the way to go soon*. I also just think it’s damn cool; that’s why I did an independent study on it with Maurice Herlihy last semester, working with a slightly more recent version of Microsoft’s SXM implementation of STM for C# than the one available at the link I just gave. Simon gives excellent motivation for STM in his talk, but there are other arguments if you’re unconvinced after watching it. Consider a data structure which you traverse in multiple directions: perhaps a grid which can be traversed in either direction in either dimension, depending on the contents at a given location, or perhaps a mutable graph representation. What locking scheme do you use for scalable (read: not coarse-grained) access to a large instance of this data structure? You could establish a total ordering on nodes and their corresponding locks (or order rows/columns and lock a row or column at a time), and acquire locks only in that order. But since you can traverse in any direction, this would sometimes require building lists of locks to acquire, releasing later locks which were acquired in reaching the current point of the traversal, and then walking the list reacquiring locks. How expensive is this, possibly acquiring the same lock over and over? How hard is it to write that code correctly? How much effort does it take to document this locking scheme, and for a new maintainer to learn it? All of this makes fine-grained locking a poor idea at best.
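As a deliberately simplified illustration of that lock-ordering discipline, here is a Java sketch assuming each cell carries its own lock and a globally unique id; the names are mine, not from any real codebase, and all of the error handling and reacquisition logic is omitted.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

final class LockOrderingSketch {
    static final class Cell {
        final long id;                               // position in the global total order
        final ReentrantLock lock = new ReentrantLock();
        Cell(long id) { this.id = id; }
    }

    // To operate on a set of cells discovered during a traversal, sort them by
    // id and acquire every lock in that order. If the traversal later reaches
    // a cell with a smaller id, the locks after it must all be released and
    // then reacquired in order, which is exactly the cost complained about above.
    static void lockAll(List<Cell> cells) {
        List<Cell> ordered = new ArrayList<>(cells);
        ordered.sort(Comparator.comparingLong((Cell c) -> c.id));
        for (Cell c : ordered) {
            c.lock.lock();
        }
    }

    static void unlockAll(List<Cell> cells) {
        for (Cell c : cells) {
            c.lock.unlock();
        }
    }
}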

How much easier would it be to write such code naïvely and mark it as atomic? And with a proper implementation of STM, how much faster could this be on a large structure than a coarse-grained lock, or than repeated acquisition and release of the same fine-grained locks? And this is a bonus on top of easier coding for simple concurrent structures (and partial failures), and composition of atomic transactions, which cannot be done with normal locks.
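To make the contrast concrete, here is a sketch of the “naïve plus atomic” style. Java has no built-in STM, so TxRef and Stm.atomic below are hypothetical stand-ins for whatever an STM library would provide; the point is only that the traversal itself contains no locking logic at all.

import java.util.function.Supplier;

final class AtomicTraversalSketch {
    // A transactional cell; reads and writes are assumed to be tracked by the
    // (hypothetical) STM runtime.
    interface TxRef<T> {
        T get();
        void set(T value);
    }

    // Hypothetical STM entry point: runs a block as one atomic transaction.
    interface Stm {
        <T> T atomic(Supplier<T> transaction);
    }

    // Sum a cell's neighborhood in a grid. No lock ordering to design,
    // document, or maintain; conflicts are detected and resolved by the STM.
    static int sumNeighborhood(Stm stm, TxRef<Integer>[][] grid, int row, int col) {
        Integer total = stm.atomic(() -> {
            int sum = 0;
            for (int dr = -1; dr <= 1; dr++) {
                for (int dc = -1; dc <= 1; dc++) {
                    int r = row + dr;
                    int c = col + dc;
                    if (r >= 0 && r < grid.length && c >= 0 && c < grid[r].length) {
                        sum += grid[r][c].get();
                    }
                }
            }
            return sum;
        });
        return total;
    }
}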

If, like me, you’re curious about how STM is implemented, I suggest you check out a few papers. All of them are easier to follow with an understanding of the basic ideas and terminology of concurrency, as well as a bit of computer architecture knowledge.

There are certainly many, many more implementations, and many more papers, and more ways to tune an implementation than these couple papers spend much time on. Check out the related work in these papers.

I’m also pleased because I feel Haskell has been underrated by the programming populace at large for a long time.** I’m interested in becoming more familiar with Haskell largely because of some of my pet peeves with OCaml, and because the particulars of Haskell’s type system are both richer and closer to category theory, which interests me mainly as an intellectual curiosity (and for its relation to type theory in general). Well-written Haskell can also beat out many better-known languages performance-wise (for examples, definitely poke around the language shootout). And it’s always nice to see a real functional language (read: one in which functional style is the norm, not just an option) getting mainstream attention.

I can only hope that people continue to take note of Haskell.


* Please note, those of you who are itching to start a flamewar about shared state vs. message passing, that STM does not claim to be the be-all and end-all of concurrent programming. It is designed to make programming with shared state easier than it is today. And shared state is not going away, because it’s the model with which chips are currently designed. So at the very least, operating systems, compilers, and some runtime systems will still need to deal with it, even if all other development switches to message passing concurrency.

** I should probably mention that I have not used Haskell a great deal, though this is not for lack of desire. A couple of years ago I wrote a short assignment in Haskell (largely to play with lazy evaluation). I’ve also spent a fair amount of time writing OCaml over the past couple of years, both for the course I TAed and head-TAed (which uses OCaml to introduce students to currying and static typing, following an introduction to CS in Scheme) and, last semester, in a group software engineering course (perhaps I’ll elaborate on that experience another time). So I’m familiar with a similar type system.





Citations, Citations, Citations

20 07 2007

I recently had a paper published at the ASM Workshop ’07. Two of my co-authors presented it in person in Norway; I was unable to attend.

This was my first academic paper, and while it could have been better, I’m fairly satisfied with it. Finishing it off consumed most of my first two weeks out of school for the summer. I basically left Brown on a Saturday, slept the rest of that day, watched TV and drank soda on Sunday, and on Monday began two weeks’ worth of writing: initially cleaning up the draft submission that had been accepted, and eventually rewriting the paper from scratch after our advisor took a closer look at it. Two days after the final submission I flew out to California to begin my internship at Sun. I must admit that most of the reason this ate those two weeks is that we all put off the revisions for so long because of finals, but from other things Shriram has said, this seems to be pretty common among both inexperienced and veteran academics.

One thing he commented on while helping us revise it was that a number of the citations in the paper were originally wrong. Many were taken from the homepages of paper authors kind enough to provide BibTeX entries. Most of the others were taken (and somewhat corrected) from CiteSeer. However, Shriram caught that a number of these were still incorrect, and said that much (most?) of the citation information on CiteSeer is incorrect or incomplete. When I asked whether there is a standard way to find solid citations, beyond simply spending a lot of time tracking them down, he said there isn’t.

Why is there no central, reputable, correct repository for this information? Or alternatively, if CiteSeer is going to the trouble of archiving all of these papers, why not get the citation data right?







