Done with Undergrad, Procedural Generation, Model Checking

17 05 2008

I’m finally done with undergrad.  It feels like 4 years just blew by in the blink of an eye.  And come fall, I’ll be working, but also applying to grad schools.  What a ride.

Incidentally, I did finish my thesis.  It’s here: Type-Safe Stack Inspection for Garbage Collector Implementation.  Every time I write that title now I wish I’d cut it off as “Type-Safe Stack Inspection.”  Oh well, too late now.  I meant to post about it right after I finished, but the final edits ran into finals period, which consumed the past two weeks of my life.

On to other projects.

Currently, Brendan, Rob and I are working on building a fairly ambitious mapping tool.  We decided to use procedural modeling to implement a multiple-level-of-detail, real-time procedural generator for, well, planets.  The idea is that we’ll start by rendering a planet with continents and oceans on screen, and the user can zoom in – to nearly any level of detail.  Moving closer to the planet gradually adds detail: first rivers, mountains and valleys, then cities and roads, then city streets and buildings, and possibly even building interiors.  We think we can do this quickly and without using unreasonable amounts of memory by taking advantage of the fact that procedural generation is deterministic if you reuse the same random number generator seed.  So we can cache seeds for regions, along with constraints on their edges, and use a negligible amount of memory to store enough information that you can back out or cross the planet, have the old region freed from memory, then return and see the same things.  We’re aiming pretty high, so we’ll see what happens.
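
To make the seed-caching idea concrete, here’s a minimal Haskell sketch of the approach (my own illustration, not our actual code): each sub-region’s seed is derived purely from its parent’s seed and its coordinates, so a discarded region can always be regenerated bit-for-bit.

    import System.Random (mkStdGen, randomR)

    -- Derive a deterministic seed for a sub-region from its parent's seed
    -- and its coordinates within the parent; any pure mixing function works.
    regionSeed :: Int -> (Int, Int) -> Int
    regionSeed parent (x, y) = parent * 1103515245 + x * 12345 + y

    -- Generate n terrain heights purely from a region's seed. Because the
    -- generator is rebuilt from the seed each time, the region's data can
    -- be freed and later regenerated identically.
    regionHeights :: Int -> Int -> [Double]
    regionHeights seed n = take n (go (mkStdGen seed))
      where
        go g = let (h, g') = randomR (0.0, 1.0) g in h : go g'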

I also picked up a copy of a book my advisor told me about, Principles of Model Checking.  It was just released earlier this month, and it’s great so far.  I read a couple of the original LTL and CTL model checking papers (LTL: Model Checking Using Automata Theory, CTL: Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic Specifications) a couple years ago in a seminar, but back then I didn’t have enough background in CS for all the details to really click (those papers were actually my first encounter with automata in CS, before I took a theory of computation class).  This book is shaping up to be a good refresher for me, filling in gaps from what I missed the first time through this material, and it seems like it would be a good introduction for those new to model checking as well.  Sadly there is not yet a place to look inside it online.
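
For a taste of the sorts of properties these logics express (my own examples, not the book’s): in LTL you might demand that every request is eventually granted, and in CTL that a reset is always reachable.

    % LTL: along every execution, each request is eventually granted
    \mathbf{G}\,(\mathit{request} \rightarrow \mathbf{F}\,\mathit{grant})

    % CTL: from every reachable state, some path leads to a reset
    \mathbf{AG}\,\mathbf{EF}\,\mathit{reset}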





Provably Correct Garbage Collection

25 10 2007

Working towards my honors thesis, my partners and I have read several interesting papers on using type systems to prove correctness (safety) of garbage collection implementations:

  • Type-Preserving Garbage Collectors: In this paper, Wang and Appel present a way of writing garbage collectors in the language being collected, relying on a statically checked type system to prove the safety of the collector. They implement a copying collector as a function written in the ML-like language they are collecting, essentially by parameterizing all types by memory regions. Each data type and each function is parameterized over a single region, at the type level. The garbage collection routines are a basic collector plus per-datatype collector routines parameterized over a from-space and a to-space (a minimal sketch of this typing idea appears after this list). Each copy function deep-copies a structure in the from-space, and returns a copy located in the to-space. They address pointer sharing by providing an implementation with forwarding pointers (for which they perform an odd version of subtyping to guarantee the user program can’t modify the forwarding pointers in objects, since the collector is written in the language being collected). To guarantee that the collector returns no pointers to data in the from-space, they introduce a language construct ‘only’, which takes one or more regions and a body, and executes the body with only the specified regions in scope. The problem with this is that if you forget to use ‘only’, you could actually return the original from-space object. The paper is fairly well-written, and the work is presented as a series of progressively more complete garbage collector functions. A fair amount of time is spent addressing particulars of their implementation, because they make a number of simplifying assumptions. The language supported is first-order only, and programs are written in CPS with closures converted to datastructures. Also presented are performance results, which are interesting but largely inconclusive, because the system they present is intended only to show that it is possible to write a safe garbage collector, not to be a practical system. Many optimizations are mentioned throughout the paper as future work.
  • Implementation and Performance Evaluation of a Safe Runtime System in Cyclone: This paper by Fluet and Wang aims to show that an approach similar to the one above can actually be done with reasonable performance. They implement a Scheme interpreter in Cyclone (a safe dialect of C). They also build a copying collector, using Cyclone’s region primitives to do so. The approach is relatively straightforward (and largely similar to the approach above) with one exception: instead of a safe upcast with a construct which prevents later downcasts, they use linear types (unique pointers), along with some under-specified functions that create a sequence of region types, to support forwarding pointers. They compare their interpreter running some standard Scheme benchmarks against a couple of other implementations. Their interpreter running their garbage collector actually slightly outperforms their interpreter using the standard Cyclone conservative collector – consistently. That is fairly impressive, and shows the overhead for safe collection alone is not a significant bottleneck. Comparisons to MzScheme, however, are not so great – with the exception of the continuation-based benchmark, MzScheme far outperforms their implementation, by a factor of 2 to 3. And this was done in 2004, before MzScheme had a JIT compiler. This may be a result of the structure of their interpreter; it is a very basic interpreter, which performs a predefined number of steps on an abstract machine structure, checks to see if a collection is necessary, and repeats. They do, however, compare the total amount of unsafe code in each implementation, and come in at about 1/6th the unsafe code of the next ‘cleanest’ implementation. Most of what remains apparently comes from a highly optimized version of malloc() used to implement the region primitives.
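
As promised above, here is a minimal sketch of the Wang/Appel typing idea, transcribed into Haskell by me (their language is ML-like; the details here are my own reconstruction): regions become phantom type parameters, so a copy routine’s type guarantees that its result lives in the to-space.

    -- Phantom region parameters: a reference's type records which region
    -- it lives in. 'Ref' is a sketch only; a real collector would tie
    -- this to actual memory regions.
    newtype Ref r a = Ref a

    data From  -- the from-space region
    data To    -- the to-space region

    -- A per-datatype copy routine for pairs. Its type says the input
    -- lives in the from-space and the output in the to-space, so
    -- returning a from-space pointer is a type error.
    copyPair :: (Ref From a -> Ref To a)
             -> (Ref From b -> Ref To b)
             -> Ref From (a, b) -> Ref To (a, b)
    copyPair copyA copyB (Ref (x, y)) =
      let Ref x' = copyA (Ref x)
          Ref y' = copyB (Ref y)
      in Ref (x', y')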

Both papers write their main programs in slightly odd ways – the Wang/Appel paper does CPS and closure conversion to make stacks and environments safely traversable, and the Fluet/Wang paper sort of cheats by writing a straight interpreter in a separate language. Not to denigrate the latter – they still write a garbage collector which is (itself) free of unsafe code, which is wonderful. But the constructions as state machines, and the CPS and closure conversions, definitely hurt performance. They’re necessary, however, in order to safely collect stacks and environments.  My thesis advisor told my partners and me that he may be aware of a way to walk these structures without the efficiency loss of the explicit structures used in these papers.  So next on the block: Continuations from Generalized Stack Inspection.  That, and figuring out just how the two mysteriously underspecified primitives in the Cyclone paper really work (time to read the corresponding technical report), and contemplating the problems of forgetting to use the ‘only’ construct in the first paper.





Slight Changes

22 10 2007

Some slight changes have been brewing for some time at this blog.

Mainly, I will start writing here about as many of the papers I read as I can. The motivation for this is twofold: writing more frequently, and understanding papers better. My thesis advisor has been telling me for some time now that I need to write a one-paragraph summary of each paper after I finish reading it, that I should not need to refer back to the paper in order to do this, and that I should not move on to the next paper without producing a summary of the previous one. I’ll try to group these into sets of related papers, and also tie together my thoughts on the papers and the ideas I get from them.

The next few papers will be on Singularity and provably correct garbage collectors. The group of friends with whom I was originally going to write a JIT JavaScript VM this semester split when another team member decided to pursue a different project and took half the group with him. I intended to stay with the original project, but only two others made the same choice, and three people is not enough to implement a complete JIT compiler from scratch in 3 months. We meandered through a number of topics, and still don’t have a clear final direction (which is worrisome, considering this is supposed to turn into my honors thesis). We’ve settled more or less into the broad area of cross-language memory management and runtime systems. We’ve read a number of papers: several on conservative garbage collection, a couple on the Haskell FFI, one on the Xerox PARC Portable Common Runtime, and a few others. I’ll try to write about all of them at some point, again grouped into logical sets. That’s all for now – I have to spend time getting ready for upcoming midterms and interviews.





STM and Haskell Attention

12 08 2007

Haskell has been garnering a fair bit of attention of late, largely as a result of Simon Peyton-Jones’s presence at OSCON (and the ensuing flurry of Reddit postings of related blog posts). He gave a keynote on Software Transactional Memory (STM), an alternative to using explicit locks and condition variables in writing parallel programs. He also gave an introduction to Haskell, and a smaller talk on data parallelism.

I’m quite pleased by this recent attention, for a couple reasons.

The first is that I really agree with Simon that STM is going to be the way to go soon*. I also just think it’s damn cool – that’s why I did an independent study on it with Maurice Herlihy last semester, working with a slightly more recent version of Microsoft’s SXM implementation of STM for C# than that available at the link I just gave. Simon gives excellent motivation for STM in his talk, but there are still other motivations if you’re unconvinced after watching it. Consider a datastructure which you traverse in multiple directions – perhaps a grid, which can be traversed in either direction in either dimension depending on the contents at a given location – or perhaps a mutable graph representation. What locking scheme do you use for scalable (read: not coarse-grained) access to a large instance of this datastructure? You could establish a total ordering on nodes and their corresponding locks, or lock a row or column at a time with an ordering on rows/columns, and acquire locks only in that order (a sketch of this discipline follows). But since you can traverse in any direction, this would sometimes require building a list of locks to acquire, releasing already-held locks that come later in the order, and then walking the list to reacquire them in order. How expensive is this, possibly acquiring the same lock over and over? How hard is it to write that code correctly? How much effort does it take to document, and for a new maintainer to learn, this locking scheme? All of this makes fine-grained locking a poor idea at best.
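
Here is that total-order discipline in miniature, in Haskell with MVars as locks (names and structure are mine, purely for illustration):

    import Control.Concurrent.MVar
    import Data.List (sortOn)

    -- Each cell carries its position in the global lock order. To avoid
    -- deadlock, a thread sorts the locks it needs and acquires them in
    -- ascending order; a traversal that discovers it needs an earlier
    -- lock than one it already holds must release and start over.
    type Cell = (Int, MVar ())

    lockInOrder :: [Cell] -> IO ()
    lockInOrder cells = mapM_ (takeMVar . snd) (sortOn fst cells)

    unlockAll :: [Cell] -> IO ()
    unlockAll = mapM_ ((`putMVar` ()) . snd)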

How much easier would it be to write such code naïvely and mark it as atomic? And with a proper implementation of STM, how much faster could this be on a large structure than a coarse-grained lock, or than repeated acquisition and release of the same fine-grained locks? And this is a bonus on top of easier coding for simple concurrent structures (and partial failures), and composition of atomic transactions, which cannot be done with normal locks.
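
For contrast, here’s the same flavor of operation using Haskell’s stm library (a toy example of mine, not from the talk): the composed reads and writes run as one transaction, with no lock order to design or document.

    import Control.Concurrent.STM

    -- Move a value between two shared cells. 'atomically' runs the whole
    -- block as a single transaction; there is no ordering discipline to
    -- get wrong, and two such transfers compose safely.
    moveValue :: TVar Int -> TVar Int -> Int -> IO ()
    moveValue from to n = atomically $ do
      x <- readTVar from
      writeTVar from (x - n)
      y <- readTVar to
      writeTVar to (y + n)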

If, like me, you’re curious about how STM is implemented, I suggest you check out a few papers. All of them are easier to follow with a grounding in the basic ideas and terminology of concurrency, as well as a bit of computer architecture knowledge.

There are certainly many, many more implementations, and many more papers, and more ways to tune an implementation than these couple papers spend much time on. Check out the related work in these papers.

I’m also pleased because I feel Haskell has been underrated by the programming populace at large for a long time.** I’m interested in becoming more familiar with Haskell largely because of some of my pet peeves with OCaml, and because Haskell’s type system particulars are both richer and closer to category theory, which interests me mainly as an intellectual curiosity (and for its relation to type theory in general). Well-written Haskell can also beat out many more well-known languages performance-wise (for example, definitely poke around the language shootout). And it’s always nice to see a real functional language (read: one in which functional style is the norm, not just an option) getting mainstream attention.

I can only hope that people continue to take note of Haskell.


* Please note, those of you who are itching to start a flamewar about shared state vs. message passing, that STM does not claim to be the be-all and end-all of concurrent programming. It is designed to make programming with shared state easier than it is today. And shared state is not going away, because it’s the model with which chips are currently designed. So at the very least, operating systems, compilers, and some runtime systems will still need to deal with it, even if all other development switches to message passing concurrency.

** I should probably mention that I have not used Haskell a great deal, though this is not for lack of desire. A couple years ago I wrote a short assignment in Haskell (largely to play with lazy evaluation). I’ve also spent a fair amount of time writing OCaml over the past couple years, for the course I TAed and head TAed which uses OCaml to introduce students to currying and static typing (following an introduction to CS with Scheme), and last semester in a group software engineering course (perhaps I’ll elaborate on this experience another time). So I’m familiar with a similar type system.





JIT VMs and a Compiler Book

30 07 2007

Well, a few friends and I are doing a group independent study this semester where we’ll be writing something at least similar to a JIT JavaScript virtual machine. At least, the original goal was to do this, but we’re now being sidetracked (perhaps productively so) by OpenMoko, an all-open-source smartphone intended to have a feature set comparable to (or exceeding) the iPhone’s. It’s wide open. Rather than being one-upped by new developments in Tamarin, or by the rumored JS VM efforts of a company that would almost certainly beat our performance if they attempt this, we’ll simply be the first on a new platform. And rather than focusing on our original goal (a JIT compiler for purely performance reasons), we can consider issues specific to an embedded platform: memory usage being more important, power consumption, and so on. The platform is not very mature. But I think it will be interesting, and fun. And educational.

As a result of this, our professor asked us to each get a copy of one of Andrew W. Appel‘s compiler books and finish it by the end of August. Since many (/most/all) of us want to use either Haskell or OCaml, we decided to standardize on Modern Compiler Implementation in ML. I got my copy yesterday, and read 40 pages (I read none today, discarding it temporarily in favor of wine tasting in Napa and Sonoma). So far, the book is well written, concise, and explains both the theory behind, and practical aspects of, the topics it addresses. I’m partway through parsing, which has been a nice refresher on FSMs (deterministic and nondeterministic) and regular expressions. One thing that has bothered me slightly is that he sort of plays fast and loose with models of computation. The example that sticks with me is that he gives a pretty good explanation of FSMs, and then begins to talk about arbitrarily deeply nested comments. The main issue with these is that you can’t use a finite state automaton to parse them, because you’d need an infinite number of states to handle an arbitrarily large nesting depth. He gives the (accurate) practical solution that when specifying this in ML-lex (used in the book), states can be explicitly specified, and hitting the open- and close-comment representations can run ML code which increments and decrements a counter (a tiny sketch of the trick follows). This works fine, but he continues referring to the resulting implementation as a finite state automaton – which it isn’t. Adding the counter transforms it into a pushdown automaton with a single-letter (unspecified) stack alphabet.
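
For the curious, here’s the counter trick in miniature, as a plain Haskell function rather than lexer-generator rules (my own sketch): the depth argument is exactly the counter that pushes the recognizer from finite-state into pushdown territory.

    -- Skip past an ML-style nested comment, given the text following the
    -- initial "(*" and a starting depth of 1. The Int is the nesting
    -- counter; a true FSM has no such unbounded state.
    skipComment :: Int -> String -> String
    skipComment 0     rest           = rest
    skipComment depth ('(':'*':rest) = skipComment (depth + 1) rest
    skipComment depth ('*':')':rest) = skipComment (depth - 1) rest
    skipComment depth (_:rest)       = skipComment depth rest
    skipComment _     []             = error "unterminated comment"

For example, skipComment 1 "a *) b" returns " b", having consumed the comment body and its closing delimiter.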

I haven’t finished the whole group of parsing chapters – maybe he doubles back to this. Another consideration is that there are also C and Java versions of this text, both of which have been updated since initial publication, while the ML version has not. It’s possible this was changed in one of those newer versions.

I’m not even sure why this bothers me so much. I used to be all about getting the intuition to work, and leaving formalism and particulars to be secondary concerns. Perhaps the years of hard work from my sometimes overly-pedantic discrete math, models of computation, and algorithms TAs finally sunk in. Or maybe it was writing a theory paper. I’m turning into a stickler for formalism.

Regardless of this silly irritation, I’m still excited for the rest of the book.





Monads! (and Why Monad Tutorials Are All Awful)

26 07 2007

If you have an affinity for programming language theory (as I do), and wander around the internet searching for topics of interest, eventually you will run into references to monads. They’re neither trivial, nor as complex as the dearth of proper explanations of them might suggest.

In a nutshell, they are the PLT manifestation of a concept from category theory, and abstract away propagation of results and side effects, without actually performing side effects or leaving a pure-functional paradigm. Any attempt at description beyond that point tends to either

  • get terribly botched*, or
  • turn into a real category theory explanation, which most people don’t have the background to understand easily,

and for those reasons I will refrain from attempting further explanation myself. I don’t have confidence in my own ability to explain in a non-interactive medium, and I certainly don’t have the mathematics background to formalize it.

The internet is littered with tutorials and explanations “in laymen’s terms” of what monads are, what they are for, why we care, etc. Every such tutorial I have started (at least a dozen, probably several more – all highly recommended) has either bored me to tears, struck me as completely inane, or simply confused me.

I had decided that I was simply going to have to learn abstract algebra and category theory in order to really get a handle on what they were, until a wonderful post appeared on LtU, linking to one of the earlier papers on monads. The paper, by Philip Wadler, not only gives a clear explanation of what a monad is, but also provides strong motivation for looking for monads, an approachable description of how they are derived from a practical point of view for use in a type system, and a number of useful, nontrivial examples. Probably one of the most well-written academic papers I’ve read. You should be familiar with an ML-style type system and notation to understand the examples. Or you need to be good at picking up such things on the fly.

Monads for Functional Programming. Enjoy.
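
If you want a taste before diving in, here is a fragment in the spirit of Wadler’s interpreter, transcribed into Haskell by me (his paper builds the evaluator up through several monads; this is the exception-handling variation, using Either):

    data Term = Con Int | Div Term Term

    -- The monad (here Either String) silently threads failure through
    -- the evaluator; none of the plumbing appears in the code itself.
    eval :: Term -> Either String Int
    eval (Con n)   = return n
    eval (Div t u) = do
      a <- eval t
      b <- eval u
      if b == 0 then Left "divide by zero" else return (a `div` b)

Swap in a different monad and the same shape of evaluator can count steps or log a trace instead, which is the point of the abstraction.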


* The canonical example of a monad is, for some reason, the Maybe monad – it’s certainly a monad, but it’s actually not the most intuitive first example. Most of the monad explanations I found online immediately launch into using Maybe as an example of why monads aren’t hard. They’re not, but the example is usually provided far too early, without enough background, motivation, or explanation, making the reader simply feel like he or she is missing some greater complexity (which is true) that was supposedly already explained (which usually is not the case). Wadler’s example of a simple interpreter is much clearer.