Done with Undergrad, Procedural Generation, Model Checking

17 05 2008

I’m finally done with undergrad.  It feels like 4 years just blew by in the blink of an eye.  And come fall, I’ll be working, but also applying to grad schools.  What a ride.

Incidentally, I did finish my thesis.  It’s here: Type-Safe Stack Inspection for Garbage Collector Implementation.  Every time I write that title now I wish I’d cut it off as “Type-Safe Stack Inspection.”  Oh well, too late now.  I meant to post about it right after I finished, but the final edits ran into finals period, which consumed the past two weeks of my life.

On to other projects.

Currently, Brendan, Rob, and I are working on building a fairly ambitious mapping tool.  We decided to use procedural modeling to implement a multiple-level-of-detail, real-time procedural generator for, well, planets.  The idea is that we'll start by rendering a planet with continents and oceans on screen, and the user can zoom in to nearly any level of detail.  Moving closer to the planet gradually adds greater and greater detail: first rivers, mountains, and valleys, then cities and roads, then individual streets and buildings, and possibly even building interiors.  We think we can do it quickly, and without using unreasonable amounts of memory, by taking advantage of the fact that procedural generation is deterministic if you reuse the same random number generator seed.  So we can cache seeds for regions, along with constraints on their edges, and store only a negligible amount of information: enough that you can back out or cross the planet, have the old region freed from memory, then return and see the same things.  We're aiming pretty high, so we'll see what happens.
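
To make the seed-caching idea concrete, here is a minimal sketch in C. All the names are hypothetical (this is not our actual code); the point is just that a region's contents are a pure function of its seed, so generated geometry can be thrown away and rebuilt identically from a cached seed:

#include <stdint.h>
#include <stdio.h>

/* Derive a child region's seed deterministically from its parent's seed
 * and its coordinates within the parent. A simple integer hash; a real
 * system would want better mixing. */
static uint64_t child_seed(uint64_t parent_seed, int cx, int cy)
{
    uint64_t h = parent_seed ^ ((uint64_t)(cx * 73856093 ^ cy * 19349663)
                                * 0x9E3779B97F4A7C15ULL);
    h ^= h >> 33;
    h *= 0xFF51AFD7ED558CCDULL;
    h ^= h >> 33;
    return h;
}

/* A tiny xorshift PRNG; any generator works, as long as the same seed
 * always produces the same sequence. */
static uint64_t next_rand(uint64_t *state)
{
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    return *state;
}

/* "Generate" a region. Given the same seed, this always produces the
 * same heights, so the region can be freed and regenerated on demand. */
static void generate_region(uint64_t seed)
{
    uint64_t rng = seed ? seed : 1;  /* xorshift state must be nonzero */
    for (int i = 0; i < 4; i++)
        printf("height[%d] = %llu\n", i,
               (unsigned long long)(next_rand(&rng) % 1000));
}

int main(void)
{
    uint64_t planet_seed = 42;
    uint64_t region = child_seed(planet_seed, 3, 7);

    generate_region(region);  /* first visit */
    /* ... region evicted; user crosses the planet and comes back ... */
    generate_region(region);  /* identical terrain: only the seed was kept */
    return 0;
}

The edge constraints would be cached alongside the seed, so that adjacent regions generated at different times still line up.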

I also picked up a copy of a book my advisor told me about, Principles of Model Checking.  It was just released earlier this month, and it’s great so far.  I read a couple of the original LTL and CTL model checking papers (LTL: Model Checking Using Automata Theory; CTL: Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic Specifications) a couple of years ago in a seminar, but I didn’t have enough background in CS back then for all the details to really click (those papers were actually my first encounter with automata in CS, before I took a theory of computation class).  This book is shaping up to be a good refresher for me, filling in gaps from what I missed the first time I looked at this material, and it seems like it would be a good introduction for those new to model checking as well.  Sadly there is not yet a place to look inside it online.





Provably Correct Garbage Collection

25 10 2007

Working towards my honors thesis, my partners and I have read several interesting papers on using type systems to prove correctness (safety) of garbage collection implementations:

  • Type-Preserving Garbage Collectors: In this paper, Wang and Appel present a way of writing garbage collectors in the language being collected, relying on a statically checked type system to prove the safety of the collector. They implement a copying collector as a function written in the ML-like language they are collecting, essentially by parameterizing all types by memory regions. Each data type and language function is parameterized over a single region as a type. The garbage collection routines are a basic collector plus per-datatype collector routines parameterized over a from-space and a to-space. Each copy function deep-copies a structure in the from-space, and returns a copy located in the to-space. They address pointer sharing by providing an implementation with forwarding pointers (for which they perform an odd version of subtyping to guarantee the user program can’t modify the forwarding pointers in objects, since the collector is written in the language being collected); a minimal sketch of this copy-with-forwarding pattern appears after this list. To guarantee that the collector returns no pointers to data in the from-space, they introduce a language construct ‘only’, which takes one or more regions and a body, and executes the body with only the specified regions in scope. The problem with this is that if you forget to use the construct, you could actually return the original from-space pointer. The paper is fairly well-written, and the work is presented as a series of progressively more complete garbage collector functions. A fair amount of time is spent addressing particulars of their implementation, because they make a number of simplifying assumptions. The language supported is first-order only, and programs are written in CPS with closures converted to data structures. Also presented are performance results, which are interesting but largely inconclusive, because the system they present is intended only to show that it is possible to write a safe garbage collector, not to be a practical system. Many optimizations are mentioned throughout the paper as future work.
  • Implementation and Performance Evaluation of a Safe Runtime System in Cyclone: This paper by Fluet and Wang aims to show that an approach similar to the one above can actually be made to perform reasonably. They implement a Scheme interpreter in Cyclone (a safe dialect of C), and they build a copying collector using Cyclone’s region primitives. The approach is relatively straightforward (and largely similar to the approach above) with one exception: instead of a safe upcast with a construct which prevents later downcasts, they use linear types (unique pointers), along with some under-specified functions that create a sequence of region types, to support forwarding pointers. They compare their interpreter running some standard Scheme benchmarks against a couple of others. Their interpreter running their garbage collector actually slightly outperforms their interpreter using the standard Cyclone conservative collector – consistently. That is fairly impressive, and shows that the overhead of safe collection alone is not a significant bottleneck. Comparisons to MzScheme, however, are not so flattering: with the exception of the continuation-based benchmark, MzScheme far outperforms their implementation, by a factor of 2 to 3. And this was done in 2004, before MzScheme had a JIT compiler. This may be a result of the structure of their interpreter: it is a very basic interpreter, which performs a predefined number of steps on an abstract machine structure, checks whether a collection is necessary, and repeats. They do, however, compare the total amount of unsafe code in each implementation, and come in at about 1/6th the amount of unsafe code of the next ‘cleanest’ implementation. Most of that, apparently, is from a highly optimized version of malloc() used to implement the region primitives.
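
To make the forwarding-pointer trick concrete, here is its untyped essence sketched in plain C. This is my own sketch, not code from either paper; the papers’ whole point is that their type systems make this pattern provably safe, which plain C cannot express:

#include <stddef.h>
#include <string.h>

struct obj {
    struct obj *forward;    /* non-NULL once copied: points into to-space */
    size_t      nfields;
    struct obj *fields[];   /* child pointers, kept uniform for simplicity */
};

static char *to_free;       /* bump-allocation pointer into the to-space */

/* Deep-copy an object from from-space into to-space. The forwarding
 * pointer ensures shared (or cyclic) structure is copied exactly once. */
static struct obj *copy(struct obj *o)
{
    if (o == NULL)
        return NULL;
    if (o->forward != NULL)
        return o->forward;              /* already copied: preserve sharing */

    size_t size = sizeof(struct obj) + o->nfields * sizeof(struct obj *);
    struct obj *n = (struct obj *)to_free;
    to_free += size;

    memcpy(n, o, size);
    n->forward = NULL;
    o->forward = n;                     /* mark the from-space original */
    for (size_t i = 0; i < o->nfields; i++)
        n->fields[i] = copy(o->fields[i]);  /* deep-copy children */
    return n;
}

Nothing in this C version stops a caller from hanging on to the from-space original after the copy; the ‘only’ construct in the first paper, and the region and unique-pointer types in the second, exist precisely to rule that out statically.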

Both papers write their main programs in slightly odd ways – the Wang/Appel paper does CPS and closure conversion to make stacks and environments safely traversable, and the Fluet/Wang paper sort of cheats by writing a straight interpreter in a separate language. Not to disparage the latter – they still write a garbage collector which is (itself) free of unsafe code, which is wonderful. But the constructions as state machines, and the CPS and closure conversions, definitely hurt performance. They’re necessary, however, in order to safely collect stacks and environments.  My thesis advisor told my partners and me that he may be aware of a way to walk these structures without the efficiency loss of the explicit structures used in these papers.  So next on the block: Continuations from Generalized Stack Inspection.  That, and figuring out just how the two mysteriously under-specified primitives in the Cyclone paper really work (time to read the corresponding technical report), and contemplating the problems of forgetting to use the ‘only’ construct in the first paper.





Preparing to Build a Rails App

24 10 2007

I’ve been thinking for some time about building a system to organize all of the papers I need (or just want) to read, track whether I’ve read them, and so on, along with an indexing and tagging system so I can search for papers both by topic and by my motivation for reading them.

As I’ve been enjoying playing with Ruby lately while crawling online postings of course textbooks for Mocha, and there’s been so much hullabaloo about Ruby on Rails, I’ve decided to give Rails a shot.

The instructions given for getting up and running are pretty straightforward, and short. Downloading, building, and installing Ruby was no problem. Downloading and installing RubyGems was also without incident. Pretty smooth so far. The next step in the instructions looks very straightforward: run gem install rails --include-dependencies. This yielded an error about not finding the rails package, which seems like a pretty big problem for one of Ruby’s flagship apps. Googling revealed that for some reason, you need to purge and regenerate your package cache sometimes before this works. So I did that. Still, no luck, but the error this time is that the package server refused my connection. More googling yielded the helpful suggestion “Your error message seems like a Rubyforge problem. It happens occasionally. Just try it again later.” For something that bills itself as being akin to apt-get, it has a long way to go when a user’s first interaction is:

[colin@sliver:~/Desktop/rubygems-0.9.4]$ gem install rails --include-dependencies
Bulk updating Gem source index for: http://gems.rubyforge.org
ERROR:  While executing gem ... (Gem::GemNotFoundException)
    Could not find rails (> 0) in any repository
[colin@sliver:~/Desktop/rubygems-0.9.4]$ gem env
RubyGems Environment:
  - VERSION: 0.9.4 (0.9.4)
  - INSTALLATION DIRECTORY: /usr/local/lib/ruby/gems/1.8
  - GEM PATH:
     - /usr/local/lib/ruby/gems/1.8
  - REMOTE SOURCES:
     - http://gems.rubyforge.org
[colin@sliver:~/Desktop/rubygems-0.9.4]$ rm -f /usr/local/lib/ruby/gems/1.8/source_cache
[colin@sliver:~/Desktop/rubygems-0.9.4]$ gem install rails --include-dependencies
Bulk updating Gem source index for: http://gems.rubyforge.org
ERROR:  While executing gem ... (Gem::GemNotFoundException)
    Could not find rails (> 0) in any repository
[colin@sliver:~/Desktop/rubygems-0.9.4]$ gem update
Updating installed gems...
Bulk updating Gem source index for: http://gems.rubyforge.org
Gems: [] updated
[colin@sliver:~/Desktop/rubygems-0.9.4]$ gem install rails --include-dependencies
ERROR:  While executing gem ... (Gem::RemoteFetcher::FetchError)
    Errno::ECONNREFUSED reading http://gems.rubyforge.org/gems/rake-0.7.3.gem
[colin@sliver:~/Desktop/rubygems-0.9.4]$ gem install rails --include-dependencies
ERROR:  While executing gem ... (Gem::RemoteFetcher::FetchError)
    Errno::ECONNREFUSED reading http://gems.rubyforge.org/gems/actionwebservice-1.2.5.gem
[colin@sliver:~/Desktop/rubygems-0.9.4]$ gem install rails --include-dependencies
ERROR:  While executing gem ... (Gem::RemoteFetcher::FetchError)
    Errno::ECONNREFUSED reading http://gems.rubyforge.org/gems/rails-1.2.5.gem
[colin@sliver:~/Desktop/rubygems-0.9.4]$ gem install rails --include-dependencies
ERROR:  While executing gem ... (Gem::RemoteFetcher::FetchError)
    Errno::ECONNREFUSED reading http://gems.rubyforge.org/gems/rails-1.2.5.gem
[colin@sliver:~/Desktop/rubygems-0.9.4]$ gem install rails --include-dependencies
Successfully installed rails-1.2.5
[colin@sliver:~/Desktop/rubygems-0.9.4]$

Hm. At least the next step, generating a simple Rails app, works fine.





Slight Changes

22 10 2007

Some slight changes have been brewing for some time at this blog.

Mainly, I will start writing here about as many of the papers I read as I can. The motivation for this is twofold: increasing the frequency of my writing, and understanding papers better. My thesis advisor has been telling me for some time now that I need to write a one-paragraph summary of a paper after I finish reading it, that I should not need to refer back to the paper in order to do this, and that I should not move on to the next paper without producing a summary for the previous one. I’ll try to group these into sets of relevant papers, which also tie together my thoughts on the papers and the ideas I get from them.

The next few papers will be on Singularity and provably correct garbage collectors. The team behind the JIT JavaScript VM I was originally going to write with friends this semester split into multiple groups when another team member decided to pursue a different project and took half the group with him. I intended to stay with the original project, but only two others made the same choice, and three people is not enough to implement a complete JIT compiler from scratch in 3 months. We meandered through a number of topics, and still don’t have a clear final direction (which is worrisome considering this is supposed to turn into my honors thesis). We’ve settled more or less into the broad area of cross-language memory management and runtime systems. We’ve read a number of papers: several on conservative garbage collection, a couple on Haskell FFI, the Xerox PARC Portable Common Runtime, and a few others. I’ll try to write about all of them at some point, again grouped into logical sets. That’s all for now – I have to spend time getting ready for upcoming midterms and interviews.





Improving Solaris Resource Management

8 08 2007

Over at my Sun blog, I just recently posted the slides from the KTD (Kernel Technical Discussion) presentation I gave on Monday.





Source Control and Build Systems

18 07 2007

I’ve been thinking lately about the shortcomings of SCMs (Source Control Management systems) and build systems. Mainly, that by integrating a certain kind of SCM with a build system, builds of large projects could be made to go a bit faster, while also gaining some semantic information.

SCMs I’ve used:

  • CVS – About as basic as you can get while still being useful. Basic checkout-modify-checkin model which forms the basis of most SCMs.
  • Subversion – A slightly nicer version of CVS for most intents and purposes. This is what I consider the baseline for source control. The only difference from CVS that I’ve ever cared about is that it screws up less frequently with binary files.
  • Mercurial – A nice distributed SCM.
  • SCCS / Teamware – Sun’s internal SCM. It’s a distributed SCM with very explicit parent-child repository relationships (which are of course flexible; more on this later).
  • Perforce – Pretty obnoxious in my experience, and not squarely within any one SCM paradigm.

I like distributed SCMs, largely because they also provide a way for a developer to have their own version control without dealing with the main repository. There’s no need to make the SCM work with another one used locally, or with another copy of the same SCM – checkins can be done on the local repository without issue. This is important for developers who may, for example, be just starting on an existing project and not have commit access yet, or who want to work on a laptop or other local machine in circumstances which lack access to the main repository. I also like a noticeable parent-child relationship between repositories. SCCS/Teamware has this. Mercurial has a notion of a parent repository (by default the one you cloned from), but it isn’t emphasized in the use of the tool, so it isn’t as semantically meaningful. This is a personal nitpick, and I don’t really have a technical reason backing up this preference.

Perforce is mostly just obnoxious in my opinion*. It is similar to a DSCM, in that each developer has their own copy of the repository, from which they do direct file checkouts. It keeps checkout state on a central server, however. And to save space, it enforces a requirement that all copies of the repository be on the same filesystem, because it uses hardlinks to do an initial clone of the repository. This is novel, and certainly saves space, but in addition to making remote work impossible it creates an enormous problem. Developers do occasionally screw up their umask on UNIX systems, so they do occasionally need to chmod files. Or, more commonly, an editor will allow users to easily (sometimes accidentally) override read-only protection on files by chmod’ing them independently. Then the editing begins. It’s bad enough when someone commits bad code to a main repository, and those who sync up before the changes are backed out have their workspaces broken. But when someone accidentally chmod’s a file in the Perforce situation above, it can simultaneously break the build in every checked-out workspace that hasn’t done a local checkout of that file (which would remove the hardlink and replace it with an actual copy). For this reason, I dislike SCMs which make hardlinks for checkouts.

One thing I like about both Perforce and SCCS/Teamware is that each has an explicit list of the files being edited, which is the inspiration for the main point of this post.

Build systems I’ve used:

  • make (and parallel versions of it) – The baseline build system from a UNIX/*nix perspective.
  • ant – Used for Java.

The main problem with make and ant is that they are stateless – so for partial builds, they still need to do a lot of initial work to determine what needs to be compiled and, for languages like C, linked. Both traverse the filesystem, stat’ing every file the build target depends on. For small projects, this isn’t significant. For larger projects (such as OpenSolaris), it becomes noticeable and increases the build time – particularly on systems with other slowdowns for file operations, such as older systems with slow disks, or systems using network mounts for files. An incremental x86 build of OpenSolaris, on the build machines at work, takes between 10 and 40 minutes after changing one file, depending on the number of users on the build servers. Recompiling any one of the main files in the genunix module takes less than 30 seconds. Re-linking all of the object files which compose genunix doesn’t take more than a minute or so. And rebuilding the cpio archives for the bfu script only takes about 2 to 3 minutes. Which means that each OS build I do takes between 2 and 8 times as long as it should. This is partly a result of the OS makefile structure – there has been some discussion recently on some OpenSolaris lists about the construction of the makefiles. But the main issue is that either way, hundreds of files which are known (to me, and to the SCM) not to have changed get stat’ed anyway – the build system simply lacks this information.
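
For reference, here is a minimal sketch (mine, not make’s actual source) of the stateless check make effectively performs for each target. Note that it costs one stat() per prerequisite on every run, even when nothing has changed:

#include <sys/stat.h>

/* Rebuild if the target is missing, or if any prerequisite is newer.
 * This is the whole decision procedure, but it requires a stat() per
 * file on every invocation, whether or not anything changed. */
static int needs_rebuild(const char *target, const char **deps, int ndeps)
{
    struct stat ts, ds;

    if (stat(target, &ts) != 0)
        return 1;                        /* no target yet: build it */
    for (int i = 0; i < ndeps; i++)
        if (stat(deps[i], &ds) != 0 || ds.st_mtime > ts.st_mtime)
            return 1;                    /* prerequisite newer (or gone) */
    return 0;
}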

One possible solution to this (which I might try to code up this weekend) is to combine a distributed SCM with explicit local checkout (so it has a full list of all potentially modified files) with a build system. This would allow the tools, given a dependency graph, to not even look at source files which don’t need to be rebuilt. Only modified files, and those which depend explicitly on them, would need to be checked (a rough sketch follows). This could shave some noticeable time off the OS build. Of course, this doesn’t address things like header dependencies unless those are added explicitly to the dependency tree. But for object files, this could be a win.
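
Here is a rough sketch of what I have in mind, with all names hypothetical: seed the dirty set from the SCM’s explicit checkout list and propagate it along reverse dependency edges, so that clean files off the dirty paths are never examined at all:

#include <stdbool.h>
#include <stddef.h>

struct node {
    const char   *name;
    struct node **dependents;   /* reverse edges: targets built from this */
    size_t        ndependents;
    bool          dirty;
};

/* Mark a node and everything downstream of it for rebuilding. */
static void mark_dirty(struct node *n)
{
    if (n->dirty)
        return;                 /* already scheduled */
    n->dirty = true;
    for (size_t i = 0; i < n->ndependents; i++)
        mark_dirty(n->dependents[i]);
}

/* modified[] comes straight from the SCM's list of checked-out files;
 * no filesystem traversal is needed to discover it. Afterward, rebuild
 * only nodes with dirty == true, in dependency order; unmodified files
 * with no dirty dependencies are never even stat'ed. */
void plan_build(struct node **modified, size_t nmodified)
{
    for (size_t i = 0; i < nmodified; i++)
        mark_dirty(modified[i]);
}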


* It is probably worth noting that this is based on my experience at one company where Perforce is in use. I haven’t looked into it much, so these problems could just be artifacts of that particular configuration of Perforce.