• An example of why software composition is hard

    Combining two concepts into a program is rarely as simple as import A; import B;. Almost always, when you combine two libraries, additional code must be written to handle their interaction. The following example comes from Wolfgang Scholz. I learned about it from the paper An Overview of Feature-Oriented Software Development, by Apel and Kästner.
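
    As a generic illustration (not the example from the post, and with hypothetical names throughout): suppose library A provides a counter and library B provides a lock. Neither knows about the other, so composing them into a thread-safe counter requires glue code that belongs to neither library.

    ```haskell
    import Control.Concurrent.MVar

    -- Feature A: a pure counter (imagine it comes from library A).
    newtype Counter = Counter Int

    increment :: Counter -> Counter
    increment (Counter n) = Counter (n + 1)

    -- Feature B: a lock (imagine it comes from library B).
    withLock :: MVar () -> IO a -> IO a
    withLock lock action = do
      takeMVar lock
      result <- action
      putMVar lock ()
      pure result

    -- The interaction: neither library offers a thread-safe counter, so
    -- composing the two forces us to write this glue ourselves.
    lockedIncrement :: MVar () -> MVar Counter -> IO ()
    lockedIncrement lock ref =
      withLock lock (modifyMVar_ ref (pure . increment))
    ```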

    Read on →

  • The History of the Frame Problem

    This is my synopsis of the paper “The History of the Frame Problem”.

    Read on →

  • Automatic redis through static differentiation

    A new project, “Incremental λ-Calculus”, obviates my previous posts on automatic redis. The team has created an algorithm, called static differentiation, which performs a source-to-source translation on functions in the simply typed lambda calculus. The resulting function takes twice as many arguments as the original, with every other argument being a diff, or derivative, of the preceding argument. When further optimizations, such as constant reduction and dead code elimination, are applied to the source, the non-derivative arguments can sometimes be removed entirely. Here is an example from the paper:
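
    The paper’s own example is behind the link; as a toy sketch of the shape of the translated code (hypothetical names, with plain integers whose “change” is just another integer to add, standing in for the paper’s richer change structures):

    ```haskell
    -- The original function.
    total :: Int -> Int -> Int
    total x y = x + y

    -- The result of static differentiation, written out by hand: each
    -- argument is followed by its change.
    dTotal :: Int -> Int -> Int -> Int -> Int
    dTotal x dx y dy = dx + dy

    -- The update law the translation is meant to guarantee:
    --   total (x + dx) (y + dy) == total x y + dTotal x dx y dy
    -- Here dTotal never touches x or y, so the non-derivative arguments
    -- could be removed entirely, as described above.
    ```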

    Read on →

  • Using unsafeInterleaveIO to lift Haskell's lazy semantics into a toy interpreter

    The main challenge of writing a lazy interpreter is sharing structure: in particular, making sure that an individual closure is not evaluated more than once. Obvious but tedious solutions in Haskell include IORefs and monadic state. The interpreter below uses a completely different tactic: exploiting unsafeInterleaveIO. All function arguments are evaluated “right away”, but inside an unsafeInterleaveIO, so in fact they are not evaluated right away at all. With this hack, we get to write what looks like an interpreter for a strict functional language, but one that actually behaves lazily, by lifting Haskell’s own lazy semantics into our interpreter.
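
    Here is the trick in isolation (a minimal sketch, not the interpreter itself): wrapping an evaluation in unsafeInterleaveIO returns immediately, defers the work until the result is demanded, and Haskell’s sharing guarantees the work happens at most once.

    ```haskell
    import System.IO.Unsafe (unsafeInterleaveIO)

    -- A tiny expression type, just enough to observe when evaluation runs.
    data Expr = Lit Int | Add Expr Expr | Trace String Expr

    eval :: Expr -> IO Int
    eval (Lit n)     = pure n
    eval (Add a b)   = (+) <$> eval a <*> eval b
    eval (Trace s e) = putStrLn ("evaluating " ++ s) >> eval e

    -- Evaluate "right away", but inside unsafeInterleaveIO, so the
    -- evaluation actually runs only when the Int is demanded.
    delayed :: Expr -> IO Int
    delayed = unsafeInterleaveIO . eval

    main :: IO ()
    main = do
      x <- delayed (Trace "x" (Lit 1)) -- nothing printed yet
      let y = x + x                    -- still nothing printed
      print y                          -- prints "evaluating x" once, then 2
    ```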

    Read on →

  • Partial evaluation of fat languages

    Language theory has always been my favorite part of computer science, and recently I have been playing around with partial evaluation. Creating an optimal, self-applicable specializer is really tricky. I thought that I was helping myself by working in a very minimal language, but this turned out to be counter-productive. It is easier to write a specializer in a language that has a large number of unnecessary primitives. The additional complexity of each primitive is very localized: each one is just another case in the giant switch statement, doing nothing more than “lift” the container language’s primitive into the contained language. That is a small price to pay for easing the coding of the rest of the specializer.
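
    A minimal sketch of that switch, with all names hypothetical: each primitive’s case merely lifts the corresponding host-language operation, and the specializer computes it on the spot whenever every argument is static, residualizing it otherwise.

    ```haskell
    -- A fragment of a hypothetical object language.
    data Expr = Lit Int | Prim String [Expr]
    data Val  = IntV Int

    -- The "giant switch": one case per primitive, each doing nothing
    -- more than lift the host language's operation.
    applyPrim :: String -> [Val] -> Val
    applyPrim "add" [IntV a, IntV b] = IntV (a + b)
    applyPrim "mul" [IntV a, IntV b] = IntV (a * b)
    applyPrim "neg" [IntV a]         = IntV (negate a)
    applyPrim p     _                = error ("unknown primitive: " ++ p)

    -- During specialization, a primitive whose arguments are all static
    -- is computed immediately; otherwise it stays in the residual program.
    data SExpr = Static Val | Dynamic Expr

    specializePrim :: String -> [SExpr] -> SExpr
    specializePrim p args =
      case traverse asStatic args of
        Just vs -> Static (applyPrim p vs)
        Nothing -> Dynamic (Prim p (map residualize args))
      where
        asStatic (Static v) = Just v
        asStatic _          = Nothing
        residualize (Static (IntV n)) = Lit n
        residualize (Dynamic e)       = e
    ```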

    Read on →