A new project,
“Incremental λ-Calculus”,
obviates my previous posts on automatic redis.
The team has created an algorithm, called static differentiation, which performs a
source-to-source translation on functions in the simply typed lambda calculus.
The resulting function takes twice as many arguments as the original, with every
other argument being a diff, or derivative, of the preceding argument. When further
optimizations are applied to the source, such as constant reduction and dead code elimination,
the non-derivative
arguments can sometimes be removed entirely. Here is an example from the paper:
type MultiSet = Map String Nat

-- | grandTotal counts the number of elements in each set and adds them
grandTotal :: MultiSet -> MultiSet -> Nat
grandTotal xs ys = fold (+) 0 (merge xs ys)
  where
    -- Imported:
    fold  :: (Nat -> Nat -> Nat) -> Nat -> MultiSet -> Nat
    (+)   :: Nat -> Nat -> Nat
    0     :: Nat
    merge :: MultiSet -> MultiSet -> MultiSet
-- The derivative of a natural number is an integer, since
-- the natural number can either increase or decrease.
type Nat'      = Int
type MultiSet' = Map String Nat'

grandTotal' :: MultiSet -> MultiSet' -> MultiSet -> MultiSet' -> Nat'
grandTotal' xs xs' ys ys' =
    fold' (+) (+') 0 (derive 0) (merge xs ys) (merge' xs xs' ys ys')
  where
    -- Imported:
    fold'  :: (Nat -> Nat -> Nat) -> (Nat -> Nat' -> Nat -> Nat' -> Nat')
           -> Nat -> Nat' -> MultiSet -> MultiSet' -> Nat'
    (+)    :: Nat -> Nat -> Nat
    (+')   :: Nat -> Nat' -> Nat -> Nat' -> Nat'
    0      :: Nat
    derive :: Nat -> Nat'
    merge  :: MultiSet -> MultiSet -> MultiSet
    merge' :: MultiSet -> MultiSet' -> MultiSet -> MultiSet' -> MultiSet'
When optimizations are applied, grandTotal' becomes the implementation
that a programmer would have written:
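(The optimized code below is reconstructed from the paper, so treat the exact spelling as approximate.)

grandTotal' :: MultiSet -> MultiSet' -> MultiSet -> MultiSet' -> Nat'
grandTotal' xs xs' ys ys' = fold (+) 0 (merge xs' ys')
  -- where fold and merge are the variants that operate on the change types
  -- (Map String Int); xs and ys are never consulted.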
In this case, the resulting grandTotal' makes no reference to the original multisets at all.
The authors of the paper call this “self-maintainability”, by analogy to self-maintainable
views in databases.
The problem of inferring redis update operations from database update operations, then,
is simply a matter of differentiating and then optimizing the cache schema. (“Cache schema” is
the mapping from redis keys to the database queries that populate those keys.)
The mappings whose derivatives are self-maintainable can be translated into redis commands.
Here is the source transformation described in the paper:
module Differentiate where

type Id = String

data Term p
  = Primitive p
  | Lambda Id (Term p)
  | App (Term p) (Term p)
  | Var Id
  deriving (Eq, Ord, Read, Show)

differentiate :: MonadId m => (p -> m (Term p)) -> Term p -> m (Term p)
differentiate differentiatePrimitive = diff
  where
    diff term = case term of
      Primitive p -> differentiatePrimitive p
      Lambda var term -> do
        let dvar = "d" ++ var
        rememberId var var $
          generateId dvar $ \var' -> do
            term' <- rememberId dvar var' $ diff term
            return (Lambda var (Lambda var' term'))
      App s t -> do
        s' <- diff s
        -- t and t' will often share common sub-expressions.
        -- A better implementation would factor their commonalities out,
        -- to avoid redundant computation at runtime.
        t' <- diff t
        return (App (App s' t) t')
      Var var -> do
        -- The derivative of a variable x is its change dx; look up the
        -- freshened name that the Lambda case remembered for it.
        var' <- recallId ("d" ++ var)
        return (Var var')

class Monad m => MonadId m where
  -- Return a unique string that starts with the given string.
  generateId :: String -> (String -> m a) -> m a
  -- Add a mapping from an old variable name to a new variable name.
  rememberId :: String -> String -> m a -> m a
  -- Look up the new variable name that was mapped to the given old variable name.
  recallId   :: String -> m String
1: I’m being a little imprecise when I define
the derivative of a type as another type, since the type of the derivative can vary
depending on the value. The valid derivatives of 3 are the integers from -3 to positive infinity,
not all integers.
This post is part of a sequence I am calling
automatic redis, which is my attempt to solve
the cache invalidation problem.
In my previous post, I demonstrated that a
library could infer cache update operations from database insert operations by performing
algebraic manipulations on the queries that define the cache keys. The algebraic
laws needed were the distribution laws between monoids; e.g. count distributes
over the Set monoid to produce the Sum monoid. A library could also
infer the arguments of the cache keys (e.g. taskIds.{userId} -> taskIds.65495) by
performing functional logical evaluation on the cache key’s query. If the library’s goal
became suspended during evaluation, it could proceed by unifying expressions
of low multiplicity with all possible values. For instance, if the goal for a filter
query became suspended, the library could proceed by considering the true and
false cases of the filter separately.
In this post I would like to talk about sorting and limiting, as well as flesh out some of
the data structures that might be used in an automatic redis library.
Set
Set is the simplest data structure,
and forms the foundation for two of our other collection types.
type Set a = Data.Set.Set a
The monoidal operation for Set is simply set union.
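In fact Data.Set already ships with exactly this Monoid instance, so a tiny example is enough to illustrate it:

import qualified Data.Set as DS

unionAsMappend :: DS.Set Int
unionAsMappend = DS.fromList [1, 2] `mappend` DS.fromList [2, 3]
-- fromList [1,2,3]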
List
List is a Set with an embedded sorting function. Tracking the sorting function
enables us to compute redis sorted set keys if necessary.
data List a b = (Ord b) => List (a -> b) (Set a)
A commonly used sorting function would be x => x.modifiedDate.
The monoidal operation for List is the merge operation from merge-sort, with
one restriction: the sorting functions of both lists must be the same
sorting function.
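Sketched on plain Haskell lists (the real List value also carries its sorting function, and it is the library, not the type checker, that enforces that both sides carry the same one), the merge looks like:

mergeBy :: Ord b => (a -> b) -> [a] -> [a] -> [a]
mergeBy _ xs [] = xs
mergeBy _ [] ys = ys
mergeBy key (x:xs) (y:ys)
  | key x <= key y = x : mergeBy key xs (y:ys)
  | otherwise      = y : mergeBy key (x:xs) ys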
LimitedList
LimitedList is a List with an upper bound on its size.
The length of the contained List must be less than or equal to the upper bound.
Tracking the length enables us to know how to trim cache entries, e.g.
when using the ZREMRANGEBYRANK command.
The monoidal operation for LimitedList is to merge-sort the two lists and truncate
the result to the limit. Similarly to List, the library expects both lists to have
the same
upper limit.
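Reusing mergeBy from the sketch above, the LimitedList operation is just merge-then-trim; the take here is the in-memory analogue of trimming the redis key with ZREMRANGEBYRANK:

limitedAppend :: Ord b => Int -> (a -> b) -> [a] -> [a] -> [a]
limitedAppend limit key xs ys = take limit (mergeBy key xs ys)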
First and Last
First and Last are essentially LimitedLists whose upper bound is 1. Making
specialized types for singleton LimitedLists makes working with non-collection redis
data structures easier.
Although First and Last have the same representation, they have different monoidal
operations, namely (x,y) => x and (x,y) => y.
Maybe
The Maybe type is useful for queries that always generate a unique result (such
as lookup by primary key), and as such the Maybe type
does not need to contain a sorting function.
data Maybe a = Nothing | Just a
The monoidal operation is to pick Just over Nothing, but with the restriction
that both arguments cannot be Justs.
instance Monoid (Maybe a) where
  mempty = Nothing
  Nothing  `mappend` Nothing  = Nothing
  Nothing  `mappend` (Just x) = Just x
  (Just x) `mappend` Nothing  = Just x
  (Just x) `mappend` (Just y) = error "This should never happen."
Collision of Justs can happen if the application developer misuses the The operation
(defined below). Unfortunately this error cannot be caught by an automatic redis
library, because
the library never actually computes the value of mappend. The library only
tracks monoidal types so that it can know what the final redis commands will
be.
Speaking of query operations, it’s about time I defined them. But first…
one more monoid.
-- QO = Query Operation
data QO input output where
  -- The operations Where, Count, Sum, The, and SortBy are not concerned with the ordering
  -- of their input, so they can work on Sets, Lists, LimitedLists, Firsts, Lasts,
  -- and Maybes. In these constructor definitions, 'coll' can mean any of those types.
  -- A real implementation might have multiple versions of these query operations,
  -- e.g. WhereSet, WhereList, WhereLimitedList, ..., CountSet, CountList, etc.
  Where :: Expr (a -> Boolean) -> QO (coll a) (coll a)
  Count :: QO (coll a) Sum
  Sum   :: QO (coll Integer) Sum

  -- 'The' takes a collection which is expected to have no more than one element
  -- and extracts the element.
  The :: QO (coll a) (Maybe a)

  -- SortBy converts any kind of collection into a List.
  SortBy :: (Ord b) => Expr (a -> b) -> QO (coll a) (List a)

  -- Limit, First, and Last are defined for any (seq)uence:
  -- Lists, LimitedLists, Firsts, and Lasts.
  Limit :: Integer -> QO (seq a) (LimitedList a)
  First :: QO (seq a) (First a)
  Last  :: QO (seq a) (Last a)

  -- Mapping only works on Set!
  Select :: Expr (a -> b) -> QO (Set a) (Set b)

  -- Well technically Select also works on Maybe, but we'll make a separate
  -- query operation for Maybes.
  Apply :: Expr (a -> b) -> QO (Maybe a) (Maybe b)

  -- Lists contain their sorting function, so we cannot allow arbitrary
  -- mapping on lists. We can, however, support monotonic mappings.
  SelectMonotonic :: Expr (a -> b) -> QO (seq a) (seq b)

  -- Mappings which scramble the order are also allowed, as long as we
  -- have a way to recover the order. i.e. 'a -> c' has to be monotonic,
  -- even though 'a -> b' and 'b -> c' do not.
  SelectReversible :: Expr (a -> b) -> Expr (b -> c) -> QO (seq a) (seq b)
A few more data structures and we will have all the pieces necessary for
an application developer to define a cache schema.
data Table t = Table String

-- A Query is a sequence of query operations that begins with a table
data Query output where
  From    :: Table t -> Query (Set t)
  Compose :: Query input -> QO input output -> Query output

-- convenience constructor
(+>) = Compose

data CacheKeyDefinition = CacheKeyDefinition
  { keyTemplate :: String -- e.g. "taskIds.{userId}"
  , query       :: Query  -- e.g. from tasks where task.userId = userId select task.id
  }
Putting it all together, we can showcase the cache schema for a simple task management
website.
type TaskId = String
type UserId = String

data Task = Task
  { taskId    :: TaskId
  , ownerId   :: UserId
  , title     :: String
  , completed :: Boolean
  , dueDate   :: Integer
  } deriving (Eq, Ord, Read, Show)

taskTable = Table "tasks" :: Table Task

schema = do
  -- The task objects.
  -- type: String
  -- expected redis commands on insert:
  --   SET
  "task.{taskId}" $= \tid ->
    From taskTable
      +> Where (\t -> taskId t == tid)
      +> The
      +> Apply show

  -- For each user, the ids of her most urgent tasks.
  -- type: Sorted Set, where the keys are the dueDate and the values are the taskIds.
  -- expected redis commands on insert:
  --   ZADD
  --   ZREMRANGEBYRANK
  "activeTaskIds.{userId}" $= \uid ->
    From taskTable
      +> Where (\t -> ownerId t == uid && not (completed t))
      +> SortBy dueDate
      +> Limit 100
      +> SelectReversible (\t -> (dueDate t, taskId t)) fst

  -- The number of tasks a user has successfully completed.
  -- type: integer
  -- expected redis commands on insert:
  --   INCR
  "numCompleted.{userId}" $= \uid ->
    From taskTable
      +> Where (\t -> ownerId t == uid && completed t)
      +> Count
It’s important to keep in mind that although I have made the above code look
like Haskell, no library in Haskell could actually use the above code. The variables
occurring after the $= sign are logic variables, not function parameters. An
EDSL could get close to something like the above, but the normal types for
== and && are unusable, and the lambdas inside the Where clauses
would need to be reified anyway.
Still to come: deletes, updates, uniqueness constraints (maybe?), and pseudo-code
for the generation of redis commands.
This post is part of a sequence I am calling
automatic redis, which is my attempt to solve
the cache invalidation problem.
These are some initial thoughts on how to automate cache updates.
The question I want to answer is this: given a mapping from redis
keys to the queries that produce their values, how can I
infer which redis commands should be run when I add, remove, and update items in the collections
which are my source of truth?
The code in this post is pseudo-Haskell. What appears to the left of an = sign is not
always a function, and the . is used for record field lookup as well as function
composition.
I’ll start with a simple example. Suppose I run a website which is a task manager, and
I want to display on my website the number of users who
have signed
up for an account. i.e. I want to display count users. I don’t want to count the entire collection
every time I add an item to it, so instead I keep the count in redis, and increment it whenever
a new account is created. Proving that INCR is the right command
to send to redis is straightforward:
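(The key names and definitions in the sketch below are assumptions: userCount is backed by count users, and activeUserIds is backed by a filter over users.)

-- Inserting a new user into the source of truth:
userCount_new = count (users ++ [user])
              = count users + count [user]   -- count distributes over (++)
              = userCount + 1                -- hence: INCR userCount

activeUserIds_new = map userId (filter active (users ++ [user]))
                  = activeUserIds ++ map userId (filter active [user])
                    -- map and filter both distribute over (++)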
Obviously a pipeline of SADDs will be correct, and the expression to the right
of the ++ gives my automatic cache system a procedure for determining which SADD
operations to perform. When the cache system gets the user object to be added, it
will learn that
the number of SADD operations is either
zero or one, but it doesn’t have to know that ahead of time.
A computer can easily verify proofs like the ones above, as long as they are properly annotated.
But can I get
the computer to create the proofs in the first place?
Rewriting the activeUserIds example to use function composition suggests one approach.
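The shape of the rewrite, with f . g . h standing in for the real query pipeline (for activeUserIds, something like map userId . filter active), is:

cacheValue     = (f . g . h) source
cacheValue_new = (f . g . h) (source `mappend` newItems)
               = (f . g . h) source `mappend` (f . g . h) newItems
               = cacheValue `mappend` (f . g . h) newItems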
provided f, g, h, etc. all distribute over mappend. The actual value of mappend will determine
which redis operation to perform. Integer addition becomes INCR, set union becomes SADD,
sorted set union becomes ZADD, list concatenation becomes
LPUSH or RPUSH, etc. An
important monoid which may not be obvious is the Last
monoid (mappend x y = y), which becomes SET.
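To make that dispatch concrete, here is a hypothetical sketch (the Delta type and commandFor are invented for illustration and are not part of any library; INCRBY stands in for INCR when the integer delta is not 1, and RPUSH for list appends):

data Delta
  = IntSum Integer
  | SetUnion [String]
  | SortedSetUnion [(Double, String)]
  | ListAppend [String]
  | LastWrite String

commandFor :: String -> Delta -> String
commandFor key (IntSum n)          = unwords ["INCRBY", key, show n]
commandFor key (SetUnion ms)       = unwords ("SADD"  : key : ms)
commandFor key (SortedSetUnion ms) = unwords ("ZADD"  : key : concat [[show s, m] | (s, m) <- ms])
commandFor key (ListAppend ms)     = unwords ("RPUSH" : key : ms)
commandFor key (LastWrite v)       = unwords ["SET", key, v]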
So much for updates on constant cache keys. Parameterized cache keys are much more
interesting.
On my task manager website, I want to have one cache entry per user. The user’s id
will determine the cache key that I use.
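The cache key definition itself looks something like this (spelled to match the expressions used later in the post):

taskIds.{userId} = map taskId (filter (\t -> t.owner == userId) tasks)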
It’s tempting to think of this definition as a function:
taskIds :: UserId -> [TaskId]
But an automatic caching system will not benefit from this perspective.
From its perspective, the
input is a task object, and the output is any number of redis commands. The system has to implicitly
discover the userId from the task object it receives. The userId parameter of taskIds.{userId}
is therefore more like a logic variable (e.g. from prolog) than a variable in imperative or functional
languages.
The monoidal shortcut rule is still valid for parameterized redis keys.
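When a task is inserted, the system is left holding an expression along these lines:

taskIds_'userId'_new = taskIds_'userId' ++ (map taskId (filter (\t -> t.owner == userId) [task]))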
The caching system does not need to reduce this expression further, until it receives
the task object. When it does,
it can evaluate the addend as an expression
in a functional-logical language (similar to Curry).
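In the true case, unifying the filter's condition with true binds userId to the owner of the new task (65495, say), and the expression reduces to a single concrete redis write:

taskIds_'userId'_new = taskIds_'userId' ++ (map taskId (if true
                                                         then task : filter (\t -> t.owner == userId) []
                                                         else filter (\t -> t.owner == userId) []))

taskIds_'userId'_new = taskIds_'userId' ++ (map taskId (task : filter (\t -> t.owner == userId) []))

taskIds_'userId'_new = taskIds_'userId' ++ (map taskId [task])

taskIds_'userId'_new = taskIds_'userId' ++ [taskId task]

-- one write against the now-bound key, e.g. taskIds.65495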
In the false case, userId remains unbound, but that’s ok, because the expression reduces to a no-op:
taskIds_'userId'_new = taskIds_'userId' ++ (map taskId (if false
                                                         then task : filter (\t -> t.owner == userId) []
                                                         else filter (\t -> t.owner == userId) []))

taskIds_'userId'_new = taskIds_'userId' ++ (map taskId (filter (\t -> t.owner == userId) []))

taskIds_'userId'_new = taskIds_'userId' ++ (map taskId [])

taskIds_'userId'_new = taskIds_'userId' ++ []

-- nothing to do
In general, whenever the cache system’s
goals become suspended, it can resume narrowing/residuation by picking a subexpression
with low multiplicity (e.g. booleans, enums) and nondeterministically
unifying it with all possible values.
Most of the time, each unification will result in either a no-op, or a redis command with all
parameters bound. An exception (are there others?)
is queries which affect an infinite number of redis keys,
e.g. caching all tasks that do NOT belong to a user.
This is clearly a bug, so the caching system can just log an error and perform no
cache updates.
It may even be possible for the caching system
to catch the bug at compile time by letting the inserted entity (e.g. a task)
be an unbound variable, and seeing if a non-degenerate redis command
with unbound redis key
parameters can
be produced.
This post has focused mostly on inserts and queries that fit the monoidal pattern. In
another post I’ll take a look at deletes and queries which are not so straightforward.
What will programming languages look like one hundred years from now? Where
will all of those wasted cycles end up going?
I think it is safe to say that the programming language of the future, if it
exists at all, will involve some kind of artificial intelligence. This post
is about why I think that theorem provers will be standard in languages of the future.
The hundred year function
solve :: (a -> Bool) -> Size -> Random (Maybe a)
This simple function takes two arguments. The first is a predicate
distinguishing between desirable (True) and undesirable (False) values of a.
The second is a size restriction on a (e.g. number of bytes).
The function returns a random value of a, if one exists, meeting two
constraints:
It satisfies the predicate.
It is no larger than the size constraint.
Also, the solve function is guaranteed to terminate whenever the predicate
terminates.
First I will try to convince you that the solve function is more important than any of your petty opinions about syntax, object-orientation, type theory, or macros. After that I will make a fool of myself by explaining how to build the solve function with today’s technology.
Why it matters
It can find fix-points:
“Put down fahrenheit,” said the explorer. “I don’t expect it to matter.”
def the_obvious_max_subarray(A):
    answer = 0
    for start in range(0, len(A)):
        for end in range(start + 1, len(A) + 1):
            answer = max(answer, sum(A[start:end]))
    return answer

def the_fast_max_subarray(A):
    max_ending_here = max_so_far = 0
    for x in A:
        max_ending_here = max(x, max_ending_here + x)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far

def differentiates(input):
    return the_obvious_max_subarray(input) != the_fast_max_subarray(input)

# Prints None if the two functions are equal for all
# input sequences of length 5 and smaller.
# Otherwise prints a counter-example.
print solve(differentiates, 4 * 5)
So it’s useful for detecting the introduction of bugs when you are optimizing things.
In fact, the solve function can find a more efficient implementation on your behalf.
My computer is smarter than Kadane, if you’ll just be patient.
def steps(algorithm, input):
    (_result, steps) = eval_with_steps(algorithm, input)
    return steps

def is_fast_max_subarray(algorithm):
    # Check that algorithm is equivalent to the_obvious_max_subarray
    if solve(lambda input: the_obvious_max_subarray(input) != eval(algorithm, input), 4 * 5):
        return False
    # Check that algorithm is faster than the_obvious_max_subarray
    for example in example_inputs:
        if steps(algorithm, example) > steps(the_obvious_max_subarray, example):
            return False
    return True

print solve(is_fast_max_subarray, 1000)
# prints a function definition
The speed check is crude, but the idea is there.
Keeping the size constraint reasonable prevents the solve function from just creating a giant table
mapping inputs to outputs.
Curry and Howard tell us that
programs and proofs are one and the same thing. If our solve function can generate programs, then it
can also generate mathematical proofs.
Ten years too late for Uncle Petros
goldbach = parse("forall a > 2: exists b c: even(a) => prime(b) && prime(c) && b + c == a")

def proves_goldbach(proof):
    if proof[-1] != goldbach:
        return False
    for step in range(0, len(proof) - 1):
        if not proof[step].follows_from(proof[0:step]):
            return False
    return True

print solve(proves_goldbach, 10000)
If the proof is ugly, we can decrease the search size, and we will get a
more elegant proof.
The solve function will never get people to stop arguing, but it will at least change the dynamic
vs static types argument from a pragmatic one to an artistic one.
One last example:
Test-driven development advocates writing tests which are sufficient to construct the missing
parts of a program. So why write the program at all?
Beck’s revenge
def passes_tests(patches):
    return unit_tests.pass(partial_program.with(patches))

patches = solve(passes_tests, 10000)
if patches:
    print partial_program.with(patches)
else:
    print "Tests not passable within search space"
In fact, unit_tests can be replaced with any assertion about the desired program: e.g. that it type
checks under Hindley-Milner, that it terminates within a certain number of steps, that it does
not deadlock within the first X cycles of the program’s execution, and so on.
Are you excited yet? Programming in the future is awesome!
A brute-force implementation of solve (enumerate every bit pattern up to the size limit, decode
each one into a value, and test the predicate on it) is correct, but useless. If the predicate
consisted of only one floating point operation, the Sequoia
supercomputer would take 17 minutes to solve a mere 8 bytes.
The complexity of solve is clear. The candidate number num can be non-deterministically chosen from the range in
linear time (size * 8 bits), decoding it takes linear time, and the predicate takes polynomial time in most of
our examples from above. So solve is usually in NP, and no worse than NP-complete as long as
our predicate is in P.
It’s a hard problem. Were you surprised? Or did you get suspicious when the programmers of the
future started exhibiting godlike powers?1
Thankfully, a lot of work has been put into solving hard problems.
Today’s sat solvers can solve problems with 10 million variables. That’s 1.2 megabytes of search
space, which is large enough for almost all of the examples above, if we’re clever enough. (The
Kadane example is the definite exception, since the predicate takes superpolynomial time.)
The Cook-Levin theorem gives us a
procedure for writing the solve function more efficiently.
Imagine a large number of processors, each with its own memory, lined up and
connected so that the output state of each processor and memory becomes the input state of the next processor and memory.
The state of the entire assembly is determined solely by the state of the first processor,
and the state of the whole system is static. Then:

1. Represent each (unchanging) bit in the assembly with a boolean variable, and generate constraints
   on those variables by examining the logic gates connecting the bits.
2. Assign values to some of the variables in a way that corresponds to the first processor containing
   the machine code of the predicate.
3. Likewise, assign values so that the accumulator register of the last processor contains the value True.
4. Apply a sat solver to the variables and their constraints.
5. Read off a solution by examining the first processor's total state.
I call this approach “solving the interpreter trace” because the imaginary processors act as an
interpreter for the predicate, and we ask the sat solver to trace out the processor execution.
The approach is elegant, but it has three major problems:

1. The formula given to the sat solver is enormous, even for small predicates and input sizes. (It's
   polynomial, but the coefficient is large.)
2. The formula is highly symmetrical, which means the sat solver will perform a lot of redundant
   computation.
3. The meaning of bits in later processors is highly dependent on the value of bits in earlier
   processors (especially if the predicate starts off with a loop). This will force our sat solver to
   work a problem from beginning to end, even when a different order (such as end to beginning) would
   be more intelligent.
We can get rid of these problems if we compile our predicate directly into a boolean formula.
Compilation is easy enough if our predicate contains neither loops nor conditionals.
A sat solver would immediately assign an intermediate variable such as w2 the value 0. If we were solving over an interpreter
trace, w2 wouldn’t be a single variable, but would be one of two variables, depending on whether
an earlier boolean b was True or False.
By compiling the predicate, we have enabled the solver to work from end to beginning (if it so chooses).
Loops are trickier. One approach is to unroll the loop a finite number of times.
A six is a six is a six is a
def is_palindrome(str):
    i = 0
    j = len(str) - 1
    if i < j:
        if str[i] != str[j]:
            return False
        i += 1
        j -= 1
    if i < j:
        if str[i] != str[j]:
            return False
        i += 1
        j -= 1
    if i < j:
        if str[i] != str[j]:
            return False
        i += 1
        j -= 1
    if i < j:
        _longer_loop_needed = True
        i = arbitrary_value()  # in case rest of function depends on i or j
        j = arbitrary_value()  # (It doesn't in this example.)
    return True
With loops and conditionals, we are Turing complete. Function calls can be in-lined up until
recursion. Tail recursive calls can be changed to while loops, and the rest can be reified as
loops around stack objects with explicit push and pop operations. These stack objects will
introduce symmetry into our sat formulas, but at least it will be contained.
When solving, we assume the loops make very few iterations, and increase our unroll depth as
that assumption is violated. The solver might then look something like this:
Solver for a predicate with one loop
def solve(predicate, size):
    unroll_count = 1
    sat_solver = SatSolver()
    limit = max_unroll_count(predicate, size)
    while True:
        unrolled = unroll_loop(predicate, unroll_count)
        formula = compile(unrolled)
        sat_solver.push(formula)
        sat_solver.push("_longer_loop_needed == 0")
        sol = sat_solver.solve()
        if sol:
            return sol
        sat_solver.pop()
        sol = sat_solver.solve()
        if sol == None:
            return None  # even unrolling more iterations won't help us
        sat_solver.pop()
        if unroll_count == limit:
            return None
        unroll_count = min(unroll_count * 2, limit)
max_unroll_count does static analysis to figure out the maximum number of
unrolls that are needed. The number of unrolls will either be a constant
(and so can be found out by doing constant reduction within the predicate), or it
will somehow depend on the size of the predicate argument (and so an upper bound can be found by
doing inference on the predicate).
The solver is biased toward finding solutions that use fewer loop iterations, since each loop
iteration sets another boolean variable to 1, and thus cuts the solution space down by half.
If the solver finds a solution, then we return it. If not, then we try again, this time allowing
_longer_loop_needed to be true. If it still can’t find a solution, then we know no solution
exists, since i and j were set to arbitrary values. By “arbitrary”, I mean that, at compilation
time, no constraints will connect the later usages of i and j (there are none in this example)
with the earlier usages.
I admit that this approach is ugly, but the alternative, solving an interpreter trace, is even more
expensive. The hacks are worth it, at least until somebody proves P == NP.
Some of the examples I gave in the first section used eval. Partial evaluation
techniques can be used to make these examples more tractable.
I’ve only talked about sat solvers. You can probably get better results with an smt solver or a
domain-specific constraint solver.
In thinking about this problem, I’ve realized that there are several parallels between compilers
and sat solvers. Constant reduction in a compiler does the same work as the unit clause heuristic
in a sat solver. Dead code removal corresponds to early termination. Partial evaluation reduces
the need for symmetry breaking. Memoization corresponds to clause learning. Is there a name for
this correspondence? Do compilers have an analogue for the pure symbol heuristic? Do sat solvers
have an analogue for attribute grammars?
Today
If you want to use languages which are on the evolutionary path toward the language of the future,
you should consider C# 4.0, since it is the only mainstream language I know of that comes with
a built-in theorem prover.
Update (2013-11-24):
I am happy to report that I am not alone in having these ideas. “Search-assisted programming”,
“solver aided languages”, “computer augmented programming”, and “satisfiability based inductive
program synthesis” are some of the
names used to describe these techniques. Emina Torlak has
developed an exciting language called
Rosette, which is a DSL for creating
solver aided languages. Ras Bodik has also done much work
combining constraint solvers and programming languages. The ExCAPE
project focuses on program synthesis. Thanks to Jimmy Koppel for
letting me know these people exist.
1: Even many computer scientists do not seem to appreciate how different the world would be if we
could solve NP-complete problems efficiently. I have heard it said, with a straight face, that a
proof of P = NP would be important because it would let airlines schedule their flights better, or
shipping companies pack more boxes in their trucks! One person who did understand was Gödel. In
his celebrated 1956 letter to von Neumann, in which he first raised the P versus NP question,
Gödel says that a linear or quadratic-time procedure for what we now call NP-complete problems
would have “consequences of the greatest magnitude.” For such a procedure “would clearly indicate
that, despite the unsolvability of the Entscheidungsproblem, the mental effort of the mathematician
in the case of yes-or-no questions could be completely replaced by machines.” But it would indicate
even more. If such a procedure existed, then we could quickly find the smallest Boolean circuits
that output (say) a table of historical stock market data, or the human genome, or the complete
works of Shakespeare. It seems entirely conceivable that, by analyzing these circuits, we could make
an easy fortune on Wall Street, or retrace evolution, or even generate Shakespeare’s 38th play. For
broadly speaking, that which we can compress we can understand, and that which we can understand we
can predict. — Scott Aaronson
Many languages have adopted some form of the “foreach” keyword, for traversing elements of a collection. The advantages are obvious: fencepost errors are impossible, and programs are easier to read. Foreach loops are not places where I expect to find bugs. But about a month ago, I found one, in a piece of code similar to the code below. The expected behavior of the program is to print out the numbers zero and one, on separate lines. Do not read past the end of the program if you want to find the bug yourself, because I explain it below.
nums = [0, 1]
numclosures = []
for num in nums:
    numclosures.append(lambda: num)
for numclosure in numclosures:
    print numclosure()

# output from Python 2.5.2
# 1
# 1
The solution has to do with the implementation of the for loop in Python. (I ran the program in CPython; it may be interesting to see what other implementations of Python do.) Rather than creating a new binding for the num variable on every iteration of the loop, the num variable is mutated (probably for efficiency or just simplicity of implementation). Thus, even though numclosures is filled with distinct anonymous functions, they both refer to the same instance of num.
I tried writing similar routines in other languages. Ruby and C# do the same thing as Python:
using System;
using System.Collections.Generic;

public static class Foreach
{
    public delegate int NumClosure(); // Func<int> does not exist in the latest Mono???

    public static void Main()
    {
        int[] nums = new int[] { 0, 1 };
        List<NumClosure> numclosures = new List<NumClosure>();
        foreach (int num in nums)
        {
            numclosures.Add(delegate() { return num; });
        }
        foreach (NumClosure numclosure in numclosures)
        {
            Console.WriteLine(numclosure());
        }
    }
}

// output from Mono 1.2.6
// 1
// 1
Please excuse the use of the NumClosure delegate. For some reason I could not get Mono to compile with Func.
Fortunately, all of these languages provide some kind of work-around. Ruby has Array#each, and C# has List<>.ForEach. Python has the map built-in.
using System;
using System.Collections.Generic;

public static class Foreach
{
    public delegate int NumClosure(); // Func<int> does not exist in the latest Mono???

    public static void Main()
    {
        int[] nums = new int[] { 0, 1 };
        List<NumClosure> numclosures = new List<NumClosure>();
        new List<int>(nums).ForEach(delegate(int num)
        {
            numclosures.Add(delegate() { return num; });
        });
        foreach (NumClosure numclosure in numclosures)
        {
            Console.WriteLine(numclosure());
        }
    }
}

// output from Mono 1.2.6
// 0
// 1
Not everybody mutates their enumerators, however. Lisp, the language which normally requires every programmer to be an expert in variable scoping, handles iteration very cleanly:
A friend has pointed out to me that the command line and web interface in my last post do not need to interact with the main game through an iterator. He proposed that the web interface could pause the execution of the game by using exceptions. I played with the idea, and discovered that he was right. The upshot is that continuation-based web serving can be faked in any language which has exceptions. (Lexical closures are also helpful, but can also be faked using objects.) The approach given below relies on neither CPS nor monads, and so has the added advantage of being fairly idiomatic in most mainstream languages.
As before, our game is the children’s “Guess a number between 1 and 100” game:
import random

rgen = random.Random()

class RandomNumberGame(object):
    def __init__(self, interface):
        self.interface = interface

    def start_game(self):
        name = self.interface.prompt_string("Greetings. What is your name? ")
        self.interface.display("Hello, %s." % (name,))
        self.__play_game()

    def __play_game(self):
        self.interface.display("I am thinking of a number between 1 and 100.")
        my_number = self.interface.get(lambda: rgen.randint(1, 100))
        num_guesses = 0
        while True:
            user_number = self.interface.prompt_int("Guess: ")
            num_guesses += 1
            if my_number == user_number:
                break
            elif my_number < user_number:
                self.interface.display("Try lower.")
            else:
                self.interface.display("Try higher.")
        self.interface.display("Correct in %s guesses." % num_guesses)
        play_again = self.interface.prompt_yes_no("Play again? ")
        if play_again:
            self.__play_game()
        else:
            self.interface.display("Thank you for playing.")
I like this version much better, because the ugly wart from before:
The trouble with calling a generator from a generator
# The coroutine equivalent of self.__play_game()
iter = self.__play_game()
try:
    y = yield iter.next()
    while True:
        y = yield iter.send(y)
except StopIteration:
    pass
has been simplified to:
Regular method invocation
self.__play_game()
The interface between the game and the user interface is the same as before, with one addition:
Interface
display(text)        # displays the given text to the user and returns None
prompt_string(text)  # displays the given text to the user and returns a string input by the user
prompt_int(text)     # display the given text to the user and returns an int input by the user
prompt_yes_no(text)  # display the given text to the user and returns True for yes and False for no
get(callback)        # invokes the callback and returns what it returns
The new method “get” is used when generating the answer for the game (the my_number = self.interface.get(lambda: rgen.randint(1, 100)) line above).
Retrieving the random number through self.interface.get ensures that the game will not be constantly changing its answer while a user is playing through the web interface.
As before, the command line interface is very simple.
The web interface works by raising a StopWebInterface exception when execution of the game needs to be paused so that the user can input some data into a form. Our abstraction is thus slightly leaky, in that a game which at some point generically caught all types of exceptions might interfere with the behavior of the web interface. The yield lambda solution did not have this problem.
#!/usr/bin/env python
import cgi
from wsgiref.simple_server import make_server

# Everytime we generate an html page, we create a hidden input element
# with this name. It lets us know which saved history must be resumed
# in order to continue the program.
HISTORY_FORM_NAME = 'WebInterface_history_index'

class History(object):
    """A record of past values returned by the routine. Useful for playback.

    self.get_pending is a function which takes a dictionary like object
    corresponding to the POST variables, and returns the new value to be
    added to the history.

    self.old_values is all of the results of functions yielded by the routine.
    """
    def __init__(self, get_pending, old_values):
        self.get_pending = get_pending
        self.old_values = old_values

class StopWebInterface(Exception):
    def __init__(self, get_pending):
        self.get_pending = get_pending

class WebInterface(object):
    """WebInterface wraps around a routine class to allow for the routine
    to be executed through a browser. It works by remembering the results
    of functions that were yielded by the routine. In order to pick up where
    it left off, the routine is re-run with the remembered values, a new value
    is parsed from the POST variables, and the routine keeps running until it
    reaches another prompt."""

    def __init__(self, routine_class):
        self.routine_class = routine_class
        self.histories = [History(None, [])]

    def respond(self, environ, start_response):
        responder = WebInterface.Response(self, environ, start_response)
        return responder.respond()

    class Response:
        """The Response class is instantiated for every HTTP request. It grabs
        the appropriate history according to the value in the POST variable
        'WebInterface_history_index'. Using that history, it re-runs the
        routine with the contained old_values, parses the POST variables for
        any new values using get_pending, and continues the routine until it
        needs to make another request.

        For simplicity, it modifies web_interface.histories directly. A real
        implementation would need to protect this variable using locks (or
        find a mutation-free solution), since it is possible for the wsgi
        server to make concurrent calls to WebInterface.respond.
        """
        def __init__(self, web_interface, environ, start_response):
            self.web_interface = web_interface
            self.environ = environ
            self.start_response = start_response

        def respond(self):
            routine = self.web_interface.routine_class(self)
            self.form = cgi.FieldStorage(fp=self.environ['wsgi.input'], environ=self.environ)
            history_index = int(self.form.getvalue(HISTORY_FORM_NAME, default="0"))
            if len(self.web_interface.histories) <= history_index:
                self.start_response('412 Cannot read the future', [('Content-type', 'text/html')])
                return ["That history has not yet been written."]
            # Copy the history in order to create a new history.
            history_orig = self.web_interface.histories[history_index]
            self.history = History(history_orig.get_pending, history_orig.old_values[:])
            self.start_response('200 OK', [('Content-type', 'text/html')])
            self.output = ['<form method="POST">\n']
            self.paused = False
            self.iter = self.iterate_old_values()
            try:
                routine.start_game()
            except StopWebInterface, inst:
                self.history.get_pending = inst.get_pending
            self.web_interface.histories.append(self.history)
            self.output += '<input type="hidden" name="%s" value="%s">\n' % (HISTORY_FORM_NAME, len(self.web_interface.histories) - 1)
            self.output += '</form>\n'
            return self.output

        def iterate_old_values(self):
            for val in self.history.old_values:
                yield val
            if self.history.get_pending != None:
                val = self.history.get_pending(self)
                self.history.old_values.append(val)
                yield val

        def get(self, f):
            try:
                return self.iter.next()
            except StopIteration:
                val = f()
                self.history.old_values.append(val)
                return val

        def display(self, str):
            try:
                return self.iter.next()
            except StopIteration:
                self.output += str + "<br/>\n"
                self.history.old_values.append(None)

        def prompt_string(self, prompt):
            return self.prompt_type(prompt, str)

        def prompt_int(self, prompt):
            return self.prompt_type(prompt, int)

        def prompt_yes_no(self, prompt):
            return self.prompt_and_pause(prompt,
                                         [("submit", "btn_yes", "Yes"),
                                          ("submit", "btn_no", "No")],
                                         lambda form: form.has_key('btn_yes'))

        def prompt_type(self, prompt, type_parse):
            return self.prompt_and_pause(prompt,
                                         [("text", "prompt", ""),
                                          ("submit", "btn_submit", "Enter")],
                                         lambda form: type_parse(form["prompt"].value))

        def prompt_and_pause(self, prompt, inputs, parse_form):
            try:
                val = self.iter.next()
                return val
            except StopIteration:
                self.output += prompt
                for input in inputs:
                    self.output += '<input type="%s" name="%s" value="%s">\n' % input
                def read_from_form(responder):
                    return parse_form(responder.form)
                raise StopWebInterface(read_from_form)

if __name__ == "__main__":
    from game import RandomNumberGame
    interface = WebInterface(RandomNumberGame)
    httpd = make_server('', 8000, interface.respond)
    httpd.serve_forever()
Though separation of concerns may be the most important design principle in software, its effective implementation is often elusive. A common problem in web design is how to link a sequence of pages together without scattering their logic all over the application. While this problem has been almost completely solved by continuation based web servers, not every language supports continuations. There is a middle ground however: coroutines. This post describes a light-weight approach to doing continuation-style web programming using Python’s coroutines.
Our target application will be the following “guess a number” game.
#!/usr/bin/env python
import random

rgen = random.Random()

def start_game():
    name = raw_input("Greetings. What is your name? ")
    print "Hello, %s." % (name,)
    play_game()

def play_game():
    print "I am thinking of a number between 1 and 100."
    my_number = rgen.randint(1, 100)
    num_guesses = 0
    while True:
        user_number = int(raw_input("Guess: "))
        num_guesses += 1
        if my_number == user_number:
            break
        elif my_number < user_number:
            print "Try lower."
        else:
            print "Try higher."
    print "Correct in %s guesses." % num_guesses
    play_again = raw_input("Play again? ")
    if play_again.startswith('y') or play_again.startswith('Y'):
        play_game()
    else:
        print "Thank you for playing."

if __name__ == "__main__":
    start_game()
Here is what the program looks like using coroutines:
import random

rgen = random.Random()

class RandomNumberGame(object):
    def __init__(self, interface):
        self.interface = interface

    def __iter__(self):
        name = yield lambda: self.interface.prompt_string("Greetings. What is your name? ")
        yield lambda: self.interface.display("Hello, %s." % (name,))
        # The coroutine equivalent of self.__play_game()
        iter = self.__play_game()
        try:
            y = yield iter.next()
            while True:
                y = yield iter.send(y)
        except StopIteration:
            pass

    def __play_game(self):
        yield lambda: self.interface.display("I am thinking of a number between 1 and 100.")
        my_number = yield lambda: rgen.randint(1, 100)
        num_guesses = 0
        while True:
            user_number = yield lambda: self.interface.prompt_int("Guess: ")
            num_guesses += 1
            if my_number == user_number:
                break
            elif my_number < user_number:
                yield lambda: self.interface.display("Try lower.")
            else:
                yield lambda: self.interface.display("Try higher.")
        yield lambda: self.interface.display("Correct in %s guesses." % num_guesses)
        play_again = yield lambda: self.interface.prompt_yes_no("Play again? ")
        if play_again:
            # The coroutine equivalent of self.__play_game()
            iter = self.__play_game()
            try:
                y = yield iter.next()
                while True:
                    y = yield iter.send(y)
            except StopIteration:
                pass
        else:
            yield lambda: self.interface.display("Thank you for playing.")
Essentially, all read and write actions with the outside world have been replaced with the yield lambda pattern. That includes the call to rgen.randint, because rgen has been initialized according to the current time.
All we need now is an interface that implements the following methods:
Interface
# displays the given text to the user and returns None
interface.display(text)

# displays the given text to the user and returns a string input by the user
interface.prompt_string(text)

# display the given text to the user and returns an int input by the user
interface.prompt_int(text)

# display the given text to the user and returns True for yes and False for no
interface.prompt_yes_no(text)
We’ll start with the simpler command line version.
The behavior of cli.py + game.py is completely identical to simple.py. Remarkably, though, the core logic of the game (in game.py) is now re-usable with any user interface supporting the four methods given above.
A typical web-MVC-style solution to the “guess a number” game would probably have a controller which dispatched on one of three different situations: the user has input her name, the user has input a guess, or the user has told us whether or not she would like to keep playing. The three different situations would likely be represented as distinct URIs. In our game.py, however, a situation corresponds to the “yield lambda” at which execution has been paused.
The essential idea to writing a coroutine-based web interface is this: only run the game routine up to the point where more information is needed. Store the result of every lambda yielded so far. On successive page requests, replay the routine with the stored results, but only invoke the lambdas that were not invoked on a previous page request. The medium for storing the results of the lambdas does not matter. It could be embedded in hidden input elements in HTML (though this raises issues of trust), or stored in a database tied to a session ID. For simplicity, the following implementation stores the values in memory, tied to a value stored in a hidden input element.
#!/usr/bin/env python
import cgi
from wsgiref.simple_server import make_server

# Everytime we generate an html page, we create a hidden input element
# with this name. It lets us know which saved history must be resumed
# in order to continue the program.
HISTORY_FORM_NAME = 'WebInterface_history_index'

class History(object):
    """A record of past values returned by the routine. Useful for playback.

    self.get_pending is a function which takes a dictionary like object
    corresponding to the POST variables, and returns the new value to be
    added to the history.

    self.old_values is all of the results of functions yielded by the routine.
    """
    def __init__(self, get_pending, old_values):
        self.get_pending = get_pending
        self.old_values = old_values

class WebInterface(object):
    """WebInterface wraps around a routine class to allow for the routine
    to be executed through a browser. It works by remembering the results
    of functions that were yielded by the routine. In order to pick up where
    it left off, the routine is re-run with the remembered values, a new value
    is parsed from the POST variables, and the routine keeps running until it
    reaches another prompt."""

    def __init__(self, routine_class):
        self.routine_class = routine_class
        self.histories = [History(None, [])]

    def respond(self, environ, start_response):
        responder = WebInterface.Response(self, environ, start_response)
        return responder.respond()

    class Response(object):
        """The Response class is instantiated for every HTTP request. It grabs
        the appropriate history according to the value in the POST variable
        'WebInterface_history_index'. Using that history, it re-runs the
        routine with the contained old_values, parses the POST variables for
        any new values using get_pending, and continues the routine until it
        needs to make another request.

        For simplicity, it modifies web_interface.histories directly. A real
        implementation would need to protect this variable using locks (or
        find a mutation-free solution), since it is possible for the wsgi
        server to make concurrent calls to WebInterface.respond.
        """
        def __init__(self, web_interface, environ, start_response):
            self.web_interface = web_interface
            self.environ = environ
            self.start_response = start_response

        def respond(self):
            routine = self.web_interface.routine_class(self)
            self.form = cgi.FieldStorage(fp=self.environ['wsgi.input'], environ=self.environ)
            history_index = int(self.form.getvalue(HISTORY_FORM_NAME, default="0"))
            if len(self.web_interface.histories) <= history_index:
                self.start_response('412 Cannot read the future', [('Content-type', 'text/html')])
                return ["That history has not yet been written."]
            # Copy the history in order to create a new history.
            history_orig = self.web_interface.histories[history_index]
            self.history = History(history_orig.get_pending, history_orig.old_values[:])
            self.start_response('200 OK', [('Content-type', 'text/html')])
            self.output = ['<form method="POST">\n']
            self.paused = False
            iter = routine.__iter__()
            try:
                # Re-run the routine over all old_values.
                action = iter.next()
                for history_value in self.history.old_values:
                    action = iter.send(history_value)
                # If a get_pending was previously set, invoke it in order
                # to parse the POST variables for any new values.
                if self.history.get_pending != None:
                    val = self.history.get_pending(self)
                    self.history.old_values.append(val)
                    action = iter.send(val)
                # Continue the routine until another prompt is made.
                while not self.paused:
                    new_value = action()
                    if not self.paused:
                        self.history.old_values.append(new_value)
                        action = iter.send(new_value)
            except StopIteration:
                pass
            self.web_interface.histories.append(self.history)
            self.output += '<input type="hidden" name="%s" value="%s">\n' % (HISTORY_FORM_NAME, len(self.web_interface.histories) - 1)
            self.output += '</form>\n'
            return self.output

        def display(self, str):
            self.output += str + "\n"

        def prompt_string(self, prompt):
            self.prompt_type(prompt, str)

        def prompt_int(self, prompt):
            self.prompt_type(prompt, int)

        def prompt_yes_no(self, prompt):
            self.prompt_and_pause(prompt,
                                  [("submit", "btn_yes", "Yes"),
                                   ("submit", "btn_no", "No")],
                                  lambda form: form.has_key('btn_yes'))

        def prompt_type(self, prompt, type_parse):
            self.prompt_and_pause(prompt,
                                  [("text", "prompt", ""),
                                   ("submit", "btn_submit", "Enter")],
                                  lambda form: type_parse(form["prompt"].value))

        def prompt_and_pause(self, prompt, inputs, parse_form):
            self.output += prompt
            for input in inputs:
                self.output += '<input type="%s" name="%s" value="%s">\n' % input
            self.paused = True
            def read_from_form(responder):
                return parse_form(responder.form)
            self.history.get_pending = read_from_form

if __name__ == "__main__":
    from game import RandomNumberGame
    interface = WebInterface(RandomNumberGame)
    httpd = make_server('', 8000, interface.respond)
    httpd.serve_forever()