This post is part of a sequence I am calling automatic redis, which is my attempt to solve the cache invalidation problem.
In my previous post, I demonstrated that a library could infer cache update operations from database insert operations by performing algebraic manipulations on the queries that define the cache keys. The algebraic laws needed were the distribution laws between monoids: for example, `count` distributes over the `Set` monoid to produce the `Sum` monoid. A library could also infer the arguments of the cache keys (e.g. `taskIds.{userId}` → `taskIds.65495`) by performing functional logic evaluation on the cache key's query. If the library's goal became suspended during evaluation, it could proceed by unifying expressions of low multiplicity with all possible values. For instance, if the goal for a filter query became suspended, the library could proceed by considering the `true` and `false` cases of the filter separately.
In this post I would like to talk about sorting and limiting, as well as flesh out some of the data structures that might be used in an automatic redis library.
Set
`Set` is the simplest data structure, and forms the foundation for two of our other collection types.
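A minimal definition might look like this (a sketch of the shape, not necessarily the library's actual representation):

```haskell
-- A Set is just a bag of rows; its monoidal operation is set union.
newtype Set a = Set [a]
```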
The monoidal operation for `Set` is simply set union.
List
`List` is a `Set` with an embedded sorting function. Tracking the sorting function enables us to compute redis sorted set keys if necessary.
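One plausible shape pairs the set with the function it is sorted by (using `Double` for the score, mirroring redis sorted-set scores; an assumption on my part):

```haskell
-- A List is a Set plus the sorting function it is ordered by.
data List a = List (a -> Double) (Set a)
```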
A commonly used sorting function would be `x => x.modifiedDate`.
The monoidal operation for `List` is the merge operation from mergesort, with one restriction: both lists must have the same sorting function.
LimitedList
`LimitedList` is a `List` with an upper bound on its size.
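A sketch of the type, carrying the bound alongside the list:

```haskell
-- A LimitedList is a List that may hold at most `limit` elements.
data LimitedList a = LimitedList Int (List a)
```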
The length of the contained `List` must be less than or equal to the upper bound. Tracking the length enables us to know how to trim cache entries, e.g. when using the ZREMRANGEBYRANK command.
The monoidal operation for `LimitedList` is to mergesort the two lists and truncate the result to the limit. As with `List`, the library expects both lists to have the same upper limit.
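As a sketch of that operation (the helper name is my own, and `sortOn` stands in for mergesort's merge step for brevity):

```haskell
import Data.List (sortOn)

-- Merge two lists ordered by the same score function, then truncate
-- the result to the shared limit.
limitedAppend :: Ord b => (a -> b) -> Int -> [a] -> [a] -> [a]
limitedAppend score limit xs ys = take limit (sortOn score (xs ++ ys))
```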
First and Last
`First` and `Last` are essentially `LimitedList`s whose upper bound is 1. Making specialized types for singleton `LimitedList`s makes working with non-collection redis data structures easier.
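Their definitions would be as simple as (sketch):

```haskell
newtype First a = First a
newtype Last a  = Last a
```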
Although `First` and `Last` have the same representation, they have different monoidal operations, namely `(x,y) => x` and `(x,y) => y`.
Maybe
The `Maybe` type is useful for queries that always generate a unique result (such as lookup by primary key), and as such the `Maybe` type does not need to contain a sorting function.
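This is the familiar type from Haskell's Prelude:

```haskell
data Maybe a = Nothing | Just a
```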
The monoidal operation is to pick `Just` over `Nothing`, but with the restriction that both arguments cannot be `Just`s.
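A sketch of that restricted operation (the function name and error message are my own wording):

```haskell
-- Prefer Just over Nothing; two Justs indicate a misuse of The.
mappendMaybe :: Maybe a -> Maybe a -> Maybe a
mappendMaybe (Just x) Nothing  = Just x
mappendMaybe Nothing  (Just y) = Just y
mappendMaybe Nothing  Nothing  = Nothing
mappendMaybe (Just _) (Just _) = error "collision: both arguments are Just"
```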
Collision of `Just`s can happen if the application developer misuses the `The` operation (defined below). Unfortunately this error cannot be caught by an automatic redis library, because the library never actually computes the value of `mappend`. The library only tracks monoidal types so that it can know what the final redis commands will be.
Speaking of query operations, it’s about time I defined them. But first… one more monoid.
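The introduction mentioned that `count` distributes over `Set` to produce `Sum`, so a `Sum` monoid along these lines is one plausible candidate for the definition here (an assumption on my part):

```haskell
-- Sum accumulates counts; its monoidal operation is addition.
newtype Sum = Sum Int

instance Semigroup Sum where
  Sum x <> Sum y = Sum (x + y)
```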
Query operations
Query operations are parameterized over an input type and an output type.
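One natural encoding is a GADT indexed by the input and output types. Of the constructors below, only `Where` and `The` are named in this post; the rest, and all of the signatures, are my own guesses:

```haskell
{-# LANGUAGE GADTs #-}

data Op i o where
  -- Filter a set by a predicate.
  Where  :: (a -> Bool) -> Op (Set a) (Set a)
  -- Order a set by a score function, producing a list.
  SortBy :: (a -> Double) -> Op (Set a) (List a)
  -- Truncate a list to at most n elements.
  Limit  :: Int -> Op (List a) (LimitedList a)
  -- Extract the unique element of a set, if any.
  The    :: Op (Set a) (Maybe a)
```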
A few more data structures and we will have all the pieces necessary for an application developer to define a cache schema.
Putting it all together, we can showcase the cache schema for a simple task management website.
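Based on the post's description of the notation (logic variables after `$=`, `Where` clauses with `==` and `&&`, and the `taskIds.{userId}` key from the introduction), one entry of such a schema might read roughly like this; `SortBy`, `modifiedDate`, `assignee`, and `tasks` are hypothetical names of my own:

```haskell
-- Hypothetical schema entry: taskIds.{userId} caches the ids of tasks
-- assigned to userId. `userId` is a logic variable, not a parameter.
taskIds userId $= SortBy modifiedDate
                    (Where (\task -> assignee task == userId) tasks)
```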
It’s important to keep in mind that although I have made the above code look like Haskell, no library in Haskell could actually use the above code. The variables occurring after the `$=` sign are logic variables, not function parameters. An EDSL could get close to something like the above, but the normal types for `==` and `&&` are unusable, and the lambdas inside the `Where` clauses would need to be reified anyway.
Still to come: deletes, updates, uniqueness constraints (maybe?), and pseudocode for the generation of redis commands.