## Caching In Web Apps

From: andrew cooke <andrew@...>

Date: Tue, 2 Oct 2012 07:51:17 -0300

Making a note here as this is too long to read right now, but looks
interesting.

http://martin.kleppmann.com/2012/10/01/rethinking-caching-in-web-apps.html

Also http://news.ycombinator.com/item?id=4599685

Andrew

### Dynamic vs static generation

From: Daniel Yokomizo <daniel.yokomizo@...>

Date: Tue, 2 Oct 2012 13:02:30 -0300

Hi Andrew,

IME, working with mostly-read scenarios (e.g. blogs) it's better to
serve everything from a static webserver and generate the html on
updates. So every time you edit a post, add a comment, etc., some
piece of code (perhaps asynchronously) regenerates the associated
html. This doesn't work for everything (e.g. you can't precompute
every possible search results page), but you may do a mixed scenario
(e.g. precompute the item elements in the results page and do the
search using lucene over them, just assembling the pieces and writing
them directly to the response stream). This avoids the entire cache
invalidation/expiration issue, but you need to write some tricky code
to atomically replace the files in the filesystem (or accept some read
inconsistencies). There are many interesting choices here; some add a
little logic to Apache, but overall the system becomes way faster.
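A minimal sketch of the atomic-replace approach mentioned above, assuming POSIX rename semantics (the function name and layout are illustrative, not from any actual system): write the new HTML to a temporary file in the same directory, then rename it over the old file, so a concurrent reader sees either the old page or the new one, never a partial file.

```python
import os
import tempfile

def publish(path, html):
    """Atomically replace the file at `path` with new `html`.

    Writing to a temp file in the same directory and then renaming
    means a concurrent reader sees either the old page or the new
    one, never a half-written file (rename is atomic on POSIX
    filesystems when source and destination share a volume).
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(html)
        os.replace(tmp, path)  # atomic rename over the old file
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```

Replacing a whole directory tree atomically (e.g. after a full site regeneration) is trickier; one common trick is to build the new tree alongside the old one and swap a symlink.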

Daniel Yokomizo.

### Re: Dynamic vs static generation

From: andrew cooke <andrew@...>

Date: Wed, 3 Oct 2012 18:46:00 -0300

Yes, I agree; that's how this blog works (email comes in and is used to
generate HTML via some simple Python; that's then pushed out to the web
server).  It's very efficient, but a pig to edit - hence all the typos that
never get fixed :o)

But I think the post I linked to is aimed at more dynamic processes, where
that's simply not possible.

I'm actually working on a similar problem at the moment.  A client has a
service that is rather slow.  So they want a facade that does three things:

- It works as a cache, so that distributed clients, requesting the same
data, don't hit the service multiple times.

- It provides snapshotting / versioning / generational data so that a
client, as it makes several requests in a "transaction", sees consistent
data.

- It coalesces multiple "thin" calls into a single "fat" call to the API.

To do this I am using memcache with generational keys and two new components.

One component is a client-side library that reads from memcache where
possible.  If the data are not in memcache then the library calls the
underlying service.  The other component is a "primer".  Each time the service
starts a new generation the primer pre-loads memcache, before signalling the
client libraries to switch to a new prefix.
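The generational-key scheme can be sketched roughly like this (the class, the dict standing in for memcache, and the service callable are all illustrative assumptions, not the actual client code):

```python
class CachingClient:
    """Read-through client for a slow service, using generational keys.

    Keys are prefixed with a generation number; the primer pre-loads
    the cache under the next generation's prefix before clients switch
    to it, so a client that pins one generation for a "transaction"
    sees a consistent snapshot across several requests.
    """

    def __init__(self, cache, service):
        self.cache = cache      # dict stand-in for a memcached client
        self.service = service  # stand-in for the slow backend call

    def get(self, generation, key):
        full_key = "%d:%s" % (generation, key)
        value = self.cache.get(full_key)
        if value is None:
            # Not primed: fall back to the slow service, then cache it
            # so other distributed clients don't repeat the call.
            value = self.service(key)
            self.cache[full_key] = value
        return value
```

A usage sketch: two reads of the same key in one generation cost a single service call, while a new generation naturally starts cold (or pre-warmed by the primer) without any explicit invalidation.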

In terms of that work, the post I linked to is making a "programmable primer".
Which is very cool, but outside the scope of my work (in my case I have the
luxury of being able to call the service for unpredicted requests; the primer
only has to cover the common cases).

[Reading the above, I haven't explained how coalescing works - the primer
pushes "fat" data; the client libraries load "fat" data and contain the
logic to serve "thin" requests from that.]

Andrew