> the Varnish HTTP cache has been used very successfully to speed up WordPress. But Varnish doesn’t help a lot with logged-in traffic
> This is caching that Varnish and other “normal” HTTP caches (including CloudFlare) could not have done
Varnish supports the ESI (Edge-Side Includes) standard, which allows it to cache individual fragments of a page and assemble the full page on the cache server. It also allows you to completely bypass the cache for certain fragments. This is also supported by a number of CDNs (Fastly, Akamai). I've used the ESI technique several times and have been able to achieve a >98% cache hit rate on Fastly for a site with dynamic per-user content. Even the cache misses are only responsible for rendering a small component of the page.
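For context, the page's HTML marks the dynamic fragment with a tag like `<esi:include src="/fragment/user-nav"/>`, and Varnish is told to parse it. A minimal VCL 4.0 sketch (the `Surrogate-Control` convention and the `/fragment/user-nav` URL are illustrative, not from the original comment):

```vcl
vcl 4.0;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_backend_response {
    # Only run the ESI parser on pages the backend says contain ESI tags.
    if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
        set beresp.do_esi = true;
        unset beresp.http.Surrogate-Control;
    }
}

sub vcl_recv {
    # Always bypass the cache for the per-user fragment, so Varnish
    # fetches it fresh each time it assembles the surrounding page.
    if (req.url ~ "^/fragment/user-nav") {
        return (pass);
    }
}
```

The surrounding page can then be cached with a long TTL while only the small fragment hits the backend.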
Good to know. Using Edge-Side Includes may be easier than trying to change the app into a semi-single-page app. But that only solves half of the problem. The other half is varying the response based on the value of a specific cookie.
I've updated the blog post with information regarding Edge-Side Includes.
I couldn't (quickly) find documentation on how to get the value of a specific cookie, but the server could send a user ID in a header or something Varnish can easily access to be used in the above function.
Agreed – it's possible and for simple tasks such as stripping an analytics cookie it's workable but for anything more serious you'd want something like https://github.com/lkarsten/libvmod-cookie
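To illustrate the difference: plain VCL can only pick a cookie value out of the `Cookie` header with `regsub()`, whereas libvmod-cookie gives you an actual parser. A sketch of both approaches (the `user_id` cookie name and `X-User-ID` header are illustrative):

```vcl
vcl 4.0;
import cookie;  # requires libvmod-cookie to be installed

sub vcl_recv {
    # Plain-VCL approach: a regex over the raw Cookie header.
    # Workable for simple cases, but brittle around quoting, ordering,
    # and missing cookies.
    set req.http.X-User-ID = regsub(req.http.Cookie,
        "^(.*;\s*)?user_id=([^;]*).*$", "\2");

    # vmod-cookie approach: parse once, then read values by name.
    cookie.parse(req.http.Cookie);
    set req.http.X-User-ID = cookie.get("user_id");
}
```

Either way, the extracted value ends up in a request header that later subroutines (such as `vcl_hash`) can use.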
I understand that you want to offer something that 'beats' varnish, and it shines through in the article.
But I don't think it matters if your cache is better than every other cache. Rather, as long as you offer a convenient, easily implemented cache, built into the webserver, that's great in itself. We're using Passenger on all our production servers and are most satisfied, because of its ease of use.
Perhaps you could just write "this could be accomplished with Varnish, which has a lot of benefits for advanced cases, but we think our cache will be useful for those that prefer not to manage a separate caching tier."
> I understand that you want to offer something that 'beats' varnish, and it shines through in the article.
I am the author of the article. No, the point is not to "beat" Varnish. It is an article describing various ideas and a call for help. See https://news.ycombinator.com/item?id=8844905
Perhaps the writing style gave a competitive impression, so I've updated the article to mention that we're not out to beat Varnish, but to research the possibilities.
Knowing that Varnish can accomplish some of these things is good, because that way we can draw from an existing pool of experience.
Everything described in the article is covered by Varnish. You can hash on Vary headers, on individual cookie values, on the sum of the digits in the user's IP address if you want. ESI lets you provide partial caching of pages as the article describes - it's actually a separate standard that's existed since 2001 (http://en.wikipedia.org/wiki/Edge_Side_Includes).
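Hashing on an individual cookie value is done by adding it as an input in `vcl_hash`. A sketch, assuming the value was already copied into an `X-User-ID` request header in `vcl_recv` (that header name is illustrative):

```vcl
vcl 4.0;

sub vcl_hash {
    # The default hash inputs: URL and Host.
    hash_data(req.url);
    hash_data(req.http.Host);
    # Extra input: one specific per-user value, so each user gets
    # their own cache entry for this URL.
    if (req.http.X-User-ID) {
        hash_data(req.http.X-User-ID);
    }
    return (lookup);
}
```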
Varnish also gives us things like ACLs for managing access to various resources, on-demand content purges, multiple routable backends with different cache/grace rules, and, more powerfully, request pre-processing. One thing we do is process the request and determine whether the agent is capable of accepting WebP images; if it is, we add that to the hash key, with a corresponding header for the app to key on when deciding whether to serve JPEG or WebP. This lets us serve WebP images to modern agents for faster downloads, while gracefully falling back to JPEG for anything we're not sure of.
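The WebP technique described above could be sketched like this (the `X-Accept-WebP` header name is an assumption; the commenter's actual rules may differ):

```vcl
vcl 4.0;

sub vcl_recv {
    # Normalize WebP capability into a single header, so the cache
    # splits into exactly two variants instead of fragmenting on
    # every distinct Accept value in the wild.
    if (req.http.Accept ~ "image/webp") {
        set req.http.X-Accept-WebP = "true";
    } else {
        set req.http.X-Accept-WebP = "false";
    }
}

sub vcl_hash {
    hash_data(req.url);
    hash_data(req.http.Host);
    # One cached object per image URL per capability: WebP or JPEG.
    hash_data(req.http.X-Accept-WebP);
    return (lookup);
}
```

The app reads `X-Accept-WebP` on cache misses and encodes the response accordingly; Varnish then serves the right variant to everyone else.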
Varnish is way more than a "make WordPress not destroy your server" cache.
Varnish also supports plugins for extreme flexibility. For example, I wrote a plugin for our Varnish install which performs HMAC validation of a specific signed cookie and then sets a header which is used downstream in the caching rules.
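The commenter's plugin is custom, but something in the same spirit can be approximated with libvmod-digest. A hedged sketch, assuming a cookie of the form `auth=<user>.<hex HMAC of user>` (the cookie layout, header names, and key are all illustrative):

```vcl
vcl 4.0;
import digest;  # requires libvmod-digest to be installed

sub vcl_recv {
    # Split the assumed "auth=<user>.<signature>" cookie into its halves.
    set req.http.X-Auth-User = regsub(req.http.Cookie,
        "^(.*;\s*)?auth=([^.;]*)\.[^;]*.*$", "\2");
    set req.http.X-Auth-Sig = regsub(req.http.Cookie,
        "^(.*;\s*)?auth=[^.;]*\.([^;]*).*$", "\2");

    # Recompute the HMAC and record validity in a header that
    # downstream caching rules can key on.
    if (digest.hmac_sha256("secret-key", req.http.X-Auth-User)
            == req.http.X-Auth-Sig) {
        set req.http.X-Authenticated = "true";
    } else {
        set req.http.X-Authenticated = "false";
    }
}
```

A dedicated plugin, as the commenter built, avoids the regex fragility and keeps the key out of the VCL file.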
Varnish is mature, powerful, and fast as hell. It would take a lot of work to reach a point where I'd swap it out for something else.