=========
vmod_pesi
=========

----------------------------------------------------
Varnish Delivery Processor for parallel ESI includes
----------------------------------------------------

See branches for support of older versions.

.. _Varnish-Cache: https://varnish-cache.org/

This project provides parallel ESI processing for `Varnish-Cache`_ as
a module (VMOD).
INTRODUCTION
============

.. _Standard ESI processing: https://varnish-cache.org/docs/trunk/users-guide/esi.html

`Standard ESI processing`_ in `Varnish-Cache`_ is sequential. In
short, it works like this:
1. Process the (sub)request.

2. For a cache-miss or pass, fetch the requested object and parse it
   on the backend side, if ESI parsing is enabled. Store the object in
   a parsed, pre-segmented form.

3. Back on the client side, process the parsed, pre-segmented ESI
   object. For each include, create a sub-request and start over with
   it at step 1.
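As a rough sketch of these three steps (plain Python with made-up
names, not the actual C implementation of the VDP), serial delivery is
a simple recursion:

```python
from dataclasses import dataclass

@dataclass
class Include:
    """An <esi:include> segment pointing at another object."""
    url: str

def deliver(url, cache, fetch, out):
    # Step 1: process the (sub)request.
    if url not in cache:
        # Step 2: on a miss, fetch and store the object in parsed,
        # pre-segmented form (here: a list of segments).
        cache[url] = fetch(url)
    # Step 3: deliver the segments; every include becomes a
    # sub-request that starts over at step 1.
    for segment in cache[url]:
        if isinstance(segment, Include):
            deliver(segment.url, cache, fetch, out)
        else:
            out.append(segment)
```

With a toy ``fetch`` that returns the segment lists, the response body
is assembled strictly in document order, one include at a time.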
Simply put, this process is very efficient if step 2 can be skipped
because the requested object is already in cache. Conversely, if the
included objects have to be fetched, the total time it takes to
generate an ESI response is roughly the sum of all fetch times.
This is where parallel ESI processing can help: In step 3, all the
sub-requests for any particular object are run in parallel, such that
the total time it takes to generate an ESI response at a particular
include level is reduced to the longest of the fetch times.

"At a particular include level" is important, because the optimization
only helps if there are many includes at a particular level: For
example, if object A includes object B, which includes object C, and
no object is cacheable, they still need to be fetched in order: The
request for B can only be started once A is available, and likewise
the request for C once B is available.
To summarize:

* Parallel ESI can *substantially* improve the response times for ESI
  if cacheable objects include many uncacheable objects. The maximum
  benefit, compared with standard, serial processing, is achieved in
  cases where all inner nodes of an ESI tree are cacheable and at
  least some leaves are not.

* If basically all objects are cacheable, parallel ESI only provides a
  relevant benefit on an empty cache, or if cache TTLs are low, such
  that cache misses are likely.
Example
-------

Consider this ESI tree, where an object A includes B1 and B2, which,
in turn, include C1 to C3 and C4 to C6, respectively::

             A
          __/ \__
         /       \
        B1       B2
      / | \     / | \
     C1 C2 C3  C4 C5 C6
Let's assume that A, B1 and B2 are cacheable and already in cache, and
that all C objects are uncacheable (passes). Let's also assume that
each C object takes its number times 100ms to fetch from the backend -
that is, C1 takes 100ms, C2 200ms, and so on.

With `Standard ESI processing`_, the total response time will be
roughly 100ms + 200ms + ... + 600ms = 2100ms = 2.1s. If the response
is a web page, the top fragment will load relatively fast, the next
one half as fast, the third again 100ms slower, and so on.

With parallel ESI, the total response time will be roughly 600ms =
0.6s. There will still be a delay between successive fragments of the
page, but it will be only 100ms per fragment.
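The arithmetic of this example can be checked with a toy model (plain
Python; the tree and the fetch times are the ones assumed above, and
cache hits count as 0ms - nothing here uses the VMOD itself):

```python
# Fetch times for the uncacheable leaves; A, B1 and B2 are cache hits.
fetch_ms = {"C1": 100, "C2": 200, "C3": 300,
            "C4": 400, "C5": 500, "C6": 600}

tree = {"A": ["B1", "B2"],
        "B1": ["C1", "C2", "C3"],
        "B2": ["C4", "C5", "C6"]}

def serial(node):
    # Includes are processed one after the other: fetch times add up.
    return fetch_ms.get(node, 0) + sum(serial(c) for c in tree.get(node, []))

def parallel(node):
    # Includes at the same level run concurrently: only the slowest
    # sibling contributes to the total at that level.
    children = tree.get(node, [])
    longest = max((parallel(c) for c in children), default=0)
    return fetch_ms.get(node, 0) + longest

print(serial("A"))    # 2100 (ms), the sum of all fetch times
print(parallel("A"))  # 600 (ms), the longest single fetch
```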
DESCRIPTION
===========

.. _standard ESI processing: https://varnish-cache.org/docs/trunk/users-guide/esi.html

VDP pesi is a Varnish Delivery Processor for parallel Edge Side
Includes (ESI). The VDP implements content composition in client
responses as specified by ``<esi>`` directives in the response body,
just as Varnish does with its `standard ESI processing`_. While
standard Varnish processes ESI subrequests serially, in the order in
which the ``<esi>`` directives appear in the response, the pesi VDP
executes the subrequests in parallel. This can lead to a significant
reduction in latency for the complete response, if Varnish has to wait
for backend fetches for more than one of the included requests.

Backend applications that use ESI includes for standard Varnish can be
expected to work without changes with the VDP, provided that they do
not depend on assumptions about the serialization of ESI subrequests.
Serial ESI requests are processed in a predictable order, one after
the other, but the pesi VDP executes them at roughly the same time. A
backend may conceivably receive a request forwarded for the second
include in a response before the first one. If the logic of ESI
composition in a standard Varnish deployment does not depend on the
serial order, then it will work the same way with VDP pesi.
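This point about ordering can be illustrated with plain Python threads
(a generic concurrency sketch, unrelated to the VMOD's internals): two
simulated include fetches are started in document order, but the
faster second one completes first:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def backend_fetch(include_no, delay_s):
    # Simulated backend latency for one include.
    time.sleep(delay_s)
    return include_no

with ThreadPoolExecutor(max_workers=2) as pool:
    first = pool.submit(backend_fetch, 1, 0.05)   # first include, slow
    second = pool.submit(backend_fetch, 2, 0.01)  # second include, fast
    completion_order = [f.result() for f in as_completed([first, second])]

print(completion_order)  # [2, 1]: the second include finishes first
```

A backend that, say, relies on a session being created by the first
include before the second one arrives would break under such
reordering.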
Parallel ESI processing is enabled by invoking |pesi.activate()|_ in
``vcl_deliver {}``::

  import pesi;

  sub vcl_backend_response {
      set beresp.do_esi = true;
  }

  sub vcl_deliver {
      pesi.activate();
  }
Other functions provided by the VDP serve to set configuration
parameters (or return the VDP version string). If your deployment uses
the default configuration, then |pesi.activate()|_ in ``vcl_deliver``
may be the only modification to VCL that you need.

The invocation of |pesi.activate()|_ can of course be subject to
logic in VCL::

  sub vcl_deliver {
      # Use parallel ESI only if the request header X-PESI is present.
      if (req.http.X-PESI) {
          pesi.activate();
      }
  }
INSTALLATION
============

See `INSTALL.rst <INSTALL.rst>`_ in the source repository.
SEE ALSO
========

.. _Content composition with Edge Side Includes: https://varnish-cache.org/docs/trunk/users-guide/esi.html

* `varnishd(1)`_
* `vcl(7)`_
* `varnishstat(1)`_
* `varnish-counters(7)`_
* `varnishadm(1)`_
* `Content composition with Edge Side Includes`_ in the `Varnish User's Guide`_
CONTRIBUTING
============

See `CONTRIBUTING.rst <CONTRIBUTING.rst>`_ in the source repository.
ACKNOWLEDGEMENTS
================

.. _Otto GmbH & Co KG: https://www.otto.de/

Most of the development work on this VMOD in 2019 and 2020 was
sponsored by `Otto GmbH & Co KG`_.

.. _BoardGameGeek: https://boardgamegeek.com/

The initial release to the public in 2021 was supported by
`BoardGameGeek`_.
SUPPORT
=======

.. _gitlab.com issues: https://gitlab.com/uplex/varnish/libvdp-pesi/-/issues

To report bugs, use `gitlab.com issues`_.

For enquiries about professional service and support, please contact
info@uplex.de\ .