Update README.rst

parent 2737a4b7
@@ -79,6 +79,15 @@ you *really* need to know:
* If you call ``pesi.activate()``, call it unconditionally and on all
ESI levels. Read this documentation for details.
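A minimal VCL sketch of this rule (the ``vcl_deliver`` placement
follows this vmod's documented usage; adjust to your setup)::

    vcl 4.1;
    import pesi;

    sub vcl_deliver {
        # Unconditional: activate pESI for every response, on every
        # ESI level, whether or not the object contains includes.
        pesi.activate();
    }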
It is possible that your current configuration of system resources,
such as thread pools, workspaces, memory allocation and so forth, will
suffice after this simple change, and will need no further
optimization.
But that is by no means ensured, since pESI uses system resources
differently from standard ESI. Understanding these differences, and how
to monitor and manage resource usage affected by pESI, is a main focus
of the detailed discussion that follows.
DESCRIPTION
===========
@@ -364,12 +373,12 @@ are pre-allocated; they are all taken from the memory pool described
below.
Ideally, ``max_nodes`` matches the number of includes any one ESI
object can have plus the number of fragments before, after and
in between the includes. For all practical purposes, ``max_nodes``
should match twice the number of expected ESI includes. However, if
the number of ESI includes across objects varies substantially, it
might be better to use less memory and set ``max_nodes`` according to
the number of includes of a typical object, so that objects with
more includes use the memory pool.
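As an illustration, suppose a typical object has 16 includes; the rule
of thumb above gives a ``max_nodes`` of 32. A hypothetical ``vcl_init``
call (only the ``max_nodes`` name is taken from this text; check
vmod_pesi(3) for the actual signature of
``pesi.workspace_prealloc()``)::

    sub vcl_init {
        # 2 * 16 expected includes = 32 nodes (includes plus the
        # fragments before, after and in between them).
        # Hypothetical parameter list -- verify against vmod_pesi(3).
        pesi.workspace_prealloc(max_nodes = 32);
    }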
When ``pesi.workspace_prealloc()`` is called, its configuration becomes
@@ -717,10 +726,6 @@ are:
same thread as for the including request), because no thread was
available from the thread pools.
THREADS
=======
@@ -738,8 +743,8 @@ the entire ESI request anyway and will send data to the client in the
meantime).
With the `thread`_ setting to ``true`` (the default), this is what
happens. But a thread may not be immediately available if the thread
pool is not sufficiently sized for the current load, and thus the
include request may have to be queued.
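If include requests do get queued, the usual remedy is to size the
worker thread pools with varnishd's stock parameters (these are core
Varnish parameters, not part of this vmod; the values below are
placeholders, not recommendations)::

    # Inspect the current pool sizing
    varnishadm param.show thread_pool_min
    varnishadm param.show thread_pool_max

    # Allow more worker threads per pool, so that include
    # requests find a free thread instead of queueing
    varnishadm param.set thread_pool_min 200
    varnishadm param.set thread_pool_max 2000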
With the `thread`_ setting to ``false``, include processing happens in
@@ -755,10 +760,10 @@ the following reasons:
* While processing of the include may take an arbitrarily long time
(for example because it requires a lengthy backend fetch), we know
that the ESI object is fully available in the stevedore (and usually
in memory already) when we parse an include, because streaming is
not supported for ESI. So we know that completing the processing of
the current ESI object will be quick, while descending into a
subtree may take a long time.
* Except for ESI level 0, the current thread will become available as
soon as ESI processing has completed.
@@ -768,7 +773,7 @@ the following reasons:
se.
In short, keeping the `thread`_ setting at the default ``true`` should
be the right option, but the alternative exists just in case.
LIMITATIONS
===========
@@ -833,6 +838,14 @@ sponsored by an undisclosed company.
The initial release to the public in 2021 has been supported by
`BoardGameGeek`_.
SUPPORT
=======
For community support, please use `Gitlab Issues`_.
For commercial support, please contact varnish-support@uplex.de.
.. _Gitlab Issues: https://gitlab.com/uplex/varnish/libvdp-pesi/-/issues
SEE ALSO
========