Update README.rst

parent 2737a4b7
@@ -79,6 +79,15 @@ you *really* need to know:
* If you call ``pesi.activate()``, call it unconditionally and on all
  ESI levels. Read this documentation for details.
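As a minimal sketch of this requirement, assuming the usual ``import``
line and that the processor is attached in ``vcl_deliver`` (details not
shown in this excerpt)::

    vcl 4.1;

    import pesi;

    sub vcl_deliver {
        # Unconditional, on every ESI level: do not guard this call
        # with conditions that may differ between include levels.
        pesi.activate();
    }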
It is possible that your current configuration of system resources,
such as thread pools, workspaces, memory allocation and so forth, will
suffice after this simple change, and will need no further
optimization.
But that is by no means ensured, since pESI uses system resources
differently from standard ESI. Understanding these differences, and how
to monitor and manage resource usage affected by pESI, is a main focus
of the detailed discussion that follows.
DESCRIPTION
===========
@@ -364,12 +373,12 @@ are pre-allocated; they are all taken from the memory pool described
below.
Ideally, ``max_nodes`` matches the number of includes any one ESI
object can have plus the number of fragments before, after and
in between the includes. For all practical purposes, ``max_nodes``
should match twice the number of expected ESI includes. However, if
the number of ESI includes across objects varies substantially, it
might be better to use less memory and set ``max_nodes`` according to
the number of includes of a typical object, so that objects with
more includes use the memory pool.
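To illustrate the rule of thumb above: for objects with a typical count
of 10 includes, ``max_nodes`` would be set to 20. The exact signature of
``pesi.workspace_prealloc()`` is not shown in this excerpt, so the
named-parameter form below is an assumption::

    sub vcl_init {
        # Assumed parameter style; 2 * 10 expected includes per object.
        pesi.workspace_prealloc(max_nodes = 20);
    }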
When ``pesi.workspace_prealloc()`` is called, its configuration becomes
@@ -717,10 +726,6 @@ are:
same thread as for the including request), because no thread was
available from the thread pools.
Considerations about tuning the configuration and interpreting the
statistics are beyond the scope of this manual. For a deeper
discussion, see $EXTERNAL_DOCUMENT.
THREADS
=======
@@ -738,8 +743,8 @@ the entire ESI request anyway and will send data to the client in the
meantime).
With the `thread`_ setting to ``true`` (the default), this is what
happens. But a thread may not be immediately available if the thread
pool is not sufficiently sized for the current load, and thus the
include request may have to be queued.
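For completeness, a sketch of toggling the setting; the interface for
the `thread`_ setting is not shown in this excerpt, so the ``pesi.set()``
call below is hypothetical::

    sub vcl_deliver {
        # Hypothetical setter: process includes serially in the
        # delivery thread instead of dispatching to the thread pools.
        pesi.set(thread, false);
        pesi.activate();
    }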
With the `thread`_ setting to ``false``, include processing happens in
@@ -755,10 +760,10 @@ the following reasons:
* While processing of the include may take an arbitrarily long time
  (for example because it requires a lengthy backend fetch), we know
  that the ESI object is fully available in the stevedore (and usually
  in memory already) when we parse an include, because streaming is
  not supported for ESI. So we know that completing the processing of
  the current ESI object will be quick, while descending into a
  subtree may take a long time.
* Except for ESI level 0, the current thread will become available as
  soon as ESI processing has completed.
@@ -768,7 +773,7 @@ the following reasons:
se.
In short, keeping the `thread`_ setting at the default ``true`` should
be the right option, but the alternative exists just in case.
LIMITATIONS LIMITATIONS
@@ -833,6 +838,14 @@ sponsored by an undisclosed company.
The initial release to the public in 2021 has been supported by
`BoardGameGeek`_.
SUPPORT
=======
For community support, please use `Gitlab Issues`_.
For commercial support, please contact varnish-support@uplex.de
.. _Gitlab Issues: https://gitlab.com/uplex/varnish/libvdp-pesi/-/issues
SEE ALSO
========