Commit 05cebd46 authored by Nils Goroll

doc: move the detailed discussion of set(thread, bool)

parent e4949298
@@ -241,6 +241,8 @@ Example::
}
}
.. _thread:
``thread``
----------
@@ -257,53 +259,8 @@ Whether we always request a new thread for includes, default is
Request a new thread, potentially waiting for one to become
available.
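As a sketch of how the parameter could be used from VCL: the ``pesi.activate()`` and ``pesi.set()`` calls below are assumptions about this vmod's interface (the commit message only confirms a ``set(thread, bool)`` signature), so treat the exact spelling as illustrative, not authoritative::

    vcl 4.1;

    import pesi;

    sub vcl_deliver {
        # Assumption: enable parallel ESI delivery for this request.
        pesi.activate();

        # Assumption: prefer the serial fallback over queuing when no
        # worker thread is immediately available.
        pesi.set(thread, false);
    }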
XXX move the longer discussion to a document dedicated to the subjects
of tuning, efficiency etc
For parallel ESI to work as efficiently as possible, it should
traverse the ESI tree *breadth first*, processing any ESI object
completely, with new threads scheduled for any includes
encountered. Completing processing of an ESI object allows for data
from the subtree (the ESI object and anything below) to be sent to the
client concurrently. As soon as ESI object processing is complete, the
respective thread will be returned to the thread pool and become
available for any other varnish task (except for the request for
esi_level 0, which *has* to wait for completion of the entire ESI
request anyway and will send data to the client in the meantime).
With this set to ``true`` (the default), this is always what
happens, but a thread may not be immediately available if the thread
pool is not sized sufficiently for the current load, and the include
request may thus have to be queued.
With this set to ``false``, include processing happens in the same
thread, as if ``serial`` mode had been activated, but only when
there is no new thread available. While this may sound like the
more sensible option at first, we did not make it the default for
the following reasons:
See the detailed discussion in `THREADS`_.
* Before completion of the ESI processing, the subtree below it is not
yet available for delivery to the client because additional VDPs
behind pesi cannot be called from a different thread.
* While processing of the include may take an arbitrarily long time
(for example because it requires a lengthy backend fetch), we know
that the ESI object is fully available in the stevedore (and usually
in memory already) when we parse an include because streaming is not
supported for ESI. So we know that completing the processing of the
current ESI object will be quick, while descending into a subtree
may take a long time.
* Except for ESI level 0, the current thread will become available as
soon as ESI processing has completed.
* The thread herder may breed new threads and other threads may
terminate, so queuing a thread momentarily is not a bad thing per
se.
In short, keeping ``thread`` at the default ``true`` should be the
right option; the alternative exists just in case.
$Function VOID workspace_prealloc(BYTES min_free=4096, INT max_nodes=32)
Configure workspace pre-allocation of objects in variable-sized
@@ -686,6 +643,54 @@ Considerations about tuning the configuration and interpreting the
statistics are beyond the scope of this manual. For a deeper
discussion, see $EXTERNAL_DOCUMENT.
THREADS
=======
For parallel ESI to work as efficiently as possible, it should
traverse the ESI tree *breadth first*, processing any ESI object
completely, with new threads scheduled for any includes
encountered. Completing processing of an ESI object allows for data
from the subtree (the ESI object and anything below) to be sent to the
client concurrently. As soon as ESI object processing is complete, the
respective thread will be returned to the thread pool and become
available for any other varnish task (except for the request for
esi_level 0, which *has* to wait for completion of the entire ESI
request anyway and will send data to the client in the meantime).
With the `thread`_ setting at ``true`` (the default), this is what
happens, but a thread may not be immediately available if the thread
pool is not sized sufficiently for the current load, and the include
request may thus have to be queued.
With the `thread`_ setting at ``false``, include processing happens
in the same thread, as if ``serial`` mode had been activated,
whenever there is no new thread immediately available. While this
may sound like the more sensible option at first, we did not make it
the default for the following reasons:
* Before completion of ESI processing, the subtree below it is not yet
available for delivery to the client because additional VDPs behind
pesi cannot be called from a different thread.
* While processing of the include may take an arbitrarily long time
(for example because it requires a lengthy backend fetch), we know
that the ESI object is fully available in the stevedore (and usually
in memory already) when we parse an include because streaming is not
supported for ESI. So we know that completing the processing of the
current ESI object will be quick, while descending into a subtree
may take a long time.
* Except for ESI level 0, the current thread will become available as
soon as ESI processing has completed.
* The thread herder may breed new threads and other threads may
terminate, so queuing a thread momentarily is not a bad thing per
se.
In short, keeping the `thread`_ setting at the default ``true``
should be the right option; the alternative exists just in case.
LIMITATIONS
===========