uplex-varnish / libvdp-pesi

Commit 05cebd46
authored Aug 28, 2019 by Nils Goroll
doc: move the detailed discussion of set(thread, bool)
parent e4949298
1 changed file with 51 additions and 46 deletions:
src/vdp_pesi.vcc (+51, -46)
...
...
@@ -241,6 +241,8 @@ Example::
}
}
.. _thread:

``thread``
----------
...
...
@@ -257,52 +259,7 @@ Whether we always request a new thread for includes, default is
Request a new thread, potentially waiting for one to become
available.
XXX move the longer discussion to a document dedicated to the subjects
of tuning, efficiency etc
For parallel ESI to work as efficiently as possible, it should
traverse the ESI tree *breadth first*, processing any ESI object
completely, with new threads scheduled for any includes
encountered. Completing processing of an ESI object allows for data
from the subtree (the ESI object and anything below) to be sent to the
client concurrently. As soon as ESI object processing is complete, the
respective thread will be returned to the thread pool and become
available for any other varnish task (except for the request for
esi_level 0, which *has* to wait for completion of the entire ESI
request anyway and will send data to the client in the meantime).
With this setting at ``true`` (the default), this is always what
happens, but a thread may not be immediately available if the thread
pool is not sized sufficiently for the current load, so the include
request may have to be queued.
With this setting at ``false``, include processing falls back to the
current thread, as if ``serial`` mode had been activated, but only
when there is no new thread available. While this may sound like the
more sensible option at first, we did not make this the default for
the following reasons:
* Before completion of the ESI processing, the subtree below it is not
yet available for delivery to the client because additional VDPs
behind pesi cannot be called from a different thread.
* While processing of the include may take an arbitrarily long time
(for example because it requires a lengthy backend fetch), we know
that the ESI object is fully available in the stevedore (and usually
in memory already) when we parse an include because streaming is not
supported for ESI. So we know that completing the processing of the
current ESI object will be quick, while descending into a subtree
may take a long time.
* Except for ESI level 0, the current thread will become available as
soon as ESI processing has completed.
* The thread herder may breed new threads and other threads may
terminate, so queuing a thread momentarily is not a bad thing per
se.
In short, keeping ``thread`` at the default ``true`` should be the
right option; the alternative exists just in case.
See `THREADS`_ for a detailed discussion.
$Function VOID workspace_prealloc(BYTES min_free=4096, INT max_nodes=32)
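A minimal sketch of tuning this preallocation at VCL load time; the
parameter names and defaults come from the declaration above, while
the ``vcl_init`` placement and the ``pesi`` import name are
assumptions for illustration::

    vcl 4.1;

    import pesi;

    sub vcl_init {
        # Raise the headroom above the defaults (min_free=4096,
        # max_nodes=32) for responses with many includes.
        pesi.workspace_prealloc(min_free = 8k, max_nodes = 128);
    }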
...
...
@@ -686,6 +643,54 @@ Considerations about tuning the configuration and interpreting the
statistics are beyond the scope of this manual. For a deeper
discussion, see $EXTERNAL_DOCUMENT.
THREADS
=======
For parallel ESI to work as efficiently as possible, it should
traverse the ESI tree *breadth first*, processing any ESI object
completely, with new threads scheduled for any includes
encountered. Completing processing of an ESI object allows for data
from the subtree (the ESI object and anything below) to be sent to the
client concurrently. As soon as ESI object processing is complete, the
respective thread will be returned to the thread pool and become
available for any other varnish task (except for the request for
esi_level 0, which *has* to wait for completion of the entire ESI
request anyway and will send data to the client in the meantime).
With the `thread`_ setting at ``true`` (the default), this is what
happens, but a thread may not be immediately available if the thread
pool is not sized sufficiently for the current load, so the include
request may have to be queued.
With the `thread`_ setting at ``false``, include processing falls
back to the current thread, as if ``serial`` mode had been activated,
but only when no new thread is immediately available. While this may
sound like the more sensible option at first, we did not make this the
default for the following reasons:
* Before completion of ESI processing, the subtree below it is not yet
available for delivery to the client because additional VDPs behind
pesi cannot be called from a different thread.
* While processing of the include may take an arbitrarily long time
(for example because it requires a lengthy backend fetch), we know
that the ESI object is fully available in the stevedore (and usually
in memory already) when we parse an include because streaming is not
supported for ESI. So we know that completing the processing of the
current ESI object will be quick, while descending into a subtree
may take a long time.
* Except for ESI level 0, the current thread will become available as
soon as ESI processing has completed.
* The thread herder may breed new threads and other threads may
terminate, so queuing a thread momentarily is not a bad thing per
se.
In short, keeping the `thread`_ setting at the default ``true``
should be the right option; the alternative exists just in case.
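The discussion above can be condensed into a small VCL sketch. The
``set(thread, ...)`` call follows the setting documented in this
manual; ``pesi.activate()`` and its ``vcl_deliver`` placement are
assumptions for illustration::

    vcl 4.1;

    import pesi;

    sub vcl_deliver {
        # Default behaviour, shown explicitly for illustration only:
        # always request a new thread per include, queueing the
        # include if the pool is momentarily exhausted.
        pesi.set(thread, true);

        # Alternative (not the default): fall back to serial
        # processing in the current thread when no new thread is
        # immediately available.
        # pesi.set(thread, false);

        pesi.activate();
    }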
LIMITATIONS
===========
...
...