- 28 Feb, 2024 1 commit

  Nils Goroll authored
  Due to a relatively recent change in varnish-cache, VDP_Close() in the parallel ESI thread can race VDP_Deliver() in the delivery thread, resulting in a panic. See #19 for details.

- 13 Feb, 2024 1 commit

  Nils Goroll authored
  With this change, pesi.set(onerror_continue, true); can be used to restore the behavior from before varnish-cache 7.3. See the pesi.set() documentation for details. The test case is a modified version of the one posted by Geoff Simmons in https://github.com/varnishcache/varnish-cache/issues/4053#issuecomment-1936000064

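As an illustrative VCL fragment only (it assumes vmod_pesi is enabled via pesi.activate() in vcl_deliver as in the vmod's examples; consult the pesi.set() documentation for authoritative usage), the pre-7.3 behavior could be restored along these lines:

```vcl
vcl 4.1;

import pesi;

sub vcl_deliver {
    # assumption: pesi.activate() enables the parallel ESI VDP
    pesi.activate();
    # restore pre-varnish-cache-7.3 behavior: continue delivering
    # after an ESI include fails
    pesi.set(onerror_continue, true);
}
```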
- 31 Jan, 2024 2 commits

  Nils Goroll authored

  Nils Goroll authored

- 02 Jan, 2024 2 commits

  Nils Goroll authored

  Nils Goroll authored
  This also fixes a build issue introduced by varnish-cache commit 13cf51e70c00b912ce39110d7eff50ccc01b7bb9.

- 01 Aug, 2023 1 commit

  Nils Goroll authored

- 12 Jul, 2023 1 commit

- 08 Jul, 2023 1 commit

  Nils Goroll authored
  Ref #15

- 26 Jun, 2023 1 commit

  Nils Goroll authored
  and check that the setting is actually applied (it is).

- 21 Jun, 2023 1 commit

  Daniel Karp authored

- 12 Jun, 2023 14 commits

  Nils Goroll authored

  Nils Goroll authored
  Now that only the top thread delivers, we can remove the support for multiple unpending and delivering threads.

  Nils Goroll authored
  This concludes the fix for !11; some bits were already implemented in previous commits. See https://github.com/varnishcache/varnish-cache/issues/3938 for why this took so long. Also implemented a fix for https://github.com/varnishcache/varnish-cache/issues/3939 (which is not yet in varnish-cache).

  Nils Goroll authored

  Nils Goroll authored

  Nils Goroll authored

  Nils Goroll authored

  Nils Goroll authored
  In order to properly handle vped_include() failing, we need a way to insert a node only when it is ready and to free it when it is not. Thus, we link to the parent in node_new(), but do not insert the child into the parent's list; as before, that happens in node_insert(). Along the way, we also add node_free() to free a node without finalizing it. Partly fixes !11.

  Nils Goroll authored

  Nils Goroll authored

  Nils Goroll authored

  Nils Goroll authored

  Nils Goroll authored

  Nils Goroll authored

- 11 Jun, 2023 12 commits

  Nils Goroll authored
  Related prep-work for !6

  Nils Goroll authored

  Nils Goroll authored

  Nils Goroll authored
  Found by flexelint.

  Nils Goroll authored
  Now that we have gotten rid of front unpending, we can also remove all the complications due to the ST_OPEN state, beginning with the removal of the state itself.

  Nils Goroll authored

  Nils Goroll authored
  It was effectively removed in 913c4653, but the dead code was left in place until now.

  Nils Goroll authored
  Found by flexelint.

  Nils Goroll authored

  Nils Goroll authored

  Nils Goroll authored
  Found by flexelint.

  Nils Goroll authored
  Ever since the first release of vmod_pesi, we knew that this feature was probably not useful: as explained in the THREADS section of the vcc/man page, we cannot push to VDPs, so the only case in which this could work is when there are no VDPs at all. The only case in which pESI itself does not need any VDP is non-ESI, non-gzip, uncacheable streaming. Moreover, the only case in which this made a significant difference compared to pushing from the level 0 (front) thread was when no threads were available and the front thread ran the current include. Since then, we have never encountered a situation in which this feature was needed.

- 09 Jun, 2023 2 commits

  Nils Goroll authored
  Now we can also keep references to private leaf objects. For ESI objects, which we also need to inspect when building the delivery tree, we still need to make copies in pesi_buf_bytes() because, if the final flag to ObjIterate() is set, the storage engine can (and usually will) free segments "behind" delivery.

  Nils Goroll authored

- 10 May, 2023 1 commit

  Nils Goroll authored
  vmod_pesi works by saving the resulting data of a sub-request to a tree structure, which is delivered to the client in the top request's thread once it is ready. For cacheable objects which do not require ESI processing, we simply keep the original request together with an additional reference to the object, so we essentially hand delivery from one worker to another. subreq_fixup() is responsible for converting the saved request into a state as if it had been handled by the thread handling the top-level request, so one of the changes it applies is to point the wrk pointer at the worker of the top-level request. Yet that change was incomplete: we missed an additional pointer in struct vdp_ctx. This should hopefully fix #14.