- 30 Jul, 2019 5 commits
-
-
Nils Goroll authored
In the varnish-cache variant of the code this works because VDP_Push is not used; instead, bytes are pushed internally to the parent.
-
Nils Goroll authored
When subreqs still used the topreq's VDPs, we could not do this, but now that we properly layer the VDPs again, we can (and should)
-
Nils Goroll authored
-
Nils Goroll authored
waiting for the boc happened at the wrong place:

    *** v1 debug|Assert error in vped_gzgz_init(), foreign/cache_esi_deliver.c line 163:
    *** v1 debug|  Condition(foo->start > 0 && foo->start < foo->olen * 8) not true.
-
Nils Goroll authored
... because of T_FINAL. Since the parent's thread (except for the root) will terminate eventually, this is all good, except for the case where we spend all available threads on T_FINAL. Leave the hard problems for later, *sigh*
-
- 29 Jul, 2019 23 commits
-
-
Nils Goroll authored
For esi_level = 0 we still need to close the root in the vdp, but for deeper levels we want to make sure that the vdps are set up correctly before anything happens below our node.
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
the producer should have the T_FINAL lock before making the node available by releasing the tree lock
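As a minimal sketch of the intended lock ordering (the names tree->mtx and node->final_mtx are hypothetical, not the actual pesi structures):

    #include <pthread.h>

    struct tree {
            pthread_mutex_t mtx;            /* protects the node tree */
    };

    struct node {
            pthread_mutex_t final_mtx;      /* the "T_FINAL" lock */
    };

    /* caller holds tree->mtx and has just linked node into the tree */
    static void
    producer_publish(struct tree *tree, struct node *node)
    {
            /* take the T_FINAL lock first ... */
            pthread_mutex_lock(&node->final_mtx);
            /*
             * ... and only then release the tree lock: a consumer that
             * finds the node can no longer win a race for T_FINAL.
             */
            pthread_mutex_unlock(&tree->mtx);
    }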
-
Nils Goroll authored
Those VSL(SLT_Debug, 0, ...) calls trigger false positives while testing.
-
Nils Goroll authored
or at least I fail to see how it should with just one req->vdc
-
Nils Goroll authored
... what I could have meant by topa-rent
-
Nils Goroll authored
This restores src/foreign/cache_esi_deliver.c to basically the code from varnish-cache and avoids any unlayered pushes / crc.
-
Geoff Simmons authored
-
Geoff Simmons authored
Never mind that this currently never happens anyway.
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Nils Goroll authored
For nexus nodes (= ESI subrequests), we fini the pesi vdp (and, conceptually, the buf also, but that still requires work on the details) when we are done with parsing, such that any other vdps stay in place. Also, we push a vdp which pushes bytes up one parent, like ved_ved in varnish-cache. So we should now have a VDP chain for each subreq basically matching that of varnish-cache.
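A minimal sketch of such a parent-push vdp, modeled on ved_ved/ved_vdp_bytes in varnish-cache; the signatures follow the 2019-era VDP API, and the vped_to_parent names are hypothetical:

    static int v_matchproto_(vdp_bytes_f)
    vped_to_parent_bytes(struct req *req, enum vdp_action act, void **priv,
        const void *ptr, ssize_t len)
    {
            struct req *preq;

            CHECK_OBJ_NOTNULL(req, REQ_MAGIC);
            CAST_OBJ_NOTNULL(preq, *priv, REQ_MAGIC);   /* the parent req */

            /*
             * hand the bytes one level up; the parent's own VDP chain
             * (gzgz/pretendgzip/...) takes it from there
             */
            return (VDP_bytes(preq, act, ptr, len));
    }

    static const struct vdp vped_to_parent = {
            .name  = "to_parent",
            .bytes = vped_to_parent_bytes,
    };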
-
Geoff Simmons authored
-
Nils Goroll authored
I always got confused about this myself, so the name was probably bad
-
Nils Goroll authored
gzip status needs to be per esi include (= per nexus): if the esi object is gzipped and is used in an ungzip context, the ungzip vdp gets pushed by varnish-cache, yet all content below still needs to be (pretend-)gzipped for that ungzip to work.
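A hypothetical sketch of such per-nexus state, loosely mirroring the fields varnish-cache keeps in its struct ecx (isgzip, crc, l_crc):

    #include <stdint.h>
    #include <sys/types.h>

    /* kept in the nexus (one per ESI include), not in the (top)req */
    struct nexus_gzip {
            int             is_gzip;        /* content below must look gzipped */
            uint32_t        crc;            /* running CRC32 of the uncompressed data */
            ssize_t         l_crc;          /* number of bytes that went into crc */
    };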
-
Nils Goroll authored
-
Geoff Simmons authored
-
Nils Goroll authored
-
Nils Goroll authored
Geoff's load tests have shown that my previous idea of trying to fool varnish-cache code by temporarily changing the objcore flags was bad and leads to all kinds of nasty races. So we add yet another method which not only keeps the subreq intact, but also halts its thread until delivery is ready. This requires another thread to exist for longer; hopefully it will get us towards the goals described in d9c36c7e.
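A minimal sketch of the halt/resume handshake, with hypothetical names (pesi_sub, deliver_ok); the real code would use varnish locks rather than raw pthreads:

    #include <pthread.h>

    struct pesi_sub {
            pthread_mutex_t mtx;
            pthread_cond_t  cond;
            int             deliver_ok;
    };

    /* subreq side: park the thread until the topreq signals */
    static void
    subreq_wait(struct pesi_sub *ps)
    {
            pthread_mutex_lock(&ps->mtx);
            while (!ps->deliver_ok)
                    pthread_cond_wait(&ps->cond, &ps->mtx);
            pthread_mutex_unlock(&ps->mtx);
    }

    /* topreq side: wake the parked subreq when delivery is due */
    static void
    subreq_release(struct pesi_sub *ps)
    {
            pthread_mutex_lock(&ps->mtx);
            ps->deliver_ok = 1;
            pthread_cond_signal(&ps->cond);
            pthread_mutex_unlock(&ps->mtx);
    }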
-
- 28 Jul, 2019 9 commits
-
-
Nils Goroll authored
We called pesi_finish() after notifying the topreq, so they could both be running pesi_finish(), breaking our has_task assertion.
-
Nils Goroll authored
Firstly, this is a cleanup. The relevant fix is that we raced a backend fetch's objcore->flags modification, which was simply wrong. Fiddling with the flags under the oh->mtx should solve this race.
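The shape of the fix, as a sketch: the oc_flag_clr helper is hypothetical, while Lck_Lock/Lck_Unlock and the objhead mutex are varnish-cache internals:

    /* never modify objcore flags without holding the objhead mutex */
    static void
    oc_flag_clr(struct objcore *oc, unsigned flag)
    {
            Lck_Lock(&oc->objhead->mtx);
            oc->flags &= ~flag;
            Lck_Unlock(&oc->objhead->mtx);
    }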
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
and add back assertions checking that we hopefully do not mess it up
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Geoff Simmons authored
unsigned is always >= 0.
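For illustration (made-up names): the dropped comparison is a tautology, which compilers flag:

    #include <assert.h>

    static void
    take(unsigned n)
    {
            assert(n >= 0);         /* always true: an unsigned value
                                     * cannot be negative, hence dropped */
            (void)n;
    }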
-
- 27 Jul, 2019 3 commits
-
-
Nils Goroll authored
Actually, toosmall should happen never or exactly once: if our test allocation from mpl_init() is still in the pool when we MPL_Get() the first node, it will be too small.
-
Nils Goroll authored
much else needs to be changed, but this commit still works with the previous concept, so it might be helpful...

Before this commit, our concept basically was:

- start esi include requests on separate threads as quickly as possible
- copy or reference bytes received via a VDP bytes callback
- have the top request thread push these bytes
- run additional VDPs on the subrequest threads

This concept has some fundamental drawbacks:

- varnish-cache core uses the gzgz and pretendgzip vdps to strip intermediate gzip headers and calculate the CRC. Because the CRC needs to be calculated in the order of delivery, we cannot calculate it in the subrequest threads. We would thus need to reinvent all of the CRC calculation, with many special cases to consider.
- even if we did this, our support for additional VDPs at esi_level > 0 would be either limited or really complicated: for one, we currently always need the pesi vdp first (which differs from standard varnish), and we would probably need many more cases where we copy data.

In general, our current concept complicates things and requires work to be done multiple times.

This commit shows the basic idea to avoid all this complication. It is far from clean, but already survives a varnishtest -j40 -n1000 src/tests/*vtc

It does not yet change the vdp context, but it will allow us to get much closer to the original varnish behavior: we return from the subreq thread without invoking any delivery; we just save the references to the request and (busy) object to continue delivery later (in the top request thread). The only ugliness this requires is that we need to keep varnish-cache core code from removing a private (pass/hfm/hfp) object from under our feet.

The top request can then deliver non-esi objects with the already built vdp chain without any additional copying whatsoever; the delivery part of the request is simply continued in a different thread.

This will allow us to switch back to the varnish-cache esi concepts: ESI subrequests push their gzgz/pretendgzip VDPs and are otherwise compatible with other VDPs. And they do not require the esi VDP to be present for subrequests. Via our transport, I think we will at least be able to ensure pesi is used on subrequests if level 0 has esi, but we might even get to pesi/esi interop to the extent where starting with esi and continuing with pesi at some deeper level could work.

For pesi objects we will need to continue to ref/buffer VDP_bytes, because we simply need to do the ESI parse in parallel, and at least for private objects, where there is no second chance, the object will be gone once we have seen the VDP_bytes once. Copying could still be optimized to use less storage objects.
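A hypothetical sketch of the hand-over described above (the pesi_include struct and function names are made up; HSH_RefBoc()/HSH_DerefBoc() and VDP_DeliverObj() are 2019-era varnish-cache internals, assumed here):

    struct pesi_include {
            struct req      *subreq;        /* suspended ESI subrequest */
            struct boc      *boc;           /* ref keeping a busy/private
                                             * (pass/hfm/hfp) object alive */
    };

    /* subreq thread: do not deliver, just stash the references */
    static void
    pesi_subreq_park(struct pesi_include *inc, struct req *req)
    {
            inc->subreq = req;
            inc->boc = HSH_RefBoc(req->objcore);    /* NULL once complete */
    }

    /* topreq thread: continue the parked delivery in document order */
    static void
    pesi_subreq_deliver(struct pesi_include *inc)
    {
            (void)VDP_DeliverObj(inc->subreq);      /* the VDP chain is
                                                     * already in place */
            if (inc->boc != NULL)
                    HSH_DerefBoc(inc->subreq->wrk, inc->subreq->objcore);
    }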
-
Nils Goroll authored
see also e6b9b0f1: So it seems logexpects really do have some issues:

- when started with -start, there seems to exist no synchronization with the following vtc steps, to the extent that a logexpect -wait may wait for one which has already finished, and a client may run before a logexpect has actually started (see the above commit)
- yet even when running on the log head with -d1, we have no guarantee that all the requests have pushed their logs, so add a dirty delay
-