POC/MVP: introduce T_SUBREQ: ref the subrequest
Much else still needs to change, but this commit works with the previous concept, so it might be helpful...

Before this commit, our concept basically was:

- start ESI include requests on separate threads as quickly as possible
- copy or reference bytes received via a VDP bytes callback
- have the top request thread push these bytes
- run additional VDPs on the subrequest threads

This concept has some fundamental drawbacks:

- varnish-cache core uses the gzgz and pretendgzip VDPs to strip intermediate gzip headers and calculate the CRC. Because the CRC needs to be calculated in the order of delivery, we cannot calculate it in the subrequest threads. We would thus need to reinvent all of the CRC calculation, with many special cases to consider.
- even if we did this, our support for additional VDPs at esi_level > 0 would be either limited or really complicated: for one, we currently always need the pesi VDP first (which differs from standard Varnish), and we would probably need many more cases where we copy data.

In general, our current concept complicates things and requires work to be done multiple times.

This commit shows the basic idea to avoid all this complication. It is far from clean, but already survives a

	varnishtest -j40 -n1000 src/tests/*vtc

It does not yet change the VDP context, but it will allow us to get much closer to the original Varnish behavior: we return from the subrequest thread without invoking any delivery; we just save the references to the request and the (busy) object to continue delivery later (in the top request thread). The only ugliness this requires is that we need to keep varnish-cache core code from removing a private (pass/hfm/hfp) object from under our feet.

The top request can then deliver non-ESI objects with the already-built VDP chain without any additional copying whatsoever; the delivery part of the request is simply continued in a different thread.
This will allow us to switch back to the varnish-cache ESI concepts: ESI subrequests push their gzgz/pretendgzip VDPs and are otherwise compatible with other VDPs, and they do not require the esi VDP to be present for subrequests.

Via our transport, I think we will at least be able to ensure that pesi is used on subrequests if level 0 has ESI, but we might even get to pesi/esi interop to the extent where starting with esi and continuing with pesi at some deeper level could work.

For pesi objects we will need to continue to ref/buffer VDP_bytes, because we simply need to do the ESI parse in parallel, and at least for private objects there is no second chance: the object will be gone once we have seen the VDP_bytes once. Copying could still be optimized to use fewer storage objects.