1. 30 Jul, 2019 5 commits
  2. 29 Jul, 2019 23 commits
  3. 28 Jul, 2019 9 commits
  4. 27 Jul, 2019 3 commits
    • fix MEMPOOL.pesi expectation · 4b75f33a
      Nils Goroll authored
      actually, toosmall should never happen or happen exactly once: if
      our test allocation from mpl_init() is still in the pool when we
      MPL_Get the first node, it will be too small.
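
      A minimal standalone sketch of that expectation (an illustrative
      model only; it is not the actual varnish-cache mempool code, and
      pool_init()/pool_get() merely stand in for mpl_init()/MPL_Get()):
      the init self-test leaves one small item behind, so the first
      larger request can bump "toosmall" at most once.

        /* toy mempool model; build: cc toosmall.c */
        #include <stdio.h>
        #include <stdlib.h>

        struct pool {
            void     *free_item;   /* at most one cached item in this model */
            size_t    free_size;
            unsigned  toosmall;    /* counter akin to MEMPOOL.*.toosmall */
        };

        static void
        pool_init(struct pool *p, size_t testsize)
        {
            /* the init self-test allocation ends up back in the pool */
            p->free_item = malloc(testsize);
            p->free_size = testsize;
            p->toosmall = 0;
        }

        static void *
        pool_get(struct pool *p, size_t size)
        {
            if (p->free_item != NULL && p->free_size < size) {
                /* cached item too small: count it and throw it away */
                p->toosmall++;
                free(p->free_item);
                p->free_item = NULL;
            }
            if (p->free_item != NULL) {
                void *r = p->free_item;
                p->free_item = NULL;
                return (r);
            }
            return (malloc(size));
        }

        int
        main(void)
        {
            struct pool p;

            pool_init(&p, 32);          /* small init/self-test item */
            free(pool_get(&p, 4096));   /* first real node: toosmall -> 1 */
            free(pool_get(&p, 4096));   /* pool now empty: no further bumps */
            printf("toosmall = %u (never more than 1)\n", p.toosmall);
            return (0);
        }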
    • POC/MVP: introduce T_SUBREQ: ref the subrequest · d9c36c7e
      Nils Goroll authored
      much else needs to be changed, but this commit still works with the
      previous concept, so it might be helpful...
      
      Before this commit, our concept basically was:
      
      - start esi include requests on separate threads as quickly as
        possible
      
      - copy or reference bytes received via a VDP bytes callback
      
      - have the top request thread push these bytes
      
      - run additional VDPs on the subrequest threads
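
      A toy model of this pre-T_SUBREQ flow (illustrative names only,
      not pesi or varnish-cache code): each include runs on its own
      thread and keeps a copy of the bytes its VDP callback would see,
      and the top request thread later pushes the buffered bytes in
      include order.

        /* build: cc old_concept.c -pthread */
        #include <pthread.h>
        #include <stdio.h>

        #define NINC 3

        struct include {
            pthread_t  thr;
            char       buf[64];    /* bytes copied by the VDP callback */
            int        len;
        };

        static void *
        subreq_thread(void *priv)
        {
            struct include *inc = priv;

            /* stand-in for the VDP bytes callback copying delivery data */
            inc->len = snprintf(inc->buf, sizeof inc->buf,
                "<fragment %p>", (void *)inc);
            return (NULL);
        }

        int
        main(void)
        {
            struct include inc[NINC];
            int i;

            /* start include "requests" as early as possible */
            for (i = 0; i < NINC; i++)
                pthread_create(&inc[i].thr, NULL, subreq_thread, &inc[i]);

            /* the top request thread pushes the buffered bytes in order */
            for (i = 0; i < NINC; i++) {
                pthread_join(inc[i].thr, NULL);
                fwrite(inc[i].buf, 1, (size_t)inc[i].len, stdout);
            }
            putchar('\n');
            return (0);
        }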
      
      This concept has some fundamental drawbacks:
      
      - varnish-cache core uses the gzgz and pretendgzip vdps to strip
        intermediate gzip headers and calculate the CRC
      
        Because the CRC needs to be calculated in the order of delivery
        (see the sketch after this list), we cannot calculate it in the
        subrequest threads. We would thus need to reinvent all of the
        CRC calculation, with many special cases to consider.
      
      - even if we did this, our support for additional VDPs at esi_level >
        0 would be either limited or really complicated: for one, we
        currently always need the pesi vdp first (which differs from
        standard varnish), and we would probably need many more cases where
        we copy data
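
      The sketch referenced above (plain zlib, outside of varnish, just
      to illustrate the ordering constraint): crc32() is a running
      checksum, so the fragments have to pass through one calculation in
      delivery order; computing them independently per subrequest thread
      would require extra machinery to combine the results.

        /* build: cc crc_order.c -lz */
        #include <stdio.h>
        #include <string.h>
        #include <zlib.h>

        int
        main(void)
        {
            const char *part1 = "Hello, ", *part2 = "ESI!";
            uLong crc;

            crc = crc32(0L, Z_NULL, 0);     /* initial CRC value */
            crc = crc32(crc, (const Bytef *)part1, strlen(part1));
            crc = crc32(crc, (const Bytef *)part2, strlen(part2));
            printf("CRC in delivery order: %08lx\n", crc);

            /* the same parts in the wrong order give a different CRC */
            crc = crc32(0L, Z_NULL, 0);
            crc = crc32(crc, (const Bytef *)part2, strlen(part2));
            crc = crc32(crc, (const Bytef *)part1, strlen(part1));
            printf("CRC with order swapped: %08lx\n", crc);
            return (0);
        }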
      
      In general, our current concept complicates things and requires work
      to be done multiple times.
      
      This commit shows the basic idea for avoiding all this complication.
      It is far from clean, but already survives a varnishtest -j40 -n1000
      src/tests/*vtc
      
      It does not yet change the vdp context, but it will allow us to get
      much closer to the original varnish behavior:
      
      We return from the subreq thread without invoking any delivery; we
      just save references to the request and the (busy) object to
      continue delivery later (in the top request thread).
      
      The only ugliness this requires is that we need to keep varnish-cache
      core code from removing a private (pass/hfm/hfp) object from under our
      feet.
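
      A rough standalone sketch of the new idea (the struct and function
      names below, apart from T_SUBREQ itself, are simplified stand-ins
      and not the actual pesi or varnish-cache types): the subrequest
      thread parks references to its request and (busy) object in a
      T_SUBREQ node and takes an extra reference so a private object is
      not reaped before the top request thread continues delivery.

        #include <assert.h>
        #include <stdio.h>

        struct objcore {                /* stand-in for the cache object */
            int          refcnt;
            const char  *body;
        };

        struct req {                    /* stand-in for the (sub)request */
            const char     *url;
            struct objcore *oc;
        };

        enum n_type { T_DATA, T_SUBREQ };

        struct node {                   /* simplified node holding the parked subrequest */
            enum n_type      type;
            struct req      *subreq;    /* T_SUBREQ: parked request */
            struct objcore  *oc;        /* T_SUBREQ: referenced object */
        };

        /* runs on the subrequest thread: no delivery, just save references */
        static void
        subreq_park(struct node *n, struct req *r)
        {
            n->type = T_SUBREQ;
            n->subreq = r;
            n->oc = r->oc;
            n->oc->refcnt++;    /* keep core from removing a private object */
        }

        /* runs later on the top request thread: continue delivery, drop ref */
        static void
        topreq_deliver(struct node *n)
        {
            assert(n->type == T_SUBREQ);
            printf("delivering %s: %s\n", n->subreq->url, n->oc->body);
            n->oc->refcnt--;
        }

        int
        main(void)
        {
            struct objcore oc = { 1, "<esi fragment body>" };
            struct req r = { "/include1", &oc };
            struct node n;

            subreq_park(&n, &r);    /* subrequest thread hands over its refs */
            topreq_deliver(&n);     /* top thread finishes the delivery part */
            return (0);
        }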
      
      Then the top request can deliver non-esi objects with the already
      built vdp without any additional copying whatsoever; the delivery
      part of the request is simply continued in a different thread.
      
      This will allow us to switch back to the varnish-cache esi concepts:
      ESI subrequests push their gzgz/pretendgzip VDPs and are otherwise
      compatible with other VDPs. And they do not require the esi VDP to be
      present for subrequests.
      
      Via our transport, I think we will at least be able to ensure pesi is
      used on subrequests if level 0 has esi, but we might even get to
      pesi/esi interop to the extent where starting with esi and continuing
      with pesi at some deeper level could work.
      
      For pesi objects we will need to continue to ref/buffer VDP_bytes,
      because we simply need to do the ESI parse in parallel, and at least
      for private objects there is no second chance: the object will be
      gone once we have seen the VDP_bytes.
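
      A minimal sketch of that ref/buffer decision (illustrative code,
      not the actual pesi VDP): the bytes callback sees each segment
      exactly once, so for a private object the segment has to be copied
      right away, while for a cacheable object keeping a reference into
      storage would do.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct seg {
            const void  *ptr;       /* referenced or copied bytes */
            size_t       len;
            int          copied;    /* 1 if we own the allocation */
        };

        static int
        buffer_bytes(struct seg *s, const void *ptr, size_t len,
            int obj_is_private)
        {
            s->len = len;
            s->copied = 0;
            if (!obj_is_private) {
                /* cacheable: keep a reference, parse ESI in parallel */
                s->ptr = ptr;
                return (0);
            }
            /* private (pass/hfm/hfp): one-shot data, copy now or lose it */
            void *copy = malloc(len);
            if (copy == NULL)
                return (-1);
            memcpy(copy, ptr, len);
            s->ptr = copy;
            s->copied = 1;
            return (0);
        }

        int
        main(void)
        {
            static const char body[] = "<html>...esi:include...</html>";
            struct seg s;

            if (buffer_bytes(&s, body, sizeof body - 1, 1) == 0) {
                printf("buffered %zu bytes (%s)\n", s.len,
                    s.copied ? "copied" : "referenced");
                if (s.copied)
                    free((void *)(uintptr_t)s.ptr);
            }
            return (0);
        }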
      
      Copying could still be optimized to use fewer storage objects.
    • logexpect really is too racy · ebfcebb5
      Nils Goroll authored
      see also e6b9b0f1:
      
      So it seems logexpect really has some issues:
      
      - when starting with -start, there seems to be no synchronization
        with the following vtc steps, to the extent that a logexpect -wait
        may wait for one which has already finished and a client may run
        before a logexpect has actually started (see the commit above)
      
      - yet even when running on the log head with -d1, we have no
        guarantee that all the requests have pushed their logs, so add a
        dirty delay