1. 13 Dec, 2023 8 commits
    • Rename logbuffer dskrsv/dskreqs -> dskpool · e89ff1ee
      Nils Goroll authored
      e89ff1ee
    • Generalize the "pooled" async allocation idea from logbuffer dskreqs · 45ef5a8d
      Nils Goroll authored
      A pool tries to always have allocations ready. There are two
      buddy_reqs; when one is depleted, allocations are taken from the
      other while the empty reqs are refilled (a sketch of the idea
      follows this entry).
      45ef5a8d
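
      Below is a minimal, self-contained sketch of the double-buffered
      pool idea described above. It is not the fellow/buddy code: plain
      malloc() stands in for the buddy allocator, refilling a request set
      is modeled synchronously, and all names (alloc_pool, req_set,
      pool_take, ...) are invented for illustration.

      #include <stdio.h>
      #include <stdlib.h>

      #define POOL_REQS 8     /* allocations kept ready per request set */

      struct req_set {
          void     *alloc[POOL_REQS];
          unsigned  n;        /* allocations still available */
      };

      struct alloc_pool {
          struct req_set reqs[2];
          unsigned       act; /* index of the set we currently take from */
          size_t         sz;
      };

      /* refill one request set; stands in for issuing async buddy reqs */
      static void
      pool_fill(struct alloc_pool *p, unsigned i)
      {
          struct req_set *rs = &p->reqs[i];

          while (rs->n < POOL_REQS)
              rs->alloc[rs->n++] = malloc(p->sz);
      }

      static void
      pool_init(struct alloc_pool *p, size_t sz)
      {
          p->act = 0;
          p->sz = sz;
          p->reqs[0].n = p->reqs[1].n = 0;
          pool_fill(p, 0);
          pool_fill(p, 1);
      }

      /* take a ready allocation; when one set runs dry, trigger its
       * refill and continue taking from the other set */
      static void *
      pool_take(struct alloc_pool *p)
      {
          struct req_set *rs = &p->reqs[p->act];

          if (rs->n == 0) {
              pool_fill(p, p->act);   /* asynchronous in the real code */
              p->act = 1 - p->act;
              rs = &p->reqs[p->act];
          }
          return (rs->alloc[--rs->n]);
      }

      int
      main(void)
      {
          struct alloc_pool pool;

          pool_init(&pool, 4096);
          for (int i = 0; i < 20; i++)
              free(pool_take(&pool));
          printf("OK\n");
          return (0);   /* allocations still held by the pool are not
                         * freed in this sketch */
      }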
    • buddy_test: output OK to stdout · 7435451b
      Nils Goroll authored
      so that the test can easily be run with 2>/dev/null when the debug
      output is of no interest.
      7435451b
    • Flexelint · 98432f7c
      Nils Goroll authored
      98432f7c
    • Rework fellow_cache_obj_iter and read ahead · e8d54546
      Nils Goroll authored
      Issue #41 has shown a deadlock scenario where various object iterators
      would wait for memory.
      
      While reviewing this issue, we noticed a couple of shortcomings in the
      existing code:
      
      * fellow_cache_seg_ref_in() would always wait for allocation
        requests for readahead segments, yet under memory pressure we
        should not wait for readahead memory at all.
      
      * fellow_cache_obj_iter() would hold onto already sent segments
        even while waiting for synchronous I/O and memory allocations.
      
      To improve on these shortcomings and further optimize the code, some
      of fellow_cache_obj_iter() and all of the readahead code have been
      rewritten. The improvements comprise the following:
      
      * For read ahead, we now use asynchronous memory allocations. If
        they succeed right away, we also issue I/O right away, but if
        allocations are delayed, we continue delivery and check back
        later; with luck, the allocations will have succeeded by then
        (see the sketch after this list).
      
      * We decouple memory allocations from specific segments and only
        require that an allocation has the right size. Because many
        segments will be of chunk_bytes size, this allows more efficient
        use of the available asynchronous allocations.
      
      * We now also de-reference already sent segments whenever we need
        to wait for anything, be it a memory allocation or I/O. This
        should help overall efficiency and reduce memory pressure,
        because already sent segments can be LRUd earlier.
      
        The drawback is that we flush the VDP pipeline more often (we need
        to before we can deref segments).
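
      The following is a schematic of the delivery strategy described in
      the list above, not the actual fellow_cache_obj_iter(). All types
      and helpers (struct seg, alloc_async_ready(), issue_read(),
      flush_vdp_and_deref_sent(), wait_for_io()) are hypothetical
      stand-ins; the point is the ordering: try the asynchronous
      allocation, start I/O immediately if it is ready, otherwise keep
      delivering what is already in memory, and flush and dereference
      already sent segments before blocking on anything.

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdio.h>

      struct seg { bool in_mem, sent; };

      /* hypothetical helpers standing in for the real primitives */
      static bool alloc_async_ready(size_t sz) { (void)sz; return (true); }
      static void issue_read(struct seg *s) { s->in_mem = true; }
      static void deliver(struct seg *s) { s->sent = true; }
      static void wait_for_io(struct seg *s) { s->in_mem = true; }

      static void
      flush_vdp_and_deref_sent(struct seg *s, size_t n)
      {
          /* flush the VDP pipeline, then let go of delivered segments
           * so they become eligible for LRU eviction */
          for (size_t i = 0; i < n; i++)
              if (s[i].sent)
                  s[i].in_mem = false;
      }

      static void
      obj_iter_sketch(struct seg *segs, size_t n, size_t readahead)
      {
          for (size_t i = 0; i < n; i++) {
              /* read ahead: only start I/O if memory is ready right now */
              for (size_t j = i + 1; j < n && j <= i + readahead; j++)
                  if (!segs[j].in_mem && alloc_async_ready(4096))
                      issue_read(&segs[j]);

              if (!segs[i].in_mem) {
                  /* about to block: drop already sent segments first */
                  flush_vdp_and_deref_sent(segs, i);
                  wait_for_io(&segs[i]);
              }
              deliver(&segs[i]);
          }
          flush_vdp_and_deref_sent(segs, n);
      }

      int
      main(void)
      {
          struct seg segs[8] = {{ true, false }};

          obj_iter_sketch(segs, 8, 2);
          printf("OK\n");
          return (0);
      }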
      
      We also cap the readahead parameter at the equivalent of 1/16 of
      memory in order to avoid the inefficiency of single requests
      holding too much of the memory cache hostage.
      
      An additional hard cap at 31 is required to keep the default esi depth
      supported with the default stack size of varnish-cache.
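
      The capping rule can be sketched roughly as below. chunk_bytes and
      the memory size are illustrative inputs and readahead_effective()
      is an invented name; the actual parameter handling in fellow
      differs, but the two limits are the ones described above.

      #include <stdint.h>
      #include <stdio.h>

      #define RA_HARD_CAP 31  /* keeps the default ESI depth within the
                               * default varnish-cache stack size */

      static unsigned
      readahead_effective(unsigned ra_param, uint64_t memsz,
          uint64_t chunk_bytes)
      {
          uint64_t ra = ra_param;
          uint64_t max_by_mem = (memsz / 16) / chunk_bytes;

          if (ra > max_by_mem)    /* at most 1/16 of memory per request */
              ra = max_by_mem;
          if (ra > RA_HARD_CAP)
              ra = RA_HARD_CAP;
          return ((unsigned)ra);
      }

      int
      main(void)
      {
          /* e.g. a 1 GB memory cache, 128 KB chunks, readahead set to 64 */
          printf("effective readahead: %u\n",
              readahead_effective(64, 1ULL << 30, 128 * 1024));
          return (0);
      }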
      e8d54546
    • Decide a race between obj_get() and lru · c8b01760
      Nils Goroll authored
      We only set stobj->priv after returning from obj_get(), so
      assert(oc->stobj->priv == fco) could trigger in the lru thread.

      We now set the priv right before inserting into LRU.
      c8b01760
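
      A minimal illustration of the ordering fix follows: the pointer the
      LRU thread asserts on (stobj->priv == fco in the real code) must be
      published before the object becomes reachable through the LRU list.
      Everything below (fco_sketch, lru_insert(), lru_walk()) is an
      invented stand-in for the fellow structures.

      #include <assert.h>
      #include <pthread.h>
      #include <stdio.h>

      struct fco_sketch {
          void              *priv;     /* stands in for oc->stobj->priv */
          struct fco_sketch *lru_next;
      };

      static struct fco_sketch *lru_head;
      static pthread_mutex_t    lru_mtx = PTHREAD_MUTEX_INITIALIZER;

      static void
      lru_insert(struct fco_sketch *fco)
      {
          pthread_mutex_lock(&lru_mtx);
          /* fixed ordering: priv is set right before the insert, so a
           * thread walking the LRU list only sees published objects */
          fco->priv = fco;
          fco->lru_next = lru_head;
          lru_head = fco;
          pthread_mutex_unlock(&lru_mtx);
      }

      static void
      lru_walk(void)
      {
          pthread_mutex_lock(&lru_mtx);
          for (struct fco_sketch *fco = lru_head; fco != NULL;
              fco = fco->lru_next)
              assert(fco->priv == fco);   /* can no longer trigger */
          pthread_mutex_unlock(&lru_mtx);
      }

      int
      main(void)
      {
          struct fco_sketch fco = { 0 };

          lru_insert(&fco);
          lru_walk();
          printf("OK\n");
          return (0);
      }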
    • Integrate fellow_busy allocation in fellow_cache_obj_new() · 2235fc09
      Nils Goroll authored
      It does not make sense to use up memory for busy objects if we
      cannot create their cache and disk counterparts in memory.
      
      Motivated by #41
      2235fc09
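
      The shape of the change can be illustrated as below: allocate all
      parts of the object in one place and give up without consuming
      anything if one of them cannot be obtained. The struct and the
      function are invented for illustration and do not mirror the real
      fellow_cache_obj_new() signature.

      #include <stdio.h>
      #include <stdlib.h>

      struct obj_sketch {
          void *cache_part;   /* stands in for the cache counterpart */
          void *disk_part;    /* stands in for the disk counterpart */
          void *busy_part;    /* stands in for the busy object
                               * (fellow_busy in the real code) */
      };

      static struct obj_sketch *
      obj_new_sketch(size_t cache_sz, size_t disk_sz, size_t busy_sz)
      {
          struct obj_sketch *o = calloc(1, sizeof *o);

          if (o == NULL)
              return (NULL);
          o->cache_part = malloc(cache_sz);
          o->disk_part = malloc(disk_sz);
          /* only spend memory on the busy part if the counterparts exist */
          if (o->cache_part == NULL || o->disk_part == NULL ||
              (o->busy_part = malloc(busy_sz)) == NULL) {
              free(o->cache_part);
              free(o->disk_part);
              free(o);
              return (NULL);
          }
          return (o);
      }

      int
      main(void)
      {
          struct obj_sketch *o = obj_new_sketch(64, 64, 64);

          printf("%s\n", o != NULL ? "created" : "failed");
          if (o != NULL) {
              free(o->cache_part);
              free(o->disk_part);
              free(o->busy_part);
              free(o);
          }
          return (0);
      }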
  2. 11 Dec, 2023 4 commits
  3. 10 Dec, 2023 7 commits
  4. 08 Dec, 2023 1 commit
  5. 29 Nov, 2023 3 commits
  6. 28 Nov, 2023 17 commits