1. 14 Dec, 2023 13 commits
  2. 13 Dec, 2023 14 commits
    • Use BUDDY_POOL() for regionlists (addresses a lockup) · 5fbbf9a3
      Nils Goroll authored
      The oh-so-smart idea from 39c2568e was
      pretty dumb after all:
      
      While testing with the low RAM config (16MB on 10GB), a lockup was found
      with only ~45% of RAM occupied. The reason for the lockup was the
      un-crammed 64KB (16 bits) request for a regionlist with priority 4, which
      was blocking all other requests.
      
      So: No, trying to allocate something "just in case" is never a good
      idea.
    • BUDDY_POOL() for DLE changes · 573533ec
      Nils Goroll authored
    • Lower prio for bitmap alloc in rewrite · eca7f1d3
      Nils Goroll authored
      It should allow log transactions to complete first
    • Use BUDDY_POOL() for logblks · 102667f9
      Nils Goroll authored
    • a7fe43fb
    • 4fe6f1f0
    • Rename logbuffer dskrsv/dskreqs -> dskpool · e89ff1ee
      Nils Goroll authored
    • Generalize the "pooled" async allocation idea from logbuffer dskreqs · 45ef5a8d
      Nils Goroll authored
      A pool tries to always have allocations ready. There are two buddy_reqs:
      when one is depleted, allocations are taken from the other while the
      drained reqs are being refilled.
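      A minimal sketch of the two-slot idea (struct pool, pool_get() and
      friends are hypothetical names, and malloc() stands in for the
      asynchronous buddy_reqs; this is not the actual BUDDY_POOL()
      implementation):

          #include <stddef.h>
          #include <stdlib.h>

          #define POOL_SLOT_N 8

          struct pool_slot {
              void   *alloc[POOL_SLOT_N]; /* ready allocations */
              size_t  n;                  /* how many are left */
          };

          struct pool {
              struct pool_slot slot[2];
              int              active;    /* slot we currently take from */
              size_t           sz;        /* size of each allocation */
          };

          /* refill a slot with ready allocations; a synchronous stand-in
           * for the asynchronous buddy_reqs refill */
          static void
          pool_refill(struct pool *p, int i)
          {
              struct pool_slot *s = &p->slot[i];

              while (s->n < POOL_SLOT_N)
                  s->alloc[s->n++] = malloc(p->sz);
          }

          /* take one ready allocation; when the active slot is depleted,
           * switch to the other slot while the drained one is refilled
           * (assumes both slots were primed with pool_refill() at setup) */
          static void *
          pool_get(struct pool *p)
          {
              struct pool_slot *s = &p->slot[p->active];

              if (s->n == 0) {
                  int drained = p->active;

                  p->active = !p->active;   /* serve from the other slot */
                  pool_refill(p, drained);  /* asynchronous in the real code */
                  s = &p->slot[p->active];
              }
              if (s->n == 0)
                  return (NULL);            /* both slots empty */
              return (s->alloc[--s->n]);
          }

      As long as both slots are primed at setup, at most one slot is empty
      at any time, so a consumer normally finds an allocation ready.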
    • buddy_test: output OK to stdout · 7435451b
      Nils Goroll authored
      such that the test can easily be run with 2>/dev/null if the debug
      output is of no interest.
    • Flexelint · 98432f7c
      Nils Goroll authored
    • Rework fellow_cache_obj_iter and read ahead · e8d54546
      Nils Goroll authored
      Issue #41 has shown a deadlock scenario where various object iterators
      would wait for memory.
      
      While reviewing this issue, we noticed a couple of shortcomings in the
      existing code:
      
      * fellow_cache_seg_ref_in() would always wait for allocation requests
        for readahead segments. Yet, under memory pressure, we should not
        wait for readahead memory at all.
      
      * fellow_cache_obj_iter() would hold onto already-sent segments even
        while waiting for synchronous I/O and memory allocations.
      
      To improve on these shortcomings and further optimize the code, some
      of fellow_cache_obj_iter() and all of the readahead code have been
      rewritten. The improvements comprise the following:
      
      * For read ahead, we now use asynchronous memory allocations. If they
        succeed right away, we also issue I/O right away, but if allocations
        are delayed, we continue delivery and check back later (see the
        sketch after this list), by which time the allocations will
        hopefully have succeeded.
      
      * We decouple memory allocations from specific segments and only care
        about the right size of the allocation. Because many segments will
        be of chunk_bytes size, this will allow more efficient use of
        available asynchronous allocations.
      
      * We now dereference already-sent segments whenever we need to wait
        for anything, be it a memory allocation or I/O. This should help
        overall efficiency and reduce memory pressure, because already-sent
        segments can be LRUd earlier.

        The drawback is that we flush the VDP pipeline more often (we need
        to do so before we can deref segments).
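
      The readahead handling can be pictured as a small state machine (a
      minimal sketch; alloc_async_try(), alloc_async_poll() and issue_read()
      are hypothetical stand-ins, not the fellow API, and the allocation is
      keyed by size only, matching the decoupling described above):

          #include <stddef.h>
          #include <stdlib.h>

          #define CHUNK_BYTES ((size_t)128 * 1024)

          /* stand-ins for an asynchronous allocator: _try may come back
           * empty-handed (allocation delayed), _poll checks whether the
           * allocation completed in the meantime */
          static void *alloc_async_try(size_t sz) { (void)sz; return (NULL); }
          static void *alloc_async_poll(size_t sz) { return (malloc(sz)); }
          static void issue_read(void *mem) { (void)mem; /* start disk read */ }

          enum ra_state { RA_IDLE, RA_ALLOC_PENDING, RA_IO_ISSUED };

          struct ra_seg {
              enum ra_state  state;
              void          *mem;
          };

          /* called once per delivery iteration for each readahead candidate */
          static void
          ra_advance(struct ra_seg *ra)
          {
              switch (ra->state) {
              case RA_IDLE:
                  ra->mem = alloc_async_try(CHUNK_BYTES);
                  if (ra->mem != NULL) {
                      issue_read(ra->mem);   /* memory ready: I/O right away */
                      ra->state = RA_IO_ISSUED;
                  } else {
                      /* do NOT wait: keep delivering segments we already
                       * have and check back on the next pass */
                      ra->state = RA_ALLOC_PENDING;
                  }
                  break;
              case RA_ALLOC_PENDING:
                  ra->mem = alloc_async_poll(CHUNK_BYTES);
                  if (ra->mem != NULL) {
                      issue_read(ra->mem);   /* allocation completed meanwhile */
                      ra->state = RA_IO_ISSUED;
                  }
                  break;
              case RA_IO_ISSUED:
                  break;
              }
          }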
      
      We also cap the readahead parameter at the equivalent of 1/16 of
      memory in order to avoid inefficiencies caused by single requests
      holding too much of the memory cache hostage.
      
      An additional hard cap at 31 is required to keep the default ESI
      depth supported with the default stack size of varnish-cache.
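
      As a back-of-the-envelope sketch, the cap could be computed along
      these lines (assumed formula for illustration, not necessarily the
      exact fellow computation; memsz is the memory cache size):

          #include <stdint.h>

          /* cap the readahead parameter: at most the number of
           * chunk_bytes-sized segments fitting into 1/16 of the memory
           * cache, and never more than the hard cap of 31 */
          static unsigned
          readahead_cap(unsigned param, uint64_t memsz, uint64_t chunk_bytes)
          {
              uint64_t max = memsz / 16 / chunk_bytes;

              if (param > max)
                  param = (unsigned)max;
              if (param > 31)
                  param = 31;
              return (param);
          }

      For example, with a 1GB memory cache and 128KB chunks, 1GB / 16 /
      128KB = 512, so the hard cap of 31 is what actually applies.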
    • Decide a race between obj_get() and lru · c8b01760
      Nils Goroll authored
      We only set stobj->priv after returning from obj_get(), so
      assert(oc->stobj->priv == fco) could trigger in the LRU thread.

      We now set the priv right before inserting into the LRU.
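      Condensed into a self-contained sketch (stand-in types and functions;
      the real code hands the fco to the fellow LRU):

          #include <assert.h>
          #include <stddef.h>

          struct fco   { int dummy; };     /* cache object, stand-in */
          struct stobj { void *priv; };

          static struct stobj *lru_head;   /* what the LRU thread scans */

          /* stand-in: after this call, the LRU thread may pick up the
           * object at any moment */
          static void
          lru_insert(struct stobj *so)
          {
              lru_head = so;
          }

          static void
          obj_new(struct stobj *so, struct fco *fco)
          {
              /* the fix: publish priv BEFORE the object becomes visible
               * on the LRU; previously priv was only set after obj_get()
               * returned, so the LRU thread could observe priv == NULL */
              so->priv = fco;
              lru_insert(so);
          }

          /* the assertion from the LRU thread now always holds */
          static void
          lru_work(struct stobj *so, struct fco *fco)
          {
              assert(so->priv == fco);
          }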
    • Integrate fellow_busy allocation in fellow_cache_obj_new() · 2235fc09
      Nils Goroll authored
      It does not make sense to use up memory for busy objects if we cannot
      create their cache and disk counterparts in memory.
      
      Motivated by #41
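      The pattern is plain all-or-nothing allocation, roughly (stand-in
      types and signature for illustration, not fellow_cache_obj_new()
      itself):

          #include <stdlib.h>

          struct fbo { char space[64]; };   /* busy object, stand-in */
          struct fco { char space[64]; };   /* cache object, stand-in */

          /* allocate the busy object and its cache counterpart as a
           * unit: if either part fails, release both, so no memory is
           * held for a busy object whose counterparts cannot be made */
          static int
          obj_new(struct fbo **fbop, struct fco **fcop)
          {
              *fbop = malloc(sizeof **fbop);
              *fcop = malloc(sizeof **fcop);
              if (*fbop == NULL || *fcop == NULL) {
                  free(*fbop);              /* free(NULL) is a no-op */
                  free(*fcop);
                  *fbop = NULL;
                  *fcop = NULL;
                  return (-1);
              }
              return (0);
          }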
  3. 11 Dec, 2023 4 commits
  4. 10 Dec, 2023 7 commits
  5. 08 Dec, 2023 1 commit
  6. 29 Nov, 2023 1 commit