    Rework fellow_cache_obj_iter and read ahead · e8d54546
    Nils Goroll authored
    Issue #41 has shown a deadlock scenario where various object iterators
    would wait for memory.
    
    While reviewing this issue, we noticed a couple of shortcomings in the
    existing code:
    
    * fellow_cache_seg_ref_in() would always wait for allocation
      requests for readahead segments, yet under memory pressure we
      should not wait for readahead memory at all.
    
    * fellow_cache_obj_iter() would keep references on already sent
      segments even while waiting for synchronous I/O and memory
      allocations.
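The intended allocation policy for the first point can be sketched as follows. This is an illustration only: `mem_alloc_try()`, `mem_alloc_wait()` and `seg_mem_alloc()` are hypothetical stand-ins, not the actual fellow API.

```c
#include <stddef.h>
#include <stdlib.h>

/* non-blocking attempt; may return NULL under memory pressure
 * (placeholder: the real allocator can fail when memory is short) */
static void *
mem_alloc_try(size_t sz)
{
	return (malloc(sz));
}

/* blocking variant (placeholder: real code would wait for memory) */
static void *
mem_alloc_wait(size_t sz)
{
	return (malloc(sz));
}

/* Desired policy: demand reads may wait for memory, but readahead
 * must not. If no memory is immediately available, readahead is
 * simply skipped, avoiding the deadlock from issue #41. */
static void *
seg_mem_alloc(size_t sz, int is_readahead)
{
	void *p = mem_alloc_try(sz);

	if (p == NULL && !is_readahead)
		p = mem_alloc_wait(sz);
	return (p);	/* NULL for readahead means: no readahead */
}
```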
    
    To address these shortcomings and further optimize the code, parts
    of fellow_cache_obj_iter() and all of the readahead code have been
    rewritten. The improvements comprise the following:
    
    * For readahead, we now use asynchronous memory allocations. If an
      allocation succeeds right away, we also issue the I/O right
      away, but if it is delayed, we continue delivery and check back
      later. With luck, the allocation will have succeeded by then.
    
    * We decouple memory allocations from specific segments and only
      require allocations of the right size. Because many segments are
      of chunk_bytes size, this allows more efficient use of the
      available asynchronous allocations.
    
    * We now dereference already sent segments whenever we need to
      wait for anything, be it a memory allocation or I/O. This should
      improve overall efficiency and reduce memory pressure, because
      already sent segments become eligible for LRU eviction earlier.

      The drawback is that we flush the VDP pipeline more often, since
      we need to flush before we can dereference segments.
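The asynchronous allocation scheme from the first two points can be sketched as below. All names are illustrative stand-ins for the fellow internals; the stub allocator takes an explicit `mem_available` flag in place of real memory-pressure state.

```c
#include <stddef.h>
#include <stdlib.h>

/* A readahead allocation is tracked only by its size, not by a
 * specific segment: once granted, the buffer can serve any segment
 * of that size (typically chunk_bytes). */
struct ra_alloc {
	void	*mem;	/* NULL while the request is still pending */
	size_t	sz;
};

/* stand-in for an async allocator: grants immediately when memory
 * is available, otherwise leaves the request pending */
static int
mem_alloc_async(struct ra_alloc *ra, size_t sz, int mem_available)
{
	ra->sz = sz;
	ra->mem = mem_available ? malloc(sz) : NULL;
	return (ra->mem != NULL);	/* nonzero: issue I/O now */
}

/* called from the delivery loop: re-check a pending request, so
 * delivery continues while the allocation is outstanding */
static int
mem_alloc_poll(struct ra_alloc *ra, int mem_available)
{
	if (ra->mem == NULL && mem_available)
		ra->mem = malloc(ra->sz);
	return (ra->mem != NULL);
}
```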
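The ordering constraint from the last point, flush before dereferencing, can be sketched as follows; `deliver_ctx`, `vdp_flush()` and the other names are hypothetical, with counters standing in for real segment references.

```c
#include <stddef.h>

struct deliver_ctx {
	int	flushed;
	int	refs;	/* references held on already sent segments */
};

/* stand-in: push buffered bytes through the VDP filter chain */
static int
vdp_flush(struct deliver_ctx *dc)
{
	dc->flushed = 1;
	return (0);
}

/* safe only after a flush: delivered segments may still be
 * referenced by buffered VDP data until then */
static void
segs_deref_delivered(struct deliver_ctx *dc)
{
	if (dc->flushed)
		dc->refs = 0;	/* segments become LRU-eligible */
}

/* every wait (memory grant or I/O completion) goes through here,
 * so sent segments are never pinned across a blocking point */
static int
iter_wait(struct deliver_ctx *dc)
{
	if (vdp_flush(dc))
		return (-1);
	segs_deref_delivered(dc);
	return (0);	/* real code would now block on the event */
}
```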
    
    We also cap the readahead parameter at the equivalent of 1/16 of
    the memory cache size in order to avoid the inefficiency of a
    single request holding too much of the memory cache hostage.
    
    An additional hard cap of 31 is required to keep the default ESI
    depth supported with the default stack size of varnish-cache.
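The capping arithmetic can be sketched as follows; `readahead_cap()` is a hypothetical helper illustrating the two limits stated above, with the 1/16 share expressed in chunk_bytes units.

```c
#include <stdint.h>

/* Clamp the readahead parameter to at most the equivalent of 1/16
 * of the memory cache (in chunk_bytes units), with a hard cap of 31
 * to bound stack usage at the default ESI depth. */
static unsigned
readahead_cap(uint64_t memsz, uint64_t chunk_bytes, unsigned ra)
{
	uint64_t max = memsz / 16 / chunk_bytes;

	if (max > 31)
		max = 31;	/* hard cap */
	if (ra > max)
		ra = (unsigned)max;
	return (ra);
}
```

For example, with a 1 GB cache and 1 MB chunks, 1/16 of memory is 64 chunks, so the hard cap of 31 applies; with a 64 MB cache, the 1/16 share limits readahead to 4 chunks.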