1. 21 Jul, 2024 6 commits
    • fellow_cache: Add panic formatter · 66be0a6a
      Nils Goroll authored
    • fellow_cache: Fix calculation of required number of segments · f6c863b2
      Nils Goroll authored
      It needs to be based on the space to be returned, not on the space
      still to be allocated on disk.
    • fellow_cache: Fix fellow_cache_seglist_init() size parameter semantics · 98d58f39
      Nils Goroll authored
      Before this change, the size parameter was taken as excluding
      struct fellow_cache_seglist, and it was used correctly at the call
      sites changed with this commit (where (size - sizeof *fcsl) was
      passed as the argument).
      
      However, for the calls from
      - fellow_busy_obj_alloc()
      - fellow_cache_obj_get()
      
      there was a mismatch with the caller's size value.
    • fellow_cache: wait for any I/O before delete and fix assertion · 280d1627
      Nils Goroll authored
      Seen during internal testing:
      
       #13 0x00007efc94253e32 in __GI___assert_fail (assertion=0x7efc935ef349 "FCO_REFCNT(fco) <= 2",
          file=0x7efc935ee10f "fellow_cache.c", line=6106,
          function=0x7efc935ef2d4 "void fellow_cache_obj_delete(struct fellow_cache *, struct fellow_cache_obj *, const uint8_t *)") at ./assert/assert.c:101
       #14 0x00007efc935c7f66 in fellow_cache_obj_delete (fc=0x7efc93a41300, fco=fco@entry=0x7efc4778a000,
          hash=hash@entry=0x7efc2ea04270 "\016\351S~\a\346\353҄B\256x\346Mx\375P\211Hz\377U\337\030ol\207Y\276䯒")
          at fellow_cache.c:6106
      
      Reason: ongoing I/O on segments:
      
      (gdb) p fco->fdo_fcs.refcnt
      $19 = 3
      (gdb) p fco->fcsl->lsegs
      $20 = 3
      (gdb) set $i = 0
      (gdb) p fco->fcsl->segs[$i++]->state
      $21 = FCS_INCORE
      (gdb) p fco->fcsl->segs[$i++]->state
      $22 = FCS_READING
      (gdb) p fco->fcsl->segs[$i++]->state
      $23 = FCS_READING
      
      So:
      
      - we cannot make assumptions on the number of references
      - we need to wait for any I/O to complete, not just writes and
        the seglist read
    • 6f6ffe13
  2. 22 May, 2024 17 commits
  3. 10 May, 2024 2 commits
  4. 25 Mar, 2024 2 commits
  5. 01 Mar, 2024 3 commits
  6. 19 Feb, 2024 3 commits
  7. 14 Feb, 2024 6 commits
    • Bring back cramming to fellow_busy_seg_memalloc() · c57d7879
      Nils Goroll authored
      Ref #60
    • Add a cramlimit function on page sizes · bef98385
      Nils Goroll authored
    • Rename for clarity · 3b9df28e
      Nils Goroll authored
    • Optimize segment memory allocation further · bee79167
      Nils Goroll authored
      If the page from the segmem pool is too big, do not just trim it,
      but rather trade it for a smaller page if that is sufficient.
      
      Ref #60
    • Neuter fetch_chunksize from Varnish-Cache and allocate chunksize · 0dd59f23
      Nils Goroll authored
      ... if it looks like we were handling chunked encoding.
      
      fellow has (and needs to have) its own strategy for allocating
      growing objects; it essentially works around the fetch_chunksize
      coming from varnish-cache by recording whether subsequent
      allocation requests grow the object (for chunked encoding) or
      converge on a maximum (for content-length).
      
      This strategy had an undesired side effect: the newly introduced
      fbo_segmem pool always allocates the chunk size, but the disk
      segment allocation used the size from varnish-cache, which, for
      the example in the ticket, led to 1MB chunks being allocated but
      trimmed down to only 16KB, for each allocation.
      
      We now explicitly test if varnish-cache is requesting fetch_chunksize
      and, if so, allocate the chunk size.
      
      This brings the disk segment allocation in line with the mempool.
      
      On the other hand, for chunked encoding, we will still over-allocate
      and trim when the actual object is smaller than the chunk size, but
      this is by design.
      
      Fixes #60
    • Rename variable for clarity · f58dca47
      Nils Goroll authored
      While working on #60.
  8. 09 Feb, 2024 1 commit