  1. 27 May, 2024 2 commits
  2. 22 May, 2024 7 commits
  3. 21 May, 2024 9 commits
  4. 10 May, 2024 3 commits
  5. 25 Mar, 2024 2 commits
  6. 01 Mar, 2024 3 commits
  7. 19 Feb, 2024 3 commits
  8. 14 Feb, 2024 6 commits
    • Bring back cramming to fellow_busy_seg_memalloc() · c57d7879
      Nils Goroll authored
      Ref #60
    • Add a cramlimit function on page sizes · bef98385
      Nils Goroll authored
    • Rename for clarity · 3b9df28e
      Nils Goroll authored
    • Optimize segment memory allocation further · bee79167
      Nils Goroll authored
      If the page from the segmem pool is too big, do not just trim it, but
      rather trade it for a smaller page if that is sufficient (a sketch of
      the idea follows this commit group).
      
      Ref #60
    • Neuter fetch_chunksize from Varnish-Cache and allocate chunksize · 0dd59f23
      Nils Goroll authored
      ... if it looks like we are handling chunked encoding.
      
      fellow has (and needs to have) its own strategy for allocating growing
      objects. It essentially works around the fetch_chunksize coming from
      varnish-cache by recording whether subsequent allocation requests keep
      growing the object (for chunked encoding) or converge on a maximum
      (for content-length).
      
      This strategy had an undesired side effect: the newly introduced
      fbo_segmem pool always allocates the chunk size, while the disk
      segment allocation was still using the size from varnish-cache. For
      the example in the ticket, this led to 1MB chunks being allocated but
      trimmed down to only 16KB - for each allocation.
      
      We now explicitly test whether varnish-cache is requesting
      fetch_chunksize and, if so, allocate the chunk size (a sketch of this
      check follows this commit group).
      
      This brings the disk segment allocation in line with the mempool.
      
      On the other hand, for chunked encoding, we will still over-allocate
      and trim when the actual object is smaller than the chunk size, but
      this is by design.
      
      Fixes #60
    • Rename variable for clarity · f58dca47
      Nils Goroll authored
      while working on #60
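  A minimal sketch of the page-trade idea from bee79167, assuming a
  hypothetical pool API (segmem_pool_take(), segmem_pool_return(),
  segmem_pool_alloc() and size_class() are stand-ins, not fellow's actual
  functions): if the page handed out by the segmem pool is larger than the
  smallest sufficient size class, it is returned and a right-sized page is
  allocated, instead of merely trimming the big one.

      #include <stddef.h>

      /* Hypothetical stand-ins for the segmem pool API, not fellow's code */
      struct segmem_pool;
      void *segmem_pool_take(struct segmem_pool *, size_t *have);
      void segmem_pool_return(struct segmem_pool *, void *page, size_t sz);
      void *segmem_pool_alloc(struct segmem_pool *, size_t sz);

      /* smallest power-of-two size class (assumed >= one 4KB page) covering the request */
      static size_t
      size_class(size_t need)
      {
          size_t sz = 4096;

          while (sz < need)
              sz <<= 1;
          return (sz);
      }

      static void *
      segmem_get(struct segmem_pool *pool, size_t need, size_t *got)
      {
          size_t want = size_class(need);
          size_t have;
          void *page = segmem_pool_take(pool, &have);

          if (page != NULL && have > want) {
              /* page from the pool is too big: trade it for a smaller one */
              segmem_pool_return(pool, page, have);
              page = segmem_pool_alloc(pool, want);
              have = want;
          }
          *got = (page != NULL) ? have : 0;
          return (page);
      }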
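  Likewise, a sketch of the fetch_chunksize check from 0dd59f23, again with
  invented names (cache_fetch_chunksize and fellow_chunk_bytes are
  illustrative, not actual identifiers from varnish-cache or fellow): if the
  requested size equals varnish-cache's fetch_chunksize, the disk segment
  allocation uses fellow's own chunk size, the same size the fbo_segmem pool
  hands out.

      #include <stddef.h>

      /* Illustrative stand-ins, not actual identifiers */
      extern size_t cache_fetch_chunksize;  /* varnish-cache's fetch_chunksize parameter */
      extern size_t fellow_chunk_bytes;     /* chunk size used by the fbo_segmem pool */

      /* size to allocate for a disk segment */
      static size_t
      disk_seg_alloc_size(size_t requested)
      {
          if (requested == cache_fetch_chunksize) {
              /* varnish-cache asks for fetch_chunksize, i.e. likely chunked
               * encoding: allocate fellow's own chunk size instead */
              return (fellow_chunk_bytes);
          }
          return (requested);
      }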
  9. 09 Feb, 2024 2 commits
  10. 08 Feb, 2024 1 commit
  11. 07 Feb, 2024 2 commits
    • fellow_log: Bring back obj deref during FP_INIT · 63b003ec
      Nils Goroll authored
      8134e93b broke object deletions during
      FP_INIT, because fellow_dskbuddy() waits for FP_OPEN:
      
       #3  0x00007f3f51a85d2b in fellow_wait_open (ffd=0x7f3f445a8000) at fellow_log.c:847
       #4  fellow_dskbuddy (ffd=0x7f3f445a8000) at fellow_log.c:6381
       #5  0x00007f3f51aa31f7 in fellow_cache_obj_delete (fc=0x7f3f446d4000, fco=<optimized out>,
           hash=hash@entry=0x7f3f404ce670 "ǹ\216N\032\217\230},p\245\205\361i \002\253\253Rn\372ز\303\307\355,\254\342\024\360M")
           at fellow_cache.c:6032
       #6  0x00007f3f51a597a9 in sfedsk_objfree (wrk=0x7f3f40dfc5d0, dskoc=0x7f3f404d5440) at fellow_storage.c:655
       #7  0x0000564d8e23c14a in ObjFreeObj (wrk=wrk@entry=0x7f3f40dfc5d0, oc=0x7f3f404d5440) at cache/cache_obj.c:412
       #8  0x0000564d8e232a9f in HSH_DerefObjCore (wrk=0x7f3f40dfc5d0, ocp=ocp@entry=0x7fffcd3303d0, rushmax=rushmax@entry=-1)
           at cache/cache_hash.c:1065
       #9  0x00007f3f51a5022f in festash_work_fes (fet=fet@entry=0x7fffcd33bbf0, fes=0x7f3e7a006640, ban=ban@entry=0x7ee790c56160)
           at fellow_stash.h:195
       #10 0x00007f3f51a54be2 in festash_top_work (fet=fet@entry=0x7fffcd33bbf0, has_bans=1) at fellow_stash.h:226
       #11 0x00007f3f51a586b8 in sfe_resurrect_ban (e=0x7f3e401d7c98, sfer=0x7fffcd33bbb0) at fellow_storage.c:2078
       #12 sfe_resurrect (priv=0x7fffcd33bbb0, e=0x7f3e401d7c98) at fellow_storage.c:2111
       #13 0x00007f3f51a81163 in fellow_logs_iter_block (flics=flics@entry=0x7fffcd332b80, flivs=flivs@entry=0x7fffcd337050,
           logblk=logblk@entry=0x7f3e401d7000) at fellow_log.c:4834
       #14 0x00007f3f51a82864 in fellow_logs_iter (flics=0x7fffcd332b80, flivs=flivs@entry=0x7fffcd337050, active_logregion=0x7f3f445a8360,
          empty_logregion=0x7f3f445a8370, off=594695172096, off@entry=656178581504) at fellow_log.c:5294
       #15 0x00007f3f51a84886 in fellow_logs_rewrite (ffd=ffd@entry=0x7f3f445a8000, new_log_fdr=new_log_fdr@entry=0x0,
           resur_f=resur_f@entry=0x7f3f51a57da0 <sfe_resurrect>, resur_priv=resur_priv@entry=0x7fffcd33bbb0) at fellow_log.c:5789
       #16 0x00007f3f51a8763b in fellow_log_open (ffd=0x7f3f445a8000, resur_f=resur_f@entry=0x7f3f51a57da0 <sfe_resurrect>,
           resur_priv=resur_priv@entry=0x7fffcd33bbb0) at fellow_log.c:6809
       #17 0x00007f3f51a5516a in sfe_open_task (priv=0x7fffcd33bbb0, wrk=<optimized out>) at fellow_storage.c:2199
      
      But rather than simply bringing this back, we postpone the deletion
      work with a thin delete (a sketch of the idea follows at the end of
      this log).
    • build: fence stack usage on ubuntu · 67e9f228
      Nils Goroll authored
      for interesting detail, read the issue
      
      Fixes #57
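  A rough sketch of the thin-delete postponement from 63b003ec above, with
  invented names (log_phase_get(), log_thin_delete(), obj_delete_now() and
  the two-value phase enum are illustrations only, not fellow's actual
  definitions): while the log is still in FP_INIT, a deletion only records
  a thin delete instead of going through the disk buddy allocator, which
  would block waiting for FP_OPEN; the real deletion work happens once the
  log is open.

      /* Hypothetical declarations for illustration only */
      enum log_phase { FP_INIT = 1, FP_OPEN };

      struct fellow_obj;

      enum log_phase log_phase_get(void);
      void log_thin_delete(struct fellow_obj *);  /* record the deletion, resolve later */
      void obj_delete_now(struct fellow_obj *);   /* full deletion, requires FP_OPEN */

      static void
      obj_delete(struct fellow_obj *o)
      {
          if (log_phase_get() < FP_OPEN) {
              /* still in FP_INIT: postpone instead of waiting for FP_OPEN */
              log_thin_delete(o);
              return;
          }
          obj_delete_now(o);
      }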