- 22 May, 2024 5 commits
-
Nils Goroll authored
Avoid comparison between signed and unsigned. We know that (have) is non-negative here because of the explicit test for < 0 two lines above, so it is safe to cast to the type of the rhs operand. Ref a9700c63 Ref #65
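The cast pattern described in this commit can be sketched as follows (a minimal illustration; the function name and types are hypothetical, not the actual fellow code):

```c
#include <stddef.h>
#include <sys/types.h>

/* Hypothetical illustration of the commit's pattern: after the explicit
 * `< 0` test, `have` is known to be non-negative, so casting it to the
 * unsigned type of the right-hand operand cannot change its value and
 * silences the signed/unsigned comparison warning (-Wsign-compare). */
static int
have_at_least(ssize_t have, size_t want)
{
	if (have < 0)
		return (0);
	/* safe: have >= 0 was just established */
	return ((size_t)have >= want);
}
```

Without the cast, `have >= want` would promote the signed `have` to unsigned, so a negative value would wrongly compare as very large; the early `< 0` return makes the cast harmless.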
-
Nils Goroll authored
to match the result of io_uring Ref a9700c63 Ref #65
-
Nils Goroll authored
explicit cast for errno assignment Ref a9700c63 Ref #65
-
Nils Goroll authored
Found by flexelint Ref a9700c63 Ref #65
-
Nils Goroll authored
Adjust tests to changes in varnish-cache Ref f1fb85fed1f943a2005d7da06de94ff6fa6cc0e2
-
- 10 May, 2024 2 commits
-
Nils Goroll authored
Ref #65
-
Nils Goroll authored
-
- 25 Mar, 2024 2 commits
-
Nils Goroll authored
-
Nils Goroll authored
-
- 01 Mar, 2024 3 commits
-
Nils Goroll authored
-
Nils Goroll authored
Add O_LARGEFILE just in case. Fix mode to 0600.
-
Nils Goroll authored
-
- 19 Feb, 2024 3 commits
-
Nils Goroll authored
Fixes the second part of #60
-
Nils Goroll authored
-
- 14 Feb, 2024 6 commits
-
Nils Goroll authored
Ref #60
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
If the page from the segmem pool is too big, do not just trim it, but rather trade it for a smaller page if that is sufficient. Ref #60
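A minimal sketch of that trade, with hypothetical size-class constants (the real pool code in fellow is more involved than a two-class choice):

```c
#include <stddef.h>

/* Hypothetical size classes for illustration only. */
#define PAGE_SMALL	(16u * 1024u)
#define PAGE_LARGE	(1024u * 1024u)

/* Instead of trimming an oversized page down to `need`, pick the
 * smaller size class when it is sufficient, so the large page can be
 * returned to (or stay in) the pool untouched. */
static size_t
page_size_for(size_t need)
{
	if (need <= PAGE_SMALL)
		return (PAGE_SMALL);
	return (PAGE_LARGE);
}
```

The point of trading rather than trimming is that the pool keeps whole pages of known sizes, instead of accumulating trimmed remainders.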
-
Nils Goroll authored
... if it looks like we were handling chunked encoding.

fellow has (and needs to have) its own strategy for allocating growing objects, and basically works around the fetch_chunksize coming from varnish-cache by recording whether subsequent allocation requests are growing the object (for chunked encoding) or converging onto a maximum (for content-length).

This strategy had an undesired side effect: the newly introduced fbo_segmem pool always allocates the chunk size, but the disk segment allocation was using the size from varnish-cache, which, for the example in the ticket, would lead to 1MB chunks being allocated but trimmed down to only 16KB, for each allocation.

We now explicitly test whether varnish-cache is requesting fetch_chunksize and, if so, allocate the chunk size. This brings the disk segment allocation in line with the mempool.

For chunked encoding, on the other hand, we will still over-allocate and trim when the actual object is smaller than the chunk size, but this is by design.

Fixes #60
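The size decision described in this commit can be sketched roughly as follows (function and parameter names are hypothetical; fellow's actual logic additionally tracks whether requests are growing or converging):

```c
#include <stddef.h>

/* If varnish-cache requests exactly fetch_chunksize, treat the fetch as
 * chunked/growing and allocate the pool's own chunk size on disk, so
 * the disk segment allocation matches what the fbo_segmem mempool
 * hands out, instead of allocating one size and trimming to another. */
static size_t
dsk_alloc_size(size_t req, size_t fetch_chunksize, size_t pool_chunksize)
{
	if (req == fetch_chunksize)
		return (pool_chunksize);
	/* content-length case: use the requested size as-is */
	return (req);
}
```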
-
Nils Goroll authored
while working on #60
-
- 09 Feb, 2024 2 commits
-
Nils Goroll authored
-
Nils Goroll authored
-
- 08 Feb, 2024 1 commit
-
Nils Goroll authored
-
- 07 Feb, 2024 10 commits
-
Nils Goroll authored
8134e93b broke object deletions during FP_INIT, because fellow_dskbuddy() waits for FP_OPEN:

#3  0x00007f3f51a85d2b in fellow_wait_open (ffd=0x7f3f445a8000) at fellow_log.c:847
#4  fellow_dskbuddy (ffd=0x7f3f445a8000) at fellow_log.c:6381
#5  0x00007f3f51aa31f7 in fellow_cache_obj_delete (fc=0x7f3f446d4000, fco=<optimized out>, hash=hash@entry=0x7f3f404ce670 "ǹ\216N\032\217\230},p\245\205\361i \002\253\253Rn\372ز\303\307\355,\254\342\024\360M") at fellow_cache.c:6032
#6  0x00007f3f51a597a9 in sfedsk_objfree (wrk=0x7f3f40dfc5d0, dskoc=0x7f3f404d5440) at fellow_storage.c:655
#7  0x0000564d8e23c14a in ObjFreeObj (wrk=wrk@entry=0x7f3f40dfc5d0, oc=0x7f3f404d5440) at cache/cache_obj.c:412
#8  0x0000564d8e232a9f in HSH_DerefObjCore (wrk=0x7f3f40dfc5d0, ocp=ocp@entry=0x7fffcd3303d0, rushmax=rushmax@entry=-1) at cache/cache_hash.c:1065
#9  0x00007f3f51a5022f in festash_work_fes (fet=fet@entry=0x7fffcd33bbf0, fes=0x7f3e7a006640, ban=ban@entry=0x7ee790c56160) at fellow_stash.h:195
#10 0x00007f3f51a54be2 in festash_top_work (fet=fet@entry=0x7fffcd33bbf0, has_bans=1) at fellow_stash.h:226
#11 0x00007f3f51a586b8 in sfe_resurrect_ban (e=0x7f3e401d7c98, sfer=0x7fffcd33bbb0) at fellow_storage.c:2078
#12 sfe_resurrect (priv=0x7fffcd33bbb0, e=0x7f3e401d7c98) at fellow_storage.c:2111
#13 0x00007f3f51a81163 in fellow_logs_iter_block (flics=flics@entry=0x7fffcd332b80, flivs=flivs@entry=0x7fffcd337050, logblk=logblk@entry=0x7f3e401d7000) at fellow_log.c:4834
#14 0x00007f3f51a82864 in fellow_logs_iter (flics=0x7fffcd332b80, flivs=flivs@entry=0x7fffcd337050, active_logregion=0x7f3f445a8360, empty_logregion=0x7f3f445a8370, off=594695172096, off@entry=656178581504) at fellow_log.c:5294
#15 0x00007f3f51a84886 in fellow_logs_rewrite (ffd=ffd@entry=0x7f3f445a8000, new_log_fdr=new_log_fdr@entry=0x0, resur_f=resur_f@entry=0x7f3f51a57da0 <sfe_resurrect>, resur_priv=resur_priv@entry=0x7fffcd33bbb0) at fellow_log.c:5789
#16 0x00007f3f51a8763b in fellow_log_open (ffd=0x7f3f445a8000, resur_f=resur_f@entry=0x7f3f51a57da0 <sfe_resurrect>, resur_priv=resur_priv@entry=0x7fffcd33bbb0) at fellow_log.c:6809
#17 0x00007f3f51a5516a in sfe_open_task (priv=0x7fffcd33bbb0, wrk=<optimized out>) at fellow_storage.c:2199

But rather than bringing this back, we postpone deletion work with a thin delete.
-
Nils Goroll authored
For interesting detail, read the issue. Fixes #57
-
Nils Goroll authored
flush_active() can be called multiple times from logbuffer_flush(), and so can logbuffer_ref().
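One way to make repeated calls harmless is to take the flush reference idempotently per flush cycle. This is a speculative sketch under that assumption; the struct, field, and function names are hypothetical, not fellow's actual code:

```c
#include <assert.h>

struct logbuffer {
	unsigned refcnt;
	unsigned flush_ref_held;	/* reference for the current flush */
};

/* Taking the flush reference must tolerate being called more than
 * once per flush: only the first call actually bumps the refcount. */
static void
logbuffer_flush_ref(struct logbuffer *lb)
{
	if (lb->flush_ref_held)
		return;
	lb->flush_ref_held = 1;
	lb->refcnt++;
}

/* Releasing pairs with exactly one effective take. */
static void
logbuffer_flush_done(struct logbuffer *lb)
{
	assert(lb->flush_ref_held);
	lb->flush_ref_held = 0;
	assert(lb->refcnt > 0);
	lb->refcnt--;
}
```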
-
Nils Goroll authored
So far, this has only been seen with clang 10.0.0-4ubuntu1. Using different compilers never disappoints...
-
Nils Goroll authored
-
Nils Goroll authored
ioerr_log and allocerr_log have appropriate NOTEs attached.
-
Nils Goroll authored
I think statistics are good at this point and timeouts do not sound like a good idea to me any more: What are we going to do when a timeout is hit?
-
Nils Goroll authored
Spotted by Coverity, CID#486184
-
Nils Goroll authored
-
Nils Goroll authored
-
- 06 Feb, 2024 6 commits
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
Closes #46
-
Nils Goroll authored
https://github.com/varnishcache/varnish-cache/commit/dcaf616c66d93de69735237967cc091fa490bb93 removed the option to specifically build a single VSC header. Closes #56
-
Nils Goroll authored
-
Nils Goroll authored
... controlled by limiting parameters, which can also be used, in the worst case, to deactivate the feature. Closes #54 Closes !3 (implemented somewhat similarly)
-