- 22 Jul, 2024 4 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
we can not reset the growing flag
-
- 21 Jul, 2024 9 commits
-
-
Nils Goroll authored
I believe the actual issue reported in #22 is that, with the old disk region allocation scheme, we could need one more disk segment list, such that FCO_REGIONS_RESERVE was too small. But pondering this issue, I went back to square one and re-thought the allocation plan.

I now think that there were some fundamental flaws in the previous allocation code:

- we did not plan for how many segment lists we would actually need
- we would cram the segment allocation, which could lead to the number of segment lists growing
- for growing allocations, we would switch from "assume we have enough regions" to "assume we have no regions at all any more" when FCO_REGIONS_RESERVE was reached.

Hopefully, this new allocation plan code now takes a more sensible, holistic approach: whenever we need more disk space (= another disk region), we calculate how many regions we are sensibly going to need in total. For no cram, this is just one; for abs(cram) >= 1, it is the number of one-bits (popcount) of the size. Then we calculate the chunk size we need to go to in order to fit all segments into segment lists. Based on this outcome, we calculate a maximum cram which we can allow for region allocations.

This approach is fundamentally different from before in that we no longer cram segment sizes - which was wrong, because we do not have an infinite number of segment lists.

Fixes #22 for real now, I hope
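The region-count estimate described above can be sketched as follows. This is a hypothetical helper, not the actual fellow code: with no cram, a single region suffices; with abs(cram) >= 1, the size is split along its one-bits, so the number of regions equals the popcount of the size. It assumes GCC/Clang's `__builtin_popcountll`.

```c
#include <stdint.h>

/* Illustrative sketch only - names and signature are assumptions,
 * not the actual fellow API. */
static unsigned
regions_needed(uint64_t size, int cram)
{
	/* no cram: the allocation fits into one region */
	if (cram == 0)
		return (1);
	/* abs(cram) >= 1: one region per one-bit of the size */
	return ((unsigned)__builtin_popcountll(size));
}
```

For example, a size of 0x3000 (two one-bits) would need two regions when cramming, but only one without.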
-
Nils Goroll authored
an unintended side effect of the last rework was that the main LRU loop would only run once the disk buddy had waiting requests, such that the reserve would not be built up unless it was needed at least once, which, to some extent, defeats the purpose for freshly loaded caches.
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
because fellow_cache_test has no disk LRU, there is no instance to trigger a log flush besides the periodic wake-up of the logwatcher thread. So when the disk buddy ran out of space but the log buffer still had regions to free, those would only be freed after two seconds, drastically slowing down fellow_cache_test execution. To avoid special-casing just for tests, we now run a thread to handle this issue specifically, to some extent emulating what the LRU thread would otherwise do.
-
Nils Goroll authored
-
Nils Goroll authored
the periodic flush via FLW_MAYFLUSH (which writes a new header) also needs to be active when there are disk blocks to be freed, otherwise we might deadlock.
-
Nils Goroll authored
-
Nils Goroll authored
It seems some OSes fail the call despite having it in the header file. Fixes #68
-
- 17 Jul, 2024 2 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
-
- 12 Jul, 2024 1 commit
-
-
Nils Goroll authored
-
- 06 Jul, 2024 2 commits
-
-
Nils Goroll authored
it got lost in 2b239f08
-
Nils Goroll authored
-
- 05 Jul, 2024 9 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
it needs to be based on the space to be returned, not on the space still to be allocated on disk.
-
Nils Goroll authored
before this change, it was taken as excluding struct fellow_cache_seglist, and was correctly used at the call sites changed with this commit (where (size - sizeof *fcsl) was used as the argument). However, for the calls from

- fellow_busy_obj_alloc()
- fellow_cache_obj_get()

there was a mismatch with the caller's size value.
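The size convention involved can be illustrated with a hedged sketch; the struct layout and helper below are hypothetical, not the real struct fellow_cache_seglist. The point is that the allocation helper takes the bytes available for segment slots, excluding the struct header, so a caller holding a total byte budget must pass `size - sizeof *fcsl`.

```c
#include <stddef.h>
#include <stdlib.h>

struct seg;

/* Hypothetical seglist with a flexible array of segment slots. */
struct seglist {
	unsigned	lsegs;
	struct seg	*segs[];
};

/* segspace is the byte budget for the segs[] array only,
 * NOT including the struct header. */
static struct seglist *
seglist_alloc(size_t segspace)
{
	struct seglist *l = malloc(sizeof *l + segspace);

	if (l != NULL)
		l->lsegs = (unsigned)(segspace / sizeof l->segs[0]);
	return (l);
}
```

A caller with a total budget of `size` bytes would thus call `seglist_alloc(size - sizeof(struct seglist))`; passing `size` directly is the kind of mismatch this commit fixes.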
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
seen during internal testing:

#13 0x00007efc94253e32 in __GI___assert_fail (assertion=0x7efc935ef349 "FCO_REFCNT(fco) <= 2", file=0x7efc935ee10f "fellow_cache.c", line=6106, function=0x7efc935ef2d4 "void fellow_cache_obj_delete(struct fellow_cache *, struct fellow_cache_obj *, const uint8_t *)") at ./assert/assert.c:101
#14 0x00007efc935c7f66 in fellow_cache_obj_delete (fc=0x7efc93a41300, fco=fco@entry=0x7efc4778a000, hash=hash@entry=0x7efc2ea04270 "\016\351S~\a\346\353҄B\256x\346Mx\375P\211Hz\377U\337\030ol\207Y\276䯒") at fellow_cache.c:6106

reason: ongoing I/O on segments:

(gdb) p fco->fdo_fcs.refcnt
$19 = 3
(gdb) p fco->fcsl->lsegs
$20 = 3
(gdb) set $i = 0
(gdb) p fco->fcsl->segs[$i++]->state
$21 = FCS_INCORE
(gdb) p fco->fcsl->segs[$i++]->state
$22 = FCS_READING
(gdb) p fco->fcsl->segs[$i++]->state
$23 = FCS_READING

so:

- we can not make assumptions on the number of references
- we need to wait for any I/O, not just writing and seglist read
-
Nils Goroll authored
Ref c1c9c4c7
-
- 04 Jun, 2024 3 commits
-
-
Nils Goroll authored
forgot some members in the calculation, so on SmartOS we hit an assertion that the resulting pool is below 4KB
-
Nils Goroll authored
fix "cast to pointer from integer of different size" seen on SmartOS
-
Nils Goroll authored
-
- 02 Jun, 2024 3 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
- 28 May, 2024 1 commit
-
-
Nils Goroll authored
For the old API, a variable assignment got lost. Ref 12db7fc5
-
- 27 May, 2024 3 commits
-
-
Nils Goroll authored
With the previous code, the compiler could have emitted machine code that produced a transient zero lsb when in fact it should have been 1. Ref ba32426c Ref #66
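The class of bug described can be illustrated with a hedged sketch (not the actual fellow code): if the lsb of a shared word is updated in two steps, a concurrent reader can observe the transient zero between them; composing the new value and writing it in a single assignment avoids the transient, assuming an aligned word-sized store is performed as one instruction.

```c
#include <stdint.h>

/* Racy pattern: between the two statements, a concurrent reader
 * sees lsb == 0 even when the intended final value is 1.
 *
 *	w &= ~(uint64_t)1;	// transient: lsb is 0 here
 *	w |= lsb;
 *
 * Safer pattern: compose the value first, store it once. */
static uint64_t
set_lsb(uint64_t w, uint64_t lsb)
{
	return ((w & ~(uint64_t)1) | (lsb & 1));
}
```

In real code, the single composed store would typically also go through an atomic store to make the guarantee explicit rather than relying on the compiler.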
-
Nils Goroll authored
-
Nils Goroll authored
Fixes #44 properly Ref https://github.com/varnishcache/varnish-cache/pull/4109
-
- 22 May, 2024 3 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
Ref #65
-
Nils Goroll authored
The .happy VCL variable now returns true only when the respective fellow storage is open (has completed loading). The b_happy VSL bitfield contains the happy state in individual bits, updated at logbuffer_flush_interval. The least significant bit contains the most recent happy state. The semantics of the happy value are likely to change in the future. Motivated by #66
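The lsb-most-recent semantics described above can be sketched as follows; this is an illustration of the stated behavior, not the actual implementation:

```c
#include <stdint.h>

/* At each flush interval, shift the current happy state into the
 * bitfield; bit 0 then always holds the most recent sample, bit 1
 * the one before it, and so on. Older samples fall off the top. */
static uint64_t
happy_update(uint64_t bits, int happy_now)
{
	return ((bits << 1) | (happy_now ? 1 : 0));
}
```

So after recording happy, then not happy, bit 0 is 0 (most recent state) and bit 1 is 1 (previous state).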
-