- 03 Aug, 2023 1 commit
-
-
Nils Goroll authored
-
- 02 Aug, 2023 2 commits
-
-
Nils Goroll authored
This fixes a use-after-destroy of the logwatcher condition variable reported in #19
-
Nils Goroll authored
Fixes #20
-
- 31 Jul, 2023 12 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
I have tried hard to make value tracking understand the code, but to no avail. It seems that, for example, assert(n <= 56) followed later by assert(n > 0) just leads to flexelint knowing 1 as the lower bound, but not 56 as the upper limit.
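A minimal sketch of the pattern in question (illustrative, not the actual fellow code):

    #include <assert.h>

    static unsigned char buf[56];

    void
    fill(unsigned n)
    {
        assert(n <= 56);        /* upper bound flexelint seems to drop */
        /* ... */
        assert(n > 0);          /* lower bound it does retain */
        buf[n - 1] = 0;         /* still flagged as possibly out of bounds */
    }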
-
Nils Goroll authored
-
Nils Goroll authored
it was never accessed, but triggered flexelint
-
Nils Goroll authored
First and foremost, fellow_log_prep_max_regions was defined wrong: Except in fellow_cache_test, we call log submission with a maximum of FELLOW_DISK_LOG_BLOCK_ENTRIES = 56 DLEs. The intention of fellow_log_prep_max_regions was to allocate space to track the return of the maximum number of regions possibly contained. The exact maximum would be (FELLOW_DISK_LOG_BLOCK_ENTRIES - 1) * DLE_REG_NREGION + 1 = (55 * 4) + 1 = 221, which is higher than FELLOW_DISK_LOG_BLOCK_ENTRIES * DLE_BAN_REG_NREGION = 56 * 3 = 168. Yet it seems prudent not to rely on any fixed maximum, and our test cases also call for a higher value, so we now define the maximum as three times the actually used value, and also ensure that we batch the code to this size.

In addition, one assertion in fellow_log_entries_prep() was wrong (it compared a number of DLEs with a number of regions).

We also tighten some assertions to help future analysis of possible issues in this area:

- Ensure that the data path via fellow_log_entries_prep() only ever uses a region list on the stack.
- By using the regionlist_onlystk_add() macro, ensure that we hit an assertion on the array on the stack, rather than one on the regionlist pointer.

Diff best viewed with -b

Fixes #18
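For reference, the worst-case arithmetic from above as compile-time checks (a sketch; the entry and region counts are taken from this message, the checks themselves are standard C11 static_assert):

    #include <assert.h>

    #define FELLOW_DISK_LOG_BLOCK_ENTRIES  56
    #define DLE_REG_NREGION                 4
    #define DLE_BAN_REG_NREGION             3

    /* exact worst case over regular DLEs: 55 * 4 + 1 = 221 */
    static_assert((FELLOW_DISK_LOG_BLOCK_ENTRIES - 1) * DLE_REG_NREGION + 1 == 221,
        "regular DLE worst case");
    /* ban DLEs stay below that: 56 * 3 = 168 */
    static_assert(FELLOW_DISK_LOG_BLOCK_ENTRIES * DLE_BAN_REG_NREGION == 168,
        "ban DLE worst case");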
-
Nils Goroll authored
Related to #18
-
Nils Goroll authored
We should do this right and not over-allocate; this is just confusing.
-
Nils Goroll authored
Motivated by #18, but does not fix the root cause yet. For the call path in the bug ticket, the stack regionlist is supposed to be big enough, and the root cause is that it is not. But at any rate, for that call path, the regionlist is OK to be NULL and regionlist_add() should never be called. If, however, it _is_ called, the regionlist can't be NULL.
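In other words, a sketch of the tightened invariant (struct layout and function body are illustrative, not the actual fellow code):

    #include <assert.h>
    #include <stddef.h>

    struct regionlist { size_t n; };

    /* in the style of Varnish's AN(): argument must not be NULL */
    #define AN(x) assert((x) != NULL)

    static void
    regionlist_add(struct regionlist *rl, size_t sz)
    {
        AN(rl);          /* being called at all implies rl != NULL */
        rl->n += sz;     /* stand-in for appending the region */
    }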
-
Nils Goroll authored
-
Nils Goroll authored
Avoids:

    fellow_io_uring.c:234:1: error: ‘try_flag’ defined but not used [-Werror=unused-function]
      234 | try_flag(unsigned flag)
          | ^~~~~~~~
-
Nils Goroll authored
the lru_mtx is our most contended mtx. As a first improvement, batch changes to LRU for multiple segments and maintain the effective change locally outside the lru mtx (but while holding the obj mtx).
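A sketch of the batching idea (simplified, with illustrative names; not the actual fellow code):

    #include <pthread.h>
    #include <stddef.h>

    struct lru { pthread_mutex_t mtx; size_t n_bytes; };
    struct obj {
        pthread_mutex_t mtx;
        struct lru *lru;
        size_t nseg;
        size_t *seg_delta;
    };

    static void
    obj_lru_apply(struct obj *o)
    {
        size_t i, delta = 0;

        /* net the per-segment changes under the (cheaper) obj mtx */
        pthread_mutex_lock(&o->mtx);
        for (i = 0; i < o->nseg; i++)
            delta += o->seg_delta[i];
        pthread_mutex_unlock(&o->mtx);

        /* take the contended lru mtx only once for the whole batch */
        pthread_mutex_lock(&o->lru->mtx);
        o->lru->n_bytes += delta;
        pthread_mutex_unlock(&o->lru->mtx);
    }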
-
Nils Goroll authored
-
- 24 Jul, 2023 22 commits
-
-
Nils Goroll authored
is there a better way? https://github.com/axboe/liburing/issues/906
-
Nils Goroll authored
during error paths, we might call it multiple times
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
varnish-cache does not touch objects for OA_VARY, but we need to keep FCOs in memory which are frequently used during lookup.

Thoughts on why this should not race LRU:

- lru_list is owned by lru_mtx
- object can't go away, because
  - for call from hash, we hold the oh->mtx
  - otherwise, we hold a ref
-
Nils Goroll authored
... which happens potentially under the cache lock
-
Nils Goroll authored
Upfront: This is not the segment allocation, which uses parts of the busy obj region allocation and is mostly motivated by how much data we need to have in RAM at minimum.

For the region allocation, we have conflicting goals:

- To keep the log short, we want to use the least number of regions.
- To reduce fragmentation, we want to use the largest possible allocations.
- To use space efficiently, we want to split regions into power-of-two allocations.

Also, for chunked encoding, we do not have an upper limit on how much space we are going to need, so we have to use the estimate provided by fellow_busy_obj_getspace(). It cannot guess more than objsize_max.

The new region alloc algorithm takes this compromise (see the sketch below):

- For the base case that we ran out of available regions (220), we allocate all we need without cramming.
- Otherwise, if we need less than a chunk, we request it.
- Otherwise, if we know the size, we round down to a power of two.
- Otherwise, we round up.

We then allow any cramming down to the chunk size, because that is what our LRU reservation uses.
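A sketch of that decision logic (helper names and the function itself are illustrative, not the actual fellow code):

    #include <stddef.h>

    static size_t
    round_down_pow2(size_t n)
    {
        while (n & (n - 1))
            n &= n - 1;     /* clear low bits until a power of two remains */
        return (n);
    }

    static size_t
    round_up_pow2(size_t n)
    {
        size_t p = round_down_pow2(n);
        return (p == n ? p : p << 1);
    }

    /* how much to request for one region allocation */
    static size_t
    region_request(size_t need, size_t chunk, unsigned regions_left,
        int size_known)
    {
        if (regions_left == 0)
            return (need);              /* out of regions: no cramming */
        if (need < chunk)
            return (need);              /* small request: take it as is */
        if (size_known)
            return (round_down_pow2(need));
        return (round_up_pow2(need));   /* size unknown (chunked): round up */
    }

Cramming down to the chunk size then remains acceptable, matching the LRU reservation.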
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
Ref #10
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
Ref #10
-
Nils Goroll authored
-
Nils Goroll authored
adjust dsk size if mem allocation was smaller than requested
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
Could have caused #5, related to #10
-
Nils Goroll authored
-
Nils Goroll authored
This is counter-intuitive and could lead to extreme values, for example:

    default:       chunk_exponent = 20, dsk_reserve_chunks = 4
    adjusted to:   12, 4 << 8 = 1024
    now user sets: chunk_exponent = 21
    adjusted to:   12, 1024 << 9 = 524288

Could have caused #5, related to #10
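Spelled out with the numbers above (a sketch of the compounding; the shift rule is inferred from this message):

    #include <stdio.h>

    int
    main(void)
    {
        unsigned reserve = 4;

        /* default chunk_exponent = 20, clamped to 12: shift by the diff */
        reserve <<= 20 - 12;            /* 4 << 8 = 1024 */

        /* user sets chunk_exponent = 21, again clamped to 12, but the
         * shift hits the already-adjusted value */
        reserve <<= 21 - 12;            /* 1024 << 9 = 524288 */

        printf("%u\n", reserve);        /* 524288, not 4 << 9 = 2048 */
        return (0);
    }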
-
- 21 Jul, 2023 3 commits
-
-
Nils Goroll authored
tiny glitch
-
Nils Goroll authored
The main cause for #11 seems to be that the chunk size in relation to the memory cache was too big. We now clamp it at memsz >> 10 (at most 1/1024 of memsz). This can still lead to issues when the memory size is reduced and the cache is reloaded, but then at least new objects will not compete for the available memory.
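The clamp as a sketch (variable names are illustrative):

    #include <stddef.h>

    static size_t
    clamp_chunk(size_t chunk_bytes, size_t memsz)
    {
        if (chunk_bytes > (memsz >> 10))
            chunk_bytes = memsz >> 10;  /* at most 1/1024 of memsz */
        return (chunk_bytes);
    }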
-
Nils Goroll authored
-