- 20 Dec, 2023 9 commits
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
... a chance to take memory from the old
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
- 18 Dec, 2023 2 commits
Nils Goroll authored
Nils Goroll authored
- 17 Dec, 2023 2 commits
Nils Goroll authored
Nils Goroll authored
- 15 Dec, 2023 3 commits
Nils Goroll authored
otherwise we would hold onto log blocks for a long time, possibly until the next ref'ing flush. This could cause lockups due to no memory being available. This is considered a major contribution towards fixing the lockup issues documented in #41
Nils Goroll authored
Nils Goroll authored
- 14 Dec, 2023 12 commits
Nils Goroll authored
kick the logwatcher first before potentially running into a synchronous flush.
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Related to #45
- 13 Dec, 2023 12 commits
Nils Goroll authored
The oh-so-smart idea from 39c2568e was pretty dumb after all: While testing with the low RAM config (16MB on 10GB), a lockup was found with only ~45% of RAM occupied. The reason for the lockup was the un-crammed 64KB (2^16 bytes) request for a regionlist with priority 4, which was blocking all other requests. So: no, trying to allocate something "just in case" is never a good idea.
Nils Goroll authored
Nils Goroll authored
It should allow log transactions to complete first.
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
A pool tries to always have allocations ready. There are two buddy_reqs; when one is depleted, allocations are taken from the other while the empty one is refilled.
Nils Goroll authored
such that the test can easily be run with 2>/dev/null if the dbg output is of no interest.
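Concretely, with debug output routed to stderr, suppressing it is a plain shell redirection (the test binary name here is purely illustrative; `sh -c '...'` stands in for it):

```shell
# dbg output goes to stderr, results to stdout, so stderr can be discarded:
sh -c 'echo "test result"; echo "dbg: detail" >&2' 2>/dev/null
```

This prints only `test result`; the `dbg:` line is dropped.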
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Issue #41 has shown a deadlock scenario where various object iterators would wait for memory. While reviewing this issue, we noticed a couple of shortcomings in the existing code:

* fellow_cache_seg_ref_in() would always wait for allocation requests for readahead segments. Yet, when under memory pressure, we should not wait at all for readahead memory.
* fellow_cache_obj_iter() would hold onto already sent segments even while waiting for synchronous I/O and memory allocations.

To improve on these shortcomings and further optimize the code, some of fellow_cache_obj_iter() and all of the readahead code has been rewritten. Improvements comprise the following:

* For readahead, we now use asynchronous memory allocations. If they succeed right away, we also issue I/O right away, but if allocations are delayed, we continue delivery and check back later. With some luck, the allocations will have succeeded by then.
* We decouple memory allocations from specific segments and only care about the right size of the allocation. Because many segments will be of chunk_bytes size, this allows more efficient use of available asynchronous allocations.
* We now dereference already sent segments whenever we need to wait for anything, be it a memory allocation or I/O. This should help overall efficiency and reduce memory pressure, because already sent segments can be LRUd earlier. The drawback is that we flush the VDP pipeline more often (we need to before we can deref segments).

We also cap the readahead parameter at the equivalent of 1/16 of memory in order to avoid inefficiencies from single requests holding too much of the memory cache hostage. An additional hard cap at 31 is required to keep the default ESI depth supported with the default stack size of varnish-cache.