- 13 Dec, 2023 4 commits
-
Nils Goroll authored
-
Nils Goroll authored
Issue #41 has shown a deadlock scenario where various object iterators would wait for memory. While reviewing this issue, we noticed a couple of shortcomings in the existing code:

* fellow_cache_seg_ref_in() would always wait for allocation requests for readahead segments. Yet, under memory pressure, we should not wait at all for readahead memory.

* fellow_cache_obj_iter() would hold onto already sent segments even while waiting for synchronous I/O and memory allocations.

To improve on these shortcomings and further optimize the code, some of fellow_cache_obj_iter() and all of the readahead code has been rewritten (see the sketch below). Improvements comprise the following:

* For readahead, we now use asynchronous memory allocations. If they succeed right away, we also issue I/O right away, but if allocations are delayed, we continue delivery and check back later. With some luck, the allocations will have succeeded by then.

* We decouple memory allocations from specific segments and only care about the right size of the allocation. Because many segments will be of chunk_bytes size, this allows more efficient use of available asynchronous allocations.

* We now de-reference already sent segments whenever we need to wait for anything, be it a memory allocation or I/O. This should help overall efficiency and reduce memory pressure, because already sent segments can be LRUd earlier. The drawback is that we flush the VDP pipeline more often (we need to before we can deref segments).

We also cap the readahead parameter at the equivalent of 1/16 of memory in order to avoid inefficiencies from single requests holding too much of the memory cache hostage. An additional hard cap at 31 is required to keep the default ESI depth supported with the default stack size of varnish-cache.
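A minimal runnable sketch of the "allocate asynchronously, never wait for readahead" pattern described above. All names (struct ra_alloc, ra_alloc_async(), readahead_step()) and the coin-flip allocator are illustrative assumptions, not the actual fellow_cache API:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* an allocation request, decoupled from any specific segment */
    struct ra_alloc {
        void    *ptr;       /* NULL until the allocation completes */
        size_t  size;       /* only the size matters, not the segment */
    };

    /* toy async allocator: succeeds immediately half of the time */
    static bool
    ra_alloc_async(struct ra_alloc *a, size_t size)
    {
        a->size = size;
        if (rand() % 2) {
            a->ptr = malloc(size);
            return (true);  /* immediate success */
        }
        a->ptr = NULL;      /* would be filled in later */
        return (false);
    }

    static void
    readahead_step(struct ra_alloc *a, size_t chunk_bytes)
    {
        if (ra_alloc_async(a, chunk_bytes)) {
            printf("allocation succeeded, issue I/O right away\n");
            free(a->ptr);   /* stands in for issuing the read */
            return;
        }
        /* delayed: do not wait (waiting is what deadlocked in #41),
         * continue delivery and check back on the allocation later */
        printf("allocation delayed, continue delivery\n");
    }

    int
    main(void)
    {
        struct ra_alloc a;
        int i;

        for (i = 0; i < 4; i++)
            readahead_step(&a, 64 * 1024);
        return (0);
    }

The point of the sketch is the control flow: an immediate allocation success triggers I/O straight away, while a delayed allocation never blocks delivery.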
-
Nils Goroll authored
we only set stobj->priv after returning from obj_get(), so assert(oc->stobj->priv == fco) could trigger in the LRU thread. We now set priv right before inserting into LRU.
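A hedged illustration of the invariant behind this fix; the types and lru_insert() below are made up for the example, only the ordering matters:

    #include <assert.h>
    #include <stddef.h>

    struct fco { int unused; };         /* stand-in for the fco */
    struct stobj { void *priv; };       /* stand-in for the stobj */

    static void
    lru_insert(struct stobj *so, struct fco *fco)
    {
        /* the LRU thread relies on priv being set by now */
        assert(so->priv == fco);
        /* ... actual list insertion would go here ... */
    }

    int
    main(void)
    {
        struct stobj so = { NULL };
        struct fco fco;

        /* fixed order: set priv first, then publish on the LRU */
        so.priv = &fco;
        lru_insert(&so, &fco);
        return (0);
    }

Before the fix, the equivalent of lru_insert() could run while priv was still unset, so the LRU thread's assertion could trip.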
-
Nils Goroll authored
It does not make sense to use up memory for busy objects if we cannot create their cache and disk counterparts in memory. Motivated by #41
-
- 11 Dec, 2023 4 commits
-
Nils Goroll authored
-
Nils Goroll authored
Motivated by https://gitlab.com/uplex/varnish/slash/-/issues/41#note_1688912442

Also added to Varnish-Cache: https://github.com/varnishcache/varnish-cache/commit/24b434383c616639d5aa9be9b5ba3647a418d64c
-
Nils Goroll authored
Fixes #42
-
Nils Goroll authored
to enable parallel tests, for example:

    $ for i in {1..24} ; do while ./src/fellow_cache_test /tmp/f.${i} >/dev/null 2>/dev/null ; do : ; done & done ; wait

Motivated by #42
-
- 10 Dec, 2023 7 commits
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
Have seen the assertion (fbo) != NULL fail in fellow_cache_async_write_complete():

    #14 0x00007f9aa30957e5 in fellow_cache_async_write_complete (fc=0x7f9aa2c41300, ptr=0x7f9a9ff4df58, result=4096) at fellow_cache.c:2791
    #15 0x00007f9aa3096403 in fellow_cache_seg_async_compl_cb (priv=0x7f9aa2c41300, status=0x7f9a999fa3e0, n=1) at fellow_cache.c:2951

    (gdb) info local
    fbio = 0x7f9a9ff4df58
    fbo = 0x0
    fcs = 0x0
    fco = 0x7f9a9ff4f000
    fcos_next = FCOS_INVAL
    type = FBIO_SEG
    io_outstanding = 2 '\002'
    refcount = 0
    __PRETTY_FUNCTION__ = "fellow_cache_async_write_complete"
    lcb = {{magic = 2863944409, n_add = 0, l_rem = 2, n_rem = 0, fco = 0x7f9a9ff4f000, add = {vtqh_first = 0x0, vtqh_last = 0x7f9a999fa2b8}, fcs = 0x7f9a9ff4df5c}}
    __func__ = "fellow_cache_async_write_complete"
    _pterr281611 = <optimized out>
    _pterr282913 = <optimized out>
    _pterr289715 = <optimized out>

    (gdb) p *fbio
    $1 = {magic = 3019, retries = 0, type = FBIO_SEG, sync = FBIOS_ASYNC, fbo = 0x0, u = {fcs = 0x0, seglist = {fdsl = 0x0, reg = {off = 0, size = 0}}}}
-
Nils Goroll authored
-
Nils Goroll authored
we can have FCS_BUSY segments before OBJ_ITER_END while streaming.
-
Nils Goroll authored
At the last segment, do not advance to the next segment list if it is still empty.
-
- 08 Dec, 2023 1 commit
-
Nils Goroll authored
-
- 29 Nov, 2023 3 commits
-
Nils Goroll authored
-
Nils Goroll authored
we only jump to the again label if we did not get a reference.
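Schematically, with a made-up try_ref() standing in for the real reference acquisition:

    #include <stdbool.h>
    #include <stdio.h>

    static int attempts = 0;

    /* toy stand-in: fails twice, then hands out a reference */
    static bool
    try_ref(void)
    {
        return (++attempts > 2);
    }

    int
    main(void)
    {
    again:
        if (!try_ref())
            goto again;     /* retry only if we did NOT get a ref */
        printf("got reference after %d attempts\n", attempts);
        return (0);
    }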
-
Nils Goroll authored
Motivated by #40
-
- 28 Nov, 2023 21 commits
-
Nils Goroll authored
CID#469253
-
Nils Goroll authored
CID#469261
-
Nils Goroll authored
Coverity CID#469242
-
Nils Goroll authored
Coverity CID#469229
-
Nils Goroll authored
To address some Coverity pedantry with minor impact (at most). CID#469230
-
Nils Goroll authored
Coverity CID#469233
-
Nils Goroll authored
Spotted by Coverity CID#469228, but it is irrelevant because it is only in test code
-
Nils Goroll authored
Good catch by Coverity CID#469254
-
Nils Goroll authored
Coverity CID#469252
-
Nils Goroll authored
Ref Coverity CID#469262
-
Nils Goroll authored
The optimized case for multiple segments from the same fco did not work as expected: the continue statement did not continue the inner loop. Spotted by Coverity, CID#469236
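The underlying pitfall is plain C: continue always resumes the innermost enclosing loop, and a continue that is lexically outside the inner loop cannot resume it. A minimal illustration, unrelated to the actual fellow iterator code:

    #include <stdio.h>

    int
    main(void)
    {
        int fco, seg;

        for (fco = 0; fco < 2; fco++) {         /* outer: objects */
            for (seg = 0; seg < 3; seg++) {     /* inner: segments */
                if (seg == 1)
                    continue;   /* resumes the INNER loop only */
                printf("fco %d seg %d\n", fco, seg);
            }
            /* a continue placed here would resume the OUTER loop;
             * code meaning to stay in the inner loop must restructure */
        }
        return (0);
    }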
-
Nils Goroll authored
Coverity CID#469225
-
Nils Goroll authored
Ref CID#469268
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
Motivated by #41
-
Nils Goroll authored
-