- 07 Feb, 2024 40 commits
-
Nils Goroll authored
The code accepted the right pointer, the right offset _OR_ the right size, which led to the wrong (usually the last) segment being freed. Fixes #39
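A minimal sketch of the kind of check this fix implies; the struct and function names below are assumptions for illustration, not the actual fellow identifiers:

	#include <stddef.h>

	/* hypothetical stand-in for the actual segment type */
	struct seg_sketch {
		const void	*ptr;
		size_t		 off;
		size_t		 size;
	};

	/* A segment should only be picked for freeing when pointer, offset
	 * AND size all match; accepting a match on any single one of them
	 * can pick the wrong - usually the last - segment. */
	static int
	seg_matches(const struct seg_sketch *s, const void *ptr,
	    size_t off, size_t size)
	{
		return (s->ptr == ptr && s->off == off && s->size == size);
	}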
-
Nils Goroll authored
... in fellow_busy_obj_trimstore(): The code did not work correctly when the region was to be removed (because it was reduced to size zero) _and_ it was not the last of the regions. Part of the fix for #39
-
Nils Goroll authored
-
Nils Goroll authored
assert that the FCS_BUSY segment to be trimmed is in fact contained in the current body region.
-
Nils Goroll authored
-
Nils Goroll authored
The previous code

	if (fdsl->nsegs == fcsl->lsegs ||
	    fcsl->segs[fdsl->nsegs].state == FCS_INIT) {

was confusing: it relied on the fact that, when an fcsl has more segments than an fdsl, the first "surplus" segment has state FCS_INIT. It makes much more sense to just check fdsl->nsegs against fdsl->lsegs.
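For illustration, a self-contained sketch of the simpler check; the struct shape is assumed, only the nsegs/lsegs comparison reflects the change described above:

	#include <stddef.h>

	/* assumed minimal shape of a disk segment list */
	struct fdsl_sketch {
		size_t	nsegs;	/* segments currently in use */
		size_t	lsegs;	/* segments the list can hold */
	};

	/* compare the fdsl against its own capacity rather than peeking
	 * at the state of the first "surplus" fcsl segment */
	static int
	fdsl_is_full(const struct fdsl_sketch *fdsl)
	{
		return (fdsl->nsegs == fdsl->lsegs);
	}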
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
This concerns an open issue from before the public release, which I had not understood before and believe to have understood now.

With this code *) added to the test

@@ -5946,6 +5949,15 @@ t_cache(unsigned chksum)
 	test_bocdone(fbo, TRUST_ME(hash), 1);
 	fellow_cache_obj_deref(fc, fco);
+	// === max out region alloc
+	fbo = fellow_busy_obj_alloc(fc, &fco, &priv2, 1234).r.ptr;
+	CHECK_OBJ_NOTNULL(fbo, FELLOW_BUSY_MAGIC);
+	for (u = 0; u < FCO_MAX_REGIONS; u++)
+		AN(fellow_busy_region_alloc(fbo, 1234, INT8_MAX));
+
+	test_bocdone(fbo, TRUST_ME(hash), 1);
+	fellow_cache_obj_delete(fc, fco, hash);
+
 	// === alloc space, dont use

we tripped here

	assert(flivs->oob || u == obj_alive);

with u == 1 and obj_alive == 0. So the offset of a region from a dead object was not taken by a subsequent allocation, which is fine, why should it be?

*) Note: The added test code is not correct yet, as it does not register the regions with the segment list, so obj_delete leaks.
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
This change should be semantically equivalent, but Flexelint understands it.
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
It was too complicated and limited by waiting for flushes to finish. Now that we can issue multiple flushes, we can simplify it substantially. As a result of intermediate efforts, there is now also a facility to base nuking on the amount of data currently in the process of being freed. It is left in, #ifdef'ed out, in case we'll need it again.
-
Nils Goroll authored
with more than one flush finish, writing a header from an old flush could race the logbuffer_ref() from a more recent one, leading to an inconsistent log where a logblock with next_off == 0 became reachable.
-
Nils Goroll authored
To avoid having to wait for a previous flush to finish (in most cases), we now allocate the flush finish state dynamically (and asynchronously). For ordinary flushes, we can now start the next flush while a previous one is still in flight, ordering the flush finish in a list to preserve log consistency.
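A rough sketch of this scheme with assumed types and names (the actual fellow structures differ): finish state is allocated per flush, queued in submission order, and completions are only applied from the head of the list.

	#include <stdlib.h>

	#include "vqueue.h"	/* Varnish tail-queue macros, assumed available */

	/* assumed per-flush finish state */
	struct flush_fini {
		unsigned			done;
		VTAILQ_ENTRY(flush_fini)	list;
	};

	VTAILQ_HEAD(fini_head, flush_fini);

	/* start a new flush without waiting for the previous one: allocate
	 * the finish state dynamically and append it in submission order */
	static struct flush_fini *
	flush_start(struct fini_head *head)
	{
		struct flush_fini *ff;

		ff = calloc(1, sizeof *ff);
		if (ff == NULL)
			return (NULL);
		VTAILQ_INSERT_TAIL(head, ff, list);
		return (ff);
	}

	/* apply finishes only from the head of the list, so an older
	 * in-flight flush can not be overtaken by a newer one */
	static void
	flush_finish_ordered(struct fini_head *head)
	{
		struct flush_fini *ff;

		while ((ff = VTAILQ_FIRST(head)) != NULL && ff->done) {
			VTAILQ_REMOVE(head, ff, list);
			/* ... write the log header for this flush ... */
			free(ff);
		}
	}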
-
Nils Goroll authored
-
Nils Goroll authored
as there is only one thread waiting
-
Nils Goroll authored
the logwatcher has, for a long time now, been the only thread waiting on it
-
Nils Goroll authored
-
Nils Goroll authored
buddy_reqs are not relocatable, so we need to finish them when moving logbuffers.
-
Nils Goroll authored
regionlists are updated during DLE submit under the logmtx. Thus, we should avoid synchronous memory allocations. We change the strategy as follows:

* Memory for the top regionlist (which has one regl embedded) _is_ allocated synchronously, but with maximum cram to reduce latencies at the expense of memory efficiency. The case where the allocation does block will not hit us for the most critical path in fellow_log_dle_submit(), because we pre-allocate there outside the logmtx.

* When we create the top regionlist, we make two asynchronous memory allocation requests for our hard-coded size (16KB for prod), one crammed and one not. The crammed request is made such that we get _any_ memory rather than waiting.

* When we need to extend the regionlist, we should already have an allocation available (if not, we need to wait, bad luck). The next allocation available is either [1] (uncrammed), left over after the previous extension, or [0], which is potentially crammed. If [0] is crammed and we have an uncrammed [1], then we use that and return the crammed allocation. If there are no allocations left, we issue the next asynchronous request (see the sketch below).
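A hedged sketch of the extend-time selection, with assumed names; reqs[0] stands for the potentially crammed allocation and reqs[1] for the uncrammed one:

	#include <stddef.h>

	/* assumed shape of a pending asynchronous allocation */
	struct alloc_req_sketch {
		void	*ptr;		/* NULL if not available */
		int	 crammed;	/* may be smaller than asked for */
	};

	/* Pick the next allocation for extending a regionlist: prefer the
	 * uncrammed reqs[1] if reqs[0] is crammed, and hand the crammed
	 * allocation back to the allocator; otherwise take what is there.
	 * Re-issuing the next asynchronous request is left out here. */
	static void *
	regl_next_alloc(struct alloc_req_sketch reqs[2])
	{
		void *p;

		if (reqs[0].ptr != NULL && reqs[0].crammed &&
		    reqs[1].ptr != NULL) {
			p = reqs[1].ptr;
			reqs[1].ptr = NULL;
			/* ... return the crammed reqs[0] allocation ... */
			reqs[0].ptr = NULL;
			return (p);
		}
		if (reqs[0].ptr != NULL) {
			p = reqs[0].ptr;
			reqs[0].ptr = NULL;
			return (p);
		}
		if (reqs[1].ptr != NULL) {
			p = reqs[1].ptr;
			reqs[1].ptr = NULL;
			return (p);
		}
		return (NULL);	/* nothing left: caller has to wait */
	}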
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
When adding log blocks, also trigger a flush based on available disk blocks, that is, do not add blocks to the logbuffer which we cannot also flush. Also flush with reference: I think the capability was originally limited in order to do full flushes with reference only from the logwatcher thread, so as not to hold the logmtx for too long. But now that we have the extra flush finish thread, I do not think this is necessary any more, and we need to handle tight storage better.
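A toy illustration of the added trigger, with assumed names; the point is only that a flush is also forced once the blocks queued in the logbuffer reach what the disk can still absorb:

	/* assumed counters for illustration */
	struct lbuf_sketch {
		unsigned	queued_blocks;		/* added, not yet flushed */
		unsigned	free_disk_blocks;	/* blocks still available on disk */
	};

	/* also flush when we could not add more blocks than we are able
	 * to flush out */
	static int
	need_flush_for_space(const struct lbuf_sketch *lbuf)
	{
		return (lbuf->queued_blocks >= lbuf->free_disk_blocks);
	}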
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
... such that LRU, which is operating on the temporary log, can make room. Ref #28
-
Nils Goroll authored
Ref #28
-
Nils Goroll authored
Hopefully, this also contributes to a solution for #28
-
Nils Goroll authored
Otherwise it looks like a rewrite would leak log blocks.
-
Nils Goroll authored
-