- 05 Feb, 2024 4 commits
Nils Goroll authored
This momentarily breaks the iterator fault-injection test. Making it pass would have required squashing too many commits, so disabling the respective test for a few commits appeared to be the better option.
- 04 Feb, 2024 36 commits
Nils Goroll authored
This is to make reading disk objects more efficient later on. The patch triggered a c93.vtc failure, so we add a bit of a hack to avoid a problem which might need more attention later: FCO_MAX_REGIONS is an issue for chunked-encoding objects (whose size grows). We have not yet implemented the best strategy; so far we simply tried to always allocate the largest possible seglist in order not to use up more regions than necessary, but small memory configurations do not support the maximum seglist size (4 MB).
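The trade-off described above can be sketched roughly as follows: prefer the largest seglist (fewer regions used), but back off when the memory configuration cannot hold it. This is an illustrative reconstruction only; the function name, the 4 MB cap, and the 4 KB floor are assumptions, not the actual fellow code.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hypothetical sketch: pick a seglist allocation size. A larger seglist
 * describes more segments and thus consumes fewer of the limited
 * FCO regions, but it must still fit the configured memory budget.
 */
static size_t
seglist_alloc_size(size_t max_seglist, size_t mem_limit)
{
	size_t sz = max_seglist;	/* e.g. 4 MB maximum */

	/* halve the request until it fits the memory configuration */
	while (sz > mem_limit && sz > 4096)
		sz >>= 1;
	return (sz);
}
```

A small memory configuration (say 1 MB) would thus get a 1 MB seglist instead of failing the 4 MB allocation outright.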
Nils Goroll authored
Previously, we simply sized the disk seglist to fit. Now we choose the size such that the memory seglist corresponding to the disk seglist fits a power-of-two page.
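The sizing rule above can be sketched by inverting the calculation: fix a power-of-two allocation for the in-memory seglist and derive how many disk segments it can describe. The header and per-segment sizes here are made-up constants for illustration, not the real struct sizes.

```c
#include <assert.h>
#include <stddef.h>

#define MEMSEG_HDR	64	/* assumed fixed seglist header size */
#define MEMSEG_SZ	48	/* assumed per-segment memory footprint */

/*
 * Sketch: given a power-of-two target size for the memory seglist,
 * return the number of segments the corresponding disk seglist
 * should be sized for.
 */
static unsigned
nseg_for_page(size_t page)
{
	assert((page & (page - 1)) == 0);	/* power of two */
	if (page <= MEMSEG_HDR)
		return (0);
	return ((page - MEMSEG_HDR) / MEMSEG_SZ);
}
```

Rounding to a power of two this way avoids the internal fragmentation a buddy-style allocator would otherwise incur for an odd-sized memory seglist.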
Nils Goroll authored
This allows us to shrink the fellow_cache_obj allocation for the fellow_obj_get (from disk) case from 8 KB to 512 bytes. The root cause of the massively oversized fco allocation was that the nseg_guess heuristic could not take the wsl (the size of the actual object) into account, so it had to assume that all of the disk object's size was taken up by segments in the seglist embedded in the disk object.
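A minimal sketch of the improved heuristic, under assumptions: once the object's body size (wsl) is known, the segment count can be bounded by how many minimum-sized segments the body could possibly occupy, instead of assuming the whole disk object is seglist. The minimum segment size is an illustrative constant; the function is a stand-in for the real nseg_guess.

```c
#include <assert.h>
#include <stddef.h>

#define SEG_MIN		4096	/* assumed minimum segment size */

/*
 * Hypothetical sketch: estimate the number of segments in the embedded
 * seglist from the object's actual size (wsl), rounding up, with at
 * least one segment for an empty body.
 */
static size_t
nseg_guess_wsl(size_t wsl)
{
	if (wsl == 0)
		return (1);
	return ((wsl + SEG_MIN - 1) / SEG_MIN);
}
```

For a small object, this bound is tiny, which is what lets the fco allocation drop from 8 KB to 512 bytes in the common case.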
Nils Goroll authored
Share the mempool between reading (flc) and writing (logbuffer). Use different mempools and priorities for log rewrite and log flushes.
Nils Goroll authored
The logcache should know the best size.
Nils Goroll authored
The previous implementation used only one BUDDY_REQS, so whenever one segment allocation was fulfilled, other requests with lower priority could "get through" and ultimately lead to bfa_alloc() failing to complete. By using two BUDDY_REQS, we now make sure to "keep our place in the priority queue". We also limit cramming not only by the available bitfield segment slots, but also by a maximum of 4 (1/16th of the requested size), and we yield when a lower cram does not succeed, to give LRU more opportunity to make room. This has undergone a _lot_ of testing and many iterations, all of which have been squashed into this commit.
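The cramming limit described above can be sketched as a loop over progressively smaller ("crammed") buddy orders: step down at most 4 orders from the request (1/16th of the size) and never further than the available bitfield slots allow, and report failure so the caller can yield to LRU. Everything here, including the names and the stub allocator, is an illustrative reconstruction, not the real bfa_alloc() code.

```c
#include <assert.h>

#define CRAM_MAX	4	/* at most 1/16th of the requested size */

/*
 * Sketch: try the requested buddy order, then crammed (smaller) orders,
 * bounded by both CRAM_MAX and the free bitfield segment slots.
 * Returns the order that succeeded, or -1 so the caller can yield
 * and let LRU make room.
 */
static int
alloc_crammed(int order, int slots_avail, int (*try_alloc)(int))
{
	int cram, lim;

	lim = slots_avail < CRAM_MAX ? slots_avail : CRAM_MAX;
	for (cram = 0; cram <= lim; cram++)
		if (try_alloc(order - cram))
			return (order - cram);
	return (-1);
}

/* example predicate: pretend only order-18 (and smaller) blocks are free */
static int
try_order18(int order)
{
	return (order <= 18);
}
```

With plenty of slots, a request for order 20 would be crammed down to 18; with only one slot left, the same request fails and the caller yields.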