- 07 Sep, 2024 1 commit
-
-
Nils Goroll authored
just learned this from the proxy.c demo program:

> DEFER_TASKRUN ... (is) generally the preferred and recommended way to setup a ring.
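A minimal sketch of such a setup with liburing (hypothetical helper name, error handling omitted); note that DEFER_TASKRUN requires the ring to also be set up with SINGLE_ISSUER:

    #include <string.h>
    #include <liburing.h>

    /* sketch: create a ring with deferred task running (kernel >= 6.1) */
    static int
    setup_ring(struct io_uring *ring, unsigned entries)
    {
        struct io_uring_params p;

        memset(&p, 0, sizeof p);
        /* DEFER_TASKRUN is only valid together with SINGLE_ISSUER */
        p.flags = IORING_SETUP_SINGLE_ISSUER | IORING_SETUP_DEFER_TASKRUN;
        return (io_uring_queue_init_params(entries, ring, &p));
    }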
-
- 23 Aug, 2024 15 commits
-
-
Nils Goroll authored
Historically, fellow_cache_test did not delete a base stock of ~540 objects. This has certainly served us well in the past, but now it seems that differing conditions on different platforms lead to fluctuations in the disk layout, which can cause tests to hang in disk space allocations (fellow_cache_test does not use disk LRU).

Reflecting on this, I think that keeping a certain fill level of the disk cache is no longer important, and spurious lockups (which, if caused by a full disk cache in fellow_cache_test, are not relevant for production) only cause confusion and overhead for both users and developers.

Fixes #73
-
Nils Goroll authored
See also 1ffabdef
-
Nils Goroll authored
this saves 8 bytes in struct fellow_cache_seg (actually 12 bytes of members, but alignment padding limits the net saving to 8)
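An illustrative example (not the actual struct) of how removing 12 bytes of members can shrink sizeof() by only 8 on a 64-bit ABI, because the struct size is padded up to a multiple of its 8-byte alignment:

    #include <stdio.h>
    #include <stdint.h>

    /* 8 + 4*4 = 24 bytes, 8-byte aligned: sizeof == 24 */
    struct before { uint64_t a; uint32_t b, c, d, e; };
    /* drop b, c, d (12 bytes): 8 + 4 = 12, padded to 16: sizeof == 16 */
    struct after { uint64_t a; uint32_t e; };

    int main(void)
    {
        printf("%zu -> %zu\n", sizeof(struct before), sizeof(struct after));
        return (0);
    }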
-
Nils Goroll authored
-
Nils Goroll authored
to avoid duplicate pointers for fco and fds in each fcs
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
because, until it has, we do not know if we are going to remove it
-
Nils Goroll authored
the neovim newbie still has whitespace issues...
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
TL;DR: This should fix the "nuke despite free space" problem with chunked responses for the most relevant case.

Long explanation for future self and anyone interested:

Handling objects created from chunked encoding responses, where we do not know how long they are going to be, puts us in a dilemma: to use the least amount of space, we would like to use small allocations, but small allocations are inefficient, and we also limit the number of regions which each object can use (FCO_MAX_REGIONS) to keep the log efficient. So, in short, we need to make disk regions reasonably large in case the object grows to the maximum size, and we need to make memory segments reasonably large to keep IO efficient.

On top of that, for other reasons of efficiency, our buddy allocator only returns object offsets at multiples of their size (rounded up to a power of two, at minimum the page size), so even if we return (free) the "tail" of allocations (the buddy "trim" operation) and, consequently, nominally make free space, that space can only be used for allocation requests smaller than the free space.

To sum it all up: if, as an extreme example, we only ever request 1 MB and then trim to 4KB, we will end up with storage ~0.4% full but still unable to hold 1 MB requests. This is basically what happened with storage being mostly fed objects from chunked encoding responses.

Now the solution to the problem is to use allocations as small as possible once we have a final length. The only way to get there is to use _different_, smaller allocations. In the general case, while creating an object, we can not just change our mind if we already started writing segments - which we need to do to keep memory requirements low (otherwise we would need to keep objects in memory until we know their length, which would severely increase memory requirements).

However, if we are lucky and the last segment of an object is the first segment of a region, so, consequently, we have not started writing to the region, we _can_ just change our mind and use a different region. Also, if we know that our segment is not read concurrently, we can relocate it in memory. If it is read concurrently, we can sacrifice some performance and LRU-evict it at the next opportunity, such that it will be read into a smaller memory region when used again.

This is what this patch does. Further optimizations would require re-writing already written ("unbusied") segments into a different region, or holding back writes at the expense of more memory being un-LRU-able.

Should fix #71 for the case of objects smaller than the segment size.
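A back-of-the-envelope sketch (toy numbers, not fellow code) of the extreme example above: every 1 MB-aligned slot keeps a 4 KB head in use after trimming, so the storage is ~0.4% full, yet no aligned 1 MB span remains free for a new 1 MB request:

    #include <stdio.h>

    #define KB  (1024UL)
    #define MB  (1024UL * KB)

    int main(void)
    {
        unsigned long storage = 100 * MB;   /* toy storage size */
        unsigned long slots = storage / MB; /* 1 MB-aligned slots */
        unsigned long used = slots * 4 * KB; /* each trimmed to 4 KB */

        /* ~0.4% full, but each slot's free tail is only 1 MB - 4 KB,
         * so a 1 MB request (which must be 1 MB-aligned) can not fit */
        printf("fill level: %.1f%%\n", 100.0 * used / storage);
        return (0);
    }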
-
Nils Goroll authored
and allow writing an fcs with a memory size greater than the disk size
-
- 22 Aug, 2024 5 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
to enable additional action within the lock
-
Nils Goroll authored
-
Nils Goroll authored
by default, we add segments to the LRU tail, implementing LEAST recently used. We now also add the option to add segments to the LRU head, which is MOST recently used, or rather "evict first".
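A minimal sketch with <sys/queue.h> and hypothetical names (assuming eviction scans from the list head): tail insertion is the plain LRU default, head insertion marks a segment "evict first":

    #include <sys/queue.h>

    struct seg {
        TAILQ_ENTRY(seg) lru_list;
        /* ... */
    };
    TAILQ_HEAD(lru_head, seg);

    static void
    lru_insert(struct lru_head *lru, struct seg *seg, int evict_first)
    {
        if (evict_first)
            TAILQ_INSERT_HEAD(lru, seg, lru_list); /* evicted next */
        else
            TAILQ_INSERT_TAIL(lru, seg, lru_list); /* default LRU */
    }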
-
Nils Goroll authored
-
- 21 Aug, 2024 3 commits
-
-
Nils Goroll authored
The first limit check, if (*sz + fbo->sz_returned > fbo->fc->tune->objsize_max), was superfluous. The second limit check, if (fbo->sz_estimate > fbo->fc->tune->objsize_max), happened in the wrong place, because the estimate could have been too high from the previous invocation, and we also failed to clear the growing flag.
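A simplified sketch (hypothetical types and names, not the actual code) of the intended shape: check the estimate right after updating it, and clear the growing flag when the limit is exceeded:

    #include <stdbool.h>
    #include <stddef.h>

    struct grow_state {
        size_t  sz_estimate;    /* running size estimate */
        size_t  objsize_max;    /* configured limit */
        bool    growing;        /* object may still grow */
    };

    /* returns false when the object would exceed objsize_max */
    static bool
    grow_check(struct grow_state *gs, size_t sz)
    {
        gs->sz_estimate += sz;
        if (gs->sz_estimate > gs->objsize_max) {
            gs->growing = false;    /* do not forget to clear the flag */
            return (false);
        }
        return (true);
    }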
-
Nils Goroll authored
-
Nils Goroll authored
This avoids superfluous additional invocations
-
- 11 Aug, 2024 1 commit
-
-
Nils Goroll authored
I should have done this a long time ago; no idea why I lost track of this TODO item.
-
- 07 Aug, 2024 1 commit
-
-
Nils Goroll authored
-
- 03 Aug, 2024 2 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
When we have a disk object in core and do not need the disk seglist, we throw it away by not copying or trimming the allocation and mark this fact in a flag bit.
-
- 02 Aug, 2024 7 commits
-
-
Nils Goroll authored
This adds the facility to mark slimmed disk objects, which is used in the next commit.
-
Nils Goroll authored
This reverts the quick code fix, but keeps the test case. Partially reverts commit 9df35fb7.
-
Nils Goroll authored
This also fixes #70, but more efficiently
-
Nils Goroll authored
-
Nils Goroll authored
I am still a neovim-noob
-
Nils Goroll authored
Fixes #70. We should support objects without seglists, though.
-
Nils Goroll authored
-
- 22 Jul, 2024 4 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
we can not reset the growing flag
-
- 21 Jul, 2024 1 commit
-
-
Nils Goroll authored
I believe the actual issue reported in #22 is that, with the old disk region allocation scheme, we could need one more disk segment list, such that FCO_REGIONS_RESERVE was too small.

But pondering this issue, I went back to square one and re-thought the allocation plan. I now think that there were some fundamental flaws in the previous allocation code:

- we did not plan for how many segment lists we would actually need
- we would cram the segment allocation, which could lead to the number of segment lists growing
- for growing allocations, we would switch from "assume we have enough regions" to "assume we have no regions at all any more" when FCO_REGIONS_RESERVE was reached.

Hopefully, the new allocation plan code now takes a more sensible, holistic approach: whenever we need more disk space (= another disk region), we calculate how many regions we are sensibly going to need in total. For no cram, this is just one, and for abs(cram) >= 1 it is going to be the number of one-bits (popcount) of the size. Then we calculate the chunk size which we need to go to in order to fit all segments into segment lists. Based on this outcome, we calculate a maximum cram which we can allow for region allocations.

This approach is fundamentally different from before in that we no longer cram segment sizes - which was wrong, because we do not have an infinite number of segment lists.

Fixes #22 for real now, I hope
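A minimal sketch of the region-count part of the plan (hypothetical helper, GCC/Clang builtin; assumes, as the commit message describes, that a crammed request is split into one power-of-two region per set bit of the size):

    #include <stdio.h>

    static unsigned
    regions_needed(unsigned long long sz, int cram)
    {
        if (cram == 0)
            return (1); /* one region, size rounded up */
        /* abs(cram) >= 1: one region per one-bit of the size */
        return ((unsigned)__builtin_popcountll(sz));
    }

    int main(void)
    {
        /* e.g. 1 MB + 64 KB + 4 KB has three one-bits -> 3 regions */
        unsigned long long sz = (1ULL << 20) + (1ULL << 16) + (1ULL << 12);

        printf("%u regions\n", regions_needed(sz, -1));
        return (0);
    }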
-