07 Feb, 2024 (40 commits)

All commits authored by Nils Goroll.
- Unfortunately, this brings us above 4K, so we fill up with io structs.
- This way we have a better chance of calling the actual return at a more suitable time, for example outside a lock.
- Ref #53
- Fixes #52
- Closes #50
- The fellow_e00029.vtc canary fell over on Ubuntu 20.04 with kernel 5.15.
- This fixes two bugs. The most relevant was that we did not increase buddy->wait_pri if the new priority was higher, which could lead to all kinds of weird effects, for example:
  - starvation / lock up
  - the LRU removing all of the cache, because there were requests waiting, but buddy_wait_work() did not see them.

  The second bug is that we also called buddy_wait_work() when there actually was no change (the critical region could find state == IW_SIGNALLED). Also, for the new assertions, we need to set the proper wait_pri for the case where the priority is lowered.
- We did not lower buddy->wait_pri if the cancel resulted in an empty priority list. This was not a problem in itself, but the stricter assertions from the previous commit would trigger, because they now require that the priority list for wait_pri is non-empty.
- For clarity, and so it is not overlooked, pools should have a proper default priority. The fill callback can still change it.
- For cases where we do not need FFS, we do not need the index. This does not change anything yet: this patch only works together with the next one (so git bisect is expected to break here). The split separates manual changes from automated patching.
- At least I now realize that this could probably be misunderstood...
- The dance of taking a reference when waiting had already caused a lot of trouble before, and on a fresh look it does not seem to make much sense. Most importantly, it was wrong: lbuf->ff was set to NULL in logbuffer_flush_finish_work_one() before the mutex was returned with pthread_cond_wait(), so `if (ff == NULL) goto unlock;` in logbuffer_wait_flush_fini() could lead to the function returning before logbuffer_flush_finish_work_one() was actually done. With bceec122, this could lead to the stack memory being repurposed (logbuffer_flush_finish() returning) before it was actually safe to do so. This issue could surface as fellow_log_test hanging.

  We also now return all allocations under the lock, to prevent a race with fellow_log_close() in which flush-finish threads could outlive the ffd, causing buddy leak detection to fire because the ff allocation was not returned.

  Fixes #49
- The other assertion is what we actually mean: log2up(sz) must be at least bits, otherwise it makes no sense.
- Otherwise all hell would break loose if we changed reqs->pri, because we would dequeue from the wrong list head.