- 26 Apr, 2019 5 commits
-
-
Martin Blix Grydeland authored
The previous patch 4130055c went a bit too far in its mission to reorder events, and included putting HSH_Fail() after ObjSetState(). That caused problems for code looking at OC_F_FAILED to learn about failed fetches. Change the order back to normal, and move the call to HSH_Fail() into ObjSetState(), so that the ordering is explicit and the caveats are properly commented.
-
Martin Blix Grydeland authored
This reverts commit 462eab25. That assert was there for a good reason.
-
Martin Blix Grydeland authored
With the recent changes where HSH_Unbusy()/HSH_Fail() are called after ObjSetState(), this assert becomes racy. Remove the assert.
-
Martin Blix Grydeland authored
This reverts some of the previous attempts to get these test cases stable, as those attempts actually prevented the testing of the desired code paths. Also make the test cases wait until the required requests are on the waitinglist before continuing.
-
Martin Blix Grydeland authored
When an object is ready for delivery, HSH_Unbusy() was called before ObjSetState([BOS_STREAM|BOS_FINISHED]). The HSH_Unbusy() call does the waitinglist rushing, but HSH_Lookup() wanted to look at boc->state to see whether BOS_STREAM had been reached. Woken requests could thus find that the stream state still had not been reached (ObjSetState() had not yet executed), and go back on the waitinglist.

To fix this, this patch reverts commit 0375791c and goes back to considering OC_F_BUSY the gate keeper for HSH_Lookup(). This eliminates the race, because HSH_Unbusy() and HSH_Lookup() then use the same mutex. That change opens up the possibility that req code after HSH_Lookup() sees an object that has not yet reached BOS_STREAM. In order not to have to add new ObjWaitState() calls (with the additional locking cost that would bring) to wait for BOS_STREAM, the order of events is changed throughout so that ObjSetState([BOS_STREAM|BOS_FINISHED]) is called before HSH_Unbusy(). That way, an object returned from HSH_Lookup() is guaranteed to be at least BOS_STREAM.
-
- 25 Apr, 2019 2 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
Fixes #2985
-
- 24 Apr, 2019 23 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
Relying on fixed vxids was found to fail, sorry
-
Nils Goroll authored
See the comment in the vtc: for now this test exploits an implementation detail, and we might want to consider adding a VSL for waiter involvement.
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
Ref #2980
-
Nils Goroll authored
should also fix printf format errors on 32bit
-
Nils Goroll authored
the last uses were removed in fcbc7951
-
Nils Goroll authored
-
Dridi Boukelmoune authored
-
Dridi Boukelmoune authored
It is unclear to me whether those calls belong under the h2 session lock, but this patch doesn't change any behavior.
-
Dridi Boukelmoune authored
-
Dridi Boukelmoune authored
The dummy_acct is only here to avoid repeating null checks in the send loop below; it doesn't change the end result if the transaction completes without problems. As before, only header and body bytes are accounted for, ignoring h2 framing overhead and, in general, other kinds of frames that belong to the stream. In other words, the only improvement is that ReqAcct no longer shows a full delivery when the client hangs up before the end of the transaction. The split of H2_Send will allow handling of error conditions from the multiple return statements, although at this point there is no change in this area.
-
Dridi Boukelmoune authored
And make it clear that we steal the reference at this point.
-
Dridi Boukelmoune authored
For now the varnishd handling of SO_SNDBUF lives in vmod-debug but could be promoted to vmod-vtc to be usable out of tree too.
-
Dridi Boukelmoune authored
-
Nils Goroll authored
hope we'll get to a better option soon. Ref: 691d5ac9
-
Nils Goroll authored
-
Nils Goroll authored
and change the t0 argument into a vtim_real deadline, allowing for per-call adjustments. As a consequence, changes to send_timeout do not affect open V1L transactions. This also exposed the fact that we use send_timeout for sending the bereq.body to the backend as well. Is this what we want?
-
Nils Goroll authored
timevals and timespecs could also represent vtim_real / vtim_mono, but for now, all use cases within varnish-cache are for vtim_dur. As we typedef vtim_{real,mono,dur} to double at the moment, this should not break any vmods.
-
Nils Goroll authored
-
- 23 Apr, 2019 10 commits
-
-
Poul-Henning Kamp authored
The trick here is that these tests depend on c1 getting in front of c2... and that cannot be assumed given random thread scheduling; it must be enforced with barriers.
-
Poul-Henning Kamp authored
-
Nils Goroll authored
(I hope.) The trouble here is that in pool_herder() we access pp->dry unprotected, so we might see an old value, and thus breed more threads than wthread_min even if the dry condition no longer exists. So for the vtc, we need to wait until wthread_timeout has passed and the surplus thread has been kissed to death. Notice that this does not change with #2942, because there the same unprotected access happens to lqueue.
-
Nils Goroll authored
This reverts commit fa3e1419. We might have over-bred such that thread_pool_timeout becomes relevant at the other place in pool_herder().
-
Nils Goroll authored
it should remove the worker it returns from the idle queue for clarity
-
Nils Goroll authored
the herder delay to .5 seconds anyway
-
Nils Goroll authored
This reverts commit 3e6e584b. No use, still happens
-
Martin Blix Grydeland authored
It seems that these test cases were suffering from the problem that #2942 addresses. Set a minimum thread pool size so that adequate threads are available before the test begins.
-
Martin Blix Grydeland authored
-
Nils Goroll authored
I ignored this for ages, but now it really bothers me: this test had quite a high failure rate on systems I control. And actually I do not quite understand why the fix works, but it does survive -j100 -n1000. Additional input welcome.
-