- 16 Aug, 2018 1 commit
-
Dag Haavi Finstad authored
Since we verify that header blocks are not interleaved, and we zero the struct h2h_decode on every new header block, there is no need to malloc a separate struct h2h_decode per stream.
-
- 14 Aug, 2018 19 commits
-
Nils Goroll authored
... not in the caller
-
Nils Goroll authored
-
Dag Haavi Finstad authored
Future-proofing to avoid mistakenly introducing another race down the line.
-
Dag Haavi Finstad authored
The current flow control code's use of h2->cond is racy. h2->cond is already used for handing over a DATA frame to a stream thread. In the event that we have streams waiting on this condvar for window updates and at the same time the rxthread gets signaled for a DATA frame, we could end up waking up the wrong thread, and the rxthread would get stuck forever. This commit addresses this by using a separate condvar for window updates. An alternative would be to always issue a broadcast on h2->cond instead of a signal, but I found this approach much cleaner.

Probably fixes: #2623
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
For ban statistics, we updated VSC_C_main directly, so if we raced Pool_Sumstat(), that could undo our changes. This patch fixes statistics by using the per-worker statistics cache, except for the following remaining corner cases:

* bans_persisted_* counters receive absolute updates, which do not fit the incremental updates via the per-worker stats. I've kept these cases untouched and marked with comments. The worst that should happen here are temporary inconsistencies until the next absolute update.
* For BAN_Reload(), my understanding is that it should only happen during init, so we continue to update VSC_C_main directly.
* For bans via the CLI, we would need to grab the wstat lock, which, at the moment, is private to the worker implementation. Until we make a change here, we could miss a ban increment from the CLI.
* For VCL bans from vcl_init/vcl_fini, we do not have access to the worker struct at the moment, so for now we also accept an inconsistency here.

Fixes #2716 for relevant cases
-
Federico G. Schwindt authored
A bound socket will time out instead of refusing the connection. Should fix b00015.vtc under macOS.
-
Dag Haavi Finstad authored
The previous commit assumed that END_STREAM is set on the last frame of a header block. This is not necessarily the case.
-
Dag Haavi Finstad authored
Previously we incorrectly transitioned to CLOS_REM on END_HEADERS, which prevented us from seeing whether a client incorrectly transmitted a DATA frame on a closed stream. This slightly complicates things in that we can now be in state OPEN with an inactive hpack decoding state, and on cleanup we need to check whether that state has already been finalized. This would be simpler if the h/2 spec had split the OPEN state in two, with an extra state transition on END_HEADERS.

Again, big thanks to @xcir for his help in diagnosing this.

Fixes: #2623
-
Federico G. Schwindt authored
Should help with the ASAN builds in travis.
-
Federico G. Schwindt authored
Should address #2666 and #2711.
-
Dag Haavi Finstad authored
Tune down h2_rx_window_low_water to make sure we don't get a window_update racing against the response frames. Fixes: #2709
-
Dag Haavi Finstad authored
-
Dag Haavi Finstad authored
If we failed to schedule a thread for a stream and it's not cleaned up prior to handling a request body, the rxthread would sit around waiting in h2_rx_data indefinitely.
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
The backend might be going away, and since we cannot afford to hold the lock over VTP_Open(), we have to pull the VBE_vsc knowledge one level back up.

Overlooked by: slink, phk
Spotted by: Coverity

Conflicts:
	bin/varnishd/cache/cache_backend.c
	bin/varnishd/cache/cache_backend.h
-
Poul-Henning Kamp authored
termination of the rxbuf. Found by: fgs
-
Lucas Guardalben authored
This adds -j support to the varnishadm ping command. The JSON output was validated against RFC 4627, RFC 7159 and ECMA-404 (all valid JSON).

Conflicts:
	bin/varnishtest/tests/b00008.vtc
-
- 20 Jun, 2018 20 commits
-
Poul-Henning Kamp authored
Found by: fgs & ASAN
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Dag Haavi Finstad authored
-
Martin Blix Grydeland authored
The previous fix for #2285 (and the duplicate #2624) was misdiagnosed. The problem stems from the wrong assumption that the number of bytes already pipelined will be less than maxbytes, with maxbytes being the maximum number of bytes HTC_RxStuff may need to get a full work unit. That assumption may fail during the H/1-to-H/2 upgrade path, where maxbytes changes with the context, or during runtime changes of parameters. This patch makes HTC_RxStuff not assert if the pipelined data turns out to exceed maxbytes, but instead return overflow if we run out of workspace. (#2624 has received a workaround in the H/2 code that perhaps should be reverted.)
-
Martin Blix Grydeland authored
Remove an old and now-invalid assert. Change the order of evaluation in the if-statement to make sure we don't step outside rxbuf_e.
-
Nils Goroll authored
Previously, tracing the root cause of probe failures was unnecessarily complicated: the probe window bits and timing information were the only source of information when no HTTP status line was logged, and when all the bits were zero it was almost impossible (e.g. to differentiate between a local and a remote connection-open failure). We now re-use the response field for failing probes as well.
-
Nils Goroll authored
Conflicts:
	bin/varnishd/cache/cache_backend.c
-
Nils Goroll authored
This is similar to the vca pace: depending on the backend connection error, it does not make sense to re-try in rapid succession. Not attempting the failed connection again for some time instead saves resources both locally and, where applicable, remotely, and should thus help improve the overall situation.

Fixes #2622
-
Nils Goroll authored
Previously, we had zero stats on the cause of backend connection errors, which made it close to impossible to diagnose such issues in retrospect (only via log mining). We now pass an optional backend vsc to vcp and record errors per backend. Open errors are really per vcp entry (ip + port or uds path), which can be shared amongst backends (and even vcls), but we maintain the counters per backend (and, consequently, per vcl) for simplicity. It should be noted, though, that errors for shared endpoints affect all backends using them.

Ref #2622

Conflicts:
	bin/varnishd/cache/cache_backend.c
-
Nils Goroll authored
... and introduce request functions for this purpose (for busy objects, there is only one use case yet, so we don't). Before we reset the workspace, we must ensure that there are no active references to objects on it. As PRIV_TASK and PRIV_TOP have the same lifetime as the respective workspace, they need to be destroyed. Vmods must not use workspaces for storing information referenced via any of the other PRIVs unless the rollback case is considered.

Note that while this bug was exposed by beeaa19c, it existed all along for any vmod priv state stored on the workspace, so if a vmod happened to access a TASK_PRIV stored on the workspace, it would likely have triggered a magic-check assertion as well.

I have plans for making std.rollback() more useful. While this change is required to do so, it only partly covers the planned changes.

Fixes #2706
-
Federico G. Schwindt authored
-
Poul-Henning Kamp authored
-
Federico G. Schwindt authored
-
Federico G. Schwindt authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
in our loops.
-