- 15 Nov, 2017 7 commits
-
-
Nils Goroll authored
For example, mgt_shm_atexit could race in two processes, one in VSMW_Destroy and the other in system("rm -rf"). Closes #2484
-
Nils Goroll authored
... now that I've found the layer 8 bug. Thank you to @bsdphk. Ref #2484
-
Nils Goroll authored
I had left in superfluous stuff from bug analysis. Ref: #2319
-
Nils Goroll authored
Creating "backend synthetic" content was impossible whenever a fetch had already started, unless storage was assigned (again) explicitly from VCL. Fixes #2494
-
Nils Goroll authored
The oa_present member acts as a filter at the level above the stevedore, so that set attributes can be checked efficiently. We failed to clear it when freeing and recreating an abandoned backend fetch object in vcl_backend_error. Fixes #2319
-
Poul-Henning Kamp authored
sanely, in this case by failing VCL initialization. Fixes #2036
-
Poul-Henning Kamp authored
order uncertainty.
-
- 14 Nov, 2017 16 commits
-
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
(Replaces cache_backend_cfg.c::backend_find)
-
Martin Blix Grydeland authored
-
Martin Blix Grydeland authored
We only want to return the connection early to the waiter when the request is empty. Correct the read timeout calculation to reflect that. Thanks to Stackpath for helping to debug this issue. Fixes: #2492
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Dridi Boukelmoune authored
The documentation on directors needs an update too.
-
Nils Goroll authored
If workspace was exhausted, vmod_blob would fail yet still leave a reservation, which would likely trigger a WS_Reserve() assertion failure in later code trying to reserve the workspace. Fixes #2488. Thank you to @jarro2783 for the report and @dridi for the analysis.
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
- 13 Nov, 2017 17 commits
-
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Nils Goroll authored
-
Nils Goroll authored
beeaa19c had left 32 bytes on the client workspace for the /baz case, which was sufficient to write resp.http.x-of with 4-byte alignment, but not with 8-byte alignment:

* "false\0" = 6 bytes
* "x-of: false\0" = 12 bytes
-
Nils Goroll authored
Let's see if this fixes #2482
-
Nils Goroll authored
-
Nils Goroll authored
As discussed during bugwash. Ref #2489. Partially reverts 9701bc56.
-
Federico G. Schwindt authored
It's not ready for prime time yet.
-
Poul-Henning Kamp authored
-
Nils Goroll authored
-
Nils Goroll authored
... as was the case before 69d45413 and as documented. The motivation is to remove the reservation from req->ws during delivery; line-delivery memory should not come from the request workspace, as originally designed:

- We avoid requiring an obscure surplus of workspace_client for delivery, which is also allocated for every subrequest even though it is not required there.
- We get predictable performance, as the number of IO vectors available is now only a function of workspace_thread or esi_iovs (see below), rather than of the amount of memory which happens to be available on the request workspace.

As a sensible side effect, we now also fail with an internal 500 error for workspace_session and workspace_thread overflows, in addition to the existing check on workspace_client, for completeness.

For ESI requests, we run all of the client-side processing, which uses the thread workspace, with V1L set up. Thus, V1L now needs its control structure, together with a small number of io vectors, as an allocation on the workspace. Real-world observation has shown that no more than five io vectors are normally in use during ESI, yet we still make this number configurable and keep a default with some safety margin.

For non-ESI requests and headers, we use all of the thread workspace for io vectors, as before.

As V1L does not necessarily reserve workspace any more, functions have been renamed to better reflect their purpose:

V1L_Reserve -> V1L_Open
V1L_FlushRelease -> V1L_Close
-
Nils Goroll authored
We still check, in vcl_call_method(), that nothing is left on the thread workspace after VCL methods, but we no longer require the thread workspace to be totally empty.
-
Nils Goroll authored
-
Nils Goroll authored
-
Federico G. Schwindt authored
-
Poul-Henning Kamp authored
Inspired by: #2488
-