- 21 Apr, 2021 2 commits
-
-
Nils Goroll authored
... introduced with 3bb8b84c: in Pool_Work_Thread(), we could break out of the for (i = 0; i < TASK_QUEUE__END; i++) loop with tp set to the value from the previous iteration of the top while() loop, where it should have been NULL (no task found). Noticed while staring at #3192; unclear yet if related.
-
Nils Goroll authored
Previously, we used a minimum number of idle threads (the reserve) to ensure that we do not assign all threads to client requests, leaving none for backend requests. This was actually only a special case of the more general issue exposed by h2: lower priority tasks depend on higher priority tasks (for h2, sessions need streams, which need requests, which may need backend requests).

To solve this problem, we divide the reserve by the number of priority classes and schedule lower priority tasks only if there are enough idle threads to eventually run higher priority tasks. This change does not guarantee any upper limit on the amount of time it can take for a task to be scheduled (e.g. backend requests could block on arbitrarily long timeouts), so the thread pool watchdog is still warranted. But this change should guarantee that we eventually make progress.

With the reserves, thread_pool_min needs to be no smaller than the number of priority classes (TASK_QUEUE__END). Ideally, we should have an even higher minimum (@Dridi rightly suggested 2 * TASK_QUEUE__END), but that would prevent the very useful test t02011.vtc. For now, the value of TASK_QUEUE__END (5) is hardcoded as such in the parameter configuration and documentation, because auto-generating it would require include/macro dances which I consider over the top for now. Instead, the respective places are marked, and an assert is in place to ensure we do not start a worker with too small a number of workers. I decided against checks in the manager to avoid include pollution from the worker (cache.h) into the manager.

Fixes #2418 for real

Conflicts: bin/varnishd/cache/cache_wrk.c bin/varnishd/mgt/mgt_pool.c
-
- 20 Apr, 2021 26 commits
-
-
Dridi Boukelmoune authored
There's no point waiting for the backend to complain if we weren't able to properly send the backend request. Fixes #3556 Conflicts: bin/varnishd/cache/cache_backend.c
-
Dridi Boukelmoune authored
The reason we expect here can be summarized as: this was a pipe transaction or an error occurred. This could be much simpler if we replaced enum sess_close with a struct stream_close instead. Refs dc5bddbd
-
Reza Naghibi authored
Also move the lock up to cover more operations.
-
Reza Naghibi authored
-
Reza Naghibi authored
Previously we would read the response Content-Length from a failed oc, which could leave the error response with a Content-Length that does not match its body. Now, if this is detected, we don't touch the Content-Length.
-
Martin Blix Grydeland authored
VRT_delete_backend() sets be->cooled to non-zero, and is the only place where that is done. Assert that it is zero on entry as a check that VRT_delete_backend() isn't called multiple times.
-
Martin Blix Grydeland authored
We refuse to accept new dynamic backends while the VCL is cooling, and skip adding the attempted backend to the VCL's backend list when that condition is found. But that would trigger an assert later when the backend is picked off the cool_backends list for destruction. Fix this by destroying the backend directly instead of going through the cooling list. Note that this patch removes the ASSERT_CLI() macro in vbe_destroy().
-
Martin Blix Grydeland authored
Several functions (VBE_Poll and vbe_destroy) test be->cooled == 0 to determine which of the two lists, backends and cool_backends, a specific instance currently lives on. If the flag is in the process of being changed, the wrong list head may be used, resulting in strange bugs. Conflicts: bin/varnishd/cache/cache_backend.c
-
Steven authored
-
Steven authored
-
Klemens Nanni authored
The last three commits already made configure recommend installing Python 3 packages and look for versioned executables, however with a low priority. This is a problem on systems such as OpenBSD 6.5 with a default Python version at 2.7, where 3.7-flavored Python packages get installed with a "-3" binary suffix. That is, when both rst2man and rst2man-3 are installed at configure time, the lower version will be picked unless explicitly passed through `--with-feature' arguments. Regardless of this specific case, trying more specifically versioned tool names first is in line with recent development and less error-prone, so change it accordingly. Conflicts: configure.ac
-
Dridi Boukelmoune authored
For a given definition of "future" or "now".
-
Federico G. Schwindt authored
-
Dridi Boukelmoune authored
-
Dridi Boukelmoune authored
Until the bare "python" executable refers to python3 (currently it still refers to python2), it takes lower precedence.
-
Simon authored
-
Dridi Boukelmoune authored
The assertion that the stale objcore of a conditional fetch cannot be failed unless it was streaming is incorrect. Between the moment when we grab the stale objcore in HSH_Lookup and the moment we try to use it after vcl_backend_response, the backend fetch may have completed or failed. Instead, we need to treat an ongoing fetch and a failed fetch as separate checks since the latter may happen with or without a boc. Conflicts: bin/varnishd/cache/cache_fetch.c
-
Nils Goroll authored
Test case by Reza, thank you. Fixes #3433 Closes #3434
-
Dridi Boukelmoune authored
Once we ask the backend to close the connection after a given request there is no benefit from putting the backend connection back in the pool. It's actually a surefire way to force a subsequent backend fetch to fail its first attempt and go straight to its extra chance. Since we try to maximize connection reuse this would have to come from VCL and a user asking for the backend to close the connection should have a good reason to do so, for example when the backend is known to misbehave under certain circumstances. Closes #3400 Refs #3405
-
Dridi Boukelmoune authored
Whether the header was set by the backend or directly in VCL, it is now possible to signal that a backend connection should not be added back to the pool after a successful fetch with a Connection:close header. Pooling such a connection would be counter-productive if closing the session was requested by the backend itself, because it would then be likely that reusing the connection would result in busting the extra chance. Setting Connection:close directly in VCL can help mitigate a misbehaving backend. Refs #3400
-
Nils Goroll authored
When resolve requests race, we were not guaranteed to consider all backends because we updated a shared nxt variable. Fixes #3474
-
Reza Naghibi authored
We do not hold a reference, so the magic can be unstable.
-
Reza Naghibi authored
-
Reza Naghibi authored
We can incorrectly reference resp.reason from other sources when jumping into vcl_synth. This also covers passing in a reason in vcl_backend_error.
-
Steven authored
The bo fields err_code and err_reason need to be reset on a retry, otherwise the values from the failed attempt are kept. Fixes #3525
-
- 13 Apr, 2021 2 commits
-
-
Guillaume Quintard authored
Conflicts: doc/sphinx/installation/install_source.rst
-
Guillaume Quintard authored
-
- 06 Nov, 2020 2 commits
-
-
Martin Blix Grydeland authored
-
Martin Blix Grydeland authored
-
- 05 Nov, 2020 1 commit
-
-
Martin Blix Grydeland authored
If given a build parameter called 'dist-url', the build script downloads a tarball from the given URL instead of doing a 'make dist' step.
-
- 04 Nov, 2020 2 commits
-
-
Pål Hermunn Johansen authored
This reverts commit 4f99d164. This was merged by mistake, before review. I am reverting this so that we can do the quality assurance before the actual merge. Sorry.
-
Pål Hermunn Johansen authored
-
- 02 Nov, 2020 1 commit
-
-
Reza Naghibi authored
Also make sure we didn't overflow before entering vcl_pipe; an overflow would mean we have lost important connection headers.
-
- 31 Oct, 2020 1 commit
-
-
Poul-Henning Kamp authored
-
- 26 Oct, 2020 1 commit
-
-
Guillaume Quintard authored
-
- 24 Oct, 2020 1 commit
-
-
Guillaume Quintard authored
-
- 23 Oct, 2020 1 commit
-
-
Guillaume Quintard authored
-