- 25 Oct, 2021 2 commits
-
-
Poul-Henning Kamp authored
-
Dridi Boukelmoune authored
Ever since my system upgraded haproxy to 2.3.10, this test has consistently timed out. While that would be a breaking change involving the independent vtest project too, I think the VTC syslog spec would work better with something like: expect skip facility.level regex, where skip could be a uint, * or ? (similar to how logexpect works), and both facility and level could also be * to be non-specific. For now, let's hope this does not break the test suite for anyone else. Conflicts: bin/varnishtest/tests/h00005.vtc
-
- 20 Aug, 2021 1 commit
-
-
Dridi Boukelmoune authored
In the output of vcl.show -v, it means that the least useful file (in the sense that it is common to every single vcl.load) is now printed last. This change originates from a larger and more intrusive refactoring. It also helps get rid of spurious -Wstring-concatenation warnings from clang 12 in the test suite, instead of disabling that warning altogether. Refs c8174af6
-
- 11 Aug, 2021 1 commit
-
-
Reza Naghibi authored
Conflicts: bin/varnishd/cache/cache_esi_deliver.c
-
- 03 Aug, 2021 1 commit
-
-
Dridi Boukelmoune authored
This is just the order of their declaration in the VCL manual. As a side effect, it works around a bug where the sess.xid syntax requirements would prevent sess.timeout_idle from being used in VCL 4.0, which is less intrusive than a proper fix. The bug was fixed in trunk without being noticed, in the course of many heavy changes to libvcc. For a stable branch this is less risky than a back-port since there are only two sess.* symbols. Fixes #3564
-
- 01 Jul, 2021 5 commits
-
-
Martin Blix Grydeland authored
-
Martin Blix Grydeland authored
-
Martin Blix Grydeland authored
-
Martin Blix Grydeland authored
When receiving H/2 DATA frames, make sure to take the advertised content length into account, and fail appropriately if the total size of the DATA frames does not match the content length.
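The accounting described above can be sketched as a small check, run as DATA frames arrive and again at end-of-stream. This is a minimal illustrative model, not Varnish's actual code; the function name and signature are assumptions.

```c
#include <stdint.h>

/*
 * Hypothetical sketch of content-length accounting across H/2 DATA
 * frames. Returns 0 while the body may still complete, -1 on a
 * mismatch. content_length < 0 means no Content-Length was advertised,
 * so there is nothing to enforce.
 */
static int
h2_body_check(int64_t content_length, int64_t received, int end_stream)
{
	if (content_length < 0)
		return (0);
	if (received > content_length)
		return (-1);	/* DATA frames exceed advertised length */
	if (end_stream && received != content_length)
		return (-1);	/* stream ended short of advertised length */
	return (0);
}
```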
-
Martin Blix Grydeland authored
The change to VTCP_Check() in 58a21da7 broke expect_close in varnishtest.
-
- 28 Apr, 2021 5 commits
-
-
Dridi Boukelmoune authored
Technically it can also happen with a debugger attached to a process despite SA_RESTART.
-
Dridi Boukelmoune authored
Conflicts: lib/libvarnish/binary_heap.c
-
Nils Goroll authored
... documented on Linux as POSIX.1. The exception here is ECONNREFUSED, which so far we only tolerate on Solaris and which seems to make sense for connect() only. To be discussed in #3539
-
Nils Goroll authored
Conflicts: lib/libvarnish/vtcp.c
-
Nils Goroll authored
It was already accepted on Solaris and NetBSD; now we have seen it on Linux, and I think it does not make sense to keep the exception for Apple. Fixes #3532 (hopefully) Conflicts: lib/libvarnish/vtcp.c
-
- 23 Apr, 2021 1 commit
-
-
Reza Naghibi authored
-
- 22 Apr, 2021 7 commits
-
-
Martin Blix Grydeland authored
This adds VTCP_Assert() on the result of read and write calls that deal with TCP sockets. Conflicts: bin/varnishd/proxy/cache_proxy_proto.c
-
Martin Blix Grydeland authored
-
Martin Blix Grydeland authored
When used to check the result of read() and write() calls, it is useful for VTCP_Check() to accept a positive return value.
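A predicate of the shape this series of commits describes might look like the sketch below: any non-negative result (including partial I/O) passes, and a small set of errno values is tolerated on failure. The function name and the exact set of tolerated errno values here are illustrative assumptions, not Varnish's real VTCP_Check().

```c
#include <errno.h>
#include <sys/types.h>

/*
 * Hedged sketch of a VTCP_Check()-style predicate extended to accept
 * positive read()/write() results. Returns 1 for tolerated outcomes,
 * 0 for unexpected errors (on which the caller would assert).
 */
static int
vtcp_check(ssize_t a)
{
	if (a >= 0)		/* success, including partial I/O */
		return (1);
	switch (errno) {
	case EAGAIN:		/* e.g. a socket timeout expired */
	case ECONNRESET:
	case EPIPE:
		return (1);	/* tolerated failure modes */
	default:
		return (0);
	}
}
```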
-
Martin Blix Grydeland authored
Now that VTCP_Assert() accepts EAGAIN as a legal errno value for read() errors, uncomment this check.
-
Martin Blix Grydeland authored
Since the input value is sometimes the result of a read()/write() call, avoid truncating the ssize_t value when calling it.
-
Martin Blix Grydeland authored
When a socket timeout is set on a socket and the timeout expires, read() and write() calls on that socket will return (-1) with errno set to EAGAIN/EWOULDBLOCK. Conflicts: lib/libvarnish/vtcp.c
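The behaviour this commit relies on can be demonstrated directly: with SO_RCVTIMEO set, a read() on a socket with no pending data returns -1 with errno set to EAGAIN/EWOULDBLOCK instead of blocking. The helper below is a self-contained demonstration (the function name is ours, not Varnish's).

```c
#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/*
 * Returns 1 if a read() on a socket with an expired SO_RCVTIMEO
 * fails with EAGAIN/EWOULDBLOCK, -1 on setup failure.
 */
static int
read_times_out(void)
{
	int sv[2];
	char buf[1];
	ssize_t n;
	int timed_out;
	struct timeval tv;

	memset(&tv, 0, sizeof tv);
	tv.tv_usec = 100000;	/* 100 ms receive timeout */

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
		return (-1);
	if (setsockopt(sv[0], SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv) != 0)
		return (-1);

	/* Nothing was written to sv[1]: the read must time out. */
	n = read(sv[0], buf, sizeof buf);
	timed_out = (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK));

	close(sv[0]);
	close(sv[1]);
	return (timed_out);
}
```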
-
Martin Blix Grydeland authored
Consistently use VTCP_Assert when asserting on the result of VTCP_Check().
-
- 21 Apr, 2021 11 commits
-
-
Martin Blix Grydeland authored
Once HSH_Unbusy() has been called there is a possibility for EXP_Remove() to be called before the fetch thread has had a chance to call EXP_Insert(). By adding an OC_EF_NEW flag on the objects during HSH_Unbusy(), which is removed again during EXP_Insert(), we can keep track and clean up once EXP_Insert() is called by the inserting thread if EXP_Remove() was called in the meantime. This patch also removes the AZ(OC_F_DYING) in EXP_Insert(), as that is no longer a requirement. Fixes: #2999
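The flag handshake described above can be modelled with a tiny state machine: a deferred-removal flag is recorded if EXP_Remove() arrives while OC_EF_NEW is still set, and EXP_Insert() then reports that the object must be cleaned up. OC_EF_NEW matches the commit; the OC_EF_REMOVED flag, the struct layout, and the locking-free simplification are all illustrative assumptions.

```c
/* Illustrative model of the OC_EF_NEW handshake, not Varnish code. */
#define OC_EF_NEW	(1U << 0)  /* set in HSH_Unbusy(), cleared in EXP_Insert() */
#define OC_EF_REMOVED	(1U << 1)  /* EXP_Remove() arrived before EXP_Insert() */

struct objcore {
	unsigned	exp_flags;
};

static void
hsh_unbusy(struct objcore *oc)
{
	oc->exp_flags |= OC_EF_NEW;
}

static void
exp_remove(struct objcore *oc)
{
	if (oc->exp_flags & OC_EF_NEW)
		oc->exp_flags |= OC_EF_REMOVED;	/* defer: insert has not run */
	/* else: remove from the expiry structures directly */
}

/* Returns 1 if the inserting thread must clean the object up itself. */
static int
exp_insert(struct objcore *oc)
{
	oc->exp_flags &= ~OC_EF_NEW;
	return ((oc->exp_flags & OC_EF_REMOVED) != 0);
}
```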
-
Martin Blix Grydeland authored
This makes the order of events the same as on real cache insertions.
-
Martin Blix Grydeland authored
The OC_EF_REFD flag indicates whether expiry has a ref on the OC. Previously, the flag was only gained during the call to EXP_Insert. With this patch, and the helper function EXP_RefNewObjcore(), the flag is gained while holding the objhead mutex during HSH_Unbusy(). This enables the expiry functions to test on missing OC_EF_REFD and quickly return without having to take the main expiry mutex. Conflicts: bin/varnishd/cache/cache_varnishd.h
-
Martin Blix Grydeland authored
When posting to the expiry thread, we wrongly incremented exp_mailed even if the OC in question was already on the mail queue. This could cause a discrepancy between the exp_mailed and exp_received counters.
-
Martin Blix Grydeland authored
This enables doing extra handling while holding the mutex specific to EXP_Insert/EXP_Remove before/after calling exp_mail_it.
-
Nils Goroll authored
Background: when the ban lurker has finished working the bottom of the ban list, conceptually we mark all bans it has evaluated as completed and then remove the tail of the ban list, which has no references any more. Yet, for efficiency, we first remove the tail and then mark completed only those bans which we did not remove. Doing so depends on knowing where in the (obans) list of bans to be completed the new tail of the ban list lies after pruning. 5dd54f83 was intended to solve this, but the fix was incomplete (and also unnecessarily complicated): for example, when a duplicate ban was issued, ban_lurker_test_ban() could remove a ban from the obans list which later happened to become the new ban tail. We now - hopefully - solve the problem for real by properly cleaning the obans list when we prune the ban list. Fixes #3006 Fixes #2779 Fixes #2556 for real (5dd54f83 was incomplete) Conflicts: bin/varnishd/cache/cache_ban_lurker.c
-
Martin Blix Grydeland authored
The watchdog mechanism currently triggers when any queueing is happening, regardless of the priority. Strictly speaking it is only the backend fetches that are critical to get executed, and the current behaviour prevents the thread limits from being used as limits on the amount of work the Varnish instance should handle. This can be especially important for instances with H/2 enabled, as these connections will be holding threads for extended periods of time, possibly triggering the watchdog in benign situations. This patch limits the watchdog to trigger only when there is no queue development on the highest priority queue.
-
Martin Blix Grydeland authored
When accepting new incoming connections in the acceptor thread, the tasks it scheduled would be registered with the VCA priority. This priority is reserved for the acceptor thread itself, and specifically is not included in the TASK_QUEUE_CLIENT categorisation. This would interfere with the thread reserve pools. t02011.vtc had to be adjusted to account for the new priority categorisation of the initial request.
-
Nils Goroll authored
This test is to detect a deadlock which does not exist any more. IMHO, the only sensible way to test for the lack of it now is to do a load test, which is not what we want in vtc.
-
Nils Goroll authored
... introduced with 3bb8b84c: in Pool_Work_Thread(), we could break out of the for (i = 0; i < TASK_QUEUE__END; i++) loop with tp set to the value from the previous iteration of the top while() loop, where it should have been NULL (for no task found). Noticed staring at #3192 - unclear yet if related
-
Nils Goroll authored
Previously, we used a minimum number of idle threads (the reserve) to ensure that we do not assign all threads to client requests, leaving no threads for backend requests. This was actually only a special case of the more general issue exposed by h2: lower priority tasks depend on higher priority tasks (for h2, sessions need streams, which need requests, which may need backend requests). To solve this problem, we divide the reserve by the number of priority classes and schedule lower priority tasks only if there are enough idle threads to run higher priority tasks eventually. This change does not guarantee any upper limit on the amount of time it can take for a task to be scheduled (e.g. backend requests could be blocking on arbitrarily long timeouts), so the thread pool watchdog is still warranted. But this change should guarantee that we do make progress eventually. With the reserves, thread_pool_min needs to be no smaller than the number of priority classes (TASK_QUEUE__END). Ideally, we should have an even higher minimum (@Dridi rightly suggested to make it 2 * TASK_QUEUE__END), but that would prevent the very useful test t02011.vtc. For now, the value of TASK_QUEUE__END (5) is hardcoded as such for the parameter configuration and documentation because auto-generating it would require include/macro dances which I consider over the top for now. Instead, the respective places are marked and an assert is in place to ensure we do not start a worker with too small a number of workers. I decided against checks in the manager to avoid include pollution from the worker (cache.h) into the manager. Fixes #2418 for real Conflicts: bin/varnishd/cache/cache_wrk.c bin/varnishd/mgt/mgt_pool.c
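The scheduling rule described above (reserve divided across priority classes, lower priority runs only if enough idle threads remain for the higher classes) can be sketched as a small admission check. The function name and exact arithmetic are illustrative assumptions; only TASK_QUEUE__END and the general idea come from the commit.

```c
#define TASK_QUEUE__END	5	/* number of priority classes, 0 = highest */

/*
 * Hypothetical admission check: a task of priority class prio
 * (0 = highest) may only consume an idle thread if enough idle
 * threads remain reserved for every higher-priority class.
 */
static int
may_run(int prio, unsigned idle, unsigned reserve)
{
	unsigned per_class = reserve / TASK_QUEUE__END;

	/* Class 0 may always use an idle thread; class p must leave
	 * p * per_class threads for classes 0..p-1. */
	return (idle > (unsigned)prio * per_class);
}
```

With reserve = 5 this degenerates to one reserved thread per class: the lowest class (4) needs five idle threads before it may run, while the highest class runs as soon as any thread is idle.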
-
- 20 Apr, 2021 6 commits
-
-
Dridi Boukelmoune authored
There's no point waiting for the backend to complain if we weren't able to properly send the backend request. Fixes #3556 Conflicts: bin/varnishd/cache/cache_backend.c
-
Dridi Boukelmoune authored
The reason we expect here can be summarized as: this was a pipe transaction or an error occurred. This could be much simpler if we replaced enum sess_close with a struct stream_close instead. Refs dc5bddbd
-
Reza Naghibi authored
Also move the lock up to cover more operations.
-
Reza Naghibi authored
-
Reza Naghibi authored
Previously we would read the response Content-Length from a failed oc, which would make the error response valid. Now, if this is detected, we don't touch the Content-Length.
-
Martin Blix Grydeland authored
VRT_delete_backend() sets be->cooled to non-zero as the only place where that is done. Assert that it is zero on entry as a check that VRT_delete_backend isn't called multiple times.
-