- 09 Mar, 2020 3 commits

- Nils Goroll authored

- Nils Goroll authored
  This reverts commit 0c3ec169, as decided by bugwash. Ref #3218

- Dridi Boukelmoune authored

- 07 Mar, 2020 4 commits

- Federico G. Schwindt authored

- Federico G. Schwindt authored

- Federico G. Schwindt authored

- Nils Goroll authored

- 06 Mar, 2020 3 commits

- Dridi Boukelmoune authored

- Nils Goroll authored

- Dridi Boukelmoune authored

- 05 Mar, 2020 4 commits

- Nils Goroll authored
  This might need more love in places, but I wanted to get my first pass done and pushed before an ICE train arrives in Hamburg. I suggest we aim for a complete and correct changelog first, before we rewrite/reformat it for the release and upgrading notes. Please review and feel free to fix/improve, or to add changes which I left out or might have missed.

- Nils Goroll authored

- Dridi Boukelmoune authored
  The longest part is the test suite, so in order to release build machines sooner I hope that 8 concurrent jobs will strike a good balance between speed and resource consumption, considering that test cases mostly wait. That is still below what we do for Circle CI builds, with 12 jobs for distcheck tasks.

- Nils Goroll authored
  Went through commits and dropped notes in running order; stopped at f402e5ff for now (had to switch tasks).

- 04 Mar, 2020 4 commits

- Dridi Boukelmoune authored

- Dridi Boukelmoune authored

- Dridi Boukelmoune authored
  Better late than never.

- Dridi Boukelmoune authored
  Closes #3238

- 03 Mar, 2020 6 commits

- Dridi Boukelmoune authored
  The havemsg variable is purely local, and only written-then-read once.

- Dridi Boukelmoune authored

- Nils Goroll authored
  Previously, we checked `v1l->deadline` (which gets initialized from the `send_timeout` session attribute or parameter) only for short writes, so for successful "dripping" http1 writes (streaming from a backend busy object with small chunks), we did not respect the timeout.

  This patch restructures `V1L_Flush()` to always check the deadline before a write by turning a `while () { ... }` into a `do { ... } while` with the same continuation criteria: `while (i != v1l->liov)` is turned into `if (i == v1l->liov) break;` and `while (i > 0 || errno == EWOULDBLOCK)` is kept to retry short writes. This also reduces the `writev()` call sites to one.

  Fixes #3189

- Nils Goroll authored
  As @Dridi and myself concluded, the send_timeout had no effect on backend connections anyway, because we never set SO_SNDTIMEO (aka idle_send_timeout on the client side) on backend connections.

  With the next commit, we will fix the send_timeout on the client side and would thus also enable it for "dripping" writes on the backend side. To preserve existing behavior for the time being, we explicitly disable the timeout (actually a deadline) on the backend side. There is ongoing work in progress to rework all of our timeouts for 7.x.

  Implementation note: `if (VTIM_real() > v1l->deadline)` evaluates to false for `v1l->deadline == NaN`.

  Ref #3189

- Nils Goroll authored
  and check our WS_Snapshot() / WS_Reset() debug messages

- Nils Goroll authored
  We use workspace overflows to signal that we need to bail out, for example after a failing `VRT_SetHdr()`. This guarantees that, if some serious issue occurred during processing, we send an error downstream rather than an incomplete response or the result of incomplete processing.

  We use the `WS_Snapshot() ... WS_Reset()` pattern as a kind of second-order workspace allocation, where the called code itself uses `WS_Reserve()`. With this usage pattern, `WS_Reset()` called `ws_ClearOverflow(ws)`, potentially clearing the overflow bit from a previous, relevant failure.

  We now avoid any other unintended clears of the overflow bit by splitting the function in two:

  * `WS_Rollback()` is now what `WS_Reset()` used to be: it clears overflows and accepts the zero cookie for a reset-to-start. It is only intended for use within varnishd and is thus declared in cache_varnishd.h.

  * `WS_Reset()` does not touch the overflow bit any longer, ensuring that a once-overflowed workspace stays overflowed.

  `WS_Snapshot()` now returns a magic value which gets recognized by `WS_Reset()` to ensure that the overflowed marker is still present. This serves two purposes:

  - better debugging, and

  - a safety measure against passing a cookie from an already overflowed workspace to `WS_Rollback()`.

  Fixes #3194

- 02 Mar, 2020 14 commits

- Nils Goroll authored
  We add an attribute macro which, for gcc, will disable compiler optimizations. The intended use is VCL subs which are called exactly once, such that any additional compilation effort is wasteful with respect to the overall vcl.load time.

  We also decorate the vcc-generated C code for vcl_init and vcl_fini with v_dont_optimize, which is simple and easy:

      void v_dont_optimize v_matchproto_(vcl_func_f)
      VGC_function_vcl_fini(VRT_CTX)

      void v_dont_optimize v_matchproto_(vcl_func_f)
      VGC_function_vcl_init(VRT_CTX)

  A more difficult improvement is left to be done: any custom VCL subs which are called from vcl_init and/or vcl_fini _only_ should also be decorated with v_dont_optimize. With the current code base, determining if a custom sub qualifies would require helper code in vcc_xref to check all uses and return the usage union. As #3163 requires a similar mechanism, and because we are about to enter the pre-6.4 release freeze, the better option seems to be to implement this "use mask" when/if #3163 gets in, and to come back to this optimization then.

  Closes #3230

- Nils Goroll authored
  So far, we have focused on transitioning vmod arguments to STRANDS. To take the next step and also return STRANDS (which leverages the actual benefit: not copying unmodified strand elements), vmods need to allocate a struct strands on the workspace. This commit adds a utility function for this common task.

  Implementation note: a single WS_Alloc() with some pointer arithmetic would suffice, but using two results in cleaner code.

- Nils Goroll authored
  This commit adds the final bit to fix #2902: as discussed in the ticket, there should be no VRT_CTX msg vsb present for COLD events, yet existing code did provide it in some cases. This commit message comments on changes in cache_vcl.c in order of appearance.

  Existing code kept the CLI VRT_CTX around for longer than it strictly needed to, just because access to the error message in ctx->msg was still required. In order to be able to simplify management of the CLI VRT_CTX, we change VCL_Rel_CliCtx() to return the ctx->msg IFF it was requested by a true argument to VCL_Get_CliCtx().

  Regarding #2902, the main issue with the existing code was that the decision on whether or not to request a ctx->msg was separated from where we could actually make that decision: it is (only) in vcl_send_event() where we know which event is going to be issued, which, in turn, determines if we are going to need a ctx->msg. Thus, we move the VRT_CTX management for all of the CLI operations into one place in vcl_send_event() and change the infrastructure around it to handle the vcl and the error message vsb instead of a VRT_CTX. This allows us to (finally) ensure that VCL_EVENT_COLD never has a ctx->msg error vsb, asserted by AZ(havemsg) in the switch/case statement in vcl_send_event().

  In vcl_send_event(), we also assert that, if a vcl subroutine was called in the case of a LOAD or DISCARD event, only vcl_init {} for a LOAD event may return anything but OK.

  vcl_set_state() contains the two use cases of vcl_send_event(): when we know that an event may not fail, we also assert that there is no msg vsb returned; if it could fail, we keep the vsb, as it is still required by its callers.

  vcl_cancel_load() shows how the new infrastructure simplifies the code: rather than having to re-set up the VRT_CTX for the DISCARD event, we simply output the error and have vcl_send_event() set up the proper handling for that event. Basically the same pattern repeats in vcl_load(), VCL_Poll() and vcl_cli_state(): in these functions, we only look after error handling and output.

  Fixes #2902

- Nils Goroll authored
  In preparation for the next commit, we avoid using the ctx->msg vsb and use a separate vsb for the VCL_Open step of vcl_load. Ref #2902

- Nils Goroll authored
  vcl_event dictates ctx->method, so we set it where we need it.

  Note: this is in preparation for the disabled AZ(ctx->msg) for cold events, but we are not there yet. Ref #2902

- Dridi Boukelmoune authored
  I have a plan to help them fit in the table, and regrouping them will make matters simpler. The diff is much better with the --patience option.

- Dridi Boukelmoune authored
  It was already gone for the 4.1.0 release, replaced by the varnishd -W option.

- Dridi Boukelmoune authored
  And with that, trailing zero flags can be omitted.

- Dridi Boukelmoune authored
  It turns out it's not just minimum and maximum values that can be dynamic.

- Dridi Boukelmoune authored
  It's a bit weird that the default value and the unit are separated from the min and max. This aligns the struct parspec field order with the PARAM() macro for these two fields. In static parspec definitions, the flags field can now be omitted if there are no flags and the parameter is not subject to dynamic bounds.

- Dridi Boukelmoune authored
  I didn't change their shorthand counterparts (for example st for s-text) because they will go away soon enough.

- Dridi Boukelmoune authored
  Now that PARAM() is a vararg macro, we can afford to have the dynamic-bounds reasons only for relevant parameters, without bloating the rest of the table.

- Dridi Boukelmoune authored
  The two places where the parameters table is used have very distinct needs, and even among parameters themselves we will need some degree of variation.

- Dridi Boukelmoune authored
  This is the first of a series of mechanical patches, made possible by vim macros. It is a first step towards continuing the work initiated in 39e0bc53, and the first observation is that we have some dead weight in the table.

- 27 Feb, 2020 1 commit

- Dridi Boukelmoune authored
  This was documented following the naming convention in tree, but implemented differently. In order not to change the behavior, only the documentation is updated. Closes #3232

- 26 Feb, 2020 1 commit

- Dridi Boukelmoune authored