- 27 Oct, 2019 3 commits
-
-
Geoff Simmons authored
-
Nils Goroll authored
Ref: eb805e8e

We cannot signal task_fini after posting subreq.done, because then we race the assertion that all tasks are done when we are done delivering. But we also cannot do it the other way around, because then the assertion that subreqs are done when all tasks are finished does not hold. So the right option is to do both under the tree lock.
-
Nils Goroll authored
assert_node is only safe under the bytes_tree lock, yet, for the error case, we called it while other threads could still be running. Move it until after we know that all other tasks are done. This also implies a second assert_node for the retval == 0 case, which shouldn't do any harm.

should fix:

Panic at: Thu, 24 Oct 2019 14:21:27 GMT
Assert error in assert_node(), node_assert.h line 103:
  Condition((node->nexus.owner) != 0) not true.
version = varnish-6.2.1 revision 9f8588e4ab785244e06c3446fe09bf9db5dd8753, vrt api = 9.0
ident = Linux,3.10.0-1062.4.1.el7.x86_64,x86_64,-junix,-smalloc,-smalloc,-hcritbit,epoll
now = 29366.422728 (mono), 1571926828.402512 (real)
Backtrace:
  0x43cf3b: /usr/sbin/varnishd() [0x43cf3b]
  0x4a01c2: /usr/sbin/varnishd(VAS_Fail+0x42) [0x4a01c2]
  0x7f6bf06a033c: ./vmod_cache/_vmod_pesi.4d9e0603bac2a2e2b2627f7fe90ff1d55d4759545517c869a5571f16636e230e(+0xe33c) [0x7f6bf06a033c]
  0x4222b6: /usr/sbin/varnishd(VDP_close+0x66) [0x4222b6]
  ...

(gdb) bt
#0  0x00007f6db97d2337 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
#1  0x00007f6db97d3a28 in __GI_abort () at abort.c:90
#2  0x000000000043d232 in pan_ic ()
#3  0x00000000004a01c2 in VAS_Fail ()
#4  0x00007f6bf06a033c in assert_node (check=CHK_ANY, node=<optimized out>) at node_assert.h:103
#5  vdp_pesi_fini (req=0x7f6ba8f52020, priv=0x7f6ba8f57aa8) at vdp_pesi.c:782
#6  0x00000000004222b6 in VDP_close ()
#7  0x0000000000464c5e in V1D_Deliver ()
#8  0x0000000000441eab in CNT_Request ()
#9  0x00000000004665b3 in http1_req ()
#10 0x000000000045c833 in WRK_Thread ()
#11 0x000000000045ccf0 in pool_thread ()
#12 0x00007f6db9b71e65 in start_thread (arg=0x7f6cdc5dc700) at pthread_create.c:307
#13 0x00007f6db989a88d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
(gdb) f 4
#4  0x00007f6bf06a033c in assert_node (check=CHK_ANY, node=<optimized out>) at node_assert.h:103
103         AN(node->nexus.owner);
-
- 25 Oct, 2019 1 commit
-
-
Geoff Simmons authored
-
- 23 Oct, 2019 6 commits
-
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Nils Goroll authored
For the case of gzip included in a plain response, the esi_level == 1 VDP order was:

  pesi gunzip pesi_buf V2P(to_parent)

Yet we had assertions in place that pesi_buf always immediately follows pesi. The reason was that, for esi_level > 0, we would not push pesi_buf from pesi init but rather from the transport, which was plain wrong: we should delay any additional VDPs in order to buffer the least amount of data.

Working on this, I also noted that, for the generic buffering case, our assertion that pesi_buf is first might be too strict. Now, any VDPs before the buffer are closed at esi_level > 1.

fixes this panic:

Assert error in vped_close_vdp(), vdp_pesi.c line 1182:
  Condition(vdpe->vdp == vdp) not true.
...
Backtrace:
  0x43cf3b: /usr/sbin/varnishd() [0x43cf3b]
  0x4a01c2: /usr/sbin/varnishd(VAS_Fail+0x42) [0x4a01c2]
  0x7f3719306b63: ./vmod_cache/_vmod_pesi.4d9e0603bac2a2e2b2627f7fe90ff1d55d4759545517c869a5571f16636e230e(+0x8b63) [0x7f3719306b63]
  0x7f371930a377: ./vmod_cache/_vmod_pesi.4d9e0603bac2a2e2b2627f7fe90ff1d55d4759545517c869a5571f16636e230e(+0xc377) [0x7f371930a377]
  0x441eab: /usr/sbin/varnishd(CNT_Request+0x11ab) [0x441eab]
  0x7f3719308043: ./vmod_cache/_vmod_pesi.4d9e0603bac2a2e2b2627f7fe90ff1d55d4759545517c869a5571f16636e230e(+0xa043) [0x7f3719308043]
  0x45c833: /usr/sbin/varnishd() [0x45c833]
  0x45ccf0: /usr/sbin/varnishd() [0x45ccf0]
  0x7f37e6d18e65: /lib64/libpthread.so.0(+0x7e65) [0x7f37e6d18e65]
  0x7f37e6a4188d: /lib64/libc.so.6(clone+0x6d) [0x7f37e6a4188d]
thread = (cache-worker)
pthread.attr = {
  guard = 4096,
  stack_bottom = 0x7f372c482000,
  stack_top = 0x7f372c502000,
  stack_size = 524288,
}
thr.req = 0x7f3601106020 {
  vxid = 25559391, transport = PESI_INCLUDE
  step = R_STP_TRANSMIT,
  req_body = R_BODY_NONE,
  restarts = 0, esi_level = 1,
...
-
Nils Goroll authored
We pass on reqs from ESI subrequests to the top request for delivery. Doing so, we need to give them the top request's worker, so that VDPs requiring it are happy.
-
Geoff Simmons authored
-
Geoff Simmons authored
Package includes the most recent bugfix, and is compatible with Varnish 6.2.1.
-
- 18 Sep, 2019 1 commit
-
-
Nils Goroll authored
When VDP_DeliverObj() was not called, for example for a HEAD request or a return code which implies no response body, bytes_tree->npending == 0 did not hold. To avoid additional complications, we encode the fact that the root node, if used, is pending into the npending_private field, which is meant for this purpose but otherwise only accounts for nodes below the root. Nothing relies on that restriction, so this use case should be perfectly fine.

Also add a test for HEAD requests on an actual ESI object.

Note on possible alternatives: I do not think a solution at VDP init time is possible because, after the vmod gets pushed from VCL, the response status can still be changed, with effects on whether or not a response body is to be sent (e.g. changed from 200 to 204 after the VDP is pushed). So our only chance is to handle the case when the VDP gets called next after _init, which is just _fini.
-
- 03 Sep, 2019 2 commits
-
-
Geoff Simmons authored
-
Geoff Simmons authored
-
- 08 Aug, 2019 4 commits
-
-
Geoff Simmons authored
-
Geoff Simmons authored
Strict compatibility with Varnish 6.2.0.
-
Geoff Simmons authored
-
Geoff Simmons authored
-
- 07 Aug, 2019 2 commits
-
-
Geoff Simmons authored
To reduce risks concerning portability and compatibility, and to avoid accidentally depending on non-standard features.
-
Nils Goroll authored
-
- 06 Aug, 2019 3 commits
-
-
Geoff Simmons authored
- Use WS_Reserve() instead of WS_ReserveAll().
- Add HSH_Cancel() to the "foreign" code.
-
Geoff Simmons authored
-
Geoff Simmons authored
-
- 05 Aug, 2019 1 commit
-
-
Nils Goroll authored
When we prune the tree, the current node, which contains the link to the next unpending node, may get freed, so we need to iterate using the _SAFE variant, which saves that link.
-
- 04 Aug, 2019 16 commits
-
-
Geoff Simmons authored
-
Nils Goroll authored
-
Nils Goroll authored
The code is correct in varnish-cache: the VDP_bytes(req, VDP_NULL, ...) calls do not return. This reverts commit ff7d20a2.
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
Because the ESI request needs to stay alive until all nodes below it have been delivered, now that we have tree pruning we can take the nodes from the ESI request's workspace. See the long comment in src/node.c for more detail and the evidence collected for the tunables.
-
Nils Goroll authored
-
Nils Goroll authored
Once a subtree is completely delivered, we do still need the T_NEXUS at the head of the (sub)tree for integrity (linkage to its siblings), but anything below it can be completely fini'd and freed.
-
Nils Goroll authored
-
Nils Goroll authored
This is a bad leftover from when we still used ALLOC_OBJ/FREE_OBJ for nodes; and anyway, we must not just free a node which is still linked into an otherwise live tree.
-
Nils Goroll authored
During the code restructuring earlier in 77e9559d, I overlooked that VTIM_real() may not be declared where the debug output is used.
-
- 02 Aug, 2019 1 commit
-
-
Nils Goroll authored
-