- 01 Nov, 2019 1 commit
Nils Goroll authored
- 31 Oct, 2019 4 commits
Nils Goroll authored
Nils Goroll authored
In vped_deliver, we used the wrong gzip state pointer for ESI objects, which led to the pretendgzip filter not being pushed. Also, we should respect the RES_ESI bit to properly handle the case where ESI is deactivated on an ESI object. (to be tested)
Nils Goroll authored
Move a block of initialization statements which should run at all levels.
Nils Goroll authored
- 30 Oct, 2019 2 commits
Nils Goroll authored
Geoff Simmons authored
Before the bugfix for Varnish #3109 (d235b3c90a631ef39fdf0a8103e44ebfb0ddbacb), the gunzip VDP went into an infinite loop for such a case.
- 29 Oct, 2019 5 commits
Geoff Simmons authored
configure checks whether lcov and genhtml are available; they can also be specified explicitly with --with-lcov and/or --with-genhtml. If they are available, then make coverage does the following:
- make clean, then make check with CC=gcc and CFLAGS set so that inputs for gcov/lcov are generated.
- lcov creates the src/coverage subdir and generates a tracefile there.
- genhtml generates HTML reports in src/coverage.
Nils Goroll authored
The continue statement continued the inner loop; in fact, the outer while (node->state >= ST_UNPENDING) loop was intended to be continued. I could neither convince myself that this cannot cause an infinite loop nor could I convince myself that it does.
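As an illustration (a minimal standalone sketch, not the vmod's actual code): in C, continue always applies to the innermost enclosing loop, so continuing an outer loop from inside a nested one needs an explicit jump.

```c
#include <stdio.h>

int
main(void)
{
    int round, i;

    /* Buggy shape: "continue" only restarts the inner for loop. */
    for (round = 0; round < 2; round++)
        for (i = 0; i < 3; i++) {
            if (i == 1)
                continue;   /* goes to i == 2, NOT to the outer loop */
            printf("buggy: round %d, i %d\n", round, i);
        }

    /* Fixed shape: jump past the rest of the outer loop body so the
     * outer loop condition is re-evaluated, as was intended. */
    for (round = 0; round < 2; round++) {
        for (i = 0; i < 3; i++) {
            if (i == 1)
                goto continue_outer;    /* continues the outer loop */
            printf("fixed: round %d, i %d\n", round, i);
        }
continue_outer:;
    }
    return (0);
}
```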
Nils Goroll authored
Nils Goroll authored
- 27 Oct, 2019 1 commit
Geoff Simmons authored
- 25 Oct, 2019 2 commits
Nils Goroll authored
Ref: eb805e8e

We cannot signal task_fini after posting subreq.done, because then we race the assertion that all tasks are done when we are done delivering. But we also cannot do it the other way around, because then the assertion that subreqs are done when all tasks are finished does not hold. So the right option should be to do both under the tree lock.
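A minimal pthread sketch of that reasoning, with made-up names (subreqs_pending and tasks_running stand in for the subreq.done and task_fini bookkeeping): because both counters only ever change under the same mutex, anyone checking under that lock sees them change atomically, so neither of the two assertions can race.

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical shared state guarded by one lock (the "tree lock"). */
static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t tree_cond = PTHREAD_COND_INITIALIZER;
static int subreqs_pending = 1;
static int tasks_running = 1;

static void *
subreq_task(void *priv)
{
    (void)priv;
    /* Post both completion signals under the same lock, so no
     * observer holding the lock sees one without the other. */
    pthread_mutex_lock(&tree_lock);
    subreqs_pending--;  /* "subreq.done" */
    tasks_running--;    /* "task_fini" */
    pthread_cond_broadcast(&tree_cond);
    pthread_mutex_unlock(&tree_lock);
    return (NULL);
}

int
main(void)
{
    pthread_t thr;

    pthread_create(&thr, NULL, subreq_task, NULL);

    /* Delivery side: once all tasks are done, all subreqs must be
     * done as well -- and vice versa -- because both counters are
     * only updated together under tree_lock. */
    pthread_mutex_lock(&tree_lock);
    while (tasks_running > 0)
        pthread_cond_wait(&tree_cond, &tree_lock);
    assert(subreqs_pending == 0);
    pthread_mutex_unlock(&tree_lock);

    pthread_join(thr, NULL);
    return (0);
}
```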
Nils Goroll authored
assert_node is only safe under the bytes_tree lock; yet, for the error case, we called it while other threads could still be running. Move it until after we know that all other tasks are done. This also implies a second assert_node for the retval == 0 case, which shouldn't do any harm.

should fix:

Panic at: Thu, 24 Oct 2019 14:21:27 GMT
Assert error in assert_node(), node_assert.h line 103:
  Condition((node->nexus.owner) != 0) not true.
version = varnish-6.2.1 revision 9f8588e4ab785244e06c3446fe09bf9db5dd8753, vrt api = 9.0
ident = Linux,3.10.0-1062.4.1.el7.x86_64,x86_64,-junix,-smalloc,-smalloc,-hcritbit,epoll
now = 29366.422728 (mono), 1571926828.402512 (real)
Backtrace:
  0x43cf3b: /usr/sbin/varnishd() [0x43cf3b]
  0x4a01c2: /usr/sbin/varnishd(VAS_Fail+0x42) [0x4a01c2]
  0x7f6bf06a033c: ./vmod_cache/_vmod_pesi.4d9e0603bac2a2e2b2627f7fe90ff1d55d4759545517c869a5571f16636e230e(+0xe33c) [0x7f6bf06a033c]
  0x4222b6: /usr/sbin/varnishd(VDP_close+0x66) [0x4222b6]
  ...

(gdb) bt
#0  0x00007f6db97d2337 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:55
#1  0x00007f6db97d3a28 in __GI_abort () at abort.c:90
#2  0x000000000043d232 in pan_ic ()
#3  0x00000000004a01c2 in VAS_Fail ()
#4  0x00007f6bf06a033c in assert_node (check=CHK_ANY, node=<optimized out>) at node_assert.h:103
#5  vdp_pesi_fini (req=0x7f6ba8f52020, priv=0x7f6ba8f57aa8) at vdp_pesi.c:782
#6  0x00000000004222b6 in VDP_close ()
#7  0x0000000000464c5e in V1D_Deliver ()
#8  0x0000000000441eab in CNT_Request ()
#9  0x00000000004665b3 in http1_req ()
#10 0x000000000045c833 in WRK_Thread ()
#11 0x000000000045ccf0 in pool_thread ()
#12 0x00007f6db9b71e65 in start_thread (arg=0x7f6cdc5dc700) at pthread_create.c:307
#13 0x00007f6db989a88d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

(gdb) f 4
#4  0x00007f6bf06a033c in assert_node (check=CHK_ANY, node=<optimized out>) at node_assert.h:103
103  AN(node->nexus.owner);
- 23 Oct, 2019 2 commits
Nils Goroll authored
For the case of gzip included in a plain response, the esi_level == 1 VDP order was:

  pesi gunzip pesi_buf V2P(to_parent)

Yet we had assertions in place that pesi_buf always immediately follows pesi. The reason was that, for esi_level > 0, we would not push pesi_buf from pesi init but rather from the transport, which was plain wrong: we should delay any additional VDPs in order to buffer the least amount of data.

Working on this, I also noted that, for the generic buffering case, our assertion that pesi_buf is first might be too strict. Now, any VDPs before the buffer are being closed at esi_level > 1.

fixes this panic:

Assert error in vped_close_vdp(), vdp_pesi.c line 1182:
  Condition(vdpe->vdp == vdp) not true.
...
Backtrace:
  0x43cf3b: /usr/sbin/varnishd() [0x43cf3b]
  0x4a01c2: /usr/sbin/varnishd(VAS_Fail+0x42) [0x4a01c2]
  0x7f3719306b63: ./vmod_cache/_vmod_pesi.4d9e0603bac2a2e2b2627f7fe90ff1d55d4759545517c869a5571f16636e230e(+0x8b63) [0x7f3719306b63]
  0x7f371930a377: ./vmod_cache/_vmod_pesi.4d9e0603bac2a2e2b2627f7fe90ff1d55d4759545517c869a5571f16636e230e(+0xc377) [0x7f371930a377]
  0x441eab: /usr/sbin/varnishd(CNT_Request+0x11ab) [0x441eab]
  0x7f3719308043: ./vmod_cache/_vmod_pesi.4d9e0603bac2a2e2b2627f7fe90ff1d55d4759545517c869a5571f16636e230e(+0xa043) [0x7f3719308043]
  0x45c833: /usr/sbin/varnishd() [0x45c833]
  0x45ccf0: /usr/sbin/varnishd() [0x45ccf0]
  0x7f37e6d18e65: /lib64/libpthread.so.0(+0x7e65) [0x7f37e6d18e65]
  0x7f37e6a4188d: /lib64/libc.so.6(clone+0x6d) [0x7f37e6a4188d]
thread = (cache-worker)
pthread.attr = {
  guard = 4096,
  stack_bottom = 0x7f372c482000,
  stack_top = 0x7f372c502000,
  stack_size = 524288,
}
thr.req = 0x7f3601106020 {
  vxid = 25559391, transport = PESI_INCLUDE
  step = R_STP_TRANSMIT,
  req_body = R_BODY_NONE,
  restarts = 0, esi_level = 1,
...
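A schematic sketch of the relaxed approach, using hypothetical types rather than Varnish's actual VDP entry structures: instead of asserting that the buffer filter sits at a fixed position in the chain, pop and close whatever precedes it.

```c
#include <sys/queue.h>
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical delivery-filter entry; Varnish keeps the real ones
 * in a VTAILQ hanging off the request. */
struct filter {
    const char          *name;
    TAILQ_ENTRY(filter) list;
};
TAILQ_HEAD(filter_head, filter);

/* Close (pop) filters from the front of the chain until the named
 * filter is first.  The too-strict predecessor of this would have
 * asserted that it already was. */
static void
close_until(struct filter_head *head, const char *until)
{
    struct filter *f;

    while ((f = TAILQ_FIRST(head)) != NULL &&
        strcmp(f->name, until) != 0) {
        printf("closing %s\n", f->name);
        TAILQ_REMOVE(head, f, list);
        free(f);
    }
}

static void
push(struct filter_head *head, const char *name)
{
    struct filter *f = malloc(sizeof *f);

    assert(f != NULL);
    f->name = name;
    TAILQ_INSERT_TAIL(head, f, list);
}

int
main(void)
{
    struct filter_head head = TAILQ_HEAD_INITIALIZER(head);

    /* The problematic esi_level == 1 order from above. */
    push(&head, "pesi");
    push(&head, "gunzip");
    push(&head, "pesi_buf");
    push(&head, "V2P(to_parent)");

    close_until(&head, "pesi_buf");
    printf("first filter is now: %s\n", TAILQ_FIRST(&head)->name);
    return (0);
}
```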
Nils Goroll authored
We pass on reqs from ESI subrequests to the top request for delivery. Doing so, we need to give them the top request's worker so that VDPs requiring it are happy.
- 18 Sep, 2019 1 commit
Nils Goroll authored
When VDP_DeliverObj() was not called, for example for a HEAD request or a return code which implies no response body, bytes_tree->npending == 0 was not true.

To avoid additional complications, we code the fact that the root node, if used, is pending into the npending_private field, which is meant for this purpose but otherwise only accounts for nodes below the root. That restriction is not relied upon anywhere, so this use case should be perfectly fine.

Also add a test for HEAD requests on an actual ESI object.

Note on possible alternatives: I do not think a solution at VDP init time is possible because, after the vmod gets pushed from VCL, the response status can still be changed, with effects on whether or not a response body is to be sent (e.g. changed from 200 to 204 after the VDP is pushed). So our only chance is to handle the case when the VDP gets called next after _init, which is just _fini.
- 13 Sep, 2019 3 commits
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
- 28 Aug, 2019 7 commits
Nils Goroll authored
Use a link for uses outside the function documentation and ``pesi.function()`` as the code snippet.
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
Nils Goroll authored
"Typically it suffices...", IMHO, is in some way misleading and contradictory to ".activate mus be called in vcl_deliver at all levels" stated before.
Nils Goroll authored
- 14 Aug, 2019 2 commits
Geoff Simmons authored
Geoff Simmons authored
- 09 Aug, 2019 5 commits
Geoff Simmons authored
Geoff Simmons authored
Geoff Simmons authored
resp.do_esi in VCL 4.1, req.esi in 4.0.
Geoff Simmons authored
Geoff Simmons authored
- 08 Aug, 2019 2 commits
Geoff Simmons authored
Geoff Simmons authored
- 06 Aug, 2019 2 commits
Geoff Simmons authored
Geoff Simmons authored
- 05 Aug, 2019 1 commit
Nils Goroll authored
When we prune the tree, the current node, which contains the link to the next node to unpend, may get freed, so we need to iterate using the _SAFE variant, which saves that link.
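The idiom in question, as a standalone sketch (Varnish ships these queue(3) macros as VTAILQ_*; the fallback definition below is included because glibc's sys/queue.h lacks the _SAFE variants): _SAFE saves the successor pointer in a temporary before the loop body runs, so the body may free the current element.

```c
#include <sys/queue.h>
#include <stdio.h>
#include <stdlib.h>

#ifndef TAILQ_FOREACH_SAFE
#define TAILQ_FOREACH_SAFE(var, head, field, tvar)              \
    for ((var) = TAILQ_FIRST((head));                           \
        (var) != NULL && ((tvar) = TAILQ_NEXT((var), field), 1); \
        (var) = (tvar))
#endif

struct node {
    int                 id;
    TAILQ_ENTRY(node)   list;
};
TAILQ_HEAD(node_head, node);

int
main(void)
{
    struct node_head head = TAILQ_HEAD_INITIALIZER(head);
    struct node *n, *tmp;
    int i;

    for (i = 0; i < 4; i++) {
        n = malloc(sizeof *n);
        n->id = i;
        TAILQ_INSERT_TAIL(&head, n, list);
    }

    /*
     * Plain TAILQ_FOREACH would read n's next pointer AFTER the
     * body, i.e. after free(n): a use-after-free.  The _SAFE
     * variant reads it into tmp before the body runs.
     */
    TAILQ_FOREACH_SAFE(n, &head, list, tmp) {
        printf("pruning node %d\n", n->id);
        TAILQ_REMOVE(&head, n, list);
        free(n);
    }
    return (0);
}
```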