- 08 Oct, 2018 5 commits
-
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Federico G. Schwindt authored
-
Nils Goroll authored
-
Nils Goroll authored
-
- 06 Oct, 2018 1 commit
-
-
Federico G. Schwindt authored
-
- 05 Oct, 2018 8 commits
-
-
Nils Goroll authored
I failed to consider the hypothetical case that there is only gethrtime() and no clock_gettime(CLOCK_MONOTONIC).
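A minimal sketch of the fallback this covers, assuming hypothetical feature macros (HAVE_CLOCK_GETTIME_MONOTONIC and HAVE_GETHRTIME stand in for whatever configure actually defines):

    #include <time.h>
    #include <sys/time.h>   /* gethrtime() on Solarish systems */

    #if defined(HAVE_CLOCK_GETTIME_MONOTONIC)
    static double
    mono_now(void)
    {
            struct timespec ts;

            (void)clock_gettime(CLOCK_MONOTONIC, &ts);
            return (ts.tv_sec + 1e-9 * ts.tv_nsec);
    }
    #elif defined(HAVE_GETHRTIME)
    static double
    mono_now(void)
    {
            /* only gethrtime() is available: nanoseconds since boot */
            return (gethrtime() * 1e-9);
    }
    #else
    #error "no monotonic clock found"
    #endif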
-
Nils Goroll authored
Throw out the conventional wisdom and base the decision on a micro-benchmark. clock_gettime() is now preferred if it is consistently at least twice as fast as gethrtime(), which is the case on varnishdev-il, the SmartOS vtest machine. config.log gives details on the performance check; sample output below:

    configure:22703: ./conftest
    hrtime 45989530 check 16748699083977959327
    clock_gettime 4119385 check 16748701613138517215
    ...
    hrtime 48113108 check 16748749015170035860
    clock_gettime 4020802 check 16748751585081458308
    clock_gettime wins 10/10
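For reference, a self-contained sketch of what such a micro-benchmark can look like (illustration only, not the actual conftest program; the round count is made up, and the running "check" accumulator exists only to defeat compiler optimization, mirroring the check values in the log above):

    #include <stdio.h>
    #include <time.h>
    #include <sys/time.h>           /* hrtime_t, gethrtime() (Solarish) */

    #define ROUNDS 1000000

    int
    main(void)
    {
            struct timespec ts;
            hrtime_t t0, t1;
            unsigned long long check = 0;
            int i;

            /* Cost of ROUNDS gethrtime() calls, timed with gethrtime() */
            t0 = gethrtime();
            for (i = 0; i < ROUNDS; i++)
                    check += (unsigned long long)gethrtime();
            t1 = gethrtime();
            printf("hrtime %lld check %llu\n", (long long)(t1 - t0), check);

            /* Cost of ROUNDS clock_gettime(CLOCK_MONOTONIC) calls */
            check = 0;
            t0 = gethrtime();
            for (i = 0; i < ROUNDS; i++) {
                    (void)clock_gettime(CLOCK_MONOTONIC, &ts);
                    check += (unsigned long long)ts.tv_sec * 1000000000ULL +
                        (unsigned long long)ts.tv_nsec;
            }
            t1 = gethrtime();
            printf("clock_gettime %lld check %llu\n",
                (long long)(t1 - t0), check);
            return (0);
    }

Per the commit message, configure repeats the comparison and only prefers clock_gettime() when it wins every round by the required margin.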
-
Dridi Boukelmoune authored
r2 can be either NULL or not. Test case by @daghf. Refs #2781
-
Dridi Boukelmoune authored
And by the way, they are known as h2_rxframe_f these days! Refs #2781
-
Carlo Cannas authored
Currently Varnish doesn't allow PRIORITY frames to be received on closed streams: it treats this as a protocol violation and replies with a GOAWAY. This is not spec compliant; rfc7540 states:

    The PRIORITY frame can be sent for a stream in the "idle" or "closed" state. rfc7540,l,1947,1948
    The PRIORITY frame can be sent on a stream in any state. rfc7540,l,1938,1938
    https://tools.ietf.org/html/rfc7540#section-6.3

This behaviour can be triggered by real-world browsers: Chrome 69 has been observed sending PRIORITY frames which reach Varnish after a stream has been closed (and cleaned up by h2_sweep). When that happens the connection is closed and Chrome aborts the loading of all the resources it had started to load on that connection.

This commit solves the issue by skipping the stream creation code and its checks when a PRIORITY frame is received. All stream creation logic moves into h2_rx_headers, HEADERS being the only other frame allowed on idle streams.

This also fixes the concurrent streams counter and highest_stream: they should only be updated when a stream enters the "open" state (or "reserved", if Varnish used server push), but currently a PRIORITY frame on an idle stream affects them. https://tools.ietf.org/html/rfc7540#section-5.1.1 rfc7540,l,1153,1156 rfc7540,l,1193,1198 rfc7540,l,1530,1533

Fixes: #2775
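A condensed, purely illustrative sketch of the resulting control flow; the types and names below are hypothetical stand-ins, not the actual h2 code, which dispatches through h2_rxframe_f handlers such as h2_rx_headers:

    /* Illustration only: simplified receive-side dispatch after the fix. */
    enum frame_type { FT_HEADERS, FT_PRIORITY, FT_DATA };
    enum h2_err { H2_OK, H2_PROTOCOL_ERROR };

    struct sess {
            unsigned        highest_stream;
            unsigned        open_streams;
    };

    static enum h2_err
    rx_frame(struct sess *sp, enum frame_type ft, unsigned stream_id,
        int have_stream)
    {

            if (ft == FT_PRIORITY)
                    /* Legal in any stream state (rfc7540 section 6.3):
                     * never create stream state, never touch counters;
                     * at most note the priority and move on. */
                    return (H2_OK);

            if (!have_stream) {
                    /* Only HEADERS takes a stream from "idle" to "open",
                     * so stream creation and the counter updates now
                     * live in the HEADERS path alone. */
                    if (ft != FT_HEADERS || stream_id <= sp->highest_stream)
                            return (H2_PROTOCOL_ERROR);
                    sp->highest_stream = stream_id;
                    sp->open_streams++;     /* allocate stream state here */
            }
            /* ... per-frame processing for a known stream ... */
            return (H2_OK);
    }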
-
Carlo Cannas authored
This moves it before the new stream object creation, so we save a useless allocation and initialization of a stream object that would never be used and would be killed straight away. This also simplifies upcoming commits.
-
Carlo Cannas authored
As per the spec, a client can only cause a stream to transition from the "idle" state to "open" by sending a HEADERS frame. A PRIORITY frame can be sent to an "idle" stream, but the stream will remain in that state. rfc7540,l,916,940 https://tools.ietf.org/html/rfc7540#section-5.1

To open a stream, the command txreq -nostrend can be used. The -nostrend option ensures that the stream won't transition to a "half-closed" state.
-
Nils Goroll authored
-
- 04 Oct, 2018 4 commits
-
-
Nils Goroll authored
-
Nils Goroll authored
-
Nils Goroll authored
be694992 unnecessarily changed the values of most VSL tag enums and thus introduced an incompatibility with logs written by previous code. Fixes #2790
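The compatibility problem can be shown with a generic enum (this is an illustration, not the real VSL tag table):

    /* Values as written into existing logs: */
    enum tag_v1 { TAG_REQ = 1, TAG_RESP = 2, TAG_END = 3 };

    /* Inserting a tag in the middle silently renumbers everything after
     * it, so old logs decode to the wrong tags with the new code: */
    enum tag_bad { BAD_REQ = 1, BAD_NEW = 2, BAD_RESP = 3, BAD_END = 4 };

    /* Appending (or pinning explicit values) keeps old logs readable: */
    enum tag_ok { OK_REQ = 1, OK_RESP = 2, OK_END = 3, OK_NEW = 4 };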
-
Poul-Henning Kamp authored
Fixes #2788
-
- 03 Oct, 2018 5 commits
-
-
Poul-Henning Kamp authored
This reverts commit 4e7e3499.
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
If the worker pool is configured too small, it can deadlock. Recovering from this would require a lot of complicated code: code to discover queued but unscheduled tasks which can be cancelled (because the client went away or otherwise), code to do the cancelling, etc.

But fundamentally, either people configured their pools wrong, in which case we want them to notice, or they are under DoS, in which case recovering gracefully is unlikely to be a major improvement over a restart.

Instead we implement a per-pool watchdog and kill the child process if nothing has been dequeued for too long. Default value 10 seconds, open to discussion.

Band-aid for: #2418
Test-case by: @Dridi
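A rough sketch of the watchdog idea, with hypothetical names and the proposed 10 second default; the dequeue path would also refresh lqueue_seen, only the periodic check is shown:

    #include <stdlib.h>
    #include <time.h>

    #define WATCHDOG_SECS   10.     /* proposed default, open to discussion */

    struct pool {
            double  lqueue_seen;    /* last time queued work made progress */
            int     lqueue;         /* tasks currently sitting in the queue */
    };

    static double
    mono_now(void)
    {
            struct timespec ts;

            (void)clock_gettime(CLOCK_MONOTONIC, &ts);
            return (ts.tv_sec + 1e-9 * ts.tv_nsec);
    }

    /* Called periodically by the pool herder; the dequeue path updates
     * lqueue_seen whenever it hands a task to a worker. */
    static void
    pool_watchdog(struct pool *pp)
    {

            if (pp->lqueue == 0) {
                    /* Nothing waiting: the pool is keeping up. */
                    pp->lqueue_seen = mono_now();
                    return;
            }
            if (mono_now() - pp->lqueue_seen > WATCHDOG_SECS)
                    /* Queued tasks have not moved for too long: give up
                     * and let the manager restart the child process. */
                    abort();
    }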
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
(Same as we do in H1) Fixes #2589
-
- 01 Oct, 2018 1 commit
-
-
Poul-Henning Kamp authored
Spotted by: Willy Tarreau <w@1wt.eu>
-
- 29 Sep, 2018 1 commit
-
-
Federico G. Schwindt authored
-
- 28 Sep, 2018 1 commit
-
-
Dridi Boukelmoune authored
Fixes #2787
-
- 27 Sep, 2018 5 commits
-
-
Nils Goroll authored
My apologies: as long as we pass around a struct wrk, some other function could use wrk->stats; in other words, the fact that the ban stats are decoupled from the wrk stats does not make the latter any less relevant. This reverts commit 527f1bd0.
-
Nils Goroll authored
-
Nils Goroll authored
Checking and preparing our worker struct does not need to happen under the lock.
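The pattern, as a minimal hedged sketch (the names are made up, not the actual pool code): do the per-worker checks and setup before taking the pool mutex, so the critical section only covers the shared queue.

    #include <pthread.h>

    struct worker { int ready; /* ... per-thread state ... */ };
    struct pool { pthread_mutex_t mtx; /* ... shared queue ... */ };

    static void
    worker_prepare(struct worker *wrk)
    {
            /* touches only *wrk, so no lock is required */
            wrk->ready = 1;
    }

    static void
    pool_enter(struct pool *pp, struct worker *wrk)
    {

            worker_prepare(wrk);            /* before taking the mutex */

            pthread_mutex_lock(&pp->mtx);
            /* only the shared queue manipulation needs the lock */
            pthread_mutex_unlock(&pp->mtx);
    }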
-
Nils Goroll authored
Over time, I have repeatedly stared at this code wondering if (and why) our cv signaling is correct, only to end up with the same insight each time (but at first overlooking #2719). Being fully aware that we do not want to plaster our code with outdated comments, I hope this explanation is warranted, to save myself (and hopefully others) from wasting precious lifetime reiterating over the same question.
-
Poul-Henning Kamp authored
-
- 26 Sep, 2018 8 commits
-
-
Poul-Henning Kamp authored
Fixes: #2782
-
Emmanuel Hocdet authored
-
Federico G. Schwindt authored
-
Federico G. Schwindt authored
-
Federico G. Schwindt authored
Take 2. Let's see if it sticks this time.
-
Poul-Henning Kamp authored
-
Nils Goroll authored
-
Nils Goroll authored
-
- 25 Sep, 2018 1 commit
-
-
Poul-Henning Kamp authored
-