- 25 Feb, 2018 1 commit
-
Geoff Simmons authored
Storage defaults to umem where libumem is available, as is usually the case on SunOS. So checking SMA.* stats was causing the test to fail on Solaris.
-
- 24 Feb, 2018 3 commits
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Nils Goroll authored
There might be code in vcl_miss that changes the request, which we don't run for bgfetches; this could lead to unexpected behaviour. On the other hand, what purpose does vcl_miss serve? Is there anything we can do in vcl_miss that we can't do in vcl_backend_fetch?
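A minimal sketch of the alternative suggested above: request rewriting placed in vcl_backend_fetch also runs for background fetches, unlike vcl_miss (the header name is hypothetical, for illustration only):

```vcl
sub vcl_backend_fetch {
    # Unlike vcl_miss, this subroutine runs for background fetches too,
    # so request mutations here apply consistently to all fetches.
    set bereq.http.X-Origin-Marker = "fetch";  # hypothetical header
}
```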
-
- 23 Feb, 2018 8 commits
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Dridi Boukelmoune authored
Some user agents like Safari may "probe" specific resources such as media before getting the full resources, usually asking for the first 2 or 11 bytes, probably to peek at magic numbers and figure out early whether a potentially large resource may not be supported (read: video). If the user agent also advertises gzip support, and the transaction is known beforehand to not be cacheable, varnishd will forward the Range header to the backend:

Accept-Encoding: gzip (when http_gzip_support is on)
Range: bytes=0-1

If the response happens to be both encoded and partial, the gunzip test cannot be performed. Otherwise we systematically end up with a broken transaction closed prematurely:

FetchError b tGunzip failed
Gzip b u F - 2 0 0 0 0

Refs #2530
Refs #2554
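One conceivable VCL-level mitigation (a sketch only, not the fix shipped in this commit) is to never ask the backend for a response that is both compressed and partial:

```vcl
sub vcl_backend_fetch {
    # Sketch: if we forward a Range request, drop Accept-Encoding so
    # the backend cannot reply with an encoded partial response.
    if (bereq.http.Range) {
        unset bereq.http.Accept-Encoding;
    }
}
```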
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Poul-Henning Kamp authored
Fixes: #2582
-
- 22 Feb, 2018 10 commits
-
Geoff Simmons authored
-
Geoff Simmons authored
So don't make the test for the error depend on the specific message: getaddrinfo(3) may or may not resolve a path to a sockaddr_un if it happens to be a socket (it does on FreeBSD, but not on Linux).
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
NULL for an IP address (matches <undef>). Also verify that remote.ip and remote.port correspond to the bogo-IP 0.0.0.0:0 for a UDS connection.
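A minimal VCL sketch of the kind of check described above (the string-comparison idiom and the header name are assumptions for illustration; the bogo-IP value 0.0.0.0:0 is from the commit message):

```vcl
sub vcl_recv {
    # On a UDS listener, remote.ip is the bogo-IP (0.0.0.0:0), so
    # comparing its string form can identify UDS clients.
    if ("" + remote.ip == "0.0.0.0") {
        set req.http.X-UDS = "yes";  # hypothetical marker header
    }
}
```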
-
Geoff Simmons authored
-
Geoff Simmons authored
hence if the listen address is UDS. Otherwise, the setsockopt() call will fail the VTCP_Assert() check. Also, verify that std.ip() and std.port() don't work with UDS.
-
Geoff Simmons authored
This determines the values of the vtc macros vN_addr, _port and _sock.
-
Geoff Simmons authored
Also adds the user, group and mode sub-args to -a, to set permissions on the path created by -a for UDS. Adds the bogo_ip pseudo-VSA, representing IPv4 0.0.0.0:0, to be exposed in VCL for non-IP addresses. Also adds the field listen_sock to struct sess: a pointer to the struct listen_sock that was created by the acceptor and lives in heritage.socks. This makes information like the endpoint name (the named -a arg) and the UDS path available from an sp.
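A hypothetical invocation using the new sub-args (the socket path, credentials, and VCL path are placeholders; only the user, group and mode sub-arg names come from the commit above):

```shell
# Listen on a Unix domain socket with explicit permissions, plus TCP.
varnishd -a /var/run/varnish.sock,user=vcache,group=vcache,mode=660 \
         -a :6081 \
         -f /etc/varnish/default.vcl
```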
-
Federico G. Schwindt authored
-
- 21 Feb, 2018 7 commits
-
Poul-Henning Kamp authored
home-rolled stuff to VJSN in anticipation of more complex specifications in the future.
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Geoff Simmons authored
Just full of XXX's for now.
-
Federico G. Schwindt authored
Prompted on IRC by scn.
-
- 20 Feb, 2018 5 commits
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
-
Poul-Henning Kamp authored
(It doesn't get to the C code yet.)
-
- 19 Feb, 2018 6 commits
-
Nils Goroll authored
Ref: #2573
-
Pål Hermunn Johansen authored
This fixes the long-standing #1799 for "keep" objects, and this commit message suggests a way of working around #1799 in the remaining cases. The following is a (long) explanation of how grace and keep work at the moment, how this relates to #1799, and how this commit changes things.

1. How does it work now, before this commit?

Objects in cache can outlive their TTL, and the typical reason for this is grace. Objects in cache can also linger because of obj.keep, or in the (rare but observed) case where the expiry thread has not yet evicted an object. Grace and keep are here to minimize backend load, but #1799 shows that we are not successful in doing this in some important cases.

Whenever sub vcl_recv has ended with return (lookup) (which is the default action), we arrive at HSH_Lookup, where varnish sometimes only finds an expired object (one that matches the Vary logic, is not banned, etc.). When this happens, we will initiate a background fetch (by adding a "busy object") if and only if there is no busy object on the oh already. Then the expired object is returned with HSH_EXP or HSH_EXPBUSY, depending on whether a busy object was inserted.

2. What makes us run into #1799?

When we have gotten an expired object, we generally hope that it is in grace, and that sub vcl_hit will return (deliver). However, if grace has expired, then the default action (ie the action from builtin.vcl) is return (miss). It is also possible that the user VCL, for some reason, decides that the stale object should not be delivered, and does return (miss) explicitly. In these cases it is common that the current request is not the one to insert a busy object, and then we run into the issue with the message "vcl_hit{} returns miss without busy object. Doing pass.".

Note that normally, if a resource is very popular and has a positive grace, it is unlikely that #1799 will happen. Then a new version will always be available before the grace has run out, and everybody gets the latest fetched version with no #1799 problems. However, if a resource is very popular (like a manifest file in a live streaming setup) and has 0s grace, and the expiry thread lags a little bit behind, then vcl_hit can get an expired object even when obj.keep is zero. In these circumstances we can get a surge of requests to the backend, and this is especially bad on a very busy server.

Another real-world example is where grace is initially set high (48h or similar) and vcl_hit considers the health of the backend and, if the backend is healthy, explicitly does a return (miss) to ensure that the client gets a fresh object. This has been a recommended use of vcl_hit but, because of #1799, can cause considerable load on the backend. Similarly, we can get #1799 if we use "keep" to facilitate IMS requests to the backend, and we have a stale object for which several requests arrive before the first completes.

3. How do we fix this?

The main idea is to teach varnish to consider grace during lookup. To be specific, the following changes with this commit:

If an expired object is found, the ttl+grace has expired, and there already is an ongoing request for the object (ie there exists a busy object), then the request is put on the waiting list instead of simply returning the object ("without a busy object") to vcl_hit. This choice is made because we anticipate that vcl_hit will do (the default) return (miss), and that it is better to wait for the ongoing request than to initiate a new one with "pass" behavior. The result is that when the ongoing request finishes, we will either be able to go to vcl_hit, start a new request (can happen if there was a Vary mismatch) by inserting a new "busy object", or lose the race and have to go back to the waiting list (typically unlikely).

When grace is in effect we go to vcl_hit even when we did not insert a busy object, anticipating that vcl_hit will return (deliver). This will fix the cases where the user does not explicitly do a return (miss) in vcl_hit for an object where ttl+grace has not expired. However, since this is not an uncommon practice, we also have to change our recommendation on how to use grace and keep. The new recommendation is:

* Set grace to the "normal value" for a working varnish+backend.
* Set keep to a high value if the backend is not 100% reliable and you want to use stale objects as a fallback.
* Do not explicitly return (miss) in sub vcl_hit{}. The exception is when this can only happen now and then and you are really sure that this is the right thing to do.
* In vcl_hit, check if the backend is sick, and then explicitly return (deliver) when appropriate (ie when you want a stale object delivered instead of an error message).

A test case is included.
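The new recommendation above can be sketched in VCL; the backend-health check uses std.healthy from vmod_std, and the TTL, grace and keep values are placeholders, not prescriptions:

```vcl
import std;

sub vcl_backend_response {
    set beresp.ttl = 1m;     # placeholder values
    set beresp.grace = 10s;  # "normal" grace for a healthy varnish+backend
    set beresp.keep = 24h;   # high keep: stale objects as IMS fallback
}

sub vcl_hit {
    # Deliver a stale object rather than an error when the backend
    # is sick; otherwise fall through to the builtin behaviour.
    if (obj.ttl <= 0s && !std.healthy(req.backend_hint)) {
        return (deliver);
    }
}
```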
-
Federico G. Schwindt authored
Fixes #2562.
-
Martin Blix Grydeland authored
There was a regression from Varnish 4.0 to 4.1, where the response bytes were counted as the number of bytes fed to the outgoing write vector, rather than the bytes that were actually handed off to the OS' socket buffer. In many cases this caused the complete object size to be counted as transmitted bytes, even though the client hung up the connection early.

This patch changes the counters to show the number of bytes sent as reported by the write() system calls, rather than the bytes we planned and prepared to send. The counters will include any protocol overhead (ie chunked encoding in HTTP/1 and the frame headers in HTTP/2). ESI subrequests will, as before, report in their log transactions the number of bytes they (and any subrequests below them) contributed to the total body bytes produced.

Some test cases have been adjusted to account for the new counter behaviour.

Fixes: #2558
-
Dag Haavi Finstad authored
-
Poul-Henning Kamp authored
-