Commit a09a6bfc authored by Andreas Plesner Jacobsen, committed by Tollef Fog Heen

Documentation fixes for 3.0

parent bf813731
@@ -115,7 +115,7 @@ You can use the ``bereq`` object for altering requests going to the backend, but
sub vcl_miss {
set bereq.url = regsub(req.url,"stream/","/");
- fetch;
+ return(fetch);
}
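Taken together with a backend definition, a complete 3.0-style snippet for this rewrite could look roughly like the sketch below; the backend name and address are only placeholders::

    backend origin {
        # Placeholder backend; substitute your real origin server.
        .host = "backend.example.com";
        .port = "8080";
    }

    sub vcl_miss {
        # Rewrite only the backend request URL; req.url (and thus the
        # cache key computed in vcl_hash) is left untouched.
        set bereq.url = regsub(req.url, "stream/", "/");
        return (fetch);
    }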
**How do I force the backend to send Vary headers?**
@@ -148,18 +148,18 @@ A custom error page can be generated by adding a ``vcl_error`` to your configura
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<title>"} obj.status " " obj.response {"</title>
<title>"} + obj.status + " " + obj.response + {"</title>
</head>
<body>
<h1>Error "} obj.status " " obj.response {"</h1>
<p>"} obj.response {"</p>
<h1>Error "} + obj.status + " " + obj.response + {"</h1>
<p>"} + obj.response + {"</p>
<h3>Guru Meditation:</h3>
<p>XID: "} req.xid {"</p>
<p>XID: "} + req.xid + {"</p>
<address><a href="http://www.varnish-cache.org/">Varnish</a></address>
</body>
</html>
"};
- deliver;
+ return(deliver);
}
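For orientation, the fragments above sit inside a ``synthetic`` statement; a minimal, self-contained ``vcl_error`` in 3.0 syntax might look roughly like this (layout loosely modelled on the builtin default.vcl)::

    sub vcl_error {
        set obj.http.Content-Type = "text/html; charset=utf-8";
        synthetic {"
    <html>
      <head>
        <title>"} + obj.status + " " + obj.response + {"</title>
      </head>
      <body>
        <h1>Error "} + obj.status + " " + obj.response + {"</h1>
        <p>"} + obj.response + {"</p>
        <p>XID: "} + req.xid + {"</p>
      </body>
    </html>
    "};
        return (deliver);
    }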
**How do I instruct varnish to ignore the query parameters and only cache one instance of an object?**
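The answer to that question falls outside this hunk; as a rough sketch, the usual recipe is to strip the query string in ``vcl_recv`` so every variant hashes to the same object::

    sub vcl_recv {
        # Drop everything from the first "?" onwards before the URL is hashed.
        set req.url = regsub(req.url, "\?.*$", "");
    }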
@@ -235,8 +235,8 @@ HTTPS proxy such as nginx or pound.
Yes, you need VCL code like this::
director foobar round-robin {
- { .backend = { .host = "www1.example.com; .port = "http"; } }
- { .backend = { .host = "www2.example.com; .port = "http"; } }
+ { .backend = { .host = "www1.example.com"; .port = "http"; } }
+ { .backend = { .host = "www2.example.com"; .port = "http"; } }
}
sub vcl_recv {
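    # The rest of this subroutine is outside the hunk; as a rough sketch,
    # the usual next step is simply to send traffic to the director
    # defined above:
    set req.backend = foobar;
}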
@@ -337,7 +337,7 @@ Varnish has a feature called **hit for pass**, which is used when Varnish gets a
* Client 2..N are now given the **hit for pass** object instructing them to go to the backend
The **hit for pass** object will stay cached for the duration of its ttl. This means that subsequent clients requesting /foo will be sent straight to the backend as long as the **hit for pass** object exists.
- The :command:`varnishstat` can tell you how many **hit for pass** objects varnish has served. You can lower the ttl for such an object if you are sure this is needed, using the following logic::
+ The :command:`varnishstat` can tell you how many **hit for pass** objects varnish has served. The default vcl will set ttl for a hit_for_pass object to 120s. But you can override this, using the following logic:
sub vcl_fetch {
if (!obj.cacheable) {
......
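A rough sketch of that override in 3.0 syntax, loosely following the builtin default.vcl (the 10 second value is only an example), might be::

    sub vcl_fetch {
        if (beresp.ttl <= 0 s ||
            beresp.http.Set-Cookie ||
            beresp.http.Vary == "*") {
            # Remember the "do not cache" decision for a shorter period
            # than the default 120 seconds.
            set beresp.ttl = 10 s;
            return (hit_for_pass);
        }
        return (deliver);
    }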
@@ -16,10 +16,10 @@ To add a HTTP header, unless you want to add something about the client/request,
sub vcl_fetch {
# Add a unique header containing the cache servers IP address:
- remove obj.http.X-Varnish-IP;
- set obj.http.X-Varnish-IP = server.ip;
+ remove beresp.http.X-Varnish-IP;
+ set beresp.http.X-Varnish-IP = server.ip;
# Another header:
set obj.http.Foo = "bar";
set beresp.http.Foo = "bar";
}
**How can I log the client IP address on the backend?**
......
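That answer is also outside the hunk; as a rough sketch, the usual approach (mirroring the builtin ``vcl_recv`` in 3.0) is to hand the client address to the backend in a header and log it there::

    sub vcl_recv {
        # Pass the original client address to the backend, e.g. for an
        # Apache LogFormat that records %{X-Forwarded-For}i.
        if (req.http.X-Forwarded-For) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }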
@@ -53,29 +53,29 @@ headers from the backend.
actions
~~~~~~~
- The most common actions to call are these:
+ The most common actions to return are these:
*pass*
- When you call pass the request and subsequent response will be passed
- to and from the backend server. It won't be cached. pass can be called
- in both vcl_recv and vcl_fetch.
+ When you return pass the request and subsequent response will be passed to
+ and from the backend server. It won't be cached. pass can be returned from
+ both vcl_recv and vcl_fetch.
*lookup*
- When you call lookup from vcl_recv you tell Varnish to deliver content
+ When you return lookup from vcl_recv you tell Varnish to deliver content
from cache even if the request otherwise indicates that the request
- should be passed. You can't call lookup from vcl_fetch.
+ should be passed. You can't return lookup from vcl_fetch.
*pipe*
- Pipe can be called from vcl_recv as well. Pipe short circuits the
+ Pipe can be returned from vcl_recv as well. Pipe short circuits the
client and the backend connections and Varnish will just sit there
and shuffle bytes back and forth. Varnish will not look at the data being
sent back and forth - so your logs will be incomplete.
Beware that with HTTP 1.1 a client can send several requests on the same
connection and so you should instruct Varnish to add a "Connection: close"
- header before actually calling pipe.
+ header before actually returning pipe.
*deliver*
- Deliver the cached object to the client. Usually called in vcl_fetch.
+ Deliver the cached object to the client. Usually returned from vcl_fetch.
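For illustration, a small 3.0-style policy exercising these returns might look like this (the method checks are just an example)::

    sub vcl_recv {
        if (req.request == "POST") {
            # Passed requests hit the backend and the response is not cached.
            return (pass);
        }
        if (req.request != "GET" && req.request != "HEAD") {
            # Everything else is tunnelled; see vcl_pipe below.
            return (pipe);
        }
        return (lookup);
    }

    sub vcl_pipe {
        # Make sure a piped connection only carries this one request, so
        # later keep-alive requests cannot slip past Varnish unlogged.
        set bereq.http.Connection = "close";
        return (pipe);
    }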
Requests, responses and objects
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
......