  1. 19 Apr, 2007 2 commits
  2. 11 Apr, 2007 2 commits
  3. 09 Apr, 2007 6 commits
  4. 01 Apr, 2007 11 commits
  5. 31 Mar, 2007 5 commits
  6. 30 Mar, 2007 2 commits
  7. 29 Mar, 2007 1 commit
  8. 28 Mar, 2007 2 commits
  9. 24 Mar, 2007 3 commits
  10. 09 Mar, 2007 1 commit
    • Implement a facility for source file modularization in the VCL compiler · 31f42eaf
      Poul-Henning Kamp authored
      The syntax is:
      
      	include "filename" ;
      
      Unlike the C preprocessor's #include directive, a VCL include can
      appear anywhere in the source file:
      
      	if {req.Cookie == include "cookie.vcl" ; || !req.Host } {
      	}
      
      with cookie.vcl containing just:
      
      	"8435398475983275293759843"
      
      
      Technically, this also changes how we account for source code
      references in the counter/profile table, and as a result the
      entire source code of the VCL program is now compiled into the
      shared library for easy reference.
      
      
      
      git-svn-id: http://www.varnish-cache.org/svn/trunk/varnish-cache@1281 d4fa192b-c00b-0410-8231-f00ffab90ce4
  11. 08 Mar, 2007 2 commits
  12. 07 Mar, 2007 1 commit
  13. 06 Mar, 2007 1 commit
    • Having thought long and hard about this, commit what I think is the new and simpler flow for version2 · 0dd910f6
      Poul-Henning Kamp authored
      
      Pass is now handled like a miss where the object will not be cached.
      
      The main result of this is that we drag the entire object, header
      and body, from the backend before transmitting it to the client,
      thus isolating the backend from slow clients.
      
      From a software engineering point of view it is a big improvement,
      because it eliminates the need for all of cache_pass.c, so we end
      up with fewer HTTP protocol implementations.
      
      A side effect of this is that ticket #56 should be fixed now.
      
      If the object is passed before vcl_fetch{}, that is, in vcl_recv{},
      vcl_hit{} or vcl_miss{}, no "pass this" object is inserted in the
      cache.  The confusion between "pass", "insert" and "insert_pass"
      has been cleaned up by removing the latter.
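
      A minimal sketch of the distinction, using the 1.x default.vcl
      action keywords (the exact set available in vcl_fetch{} at this
      revision is an assumption):

      	sub vcl_recv {
      		# Passed before the backend is contacted: nothing is
      		# inserted in the cache for this request.
      		if (req.http.Authorization) {
      			pass;
      		}
      		lookup;
      	}

      	sub vcl_fetch {
      		# Passed after the object has been fetched: a "pass
      		# this" object is inserted so later lookups skip the
      		# cache as well.
      		if (obj.http.Set-Cookie) {
      			pass;
      		}
      		insert;
      	}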
      
      Pipe and Pass call vcl_pipe{} and vcl_pass{} respectively, before
      contacting the backend.  I haven't quite decided if they should
      operate on the request header from the client or the one to the
      backend, or both.
      
      One possible use is to inject a "Connection: close" header to limit
      pipe to one transaction.
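
      A sketch of that idea, assuming vcl_pipe{} sees the client request
      header and that "set" works on it as in vcl_recv{} (both of which
      are still undecided above):

      	sub vcl_pipe {
      		# Force the connection to close after this transaction
      		# so the pipe cannot carry further requests.
      		set req.http.Connection = "close";
      		pipe;
      	}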
      
      A new vcl_hash{} has been added; it will allow customization of
      which fields we hash on, instead of the default "url + Host:",
      but this is not yet implemented.
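
      As an illustration of the intent only (the commit says this is not
      yet implemented), a hash customization might eventually look
      something like:

      	sub vcl_hash {
      		# Hash on the URL and a cookie instead of the default
      		# "url + Host:" (this syntax is an assumption, not
      		# part of this commit).
      		set req.hash += req.url;
      		set req.hash += req.http.Cookie;
      		hash;
      	}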
      
      vcl_fetch{} is now called after both the headers and body have been
      picked up from the backend.  This will allow us to do more comprehensive
      handling of backend errors later on.
      
      A disadvantage of this is that if the object ends up as a "pass
      this" object in the cache, we could otherwise have released any
      queued requests as soon as the headers were received.  If this
      transpires as a real-world problem, we can add a vcl_fetchhdr{}
      which can do an early release (i.e. "pass").
      
      
      
      git-svn-id: http://www.varnish-cache.org/svn/trunk/varnish-cache@1277 d4fa192b-c00b-0410-8231-f00ffab90ce4
  14. 27 Feb, 2007 1 commit