Nils Goroll authored
In varnish-cache, the deliberate decision has been made not to support TLS from the same address space as varnish itself, see doc/sphinx/phk/ssl_again.rst

So the obvious way to connect to TLS backends is to use a TLS "onloader" (a term coined by @slimhazard as the opposite of "offloader"), which turns a clear-text connection into a TLS connection.

Before this change, this required additional configuration in several places: an address/port or UDS path had to be uniquely allocated for each destination address, the specific onloader configuration had to be put in place, and a varnish backend pointing to the onloader had to be added - all of this for each individual backend. This requirement also prevented any use of dynamic backends with a TLS onloader.

haproxy, however, offers a convenient and elegant way to avoid this configuration overhead: the PROXY protocol can also be used to transport the destination address which haproxy is to connect to, if a server's address is unspecified (INADDR_ANY / 0.0.0.0). The configuration template for this use case looks like this (huge thank you to @wtarreau for pointing out this great option in haproxy):

    listen clear-to-ssl
        bind /my/path/to/ssl_onloader accept-proxy
        balance roundrobin
        stick-table type ip size 100
        stick on dst
        server s0 0.0.0.0:443 ssl ca-file /etc/ssl/certs/ca-bundle.crt
        server s1 0.0.0.0:443 ssl ca-file /etc/ssl/certs/ca-bundle.crt
        server s2 0.0.0.0:443 ssl ca-file /etc/ssl/certs/ca-bundle.crt
        # .. approximately as many servers as expected peers
        # for improved tls session caching

With this setup, by connecting to /my/path/to/ssl_onloader and sending the address to make a TLS connection to in a PROXY header (as the server address / port), we can reduce the configuration overhead outside varnish substantially.
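For illustration, a PROXY v1 preamble asking haproxy to establish the TLS connection to 203.0.113.7 port 443 could look like this (all addresses here are made-up examples; the PROXY v1 field order is source address, destination address, source port, destination port):

    PROXY TCP4 192.0.2.1 203.0.113.7 34567 443

The destination fields carry the endpoint haproxy is to connect to, which the "stick on dst" rule above can then use to keep each peer on the same server slot for improved TLS session caching.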
In particular,

* we do not require a path / port per destination
* dynamic TLS backends become possible

This patch implements the basis for a simple means of configuring such an ssl onloader: backends can be created with an additional "via" director, which has to resolve to a simple backend. The connection is then made to that address, and the actual endpoint address is sent in an additional PROXY header.

Note that support for sending yet another PROXY header to the actual backend is unaffected. Despite using the same format, the two PROXY headers are semantically different: the first, here coined the "preamble", carries the address to make the connection to, while the (optional) second PROXY header continues to contain the addresses of the connection to varnish.

Future improvements on the roadmap:

* resolution of the "via" backend at the time the connection is made: this will allow for fault tolerance and load balancing of "via" backends
* cascading health checks: if the "via" backend is probed or set down, any backends using it could also be set unhealthy
* timeouts: the "via" backend's timeouts could define maximum values for any connections made through it

Trivia: to future Varnish-Cache historians, this patch originates from #2850 and went through three more iterations, making it a likely candidate for the PR with the longest turnaround time, at 1543 days.
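As a sketch of the intended use, a backend with a "via" onloader might be declared in VCL like this (the backend names and the origin host are hypothetical; the socket path must match the haproxy "bind" line above):

    backend onloader {
        .path = "/my/path/to/ssl_onloader";
    }

    backend origin {
        .host = "origin.example.com";
        .port = "443";
        .via = onloader;
    }

Connections to "origin" are then made to the onloader's UDS, with origin's resolved address sent in the PROXY preamble, and haproxy establishes the actual TLS connection.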