- 06 Jul, 2020 40 commits
-
Geoff Simmons authored
For the most part by adding Go doc comments, and by following Go naming conventions in a few cases. Remove some commented-out code while we're here.
-
Geoff Simmons authored
Quiets golint.
-
Geoff Simmons authored
Use the label key viking.uplex.de/secret. The controller only reads Secrets with this label, and Secrets with the field type: kubernetes.io/tls (the latter are the Secrets specified for an Ingress). Three values are permitted for the label:

- admin: credentials for remote admin of Varnish and haproxy (the Varnish shared secret and the Basic Auth password for the dataplane API).
- pem: an initially empty Secret into which the controller writes PEM files (concatenated crt and key), projected into a volume from which haproxy reads at load time. Currently only with the hard-wired name "tls-cert", so that RBAC update privileges can be limited to this Secret.
- auth: credentials for Basic and Proxy Auth, as configured via the VarnishConfig custom resource.
-
Tim Leers authored
-
Tim Leers authored
-
Geoff Simmons authored
We read Secrets with labels that identify a Secret for use by this application. These include:

- Secrets for the remote administration of Varnish and haproxy (to authorize use of the Varnish CLI and the dataplane API for haproxy).
- Secrets for applications such as Basic and Proxy Auth.
- The Secret in which PEM files for haproxy are created, which is projected into a volume that haproxy reads. This is how we create TLS material for use by haproxy (which requires that crt and key are concatenated into one file).

We also read Secrets with the type field set to "kubernetes.io/tls". These contain the TLS material, and are the Secrets named in an Ingress spec.

This has necessitated adding two new informers to the controller, for which the filters are defined.
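The filtering rule described above can be sketched with plain values in place of the client-go informer machinery; the function name and signature here are illustrative, not the controller's actual code:

```go
package main

import "fmt"

const labelKey = "viking.uplex.de/secret"

// watchSecret sketches the predicate behind the two informer filters:
// the controller watches Secrets that either carry the
// viking.uplex.de/secret label with one of the three permitted values,
// or have type kubernetes.io/tls (the Secrets named in Ingress specs).
func watchSecret(labels map[string]string, secretType string) bool {
	if secretType == "kubernetes.io/tls" {
		return true
	}
	switch labels[labelKey] {
	case "admin", "pem", "auth":
		return true
	}
	return false
}

func main() {
	fmt.Println(watchSecret(map[string]string{labelKey: "pem"}, "Opaque"))
	fmt.Println(watchSecret(nil, "kubernetes.io/tls"))
	fmt.Println(watchSecret(nil, "Opaque"))
}
```

In the real controller this split is what motivates the two informers: one filtered by the label selector, one by the type field.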
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
We take advantage of the fact that if a directory is specified as the certificate location for haproxy, then haproxy reads all of the certificates found there and automatically matches the SNI of incoming connections to the appropriate certificate.
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
It contained a newline, which rendered the sed command in the entrypoint of the haproxy container invalid.
-
Tim Leers authored
-
Tim Leers authored
-
Tim Leers authored
-
Geoff Simmons authored
-
Geoff Simmons authored
The entrypoint of the haproxy container is an exec script that sets the password in haproxy.cfg from a template. Setting the password from an environment variable has proven too unreliable. Also, listening at a UDS does not appear to work at all, so the dataplane API listens directly at the container port (with no intervening haproxy frontend).

This change also makes the container friendlier to read-only filesystems, since haproxy.cfg is no longer modified in /etc, but rather in the ephemeral file system at /run (== /var/run).
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
Most importantly, this tells us the current configuration version in use by the dataplane API, which apparently insists that the version number only ever counts up (otherwise it responds with 409 Conflict).
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
The Secret must be in the same namespace as the Pods into which its contents are mounted.
-
Geoff Simmons authored
-
Geoff Simmons authored
The verification script intermittently gets a 503 status if requests are sent too soon, even after waiting for the Varnish Services to become ready. This does not appear to happen if we wait a few seconds longer. For now, wait longer before running the verification test case. In the long run, we should investigate why the configuration is not actually ready when the Ready state is reached.
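The workaround amounts to polling for readiness and then adding a settling delay before verification; a minimal sketch, with illustrative durations and function names (the actual script is shell):

```go
package main

import (
	"fmt"
	"time"
)

// waitThenVerify polls until ready() reports true, then sleeps an extra
// settling period before running the verification, since "Ready" does
// not yet guarantee the configuration is actually serving.
func waitThenVerify(ready func() bool, settle time.Duration, verify func() int) int {
	for !ready() {
		time.Sleep(10 * time.Millisecond)
	}
	time.Sleep(settle) // extra grace period to avoid intermittent 503s
	return verify()
}

func main() {
	calls := 0
	ready := func() bool { calls++; return calls >= 3 }
	verify := func() int { return 200 } // stands in for an HTTP check
	fmt.Println(waitThenVerify(ready, 20*time.Millisecond, verify))
}
```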
-
Geoff Simmons authored
Since the controller now interacts with the headless Service that defines the admin ports, it can no longer find the http port in that Service definition. The http port is needed to configure the Varnish instances as backends for one another. So we search for all Services in the same namespace that define the same selector as the admin Service (and hence are configured for the same Pods), and then look for the http port in the Endpoints of those Services.
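The lookup above can be sketched with simplified stand-ins for the Kubernetes types (the real controller works with client-go Service and Endpoints objects; these structs and names are illustrative):

```go
package main

import (
	"fmt"
	"reflect"
)

// service is a pared-down stand-in for a Kubernetes Service.
type service struct {
	name     string
	selector map[string]string
	ports    map[string]int // port name -> port number
}

// httpPort finds, among all Services in the namespace, one whose
// selector matches the admin Service's selector (hence selecting the
// same Pods), and returns its port named "http".
func httpPort(admin service, all []service) (int, bool) {
	for _, svc := range all {
		if svc.name == admin.name {
			continue
		}
		if !reflect.DeepEqual(svc.selector, admin.selector) {
			continue // selects different Pods
		}
		if p, ok := svc.ports["http"]; ok {
			return p, true
		}
	}
	return 0, false
}

func main() {
	sel := map[string]string{"app": "varnish"}
	admin := service{name: "varnish-admin", selector: sel,
		ports: map[string]int{"varnishadm": 6081}}
	svcs := []service{admin,
		{name: "varnish", selector: sel, ports: map[string]int{"http": 80}},
	}
	p, ok := httpPort(admin, svcs)
	fmt.Println(p, ok)
}
```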
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-
Geoff Simmons authored
-