Commit 0aa26c7e authored by Geoff Simmons

Update docs to reflect the separation of controller and Varnish.

parent c0a9a092
@@ -21,31 +21,21 @@ time, including:
Endpoints) in the namespace of the Pod in which it is deployed.
* Only one Ingress definition is valid at a time. If more than one definition
is added to the namespace, then the most recent definition becomes valid.
* A variety of elements in the implementation are hard-wired, as
detailed in the documentation. These are expected to become
configurable in further development.
# Installation
Varnish for the purposes of Ingress and the controller that manages it
are implemented in separate containers -- one controller can be used
to manage a group of Varnish instances. The Dockerfiles and other
files needed to build the two images are in the
[``container/``](/container) folder, together with a Makefile that
encapsulates the commands for the build.
The resulting images must then be pushed to a registry available to
the Kubernetes cluster.
The Ingress can then be deployed by any of the means that are
customary for Kubernetes. The [``deploy/``](/deploy) folder contains
@@ -70,39 +60,12 @@ based on other technologies in the same Kubernetes cluster.
# Development
The source code for the controller, which listens to the k8s cluster
API and issues commands to Varnish instances to realize Ingress
definitions, is in the [``cmd/``](/cmd) folder. The folder also
contains a Makefile defining targets that encapsulate the build
process for the controller executable.
# Varnish as a Kubernetes Ingress
Since this project is currently in its early stages, the implementation of
@@ -147,5 +110,5 @@ that:
Varnish.
* If there is no valid Ingress definition (none has been defined
since the Varnish instance started, or the only valid definition
was deleted), then Varnish generates a synthetic 503 Service
Unavailable response for every request.
# Controller executable
This folder contains the source code for the controller executable.
The controller runs in its own container, and can manage groups of
Varnish instances.
## Development
The executable ``k8s-ingress`` is currently built with Go 1.10.
Targets in the Makefile support development in your local environment,
and facilitate testing with ``minikube``:
* ``k8s-ingress``: build the controller executable. This target also
runs ``go get`` for package dependencies, ``go generate`` (see
below) and ``go fmt``.
* ``check``, ``test``: build the ``k8s-ingress`` executable if
necessary, and run ``go vet``, ``golint`` and ``go test``.
* ``clean``: run ``go clean``, and clean up other generated artifacts.
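For example, a typical local cycle with these targets might look like
this (run in this folder; the targets are those listed above):
```
$ make k8s-ingress   # fetch dependencies, go generate, go fmt, build the executable
$ make check         # go vet, golint and go test
$ make clean         # remove the executable and other generated artifacts
```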
The build currently depends on the tool
[``gogitversion``](https://github.com/slimhazard/gogitversion) for the
generate step, to generate a version string using ``git describe``,
which needs to be installed by hand. This sequence should suffice:
```
$ go get -d github.com/slimhazard/gogitversion
$ cd $GOPATH/src/github.com/slimhazard/gogitversion
$ make install
```
# Container images for Varnish and the Ingress controller
Varnish instances to be deployed as realizations of Ingresses and the
controller that manages them are implemented in separate containers.
One controller is able to manage a group of Varnish instances, for
example when they are realized as a Deployment with several replicas.
The Dockerfiles and other files needed to build the two images are in
the current folder. The build commands are encapsulated by these
targets:
```
# Build the image for Varnish as an Ingress
$ make varnish
# Build the image for the controller
$ make controller
# Build both images
$ make
```
If you are testing with ``minikube``, set the environment variable
``MINIKUBE=1`` when running ``make``, so that the images will be
available to the local k8s cluster:
```
$ MINIKUBE=1 make
```
Both images must be pushed to a repository available to the k8s
cluster.
* The Varnish image is tagged ``varnish-ingress/varnish``.
* The controller image is tagged ``varnish-ingress/controller``.
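To make them available to the cluster, the images can, for example, be
re-tagged for and pushed to a private registry; the registry host
below is just a placeholder:
```
$ docker tag varnish-ingress/varnish registry.example.com/varnish-ingress/varnish
$ docker push registry.example.com/varnish-ingress/varnish
$ docker tag varnish-ingress/controller registry.example.com/varnish-ingress/controller
$ docker push registry.example.com/varnish-ingress/controller
```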
The images are only suitable for the realization of Kubernetes
Ingresses. Since the Varnish image has configurations specific for
this purpose, it is not suited as a general-purpose Varnish
deployment.
## Varnish image
The Varnish image currently runs Varnish version 6.0.1. The image runs
Varnish in the foreground as its entry point (``varnishd -F``, see
[``varnishd(1)``](https://varnish-cache.org/docs/6.0/reference/varnishd.html));
so the image runs the Varnish master process as PID 1, which in turn
controls the child or worker process that implements the HTTP proxy.
Varnish is live (although not necessarily ready) when the master
process is running -- that is, whenever the container is running at all. The
Deployment configuration for a Varnish instance illustrated in the
[``deploy/``](/deploy) folder shows a simple example of a k8s liveness
check.
Varnish runs with two listeners:
* one for "regular" client requests
* one for readiness checks from the k8s cluster
The Varnish instance is ready when it is configured to respond with
status 200 to requests for a specific URL received over the
"readiness listener". The controller ensures that this happens after
it has loaded the configuration for an Ingress at the instance. When
it is not ready, Varnish responds with status 503.
**TO DO**: The listeners are currently hard-wired at ports 80 and
8080, respectively. It is presently not possible to specify the PROXY
protocol for a listener. The readiness check is hard-wired at the URL
path ``/ready``.
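For illustration, the readiness endpoint can be probed by hand with a
port-forward to a Varnish pod (the pod name below is a placeholder);
it answers 200 when the instance is ready and 503 otherwise:
```
$ kubectl port-forward -n varnish-ingress <varnish-pod> 8080:8080 &
$ curl -si http://localhost:8080/ready
```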
Another listener is opened to receive administrative commands (see
[``varnish-cli(7)``](https://varnish-cache.org/docs/6.0/reference/varnish-cli.html));
this connection will be used by the controller to manage the Varnish
instance.
**TO DO**: The admin port is currently hard-wired as port 6081.
Use of the administrative interface requires authorization based on a
secret that must be shared by the Varnish instances and the
controller. This must be deployed as a k8s Secret, whose contents are
in a file mounted to a path on each Varnish instance, and are obtained
by the controller from the cluster API. The configurations in the
[``deploy/``](/deploy) folder show how this is done.
**TO DO**: The path of the secret file on the Varnish instance is
currently hard-wired as ``/var/run/varnish/_.secret``.
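Assuming the hard-wired values above, and assuming ``varnishadm`` is
present in the image, the admin connection can be checked by hand from
inside a Varnish container (the pod name is a placeholder):
```
$ kubectl exec -n varnish-ingress <varnish-pod> -- \
    varnishadm -T localhost:6081 -S /var/run/varnish/_.secret ping
```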
The Varnish instance is configured to start with a start script that
does the following:
* load a VCL configuration that generates a synthetic 200 response for
every request
* load a VCL configuration that generates a synthetic 503 response for
every request
* apply a "readiness" label to the VCL configuration that responds
with 503
* apply a "regular" label to the VCL that responds with 503
* load a "boot" VCL configuration that directs requests over the
"readiness" listener to the "readiness" label, and all requests over
the public HTTP port to the "regular" label
* make the "boot" configuration active
* start the child process
This means that in the initial configuration, Varnish responds with
the synthetic 503 response to all requests, received over both the
readiness port and the public HTTP port.
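In terms of the Varnish CLI, the startup sequence sketched above
corresponds roughly to the following, whether executed through
``varnishadm`` or a CLI file passed to ``varnishd``; the configuration,
label and file names are illustrative, not necessarily those used by
the image:
```
$ varnishadm vcl.load ready_cfg /etc/varnish/ready.vcl        # synthetic 200 for every request
$ varnishadm vcl.load notavail_cfg /etc/varnish/notavail.vcl  # synthetic 503 for every request
$ varnishadm vcl.label vk8s_readiness notavail_cfg            # readiness label -> 503 config
$ varnishadm vcl.label vk8s_regular notavail_cfg              # regular label -> 503 config
$ varnishadm vcl.load boot_cfg /etc/varnish/boot.vcl          # routes each listener to its label
$ varnishadm vcl.use boot_cfg                                 # make the boot configuration active
$ varnishadm start                                            # start the child process
```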
The controller operates by loading VCL configurations to implement
Ingress definitions, and swapping the labels. When the controller has
loaded a configuration for an Ingress, the "regular" label is applied
to it. It then applies the "readiness" label to the configuration that
leads to 200 responses; so a Varnish instance becomes ready after it
has successfully loaded its first configuration for an Ingress.
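So once a configuration generated from an Ingress has been loaded
(here under the illustrative name ``ingress_cfg``), readiness follows
from re-pointing the two labels, roughly:
```
$ varnishadm vcl.label vk8s_regular ingress_cfg    # regular traffic now routed per the Ingress
$ varnishadm vcl.label vk8s_readiness ready_cfg    # readiness checks now answered with 200
```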
@@ -5,42 +5,237 @@ cluster. The YAML configurations in this folder prepare a simple
method of deployment, suitable for testing and editing according to
your needs.
## Namespace and ServiceAccount
Define the Namespace ``varnish-ingress``, and a ServiceAccount named
``varnish-ingress`` in that namespace:
```
$ kubectl apply -f ns-and-sa.yaml
```
**NOTE**: You can choose any Namespace, but currently all further
operations are restricted to that Namespace -- all resources described
in the following must be defined in the same Namespace. The controller
currently only reads information from the cluster API about Ingresses,
Services and so forth in the namespace of the pod in which it is
running; so all Varnish instances and every resource named in an
Ingress definition must be defined in that namespace. This is likely to
become more flexible in future development.
## RBAC
Apply [Role-based access
control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
(RBAC) by creating a ClusterRole named ``varnish-ingress`` that
permits the necessary API access for the Ingress controller, and a
ClusterRoleBinding that assigns the ClusterRole to the ServiceAccount
defined in the first step:
```
$ kubectl apply -f rbac.yaml
```
## Admin Secret
The controller uses Varnish's admin interface to manage the Varnish
instance, which requires authorization using a shared secret. This is
prepared by defining a k8s Secret:
```
$ kubectl apply -f adm-secret.yaml
```
32 bytes of randomness are sufficient for the secret:
```
# This command can be used to generate the value in the data field of
# the Secret:
$ head -c32 /dev/urandom | base64
```
**TO DO**: The ``metadata.name`` field of the Secret is currently
hard-wired to the value ``adm-secret``, and the key for the Secret (in
the ``data`` field) is hard-wired to ``admin``. The Secret must be
defined in the same Namespace defined above.
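Given the hard-wired names just described, the Secret in
``adm-secret.yaml`` can be expected to have roughly this shape (the
``data`` value is a placeholder; generate your own as shown above):
```
apiVersion: v1
kind: Secret
metadata:
  name: adm-secret
  namespace: varnish-ingress
type: Opaque
data:
  admin: cmVwbGFjZS10aGlzLXdpdGgteW91ci1vd24tdmFsdWU=
```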
## Deploy Varnish containers
The present example uses a Deployment to deploy Varnish instances
(other possibilities are a DaemonSet or a StatefulSet):
```
$ kubectl apply -f varnish.yaml
```
With a choice such as a Deployment you can set as many replicas as you
need; the controller will manage all of them uniformly.
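For example, to run three Varnish replicas (the Deployment name here
is a placeholder for whatever name ``varnish.yaml`` assigns):
```
$ kubectl scale deploy <varnish-deployment> -n varnish-ingress --replicas=3
```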
There are some requirements on the configuration of the Varnish
deployment that must be fulfilled in order for the Ingress to work
properly:
* Currently it must be defined in the same Namespace as defined
above.
* The ``serviceAccountName`` must match the ServiceAccount defined
above.
* The ``image`` must be specified as ``varnish-ingress/varnish``.
* ``spec.template`` must specify a ``label`` with a value that is
matched by the Varnish admin Service described below. In this
example:
```
template:
metadata:
labels:
app: varnish-ingress
```
* The HTTP, readiness and admin ports must be specified:
```
ports:
- name: http
containerPort: 80
- name: k8sport
containerPort: 8080
- name: admport
containerPort: 6081
```
**TO DO**: The ports are currently hard-wired to these port numbers.
A port for TLS access is currently not supported.
* ``volumeMounts`` and ``volumes`` must be specified so that the
Secret defined above is available to Varnish:
```
volumeMounts:
- name: adm-secret
mountPath: "/var/run/varnish"
readOnly: true
```
```
volumes:
- name: adm-secret
secret:
secretName: adm-secret
items:
- key: admin
path: _.secret
```
**TO DO**: The ``mountPath`` is currently hard-wired to
``/var/run/varnish``. The ``secretName`` is hard-wired to
``adm-secret``, the ``key`` to ``admin``, and ``path`` to
``_.secret``.
* The liveness check should determine if the Varnish master process is
running. Since Varnish is started in the foreground as the entry
point of the container, the container is live if it is running at
all. This check verifies that a ``varnishd`` process with parent PID
0 is found in the process table:
```
livenessProbe:
exec:
command:
- /usr/bin/pgrep
- -P
- "0"
- varnishd
```
* The readiness check is an HTTP probe at the reserved listener (named
``k8sport`` above) for the URL path ``/ready``:
```
readinessProbe:
httpGet:
path: /ready
port: k8sport
```
The port name must match the name given for port 8080 above.
## Expose the Varnish HTTP port
With a Deployment, you may choose a resource such as a LoadBalancer or
NodePort to create external access to Varnish's HTTP port. The present
example creates a NodePort, which is simple for development and
testing (a LoadBalancer is more likely in production deployments):
```
$ kubectl apply -f nodeport.yaml
```
The cluster then assigns an external port over which HTTP requests are
directed to Varnish instances.
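The assigned port appears in the ``PORT(S)`` column of the Service
listing, for example:
```
$ kubectl get svc -n varnish-ingress
```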
## Varnish admin Service
The controller discovers Varnish instances that it manages by
obtaining the Endpoints for a headless Service that exposes the admin port:
```
$ kubectl apply -f varnish-adm-svc.yaml
```
This makes it possible for the controller to find the internal
addresses of Varnish instances and connect to their admin listeners.
The Service definition must fulfill some requirements:
* The Service must be defined so that the cluster API will allow
Endpoints to be listed when the container is not ready (since
the Varnish instances are initialized in the not ready state).
The means for doing so has changed in different versions of
Kubernetes. In versions up to 1.9, this annotation must be used:
```
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
```
Since 1.9, the annotation is deprecated, and this field in ``spec``
should be specified instead:
```
spec:
publishNotReadyAddresses: true
```
In recent versions, both specifications are permitted in the YAML,
as in the example YAML (the annotation is deprecated, but is not yet
an error).
* The ``selector`` must match the ``label`` given for the Varnish
deployment, as discussed above. In the present example:
```
selector:
app: varnish-ingress
```
**TO DO**: The Service must be defined in the Namespace of the pod in
which the controller runs. The ``name`` of the Service is currently
hard-wired to ``varnish-ingress-admin``. The port number is hard-wired
to 6081, and the ``port.name`` is hardwired to ``varnishadm``.
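Putting the requirements and hard-wired names together,
``varnish-adm-svc.yaml`` can be expected to look roughly like this
sketch:
```
apiVersion: v1
kind: Service
metadata:
  name: varnish-ingress-admin
  namespace: varnish-ingress
spec:
  clusterIP: None               # headless: the controller reads the Endpoints directly
  publishNotReadyAddresses: true
  selector:
    app: varnish-ingress
  ports:
    - name: varnishadm
      port: 6081
```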
## Deploy the controller
This example uses a Deployment to run the controller container:
```
$ kubectl apply -f controller.yaml
```
The requirements are:
* The ``image`` must be ``varnish-ingress/controller``.
* ``spec.template.spec`` must specify the ``POD_NAMESPACE``
environment variable:
```
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
```
It does *not* make sense to deploy more than one replica of the
controller. If there are more controllers, all of them will connect to
the Varnish instances and send them the same administrative
commands. That is not an error (or there is a bug in the controller if
it does cause errors), but the extra work is superfluous.
**TO DO**: The controller currently only acts on Ingress, Service,
Endpoint and Secret definitions in the same Namespace as the pod in
which it is running.
# Done
When these commands succeed:
* The Varnish instances are running and are in the not ready state.
They answer with synthetic 503 Service Unavailable responses to
every request, for both readiness probes and regular HTTP traffic.
* The Ingress controller begins discovering Ingress definitions for
the namespace of the Pod in which it is running (``varnish-ingress``
in this example). Once it has obtained an Ingress definition, it
creates a VCL configuration to implement it, and instructs the
Varnish instances to load and use it.
You can now define Services that will serve as backends for the
Varnish instances, and Ingress rules that define how they route
requests to those Services.
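A minimal Ingress for this setup might look like the following sketch
(host, path and Service names are purely illustrative; the Ingress and
the Services must be in the ``varnish-ingress`` Namespace):
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: varnish-ingress
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - path: /foo
            backend:
              serviceName: foo-svc
              servicePort: 80
```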
The [``examples/``](/examples) folder of the repository contains YAML
configurations for sample Services and an Ingress to test and
......