Commit 81d7c3e2 authored by Geoff Simmons's avatar Geoff Simmons

Add docs and examples for multi-controller configurations.

Closes #26
parent 1c189b14
@@ -16,6 +16,8 @@ The docs in this folder cover these topics:
  different namespaces
* merging various Ingress definitions into a comprehensive set
  of routing rules implemented by a Varnish Service
* running more than one controller in a cluster, if necessary
  (in most cases, one controller Pod in a cluster will suffice)
* [Logging, Events and the Varnish Service monitor](monitor.md)
...
@@ -105,11 +105,12 @@ The controller is notified about all Services, Ingresses and so on in
the cluster, by default in every namespace, including components that
have nothing to do with Ingress or Varnish. These are ignored -- for
example, Ingresses without the ``ingress.class`` annotation set to
``varnish`` (or the value of the [controller option
``-class``](/docs/ref-cli-options.md)), or Secrets that do not have the
label ``app: varnish-ingress``. The controller may generate
``SyncSuccess`` Events for such objects, but in fact it has done
nothing for them. The controller log usually contains a message at
the ``INFO`` level that it has ignored information about a component.
## Varnish Service monitor
...
@@ -10,6 +10,10 @@ $ k8s-ingress --help
Usage of ./k8s-ingress:
  -alsologtostderr
        log to standard error as well as files
  -class string
        value of the Ingress annotation kubernetes.io/ingress.class
        the controller only considers Ingresses with this value for the
        annotation (default "varnish")
  -kubeconfig string
        config path for the cluster master URL, for out-of-cluster runs
  -log-level string
@@ -83,6 +87,18 @@ the file at that path immediately at startup, if any exists, and
touches it when it is ready. Readiness probes can then test the file
for existence. By default, no readiness file is created.
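A container spec fragment showing this combination, following the controller Deployment manifest included with the multi-controller example (names and paths as in that example):

```
containers:
- name: varnish-ingress-controller
  image: varnish-ingress/controller
  args:
  - -readyfile=/ready
  readinessProbe:
    exec:
      command:
      - /usr/bin/test
      - -e
      - /ready
```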
``-class ingclass`` sets the string ``ingclass`` (default ``varnish``)
as the required value of the Ingress annotation
``kubernetes.io/ingress.class``. The controller ignores Ingresses
that do not have the annotation set to this value. This makes it
possible for the Varnish Ingress implementation to co-exist in a
cluster with other implementations, as long as the other
implementations also respect the annotation. It also makes it possible
to deploy more than one Varnish controller to manage Varnish Services
and Ingresses separately; see the
[documentation](/docs/ref-svcs-ingresses-ns.md) and
[examples](/examples/architectures/multi-controller/) for details.
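In a Deployment manifest, the option can be passed in the controller container's ``args``; a sketch, using the class name from the multi-controller example:

```
args:
- -readyfile=/ready
- -class=varnish-coffee
```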
``-monitorintvl`` sets the interval for the
[monitor](/docs/monitor.md). By default 30 seconds, and the monitor is
deactivated for values <= 0. The monitor sleeps this long between
...
# Structuring Varnish Services, Ingresses, controllers and namespaces

This document is the authoritative reference for the configuration
elements and rules governing these relationships:

@@ -12,6 +12,8 @@ elements and rules governing these relationships:
* how various Ingress definitions can be merged into a comprehensive
  set of routing rules implemented by a single Varnish Service
* how to operate more than one controller in a cluster, if needed
These relations are driven by the contents of Ingress definitions,
both their rules and these two annotations:

@@ -41,9 +43,11 @@ configurations that apply the following rules.
* The controller only considers Ingress definitions with the
  ``kubernetes.io/ingress.class`` annotation set to specify Varnish as
  the implementation, by default with the value ``"varnish"`` (or the
  value of the [controller option
  ``-class``](/docs/ref-cli-options.md)). Ingresses that do not have
  the annotation, or in which the annotation is set to another value,
  are ignored.
* Services that run Varnish and implement Ingress, using the
  Varnish container defined for this project, are identified

@@ -92,3 +96,37 @@ of the Kubernetes standard specification for host and path rules. For
each host, the first path rule that matches the URL determines how a
request is routed. But if the same host appears in more than one
Ingress, then there is no defined ordering for the path rules.
## Multiple controllers

The controller is designed so that it can run in only one Pod and
manage all of the Varnish Services for Ingress in the entire cluster
(deployment in namespace ``kube-system`` is a natural choice). But it
is possible to run more than one instance to manage separate Varnish
Services, for example to partition the controller load, or to
logically separate the responsibilities of controllers.

To do so:

* Start the different controller instances with different values of
  the [command-line option ``-class``](/docs/ref-cli-options.md), to
  designate distinct values of the Ingress annotation
  ``kubernetes.io/ingress.class``. Then the different controller
  instances will only implement the Ingress definitions that have
  their "own" value for the annotation.

* Ingress definitions with distinct values of the ``ingress.class``
  annotation should designate distinct Varnish Services (with one of
  the means described above). In other words, the Ingresses and
  Varnish Services managed by one controller should not be managed by
  any other controller.

Multiple controllers in a cluster SHOULD NOT be started with the same
value of the ``-class`` option. Varnish Services SHOULD NOT be
designated by Ingress definitions with different values of the
``ingress.class`` annotation. If more than one controller attempts to
manage the same Ingresses or Varnish Services, the results are
undefined, and the desired state of the cluster might not be achieved.

See the [``examples/`` folder](/examples/architectures/multi-controller/)
for a working example of two Varnish controllers in a cluster.
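A sketch of the resulting Ingress annotations (class names and Service names as in the multi-controller example):

```
# Considered only by the controller started with -class=varnish-coffee:
metadata:
  annotations:
    kubernetes.io/ingress.class: "varnish-coffee"
    ingress.varnish-cache.org/varnish-svc: "varnish-coffee"
---
# Considered only by a controller using the default class "varnish":
metadata:
  annotations:
    kubernetes.io/ingress.class: "varnish"
    ingress.varnish-cache.org/varnish-svc: "varnish-tea"
```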
# Architectures for Varnish Services, Ingresses, controllers and namespaces

The examples in the subfolders illustrate some of the possible
relations between Ingress controllers, Varnish Services implementing
Ingress, Ingress definitions defining routing rules, and the
namespaces in which they are deployed:
* A [cluster-wide Varnish
  Service](/examples/architectures/clusterwide/) that implements

@@ -17,6 +17,10 @@ are deployed:
  Services](/examples/architectures/multi-varnish-ns/) in the same
  namespace, each of which implements separate Ingress rules.
* [Multiple Ingress
controllers](/examples/architectures/multi-controller/) for Varnish,
managing separate sets of Varnish Services and Ingresses.
These configurations apply the [rules](/docs/ref-svcs-ingresses-ns.md)
concerning the relationships between Varnish Services, Ingresses and
namespaces.
# Multiple controllers

The controller is designed so that in most deployments, it suffices to
run it in exactly one Pod in the cluster, to manage all Varnish
Services and Ingresses in the cluster. But more than one controller
instance can be run by following the method described in the
[documentation](/docs/ref-svcs-ingresses-ns.md). The sample manifests
in this folder demonstrate a working example.

Multiple controllers are only assured to work correctly if they manage
distinct sets of Varnish Services and Ingresses (otherwise the results
are undefined). This is accomplished by:
* Starting the different controller instances with different values of
  the [command-line option ``-class``](/docs/ref-cli-options.md),
  defining the value of the Ingress annotation
  ``kubernetes.io/ingress.class`` that the controller instance will
  consider. This defines which controllers manage which Ingresses.

* No two Ingress definitions with different values of the
  ``ingress.class`` annotation should designate the same Varnish
  Service to implement the Ingress rules (by the rules for determining
  the Varnish Service as described in the
  [documentation](/docs/ref-svcs-ingresses-ns.md)).

The [Deployment
manifest](/examples/architectures/multi-controller/controller.yaml)
for a Varnish controller in this folder shows the use of the ``-class``
option in the ``args`` of the controller container in its Pod template:
```
args:
- -readyfile=/ready
- -class=varnish-coffee
```
This sets the value of the ``ingress.class`` annotation for Ingresses
that the controller considers.
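The Ingresses that this instance manages must then carry the matching annotation, as ``coffee-ingress`` in this folder does:

```
metadata:
  annotations:
    kubernetes.io/ingress.class: "varnish-coffee"
```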
## The example

![multiple controllers](multi-controller.png?raw=true "multiple controllers")

The configuration is similar to the ["cafe" example](/examples/hello/)
in that it defines the Services ``coffee-svc`` and ``tea-svc``, and
Ingress rules route requests to those Services. There are also Varnish
Services ``varnish-coffee`` and ``varnish-tea`` in the same namespace.
* Controller instance ``varnish-ingress-controller`` is started with
the default value ``"varnish"`` for the ``ingress.class``
annotation. This is the same configuration defined by the manifests
in the [``deploy/`` folder](/deploy/); the configuration is not
included in the present folder.
* Controller instance ``varnish-coffee-ingress-controller`` is started
with the [command-line option ``-class``](/docs/ref-cli-options.md)
set to ``"varnish-coffee"``, so that this instance only considers
Ingresses with that value for the ``ingress.class`` annotation.
* Ingress ``tea-ingress`` sets the ``ingress.class`` annotation to
``"varnish"``. It defines the rule that requests with Host
``tea.example.com`` are routed to ``tea-svc``. This Ingress has the
``varnish-svc`` annotation to specify the Varnish Service
``varnish-tea`` as the one to implement its rules.
* Ingress ``coffee-ingress`` sets the ``ingress.class`` annotation to
  ``"varnish-coffee"``. It defines the rule that requests with the Host
  ``coffee.example.com`` are routed to ``coffee-svc``. It uses
  ``varnish-svc`` to specify the Varnish Service ``varnish-coffee``.

The effect is that:

* Controller ``varnish-ingress-controller`` manages Varnish Service
  ``varnish-tea`` to implement the Ingress rule in ``tea-ingress``.

* Controller ``varnish-coffee-ingress-controller`` manages Varnish
  Service ``varnish-coffee`` to implement the Ingress rule in
  ``coffee-ingress``.
## Deploying the example
First deploy the ``varnish-ingress-controller`` instance as described
in the [deployment instructions](/deploy/), and then deploy
``varnish-coffee-ingress-controller`` as the second controller
instance:
```
$ kubectl apply -f controller.yaml
```
Then define the ``cafe`` namespace:
```
$ kubectl apply -f namespace.yaml
```
Then define the backend Deployments and Services. These are the same
simple applications used for the ["cafe" example](/examples/hello/),
but with ``namespace`` set to ``cafe``:
```
$ kubectl apply -f coffee.yaml
$ kubectl apply -f tea.yaml
```
Now define the two Varnish Services and associated resources. This is
done the same way as described for the [example of multiple Varnish
Services in a namespace](/examples/architectures/multi-varnish-ns/):
```
$ kubectl apply -f adm-secret-tea.yaml
$ kubectl apply -f nodeport-tea.yaml
$ kubectl apply -f varnish-tea.yaml
$ kubectl apply -f adm-secret-coffee.yaml
$ kubectl apply -f nodeport-coffee.yaml
$ kubectl apply -f varnish-coffee.yaml
```
(Running multiple controllers does not depend on whether or not multiple
Varnish Services are run in the same namespace.)

The routing rules to be implemented by Varnish can now be configured
by loading the Ingress definitions:
```
$ kubectl apply -f coffee-ingress.yaml
$ kubectl apply -f tea-ingress.yaml
```
## Verification
The log output of the two controller instances shows their use of
different values for ``ingress.class``, which in turn determines which
of the Ingresses they manage or ignore.

In the log output for ``varnish-ingress-controller``:
```
Ingress class:varnish
Ingress cafe/tea-ingress configured for Varnish Service cafe/varnish-tea
Ignoring Ingress cafe/coffee-ingress, Annotation 'kubernetes.io/ingress.class' absent or is not 'varnish'
```
In the log for ``varnish-coffee-ingress-controller``:
```
Ingress class:varnish-coffee
Ingress cafe/coffee-ingress configured for Varnish Service cafe/varnish-coffee
Ignoring Ingress cafe/tea-ingress, Annotation 'kubernetes.io/ingress.class' absent or is not 'varnish-coffee'
```
The implementation of the Ingress rules by the Varnish Services can
now be verified, for example with curl. Since we are accessing the two
Varnish Services as NodePorts, they are accessed externally over two
different ports. In the following, we use:
* ``$IP_ADDR`` for the IP address of the Kubernetes cluster

* ``$IP_PORT_COFFEE`` for the port at which requests are forwarded to
  the Varnish Service ``varnish-coffee``

* ``$IP_PORT_TEA`` for the port at which requests are forwarded to
  the Varnish Service ``varnish-tea``

These values are used with curl's ``-x`` option (or ``--proxy``), to
identify the IP address and port as a proxy.
```
# Requests sent to Varnish Service varnish-coffee with
# Host:coffee.example.com are routed to coffee-svc:
$ curl -v -x $IP_ADDR:$IP_PORT_COFFEE http://coffee.example.com/foo
[...]
> GET http://coffee.example.com/foo HTTP/1.1
> Host: coffee.example.com
[...]
>
< HTTP/1.1 200 OK
[...]
Server name: coffee-6c47b9cb9c-mgh48
[...]
# Requests sent to varnish-coffee with any other Host result in a 404
# response.
$ curl -v -x $IP_ADDR:$IP_PORT_COFFEE http://tea.example.com/foo
[...]
> GET http://tea.example.com/foo HTTP/1.1
> Host: tea.example.com
[...]
>
< HTTP/1.1 404 Not Found
[...]
# Requests sent to Varnish Service varnish-tea with
# Host:tea.example.com are routed to tea-svc:
$ curl -v -x $IP_ADDR:$IP_PORT_TEA http://tea.example.com/bar
[...]
> GET http://tea.example.com/bar HTTP/1.1
> Host: tea.example.com
[...]
< HTTP/1.1 200 OK
[...]
Server name: tea-58d4697745-wxdzb
[...]
# Requests sent to varnish-tea with any other Host get the 404
# response:
$ curl -v -x $IP_ADDR:$IP_PORT_TEA http://coffee.example.com/bar
[...]
> GET http://coffee.example.com/bar HTTP/1.1
> Host: coffee.example.com
[...]
>
< HTTP/1.1 404 Not Found
[...]
```
apiVersion: v1
kind: Secret
metadata:
  name: coffee-secret
  namespace: cafe
  labels:
    app: varnish-ingress
type: Opaque
data:
  admin: BhjBhxjqrbCnW2eYLoUL+C2TN51a8sWQIfL9oRWPY2E=
---
apiVersion: v1
kind: Secret
metadata:
  name: tea-secret
  namespace: cafe
  labels:
    app: varnish-ingress
type: Opaque
data:
  admin: IZqtwnccuVoCblGaTq8yK8mOk8gtLwWmbZq17tpcdwo=
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: coffee-ingress
  namespace: cafe
  annotations:
    kubernetes.io/ingress.class: "varnish-coffee"
    ingress.varnish-cache.org/varnish-svc: "varnish-coffee"
spec:
  rules:
  - host: coffee.example.com
    http:
      paths:
      - backend:
          serviceName: coffee-svc
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coffee
  namespace: cafe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  namespace: cafe
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: coffee
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: varnish-coffee-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: varnish-ingress-controller
  template:
    metadata:
      labels:
        app: varnish-ingress-controller
    spec:
      serviceAccountName: varnish-ingress-controller
      containers:
      - image: varnish-ingress/controller
        imagePullPolicy: IfNotPresent
        name: varnish-ingress-controller
        ports:
        - name: http
          containerPort: 8080
        livenessProbe:
          exec:
            command:
            - /usr/bin/pgrep
            - -P
            - "0"
            - k8s-ingress
        readinessProbe:
          exec:
            command:
            - /usr/bin/test
            - -e
            - /ready
        args:
        - -readyfile=/ready
        - -class=varnish-coffee
---
apiVersion: v1
kind: Namespace
metadata:
  name: cafe
---
apiVersion: v1
kind: Service
metadata:
  name: varnish-coffee
  namespace: cafe
  labels:
    app: varnish-ingress
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  type: NodePort
  ports:
  - port: 6081
    targetPort: 6081
    protocol: TCP
    name: varnishadm
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: varnish-ingress
    ingress: coffee
  publishNotReadyAddresses: true
---
apiVersion: v1
kind: Service
metadata:
  name: varnish-tea
  namespace: cafe
  labels:
    app: varnish-ingress
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  type: NodePort
  ports:
  - port: 6081
    targetPort: 6081
    protocol: TCP
    name: varnishadm
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: varnish-ingress
    ingress: tea
  publishNotReadyAddresses: true
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tea-ingress
  namespace: cafe
  annotations:
    kubernetes.io/ingress.class: "varnish"
    ingress.varnish-cache.org/varnish-svc: "varnish-tea"
spec:
  rules:
  - host: tea.example.com
    http:
      paths:
      - backend:
          serviceName: tea-svc
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tea
  namespace: cafe
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
  namespace: cafe
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: tea
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: varnish-coffee
  namespace: cafe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: varnish-ingress
      ingress: coffee
  template:
    metadata:
      labels:
        app: varnish-ingress
        ingress: coffee
    spec:
      containers:
      - image: varnish-ingress/varnish
        imagePullPolicy: IfNotPresent
        name: varnish-ingress
        ports:
        - name: http
          containerPort: 80
        - name: k8s
          containerPort: 8080
        - name: varnishadm
          containerPort: 6081
        volumeMounts:
        - name: adm-secret
          mountPath: "/var/run/varnish"
          readOnly: true
        livenessProbe:
          exec:
            command:
            - /usr/bin/pgrep
            - -P
            - "0"
            - varnishd
        readinessProbe:
          httpGet:
            path: /ready
            port: k8s
      volumes:
      - name: adm-secret
        secret:
          secretName: coffee-secret
          items:
          - key: admin
            path: _.secret
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: varnish-tea
  namespace: cafe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: varnish-ingress
      ingress: tea
  template:
    metadata:
      labels:
        app: varnish-ingress
        ingress: tea
    spec:
      containers:
      - image: varnish-ingress/varnish
        imagePullPolicy: IfNotPresent
        name: varnish-ingress
        ports:
        - name: http
          containerPort: 80
        - name: k8s
          containerPort: 8080
        - name: varnishadm
          containerPort: 6081
        volumeMounts:
        - name: adm-secret
          mountPath: "/var/run/varnish"
          readOnly: true
        livenessProbe:
          exec:
            command:
            - /usr/bin/pgrep
            - -P
            - "0"
            - varnishd
        readinessProbe:
          httpGet:
            path: /ready
            port: k8s
      volumes:
      - name: adm-secret
        secret:
          secretName: tea-secret
          items:
          - key: admin
            path: _.secret