Commit cd5bc9e7 authored by Geoff Simmons's avatar Geoff Simmons

Add example architectures for multiple Varnish Services, Ingresses & namespaces.

Ref #26
Ref #13
parent 267dcbdd
......@@ -36,6 +36,9 @@ spec:
[...]
```
See the [``examples/`` folder](/examples/architectures/) for sample
configurations that apply the following rules.
* The controller only considers Ingress definitions with the
``kubernetes.io/ingress.class`` annotation set to specify Varnish as
the implementation, with the currently hard-wired value
......
# Architectures for Varnish Services, Ingresses and namespaces
The examples in the subfolders illustrate some of the possible
relations between Varnish Services implementing Ingress, Ingress
definitions defining routing rules, and the namespaces in which they
are deployed:
* A [cluster-wide Varnish
Service](/examples/architectures/clusterwide/) that implements
Ingress rules in all namespaces.
* A [setup](/examples/architectures/cluster-and-ns-wide/) with a
cluster-wide Service, and another Varnish Service that implements
Ingress rules in its own namespace.
* [Multiple Varnish
  Services](/examples/architectures/multi-varnish-ns/) in the same
  namespace, each of which implements separate Ingress rules.
These configurations apply the [rules](/docs/ref-svcs-ingresses-ns.md)
concerning the relationships between Varnish Services, Ingresses and
namespaces.
# A cluster-wide Varnish Service, and another in a namespace
The sample manifests in this folder implement the following
configuration in a cluster:
* A Varnish-as-Ingress deployment in the ``kube-system`` namespace
acts as a "cluster-wide" service.
* Another Varnish deployment implements one of the Ingresses in a
  different namespace.
* Ingress definitions use the
``ingress.varnish-cache.org/varnish-svc`` annotation to identify the
Varnish Service in ``kube-system`` as the one to implement their
rules.
* The Ingress definition in the namespace that has its own Varnish
  Service carries no such annotation. The unique Varnish Service in
  that namespace is assumed to implement its rules.
The Ingresses all have the ``ingress.class:varnish`` annotation to
identify Varnish as the implementation of Ingress rules. Two Ingresses
are merged to form a set of rules implemented by the cluster-wide
Varnish Service.
The configuration illustrates these features:
* Use of the ``ingress.varnish-cache.org/varnish-svc`` annotation in
  an Ingress definition to explicitly identify the Varnish Service
  that implements its rules.
* If an Ingress has no such annotation, and there is more than one
Varnish Service for Ingress in the cluster, but exactly one in the
same namespace, then the Service in the same namespace implements
its rules.
* Merging Ingresses from different namespaces. A Varnish Service
configures Services from different namespaces as backends, when
Ingresses in the various namespaces reference backend Services in
their own namespace.
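As a sketch of the first feature, an Ingress that explicitly selects
the cluster-wide Varnish Service carries both annotations in its
``metadata`` (excerpt consistent with the ``tea-ingress`` and
``other-ingress`` manifests in this folder):
```
# Excerpt: an Ingress pinned to the Varnish Service in kube-system.
metadata:
  name: tea-ingress
  namespace: cafe
  annotations:
    kubernetes.io/ingress.class: "varnish"
    ingress.varnish-cache.org/varnish-svc: "kube-system/varnish-ingress"
```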
## The example
![Varnish cluster-wide and per namespace](cluster-ns-wide.png?raw=true "Varnish cluster-wide and per namespace")
The configuration is similar to the ["cafe" example](/examples/hello/)
in that it defines "coffee" and "tea" Services, and Ingress rules
route requests to those Services. There is also an "other" Service to
serve as the default backend when no Ingress rules apply.
* In ``kube-system``, the Service ``varnish-ingress`` is deployed.
The label ``app:varnish-ingress`` identifies it as an Ingress
implementation to be managed by the controller defined for this
project.
* In the ``cafe`` namespace, these resources are defined:
* Services ``coffee-svc`` and ``tea-svc``
* Service ``varnish-ingress``, with the ``app:varnish-ingress``
label identifying it as an Ingress implementation.
* Ingress ``tea-ingress``, defining the rule that requests with
the Host ``tea.example.com`` are routed to ``tea-svc``. This
Ingress has the ``varnish-svc`` annotation to specify the
Varnish Service in ``kube-system`` as the one to implement its
rules.
* Ingress ``coffee-ingress``, with the rule that requests with
Host ``coffee.example.com`` are routed to ``coffee-svc``. There
is no ``varnish-svc`` annotation. Since there is more than one
Varnish-as-Ingress Service in the cluster, but only one in
namespace ``cafe``, the Varnish Service in the same namespace
implements its rules.
* In the ``other`` namespace:
* Service ``other-svc``
* Ingress ``other-ingress``, in which ``other-svc`` is defined as
a default backend (to which requests are routed when no other
Ingress rules apply). Like ``tea-ingress`` discussed above, this
Ingress uses the ``varnish-svc`` annotation to specify the
Varnish Service in ``kube-system``.
The Varnish Ingress implementation combines these rules and routes
requests to the three Services.
## Deploying the example
First define the two namespaces:
```
$ kubectl apply -f namespace.yaml
```
Then define the backend Deployments and Services in the two
namespaces. These are the same simple applications used for the
["cafe" example](/examples/hello/), but with ``namespace``
configurations in their ``metadata``:
```
$ kubectl apply -f coffee.yaml
$ kubectl apply -f tea.yaml
$ kubectl apply -f other.yaml
```
Now define the two Varnish Services and associated resources in the
``kube-system`` and ``cafe`` namespaces. This is similar to the
sequence described in the [deployment instructions](/deploy/), but
here we define, for each of the two Varnish deployments:
* a Secret, to authorize use of the Varnish admin interface
* the Varnish Service as a NodePort (for simplicity's sake)
* a Deployment that specifies the ``varnish-ingress`` container, and
some required properties for the Ingress implementation
```
# Set up the Varnish deployment in kube-system
$ kubectl apply -f adm-secret-system.yaml
$ kubectl apply -f nodeport-system.yaml
$ kubectl apply -f varnish-system.yaml
# And in namespace cafe
$ kubectl apply -f adm-secret-coffee.yaml
$ kubectl apply -f nodeport-coffee.yaml
$ kubectl apply -f varnish-coffee.yaml
```
The routing rules to be implemented by Varnish can now be configured
by loading the three Ingress definitions:
```
$ kubectl apply -f coffee-ingress.yaml
$ kubectl apply -f tea-ingress.yaml
$ kubectl apply -f other-ingress.yaml
```
## Verification
The log output of the Ingress controller shows the association of
Ingress definitions with Varnish Services:
```
Ingresses implemented by Varnish Service kube-system/varnish-ingress: [other/other-ingress cafe/tea-ingress]
Ingresses implemented by Varnish Service cafe/varnish-ingress: [cafe/coffee-ingress]
```
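The controller's log can be inspected with ``kubectl logs``; the label
selector here is an assumption (adjust it to match your controller
deployment from the [deployment instructions](/deploy/)):
```
# Assumed label; substitute the selector or pod name of your controller.
$ kubectl -n kube-system logs -l app=varnish-ingress-controller
```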
The implementation of the Ingress rules by the Varnish Services can
now be verified, for example with curl. Since the two Varnish Services
are exposed as NodePorts, they are reached externally over two
different ports. In the following, we use:
* ``$IP_ADDR`` for the IP address of the Kubernetes cluster
* ``$IP_PORT_SYSTEM`` for the port at which requests are forwarded to
the Varnish Service in ``kube-system``
* ``$IP_PORT_CAFE`` for the port at which requests are forwarded to
the Varnish Service in namespace ``cafe``
These values are used with curl's ``-x`` option (or ``--proxy``), to
identify the IP/port address as a proxy.
```
# Requests sent to the Varnish Service in kube-system with
# Host:tea.example.com are routed to tea-svc:
$ curl -v -x $IP_ADDR:$IP_PORT_SYSTEM http://tea.example.com/foo
[...]
> GET http://tea.example.com/foo HTTP/1.1
> Host: tea.example.com
[...]
>
< HTTP/1.1 200 OK
[...]
Server name: tea-58d4697745-wxdzb
[...]
# Requests sent to the Varnish Service in kube-system with any other
# Host are routed to other-svc.
$ curl -v -x $IP_ADDR:$IP_PORT_SYSTEM http://anything.else/bar
[...]
> GET http://anything.else/bar HTTP/1.1
> Host: anything.else
[...]
>
< HTTP/1.1 200 OK
[...]
Server name: other-55cfbbf569-hv7x2
[...]
# Requests sent to the Varnish Service in namespace cafe with
# Host:coffee.example.com are routed to coffee-svc:
$ curl -v -x $IP_ADDR:$IP_PORT_CAFE http://coffee.example.com/coffee
[...]
> GET http://coffee.example.com/coffee HTTP/1.1
> Host: coffee.example.com
[...]
< HTTP/1.1 200 OK
[...]
Server name: coffee-6c47b9cb9c-vlvdz
[...]
# Requests sent to the Varnish Service in namespace cafe with
# any other Host get the 404 response:
$ curl -v -x $IP_ADDR:$IP_PORT_CAFE http://tea.example.com/foo
[...]
> GET http://tea.example.com/foo HTTP/1.1
> Host: tea.example.com
[...]
>
< HTTP/1.1 404 Not Found
[...]
```
apiVersion: v1
kind: Secret
metadata:
  name: adm-secret
  namespace: cafe
  labels:
    app: varnish-ingress
type: Opaque
data:
  admin: ByIQphD6z6UY3nEXAVS+AlrQUXgzg2dcT1Zd1rG1l4M=
apiVersion: v1
kind: Secret
metadata:
  name: adm-secret
  namespace: kube-system
  labels:
    app: varnish-ingress
type: Opaque
data:
  admin: f/y/Vt0O7rnL3m5LM2upu/ImjA6paITHmvYYEQ1Qrfg=
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: coffee-ingress
  namespace: cafe
  annotations:
    kubernetes.io/ingress.class: "varnish"
spec:
  rules:
  - host: coffee.example.com
    http:
      paths:
      - backend:
          serviceName: coffee-svc
          servicePort: 80
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coffee
  namespace: cafe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: coffee
  template:
    metadata:
      labels:
        app: coffee
    spec:
      containers:
      - name: coffee
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  namespace: cafe
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: coffee
apiVersion: v1
kind: Namespace
metadata:
  name: cafe
---
apiVersion: v1
kind: Namespace
metadata:
  name: other
apiVersion: v1
kind: Service
metadata:
  name: varnish-ingress
  namespace: cafe
  labels:
    app: varnish-ingress
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  type: NodePort
  ports:
  - port: 6081
    targetPort: 6081
    protocol: TCP
    name: varnishadm
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: varnish-ingress
  publishNotReadyAddresses: true
apiVersion: v1
kind: Service
metadata:
  name: varnish-ingress
  namespace: kube-system
  labels:
    app: varnish-ingress
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  type: NodePort
  ports:
  - port: 6081
    targetPort: 6081
    protocol: TCP
    name: varnishadm
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: varnish-ingress
  publishNotReadyAddresses: true
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: other-ingress
  namespace: other
  annotations:
    kubernetes.io/ingress.class: "varnish"
    ingress.varnish-cache.org/varnish-svc: "kube-system/varnish-ingress"
spec:
  backend:
    serviceName: other-svc
    servicePort: 80
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: other
  namespace: other
spec:
  replicas: 2
  selector:
    matchLabels:
      app: other
  template:
    metadata:
      labels:
        app: other
    spec:
      containers:
      - name: other
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: other-svc
  namespace: other
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: other
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tea-ingress
  namespace: cafe
  annotations:
    kubernetes.io/ingress.class: "varnish"
    ingress.varnish-cache.org/varnish-svc: "kube-system/varnish-ingress"
spec:
  rules:
  - host: tea.example.com
    http:
      paths:
      - backend:
          serviceName: tea-svc
          servicePort: 80
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: tea
  namespace: cafe
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tea
  template:
    metadata:
      labels:
        app: tea
    spec:
      containers:
      - name: tea
        image: nginxdemos/hello:plain-text
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tea-svc
  namespace: cafe
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: tea
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: varnish
  namespace: cafe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: varnish-ingress
  template:
    metadata:
      labels:
        app: varnish-ingress
    spec:
      containers:
      - image: varnish-ingress/varnish
        imagePullPolicy: IfNotPresent
        name: varnish-ingress
        ports:
        - name: http
          containerPort: 80
        - name: k8s
          containerPort: 8080
        - name: varnishadm
          containerPort: 6081
        volumeMounts:
        - name: adm-secret
          mountPath: "/var/run/varnish"
          readOnly: true
        livenessProbe:
          exec:
            command:
            - /usr/bin/pgrep
            - -P
            - "0"
            - varnishd
        readinessProbe:
          httpGet:
            path: /ready
            port: k8s
      volumes:
      - name: adm-secret
        secret:
          secretName: adm-secret
          items:
          - key: admin
            path: _.secret
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: varnish
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: varnish-ingress
  template:
    metadata:
      labels:
        app: varnish-ingress
    spec:
      containers:
      - image: varnish-ingress/varnish
        imagePullPolicy: IfNotPresent
        name: varnish-ingress
        ports:
        - name: http
          containerPort: 80
        - name: k8s
          containerPort: 8080
        - name: varnishadm
          containerPort: 6081
        volumeMounts:
        - name: adm-secret
          mountPath: "/var/run/varnish"
          readOnly: true
        livenessProbe:
          exec:
            command:
            - /usr/bin/pgrep
            - -P
            - "0"
            - varnishd
        readinessProbe:
          httpGet:
            path: /ready
            port: k8s
      volumes:
      - name: adm-secret
        secret:
          secretName: adm-secret
          items:
          - key: admin
            path: _.secret
# One cluster-wide Varnish Service for Ingresses in all namespaces
The sample manifests in this folder implement the following
configuration in a cluster:
* One Varnish-as-Ingress deployment in the cluster, in the
``kube-system`` namespace
* Services and Ingresses are defined in three additional namespaces
The Ingresses all have the ``ingress.class:varnish`` annotation to
identify Varnish as the implementation of Ingress rules, and no
``varnish-svc`` annotation to identify a specific Varnish Service. The
Ingresses are all merged to form one set of rules implemented by the
cluster-wide Varnish Service.
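For illustration, the ``metadata`` of each Ingress in this example
needs only the class annotation (excerpt consistent with the manifests
in this folder):
```
# Excerpt: no varnish-svc annotation; the sole Varnish Service in the
# cluster is assumed to implement the rules.
metadata:
  name: coffee-ingress
  namespace: coffee
  annotations:
    kubernetes.io/ingress.class: "varnish"
```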
The configuration illustrates a few features of the implementation:
* A Varnish Service can serve as the cluster-wide Ingress
implementation, and is assumed as the Service to implement a Varnish
Ingress if there are no others in the cluster.
* Ingresses from different namespaces are merged when:
* They are implemented by the same Varnish Service.
* They do not violate restrictions in merging Ingresses: no
overlapping ``host`` configurations, and no more than one
default backend among the merged Ingress definitions.
* A Varnish Service can configure Services from different namespaces
as backends. This results from combining the Ingresses in the
various namespaces, each of which references backend Services in
their own namespace.
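As a hypothetical illustration of the merging restrictions, two
Ingresses like the following excerpts could *not* be combined by one
Varnish Service, because they declare rules for the same host:
```
# Hypothetical excerpts -- NOT part of this example's manifests.
# tea/tea-ingress
spec:
  rules:
  - host: tea.example.com
# other/other-ingress -- overlaps the host above, so merging is rejected
spec:
  rules:
  - host: tea.example.com
```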
## The example
![clusterwide Varnish](clusterwide.png?raw=true "Cluster-wide Varnish")
The configuration is similar to the ["cafe" example](/examples/hello/)
in that it defines "coffee" and "tea" Services, and Ingress rules
route requests to those Services. There is also an "other" Service
serving as the default backend when no Ingress rules apply. In this
case, the Ingresses and Services are deployed in three namespaces.
* Requests with the Host ``coffee.example.com`` are routed to the
Service ``coffee-svc`` in namespace ``coffee``. This rule is
defined by the Ingress ``coffee-ingress`` in namespace ``coffee``.
* Requests with the Host ``tea.example.com`` are routed to the Service
``tea-svc`` in namespace ``tea``. The rule is defined in Ingress
``tea/tea-ingress``.
* All other requests are routed to the Service ``other-svc`` in
namespace ``other``. This rule is defined in Ingress
``other/other-ingress``.
The Varnish Ingress implementation combines these rules and routes
requests to the three Services.
## Preparation
The feature illustrated by this example depends on there being only
one Varnish Service in the cluster -- that is, a Service running the
Varnish container defined by the project, with the label value
``app:varnish-ingress``. To test the example, first delete any other
such Service in the cluster, in all namespaces.
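One way to check for such Services, using the ``app:varnish-ingress``
label as the selector:
```
# List all Services carrying the project's label, in every namespace.
$ kubectl get svc --all-namespaces -l app=varnish-ingress
```
Any Service listed besides the one in ``kube-system`` should be
deleted before running the example.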
## Deploying the example
First define the three namespaces:
```
$ kubectl apply -f namespace.yaml
```
Then define the Deployments and Services in the three
namespaces. These are the same simple applications used for the
["cafe" example](/examples/hello/), but with ``namespace``
configurations in their ``metadata``:
```
$ kubectl apply -f coffee.yaml
$ kubectl apply -f tea.yaml
$ kubectl apply -f other.yaml
```
In the example, the Varnish Service and associated resources are
defined in the ``kube-system`` namespace. As discussed in the
[deployment instructions](/deploy/), we define:
* a Secret, to authorize use of the Varnish admin interface
* the Varnish Service as a NodePort (for simplicity's sake)
* a Deployment that specifies the ``varnish-ingress`` container, and
some required properties for the Ingress implementation
The manifests have essentially the same content as in the
[deployment instructions](/deploy/) and other examples, except for the
``namespace:kube-system`` setting.
```
$ kubectl apply -f adm-secret.yaml
$ kubectl apply -f nodeport.yaml
$ kubectl apply -f varnish.yaml
```
The routing rules to be implemented by Varnish can now be configured
by loading the three Ingress definitions:
```
$ kubectl apply -f coffee-ingress.yaml
$ kubectl apply -f tea-ingress.yaml
$ kubectl apply -f other-ingress.yaml
```
## Verification