Commit 1140313d authored by Geoff Simmons

Watch for resources in all namespaces.

This commit includes a comprehensive refactoring of the
controller:

- ResourceEventHandlers dispatch work to a workqueue for
  a worker that in turn dispatches work to a separate
  queue for each namespace. For each namespace, there
  is a worker that syncs the various resources (see the
  sketch after this list of changes).

- The controller uses a SharedInformer, with Informers
  and Listers for each resource type. The Listers are
  handed off to the namespace-specific workers.

- Separate source files contain code for syncing
  Endpoints, Ingress, Service and Secret.

- Added the controller CLI options:
  -namespace: for single-namespace deployments
  -templatedir: to locate the templates for VCL

- The directory for templates is given either by the
  templatedir CLI option or the TEMPLATE_DIR env
  variable. This makes it possible to run the
  controller from the command line, and to run tests
  from any working directory.

- The Ingress annotation
  ingress.varnish-cache.org/varnish-svc identifies
  the Service name of the Varnish Service intended
  to implement it. If the annotation is absent,
  then the controller looks for exactly one Service
  in the namespace with the label "app:varnish-ingress".
  If none is found, the Ingress is rejected.
  (This is not yet documented.)

- Docs and examples updated

- Example for single-namespace deployment added.
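
The two-level dispatch can be pictured roughly as follows. This is an
illustrative sketch only: the channel-and-map types and the names
"task" and "dispatcher" are invented for the sketch, standing in for
the client-go workqueues that the controller actually uses; the real
types are NamespaceQueues and NamespaceWorker in the controller
package in this commit.

package main

import (
	"fmt"
	"sync"
)

type task struct{ namespace, key string }

type dispatcher struct {
	in      chan task            // top-level queue fed by the event handlers
	workers map[string]chan task // one queue per namespace
	wg      sync.WaitGroup
}

func (d *dispatcher) run() {
	for t := range d.in {
		q, ok := d.workers[t.namespace]
		if !ok {
			q = make(chan task, 16)
			d.workers[t.namespace] = q
			d.wg.Add(1)
			go d.work(t.namespace, q) // one worker per namespace
		}
		q <- t
	}
	for _, q := range d.workers {
		close(q)
	}
}

func (d *dispatcher) work(ns string, q chan task) {
	defer d.wg.Done()
	for t := range q {
		// the real NamespaceWorker syncs Ingresses, Services,
		// Endpoints and Secrets here
		fmt.Println("sync", ns, t.key)
	}
}

func main() {
	d := &dispatcher{in: make(chan task), workers: map[string]chan task{}}
	go func() {
		d.in <- task{"default", "cafe-ingress"}
		d.in <- task{"other", "app-ingress"}
		close(d.in)
	}()
	d.run()
	d.wg.Wait()
}

The real dispatcher additionally rate-limits retries and records
Events for the outcome of each sync, as shown in the controller code
below.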
parent 16af59ba
@@ -20,10 +20,9 @@ is undergoing initial testing. It is currently subject to a number of
 limitations, expected to be removed over time, including:
 * No support for TLS connections
-* The controller only attends to definitions (Ingresses, Services and
-  Endpoints) in the namespace of the Pod in which it is deployed.
-* Only one Ingress definition is valid at a time. If more than one definition
-  is added to the namespace, then the most recent definition becomes valid.
+* Only one Ingress definition in a namespace is valid at a time. If
+  more than one definition is added to the namespace, then the most
+  recent definition becomes valid.
 * A variety of elements in the implementation are hard-wired, as
   detailed in the documentation. These are expected to become configurable
   in further development.
@@ -48,6 +47,13 @@ The Ingress can then be deployed by any of the means that are
 customary for Kubernetes. The [``deploy/``](/deploy) folder contains
 manifests (YAMLs) for some of the ways to deploy an Ingress.
+
+The deployment described in [``deploy/``](/deploy) targets a default
+setup in which the controller runs in the ``kube-system`` namespace
+and watches in all namespaces of the cluster for Ingresses, Services
+and so on that are intended for the Varnish implementation. See the
+[instructions for single-namespace deployments](/examples/namespace)
+if you need to limit the deployment to one namespace.
 
 The [``examples/``](/examples) folder contains YAMLs for Services and
 Ingresses to test and demonstrate the Varnish implementation and its
 features. You might want to begin with the
...
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package controller
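// syncEndp syncs an Endpoints resource: if the corresponding Service
// is a Varnish Ingress Service, that Service is re-queued for sync;
// otherwise every Varnish Ingress that uses the Service as a backend
// is updated.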
func (worker *NamespaceWorker) syncEndp(key string) error {
worker.log.Infof("Syncing Endpoints: %s/%s", worker.namespace, key)
svc, err := worker.svc.Get(key)
if err != nil {
worker.log.Warnf("Cannot get service for endpoints %s/%s, "+
"ignoring", worker.namespace, key)
return nil
}
if worker.isVarnishIngSvc(svc) {
worker.log.Infof("Endpoints changed for Varnish Ingress "+
"service %s/%s, enqueuing service sync", svc.Namespace,
svc.Name)
worker.queue.Add(svc)
return nil
}
worker.log.Debugf("Checking ingresses for endpoints: %s/%s",
worker.namespace, key)
ings, err := worker.getIngsForSvc(svc)
if err != nil {
return err
}
if len(ings) == 0 {
worker.log.Debugf("No ingresses for endpoints: %s/%s",
worker.namespace, key)
return nil
}
worker.log.Debugf("Update ingresses for endpoints %s", key)
for _, ing := range ings {
if !isVarnishIngress(ing) {
worker.log.Debugf("Ingress %s/%s: not Varnish",
ing.Namespace, ing.Name)
continue
}
err = worker.addOrUpdateIng(ing)
if err != nil {
return err
}
}
return nil
}
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package controller
import (
"fmt"
api_v1 "k8s.io/api/core/v1"
)
// XXX make this configurable
const admSecretKey = "admin"
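// getVarnishSvcsForSecret returns the Varnish Ingress Services whose
// Pod spec mounts the named Secret in a SecretVolumeSource.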
func (worker *NamespaceWorker) getVarnishSvcsForSecret(
secretName string) ([]*api_v1.Service, error) {
var secrSvcs []*api_v1.Service
svcs, err := worker.svc.List(varnishIngressSelector)
if err != nil {
return secrSvcs, err
}
for _, svc := range svcs {
pods, err := worker.getPods(svc)
if err != nil {
return secrSvcs,
fmt.Errorf("Error getting pod information for "+
"service %s/%s: %v", svc.Namespace,
svc.Name, err)
}
if len(pods.Items) == 0 {
continue
}
// The secret is meant for the service if a
// SecretVolumeSource is specified in the Pod spec
// that names the secret.
pod := pods.Items[0]
for _, vol := range pod.Spec.Volumes {
if vol.Secret == nil {
continue
}
if vol.Secret.SecretName == secretName {
secrSvcs = append(secrSvcs, svc)
}
}
}
return secrSvcs, nil
}
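// syncSecret stores the contents of an admin secret with the Varnish
// controller and updates every Varnish Service whose Pods mount the
// Secret. Secrets without the app:varnish-ingress label are ignored.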
func (worker *NamespaceWorker) syncSecret(key string) error {
worker.log.Infof("Syncing Secret: %s/%s", worker.namespace, key)
secret, err := worker.secr.Get(key)
if err != nil {
return err
}
app, ok := secret.Labels[labelKey]
if !ok || app != labelVal {
worker.log.Infof("Not a Varnish admin secret, ignoring: %s/%s",
secret.Namespace, secret.Name)
return nil
}
secretData, exists := secret.Data[admSecretKey]
if !exists {
return fmt.Errorf("Secret %s/%s does not have key %s",
secret.Namespace, secret.Name, admSecretKey)
}
secretKey := secret.Namespace + "/" + secret.Name
worker.log.Debugf("Setting secret %s", secretKey)
worker.vController.SetAdmSecret(secretKey, secretData)
svcs, err := worker.getVarnishSvcsForSecret(secret.Name)
if err != nil {
return err
}
worker.log.Debugf("Found Varnish services for secret %s/%s: %v",
secret.Namespace, secret.Name, svcs)
for _, svc := range svcs {
svcKey := svc.Namespace + "/" + svc.Name
if err = worker.vController.
UpdateSvcForSecret(svcKey, secretKey); err != nil {
return err
}
}
return nil
}
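// deleteSecret removes an admin secret from the Varnish controller
// and updates the Varnish Services that had mounted it.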
func (worker *NamespaceWorker) deleteSecret(key string) error {
worker.log.Infof("Deleting Secret: %s", key)
svcs, err := worker.getVarnishSvcsForSecret(key)
if err != nil {
return err
}
worker.log.Debugf("Found Varnish services for secret %s/%s: %v",
worker.namespace, key, svcs)
secretKey := worker.namespace + "/" + key
worker.vController.DeleteAdmSecret(secretKey)
for _, svc := range svcs {
svcKey := svc.Namespace + "/" + svc.Name
if err = worker.vController.
UpdateSvcForSecret(svcKey, secretKey); err != nil {
return err
}
}
return nil
}
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package controller
import (
"fmt"
"code.uplex.de/uplex-varnish/k8s-ingress/cmd/varnish/vcl"
api_v1 "k8s.io/api/core/v1"
extensions "k8s.io/api/extensions/v1beta1"
"k8s.io/apimachinery/pkg/labels"
)
// XXX make this configurable
const admPortName = "varnishadm"
// isVarnishIngSvc determines whether a Service represents a Varnish
// Service that can implement Ingress, for which this controller is
// responsible. Currently the app label must have a hardwired value.
func (worker *NamespaceWorker) isVarnishIngSvc(svc *api_v1.Service) bool {
app, exists := svc.Labels[labelKey]
return exists && app == labelVal
}
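// getIngsForSvc returns the Ingresses in the worker's namespace that
// refer to the Service as the default backend or as a backend in one
// of their rules.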
func (worker *NamespaceWorker) getIngsForSvc(
svc *api_v1.Service) (ings []*extensions.Ingress, err error) {
allIngs, err := worker.ing.List(labels.Everything())
if err != nil {
return
}
for _, ing := range allIngs {
if ing.Namespace != svc.Namespace {
// Shouldn't be possible
continue
}
cpy := ing.DeepCopy()
if cpy.Spec.Backend != nil {
if cpy.Spec.Backend.ServiceName == svc.Name {
ings = append(ings, cpy)
}
}
for _, rules := range cpy.Spec.Rules {
if rules.IngressRuleValue.HTTP == nil {
continue
}
for _, p := range rules.IngressRuleValue.HTTP.Paths {
if p.Backend.ServiceName == svc.Name {
ings = append(ings, cpy)
}
}
}
}
if len(ings) == 0 {
worker.log.Infof("No Varnish Ingresses defined for service %s/%s",
svc.Namespace, svc.Name)
}
return ings, nil
}
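// enqueueIngressForService re-queues the Varnish Ingresses that refer
// to the Service as a backend.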
func (worker *NamespaceWorker) enqueueIngressForService(
svc *api_v1.Service) error {
ings, err := worker.getIngsForSvc(svc)
if err != nil {
return err
}
for _, ing := range ings {
if !isVarnishIngress(ing) {
continue
}
worker.queue.Add(ing)
}
return nil
}
// isVarnishInVCLSpec returns true if changes in Varnish services may
// lead to changes in the VCL config generated for the Ingress.
func isVarnishInVCLSpec(ing *extensions.Ingress) bool {
_, selfShard := ing.Annotations[selfShardKey]
return selfShard
}
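// syncSvc syncs a Service. For a Service that is not a Varnish
// Ingress Service, the Ingresses that use it as a backend are
// re-queued. For a Varnish Ingress Service, Ingresses whose VCL
// config may change due to the Varnish Service (currently when
// self-sharding is enabled) are re-queued, and the Service's
// endpoints, admin port and admin secret are passed on to the
// Varnish controller.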
func (worker *NamespaceWorker) syncSvc(key string) error {
var addrs []vcl.Address
worker.log.Infof("Syncing Service: %s/%s", worker.namespace, key)
svc, err := worker.svc.Get(key)
if err != nil {
return err
}
if !worker.isVarnishIngSvc(svc) {
return worker.enqueueIngressForService(svc)
}
worker.log.Infof("Syncing Varnish Ingress Service %s/%s:",
svc.Namespace, svc.Name)
// Check if there are Ingresses for which the VCL spec may
// change due to changes in Varnish services.
updateVCL := false
ings, err := worker.ing.List(labels.Everything())
if err != nil {
return err
}
for _, ing := range ings {
if ing.Namespace != svc.Namespace {
continue
}
ingSvc, err := worker.getVarnishSvcForIng(ing)
if err != nil {
return err
}
if ingSvc.Namespace != svc.Namespace ||
ingSvc.Name != svc.Name {
continue
}
if !isVarnishInVCLSpec(ing) {
continue
}
updateVCL = true
worker.log.Debugf("Requeueing Ingress %s/%s after changed "+
"Varnish service %s/%s: %+v", ing.Namespace,
ing.Name, svc.Namespace, svc.Name, ing)
worker.queue.Add(ing)
}
if !updateVCL {
worker.log.Debugf("No change in VCL due to changed Varnish "+
"service %s/%s", svc.Namespace, svc.Name)
}
endps, err := worker.getServiceEndpoints(svc)
if err != nil {
return err
}
worker.log.Debugf("Varnish service %s/%s endpoints: %+v", svc.Namespace,
svc.Name, endps)
// Get the secret name and admin port for the service. We have
// to retrieve a Pod spec for the service, then look for the
// SecretVolumeSource, and the port matching admPortName.
secrName := ""
worker.log.Debugf("Searching Pods for the secret for %s/%s",
svc.Namespace, svc.Name)
pods, err := worker.getPods(svc)
if err != nil {
return fmt.Errorf("Cannot get a Pod for service %s/%s: %v",
svc.Namespace, svc.Name, err)
}
if len(pods.Items) == 0 {
return fmt.Errorf("No Pods for Service: %s/%s", svc.Namespace,
svc.Name)
}
pod := &pods.Items[0]
for _, vol := range pod.Spec.Volumes {
if secretVol := vol.Secret; secretVol != nil {
secrName = secretVol.SecretName
break
}
}
if secrName != "" {
secrName = worker.namespace + "/" + secrName
worker.log.Infof("Found secret name %s for Service %s/%s",
secrName, svc.Namespace, svc.Name)
} else {
worker.log.Warnf("No secret found for Service %s/%s",
svc.Namespace, svc.Name)
}
// XXX hard-wired Port name
for _, subset := range endps.Subsets {
admPort := int32(0)
for _, port := range subset.Ports {
if port.Name == admPortName {
admPort = port.Port
break
}
}
if admPort == 0 {
return fmt.Errorf("No Varnish admin port %s found for "+
"Service %s/%s endpoint", admPortName,
svc.Namespace, svc.Name)
}
for _, address := range subset.Addresses {
addr := vcl.Address{
IP: address.IP,
Port: admPort,
}
addrs = append(addrs, addr)
}
}
worker.log.Debugf("Varnish service %s/%s addresses: %+v", svc.Namespace,
svc.Name, addrs)
return worker.vController.AddOrUpdateVarnishSvc(
svc.Namespace+"/"+svc.Name, addrs, secrName, !updateVCL)
}
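// deleteSvc handles the deletion of a Service: for a Varnish Ingress
// Service, the Varnish controller is told to delete its
// configuration; for any other Service, the Ingresses that use it as
// a backend are re-queued.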
func (worker *NamespaceWorker) deleteSvc(key string) error {
nsKey := worker.namespace + "/" + key
worker.log.Info("Deleting Service:", nsKey)
svcObj, err := worker.svc.Get(key)
if err != nil {
return err
}
if !worker.isVarnishIngSvc(svcObj) {
return worker.enqueueIngressForService(svcObj)
}
return worker.vController.DeleteVarnishSvc(nsKey)
}
@@ -30,219 +30,83 @@ package controller
 import (
 	"fmt"
-	"time"
-
-	"github.com/sirupsen/logrus"
-
+
+	api_v1 "k8s.io/api/core/v1"
+	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/labels"
 	"k8s.io/apimachinery/pkg/util/intstr"
-	"k8s.io/apimachinery/pkg/util/wait"
-	"k8s.io/client-go/tools/cache"
-	"k8s.io/client-go/util/workqueue"
-
-	api_v1 "k8s.io/api/core/v1"
-	extensions "k8s.io/api/extensions/v1beta1"
+
+	"code.uplex.de/uplex-varnish/k8s-ingress/cmd/varnish/vcl"
 )
 
-// taskQueue manages a work queue through an independent worker that
-// invokes the given sync function for every work item inserted.
-type taskQueue struct {
-	log *logrus.Logger
-	// queue is the work queue the worker polls
-	queue *workqueue.Type
-	// sync is called for each item in the queue
-	sync func(Task)
-	// workerDone is closed when the worker exits
-	workerDone chan struct{}
-}
-
-func (t *taskQueue) run(period time.Duration, stopCh <-chan struct{}) {
-	wait.Until(t.worker, period, stopCh)
-}
-
-// enqueue enqueues ns/name of the given api object in the task queue.
-func (t *taskQueue) enqueue(obj interface{}) {
-	key, err := keyFunc(obj)
-	if err != nil {
-		t.log.Errorf("Couldn't get key for object %v: %v", obj, err)
-		return
-	}
-
-	task, err := NewTask(key, obj)
-	if err != nil {
-		t.log.Errorf("Couldn't create a task for object %v: %v", obj,
-			err)
-		return
-	}
-
-	t.log.Info("Adding an element with a key:", task.Key)
-
-	t.queue.Add(task)
-}
-
-func (t *taskQueue) requeue(task Task, err error) {
-	t.log.Warnf("Requeuing %v, err %v", task.Key, err)
-	t.queue.Add(task)
-}
-
-func (t *taskQueue) requeueAfter(task Task, err error, after time.Duration) {
-	t.log.Warnf("Requeuing %v after %s, err %v", task.Key, after.String(),
-		err)
-	go func(task Task, after time.Duration) {
-		time.Sleep(after)
-		t.queue.Add(task)
-	}(task, after)
-}
-
-// worker processes work in the queue through sync.
-func (t *taskQueue) worker() {
-	for {
-		task, quit := t.queue.Get()
-		if quit {
-			close(t.workerDone)
-			return
-		}
-		t.log.Infof("Syncing %v", task.(Task).Key)
-		t.sync(task.(Task))
-		t.queue.Done(task)
-	}
-}
-
-// shutdown shuts down the work queue and waits for the worker to ACK
-func (t *taskQueue) shutdown() {
-	t.queue.ShutDown()
-	<-t.workerDone
-}
-
-// NewTaskQueue creates a new task queue with the given sync function.
-// The sync function is called for every element inserted into the queue.
-func NewTaskQueue(syncFn func(Task), log *logrus.Logger) *taskQueue {
-	return &taskQueue{
-		log:        log,
-		queue:      workqueue.New(),
-		sync:       syncFn,
-		workerDone: make(chan struct{}),
-	}
-}
-
-// Kind represents the kind of the Kubernetes resources of a task
-type Kind int
-
-const (
-	// Ingress resource
-	Ingress = iota
-	// Endpoints resource
-	Endpoints
-	// Service resource
-	Service
-	// Secret resource
-	Secret
-)
-
-// Task is an element of a taskQueue
-type Task struct {
-	Kind Kind
-	Key  string
-}
-
-// NewTask creates a new task
-func NewTask(key string, obj interface{}) (Task, error) {
-	var k Kind
-	switch t := obj.(type) {
-	case *extensions.Ingress:
-		// ing := obj.(*extensions.Ingress)
-		k = Ingress
-	case *api_v1.Endpoints:
-		k = Endpoints
-	case *api_v1.Service:
-		k = Service
-	case *api_v1.Secret:
-		k = Secret
-	default:
-		return Task{}, fmt.Errorf("Unknown type: %v", t)
-	}
-
-	return Task{k, key}, nil
-}
-
-// StoreToIngressLister makes a Store that lists Ingress.
-type StoreToIngressLister struct {
-	cache.Store
-}
-
-// GetByKeySafe calls Store.GetByKeySafe and returns a copy of the
-// ingress so it is safe to modify.
-func (s *StoreToIngressLister) GetByKeySafe(key string) (ing *extensions.Ingress, exists bool, err error) {
-	item, exists, err := s.Store.GetByKey(key)
-	if !exists || err != nil {
-		return nil, exists, err
-	}
-	ing = item.(*extensions.Ingress).DeepCopy()
-	return
-}
-
-// List lists all Ingress' in the store.
-func (s *StoreToIngressLister) List() (ing extensions.IngressList, err error) {
-	for _, m := range s.Store.List() {
-		ing.Items = append(ing.Items, *(m.(*extensions.Ingress)).DeepCopy())
-	}
-	return ing, nil
-}
-
-// GetServiceIngress gets all the Ingress' that have rules pointing to a service.
-// Note that this ignores services without the right nodePorts.
-func (s *StoreToIngressLister) GetServiceIngress(svc *api_v1.Service) (ings []extensions.Ingress, err error) {
-	for _, m := range s.Store.List() {
-		ing := *m.(*extensions.Ingress).DeepCopy()
-		if ing.Namespace != svc.Namespace {
-			continue
-		}
-		if ing.Spec.Backend != nil {
-			if ing.Spec.Backend.ServiceName == svc.Name {
-				ings = append(ings, ing)
-			}
-		}
-		for _, rules := range ing.Spec.Rules {
-			if rules.IngressRuleValue.HTTP == nil {
-				continue
-			}
-			for _, p := range rules.IngressRuleValue.HTTP.Paths {
-				if p.Backend.ServiceName == svc.Name {
-					ings = append(ings, ing)
-				}
-			}
-		}
-	}
-	if len(ings) == 0 {
-		err = fmt.Errorf("No ingress for service %v", svc.Name)
-	}
-	return
-}
-
-// StoreToEndpointLister makes a Store that lists Endpoints
-type StoreToEndpointLister struct {
-	cache.Store
-}
-
-// GetServiceEndpoints returns the endpoints of a service, matched on service name.
-func (s *StoreToEndpointLister) GetServiceEndpoints(svc *api_v1.Service) (ep api_v1.Endpoints, err error) {
-	for _, m := range s.Store.List() {
-		ep = *m.(*api_v1.Endpoints)
-		if svc.Name == ep.Name && svc.Namespace == ep.Namespace {
-			return ep, nil
-		}
-	}
-	err = fmt.Errorf("could not find endpoints for service: %v", svc.Name)
-	return
-}
-
-// FindPort locates the container port for the given pod and portName. If the
-// targetPort is a number, use that. If the targetPort is a string, look that
-// string up in all named ports in all containers in the target pod. If no
-// match is found, fail.
-func FindPort(pod *api_v1.Pod, svcPort *api_v1.ServicePort) (int32, error) {
+const (
+	labelKey = "app"
+	labelVal = "varnish-ingress"
+)
+
+var (
+	varnishIngressSet = labels.Set(map[string]string{
+		labelKey: labelVal,
+	})
+	// Selector for use in List() operations to find resources
+	// with the app:varnish-ingress label.
+	varnishIngressSelector = labels.SelectorFromSet(varnishIngressSet)
+)
+
+// getServiceEndpoints returns the endpoints of a service, matched on
+// service name.
+func (worker *NamespaceWorker) getServiceEndpoints(
+	svc *api_v1.Service) (ep *api_v1.Endpoints, err error) {
+
+	eps, err := worker.endp.List(labels.Everything())
+	if err != nil {
+		return
+	}
+	for _, ep := range eps {
+		if svc.Name == ep.Name && svc.Namespace == ep.Namespace {
+			return ep, nil
+		}
+	}
+	err = fmt.Errorf("could not find endpoints for service: %s/%s",
+		svc.Namespace, svc.Name)
+	return
+}
+
+// endpsTargetPort2Addrs returns a list of addresses for VCL backend
+// config, given the Endpoints of a Service and a target port number
+// for their Pods.
+func endpsTargetPort2Addrs(
+	svc *api_v1.Service,
+	endps *api_v1.Endpoints,
+	targetPort int32) ([]vcl.Address, error) {
+
+	var addrs []vcl.Address
+	for _, subset := range endps.Subsets {
+		for _, port := range subset.Ports {
+			if port.Port == targetPort {
+				for _, address := range subset.Addresses {
+					addr := vcl.Address{
+						IP:   address.IP,
+						Port: port.Port,
+					}
+					addrs = append(addrs, addr)
+				}
+				return addrs, nil
+			}
+		}
+	}
+	return addrs, fmt.Errorf("No endpoints for service port %d in service "+
+		"%s/%s", targetPort, svc.Namespace, svc.Name)
+}
+
+// findPort returns the container port number for a Pod and
+// ServicePort. If the targetPort is a string, search for a matching
+// named ports in the specs for all containers in the Pod.
+func findPort(pod *api_v1.Pod, svcPort *api_v1.ServicePort) (int32, error) {
 	portName := svcPort.TargetPort
 	switch portName.Type {
-	case intstr.Int:
-		return int32(portName.IntValue()), nil
 	case intstr.String:
 		name := portName.StrVal
 		for _, container := range pod.Spec.Containers {
@@ -253,14 +117,49 @@ func FindPort(pod *api_v1.Pod, svcPort *api_v1.ServicePort) (int32, error) {
 				}
 			}
 		}
+	case intstr.Int:
+		return int32(portName.IntValue()), nil
 	}
-	return 0, fmt.Errorf("no suitable port for manifest: %s", pod.UID)
+	return 0, fmt.Errorf("No port number found for ServicePort %s and Pod "+
+		"%s/%s", svcPort.Name, pod.Namespace, pod.Name)
 }
 
-// StoreToSecretLister makes a Store that lists Secrets
-type StoreToSecretLister struct {
-	cache.Store
-}
+// getPodsForSvc queries the API for the Pods in a Service.
+func (worker *NamespaceWorker) getPods(
+	svc *api_v1.Service) (*api_v1.PodList, error) {
+
+	return worker.client.CoreV1().Pods(svc.Namespace).
+		List(meta_v1.ListOptions{
+			LabelSelector: labels.Set(svc.Spec.Selector).String(),
+		})
+}
+
+// getTargetPort returns a target port number for the Pods of a Service,
+// given a ServicePort.
+func (worker *NamespaceWorker) getTargetPort(svcPort *api_v1.ServicePort,
+	svc *api_v1.Service) (int32, error) {
+
+	if (svcPort.TargetPort == intstr.IntOrString{}) {
+		return svcPort.Port, nil
+	}
+	if svcPort.TargetPort.Type == intstr.Int {
+		return int32(svcPort.TargetPort.IntValue()), nil
+	}
+	pods, err := worker.getPods(svc)
+	if err != nil {
+		return 0, fmt.Errorf("Error getting pod information: %v", err)
+	}
+	if len(pods.Items) == 0 {
+		return 0, fmt.Errorf("No pods of service: %v", svc.Name)
+	}
+	pod := &pods.Items[0]
+	portNum, err := findPort(pod, svcPort)
+	if err != nil {
+		return 0, fmt.Errorf("Error finding named port %s in pod %s/%s"+
+			": %v", svcPort.Name, pod.Namespace, pod.Name, err)
+	}
+	return portNum, nil
+}
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package controller
import (
api_v1 "k8s.io/api/core/v1"
extensions "k8s.io/api/extensions/v1beta1"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/kubernetes"
core_v1_listers "k8s.io/client-go/listers/core/v1"
ext_listers "k8s.io/client-go/listers/extensions/v1beta1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/workqueue"
"github.com/sirupsen/logrus"
"code.uplex.de/uplex-varnish/k8s-ingress/cmd/varnish"
)
const (
// syncSuccess and syncFailure are reasons for Events
syncSuccess = "SyncSuccess"
syncFailure = "SyncFailure"
)
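// NamespaceWorker serially processes work items for the resources of
// a single namespace, using Listers restricted to that namespace, and
// records Events for the outcome of each sync.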
type NamespaceWorker struct {
namespace string
log *logrus.Logger
vController *varnish.VarnishController
queue workqueue.RateLimitingInterface
stopChan chan struct{}
ing ext_listers.IngressNamespaceLister
svc core_v1_listers.ServiceNamespaceLister
endp core_v1_listers.EndpointsNamespaceLister
secr core_v1_listers.SecretNamespaceLister
client kubernetes.Interface
recorder record.EventRecorder
}
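// infoEvent posts an Event of type Normal with the given reason and
// message for the Ingress, Service, Endpoints or Secret in obj.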
func (worker *NamespaceWorker) infoEvent(obj interface{}, reason, msgFmt string,
args ...interface{}) {
switch obj.(type) {
case *extensions.Ingress:
ing, _ := obj.(*extensions.Ingress)
worker.recorder.Eventf(ing, api_v1.EventTypeNormal, reason,
msgFmt, args...)
case *api_v1.Service:
svc, _ := obj.(*api_v1.Service)
worker.recorder.Eventf(svc, api_v1.EventTypeNormal, reason,
msgFmt, args...)
case *api_v1.Endpoints:
endp, _ := obj.(*api_v1.Endpoints)
worker.recorder.Eventf(endp, api_v1.EventTypeNormal, reason,
msgFmt, args...)
case *api_v1.Secret:
secr, _ := obj.(*api_v1.Secret)
worker.recorder.Eventf(secr, api_v1.EventTypeNormal, reason,
msgFmt, args...)
default:
worker.log.Warnf("Unhandled type %T, no event generated", obj)
}
}
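// warnEvent posts an Event of type Warning with the given reason and
// message for the Ingress, Service, Endpoints or Secret in obj.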
func (worker *NamespaceWorker) warnEvent(obj interface{}, reason, msgFmt string,
args ...interface{}) {
switch obj.(type) {
case *extensions.Ingress:
ing, _ := obj.(*extensions.Ingress)
worker.recorder.Eventf(ing, api_v1.EventTypeWarning, reason,
msgFmt, args...)
case *api_v1.Service:
svc, _ := obj.(*api_v1.Service)
worker.recorder.Eventf(svc, api_v1.EventTypeWarning, reason,
msgFmt, args...)
case *api_v1.Endpoints:
endp, _ := obj.(*api_v1.Endpoints)
worker.recorder.Eventf(endp, api_v1.EventTypeWarning, reason,
msgFmt, args...)
case *api_v1.Secret:
secr, _ := obj.(*api_v1.Secret)
worker.recorder.Eventf(secr, api_v1.EventTypeWarning, reason,
msgFmt, args...)
default:
worker.log.Warnf("Unhandled type %T, no event generated", obj)
}
}
func (worker *NamespaceWorker) syncSuccess(obj interface{}, msgFmt string,
args ...interface{}) {
worker.log.Infof(msgFmt, args...)
worker.infoEvent(obj, syncSuccess, msgFmt, args...)
}
func (worker *NamespaceWorker) syncFailure(obj interface{}, msgFmt string,
args ...interface{}) {
worker.log.Errorf(msgFmt, args...)
worker.warnEvent(obj, syncFailure, msgFmt, args...)
}
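// dispatch calls the sync or delete method that corresponds to the
// type of obj; deletions arrive as cache.DeletedFinalStateUnknown.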
func (worker *NamespaceWorker) dispatch(obj interface{}) error {
_, key, err := getNameSpace(obj)
if err != nil {
worker.syncFailure(obj, "Cannot get key for object %v: %v", obj,
err)
return nil
}
switch obj.(type) {
case *extensions.Ingress:
return worker.syncIng(key)
case *api_v1.Service:
return worker.syncSvc(key)
case *api_v1.Endpoints:
return worker.syncEndp(key)
case *api_v1.Secret:
return worker.syncSecret(key)
default:
deleted, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
worker.syncFailure(obj, "Unhandled object type: %T",
obj)
return nil
}
switch deleted.Obj.(type) {
case *extensions.Ingress:
return worker.deleteIng(key)
case *api_v1.Service:
return worker.deleteSvc(key)
case *api_v1.Endpoints:
// Delete and sync do the same thing
return worker.syncEndp(key)
case *api_v1.Secret:
return worker.deleteSecret(key)
default:
worker.syncFailure(deleted, "Unhandled object type: %T",
deleted)
return nil
}
return nil
}
}
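// next processes the next item in the worker's queue: the item is
// dispatched to a sync method, an Event is generated for success or
// failure, and failed items are re-queued with rate limiting.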
func (worker *NamespaceWorker) next() {
select {
case <-worker.stopChan:
worker.queue.ShutDown()
return
default:
break
}
obj, quit := worker.queue.Get()
if quit {
return
}
defer worker.queue.Done(obj)
if err := worker.dispatch(obj); err == nil {
if ns, name, err := getNameSpace(obj); err == nil {
worker.syncSuccess(obj, "Successfully synced: %s/%s",
ns, name)
} else {
worker.syncSuccess(obj, "Successfully synced")
}
worker.queue.Forget(obj)
} else {
worker.syncFailure(obj, "Error, requeueing: %v", err)
worker.queue.AddRateLimited(obj)
}
}
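// work is the main loop of a namespace worker.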
func (worker *NamespaceWorker) work() {
worker.log.Info("Starting worker for namespace:", worker.namespace)
for !worker.queue.ShuttingDown() {
worker.next()
}
worker.log.Info("Shutting down worker for namespace:", worker.namespace)
}
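// NamespaceQueues reads from the main queue filled by the controller's
// event handlers and dispatches each work item to the queue of the
// NamespaceWorker for the resource's namespace.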
type NamespaceQueues struct {
Queue workqueue.RateLimitingInterface
log *logrus.Logger
vController *varnish.VarnishController
workers map[string]*NamespaceWorker
listers *Listers
client kubernetes.Interface
recorder record.EventRecorder
}
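// NewNamespaceQueues creates the dispatcher object and its
// rate-limited main queue.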
func NewNamespaceQueues(
log *logrus.Logger,
vController *varnish.VarnishController,
listers *Listers,
client kubernetes.Interface,
recorder record.EventRecorder) *NamespaceQueues {
q := workqueue.NewRateLimitingQueue(
workqueue.DefaultControllerRateLimiter())
return &NamespaceQueues{
Queue: q,
log: log,
vController: vController,
workers: make(map[string]*NamespaceWorker),
listers: listers,
client: client,
recorder: recorder,
}
}
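// getNameSpace returns the namespace and name of a resource object,
// handling the deleted-state case.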
func getNameSpace(obj interface{}) (namespace, name string, err error) {
k, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err != nil {
return
}
namespace, name, err = cache.SplitMetaNamespaceKey(k)
if err != nil {
return
}
return
}
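// next passes the next item from the main queue on to the queue of
// the NamespaceWorker for the resource's namespace, creating the
// worker and starting its goroutine if it does not exist yet.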
func (qs *NamespaceQueues) next() {
obj, quit := qs.Queue.Get()
if quit {
return
}
defer qs.Queue.Done(obj)
ns, _, err := getNameSpace(obj)
if err != nil {
utilruntime.HandleError(err)
return
}
worker, exists := qs.workers[ns]
if !exists {
q := workqueue.NewNamedRateLimitingQueue(
workqueue.DefaultControllerRateLimiter(), ns)
worker = &NamespaceWorker{
namespace: ns,
log: qs.log,
vController: qs.vController,
queue: q,
stopChan: make(chan struct{}),
ing: qs.listers.ing.Ingresses(ns),
svc: qs.listers.svc.Services(ns),
endp: qs.listers.endp.Endpoints(ns),
secr: qs.listers.secr.Secrets(ns),
client: qs.client,
recorder: qs.recorder,
}
qs.workers[ns] = worker
go worker.work()
}
worker.queue.Add(obj)
qs.Queue.Forget(obj)
}
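// Run is the main loop of the dispatcher worker: it hands off work
// items to namespace workers until the main queue is shut down.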
func (qs *NamespaceQueues) Run() {
qs.log.Info("Starting dispatcher worker")
for !qs.Queue.ShuttingDown() {
qs.next()
}
qs.log.Info("Shutting down dispatcher worker")
}
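// Stop shuts down the main queue and every per-namespace queue, and
// signals the namespace workers to stop.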
func (qs *NamespaceQueues) Stop() {
qs.Queue.ShutDown()
for _, worker := range qs.workers {
worker.queue.ShutDown()
close(worker.stopChan)
}
// XXX wait for WaitGroup
}
-code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181129154850-be0a7893ac92 h1:KBZnpstNIE+yoIx/qZy0Udl0Qwg9Qlj1eY70sIvhjLM=
-code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181129154850-be0a7893ac92/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
-code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205203203-20054d30408c h1:dDnkldbKT9fxuJrfEbG40ydbS/Q30oH7cvDpzwJkXoQ=
-code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205203203-20054d30408c/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
-code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205221923-190a386386bc h1:fv3y7B7zaVH561OpjJgqDFAy1rINt7Xa8WhIlN8ZOU4=
-code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205221923-190a386386bc/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
-code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205222454-e67a0f88a279 h1:8REUMrx0zvzsAZ/zKSsRrhYbftLSNQ0v2AYZljJqvcY=
-code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205222454-e67a0f88a279/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
 code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181209154204-43826850baae h1:rV0/WL/NvJ8/TAQwmcSxFUyI5TVA5GM0KBXXfnWrsFY=
 code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181209154204-43826850baae/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
 github.com/PuerkitoBio/purell v1.1.0 h1:rmGxhojJlM0tuKtfdvliR84CFHljx9ag64t2xmVkjK4=
@@ -16,6 +8,7 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/emicklei/go-restful v2.8.0+incompatible h1:wN8GCRDPGHguIynsnBartv5GUgGUg1LAU7+xnSn1j7Q=
 github.com/emicklei/go-restful v2.8.0+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
+github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
 github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
 github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
 github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
@@ -45,11 +38,13 @@ github.com/gregjones/httpcache v0.0.0-20181110185634-c63ab54fda8f h1:ShTPMJQes6t
 github.com/gregjones/httpcache v0.0.0-20181110185634-c63ab54fda8f/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
 github.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=
 github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
 github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
 github.com/json-iterator/go v1.1.5 h1:gL2yXlmiIo4+t+y32d4WGwOjKGYcGOuyrg46vadswDE=
 github.com/json-iterator/go v1.1.5/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
 github.com/juju/ratelimit v1.0.1 h1:+7AIFJVQ0EQgq/K9+0Krm7m530Du7tIz0METWzN0RgY=
 github.com/juju/ratelimit v1.0.1/go.mod h1:qapgC/Gy+xNh9UxzV13HGGl/6UXNN+ct+vwSgWNm/qk=
+github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
 github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
 github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329 h1:2gxZ0XQIU/5z3Z3bUBu+FXuk2pFbkN6tcwi/pjyaDic=
 github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -58,16 +53,20 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJ
 github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
 github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
 github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.7.0 h1:WSHQ+IS43OoUrWtD1/bbclrwK8TTH5hzp+umCiuxHgs=
 github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/gomega v1.4.3 h1:RE1xgDvH7imwFD45h+u2SgIfERHlS2yNG4DObb5BSKU=
 github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
 github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
 github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/sirupsen/logrus v1.2.0 h1:juTguoYk5qI21pwyTXY3B3Y5cOTH3ZUyZCg1v/mihuo=
 github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
 github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
 github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
 github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
 github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
 golang.org/x/crypto v0.0.0-20180904163835-0709b304e793 h1:u+LnwYTOOW7Ukr/fppxEb1Nwz0AtPflrblfvUudpo+I=
 golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
@@ -76,6 +75,7 @@ golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73r
 golang.org/x/net v0.0.0-20181201002055-351d144fa1fc h1:a3CU5tJYVj92DY2LaA1kUkrsqD5/3mLDhx2NcNqyW+0=
 golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
 golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181108010431-42b317875d0f h1:Bl/8QSvNqXvPGPGXa2z5xUTmV7VDcZyvRZ+QQXkXTZQ=
 golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -83,10 +83,13 @@ golang.org/x/sys v0.0.0-20181128092732-4ed8d59d0b35 h1:YAFjXN64LMvktoUZH9zgY4lGc
 golang.org/x/sys v0.0.0-20181128092732-4ed8d59d0b35/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
 gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
 gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
 gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
+gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
 gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
 gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
...
@@ -43,6 +43,9 @@ import (
 	"github.com/sirupsen/logrus"
 
+	api_v1 "k8s.io/api/core/v1"
+	meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/client-go/informers"
 	"k8s.io/client-go/kubernetes"
 	"k8s.io/client-go/rest"
 )
@@ -52,6 +55,13 @@
 	loglvlF = flag.String("log-level", "INFO",
 		"log level: one of PANIC, FATAL, ERROR, WARN, INFO, DEBUG, \n"+
 			"or TRACE")
+	namespaceF = flag.String("namespace", api_v1.NamespaceAll,
+		"namespace in which to listen for resources (default all)")
+	tmplDirF = flag.String("templatedir", "",
+		"directory of templates for VCL generation. Defaults to \n"+
+			"the TEMPLATE_DIR env variable, if set, or the \n"+
+			"current working directory when the ingress \n"+
+			"controller is invoked")
 	logFormat = logrus.TextFormatter{
 		DisableColors: true,
 		FullTimestamp: true,
@@ -61,13 +71,23 @@
 		Formatter: &logFormat,
 		Level:     logrus.InfoLevel,
 	}
+	informerStop = make(chan struct{})
 )
 
+const resyncPeriod = 0
+
+// resyncPeriod = 30 * time.Second
+
+// Satisfies type TweakListOptionsFunc in
+// k8s.io/client-go/informers/internalinterfaces, for use in
+// NewFilteredSharedInformerFactory below.
+func noop(opts *meta_v1.ListOptions) {}
+
 func main() {
 	flag.Parse()
 
 	if *versionF {
-		fmt.Printf("%s version %s", os.Args[0], version)
+		fmt.Printf("%s version %s\n", os.Args[0], version)
 		os.Exit(0)
 	}
@@ -94,10 +114,13 @@ func main() {
 	log.Info("Starting Varnish Ingress controller version:", version)
 
-	var err error
-	var config *rest.Config
+	vController, err := varnish.NewVarnishController(log, *tmplDirF)
+	if err != nil {
+		log.Fatal("Cannot initialize Varnish controller: ", err)
+		os.Exit(-1)
+	}
 
-	config, err = rest.InClusterConfig()
+	config, err := rest.InClusterConfig()
 	if err != nil {
 		log.Fatalf("error creating client configuration: %v", err)
 	}
@@ -106,20 +129,36 @@ func main() {
 		log.Fatal("Failed to create client:", err)
 	}
 
-	vController := varnish.NewVarnishController(log)
+	var informerFactory informers.SharedInformerFactory
+	if *namespaceF == api_v1.NamespaceAll {
+		informerFactory = informers.NewSharedInformerFactory(
+			kubeClient, resyncPeriod)
+	} else {
+		informerFactory = informers.NewFilteredSharedInformerFactory(
+			kubeClient, resyncPeriod, *namespaceF, noop)
+		// XXX this is preferred, but only available in newer
+		// versions of client-go.
+		// informerFactory = informers.NewSharedInformerFactoryWithOptions(
+		// 	kubeClient, resyncPeriod,
+		// 	informers.WithNamespace(*namespaceF))
+	}
 
 	varnishDone := make(chan error, 1)
 	vController.Start(varnishDone)
 
-	namespace := os.Getenv("POD_NAMESPACE")
 	ingController := controller.NewIngressController(log, kubeClient,
-		vController, namespace)
+		vController, informerFactory)
 	go handleTermination(log, ingController, vController, varnishDone)
+	informerFactory.Start(informerStop)
 	ingController.Run()
 }
 
-func handleTermination(log *logrus.Logger, ingc *controller.IngressController,
-	vc *varnish.VarnishController, varnishDone chan error) {
+func handleTermination(
+	log *logrus.Logger,
+	ingc *controller.IngressController,
+	vc *varnish.VarnishController,
+	varnishDone chan error) {
 
 	signalChan := make(chan os.Signal, 1)
 	signal.Notify(signalChan, syscall.SIGTERM)
@@ -141,11 +180,14 @@ func handleTermination(log *logrus.Logger, ingc *controller.IngressController,
 		log.Info("Received SIGTERM, shutting down")
 	}
 
+	log.Info("Shutting down informers")
+	informerStop <- struct{}{}
+
 	log.Info("Shutting down the ingress controller")
 	ingc.Stop()
 
 	if !exited {
-		log.Info("Shutting down Varnish")
+		log.Info("Shutting down the Varnish controller")
 		vc.Quit()
 	}
...
@@ -37,11 +37,15 @@ import (
 
 const monitorIntvl = time.Second * 30
 
-func (vc *VarnishController) checkInst(inst *varnishSvc) {
+func (vc *VarnishController) checkInst(inst *varnishInst) {
+	if inst.admSecret == nil {
+		vc.log.Warnf("No admin secret known for endpoint %s", inst.addr)
+		return
+	}
 	inst.admMtx.Lock()
 	defer inst.admMtx.Unlock()
 
-	adm, err := admin.Dial(inst.addr, vc.admSecret, admTimeout)
+	adm, err := admin.Dial(inst.addr, *inst.admSecret, admTimeout)
 	if err != nil {
 		vc.log.Errorf("Error connecting to %s: %v", inst.addr, err)
 		return
@@ -52,21 +56,21 @@ func (vc *VarnishController) checkInst(inst *varnishSvc) {
 
 	pong, err := adm.Ping()
 	if err != nil {
-		vc.log.Error("Error pinging at %s: %v", inst.addr, err)
+		vc.log.Errorf("Error pinging at %s: %v", inst.addr, err)
 		return
 	}
 	vc.log.Infof("Succesfully pinged instance %s: %+v", inst.addr, pong)
 
 	state, err := adm.Status()
 	if err != nil {
-		vc.log.Error("Error getting status at %s: %v", inst.addr, err)
+		vc.log.Errorf("Error getting status at %s: %v", inst.addr, err)
 		return
 	}
 	vc.log.Infof("Status at %s: %s", inst.addr, state)
 
 	panic, err := adm.GetPanic()
 	if err != nil {
-		vc.log.Error("Error getting panic at %s: %v", inst.addr, err)
+		vc.log.Errorf("Error getting panic at %s: %v", inst.addr, err)
 		return
 	}
 	if panic == "" {
@@ -78,7 +82,7 @@ func (vc *VarnishController) checkInst(inst *varnishSvc) {
 
 	vcls, err := adm.VCLList()
 	if err != nil {
-		vc.log.Error("Error getting VCL list at %s: %v", inst.addr, err)
+		vc.log.Errorf("Error getting VCL list at %s: %v", inst.addr, err)
 		return
 	}
 	for _, vcl := range vcls {
@@ -101,16 +105,17 @@ func (vc *VarnishController) monitor() {
 	for {
 		time.Sleep(monitorIntvl)
 
-		for svc, insts := range vc.varnishSvcs {
-			vc.log.Infof("Monitoring Varnish instances in %s", svc)
+		for svcName, svc := range vc.svcs {
+			vc.log.Infof("Monitoring Varnish instances in %s",
+				svcName)
 
-			for _, inst := range insts {
+			for _, inst := range svc.instances {
 				vc.checkInst(inst)
 			}
-			if err := vc.updateVarnishInstances(insts); err != nil {
+			if err := vc.updateVarnishSvc(svcName); err != nil {
 				vc.log.Errorf("Errors updating Varnish "+
-					"instances: %+v", err)
+					"Service %s: %+v", svcName, err)
 			}
 		}
 	}
...
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package varnish
import (
"fmt"
"testing"
)
func TestVarnishAdmError(t *testing.T) {
vadmErr := VarnishAdmError{
addr: "123.45.67.89:4711",
err: fmt.Errorf("Error message"),
}
err := vadmErr.Error()
want := "123.45.67.89:4711: Error message"
if err != want {
t.Errorf("VarnishAdmError.Error() want=%s got=%s", want, err)
}
vadmErrs := VarnishAdmErrors{
vadmErr,
VarnishAdmError{
addr: "98.76.54.321:815",
err: fmt.Errorf("Error 2"),
},
VarnishAdmError{
addr: "192.0.2.255:80",
err: fmt.Errorf("Error 3"),
},
}
err = vadmErrs.Error()
want = "[{123.45.67.89:4711: Error message}{98.76.54.321:815: Error 2}" +
"{192.0.2.255:80: Error 3}]"
if err != want {
t.Errorf("VarnishAdmErrors.Error() want=%s got=%s", want, err)
}
}
@@ -34,6 +34,7 @@ import (
 	"fmt"
 	"hash"
 	"hash/fnv"
+	"path"
 	"regexp"
 	"sort"
 	"text/template"
@@ -214,13 +215,30 @@ )
 )
 
 var (
-	IngressTmpl = template.Must(template.New(ingTmplSrc).Funcs(fMap).ParseFiles(ingTmplSrc))
-	ShardTmpl = template.Must(template.New(shardTmplSrc).Funcs(fMap).ParseFiles(shardTmplSrc))
+	IngressTmpl *template.Template
+	ShardTmpl *template.Template
 	symPattern = regexp.MustCompile("^[[:alpha:]][[:word:]-]*$")
 	first = regexp.MustCompile("[[:alpha:]]")
 	restIllegal = regexp.MustCompile("[^[:word:]-]+")
 )
 
+func InitTemplates(tmplDir string) error {
+	var err error
+	ingTmplPath := path.Join(tmplDir, ingTmplSrc)
+	shardTmplPath := path.Join(tmplDir, shardTmplSrc)
+	IngressTmpl, err = template.New(ingTmplSrc).
+		Funcs(fMap).ParseFiles(ingTmplPath)
+	if err != nil {
+		return err
+	}
+	ShardTmpl, err = template.New(shardTmplSrc).
+		Funcs(fMap).ParseFiles(shardTmplPath)
+	if err != nil {
+		return err
+	}
+	return nil
+}
+
 func replIllegal(ill []byte) []byte {
 	repl := []byte("_")
 	for _, b := range ill {
...
...@@ -30,7 +30,9 @@ package vcl ...@@ -30,7 +30,9 @@ package vcl
import ( import (
"bytes" "bytes"
"fmt"
"io/ioutil" "io/ioutil"
"os"
"path/filepath" "path/filepath"
"reflect" "reflect"
"testing" "testing"
...@@ -46,6 +48,20 @@ func cmpGold(got []byte, goldfile string) (bool, error) { ...@@ -46,6 +48,20 @@ func cmpGold(got []byte, goldfile string) (bool, error) {
return bytes.Equal(got, gold), nil return bytes.Equal(got, gold), nil
} }
func TestMain(m *testing.M) {
tmplDir := ""
tmplEnv, exists := os.LookupEnv("TEMPLATE_DIR")
if exists {
tmplDir = tmplEnv
}
if err := InitTemplates(tmplDir); err != nil {
fmt.Printf("Cannot parse templates: %v\n", err)
os.Exit(-1)
}
code := m.Run()
os.Exit(code)
}
var teaSvc = Service{ var teaSvc = Service{
Name: "tea-svc", Name: "tea-svc",
Addresses: []Address{ Addresses: []Address{
......
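With ``TestMain`` above reading ``TEMPLATE_DIR`` before the templates are
parsed, the test binary no longer depends on the working directory. A
sketch of a possible invocation (the paths are only illustrations, not
fixed by this commit):

```
# Point TEMPLATE_DIR at the directory containing the VCL template
# files, then run the package tests from any working directory.
# The paths below are hypothetical; adjust them to your checkout.
$ TEMPLATE_DIR=$(pwd)/pkg/vcl/templates go test ./pkg/vcl/...
```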
...@@ -6,42 +6,45 @@ suitable to your requirements. The YAML configurations in this folder ...@@ -6,42 +6,45 @@ suitable to your requirements. The YAML configurations in this folder
prepare a method of deployment, suitable for testing and editing as prepare a method of deployment, suitable for testing and editing as
needed. needed.
## Namespace and ServiceAccount These instructions target a setup in which the Ingress controller runs
in the ``kube-system`` namespace, and watches for Varnish Ingresses in
all namespaces. If you need to restrict your deployment to a single
namespace, see the [instructions for single-namespace
deployments](/examples/namespace) in the [``/examples``
folder](/examples).
Define the Namespace ``varnish-ingress``, and a ServiceAccount named ## ServiceAccount and RBAC
``varnish-ingress`` in that namespace:
```
$ kubectl apply -f ns-and-sa.yaml
```
**NOTE**: You can choose any Namespace, but currently all further
operations are restricted to that Namespace -- all resources described
in the following must be defined in the same Namespace. The controller
currently only reads information from the cluster API about Ingresses,
Services and so forth in the namespace of the pod in which it is
running; so all Varnish instances and every resource named in an
Ingress definition must defined in that namespace. This is likely to
become more flexible in future development.
## RBAC
Apply [Role-based access Define a ServiceAccount named ``varnish-ingress-controller`` and apply
[Role-based access
control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
(RBAC) by creating a ClusterRole named ``varnish-ingress`` that (RBAC) to permit the necessary API access for the Ingress controller:
permits the necessary API access for the Ingress controller, and a
ClusterRoleBinding that assigns the ClusterRole to the ServiceAccount
defined in the first step:
``` ```
$ kubectl apply -f rbac.yaml $ kubectl apply -f serviceaccount.yaml
``` ```
## Admin Secret ## Admin Secret
The controller uses Varnish's admin interface to manage the Varnish The controller uses Varnish's admin interface to manage Varnish
instance, which requires authorization using a shared secret. This is instances, which requires authorization using a shared secret. This is
prepared by defining a k8s Secret: prepared by defining a k8s Secret:
``` ```
$ kubectl apply -f adm-secret.yaml $ kubectl apply -f adm-secret.yaml
``` ```
The ``metadata`` section MUST specify the label
``app: varnish-ingress``. The Ingress controller ignores all Secrets
that do not have this label.
The ``metadata.name`` field MUST match the secret name provided for
the Varnish deployment described below (``adm-secret`` in the
example):
```
metadata:
name: adm-secret
labels:
app: varnish-ingress
```
32 bytes of randomness are sufficient for the secret: 32 bytes of randomness are sufficient for the secret:
``` ```
# This command can be used to generate the value in the data field of # This command can be used to generate the value in the data field of
...@@ -53,10 +56,8 @@ manifest in your deployment; create a new secret, for example using ...@@ -53,10 +56,8 @@ manifest in your deployment; create a new secret, for example using
the command shown above. The purpose of authorization is defeated if the command shown above. The purpose of authorization is defeated if
everyone uses the same secret from an example. everyone uses the same secret from an example.
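For illustration only (any source of 32 random bytes will do; this is
not necessarily the command referred to above), a suitable value for
the ``data`` field can be produced with:

```
# 32 random bytes, base64-encoded as required for the Secret's data field
$ head -c 32 /dev/urandom | base64
```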
**TO DO**: The ``metadata.name`` field of the Secret is currently **TO DO**: The key for the Secret (in the ``data`` field) is
hard-wired to the value ``adm-secret``, and the key for the Secret (in hard-wired to ``admin``.
the ``data`` field) is hard-wired to ``admin``. The Secret must be
defined in the same Namespace defined above.
## Deploy Varnish containers ## Deploy Varnish containers
...@@ -72,14 +73,10 @@ There are some requirements on the configuration of the Varnish ...@@ -72,14 +73,10 @@ There are some requirements on the configuration of the Varnish
deployment that must be fulfilled in order for the Ingress to work deployment that must be fulfilled in order for the Ingress to work
properly: properly:
* Currently it must be defined in the same Namespace as defined
above.
* The ``serviceAccountName`` must match the ServiceAccount defined
above.
* The ``image`` must be specified as ``varnish-ingress/varnish``. * The ``image`` must be specified as ``varnish-ingress/varnish``.
* ``spec.template`` must specify a ``label`` with a value that is * ``spec.template`` must specify the label ``app: varnish-ingress``.
matched by the Varnish admin Service described below. In this The controller recognizes Services with this label as Varnish
example: deployments meant to implement Ingress:
``` ```
template: template:
...@@ -103,7 +100,8 @@ properly: ...@@ -103,7 +100,8 @@ properly:
**TO DO**: The ports are currently hard-wired to these port numbers. **TO DO**: The ports are currently hard-wired to these port numbers.
A port for TLS access is currently not supported. A port for TLS access is currently not supported.
* ``volumeMounts`` and ``volumes`` must be specified so that the * ``volumeMounts`` and ``volumes`` must be specified so that the
Secret defined above is available to Varnish: Secret defined above is available to Varnish. The ``secretName``
MUST match the name of the Secret (``adm-secret`` in the example):
``` ```
volumeMounts: volumeMounts:
...@@ -122,9 +120,8 @@ properly: ...@@ -122,9 +120,8 @@ properly:
path: _.secret path: _.secret
``` ```
**TO DO**: The ``mountPath`` is currently hard-wired to **TO DO**: The ``mountPath`` is currently hard-wired to
``/var/run/varnish``. The ``secretName`` is hard-wired to ``/var/run/varnish``. The ``key`` is hard-wired to ``admin``, and
``adm-secret``, the ``key`` to ``admin``, and ``path`` to ``path`` to ``_.secret``.
``_.secret``.
* The liveness check should determine if the Varnish master process is * The liveness check should determine if the Varnish master process is
running. Since Varnish is started in the foreground as the entry running. Since Varnish is started in the foreground as the entry
...@@ -209,7 +206,7 @@ Among these restrictions are: ...@@ -209,7 +206,7 @@ Among these restrictions are:
unload one, otherwise it interferes with the implementation of unload one, otherwise it interferes with the implementation of
Ingress. Ingress.
## Expose the Varnish HTTP port ## Expose the Varnish HTTP and admin ports
With a Deployment, you may choose a resource such as a LoadBalancer or With a Deployment, you may choose a resource such as a LoadBalancer or
Nodeport to create external access to Varnish's HTTP port. The present Nodeport to create external access to Varnish's HTTP port. The present
...@@ -221,17 +218,25 @@ $ kubectl apply -f nodeport.yaml ...@@ -221,17 +218,25 @@ $ kubectl apply -f nodeport.yaml
The cluster then assigns an external port over which HTTP requests are The cluster then assigns an external port over which HTTP requests are
directed to Varnish instances. directed to Varnish instances.
## Varnish admin Service The Service definition must fulfill some requirements:
The controller discovers Varnish instances that it manages by * A port with the name ``varnishadm`` and whose ``targetPort``
obtaining the Endpoints for a headless Service that the admin port: matches the admin port defined above (named ``varnishadm`` in the
sample Deployment for Varnish) MUST be specified:
``` ```
$ kubectl apply -f varnish-adm-svc.yaml ports:
- port: 6081
targetPort: 6081
protocol: TCP
name: varnishadm
``` ```
This makes it possible for the controller to find the internal
addresses of Varnish instances and connect to their admin listeners.
The Service definition must fulfill some requirements: The external port for admin is not used; this allows the
controller to identify the admin ports by name for the Endpoints
that realize the Varnish Service.
**TO DO**: The port number is hard-wired to 6081, and the
``port.name`` is hardwired to ``varnishadm``.
* The Service must be defined so that the cluster API will allow * The Service must be defined so that the cluster API will allow
Endpoints to be listed when the container is not ready (since Endpoints to be listed when the container is not ready (since
...@@ -255,38 +260,21 @@ spec: ...@@ -255,38 +260,21 @@ spec:
In recent versions, both specifications are permitted in the YAML, In recent versions, both specifications are permitted in the YAML,
as in the example YAML (the annotation is deprecated, but is not yet an as in the example YAML (the annotation is deprecated, but is not yet an
error). error).
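For reference, a sketch of the two equivalent specifications as they
appear in the example ``nodeport.yaml`` (only the relevant fields
shown):

```
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  publishNotReadyAddresses: true
```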
* The ``selector`` must match the ``label`` given for the Varnish * The ``selector`` must specify the label ``app: varnish-ingress``:
deployment, as discussed above. In the present example:
``` ```
selector: selector:
app: varnish-ingress app: varnish-ingress
``` ```
**TO DO**: The Service must be defined in the Namespace of the pod in
which the controller runs. The ``name`` of the Service is currently
hard-wired to ``varnish-ingress-admin``. The port number is hard-wired
to 6081, and the ``port.name`` is hardwired to ``varnishadm``.
## Deploy the controller ## Deploy the controller
This example uses a Deployment to run the controller container: This example uses a Deployment to run the controller container:
``` ```
$ kubectl apply -f controller.yaml $ kubectl apply -f controller.yaml
``` ```
The requirements are: The ``image`` must be ``varnish-ingress/controller``.
* The ``image`` must be ``varnish-ingress/controller``.
* ``spec.template.spec`` must specify the ``POD_NAMESPACE``
environment variable:
```
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
```
It does *not* make sense to deploy more than one replica of the It does *not* make sense to deploy more than one replica of the
controller. If there are more controllers, all of them will connect to controller. If there are more controllers, all of them will connect to
...@@ -294,10 +282,6 @@ the Varnish instances and send them the same administrative ...@@ -294,10 +282,6 @@ the Varnish instances and send them the same administrative
commands. That is not an error (or there is a bug in the controller if commands. That is not an error (or there is a bug in the controller if
it does cause errors), but the extra work is superfluous. it does cause errors), but the extra work is superfluous.
**TO DO**: The controller currently only acts on Ingress, Service,
Endpoint and Secret definitions in the same Namespace as the pod in
which it is running.
### Controller options ### Controller options
Command-line options for the controller invocation can be set using the Command-line options for the controller invocation can be set using the
...@@ -312,12 +296,6 @@ Command-line options for the controller invocation can be set using the ...@@ -312,12 +296,6 @@ Command-line options for the controller invocation can be set using the
- -log-level=info - -log-level=info
``` ```
Currently supported options are:
* ``log-level`` to set the verbosity of logging. Possible values are
``panic``, ``fatal``, ``error``, ``warn``, ``info``, ``debug`` or
``trace``; default ``info``.
# Done # Done
When these commands succeed: When these commands succeed:
......
...@@ -2,7 +2,8 @@ apiVersion: v1 ...@@ -2,7 +2,8 @@ apiVersion: v1
kind: Secret kind: Secret
metadata: metadata:
name: adm-secret name: adm-secret
namespace: varnish-ingress labels:
app: varnish-ingress
type: Opaque type: Opaque
data: data:
admin: XGASQn0dd/oEsWh5WMXUpmKAKNZYnQGsHSmO/nHkv1w= admin: XGASQn0dd/oEsWh5WMXUpmKAKNZYnQGsHSmO/nHkv1w=
...@@ -2,7 +2,7 @@ apiVersion: extensions/v1beta1 ...@@ -2,7 +2,7 @@ apiVersion: extensions/v1beta1
kind: Deployment kind: Deployment
metadata: metadata:
name: varnish-ingress-controller name: varnish-ingress-controller
namespace: varnish-ingress namespace: kube-system
spec: spec:
replicas: 1 replicas: 1
selector: selector:
...@@ -13,21 +13,12 @@ spec: ...@@ -13,21 +13,12 @@ spec:
labels: labels:
app: varnish-ingress-controller app: varnish-ingress-controller
spec: spec:
serviceAccountName: varnish-ingress serviceAccountName: varnish-ingress-controller
containers: containers:
- image: varnish-ingress/controller - image: varnish-ingress/controller
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent
name: varnish-ingress-controller name: varnish-ingress-controller
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args: args:
# log-level default is info, so this can be left out. # log-level default is info, so this can be left out.
# Shown here to demonstrate setting options for the controller. # Shown here to demonstrate setting options for the controller.
- -log-level=debug - -log-level=info
...@@ -2,13 +2,21 @@ apiVersion: v1 ...@@ -2,13 +2,21 @@ apiVersion: v1
kind: Service kind: Service
metadata: metadata:
name: varnish-ingress name: varnish-ingress
namespace: varnish-ingress labels:
app: varnish-ingress
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec: spec:
type: NodePort type: NodePort
ports: ports:
- port: 6081
targetPort: 6081
protocol: TCP
name: varnishadm
- port: 80 - port: 80
targetPort: 80 targetPort: 80
protocol: TCP protocol: TCP
name: http name: http
selector: selector:
app: varnish-ingress app: varnish-ingress
publishNotReadyAddresses: true
kind: ClusterRole kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1 apiVersion: rbac.authorization.k8s.io/v1beta1
metadata: metadata:
name: varnish-ingress name: varnish-ingress-controller
rules: rules:
- apiGroups: - apiGroups:
- "" - ""
...@@ -51,12 +51,12 @@ rules: ...@@ -51,12 +51,12 @@ rules:
kind: ClusterRoleBinding kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1 apiVersion: rbac.authorization.k8s.io/v1beta1
metadata: metadata:
name: varnish-ingress name: varnish-ingress-controller
subjects: subjects:
- kind: ServiceAccount - kind: ServiceAccount
name: varnish-ingress name: varnish-ingress-controller
namespace: varnish-ingress namespace: kube-system
roleRef: roleRef:
kind: ClusterRole kind: ClusterRole
name: varnish-ingress name: varnish-ingress-controller
apiGroup: rbac.authorization.k8s.io apiGroup: rbac.authorization.k8s.io
apiVersion: v1
kind: ServiceAccount
metadata:
name: varnish-ingress-controller
namespace: kube-system
...@@ -2,9 +2,8 @@ apiVersion: extensions/v1beta1 ...@@ -2,9 +2,8 @@ apiVersion: extensions/v1beta1
kind: Deployment kind: Deployment
metadata: metadata:
name: varnish name: varnish
namespace: varnish-ingress
spec: spec:
replicas: 1 replicas: 2
selector: selector:
matchLabels: matchLabels:
app: varnish-ingress app: varnish-ingress
...@@ -13,7 +12,6 @@ spec: ...@@ -13,7 +12,6 @@ spec:
labels: labels:
app: varnish-ingress app: varnish-ingress
spec: spec:
serviceAccountName: varnish-ingress
containers: containers:
- image: varnish-ingress/varnish - image: varnish-ingress/varnish
imagePullPolicy: IfNotPresent imagePullPolicy: IfNotPresent
...@@ -21,9 +19,9 @@ spec: ...@@ -21,9 +19,9 @@ spec:
ports: ports:
- name: http - name: http
containerPort: 80 containerPort: 80
- name: k8sport - name: k8s
containerPort: 8080 containerPort: 8080
- name: admport - name: varnishadm
containerPort: 6081 containerPort: 6081
volumeMounts: volumeMounts:
- name: adm-secret - name: adm-secret
...@@ -39,7 +37,7 @@ spec: ...@@ -39,7 +37,7 @@ spec:
readinessProbe: readinessProbe:
httpGet: httpGet:
path: /ready path: /ready
port: k8sport port: k8s
args: args:
# varnishd command-line options # varnishd command-line options
# In this example: varnishd -l 80M -p default_grace=10 # In this example: varnishd -l 80M -p default_grace=10
......
...@@ -7,5 +7,8 @@ requirements. ...@@ -7,5 +7,8 @@ requirements.
* [The "cafe" example](/examples/hello), a "hello world" for Ingress. * [The "cafe" example](/examples/hello), a "hello world" for Ingress.
* [Self-sharding Varnish cluser](/examples/self-sharding) * Limiting the Ingress controller to
[a single namespace](/examples/namespace).
* [Self-sharding Varnish cluster](/examples/self-sharding)
([docs](/docs/self-sharding.md)) ([docs](/docs/self-sharding.md))
...@@ -13,18 +13,10 @@ $ kubectl create -f cafe.yaml ...@@ -13,18 +13,10 @@ $ kubectl create -f cafe.yaml
$ kubectl create -f cafe-ingress.yaml $ kubectl create -f cafe-ingress.yaml
``` ```
To note: Note that the Ingress has the annotation
``kubernetes.io/ingress.class: "varnish"``, identifying it as an
* The Ingress has the annotation Ingress to be implemented by the Varnish controller (the Varnish
``kubernetes.io/ingress.class: "varnish"``, identifying it as an controller ignores any Ingress that does not have this annotation).
Ingress to be implemented by the Varnish controller (the Varnish
controller ignores any Ingress that does not have this annotation).
* Both the Ingress and the Services are created in the
``varnish-ingress`` namespace, as are the resources defined by the
configurations in the [``deploy/``](/deploy) folder. The controller
currently ignores all definitions that are not defined in the same
namespace as the Pod in which it is running (this is likely to
become more flexible in future development).
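The annotation appears in the Ingress metadata, as in
``cafe-ingress.yaml``:

```
metadata:
  name: cafe-ingress-varnish
  annotations:
    kubernetes.io/ingress.class: "varnish"
```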
The Ingress rules in the example require that: The Ingress rules in the example require that:
......
...@@ -4,7 +4,6 @@ metadata: ...@@ -4,7 +4,6 @@ metadata:
name: cafe-ingress-varnish name: cafe-ingress-varnish
annotations: annotations:
kubernetes.io/ingress.class: "varnish" kubernetes.io/ingress.class: "varnish"
namespace: varnish-ingress
spec: spec:
rules: rules:
- host: cafe.example.com - host: cafe.example.com
......
...@@ -2,7 +2,6 @@ apiVersion: extensions/v1beta1 ...@@ -2,7 +2,6 @@ apiVersion: extensions/v1beta1
kind: Deployment kind: Deployment
metadata: metadata:
name: coffee name: coffee
namespace: varnish-ingress
spec: spec:
replicas: 2 replicas: 2
selector: selector:
...@@ -23,7 +22,6 @@ apiVersion: v1 ...@@ -23,7 +22,6 @@ apiVersion: v1
kind: Service kind: Service
metadata: metadata:
name: coffee-svc name: coffee-svc
namespace: varnish-ingress
spec: spec:
ports: ports:
- port: 80 - port: 80
...@@ -37,7 +35,6 @@ apiVersion: extensions/v1beta1 ...@@ -37,7 +35,6 @@ apiVersion: extensions/v1beta1
kind: Deployment kind: Deployment
metadata: metadata:
name: tea name: tea
namespace: varnish-ingress
spec: spec:
replicas: 3 replicas: 3
selector: selector:
...@@ -58,7 +55,6 @@ apiVersion: v1 ...@@ -58,7 +55,6 @@ apiVersion: v1
kind: Service kind: Service
metadata: metadata:
name: tea-svc name: tea-svc
namespace: varnish-ingress
labels: labels:
spec: spec:
ports: ports:
......
# Restricting the Ingress controller to a single namespace
This folder contains an example of the use of the ``-namespace``
argument of the Ingress controller, which limits its actions on
Services, Ingresses, Secrets and so on to a single namespace. The
controller can then be deployed in that namespace. You may need to do
so, for example, due to limits on authorization in the cluster.
The sample manifests use ``varnish-ingress`` as the example namespace,
and re-use the ["cafe" example](/examples/hello), with Services and
an Ingress defined for the namespace.
The following walks through the steps in the documentation for a
[cluster-wide deployment of the Ingress controller](/deploy) and the
[deployment of the example](/examples/hello). Only the details
concerning the single-namespace deployment are covered here; see the
other doc links for further information.
## Deployment
### Namespace and ServiceAccount
Define the Namespace ``varnish-ingress``, and a ServiceAccount named
``varnish-ingress`` in that namespace:
```
$ kubectl apply -f ns-and-sa.yaml
```
All further operations will be restricted to the Namespace.
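A minimal sketch of the two objects that ``ns-and-sa.yaml`` defines,
assuming just the names given above:

```
apiVersion: v1
kind: Namespace
metadata:
  name: varnish-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: varnish-ingress
  namespace: varnish-ingress
```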
### RBAC
The RBAC manifest grants the same authorizations as for the
[cluster-wide deployment](/deploy), but they are bound to the
ServiceAccount just defined. This allows the Ingress controller
to operate in the sample Namespace (only).
```
$ kubectl apply -f rbac.yaml
```
### Admin Secret, Varnish deployment and Nodeport
These are created by this sequence of commands:
```
$ kubectl apply -f adm-secret.yaml
$ kubectl apply -f varnish.yaml
$ kubectl apply -f nodeport.yaml
```
The manifests differ from their counterparts in the
[cluster-wide deployment instructions](/deploy) only in the namespace.
### Controller
To run the controller container:
```
$ kubectl apply -f controller.yaml
```
This manifest differs from a cluster-wide deployment in that:
* The ``serviceAccountName`` is assigned the ServiceAccount defined
above.
* The container is invoked with the ``-namespace`` option to limit its
work to the given namespace:
```
spec:
serviceAccountName: varnish-ingress
containers:
- args:
# Controller only acts on Ingresses, Services etc. in the
# given namespace.
- -namespace=varnish-ingress
```
The controller now only watches Services, Ingresses, Secrets and so
forth in the ``varnish-ingress`` namespace.
## Example
The "cafe" example can be deployed with:
```
$ kubectl apply -f cafe.yaml
$ kubectl apply -f cafe-ingress.yaml
```
Their manifests differ from those of the ["hello" example](/examples/hello)
in their restriction to the sample namespace.
apiVersion: v1
kind: Secret
metadata:
name: adm-secret
namespace: varnish-ingress
labels:
app: varnish-ingress
type: Opaque
data:
admin: XGASQn0dd/oEsWh5WMXUpmKAKNZYnQGsHSmO/nHkv1w=
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cafe-ingress-varnish
namespace: varnish-ingress
annotations:
kubernetes.io/ingress.class: "varnish"
spec:
rules:
- host: cafe.example.com
http:
paths:
- path: /tea
backend:
serviceName: tea-svc
servicePort: 80
- path: /coffee
backend:
serviceName: coffee-svc
servicePort: 80
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: coffee
namespace: varnish-ingress
spec:
replicas: 2
selector:
matchLabels:
app: coffee
template:
metadata:
labels:
app: coffee
spec:
containers:
- name: coffee
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: coffee-svc
namespace: varnish-ingress
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: coffee
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: tea
namespace: varnish-ingress
spec:
replicas: 3
selector:
matchLabels:
app: tea
template:
metadata:
labels:
app: tea
spec:
containers:
- name: tea
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: tea-svc
namespace: varnish-ingress
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: tea
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: varnish-ingress-controller
namespace: varnish-ingress
spec:
replicas: 1
selector:
matchLabels:
app: varnish-ingress-controller
template:
metadata:
labels:
app: varnish-ingress-controller
spec:
serviceAccountName: varnish-ingress
containers:
- image: varnish-ingress/controller
imagePullPolicy: IfNotPresent
name: varnish-ingress-controller
args:
# Controller only observes Ingresses, Services etc. in the
# given namespace.
- -namespace=varnish-ingress
apiVersion: v1 apiVersion: v1
kind: Service kind: Service
metadata: metadata:
name: varnish-ingress-admin name: varnish-ingress
namespace: varnish-ingress namespace: varnish-ingress
labels:
app: varnish-ingress
annotations: annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec: spec:
clusterIP: None type: NodePort
ports: ports:
- port: 6081 - port: 6081
targetPort: 6081
protocol: TCP
name: varnishadm name: varnishadm
- port: 80
targetPort: 80
protocol: TCP
name: http
selector: selector:
app: varnish-ingress app: varnish-ingress
publishNotReadyAddresses: true publishNotReadyAddresses: true
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: varnish-ingress
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- list
- watch
- get
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: varnish-ingress
subjects:
- kind: ServiceAccount
name: varnish-ingress
namespace: varnish-ingress
roleRef:
kind: ClusterRole
name: varnish-ingress
apiGroup: rbac.authorization.k8s.io
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: varnish
namespace: varnish-ingress
spec:
replicas: 2
selector:
matchLabels:
app: varnish-ingress
template:
metadata:
labels:
app: varnish-ingress
spec:
containers:
- image: varnish-ingress/varnish
imagePullPolicy: IfNotPresent
name: varnish-ingress
ports:
- name: http
containerPort: 80
- name: k8s
containerPort: 8080
- name: varnishadm
containerPort: 6081
volumeMounts:
- name: adm-secret
mountPath: "/var/run/varnish"
readOnly: true
livenessProbe:
exec:
command:
- /usr/bin/pgrep
- -P
- "0"
- varnishd
readinessProbe:
httpGet:
path: /ready
port: k8s
args:
# varnishd command-line options
# In this example: varnishd -l 80M -p default_grace=10
# These are default values for the given options in Varnish 6.1.
# Shown here to demonstrate setting options for Varnish.
- -l
- 80M
- -p
- default_grace=10
volumes:
- name: adm-secret
secret:
secretName: adm-secret
items:
- key: admin
path: _.secret