Commit 1140313d authored by Geoff Simmons

Watch for resources in all namespaces.

This commit includes a comprehensive refactoring of the
controller:

- ResourceEventHandlers dispatch work to a workqueue for
  a worker that in turn dispatches work to a separate
  queue for each namespace. For each namespace, there
  is a worker that syncs the various resources.

- The controller uses a SharedInformer, with Informers
  and Listers for each resource type. The Listers are
  handed off to the namespace-specific workers.

- Separate source files contain code for syncing
  Endpoints, Ingress, Service and Secret.

- Added the controller CLI options:
  -namespace: for single-namespace deployments
  -templatedir: to locate the templates for VCL

- The directory for templates is given either by the
  templatedir CLI option or the TEMPLATE_DIR env
  variable. This makes it possible to run the
  controller from the command line, and to run tests
  from any working directory.

- The Ingress annotation
  ingress.varnish-cache.org/varnish-svc names the
  Varnish Service intended to implement the Ingress.
  If the annotation is absent, the controller looks
  for exactly one Service in the namespace with the
  label "app:varnish-ingress". If none is found, the
  Ingress is rejected. (This is not yet documented;
  see the sketch after this list.)

- Docs and examples updated

- Example for single-namespace deployment added.
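
An Ingress selecting its Varnish Service via the annotation might look
like the following minimal sketch (every name except the annotation key
is hypothetical):

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    # name of the Varnish Service that should implement this Ingress
    ingress.varnish-cache.org/varnish-svc: my-varnish-svc
spec:
  backend:
    serviceName: coffee-svc
    servicePort: 80
```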
parent 16af59ba
......@@ -20,10 +20,9 @@ is undergoing initial testing. It is currently subject to a number of
limitations, expected to be removed over time, including:
* No support for TLS connections
* The controller only attends to definitions (Ingresses, Services and
Endpoints) in the namespace of the Pod in which it is deployed.
* Only one Ingress definition is valid at a time. If more than one definition
is added to the namespace, then the most recent definition becomes valid.
* Only one Ingress definition in a namespace is valid at a time. If
more than one definition is added to the namespace, then the most
recent definition becomes valid.
* A variety of elements in the implementation are hard-wired, as
detailed in the documentation. These are expected to become configurable
in further development.
......@@ -48,6 +47,13 @@ The Ingress can then be deployed by any of the means that are
customary for Kubernetes. The [``deploy/``](/deploy) folder contains
manifests (YAMLs) for some of the ways to deploy an Ingress.
The deployment described in [``deploy/``](/deploy) targets a default
setup in which the controller runs in the ``kube-system`` namespace
and watches in all namespaces of the cluster for Ingresses, Services
and so on that are intended for the Varnish implementation. See the
[instructions for single-namespace deployments](/examples/namespace)
if you need to limit the deployment to one namespace.
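As a rough sketch of how that restriction is expressed (the namespace
name here is hypothetical; see the linked example for the complete
manifests), the controller Deployment passes the new ``-namespace``
option in its container args:

```
  containers:
  - image: varnish-ingress/controller
    args:
      # watch resources only in this namespace (hypothetical name)
      - -namespace=varnish-ingress
```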
The [``examples/``](/examples) folder contains YAMLs for Services and
Ingresses to test and demonstrate the Varnish implementation and its
features. You might want to begin with the
......
This diff is collapsed.
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package controller
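// syncEndp handles an update of an Endpoints resource, using the fact
// that Endpoints share the name of their Service. Endpoints of a
// Varnish Ingress Service cause the Service itself to be re-queued;
// otherwise every Varnish Ingress that uses the Service as a backend
// is re-synced.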
func (worker *NamespaceWorker) syncEndp(key string) error {
worker.log.Infof("Syncing Endpoints: %s/%s", worker.namespace, key)
svc, err := worker.svc.Get(key)
if err != nil {
worker.log.Warnf("Cannot get service for endpoints %s/%s, "+
"ignoring", worker.namespace, key)
return nil
}
if worker.isVarnishIngSvc(svc) {
worker.log.Infof("Endpoints changed for Varnish Ingress "+
"service %s/%s, enqueuing service sync", svc.Namespace,
svc.Name)
worker.queue.Add(svc)
return nil
}
worker.log.Debugf("Checking ingresses for endpoints: %s/%s",
worker.namespace, key)
ings, err := worker.getIngsForSvc(svc)
if err != nil {
return err
}
if len(ings) == 0 {
worker.log.Debugf("No ingresses for endpoints: %s/%s",
worker.namespace, key)
return nil
}
worker.log.Debugf("Update ingresses for endpoints %s", key)
for _, ing := range ings {
if !isVarnishIngress(ing) {
worker.log.Debugf("Ingress %s/%s: not Varnish",
ing.Namespace, ing.Name)
continue
}
err = worker.addOrUpdateIng(ing)
if err != nil {
return err
}
}
return nil
}
This diff is collapsed.
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package controller
import (
"fmt"
api_v1 "k8s.io/api/core/v1"
)
// XXX make this configurable
const admSecretKey = "admin"
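// getVarnishSvcsForSecret returns the Varnish Ingress Services whose
// Pods mount the named Secret in a SecretVolumeSource.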
func (worker *NamespaceWorker) getVarnishSvcsForSecret(
secretName string) ([]*api_v1.Service, error) {
var secrSvcs []*api_v1.Service
svcs, err := worker.svc.List(varnishIngressSelector)
if err != nil {
return secrSvcs, err
}
for _, svc := range svcs {
pods, err := worker.getPods(svc)
if err != nil {
return secrSvcs,
fmt.Errorf("Error getting pod information for "+
"service %s/%s: %v", svc.Namespace,
svc.Name, err)
}
if len(pods.Items) == 0 {
continue
}
// The secret is meant for the service if a
// SecretVolumeSource is specified in the Pod spec
// that names the secret.
pod := pods.Items[0]
for _, vol := range pod.Spec.Volumes {
if vol.Secret == nil {
continue
}
if vol.Secret.SecretName == secretName {
secrSvcs = append(secrSvcs, svc)
}
}
}
return secrSvcs, nil
}
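// syncSecret stores an admin secret with the Varnish controller and
// updates every Varnish Service whose Pods mount the Secret. Secrets
// without the app:varnish-ingress label are ignored.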
func (worker *NamespaceWorker) syncSecret(key string) error {
worker.log.Infof("Syncing Secret: %s/%s", worker.namespace, key)
secret, err := worker.secr.Get(key)
if err != nil {
return err
}
app, ok := secret.Labels[labelKey]
if !ok || app != labelVal {
worker.log.Infof("Not a Varnish admin secret, ignoring: %s/%s",
secret.Namespace, secret.Name)
return nil
}
secretData, exists := secret.Data[admSecretKey]
if !exists {
return fmt.Errorf("Secret %s/%s does not have key %s",
secret.Namespace, secret.Name, admSecretKey)
}
secretKey := secret.Namespace + "/" + secret.Name
worker.log.Debugf("Setting secret %s", secretKey)
worker.vController.SetAdmSecret(secretKey, secretData)
svcs, err := worker.getVarnishSvcsForSecret(secret.Name)
if err != nil {
return err
}
worker.log.Debugf("Found Varnish services for secret %s/%s: %v",
secret.Namespace, secret.Name, svcs)
for _, svc := range svcs {
svcKey := svc.Namespace + "/" + svc.Name
if err = worker.vController.
UpdateSvcForSecret(svcKey, secretKey); err != nil {
return err
}
}
return nil
}
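// deleteSecret removes an admin secret from the Varnish controller
// and updates every Varnish Service whose Pods mount the Secret.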
func (worker *NamespaceWorker) deleteSecret(key string) error {
worker.log.Infof("Deleting Secret: %s", key)
svcs, err := worker.getVarnishSvcsForSecret(key)
if err != nil {
return err
}
worker.log.Debugf("Found Varnish services for secret %s/%s: %v",
worker.namespace, key, svcs)
secretKey := worker.namespace + "/" + key
worker.vController.DeleteAdmSecret(secretKey)
for _, svc := range svcs {
svcKey := svc.Namespace + "/" + svc.Name
if err = worker.vController.
UpdateSvcForSecret(svcKey, secretKey); err != nil {
return err
}
}
return nil
}
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package controller
import (
"fmt"
"code.uplex.de/uplex-varnish/k8s-ingress/cmd/varnish/vcl"
api_v1 "k8s.io/api/core/v1"
extensions "k8s.io/api/extensions/v1beta1"
"k8s.io/apimachinery/pkg/labels"
)
// XXX make this configurable
const admPortName = "varnishadm"
// isVarnishIngSvc determines if a Service represents a Varnish that
// can implement Ingress, for which this controller is responsible.
// Currently the app label must point to a hardwired name.
func (worker *NamespaceWorker) isVarnishIngSvc(svc *api_v1.Service) bool {
app, exists := svc.Labels[labelKey]
return exists && app == labelVal
}
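// getIngsForSvc returns the Ingresses in the namespace that refer to
// the Service as a backend, either as the default backend or in a
// rule path.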
func (worker *NamespaceWorker) getIngsForSvc(
svc *api_v1.Service) (ings []*extensions.Ingress, err error) {
allIngs, err := worker.ing.List(labels.Everything())
if err != nil {
return
}
for _, ing := range allIngs {
if ing.Namespace != svc.Namespace {
// Shouldn't be possible
continue
}
cpy := ing.DeepCopy()
if cpy.Spec.Backend != nil {
if cpy.Spec.Backend.ServiceName == svc.Name {
ings = append(ings, cpy)
}
}
for _, rules := range cpy.Spec.Rules {
if rules.IngressRuleValue.HTTP == nil {
continue
}
for _, p := range rules.IngressRuleValue.HTTP.Paths {
if p.Backend.ServiceName == svc.Name {
ings = append(ings, cpy)
}
}
}
}
if len(ings) == 0 {
worker.log.Infof("No Varnish Ingresses defined for service %s/%s",
svc.Namespace, svc.Name)
}
return ings, nil
}
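// enqueueIngressForService re-queues every Varnish Ingress that has
// the Service as a backend.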
func (worker *NamespaceWorker) enqueueIngressForService(
svc *api_v1.Service) error {
ings, err := worker.getIngsForSvc(svc)
if err != nil {
return err
}
for _, ing := range ings {
if !isVarnishIngress(ing) {
continue
}
worker.queue.Add(ing)
}
return nil
}
// Return true if changes in Varnish services may lead to changes in
// the VCL config generated for the Ingress.
func isVarnishInVCLSpec(ing *extensions.Ingress) bool {
_, selfShard := ing.Annotations[selfShardKey]
return selfShard
}
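// syncSvc handles an update of a Service. Changes in a non-Varnish
// Service cause the Ingresses that use it as a backend to be
// re-queued. For a Varnish Ingress Service, the admin endpoints and
// admin secret are collected and the Varnish controller is updated.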
func (worker *NamespaceWorker) syncSvc(key string) error {
var addrs []vcl.Address
worker.log.Infof("Syncing Service: %s/%s", worker.namespace, key)
svc, err := worker.svc.Get(key)
if err != nil {
return err
}
if !worker.isVarnishIngSvc(svc) {
return worker.enqueueIngressForService(svc)
}
worker.log.Infof("Syncing Varnish Ingress Service %s/%s:",
svc.Namespace, svc.Name)
// Check if there are Ingresses for which the VCL spec may
// change due to changes in Varnish services.
updateVCL := false
ings, err := worker.ing.List(labels.Everything())
if err != nil {
return err
}
for _, ing := range ings {
if ing.Namespace != svc.Namespace {
continue
}
ingSvc, err := worker.getVarnishSvcForIng(ing)
if err != nil {
return err
}
if ingSvc.Namespace != svc.Namespace ||
ingSvc.Name != svc.Name {
continue
}
if !isVarnishInVCLSpec(ing) {
continue
}
updateVCL = true
worker.log.Debugf("Requeueing Ingress %s/%s after changed "+
"Varnish service %s/%s: %+v", ing.Namespace,
ing.Name, svc.Namespace, svc.Name, ing)
worker.queue.Add(ing)
}
if !updateVCL {
worker.log.Debugf("No change in VCL due to changed Varnish "+
"service %s/%s", svc.Namespace, svc.Name)
}
endps, err := worker.getServiceEndpoints(svc)
if err != nil {
return err
}
worker.log.Debugf("Varnish service %s/%s endpoints: %+v", svc.Namespace,
svc.Name, endps)
// Get the secret name and admin port for the service. We have
// to retrieve a Pod spec for the service, then look for the
// SecretVolumeSource, and the port matching admPortName.
secrName := ""
worker.log.Debugf("Searching Pods for the secret for %s/%s",
svc.Namespace, svc.Name)
pods, err := worker.getPods(svc)
if err != nil {
return fmt.Errorf("Cannot get a Pod for service %s/%s: %v",
svc.Namespace, svc.Name, err)
}
if len(pods.Items) == 0 {
return fmt.Errorf("No Pods for Service: %s/%s", svc.Namespace,
svc.Name)
}
pod := &pods.Items[0]
for _, vol := range pod.Spec.Volumes {
if secretVol := vol.Secret; secretVol != nil {
secrName = secretVol.SecretName
break
}
}
if secrName != "" {
secrName = worker.namespace + "/" + secrName
worker.log.Infof("Found secret name %s for Service %s/%s",
secrName, svc.Namespace, svc.Name)
} else {
worker.log.Warnf("No secret found for Service %s/%s",
svc.Namespace, svc.Name)
}
// XXX hard-wired Port name
for _, subset := range endps.Subsets {
admPort := int32(0)
for _, port := range subset.Ports {
if port.Name == admPortName {
admPort = port.Port
break
}
}
if admPort == 0 {
return fmt.Errorf("No Varnish admin port %s found for "+
"Service %s/%s endpoint", admPortName,
svc.Namespace, svc.Name)
}
for _, address := range subset.Addresses {
addr := vcl.Address{
IP: address.IP,
Port: admPort,
}
addrs = append(addrs, addr)
}
}
worker.log.Debugf("Varnish service %s/%s addresses: %+v", svc.Namespace,
svc.Name, addrs)
return worker.vController.AddOrUpdateVarnishSvc(
svc.Namespace+"/"+svc.Name, addrs, secrName, !updateVCL)
}
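// deleteSvc handles deletion of a Service. A Varnish Ingress Service
// is removed from the Varnish controller; otherwise the Ingresses
// that use the Service as a backend are re-queued.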
func (worker *NamespaceWorker) deleteSvc(key string) error {
nsKey := worker.namespace + "/" + key
worker.log.Info("Deleting Service:", nsKey)
svcObj, err := worker.svc.Get(key)
if err != nil {
return err
}
if !worker.isVarnishIngSvc(svcObj) {
return worker.enqueueIngressForService(svcObj)
}
return worker.vController.DeleteVarnishSvc(nsKey)
}
......@@ -30,219 +30,83 @@ package controller
import (
"fmt"
"time"
"github.com/sirupsen/logrus"
api_v1 "k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/util/intstr"
"k8s.io/apimachinery/pkg/util/wait"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/util/workqueue"
api_v1 "k8s.io/api/core/v1"
extensions "k8s.io/api/extensions/v1beta1"
"code.uplex.de/uplex-varnish/k8s-ingress/cmd/varnish/vcl"
)
// taskQueue manages a work queue through an independent worker that
// invokes the given sync function for every work item inserted.
type taskQueue struct {
log *logrus.Logger
// queue is the work queue the worker polls
queue *workqueue.Type
// sync is called for each item in the queue
sync func(Task)
// workerDone is closed when the worker exits
workerDone chan struct{}
}
const (
labelKey = "app"
labelVal = "varnish-ingress"
)
func (t *taskQueue) run(period time.Duration, stopCh <-chan struct{}) {
wait.Until(t.worker, period, stopCh)
}
var (
varnishIngressSet = labels.Set(map[string]string{
labelKey: labelVal,
})
// Selector for use in List() operations to find resources
// with the app:varnish-ingress label.
varnishIngressSelector = labels.SelectorFromSet(varnishIngressSet)
)
// enqueue enqueues ns/name of the given api object in the task queue.
func (t *taskQueue) enqueue(obj interface{}) {
key, err := keyFunc(obj)
if err != nil {
t.log.Errorf("Couldn't get key for object %v: %v", obj, err)
return
}
// getServiceEndpoints returns the endpoints of a service, matched on
// service name.
func (worker *NamespaceWorker) getServiceEndpoints(
svc *api_v1.Service) (ep *api_v1.Endpoints, err error) {
task, err := NewTask(key, obj)
eps, err := worker.endp.List(labels.Everything())
if err != nil {
t.log.Errorf("Couldn't create a task for object %v: %v", obj,
err)
return
}
t.log.Info("Adding an element with a key:", task.Key)
t.queue.Add(task)
}
func (t *taskQueue) requeue(task Task, err error) {
t.log.Warnf("Requeuing %v, err %v", task.Key, err)
t.queue.Add(task)
}
func (t *taskQueue) requeueAfter(task Task, err error, after time.Duration) {
t.log.Warnf("Requeuing %v after %s, err %v", task.Key, after.String(),
err)
go func(task Task, after time.Duration) {
time.Sleep(after)
t.queue.Add(task)
}(task, after)
}
// worker processes work in the queue through sync.
func (t *taskQueue) worker() {
for {
task, quit := t.queue.Get()
if quit {
close(t.workerDone)
return
for _, ep := range eps {
if svc.Name == ep.Name && svc.Namespace == ep.Namespace {
return ep, nil
}
t.log.Infof("Syncing %v", task.(Task).Key)
t.sync(task.(Task))
t.queue.Done(task)
}
}
// shutdown shuts down the work queue and waits for the worker to ACK
func (t *taskQueue) shutdown() {
t.queue.ShutDown()
<-t.workerDone
}
// NewTaskQueue creates a new task queue with the given sync function.
// The sync function is called for every element inserted into the queue.
func NewTaskQueue(syncFn func(Task), log *logrus.Logger) *taskQueue {
return &taskQueue{
log: log,
queue: workqueue.New(),
sync: syncFn,
workerDone: make(chan struct{}),
}
}
// Kind represents the kind of the Kubernetes resources of a task
type Kind int
const (
// Ingress resource
Ingress = iota
// Endpoints resource
Endpoints
// Service resource
Service
// Secret resource
Secret
)
// Task is an element of a taskQueue
type Task struct {
Kind Kind
Key string
}
// NewTask creates a new task
func NewTask(key string, obj interface{}) (Task, error) {
var k Kind
switch t := obj.(type) {
case *extensions.Ingress:
// ing := obj.(*extensions.Ingress)
k = Ingress
case *api_v1.Endpoints:
k = Endpoints
case *api_v1.Service:
k = Service
case *api_v1.Secret:
k = Secret
default:
return Task{}, fmt.Errorf("Unknown type: %v", t)
}
return Task{k, key}, nil
}
// StoreToIngressLister makes a Store that lists Ingress.
type StoreToIngressLister struct {
cache.Store
}
// GetByKeySafe calls Store.GetByKeySafe and returns a copy of the
// ingress so it is safe to modify.
func (s *StoreToIngressLister) GetByKeySafe(key string) (ing *extensions.Ingress, exists bool, err error) {
item, exists, err := s.Store.GetByKey(key)
if !exists || err != nil {
return nil, exists, err
}
ing = item.(*extensions.Ingress).DeepCopy()
err = fmt.Errorf("could not find endpoints for service: %s/%s",
svc.Namespace, svc.Name)
return
}
// List lists all Ingress' in the store.
func (s *StoreToIngressLister) List() (ing extensions.IngressList, err error) {
for _, m := range s.Store.List() {
ing.Items = append(ing.Items, *(m.(*extensions.Ingress)).DeepCopy())
}
return ing, nil
}
// GetServiceIngress gets all the Ingress' that have rules pointing to a service.
// Note that this ignores services without the right nodePorts.
func (s *StoreToIngressLister) GetServiceIngress(svc *api_v1.Service) (ings []extensions.Ingress, err error) {
for _, m := range s.Store.List() {
ing := *m.(*extensions.Ingress).DeepCopy()
if ing.Namespace != svc.Namespace {
continue
}
if ing.Spec.Backend != nil {
if ing.Spec.Backend.ServiceName == svc.Name {
ings = append(ings, ing)
}
}
for _, rules := range ing.Spec.Rules {
if rules.IngressRuleValue.HTTP == nil {
continue
}
for _, p := range rules.IngressRuleValue.HTTP.Paths {
if p.Backend.ServiceName == svc.Name {
ings = append(ings, ing)
// endpsTargetPort2Addrs returns a list of addresses for VCL backend
// config, given the Endpoints of a Service and a target port number
// for their Pods.
func endpsTargetPort2Addrs(
svc *api_v1.Service,
endps *api_v1.Endpoints,
targetPort int32) ([]vcl.Address, error) {
var addrs []vcl.Address
for _, subset := range endps.Subsets {
for _, port := range subset.Ports {
if port.Port == targetPort {
for _, address := range subset.Addresses {
addr := vcl.Address{
IP: address.IP,
Port: port.Port,
}
addrs = append(addrs, addr)
}
return addrs, nil
}
}
}
if len(ings) == 0 {
err = fmt.Errorf("No ingress for service %v", svc.Name)
}
return
}
// StoreToEndpointLister makes a Store that lists Endpoints
type StoreToEndpointLister struct {
cache.Store
}
// GetServiceEndpoints returns the endpoints of a service, matched on service name.
func (s *StoreToEndpointLister) GetServiceEndpoints(svc *api_v1.Service) (ep api_v1.Endpoints, err error) {
for _, m := range s.Store.List() {
ep = *m.(*api_v1.Endpoints)
if svc.Name == ep.Name && svc.Namespace == ep.Namespace {
return ep, nil
}
}
err = fmt.Errorf("could not find endpoints for service: %v", svc.Name)
return
return addrs, fmt.Errorf("No endpoints for service port %d in service "+
"%s/%s", targetPort, svc.Namespace, svc.Name)
}
// FindPort locates the container port for the given pod and portName. If the
// targetPort is a number, use that. If the targetPort is a string, look that
// string up in all named ports in all containers in the target pod. If no
// match is found, fail.
func FindPort(pod *api_v1.Pod, svcPort *api_v1.ServicePort) (int32, error) {
// findPort returns the container port number for a Pod and
// ServicePort. If the targetPort is a string, search for a matching
// named port in the specs for all containers in the Pod.
func findPort(pod *api_v1.Pod, svcPort *api_v1.ServicePort) (int32, error) {
portName := svcPort.TargetPort
switch portName.Type {
case intstr.Int:
return int32(portName.IntValue()), nil
case intstr.String:
name := portName.StrVal
for _, container := range pod.Spec.Containers {
......@@ -253,14 +117,49 @@ func FindPort(pod *api_v1.Pod, svcPort *api_v1.ServicePort) (int32, error) {
}
}
}
case intstr.Int:
return int32(portName.IntValue()), nil
}
return 0, fmt.Errorf("no suitable port for manifest: %s", pod.UID)
return 0, fmt.Errorf("No port number found for ServicePort %s and Pod "+
"%s/%s", svcPort.Name, pod.Namespace, pod.Name)
}
// getPods queries the API for the Pods selected by a Service's
// label selector.
func (worker *NamespaceWorker) getPods(
svc *api_v1.Service) (*api_v1.PodList, error) {
return worker.client.CoreV1().Pods(svc.Namespace).
List(meta_v1.ListOptions{
LabelSelector: labels.Set(svc.Spec.Selector).String(),
})
}
// StoreToSecretLister makes a Store that lists Secrets
type StoreToSecretLister struct {
cache.Store
// getTargetPort returns a target port number for the Pods of a Service,
// given a ServicePort.
func (worker *NamespaceWorker) getTargetPort(svcPort *api_v1.ServicePort,
svc *api_v1.Service) (int32, error) {
if (svcPort.TargetPort == intstr.IntOrString{}) {
return svcPort.Port, nil
}
if svcPort.TargetPort.Type == intstr.Int {
return int32(svcPort.TargetPort.IntValue()), nil
}
pods, err := worker.getPods(svc)
if err != nil {
return 0, fmt.Errorf("Error getting pod information: %v", err)
}
if len(pods.Items) == 0 {
return 0, fmt.Errorf("No pods of service: %v", svc.Name)
}
pod := &pods.Items[0]
portNum, err := findPort(pod, svcPort)
if err != nil {
return 0, fmt.Errorf("Error finding named port %s in pod %s/%s"+
": %v", svcPort.Name, pod.Namespace, pod.Name, err)
}
return portNum, nil
}
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package controller
import (
api_v1 "k8s.io/api/core/v1"
extensions "k8s.io/api/extensions/v1beta1"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
"k8s.io/client-go/kubernetes"
core_v1_listers "k8s.io/client-go/listers/core/v1"
ext_listers "k8s.io/client-go/listers/extensions/v1beta1"
"k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/record"
"k8s.io/client-go/util/workqueue"
"github.com/sirupsen/logrus"
"code.uplex.de/uplex-varnish/k8s-ingress/cmd/varnish"
)
const (
// syncSuccess and syncFailure are reasons for Events
syncSuccess = "SyncSuccess"
syncFailure = "SyncFailure"
)
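// NamespaceWorker syncs the resources of a single namespace. It
// serves its own work queue, using Listers that are restricted to
// the namespace.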
type NamespaceWorker struct {
namespace string
log *logrus.Logger
vController *varnish.VarnishController
queue workqueue.RateLimitingInterface
stopChan chan struct{}
ing ext_listers.IngressNamespaceLister
svc core_v1_listers.ServiceNamespaceLister
endp core_v1_listers.EndpointsNamespaceLister
secr core_v1_listers.SecretNamespaceLister
client kubernetes.Interface
recorder record.EventRecorder
}
func (worker *NamespaceWorker) infoEvent(obj interface{}, reason, msgFmt string,
args ...interface{}) {
switch obj.(type) {
case *extensions.Ingress:
ing, _ := obj.(*extensions.Ingress)
worker.recorder.Eventf(ing, api_v1.EventTypeNormal, reason,
msgFmt, args...)
case *api_v1.Service:
svc, _ := obj.(*api_v1.Service)
worker.recorder.Eventf(svc, api_v1.EventTypeNormal, reason,
msgFmt, args...)
case *api_v1.Endpoints:
endp, _ := obj.(*api_v1.Endpoints)
worker.recorder.Eventf(endp, api_v1.EventTypeNormal, reason,
msgFmt, args...)
case *api_v1.Secret:
secr, _ := obj.(*api_v1.Secret)
worker.recorder.Eventf(secr, api_v1.EventTypeNormal, reason,
msgFmt, args...)
default:
worker.log.Warnf("Unhandled type %T, no event generated", obj)
}
}
func (worker *NamespaceWorker) warnEvent(obj interface{}, reason, msgFmt string,
args ...interface{}) {
switch obj.(type) {
case *extensions.Ingress:
ing, _ := obj.(*extensions.Ingress)
worker.recorder.Eventf(ing, api_v1.EventTypeWarning, reason,
msgFmt, args...)
case *api_v1.Service:
svc, _ := obj.(*api_v1.Service)
worker.recorder.Eventf(svc, api_v1.EventTypeWarning, reason,
msgFmt, args...)
case *api_v1.Endpoints:
endp, _ := obj.(*api_v1.Endpoints)
worker.recorder.Eventf(endp, api_v1.EventTypeWarning, reason,
msgFmt, args...)
case *api_v1.Secret:
secr, _ := obj.(*api_v1.Secret)
worker.recorder.Eventf(secr, api_v1.EventTypeWarning, reason,
msgFmt, args...)
default:
worker.log.Warnf("Unhandled type %T, no event generated", obj)
}
}
func (worker *NamespaceWorker) syncSuccess(obj interface{}, msgFmt string,
args ...interface{}) {
worker.log.Infof(msgFmt, args...)
worker.infoEvent(obj, syncSuccess, msgFmt, args...)
}
func (worker *NamespaceWorker) syncFailure(obj interface{}, msgFmt string,
args ...interface{}) {
worker.log.Errorf(msgFmt, args...)
worker.warnEvent(obj, syncFailure, msgFmt, args...)
}
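// dispatch invokes the sync or delete handler matching the type of
// an object read from the work queue; objects of unknown final state
// are handled as deletions.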
func (worker *NamespaceWorker) dispatch(obj interface{}) error {
_, key, err := getNameSpace(obj)
if err != nil {
worker.syncFailure(obj, "Cannot get key for object %v: %v", obj,
err)
return nil
}
switch obj.(type) {
case *extensions.Ingress:
return worker.syncIng(key)
case *api_v1.Service:
return worker.syncSvc(key)
case *api_v1.Endpoints:
return worker.syncEndp(key)
case *api_v1.Secret:
return worker.syncSecret(key)
default:
deleted, ok := obj.(cache.DeletedFinalStateUnknown)
if !ok {
worker.syncFailure(obj, "Unhandled object type: %T",
obj)
return nil
}
switch deleted.Obj.(type) {
case *extensions.Ingress:
return worker.deleteIng(key)
case *api_v1.Service:
return worker.deleteSvc(key)
case *api_v1.Endpoints:
// Delete and sync do the same thing
return worker.syncEndp(key)
case *api_v1.Secret:
return worker.deleteSecret(key)
default:
worker.syncFailure(deleted, "Unhandled object type: %T",
deleted)
return nil
}
return nil
}
}
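// next processes one item from the worker's queue, recording an
// Event for success or failure, and re-queueing the item with rate
// limiting on error.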
func (worker *NamespaceWorker) next() {
select {
case <-worker.stopChan:
worker.queue.ShutDown()
return
default:
break
}
obj, quit := worker.queue.Get()
if quit {
return
}
defer worker.queue.Done(obj)
if err := worker.dispatch(obj); err == nil {
if ns, name, err := getNameSpace(obj); err == nil {
worker.syncSuccess(obj, "Successfully synced: %s/%s",
ns, name)
} else {
worker.syncSuccess(obj, "Successfully synced")
}
worker.queue.Forget(obj)
} else {
worker.syncFailure(obj, "Error, requeueing: %v", err)
worker.queue.AddRateLimited(obj)
}
}
func (worker *NamespaceWorker) work() {
worker.log.Info("Starting worker for namespace:", worker.namespace)
for !worker.queue.ShuttingDown() {
worker.next()
}
worker.log.Info("Shutting down worker for namespace:", worker.namespace)
}
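// NamespaceQueues holds the main queue of the controller and
// dispatches its items to per-namespace workers, which are created
// on demand.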
type NamespaceQueues struct {
Queue workqueue.RateLimitingInterface
log *logrus.Logger
vController *varnish.VarnishController
workers map[string]*NamespaceWorker
listers *Listers
client kubernetes.Interface
recorder record.EventRecorder
}
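// NewNamespaceQueues creates the dispatcher for the per-namespace
// workers.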
func NewNamespaceQueues(
log *logrus.Logger,
vController *varnish.VarnishController,
listers *Listers,
client kubernetes.Interface,
recorder record.EventRecorder) *NamespaceQueues {
q := workqueue.NewRateLimitingQueue(
workqueue.DefaultControllerRateLimiter())
return &NamespaceQueues{
Queue: q,
log: log,
vController: vController,
workers: make(map[string]*NamespaceWorker),
listers: listers,
client: client,
recorder: recorder,
}
}
func getNameSpace(obj interface{}) (namespace, name string, err error) {
k, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
if err != nil {
return
}
namespace, name, err = cache.SplitMetaNamespaceKey(k)
if err != nil {
return
}
return
}
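// next hands one item from the main queue to the worker for its
// namespace, creating and starting the worker if it does not already
// exist.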
func (qs *NamespaceQueues) next() {
obj, quit := qs.Queue.Get()
if quit {
return
}
defer qs.Queue.Done(obj)
ns, _, err := getNameSpace(obj)
if err != nil {
utilruntime.HandleError(err)
return
}
worker, exists := qs.workers[ns]
if !exists {
q := workqueue.NewNamedRateLimitingQueue(
workqueue.DefaultControllerRateLimiter(), ns)
worker = &NamespaceWorker{
namespace: ns,
log: qs.log,
vController: qs.vController,
queue: q,
stopChan: make(chan struct{}),
ing: qs.listers.ing.Ingresses(ns),
svc: qs.listers.svc.Services(ns),
endp: qs.listers.endp.Endpoints(ns),
secr: qs.listers.secr.Secrets(ns),
client: qs.client,
recorder: qs.recorder,
}
qs.workers[ns] = worker
go worker.work()
}
worker.queue.Add(obj)
qs.Queue.Forget(obj)
}
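// Run is the main loop of the dispatcher; it runs until the main
// queue is shut down.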
func (qs *NamespaceQueues) Run() {
qs.log.Info("Starting dispatcher worker")
for !qs.Queue.ShuttingDown() {
qs.next()
}
qs.log.Info("Shutting down dispatcher worker")
}
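// Stop shuts down the main queue and all of the namespace workers.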
func (qs *NamespaceQueues) Stop() {
qs.Queue.ShutDown()
for _, worker := range qs.workers {
worker.queue.ShutDown()
close(worker.stopChan)
}
// XXX wait for WaitGroup
}
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181129154850-be0a7893ac92 h1:KBZnpstNIE+yoIx/qZy0Udl0Qwg9Qlj1eY70sIvhjLM=
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181129154850-be0a7893ac92/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205203203-20054d30408c h1:dDnkldbKT9fxuJrfEbG40ydbS/Q30oH7cvDpzwJkXoQ=
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205203203-20054d30408c/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205221923-190a386386bc h1:fv3y7B7zaVH561OpjJgqDFAy1rINt7Xa8WhIlN8ZOU4=
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205221923-190a386386bc/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205222454-e67a0f88a279 h1:8REUMrx0zvzsAZ/zKSsRrhYbftLSNQ0v2AYZljJqvcY=
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181205222454-e67a0f88a279/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181209154204-43826850baae h1:rV0/WL/NvJ8/TAQwmcSxFUyI5TVA5GM0KBXXfnWrsFY=
code.uplex.de/uplex-varnish/varnishapi v0.0.0-20181209154204-43826850baae/go.mod h1:J0znUDkk1j5lNWKZZ6zfISZWbA2fXvsxCM+FpDUxG9g=
github.com/PuerkitoBio/purell v1.1.0 h1:rmGxhojJlM0tuKtfdvliR84CFHljx9ag64t2xmVkjK4=
......@@ -16,6 +8,7 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/emicklei/go-restful v2.8.0+incompatible h1:wN8GCRDPGHguIynsnBartv5GUgGUg1LAU7+xnSn1j7Q=
github.com/emicklei/go-restful v2.8.0+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
......@@ -45,11 +38,13 @@ github.com/gregjones/httpcache v0.0.0-20181110185634-c63ab54fda8f h1:ShTPMJQes6t
github.com/gregjones/httpcache v0.0.0-20181110185634-c63ab54fda8f/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/json-iterator/go v1.1.5 h1:gL2yXlmiIo4+t+y32d4WGwOjKGYcGOuyrg46vadswDE=
github.com/json-iterator/go v1.1.5/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/juju/ratelimit v1.0.1 h1:+7AIFJVQ0EQgq/K9+0Krm7m530Du7tIz0METWzN0RgY=
github.com/juju/ratelimit v1.0.1/go.mod h1:qapgC/Gy+xNh9UxzV13HGGl/6UXNN+ct+vwSgWNm/qk=
github.com/konsorten/go-windows-terminal-sequences v1.0.1 h1:mweAR1A6xJ3oS2pRaGiHgQ4OO8tzTaLawm8vnODuwDk=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329 h1:2gxZ0XQIU/5z3Z3bUBu+FXuk2pFbkN6tcwi/pjyaDic=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
......@@ -58,16 +53,20 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJ
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0 h1:WSHQ+IS43OoUrWtD1/bbclrwK8TTH5hzp+umCiuxHgs=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.4.3 h1:RE1xgDvH7imwFD45h+u2SgIfERHlS2yNG4DObb5BSKU=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/sirupsen/logrus v1.2.0 h1:juTguoYk5qI21pwyTXY3B3Y5cOTH3ZUyZCg1v/mihuo=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793 h1:u+LnwYTOOW7Ukr/fppxEb1Nwz0AtPflrblfvUudpo+I=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
......@@ -76,6 +75,7 @@ golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73r
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc h1:a3CU5tJYVj92DY2LaA1kUkrsqD5/3mLDhx2NcNqyW+0=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f h1:Bl/8QSvNqXvPGPGXa2z5xUTmV7VDcZyvRZ+QQXkXTZQ=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
......@@ -83,10 +83,13 @@ golang.org/x/sys v0.0.0-20181128092732-4ed8d59d0b35 h1:YAFjXN64LMvktoUZH9zgY4lGc
golang.org/x/sys v0.0.0-20181128092732-4ed8d59d0b35/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
......
......@@ -43,6 +43,9 @@ import (
"github.com/sirupsen/logrus"
api_v1 "k8s.io/api/core/v1"
meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/informers"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
......@@ -52,6 +55,13 @@ var (
loglvlF = flag.String("log-level", "INFO",
"log level: one of PANIC, FATAL, ERROR, WARN, INFO, DEBUG, \n"+
"or TRACE")
namespaceF = flag.String("namespace", api_v1.NamespaceAll,
"namespace in which to listen for resources (default all)")
tmplDirF = flag.String("templatedir", "",
"directory of templates for VCL generation. Defaults to \n"+
"the TEMPLATE_DIR env variable, if set, or the \n"+
"current working directory when the ingress \n"+
"controller is invoked")
logFormat = logrus.TextFormatter{
DisableColors: true,
FullTimestamp: true,
......@@ -61,13 +71,23 @@ var (
Formatter: &logFormat,
Level: logrus.InfoLevel,
}
informerStop = make(chan struct{})
)
const resyncPeriod = 0
// resyncPeriod = 30 * time.Second
// Satisfies type TweakListOptionsFunc in
// k8s.io/client-go/informers/internalinterfaces, for use in
// NewFilteredSharedInformerFactory below.
func noop(opts *meta_v1.ListOptions) {}
func main() {
flag.Parse()
if *versionF {
fmt.Printf("%s version %s", os.Args[0], version)
fmt.Printf("%s version %s\n", os.Args[0], version)
os.Exit(0)
}
......@@ -94,10 +114,13 @@ func main() {
log.Info("Starting Varnish Ingress controller version:", version)
var err error
var config *rest.Config
vController, err := varnish.NewVarnishController(log, *tmplDirF)
if err != nil {
log.Fatal("Cannot initialize Varnish controller: ", err)
os.Exit(-1)
}
config, err = rest.InClusterConfig()
config, err := rest.InClusterConfig()
if err != nil {
log.Fatalf("error creating client configuration: %v", err)
}
......@@ -106,20 +129,36 @@ func main() {
log.Fatal("Failed to create client:", err)
}
vController := varnish.NewVarnishController(log)
var informerFactory informers.SharedInformerFactory
if *namespaceF == api_v1.NamespaceAll {
informerFactory = informers.NewSharedInformerFactory(
kubeClient, resyncPeriod)
} else {
informerFactory = informers.NewFilteredSharedInformerFactory(
kubeClient, resyncPeriod, *namespaceF, noop)
// XXX this is preferred, but only available in newer
// versions of client-go.
// informerFactory = informers.NewSharedInformerFactoryWithOptions(
// kubeClient, resyncPeriod,
// informers.WithNamespace(*namespaceF))
}
varnishDone := make(chan error, 1)
vController.Start(varnishDone)
namespace := os.Getenv("POD_NAMESPACE")
ingController := controller.NewIngressController(log, kubeClient,
vController, namespace)
vController, informerFactory)
go handleTermination(log, ingController, vController, varnishDone)
informerFactory.Start(informerStop)
ingController.Run()
}
func handleTermination(log *logrus.Logger, ingc *controller.IngressController,
vc *varnish.VarnishController, varnishDone chan error) {
func handleTermination(
log *logrus.Logger,
ingc *controller.IngressController,
vc *varnish.VarnishController,
varnishDone chan error) {
signalChan := make(chan os.Signal, 1)
signal.Notify(signalChan, syscall.SIGTERM)
......@@ -141,11 +180,14 @@ func handleTermination(log *logrus.Logger, ingc *controller.IngressController,
log.Info("Received SIGTERM, shutting down")
}
log.Info("Shutting down informers")
informerStop <- struct{}{}
log.Info("Shutting down the ingress controller")
ingc.Stop()
if !exited {
log.Info("Shutting down Varnish")
log.Info("Shutting down the Varnish controller")
vc.Quit()
}
......
......@@ -37,11 +37,15 @@ import (
const monitorIntvl = time.Second * 30
func (vc *VarnishController) checkInst(inst *varnishSvc) {
func (vc *VarnishController) checkInst(inst *varnishInst) {
if inst.admSecret == nil {
vc.log.Warnf("No admin secret known for endpoint %s", inst.addr)
return
}
inst.admMtx.Lock()
defer inst.admMtx.Unlock()
adm, err := admin.Dial(inst.addr, vc.admSecret, admTimeout)
adm, err := admin.Dial(inst.addr, *inst.admSecret, admTimeout)
if err != nil {
vc.log.Errorf("Error connecting to %s: %v", inst.addr, err)
return
......@@ -52,21 +56,21 @@ func (vc *VarnishController) checkInst(inst *varnishSvc) {
pong, err := adm.Ping()
if err != nil {
vc.log.Error("Error pinging at %s: %v", inst.addr, err)
vc.log.Errorf("Error pinging at %s: %v", inst.addr, err)
return
}
vc.log.Infof("Succesfully pinged instance %s: %+v", inst.addr, pong)
state, err := adm.Status()
if err != nil {
vc.log.Error("Error getting status at %s: %v", inst.addr, err)
vc.log.Errorf("Error getting status at %s: %v", inst.addr, err)
return
}
vc.log.Infof("Status at %s: %s", inst.addr, state)
panic, err := adm.GetPanic()
if err != nil {
vc.log.Error("Error getting panic at %s: %v", inst.addr, err)
vc.log.Errorf("Error getting panic at %s: %v", inst.addr, err)
return
}
if panic == "" {
......@@ -78,7 +82,7 @@ func (vc *VarnishController) checkInst(inst *varnishSvc) {
vcls, err := adm.VCLList()
if err != nil {
vc.log.Error("Error getting VCL list at %s: %v", inst.addr, err)
vc.log.Errorf("Error getting VCL list at %s: %v", inst.addr, err)
return
}
for _, vcl := range vcls {
......@@ -101,16 +105,17 @@ func (vc *VarnishController) monitor() {
for {
time.Sleep(monitorIntvl)
for svc, insts := range vc.varnishSvcs {
vc.log.Infof("Monitoring Varnish instances in %s", svc)
for svcName, svc := range vc.svcs {
vc.log.Infof("Monitoring Varnish instances in %s",
svcName)
for _, inst := range insts {
for _, inst := range svc.instances {
vc.checkInst(inst)
}
if err := vc.updateVarnishInstances(insts); err != nil {
if err := vc.updateVarnishSvc(svcName); err != nil {
vc.log.Errorf("Errors updating Varnish "+
"instances: %+v", err)
"Service %s: %+v", svcName, err)
}
}
}
......
This diff is collapsed.
/*
* Copyright (c) 2018 UPLEX Nils Goroll Systemoptimierung
* All rights reserved
*
* Author: Geoffrey Simmons <geoffrey.simmons@uplex.de>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
package varnish
import (
"fmt"
"testing"
)
func TestVarnishAdmError(t *testing.T) {
vadmErr := VarnishAdmError{
addr: "123.45.67.89:4711",
err: fmt.Errorf("Error message"),
}
err := vadmErr.Error()
want := "123.45.67.89:4711: Error message"
if err != want {
t.Errorf("VarnishAdmError.Error() want=%s got=%s", want, err)
}
vadmErrs := VarnishAdmErrors{
vadmErr,
VarnishAdmError{
addr: "98.76.54.321:815",
err: fmt.Errorf("Error 2"),
},
VarnishAdmError{
addr: "192.0.2.255:80",
err: fmt.Errorf("Error 3"),
},
}
err = vadmErrs.Error()
want = "[{123.45.67.89:4711: Error message}{98.76.54.321:815: Error 2}" +
"{192.0.2.255:80: Error 3}]"
if err != want {
t.Errorf("VarnishAdmErrors.Error() want=%s got=%s", want, err)
}
}
......@@ -34,6 +34,7 @@ import (
"fmt"
"hash"
"hash/fnv"
"path"
"regexp"
"sort"
"text/template"
......@@ -214,13 +215,30 @@ const (
)
var (
IngressTmpl = template.Must(template.New(ingTmplSrc).Funcs(fMap).ParseFiles(ingTmplSrc))
ShardTmpl = template.Must(template.New(shardTmplSrc).Funcs(fMap).ParseFiles(shardTmplSrc))
IngressTmpl *template.Template
ShardTmpl *template.Template
symPattern = regexp.MustCompile("^[[:alpha:]][[:word:]-]*$")
first = regexp.MustCompile("[[:alpha:]]")
restIllegal = regexp.MustCompile("[^[:word:]-]+")
)
func InitTemplates(tmplDir string) error {
var err error
ingTmplPath := path.Join(tmplDir, ingTmplSrc)
shardTmplPath := path.Join(tmplDir, shardTmplSrc)
IngressTmpl, err = template.New(ingTmplSrc).
Funcs(fMap).ParseFiles(ingTmplPath)
if err != nil {
return err
}
ShardTmpl, err = template.New(shardTmplSrc).
Funcs(fMap).ParseFiles(shardTmplPath)
if err != nil {
return err
}
return nil
}
func replIllegal(ill []byte) []byte {
repl := []byte("_")
for _, b := range ill {
......
......@@ -30,7 +30,9 @@ package vcl
import (
"bytes"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"reflect"
"testing"
......@@ -46,6 +48,20 @@ func cmpGold(got []byte, goldfile string) (bool, error) {
return bytes.Equal(got, gold), nil
}
func TestMain(m *testing.M) {
tmplDir := ""
tmplEnv, exists := os.LookupEnv("TEMPLATE_DIR")
if exists {
tmplDir = tmplEnv
}
if err := InitTemplates(tmplDir); err != nil {
fmt.Printf("Cannot parse templates: %v\n", err)
os.Exit(-1)
}
code := m.Run()
os.Exit(code)
}
var teaSvc = Service{
Name: "tea-svc",
Addresses: []Address{
......
......@@ -6,42 +6,45 @@ suitable to your requirements. The YAML configurations in this folder
prepare a method of deployment, suitable for testing and editing as
needed.
## Namespace and ServiceAccount
These instructions target a setup in which the Ingress controller runs
in the ``kube-system`` namespace, and watches for Varnish Ingresses in
all namespaces. If you need to restrict your deployment to a single
namespace, see the [instructions for single-namespace
deployments](/examples/namespace) in the [``/examples``
folder](/examples).
Define the Namespace ``varnish-ingress``, and a ServiceAccount named
``varnish-ingress`` in that namespace:
```
$ kubectl apply -f ns-and-sa.yaml
```
**NOTE**: You can choose any Namespace, but currently all further
operations are restricted to that Namespace -- all resources described
in the following must be defined in the same Namespace. The controller
currently only reads information from the cluster API about Ingresses,
Services and so forth in the namespace of the pod in which it is
running; so all Varnish instances and every resource named in an
Ingress definition must be defined in that namespace. This is likely to
become more flexible in future development.
## RBAC
## ServiceAccount and RBAC
Apply [Role-based access
Define a ServiceAccount named ``varnish-ingress-controller`` and apply
[Role-based access
control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
(RBAC) by creating a ClusterRole named ``varnish-ingress`` that
permits the necessary API access for the Ingress controller, and a
ClusterRoleBinding that assigns the ClusterRole to the ServiceAccount
defined in the first step:
(RBAC) to permit the necessary API access for the Ingress controller:
```
$ kubectl apply -f rbac.yaml
$ kubectl apply -f serviceaccount.yaml
```
## Admin Secret
The controller uses Varnish's admin interface to manage the Varnish
instance, which requires authorization using a shared secret. This is
The controller uses Varnish's admin interface to manage Varnish
instances, which requires authorization using a shared secret. This is
prepared by defining a k8s Secret:
```
$ kubectl apply -f adm-secret.yaml
```
The ``metadata`` section MUST specify the label
``app: varnish-ingress``. The Ingress controller ignores all Secrets
that do not have this label.
The ``metadata.name`` field MUST match the secret name provided for
the Varnish deployment described below (``adm-secret`` in the
example):
```
metadata:
name: adm-secret
labels:
app: varnish-ingress
```
32 bytes of randomness are sufficient for the secret:
```
# This command can be used to generate the value in the data field of
......@@ -53,10 +56,8 @@ manifest in your deployment; create a new secret, for example using
the command shown above. The purpose of authorization is defeated if
everyone uses the same secret from an example.
**TO DO**: The ``metadata.name`` field of the Secret is currently
hard-wired to the value ``adm-secret``, and the key for the Secret (in
the ``data`` field) is hard-wired to ``admin``. The Secret must be
defined in the same Namespace defined above.
**TO DO**: The key for the Secret (in the ``data`` field) is
hard-wired to ``admin``.
## Deploy Varnish containers
......@@ -72,14 +73,10 @@ There are some requirements on the configuration of the Varnish
deployment that must be fulfilled in order for the Ingress to work
properly:
* Currently it must be defined in the same Namespace as defined
above.
* The ``serviceAccountName`` must match the ServiceAccount defined
above.
* The ``image`` must be specified as ``varnish-ingress/varnish``.
* ``spec.template`` must specify a ``label`` with a value that is
matched by the Varnish admin Service described below. In this
example:
* ``spec.template`` must specify the label ``app: varnish-ingress``.
The controller recognizes Services with this label as Varnish
deployments meant to implement Ingress:
```
template:
......@@ -103,7 +100,8 @@ properly:
**TO DO**: The ports are currently hard-wired to these port numbers.
A port for TLS access is currently not supported.
* ``volumeMounts`` and ``volumes`` must be specified so that the
Secret defined above is available to Varnish:
Secret defined above is available to Varnish. The ``secretName``
MUST match the name of the Secret (``adm-secret`` in the example):
```
volumeMounts:
......@@ -122,9 +120,8 @@ properly:
path: _.secret
```
**TO DO**: The ``mountPath`` is currently hard-wired to
``/var/run/varnish``. The ``secretName`` is hard-wired to
``adm-secret``, the ``key`` to ``admin``, and ``path`` to
``_.secret``.
``/var/run/varnish``. The ``key`` is hard-wired to ``admin``, and
``path`` to ``_.secret``.
* The liveness check should determine if the Varnish master process is
running. Since Varnish is started in the foreground as the entry
......@@ -209,7 +206,7 @@ Among these restrictions are:
unload one, otherwise it interferes with the implementation of
Ingress.
## Expose the Varnish HTTP port
## Expose the Varnish HTTP and admin ports
With a Deployment, you may choose a resource such as a LoadBalancer or
NodePort to create external access to Varnish's HTTP port. The present
......@@ -221,17 +218,25 @@ $ kubectl apply -f nodeport.yaml
The cluster then assigns an external port over which HTTP requests are
directed to Varnish instances.
## Varnish admin Service
The Service definition must fulfill some requirements:
The controller discovers Varnish instances that it manages by
obtaining the Endpoints for a headless Service that exposes the admin port:
* A port with the name ``varnishadm`` and whose ``targetPort``
matches the admin port defined above (named ``admport`` in the
sample Deployment for Varnish) MUST be specified:
```
$ kubectl apply -f varnish-adm-svc.yaml
ports:
- port: 6081
targetPort: 6081
protocol: TCP
name: varnishadm
```
This makes it possible for the controller to find the internal
addresses of Varnish instances and connect to their admin listeners.
The Service definition must fulfill some requirements:
The external port for admin is not used; this allows the
controller to identify the admin ports by name for the Endpoints
that realize the Varnish Service.
**TO DO**: The port number is hard-wired to 6081, and the
``port.name`` is hardwired to ``varnishadm``.
* The Service must be defined so that the cluster API will allow
Endpoints to be listed when the container is not ready (since
......@@ -255,38 +260,21 @@ spec:
In recent versions, both specifications are permitted in the YAML,
as in example YAML (the annotation is deprecated, but is not yet an
error).
* The ``selector`` must match the ``label`` given for the Varnish
deployment, as discussed above. In the present example:
* The ``selector`` must specify the label ``app: varnish-ingress``:
```
selector:
app: varnish-ingress
matchLabels:
app: varnish-ingress
```
**TO DO**: The Service must be defined in the Namespace of the pod in
which the controller runs. The ``name`` of the Service is currently
hard-wired to ``varnish-ingress-admin``. The port number is hard-wired
to 6081, and the ``port.name`` is hardwired to ``varnishadm``.
## Deploy the controller
This example uses a Deployment to run the controller container:
```
$ kubectl apply -f controller.yaml
```
The requirements are:
* The ``image`` must be ``varnish-ingress/controller``.
* ``spec.template.spec`` must specify the ``POD_NAMESPACE``
environment variable:
```
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
```
The ``image`` must be ``varnish-ingress/controller``.
It does *not* make sense to deploy more than one replica of the
controller. If there are more controllers, all of them will connect to
......@@ -294,10 +282,6 @@ the Varnish instances and send them the same administrative
commands. That is not an error (or there is a bug in the controller if
it does cause errors), but the extra work is superfluous.
**TO DO**: The controller currently only acts on Ingress, Service,
Endpoint and Secret definitions in the same Namespace as the pod in
which it is running.
### Controller options
Command-line options for the controller invocation can be set using the
......@@ -312,12 +296,6 @@ Command-line options for the controller invocation can be set using the
- -log-level=info
```
Currently supported options are:
* ``log-level`` to set the verbosity of logging. Possible values are
``panic``, ``fatal``, ``error``, ``warn``, ``info``, ``debug`` or
``trace``; default ``info``.
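
For illustration only, a fuller ``args`` sketch combining
``log-level`` with the ``-namespace`` and ``-templatedir`` options
introduced by this commit (the values shown are assumptions, not repo
defaults):
```
    args:
    # Verbosity of logging; the default is info.
    - -log-level=debug
    # Limit the controller to a single namespace (see examples/namespace).
    - -namespace=varnish-ingress
    # Directory containing the VCL templates; the path here is a
    # hypothetical placeholder. The TEMPLATE_DIR env variable can be
    # used instead.
    - -templatedir=/path/to/templates
```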
# Done
When these commands succeed:
......
......@@ -2,7 +2,8 @@ apiVersion: v1
kind: Secret
metadata:
name: adm-secret
namespace: varnish-ingress
labels:
app: varnish-ingress
type: Opaque
data:
admin: XGASQn0dd/oEsWh5WMXUpmKAKNZYnQGsHSmO/nHkv1w=
......@@ -2,7 +2,7 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: varnish-ingress-controller
namespace: varnish-ingress
namespace: kube-system
spec:
replicas: 1
selector:
......@@ -13,21 +13,12 @@ spec:
labels:
app: varnish-ingress-controller
spec:
serviceAccountName: varnish-ingress
serviceAccountName: varnish-ingress-controller
containers:
- image: varnish-ingress/controller
imagePullPolicy: IfNotPresent
name: varnish-ingress-controller
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args:
# log-level default is info, so this can be left out.
# Shown here to demonstrate setting options for the controller.
- -log-level=debug
- -log-level=info
......@@ -2,13 +2,21 @@ apiVersion: v1
kind: Service
metadata:
name: varnish-ingress
namespace: varnish-ingress
labels:
app: varnish-ingress
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
type: NodePort
ports:
- port: 6081
targetPort: 6081
protocol: TCP
name: varnishadm
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: varnish-ingress
publishNotReadyAddresses: true
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: varnish-ingress
name: varnish-ingress-controller
rules:
- apiGroups:
- ""
......@@ -51,12 +51,12 @@ rules:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: varnish-ingress
name: varnish-ingress-controller
subjects:
- kind: ServiceAccount
name: varnish-ingress
namespace: varnish-ingress
name: varnish-ingress-controller
namespace: kube-system
roleRef:
kind: ClusterRole
name: varnish-ingress
name: varnish-ingress-controller
apiGroup: rbac.authorization.k8s.io
apiVersion: v1
kind: ServiceAccount
metadata:
name: varnish-ingress-controller
namespace: kube-system
......@@ -2,9 +2,8 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: varnish
namespace: varnish-ingress
spec:
replicas: 1
replicas: 2
selector:
matchLabels:
app: varnish-ingress
......@@ -13,7 +12,6 @@ spec:
labels:
app: varnish-ingress
spec:
serviceAccountName: varnish-ingress
containers:
- image: varnish-ingress/varnish
imagePullPolicy: IfNotPresent
......@@ -21,9 +19,9 @@ spec:
ports:
- name: http
containerPort: 80
- name: k8sport
- name: k8s
containerPort: 8080
- name: admport
- name: varnishadm
containerPort: 6081
volumeMounts:
- name: adm-secret
......@@ -39,7 +37,7 @@ spec:
readinessProbe:
httpGet:
path: /ready
port: k8sport
port: k8s
args:
# varnishd command-line options
# In this example: varnishd -l 80M -p default_grace=10
......
......@@ -7,5 +7,8 @@ requirements.
* [The "cafe" example](/examples/hello), a "hello world" for Ingress.
* [Self-sharding Varnish cluster](/examples/self-sharding)
* Limiting the Ingress controller to
[a single namespace](/examples/namespace).
* [Self-sharding Varnish cluster](/examples/self-sharding)
([docs](/docs/self-sharding.md))
......@@ -13,18 +13,10 @@ $ kubectl create -f cafe.yaml
$ kubectl create -f cafe-ingress.yaml
```
To note:
* The Ingress has the annotation
``kubernetes.io/ingress.class: "varnish"``, identifying it as an
Ingress to be implemented by the Varnish controller (the Varnish
controller ignores any Ingress that does not have this annotation).
* Both the Ingress and the Services are created in the
``varnish-ingress`` namespace, as are the resources defined by the
configurations in the [``deploy/``](/deploy) folder. The controller
currently ignores all definitions that are not defined in the same
namespace as the Pod in which it is running (this is likely to
become more flexible in future development).
Note that the Ingress has the annotation
``kubernetes.io/ingress.class: "varnish"``, identifying it as an
Ingress to be implemented by the Varnish controller (the Varnish
controller ignores any Ingress that does not have this annotation).
The Ingress rules in the example require that:
......
......@@ -4,7 +4,6 @@ metadata:
name: cafe-ingress-varnish
annotations:
kubernetes.io/ingress.class: "varnish"
namespace: varnish-ingress
spec:
rules:
- host: cafe.example.com
......
......@@ -2,7 +2,6 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: coffee
namespace: varnish-ingress
spec:
replicas: 2
selector:
......@@ -23,7 +22,6 @@ apiVersion: v1
kind: Service
metadata:
name: coffee-svc
namespace: varnish-ingress
spec:
ports:
- port: 80
......@@ -37,7 +35,6 @@ apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: tea
namespace: varnish-ingress
spec:
replicas: 3
selector:
......@@ -58,7 +55,6 @@ apiVersion: v1
kind: Service
metadata:
name: tea-svc
namespace: varnish-ingress
labels:
spec:
ports:
......
# Restricting the Ingress controller to a single namespace
This folder contains an example of using the ``-namespace``
argument of the Ingress controller to limit its actions to Services,
Ingresses, Secrets and so on in a single namespace. The controller can
then be deployed in that namespace. You may need to do so, for
example, due to limits on authorization in the cluster.
The sample manifests use ``varnish-ingress`` as the example namespace,
and re-use the ["cafe" example](/examples/hello), with Services and
an Ingress defined in that namespace.
The following walks through the steps documented for a
[cluster-wide deployment of the Ingress controller](/deploy) and for
the [deployment of the example](/examples/hello). Only the details
specific to the single-namespace deployment are covered here; see the
linked docs for further information.
## Deployment
### Namespace and ServiceAccount
Define the Namespace ``varnish-ingress``, and a ServiceAccount named
``varnish-ingress`` in that namespace:
```
$ kubectl apply -f ns-and-sa.yaml
```
All further operations will be restricted to the Namespace.
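
A minimal sketch of such a manifest (the actual ``ns-and-sa.yaml`` in
the repo may differ in detail):
```
apiVersion: v1
kind: Namespace
metadata:
  name: varnish-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: varnish-ingress
  namespace: varnish-ingress
```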
### RBAC
The RBAC manifest grants the same authorizations as for the
[cluster-wide deployment](/deploy), but they are bound to the
ServiceAccount just defined. This allows the Ingress controller
to operate in the sample Namespace (only).
```
$ kubectl apply -f rbac.yaml
```
### Admin Secret, Varnish deployment and Nodeport
These are created by this sequence of commands:
```
$ kubectl apply -f adm-secret.yaml
$ kubectl apply -f varnish.yaml
$ kubectl apply -f nodeport.yaml
```
The manifests differ from their counterparts in the
[cluster-wide deployment instructions](/deploy) only in the namespace.
### Controller
To run the controller container:
```
$ kubectl apply -f controller.yaml
```
This manifest differs from a cluster-wide deployment in that:
* The ``serviceAccountName`` is assigned the ServiceAccount defined
above.
* The container is invoked with the ``-namespace`` option to limit its
work to the given namespace:
```
spec:
serviceAccountName: varnish-ingress
containers:
- args:
# Controller only acts on Ingresses, Services etc. in the
# given namespace.
- -namespace=varnish-ingress
```
The controller now only watches Services, Ingresses, Secrets and so
forth in the ``varnish-ingress`` namespace.
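
To confirm the restriction (a sketch, not part of the repo's
instructions), check the arguments the controller container was
started with, and follow its logs:
```
$ kubectl -n varnish-ingress get deploy varnish-ingress-controller \
    -o jsonpath='{.spec.template.spec.containers[0].args}'
$ kubectl -n varnish-ingress logs -l app=varnish-ingress-controller
```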
## Example
The "cafe" example can be deployed with:
```
$ kubectl apply -f cafe.yaml
$ kubectl apply -f cafe-ingress.yaml
```
Their manifests differ from those of the ["hello" example](/examples/hello)
in their restriction to the sample namespace.
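
Once the controller has synced the Ingress, the routing can be
exercised as in the "hello" example; a sketch, assuming a NodePort
setup and using a node address and the assigned HTTP port as
placeholders:
```
$ IP=$(kubectl get node \
    -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
$ PORT=$(kubectl -n varnish-ingress get svc varnish-ingress \
    -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
$ curl -H 'Host: cafe.example.com' http://$IP:$PORT/coffee
$ curl -H 'Host: cafe.example.com' http://$IP:$PORT/tea
```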
apiVersion: v1
kind: Secret
metadata:
name: adm-secret
namespace: varnish-ingress
labels:
app: varnish-ingress
type: Opaque
data:
admin: XGASQn0dd/oEsWh5WMXUpmKAKNZYnQGsHSmO/nHkv1w=
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cafe-ingress-varnish
namespace: varnish-ingress
annotations:
kubernetes.io/ingress.class: "varnish"
spec:
rules:
- host: cafe.example.com
http:
paths:
- path: /tea
backend:
serviceName: tea-svc
servicePort: 80
- path: /coffee
backend:
serviceName: coffee-svc
servicePort: 80
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: coffee
namespace: varnish-ingress
spec:
replicas: 2
selector:
matchLabels:
app: coffee
template:
metadata:
labels:
app: coffee
spec:
containers:
- name: coffee
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: coffee-svc
namespace: varnish-ingress
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: coffee
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: tea
namespace: varnish-ingress
spec:
replicas: 3
selector:
matchLabels:
app: tea
template:
metadata:
labels:
app: tea
spec:
containers:
- name: tea
image: nginxdemos/hello:plain-text
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: tea-svc
namespace: varnish-ingress
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: tea
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: varnish-ingress-controller
namespace: varnish-ingress
spec:
replicas: 1
selector:
matchLabels:
app: varnish-ingress-controller
template:
metadata:
labels:
app: varnish-ingress-controller
spec:
serviceAccountName: varnish-ingress
containers:
- image: varnish-ingress/controller
imagePullPolicy: IfNotPresent
name: varnish-ingress-controller
args:
# Controller only observes Ingresses, Services etc. in the
# given namespace.
- -namespace=varnish-ingress
apiVersion: v1
kind: Service
metadata:
name: varnish-ingress-admin
name: varnish-ingress
namespace: varnish-ingress
labels:
app: varnish-ingress
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
clusterIP: None
type: NodePort
ports:
- port: 6081
targetPort: 6081
protocol: TCP
name: varnishadm
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: varnish-ingress
publishNotReadyAddresses: true
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: varnish-ingress
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- pods
verbs:
- list
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- list
- watch
- get
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: varnish-ingress
subjects:
- kind: ServiceAccount
name: varnish-ingress
namespace: varnish-ingress
roleRef:
kind: ClusterRole
name: varnish-ingress
apiGroup: rbac.authorization.k8s.io
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: varnish
namespace: varnish-ingress
spec:
replicas: 2
selector:
matchLabels:
app: varnish-ingress
template:
metadata:
labels:
app: varnish-ingress
spec:
containers:
- image: varnish-ingress/varnish
imagePullPolicy: IfNotPresent
name: varnish-ingress
ports:
- name: http
containerPort: 80
- name: k8s
containerPort: 8080
- name: varnishadm
containerPort: 6081
volumeMounts:
- name: adm-secret
mountPath: "/var/run/varnish"
readOnly: true
livenessProbe:
exec:
command:
- /usr/bin/pgrep
- -P
- "0"
- varnishd
readinessProbe:
httpGet:
path: /ready
port: k8s
args:
# varnishd command-line options
# In this example: varnishd -l 80M -p default_grace=10
# These are default values for the given options in Varnish 6.1.
# Shown here to demonstrate setting options for Varnish.
- -l
- 80M
- -p
- default_grace=10
volumes:
- name: adm-secret
secret:
secretName: adm-secret
items:
- key: admin
path: _.secret