- 25 Nov, 2020 (1 commit)
  - Geoff Simmons authored
- 24 Nov, 2020 (3 commits)
  - Geoff Simmons authored
  - Geoff Simmons authored
    The helm charts already do this; the change is for the kubectl/yaml examples, which have been using NodePort. NodePort has been unnecessary since we automated testing by using port-forward to the Service. This brings the two deployment methods more in line. We will add tests for LoadBalancer and NodePort Services. The docs are now even further out of sync with the actual example/test code, since they still reference the NodePort type.
  - Geoff Simmons authored
    Previously the status was updated only when the sync implementing an Ingress succeeded. Now the status is also updated when the addresses (IPs or hosts) assigned to the Service change.
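The status-update behavior described in the commit above can be sketched roughly as follows. This is an illustrative Python sketch only: the controller itself is written in Go, and the function names and dict-shaped status entries here are hypothetical, loosely modeled on the Kubernetes Ingress `status.loadBalancer.ingress` structure.

```python
# Illustrative sketch (hypothetical names): build the loadBalancer
# ingress entries for an Ingress status from the addresses assigned
# to a Service, and decide whether a status sync is needed.

def load_balancer_ingress(ips, hostnames):
    """Build status entries from a Service's assigned IPs and hostnames."""
    entries = [{"ip": ip} for ip in sorted(ips)]
    entries += [{"hostname": h} for h in sorted(hostnames)]
    return entries

def needs_status_update(current_entries, ips, hostnames):
    """True if the Service's addresses changed, so the Ingress status
    must be synced even though the Ingress itself was already
    implemented successfully."""
    return current_entries != load_balancer_ingress(ips, hostnames)

if __name__ == "__main__":
    status = load_balancer_ingress({"192.0.2.10"}, set())
    print(status)
    # A new IP assigned to the Service now triggers an update:
    print(needs_status_update(status, {"192.0.2.10", "192.0.2.11"}, set()))
```

The point of the change is captured by `needs_status_update`: the trigger is a change in the Service's addresses, not only a successful Ingress sync.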
- 20 Nov, 2020 (1 commit)
  - Geoff Simmons authored
    Also, the Service info sources for the Ingress loadBalancer status depend on the Service type, largely following the nginx ingress implementation.
- 15 Oct, 2020 (2 commits)
  - Geoff Simmons authored
  - Geoff Simmons authored
    The addresses for this array are taken from the public names and/or IPs in the spec of the Service(s) that expose the Ingress. These are identified as Services that:
    - are in the same namespace as the admin Service
    - have the label viking.uplex.de/svc=public
    - have the same selectors as the admin Service
    - are of type ClusterIP, NodePort, or LoadBalancer
    The label viking.uplex.de/svc is required only if the Ingress status update is needed, for example to use a tool like ArgoCD, or if the cloud provider requires it. Set the label in the Service template of the viking-service chart.
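The four matching criteria above can be sketched as a single predicate. This is an illustrative Python sketch, not the controller's actual code: the dict-based Service representation is hypothetical, and only the criteria themselves (namespace, label, selectors, type) come from the commit message.

```python
# Illustrative sketch: is this Service an address source for the
# Ingress status? The Service dicts are hypothetical stand-ins for
# the Kubernetes Service objects the controller actually inspects.

PUBLIC_LABEL = "viking.uplex.de/svc"          # label from the commit message
ALLOWED_TYPES = {"ClusterIP", "NodePort", "LoadBalancer"}

def is_public_service(svc, admin_svc):
    return (
        svc["namespace"] == admin_svc["namespace"]       # same namespace
        and svc["labels"].get(PUBLIC_LABEL) == "public"  # viking.uplex.de/svc=public
        and svc["selector"] == admin_svc["selector"]     # same selectors
        and svc["type"] in ALLOWED_TYPES                 # allowed Service type
    )

if __name__ == "__main__":
    admin = {"namespace": "viking", "labels": {},
             "selector": {"app": "varnish"}, "type": "ClusterIP"}
    public = {"namespace": "viking",
              "labels": {"viking.uplex.de/svc": "public"},
              "selector": {"app": "varnish"}, "type": "LoadBalancer"}
    print(is_public_service(public, admin))
```

All four criteria must hold; dropping the label, changing the namespace or selectors, or using a type such as ExternalName excludes the Service.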
- 14 Oct, 2020 (1 commit)
  - Geoff Simmons authored
    No longer needed, since we now set the Pods to ready regardless of whether an Ingress has been configured.
- 13 Oct, 2020 (8 commits)
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
- 12 Oct, 2020 (11 commits)
  - Geoff Simmons authored
    The controller executable is the only artifact needed for its container. The templatedir CLI argument is removed.
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
    This will be done incrementally for all templates, making the controller executable wholly self-contained.
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
- 08 Oct, 2020 (1 commit)
  - Geoff Simmons authored
- 07 Oct, 2020 (1 commit)
  - Geoff Simmons authored
- 02 Oct, 2020 (9 commits)
  - Geoff Simmons authored
    Closes #38
  - Geoff Simmons authored
  - Geoff Simmons authored
    Service spec.ports maps to the public TLS port, initially just for kubectl/yaml deployments. Addresses #38
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
    Addresses #38
  - Geoff Simmons authored
  - Geoff Simmons authored
    Addresses #38
  - Geoff Simmons authored
    Initially for kubectl/yaml deployments. While here, ensure that /etc/varnish is world-readable, since it has been reported that it might not be. Evidently not always the case, but setting the permissions is never wrong. Addresses #38
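The spec.ports mapping to the public TLS port mentioned above might look like the following sketch on the kubectl/yaml side. This is illustrative only: the Service name, selector, and target port name are hypothetical, not taken from the project's actual manifests; the viking.uplex.de/svc=public label is the one described in the 15 Oct commit.

```yaml
# Illustrative sketch only: a Service whose spec.ports maps to the
# public TLS port. Name, selector, and targetPort are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: varnish-ingress            # hypothetical name
  labels:
    viking.uplex.de/svc: public
spec:
  type: ClusterIP
  selector:
    app: varnish                   # hypothetical selector
  ports:
    - name: tls
      port: 443                    # public TLS port
      targetPort: tls
```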
- 04 Sep, 2020 (2 commits)
  - Nils Goroll authored
    In vcl_backend_response {} and vcl_backend_error {} we test whether `bereq.backend == vk8s_cluster.backend(resolve=LAZY)`. With the current code, this condition will never evaluate to true, because `resolve=NOW` makes the shard director return the "real" (VBE) backend. With this patch the net effect, the chosen cluster Varnish backend, should still be the same, except for the rare race in which the backend becomes unhealthy between the time the VCL code executes and the time the backend connection is made. We already account for this race by setting `req.hash_ignore_busy` in vcl_recv {}.
  - Geoff Simmons authored
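The check described in that commit message can be sketched in VCL. Only the comparison itself, `bereq.backend == vk8s_cluster.backend(resolve=LAZY)`, is quoted from the message; the surrounding subroutine body is an illustrative sketch, and the same test applies in vcl_backend_error {}.

```vcl
# Sketch of the condition described above (comparison quoted from the
# commit message; the rest is illustrative).
sub vcl_backend_response {
    if (bereq.backend == vk8s_cluster.backend(resolve=LAZY)) {
        # With resolve=LAZY the shard director returns the director
        # object itself, so this comparison against bereq.backend can
        # match. With resolve=NOW it would resolve to the "real" (VBE)
        # backend and the condition could never evaluate to true.
    }
}
```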