- 05 Jun, 2024 2 commits
  - Nils Goroll authored
    Unfortunately, we got out of sync, so I will need to push uplex/master
  - Nils Goroll authored
- 04 Jun, 2024 1 commit
  - Geoff Simmons authored
- 29 May, 2024 2 commits
- 28 May, 2024 1 commit
  - Tim Leers authored
- 21 May, 2024 1 commit
  - Tim Leers authored
- 17 May, 2024 1 commit
  - Tim Leers authored
    chore: apply lint rules to ingress.go
    See merge request uplex/varnish/k8s-ingress!15
- 14 May, 2024 1 commit
  - Tim Leers authored
- 03 May, 2024 1 commit
  - Nils Goroll authored
- 20 Apr, 2024 1 commit
  - Nils Goroll authored
- 19 Apr, 2024 5 commits
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
    Both are now obtained from the repo at pkg.uplex.de, and differ only in:
    - the version, dist and pool parameters of the source repo, as
      expressed in /etc/apt/sources.list, and
    - the list of VMODs to install.
    We now have one Dockerfile for the two containers, and the four
    parameters listed above are passed into the build as build-args, using
    values set in the Makefile. This also has the effect of changing the
    base image for klarlack to Debian slim, updated to the currently most
    recent version. The Dockerfile is now simpler than the previous version
    for klarlack, in that we set the version once in the repo path, rather
    than specify "=${VERSION}" for Varnish and each VMOD in the apt install
    invocation.
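    A minimal sketch of how such a parameterized build might look; the ARG
    names, repo path layout, and package names here are illustrative
    assumptions, not the actual Dockerfile:

        # Values for these args are supplied by the Makefile (--build-arg).
        ARG DIST=bookworm-slim
        FROM debian:${DIST}
        ARG VERSION
        ARG REPO_DIST
        ARG POOL
        ARG VMODS
        # The version appears once, in the repo path, so the install line
        # needs no "=${VERSION}" pinning for Varnish or the VMODs.
        RUN echo "deb https://pkg.uplex.de/varnish/${VERSION} ${REPO_DIST} ${POOL}" \
                > /etc/apt/sources.list \
            && apt-get update \
            && apt-get install -y --no-install-recommends varnish ${VMODS} \
            && rm -rf /var/lib/apt/lists/*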
- 18 Apr, 2024 1 commit
  - Tim Leers authored
- 16 Apr, 2024 2 commits
  - Geoff Simmons authored
  - Geoff Simmons authored
- 12 Apr, 2024 5 commits
  - Geoff Simmons authored
    This almost always happens because the API server's current version of
    the Ingress has a newer ResourceVersion than the controller's cached
    copy. Sometimes it happens during e2e tests, when the Ingress has
    already been deleted but the worker queue has not caught up. Make one
    attempt to fetch a fresh copy of the Ingress from the API server, and
    if necessary, update the LoadBalancer field for the fresh version.
    Inspired by the nginx ingress controller's solution to this problem.
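    A hedged sketch of this one-retry-on-conflict pattern with client-go;
    the function name and surrounding wiring are assumptions, not the
    controller's actual code:

        import (
            "context"

            netv1 "k8s.io/api/networking/v1"
            apierrors "k8s.io/apimachinery/pkg/api/errors"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
        )

        func updateLoadBalancer(ctx context.Context, client kubernetes.Interface,
                ing *netv1.Ingress, lb netv1.IngressLoadBalancerStatus) error {

            ing.Status.LoadBalancer = lb
            _, err := client.NetworkingV1().Ingresses(ing.Namespace).
                UpdateStatus(ctx, ing, metav1.UpdateOptions{})
            if !apierrors.IsConflict(err) {
                return err // nil on success, or a non-conflict error
            }
            // Our cached copy was stale: make one attempt with a fresh copy.
            fresh, err := client.NetworkingV1().Ingresses(ing.Namespace).
                Get(ctx, ing.Name, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return nil // already deleted, as seen in e2e tests
            }
            if err != nil {
                return err
            }
            fresh.Status.LoadBalancer = lb
            _, err = client.NetworkingV1().Ingresses(fresh.Namespace).
                UpdateStatus(ctx, fresh, metav1.UpdateOptions{})
            return err
        }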
  - Geoff Simmons authored
  - Geoff Simmons authored
    $ vikingctrl --help
    [...]
      -maxSyncRetries uint
            maximum number of retries for cluster synchronizations that
            fail due to recoverable errors, or because necessary
            information is missing. 0 for unlimited retries (default 0)
    [...]
    IOW, set a maximum number of re-queues for SyncIncomplete and
    SyncRecoverable type failures, unlimited by default. While here, add
    some missing function parameter docs.
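    A minimal sketch of how such a cap might be enforced with a client-go
    workqueue; the helper and flag wiring are assumptions:

        import "k8s.io/client-go/util/workqueue"

        // requeue retries a failed sync until the configured maximum;
        // maxSyncRetries == 0 means unlimited, matching the flag default.
        func requeue(queue workqueue.RateLimitingInterface, key interface{},
                maxSyncRetries uint) {

            if maxSyncRetries == 0 ||
                    queue.NumRequeues(key) < int(maxSyncRetries) {
                queue.AddRateLimited(key)
                return
            }
            queue.Forget(key) // give up after the maximum is reached
        }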
  - Tim Leers authored
  - Tim Leers authored
- 11 Apr, 2024 6 commits
  - Tim Leers authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
    This reverts commit a42ae714. kind could not be started in GitLab CI.
  - Geoff Simmons authored
    This sets the k8s version for both server and client to 1.29.1.
  - Geoff Simmons authored
- 10 Apr, 2024 10 commits
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
    That is, when the addresses are not stored in the controller's
    internal data structures.
  - Geoff Simmons authored
  - Geoff Simmons authored
  - Geoff Simmons authored
    Non-timeout network errors are not fatal when an instance is deleted,
    for the same reasons given for removing a haproxy instance in the
    prior commit. It is also not fatal if the admin Secret is missing when
    we remove a varnish instance, or when setting its config to the
    NotReady VCL. As with the reasons given for haproxy in prior commits:
    in such cases we assume that the Secret has been deleted in an
    undeployment operation, so it's not necessary to set a "not
    configured" state, since the instance will be removed imminently.
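    A hedged sketch of the missing-Secret case; the lister variable and
    all names here are hypothetical, not the controller's actual code:

        // import apierrors "k8s.io/apimachinery/pkg/api/errors"
        secret, err := secretLister.Secrets(namespace).Get(adminSecretName)
        if apierrors.IsNotFound(err) {
            // Assume the Secret was deleted in an undeployment; the
            // instance goes away imminently, so skip "not configured".
            return nil
        }
        if err != nil {
            return err
        }
        // ... use secret to authenticate the admin connection ...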
  - Geoff Simmons authored
    While here, make sure that dataplane transactions get deleted,
    including after errors.
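    One idiomatic way to guarantee that in Go is a deferred cleanup; the
    dataplane client type and method names here are hypothetical:

        tx, err := dpClient.StartTransaction(version)
        if err != nil {
            return err
        }
        defer func() {
            // Runs on every return path, including error returns below.
            if delErr := dpClient.DeleteTransaction(tx.ID); delErr != nil {
                log.Printf("delete transaction %s: %v", tx.ID, delErr)
            }
        }()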
  - Geoff Simmons authored
  - Geoff Simmons authored
    We assume that haproxy has been stopped (and the Pod may have been
    deleted). Go's "permanent network error" distinction (net.Error's
    Temporary method) is now deprecated, so we limit this distinction to
    timeout errors.
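    A minimal sketch of that classification, assuming err came from a
    dataplane request made during instance removal:

        // import ("errors"; "net")
        var netErr net.Error
        if errors.As(err, &netErr) && netErr.Timeout() {
            return err // timeouts are still treated as fatal
        }
        // Any other network error: assume haproxy was stopped and the
        // Pod possibly deleted; not fatal during removal.
        return nil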