Merge branch v3.6 into master

kevinpollet
2026-01-29 15:03:42 +01:00
60 changed files with 471 additions and 1328 deletions
+94
@@ -20,3 +20,97 @@ For detailed steps tailored to your environment, follow the guide for your platf
- [Kubernetes](./kubernetes/basic.md)
- [Docker](./docker/basic.md)
- [Docker Swarm](./swarm/basic.md)
## Advanced Use Cases
### Exposing gRPC Services
Traefik Proxy supports gRPC applications without requiring specific configuration. You can expose gRPC services using either HTTP (h2c) or HTTPS.
??? example "Using HTTP (h2c)"
For unencrypted gRPC communication, configure your service to use the `h2c://` protocol scheme:
```yaml
http:
routers:
grpc-router:
service: grpc-service
rule: Host(`grpc.example.com`)
services:
grpc-service:
loadBalancer:
servers:
- url: h2c://backend:8080
```
!!! note
For providers with labels (Docker, Kubernetes), specify the scheme using:
`traefik.http.services.<service-name>.loadbalancer.server.scheme=h2c`
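For instance, a minimal Docker Compose sketch (the service name, image, and port are illustrative assumptions) could declare the scheme through labels:
```yaml
services:
  grpc-backend:
    image: my-grpc-image # illustrative image
    labels:
      - "traefik.http.routers.grpc-router.rule=Host(`grpc.example.com`)"
      - "traefik.http.routers.grpc-router.service=grpc-service"
      - "traefik.http.services.grpc-service.loadbalancer.server.scheme=h2c"
      - "traefik.http.services.grpc-service.loadbalancer.server.port=8080"
```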
??? example "Using HTTPS"
For encrypted gRPC communication, use standard HTTPS URLs. Traefik will use HTTP/2 over TLS to communicate with your gRPC backend:
```yaml
http:
routers:
grpc-router:
service: grpc-service
rule: Host(`grpc.example.com`)
tls: {}
services:
grpc-service:
loadBalancer:
servers:
- url: https://backend:8080
```
Traefik handles the protocol negotiation automatically. Configure TLS certificates for your backends using [ServersTransport](../reference/routing-configuration/http/load-balancing/serverstransport.md) if needed.
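If your gRPC backends present certificates that Traefik must trust (for example, self-signed certificates), a hedged sketch of such a ServersTransport could look like this (the transport name and CA path are illustrative):
```yaml
http:
  serversTransports:
    grpc-transport:
      rootCAs:
        - /path/to/backend-ca.cert
  services:
    grpc-service:
      loadBalancer:
        serversTransport: grpc-transport
        servers:
          - url: https://backend:8080
```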
### Exposing WebSocket Services
Traefik Proxy supports WebSocket (WS) and WebSocket Secure (WSS) connections out of the box. No special configuration is required beyond standard HTTP routing.
??? example "Basic WebSocket"
Configure a router and service pointing to your WebSocket server. Traefik automatically detects and handles the WebSocket upgrade:
```yaml
http:
routers:
websocket-router:
rule: Host(`ws.example.com`)
service: websocket-service
services:
websocket-service:
loadBalancer:
servers:
- url: http://websocket-backend:8000
```
??? example "WebSocket Secure (WSS)"
For encrypted WebSocket connections, enable TLS on your router. Clients connect using `wss://`, while backends can use either encrypted or unencrypted connections:
```yaml
http:
routers:
websocket-secure-router:
rule: Host(`wss.example.com`)
service: websocket-service
tls: {}
services:
websocket-service:
loadBalancer:
servers:
- url: http://websocket-backend:8000 # SSL termination at Traefik
# OR
# - url: https://websocket-backend:8443 # End-to-end encryption
```
Traefik preserves WebSocket headers including `Origin`, `Sec-WebSocket-Key`, and `Sec-WebSocket-Version`. Use the [Headers middleware](../reference/routing-configuration/http/middlewares/headers.md) if you need to modify headers for origin checking or other requirements.
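As an illustration, a hedged sketch of such a Headers middleware in the file provider (the middleware name and origin value are assumptions) could look like this:
```yaml
http:
  middlewares:
    ws-headers:
      headers:
        customRequestHeaders:
          Origin: "https://allowed-origin.com"
  routers:
    websocket-router:
      rule: Host(`ws.example.com`)
      middlewares:
        - ws-headers
      service: websocket-service # service defined in the example above
```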
+1 -1
@@ -13,7 +13,7 @@ The GrpcWeb middleware converts gRPC Web requests to HTTP/2 gRPC requests before
!!! tip
Please note that Traefik needs to communicate using gRPC with the backends (h2c or HTTP/2 over TLS).
Check out the [gRPC](../../user-guides/grpc.md) user guide for more details.
Check out [Exposing gRPC Services](../../expose/overview.md#exposing-grpc-services) for more details.
## Configuration Examples
+38
@@ -636,6 +636,10 @@ The `Headers` and `HeadersRegexp` matchers have been renamed to `Header` and `He
`PathPrefix` no longer uses regular expressions to match path prefixes.
`Path` and `PathPrefix` no longer support path parameter placeholders (e.g., `{id}`, `{name}`).
Routes using placeholders like ``Path(`/route/{id}`)`` will not match in v3 syntax.
Use `PathRegexp` instead for dynamic path segments.
`QueryRegexp` has been introduced to match query values using a regular expression.
`HeaderRegexp`, `HostRegexp`, `PathRegexp`, `QueryRegexp`, and `HostSNIRegexp` matchers now use the [Go regexp syntax](https://golang.org/pkg/regexp/syntax/).
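As an illustration, a hypothetical rule (not taken from the migration guide) matching a query value with a regular expression could look like this:
```yaml
match: Host(`example.com`) && QueryRegexp(`mobile`, `^(true|yes)$`)
```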
@@ -716,6 +720,40 @@ http:
ruleSyntax = "v2"
```
##### Migrate Path Placeholders to PathRegexp
In v2, `Path` and `PathPrefix` supported path parameter placeholders like `{id}` for matching dynamic path segments.
In v3, this is no longer supported and `PathRegexp` should be used instead.
??? example "Migrating a route with path placeholders"
v2 syntax (no longer works in v3):
```yaml
match: Host(`example.com`) && Path(`/products/{id}`)
```
v3 syntax using `PathRegexp`:
```yaml
match: Host(`example.com`) && PathRegexp(`^/products/[^/]+$`)
```
For more complex patterns with multiple placeholders:
v2 syntax:
```yaml
match: Host(`example.com`) && Path(`/users/{userId}/orders/{orderId}`)
```
v3 syntax:
```yaml
# Matches any non-slash characters:
match: Host(`example.com`) && PathRegexp(`^/users/[^/]+/orders/[^/]+$`)
# Or, restricted to alphanumeric characters, hyphens, and underscores:
match: Host(`example.com`) && PathRegexp(`^/users/[a-zA-Z0-9_-]+/orders/[a-zA-Z0-9_-]+$`)
```
### IPWhiteList
In v3, we renamed the `IPWhiteList` middleware to `IPAllowList` without changing its configuration.
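A minimal example of the rename (the middleware name and source range are illustrative):
v2 syntax:
```yaml
http:
  middlewares:
    my-allowlist:
      ipWhiteList:
        sourceRange:
          - 192.168.1.0/24
```
v3 syntax:
```yaml
http:
  middlewares:
    my-allowlist:
      ipAllowList:
        sourceRange:
          - 192.168.1.0/24
```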
+1 -1
@@ -363,6 +363,6 @@ providers:
## Full Example
For additional information, refer to the [full example](../user-guides/crd-acme/index.md) with Let's Encrypt.
For additional information on exposing services with Kubernetes, refer to the [Kubernetes guide](../expose/kubernetes/basic.md).
{% include-markdown "includes/traefik-for-business-applications.md" %}
@@ -1,19 +1,18 @@
---
title: "Traefik FastProxy Experimental Configuration"
description: "This section of the Traefik Proxy documentation explains how to use the new FastProxy option."
description: "This section of the Traefik Proxy documentation explains how to use the new FastProxy install configuration option."
---
# Traefik FastProxy Experimental Configuration
## Overview
This guide provides instructions on how to configure and use the new experimental `fastProxy` static configuration option in Traefik.
The `fastProxy` option introduces a high-performance reverse proxy designed to enhance the performance of routing.
This guide provides instructions on how to configure and use the new experimental `fastProxy` install configuration option in Traefik. The `fastProxy` option introduces a high-performance reverse proxy designed to enhance the performance of routing.
!!! info "Limitations"
Please note that the new fast proxy implementation does not work with HTTP/2.
This means that when a H2C or HTTPS request with [HTTP2 enabled](../routing/services/index.md#disablehttp2) is sent to a backend, the fallback proxy is the regular one.
This means that when an H2C or HTTPS request with [HTTP2 enabled](../../routing-configuration/http/load-balancing/service.md#disablehttp2) is sent to a backend, Traefik falls back to the regular proxy.
Additionally, observability features such as tracing and OTEL semconv metrics are not yet supported.
@@ -22,10 +21,10 @@ The `fastProxy` option introduces a high-performance reverse proxy designed to e
The `fastProxy` option is currently experimental and subject to change in future releases.
Use with caution in production environments.
### Enabling FastProxy
## Enabling FastProxy
The fastProxy option is a static configuration parameter.
To enable it, you need to configure it in your Traefik static configuration
The `fastProxy` option is an install configuration parameter.
To enable it, you need to configure it in your Traefik install configuration:
```yaml tab="File (YAML)"
experimental:
@@ -0,0 +1,43 @@
---
title: "Traefik Plugins Experimental Configuration"
description: "This section of the Traefik Proxy documentation explains how to use the new Plugins install configuration option."
---
# Traefik Plugins Experimental Configuration
## Overview
This guide provides instructions on how to configure and use the new experimental `plugins` install configuration option in Traefik. The `plugins` option introduces a system to extend Traefik capabilities with custom middlewares and providers.
!!! warning "Experimental"
The `plugins` option is currently experimental and subject to change in future releases.
Use with caution in production environments.
## Enabling Plugins
The `plugins` option is an install configuration parameter.
To enable a plugin, you need to define it in your Traefik install configuration:
```yaml tab="File (YAML)"
experimental:
plugins:
plugin-name: # The name of the plugin in the routing configuration
moduleName: "github.com/github-organization/github-repository" # The plugin module name
version: "vX.XX.X" # The version to use
```
```toml tab="File (TOML)"
[experimental.plugins.plugin-name]
moduleName = "github.com/github-organization/github-repository" # The plugin module name
version = "vX.XX.X" # The version to use
```
```bash tab="CLI"
# The plugin module name
# plugin-name is the name of the plugin in the routing configuration
--experimental.plugins.plugin-name.modulename=github.com/github-organization/github-repository
--experimental.plugins.plugin-name.version=vX.XX.X # The version to use
```
To learn more about how to add a new plugin to a Traefik instance, please refer to the [developer documentation](https://plugins.traefik.io/install).
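Once installed, the plugin can be referenced from the routing configuration, for example as a middleware. A minimal sketch (the middleware name and option are illustrative and depend on the plugin):
```yaml
http:
  middlewares:
    my-plugin-middleware:
      plugin:
        plugin-name:
          someOption: "value" # options are defined by the plugin itself
```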
@@ -128,6 +128,6 @@ See the dedicated section in [routing](../../../../routing/providers/kubernetes-
## Full Example
For additional information, refer to the [full example](../../../../user-guides/crd-acme/index.md) with Let's Encrypt.
For additional information on exposing services with Kubernetes, refer to the [Kubernetes guide](../../../../expose/kubernetes/basic.md).
{% include-markdown "includes/traefik-for-business-applications.md" %}
@@ -8,7 +8,7 @@ The `grpcWeb` middleware converts gRPC Web requests to HTTP/2 gRPC requests befo
!!! tip
Please note that Traefik needs to communicate using gRPC with the backends (h2c or HTTP/2 over TLS).
Check out the [gRPC](../../../../user-guides/grpc.md) user guide for more details.
Check out [Exposing gRPC Services](../../../../expose/overview.md#exposing-grpc-services) for more details.
## Configuration Examples
@@ -2045,6 +2045,6 @@ If the ServersTransportTCP CRD is defined in another provider the cross-provider
## Further
Also see the [full example](../../user-guides/crd-acme/index.md) with Let's Encrypt.
For additional information on exposing services with Kubernetes, see the [Kubernetes guide](../../expose/kubernetes/basic.md).
{% include-markdown "includes/traefik-for-business-applications.md" %}
-183
@@ -1,183 +0,0 @@
---
title: "Integration with cert-manager"
description: "Learn how to use cert-manager certificates with Traefik Proxy for your routers. Read the technical documentation."
---
# cert-manager
Provision TLS Certificate for Traefik Proxy with cert-manager on Kubernetes
{: .subtitle }
## Pre-requisites
To obtain certificates from cert-manager that can be used in Traefik Proxy, you will need to:
1. Have cert-manager properly configured
2. Have Traefik Proxy configured
The certificates can then be used in an Ingress / IngressRoute / HTTPRoute.
## Example with ACME and HTTP challenge
!!! example "ACME issuer for HTTP challenge"
```yaml tab="Issuer"
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: acme
spec:
acme:
# Production server is on https://acme-v02.api.letsencrypt.org/directory
# Use staging by default.
server: https://acme-staging-v02.api.letsencrypt.org/directory
privateKeySecretRef:
name: acme
solvers:
- http01:
ingress:
ingressClassName: traefik
```
```yaml tab="Certificate"
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: whoami
namespace: traefik
spec:
secretName: domain-tls # <=== Name of secret where the generated certificate will be stored.
dnsNames:
- "domain.example.com"
issuerRef:
name: acme
kind: Issuer
```
Let's see now how to use it with the various Kubernetes providers of Traefik Proxy.
The enabled providers can be seen on the [dashboard](../reference/install-configuration/api-dashboard.md) of Traefik Proxy and also in the INFO logs when Traefik Proxy starts.
### With an Ingress
To use this certificate with an Ingress, the [Kubernetes Ingress](../providers/kubernetes-ingress.md) provider has to be enabled.
!!! info Traefik Helm Chart
This provider is enabled by default in the Traefik Helm Chart.
!!! example "Route with this Certificate"
```yaml tab="Ingress"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: domain
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
rules:
- host: domain.example.com
http:
paths:
- path: /
pathType: Exact
backend:
service:
name: domain-service
port:
number: 80
tls:
- secretName: domain-tls # <=== Use the name defined in Certificate resource.
```
### With an IngressRoute
To use this certificate with an IngressRoute, the [Kubernetes CRD](../providers/kubernetes-crd.md) provider has to be enabled.
!!! info Traefik Helm Chart
This provider is enabled by default in the Traefik Helm Chart.
!!! example "Route with this Certificate"
```yaml tab="IngressRoute"
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: domain
spec:
entryPoints:
- websecure
routes:
- match: Host(`domain.example.com`)
kind: Rule
services:
- name: domain-service
port: 80
tls:
secretName: domain-tls # <=== Use the name defined in Certificate resource.
```
### With an HTTPRoute
To use this certificate with an HTTPRoute, the [Kubernetes Gateway](../routing/providers/kubernetes-gateway.md) provider has to be enabled.
!!! info Traefik Helm Chart
This provider is disabled by default in the Traefik Helm Chart.
!!! example "Route with this Certificate"
```yaml tab="HTTPRoute"
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: domain-gateway
spec:
gatewayClassName: traefik
listeners:
- name: websecure
port: 8443
protocol: HTTPS
hostname: domain.example.com
tls:
certificateRefs:
- name: domain-tls # <==== Use the name defined in Certificate resource.
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: domain
spec:
parentRefs:
- name: domain-gateway
hostnames:
- domain.example.com
rules:
- matches:
- path:
type: Exact
value: /
backendRefs:
- name: domain-service
port: 80
weight: 1
```
## Troubleshooting
There are multiple event sources available to investigate when using cert-manager:
1. Kubernetes events in `Certificate` and `CertificateRequest` resources
2. cert-manager logs
3. Dashboard and/or (debug) logs from Traefik Proxy
cert-manager documentation provides a [detailed guide](https://cert-manager.io/docs/troubleshooting/) on how to troubleshoot a certificate request.
{% include-markdown "includes/traefik-for-business-applications.md" %}
@@ -1,32 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: traefik
spec:
ports:
- protocol: TCP
name: web
port: 8000
- protocol: TCP
name: admin
port: 8080
- protocol: TCP
name: websecure
port: 4443
selector:
app: traefik
---
apiVersion: v1
kind: Service
metadata:
name: whoami
spec:
ports:
- protocol: TCP
name: web
port: 80
selector:
app: whoami
@@ -1,74 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: default
name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: default
name: traefik
labels:
app: traefik
spec:
replicas: 1
selector:
matchLabels:
app: traefik
template:
metadata:
labels:
app: traefik
spec:
serviceAccountName: traefik-ingress-controller
containers:
- name: traefik
image: traefik:v3.6
args:
- --api.insecure
- --accesslog
- --entryPoints.web.Address=:8000
- --entryPoints.websecure.Address=:4443
- --providers.kubernetescrd
- --certificatesresolvers.myresolver.acme.tlschallenge
- --certificatesresolvers.myresolver.acme.email=foo@you.com
- --certificatesresolvers.myresolver.acme.storage=acme.json
# Please note that this is the staging Let's Encrypt server.
# Once you get things working, you should remove that whole line altogether.
- --certificatesresolvers.myresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
ports:
- name: web
containerPort: 8000
- name: websecure
containerPort: 4443
- name: admin
containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: default
name: whoami
labels:
app: whoami
spec:
replicas: 2
selector:
matchLabels:
app: whoami
template:
metadata:
labels:
app: whoami
spec:
containers:
- name: whoami
image: traefik/whoami
ports:
- name: web
containerPort: 80
@@ -1,32 +0,0 @@
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: simpleingressroute
namespace: default
spec:
entryPoints:
- web
routes:
- match: Host(`your.example.com`) && PathPrefix(`/notls`)
kind: Rule
services:
- name: whoami
port: 80
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: ingressroutetls
namespace: default
spec:
entryPoints:
- websecure
routes:
- match: Host(`your.example.com`) && PathPrefix(`/tls`)
kind: Rule
services:
- name: whoami
port: 80
tls:
certResolver: myresolver
@@ -1,17 +0,0 @@
---
apiVersion: traefik.io/v1alpha1
kind: TLSOption
metadata:
name: default
namespace: default
spec:
minVersion: VersionTLS12
cipherSuites:
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 # TLS 1.2
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 # TLS 1.2
- TLS_AES_256_GCM_SHA384 # TLS 1.3
- TLS_CHACHA20_POLY1305_SHA256 # TLS 1.3
curvePreferences:
- CurveP521
- CurveP384
sniStrict: true
-134
@@ -1,134 +0,0 @@
---
title: "Traefik CRD TLS Documentation"
description: "Learn how to use Traefik Proxy w/ an IngressRoute Custom Resource Definition (CRD) for Kubernetes, and TLS with Let's Encrypt. Read the technical documentation."
---
# Traefik & CRD & Let's Encrypt
Traefik with an IngressRoute Custom Resource Definition for Kubernetes, and TLS Through Let's Encrypt.
{: .subtitle }
This document is intended to be a fully working example demonstrating how to set up Traefik in [Kubernetes](https://kubernetes.io),
with the dynamic configuration coming from the [IngressRoute Custom Resource](../../providers/kubernetes-crd.md),
and TLS setup with [Let's Encrypt](https://letsencrypt.org).
However, for the sake of simplicity, we're using [k3s](https://github.com/rancher/k3s) docker image for the Kubernetes cluster setup.
Please note that for this setup, given that we're going to use ACME's TLS-ALPN-01 challenge, the host you'll be running it on must be able to receive connections from the outside on port 443.
And of course its internet facing IP address must match the domain name you intend to use.
In the following, the Kubernetes resources defined in YAML configuration files can be applied to the setup in two different ways:
- the first, and usual way, is simply with the `kubectl apply` command.
- the second, which can be used for this tutorial, is to directly place the files in the directory used by the k3s docker image for such inputs (`/var/lib/rancher/k3s/server/manifests`).
!!! important "Kubectl Version"
With the `rancher/k3s` version used in this guide (`0.8.0`), the kubectl version needs to be >= `1.11`.
## k3s Docker-compose Configuration
Our starting point is the docker-compose configuration file, to start the k3s cluster.
You can start it with:
```bash
docker compose -f k3s.yml up
```
```yaml
--8<-- "content/user-guides/crd-acme/k3s.yml"
```
## Cluster Resources
Let's now have a look (in the order they should be applied, if using `kubectl apply`) at all the required resources for the full setup.
### IngressRoute Definition
First, you will need to install Traefik CRDs containing the definition of the `IngressRoute` and the `Middleware` kinds,
and the RBAC authorization resources which will be referenced through the `serviceAccountName` of the deployment.
```bash
# Install Traefik Resource Definitions:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.6/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml
# Install RBAC for Traefik:
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.6/docs/content/reference/dynamic-configuration/kubernetes-crd-rbac.yml
```
### Services
Then, the services. One for Traefik itself, and one for the app it routes for, i.e. in this case our demo HTTP server: [whoami](https://github.com/traefik/whoami).
```bash
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.6/docs/content/user-guides/crd-acme/02-services.yml
```
```yaml
--8<-- "content/user-guides/crd-acme/02-services.yml"
```
### Deployments
Next, the deployments, i.e. the actual pods behind the services.
Again, one pod for Traefik, and one for the whoami app.
```bash
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.6/docs/content/user-guides/crd-acme/03-deployments.yml
```
```yaml
--8<-- "content/user-guides/crd-acme/03-deployments.yml"
```
### Port Forwarding
Now, as an exception to what we said above, please note that you should not let the ingressRoute resources below be applied automatically to your cluster.
The reason is, as soon as the ACME provider of Traefik detects we have TLS routers, it will try to generate the certificates for the corresponding domains.
And this will not work, because as it is, our Traefik pod is not reachable from the outside, which will make the ACME TLS challenge fail.
Therefore, for the whole thing to work, we must delay applying the ingressRoute resources until we have port-forwarding set up properly, which is the next step.
```bash
kubectl port-forward --address 0.0.0.0 service/traefik 8000:8000 8080:8080 443:4443 -n default
```
Also, and this is out of the scope of this guide, please note that because of the privileged ports limitation on Linux, the above command might fail to listen on port 443.
In which case you can use tricks such as elevating caps of `kubectl` with `setcaps`, or using `authbind`, or setting up a NAT between your host and the WAN.
Look it up.
### Traefik Routers
We can now finally apply the actual ingressRoutes, with:
```bash
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.6/docs/content/user-guides/crd-acme/04-ingressroutes.yml
```
```yaml
--8<-- "content/user-guides/crd-acme/04-ingressroutes.yml"
```
Give it a few seconds for the ACME TLS challenge to complete, and you should then be able to access your whoami pod (routed through Traefik), from the outside.
Both with or (just for fun, do not do that in production) without TLS:
```bash
curl [-k] https://your.example.com/tls
```
```bash
curl http://your.example.com:8000/notls
```
Note that you'll have to use `-k` as long as you're using the staging server of Let's Encrypt, since it is not an authorized certificate authority on systems where it hasn't been manually added.
### Force TLS v1.2+
Nowadays, TLS v1.0 and v1.1 are deprecated.
In order to force TLS v1.2 or later on all your IngressRoute, you can define the `default` TLSOption:
```bash
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.6/docs/content/user-guides/crd-acme/05-tlsoption.yml
```
```yaml
--8<-- "content/user-guides/crd-acme/05-tlsoption.yml"
```
-30
@@ -1,30 +0,0 @@
server:
image: rancher/k3s:v1.34.2-k3s1
command: server --disable-agent --no-deploy traefik
environment:
- K3S_CLUSTER_SECRET=somethingtotallyrandom
- K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
- K3S_KUBECONFIG_MODE=666
volumes:
# k3s will generate a kubeconfig.yaml in this directory. This volume is mounted
# on your host, so you can then 'export KUBECONFIG=/somewhere/on/your/host/out/kubeconfig.yaml',
# in order for your kubectl commands to work.
- /somewhere/on/your/host/out:/output
# This directory is where you put all the (yaml) configuration files of
# the Kubernetes resources.
- /somewhere/on/your/host/in:/var/lib/rancher/k3s/server/manifests
ports:
- 6443:6443
node:
image: rancher/k3s:v1.34.2-k3s1
privileged: true
links:
- server
environment:
- K3S_URL=https://server:6443
- K3S_CLUSTER_SECRET=somethingtotallyrandom
volumes:
# this is where you would place a alternative traefik image (saved as a .tar file with
# 'docker save'), if you want to use it, instead of the traefik:v3.6 image.
- /somewhere/on/your/host/custom-image:/var/lib/rancher/k3s/agent/images
-283
@@ -1,283 +0,0 @@
---
title: "Traefik Proxy gRPC Examples"
description: "This section of the Traefik Proxy documentation explains how to use Traefik as reverse proxy for gRPC applications."
---
# gRPC Examples
## With HTTP (h2c)
This section explains how to use Traefik as reverse proxy for gRPC application.
### Traefik Configuration
Static configuration:
```yaml tab="File (YAML)"
entryPoints:
web:
address: :80
providers:
file:
directory: /path/to/dynamic/config
api: {}
```
```toml tab="File (TOML)"
[entryPoints]
[entryPoints.web]
address = ":80"
[api]
[providers.file]
directory = "/path/to/dynamic/config"
```
```yaml tab="CLI"
--entryPoints.web.address=:80
--providers.file.directory=/path/to/dynamic/config
--api.insecure=true
```
`/path/to/dynamic/config/dynamic_conf.{yml,toml}`:
```yaml tab="YAML"
## dynamic configuration ##
http:
routers:
routerTest:
service: srv-grpc
rule: Host(`frontend.local`)
services:
srv-grpc:
loadBalancer:
servers:
- url: h2c://backend.local:8080
```
```toml tab="TOML"
## dynamic configuration ##
[http]
[http.routers]
[http.routers.routerTest]
service = "srv-grpc"
rule = "Host(`frontend.local`)"
[http.services]
[http.services.srv-grpc]
[http.services.srv-grpc.loadBalancer]
[[http.services.srv-grpc.loadBalancer.servers]]
url = "h2c://backend.local:8080"
```
!!! warning
For providers with labels, you will have to specify the `traefik.http.services.<my-service-name>.loadbalancer.server.scheme=h2c`
### Conclusion
We don't need specific configuration to use gRPC in Traefik, we just need to use `h2c` protocol, or use HTTPS communications to have HTTP2 with the backend.
## With HTTPS
This section explains how to use Traefik as reverse proxy for gRPC application with self-signed certificates.
![gRPC architecture](../assets/img/user-guides/grpc.svg)
### gRPC Server Certificate
In order to secure the gRPC server, we generate a self-signed certificate for service url:
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./backend.key -out ./backend.cert
```
That will prompt for information, the important answer is:
```txt
Common Name (e.g. server FQDN or YOUR name) []: backend.local
```
### gRPC Client Certificate
Generate your self-signed certificate for router url:
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./frontend.key -out ./frontend.cert
```
with
```txt
Common Name (e.g. server FQDN or YOUR name) []: frontend.local
```
### Traefik Configuration
At last, we configure our Traefik instance to use both self-signed certificates.
Static configuration:
```yaml tab="File (YAML)"
entryPoints:
websecure:
address: :4443
serversTransport:
# For secure connection on backend.local
rootCAs:
- ./backend.cert
providers:
file:
directory: /path/to/dynamic/config
api: {}
```
```toml tab="File (TOML)"
[entryPoints]
[entryPoints.websecure]
address = ":4443"
[serversTransport]
# For secure connection on backend.local
rootCAs = [ "./backend.cert" ]
[api]
[provider.file]
directory = "/path/to/dynamic/config"
```
```yaml tab="CLI"
--entryPoints.websecure.address=:4443
# For secure connection on backend.local
--serversTransport.rootCAs=./backend.cert
--providers.file.directory=/path/to/dynamic/config
--api.insecure=true
```
`/path/to/dynamic/config/dynamic_conf.{yml,toml}`:
```yaml tab="YAML"
## dynamic configuration ##
http:
routers:
routerTest:
service: srv-grpc
rule: Host(`frontend.local`)
services:
srv-grpc:
loadBalancer:
servers:
# Access on backend with HTTPS
- url: https://backend.local:8080
tls:
# For secure connection on frontend.local
certificates:
- certfile: ./frontend.cert
keyfile: ./frontend.key
```
```toml tab="TOML"
## dynamic configuration ##
[http]
[http.routers]
[http.routers.routerTest]
service = "srv-grpc"
rule = "Host(`frontend.local`)"
[http.services]
[http.services.srv-grpc]
[http.services.srv-grpc.loadBalancer]
[[http.services.srv-grpc.loadBalancer.servers]]
# Access on backend with HTTPS
url = "https://backend.local:8080"
[tls]
# For secure connection on frontend.local
[[tls.certificates]]
certFile = "./frontend.cert"
keyFile = "./frontend.key"
```
!!! warning
With some services, the server URLs use the IP, so you may need to configure `insecureSkipVerify` instead of the `rootCAs` to activate HTTPS without hostname verification.
### A gRPC example in go (modify for https)
We use the gRPC greeter example in [grpc-go](https://github.com/grpc/grpc-go/tree/master/examples/helloworld)
!!! warning
In order to use this gRPC example, we need to modify it to use HTTPS
So we modify the "gRPC server example" to use our own self-signed certificate:
```go
// ...
// Read cert and key file
backendCert, _ := os.ReadFile("./backend.cert")
backendKey, _ := os.ReadFile("./backend.key")
// Generate Certificate struct
cert, err := tls.X509KeyPair(backendCert, backendKey)
if err != nil {
log.Fatalf("failed to parse certificate: %v", err)
}
// Create credentials
creds := credentials.NewServerTLSFromCert(&cert)
// Use Credentials in gRPC server options
serverOption := grpc.Creds(creds)
var s *grpc.Server = grpc.NewServer(serverOption)
defer s.Stop()
pb.RegisterGreeterServer(s, &server{})
err := s.Serve(lis)
// ...
```
Next we will modify gRPC Client to use our Traefik self-signed certificate:
```go
// ...
// Read cert file
frontendCert, _ := os.ReadFile("./frontend.cert")
// Create CertPool
roots := x509.NewCertPool()
roots.AppendCertsFromPEM(frontendCert)
// Create credentials
credsClient := credentials.NewClientTLSFromCert(roots, "")
// Dial with specific Transport (with credentials)
conn, err := grpc.Dial("frontend.local:4443", grpc.WithTransportCredentials(credsClient))
if err != nil {
log.Fatalf("did not connect: %v", err)
}
defer conn.Close()
client := pb.NewGreeterClient(conn)
name := "World"
r, err := client.SayHello(context.Background(), &pb.HelloRequest{Name: name})
// ...
```
-355
@@ -1,355 +0,0 @@
---
title: "Traefik WebSocket Documentation"
description: "How to configure WebSocket and WebSocket Secure (WSS) connections with Traefik Proxy."
---
# WebSocket
Configuring Traefik to handle WebSocket and WebSocket Secure (WSS) connections.
{: .subtitle }
## Overview
WebSocket is a communication protocol that provides full-duplex communication channels over a single TCP connection.
WebSocket Secure (WSS) is the encrypted version of WebSocket, using TLS/SSL encryption.
Traefik supports WebSocket and WebSocket Secure (WSS) out of the box. This guide will walk through examples of how to configure Traefik for different WebSocket scenarios.
## Basic WebSocket Configuration
A basic WebSocket configuration only requires defining a router and a service that points to your WebSocket server.
```yaml tab="Docker & Swarm"
labels:
- "traefik.http.routers.my-websocket.rule=Host(`ws.example.com`)"
- "traefik.http.routers.my-websocket.service=my-websocket-service"
- "traefik.http.services.my-websocket-service.loadbalancer.server.port=8000"
```
```yaml tab="Kubernetes"
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: my-websocket-route
spec:
entryPoints:
- web
routes:
- match: Host(`ws.example.com`)
kind: Rule
services:
- name: my-websocket-service
port: 8000
```
```yaml tab="File (YAML)"
http:
routers:
my-websocket:
rule: "Host(`ws.example.com`)"
service: my-websocket-service
services:
my-websocket-service:
loadBalancer:
servers:
- url: "http://my-websocket-server:8000"
```
```toml tab="File (TOML)"
[http.routers]
[http.routers.my-websocket]
rule = "Host(`ws.example.com`)"
service = "my-websocket-service"
[http.services]
[http.services.my-websocket-service]
[http.services.my-websocket-service.loadBalancer]
[[http.services.my-websocket-service.loadBalancer.servers]]
url = "http://my-websocket-server:8000"
```
## WebSocket Secure (WSS) Configuration
WebSocket Secure (WSS) requires TLS configuration.
The client connects using the `wss://` protocol instead of `ws://`.
```yaml tab="Docker & Swarm"
labels:
- "traefik.http.routers.my-websocket-secure.rule=Host(`wss.example.com`)"
- "traefik.http.routers.my-websocket-secure.service=my-websocket-service"
- "traefik.http.routers.my-websocket-secure.tls=true"
- "traefik.http.services.my-websocket-service.loadbalancer.server.port=8000"
```
```yaml tab="Kubernetes"
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: my-websocket-secure-route
spec:
entryPoints:
- websecure
routes:
- match: Host(`wss.example.com`)
kind: Rule
services:
- name: my-websocket-service
port: 8000
tls: {}
```
```yaml tab="File (YAML)"
http:
routers:
my-websocket-secure:
rule: "Host(`wss.example.com`)"
service: my-websocket-service
tls: {}
services:
my-websocket-service:
loadBalancer:
servers:
- url: "http://my-websocket-server:8000"
```
```toml tab="File (TOML)"
[http.routers]
[http.routers.my-websocket-secure]
rule = "Host(`wss.example.com`)"
service = "my-websocket-service"
[http.routers.my-websocket-secure.tls]
[http.services]
[http.services.my-websocket-service]
[http.services.my-websocket-service.loadBalancer]
[[http.services.my-websocket-service.loadBalancer.servers]]
url = "http://my-websocket-server:8000"
```
## SSL Termination for WebSockets
In this scenario, clients connect to Traefik using WSS (encrypted), but Traefik connects to your backend server using WS (unencrypted).
This is called SSL termination.
```yaml tab="Docker & Swarm"
labels:
- "traefik.http.routers.my-wss-termination.rule=Host(`wss.example.com`)"
- "traefik.http.routers.my-wss-termination.service=my-ws-service"
- "traefik.http.routers.my-wss-termination.tls=true"
- "traefik.http.services.my-ws-service.loadbalancer.server.port=8000"
```
```yaml tab="Kubernetes"
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: my-wss-termination-route
spec:
entryPoints:
- websecure
routes:
- match: Host(`wss.example.com`)
kind: Rule
services:
- name: my-ws-service
port: 8000
tls: {}
```
```yaml tab="File (YAML)"
http:
routers:
my-wss-termination:
rule: "Host(`wss.example.com`)"
service: my-ws-service
tls: {}
services:
my-ws-service:
loadBalancer:
servers:
- url: "http://my-ws-server:8000"
```
```toml tab="File (TOML)"
[http.routers]
[http.routers.my-wss-termination]
rule = "Host(`wss.example.com`)"
service = "my-ws-service"
[http.routers.my-wss-termination.tls]
[http.services]
[http.services.my-ws-service]
[http.services.my-ws-service.loadBalancer]
[[http.services.my-ws-service.loadBalancer.servers]]
url = "http://my-ws-server:8000"
```
## End-to-End WebSocket Secure (WSS)
For end-to-end encryption, Traefik can be configured to connect to your backend using HTTPS.
```yaml tab="Docker & Swarm"
labels:
- "traefik.http.routers.my-wss-e2e.rule=Host(`wss.example.com`)"
- "traefik.http.routers.my-wss-e2e.service=my-wss-service"
- "traefik.http.routers.my-wss-e2e.tls=true"
- "traefik.http.services.my-wss-service.loadbalancer.server.port=8443"
# If the backend uses a self-signed certificate
- "traefik.http.serversTransports.insecureTransport.insecureSkipVerify=true"
- "traefik.http.services.my-wss-service.loadBalancer.serversTransport=insecureTransport"
```
```yaml tab="Kubernetes"
apiVersion: traefik.io/v1alpha1
kind: ServersTransport
metadata:
name: insecure-transport
spec:
insecureSkipVerify: true
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: my-wss-e2e-route
spec:
entryPoints:
- websecure
routes:
- match: Host(`wss.example.com`)
kind: Rule
services:
- name: my-wss-service
port: 8443
serversTransport: insecure-transport
tls: {}
```
```yaml tab="File (YAML)"
http:
serversTransports:
insecureTransport:
insecureSkipVerify: true
routers:
my-wss-e2e:
rule: "Host(`wss.example.com`)"
service: my-wss-service
tls: {}
services:
my-wss-service:
loadBalancer:
serversTransport: insecureTransport
servers:
- url: "https://my-wss-server:8443"
```
```toml tab="File (TOML)"
[http.serversTransports]
[http.serversTransports.insecureTransport]
insecureSkipVerify = true
[http.routers]
[http.routers.my-wss-e2e]
rule = "Host(`wss.example.com`)"
service = "my-wss-service"
[http.routers.my-wss-e2e.tls]
[http.services]
[http.services.my-wss-service]
[http.services.my-wss-service.loadBalancer]
serversTransport = "insecureTransport"
[[http.services.my-wss-service.loadBalancer.servers]]
url = "https://my-wss-server:8443"
```
## EntryPoints Configuration for WebSockets
In your Traefik static configuration, you'll need to define entryPoints for both WS and WSS:
```yaml tab="File (YAML)"
entryPoints:
web:
address: ":80"
websecure:
address: ":443"
```
```toml tab="File (TOML)"
[entryPoints]
[entryPoints.web]
address = ":80"
[entryPoints.websecure]
address = ":443"
```
## Testing WebSocket Connections
You can test your WebSocket configuration using various tools:
1. Browser Developer Tools: Most modern browsers include WebSocket debugging in their developer tools.
2. WebSocket client tools like [wscat](https://github.com/websockets/wscat) or online tools like [Piesocket's WebSocket Tester](https://www.piesocket.com/websocket-tester).
Example wscat commands:
```bash
# Test standard WebSocket
wscat -c ws://ws.example.com
# Test WebSocket Secure
wscat -c wss://wss.example.com
```
## Common Issues and Solutions
### Headers and Origin Checks
Some WebSocket servers implement origin checking. Traefik passes the original headers to your backend, including the `Origin` header.
If you need to manipulate headers for WebSocket connections, you can use Traefik's Headers middleware:
```yaml tab="Docker & Swarm"
labels:
- "traefik.http.middlewares.my-headers.headers.customrequestheaders.Origin=https://allowed-origin.com"
- "traefik.http.routers.my-websocket.middlewares=my-headers"
```
```yaml tab="Kubernetes"
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: my-headers
spec:
headers:
customRequestHeaders:
Origin: "https://allowed-origin.com"
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: my-websocket-route
spec:
routes:
- match: Host(`ws.example.com`)
kind: Rule
middlewares:
- name: my-headers
services:
- name: my-websocket-service
port: 8000
```
### Certificate Issues with WSS
If you're experiencing certificate issues with WSS:
1. Ensure your certificates are valid and not expired
2. For testing with self-signed certificates, configure your clients to accept them
3. When using Let's Encrypt, ensure your domain is properly configured
For backends with self-signed certificates, use the `insecureSkipVerify` option in the ServersTransport configuration as shown in the examples above.
+9 -13
@@ -139,13 +139,12 @@ plugins:
'user-guides/crd-acme/index.md': 'expose/kubernetes/basic.md'
'user-guides/cert-manager.md': 'expose/kubernetes/advanced.md'
'user-guides/docker-compose/basic-example/index.md': 'expose/docker/basic.md'
'user-guides/docker-compose/acme-tls/index.md': 'expose/docker/advanced.md'
'user-guides/docker-compose/acme-http/index.md': 'expose/docker/advanced.md'
'user-guides/docker-compose/acme-dns/index.md': 'expose/docker/advanced.md'
## Expose pages (redirect old URLs to new structure)
'expose/kubernetes.md': 'expose/kubernetes/basic.md'
'expose/docker.md': 'expose/docker/basic.md'
'expose/swarm.md': 'expose/swarm/basic.md'
'user-guides/docker-compose/acme-tls/index.md': 'expose/docker/basic.md'
'user-guides/docker-compose/acme-http/index.md': 'expose/docker/basic.md'
'user-guides/docker-compose/acme-dns/index.md': 'expose/docker/basic.md'
'user-guides/fastproxy.md': 'reference/install-configuration/experimental/fastproxy.md'
'user-guides/grpc.md': 'expose/overview.md#exposing-grpc-services'
'user-guides/websocket.md': 'expose/overview.md#exposing-websocket-services'
# References
# Static Configuration
'reference/static-configuration/overview.md': 'reference/install-configuration/configuration-options.md'
@@ -273,6 +272,9 @@ nav:
- 'Tracing': 'reference/install-configuration/observability/tracing.md'
- 'Logs & AccessLogs': 'reference/install-configuration/observability/logs-and-accesslogs.md'
- 'Health Check (CLI & Ping)': 'reference/install-configuration/observability/healthcheck.md'
- 'Experimental':
- 'FastProxy': 'reference/install-configuration/experimental/fastproxy.md'
- 'Plugins': 'reference/install-configuration/experimental/plugins.md'
- 'Options List': 'reference/install-configuration/configuration-options.md'
- 'Routing Configuration':
- 'Common Configuration' :
@@ -379,12 +381,6 @@ nav:
- 'Deprecation Notices':
- 'Releases': 'deprecation/releases.md'
- 'Features': 'deprecation/features.md'
- 'User Guides':
- 'FastProxy': 'user-guides/fastproxy.md'
- 'Kubernetes and Let''s Encrypt': 'user-guides/crd-acme/index.md'
- 'Kubernetes and cert-manager': 'user-guides/cert-manager.md'
- 'gRPC Examples': 'user-guides/grpc.md'
- 'WebSocket Examples': 'user-guides/websocket.md'
- 'Contributing':
- 'Thank You!': 'contributing/thank-you.md'
- 'Submitting Issues': 'contributing/submitting-issues.md'
+59 -26
@@ -466,42 +466,52 @@ func TestServiceTCPHealthChecker_Launch(t *testing.T) {
},
}
lb := &testLoadBalancer{}
// Create load balancer with event channel for synchronization.
lb := &testLoadBalancer{
RWMutex: &sync.RWMutex{},
eventCh: make(chan struct{}, len(test.server.StatusSequence)+5),
}
serviceInfo := &truntime.TCPServiceInfo{}
service := NewServiceTCPHealthChecker(ctx, test.config, lb, serviceInfo, targets, "serviceName")
go service.Launch(ctx)
// How much time to wait for the health check to actually complete.
deadline := time.Now().Add(200 * time.Millisecond)
// TLS handshake can take much longer.
// Timeout for each event - TLS handshake can take longer.
eventTimeout := 500 * time.Millisecond
if test.server.TLS {
deadline = time.Now().Add(1000 * time.Millisecond)
eventTimeout = 2 * time.Second
}
// Wait for all health checks to complete deterministically
// Wait for health check events using channel synchronization.
// Iterate over StatusSequence to release each connection via Next().
for i := range test.server.StatusSequence {
test.server.Next()
initialUpserted := lb.numUpsertedServers
initialRemoved := lb.numRemovedServers
for time.Now().Before(deadline) {
time.Sleep(5 * time.Millisecond)
if lb.numUpsertedServers > initialUpserted || lb.numRemovedServers > initialRemoved {
// Stop the health checker immediately after the last expected sequence completes
// to prevent extra health checks from firing and modifying the counters.
if i == len(test.server.StatusSequence)-1 {
cancel()
}
break
select {
case <-lb.eventCh:
// Event received
// On the last iteration, stop the health checker immediately
// to prevent extra checks from modifying the counters.
if i == len(test.server.StatusSequence)-1 {
test.server.Close()
cancel()
}
case <-time.After(eventTimeout):
t.Fatalf("timeout waiting for health check event %d/%d", i+1, len(test.server.StatusSequence))
}
}
assert.Equal(t, test.expNumRemovedServers, lb.numRemovedServers, "removed servers")
assert.Equal(t, test.expNumUpsertedServers, lb.numUpsertedServers, "upserted servers")
// Small delay to let goroutines clean up.
time.Sleep(10 * time.Millisecond)
lb.RLock()
removedServers := lb.numRemovedServers
upsertedServers := lb.numUpsertedServers
lb.RUnlock()
assert.Equal(t, test.expNumRemovedServers, removedServers, "removed servers")
assert.Equal(t, test.expNumUpsertedServers, upsertedServers, "upserted servers")
assert.Equal(t, map[string]string{test.server.Addr.String(): test.targetStatus}, serviceInfo.GetAllStatus())
})
}
@@ -597,6 +607,8 @@ type sequencedTCPServer struct {
StatusSequence []tcpMockSequence
TLS bool
release chan struct{}
mu sync.Mutex
listener net.Listener
}
func newTCPServer(t *testing.T, tlsEnabled bool, statusSequence ...tcpMockSequence) *sequencedTCPServer {
@@ -624,17 +636,28 @@ func (s *sequencedTCPServer) Next() {
s.release <- struct{}{}
}
func (s *sequencedTCPServer) Close() {
s.mu.Lock()
defer s.mu.Unlock()
if s.listener != nil {
s.listener.Close()
s.listener = nil
}
}
func (s *sequencedTCPServer) Start(t *testing.T) {
t.Helper()
go func() {
var listener net.Listener
for _, seq := range s.StatusSequence {
<-s.release
if listener != nil {
listener.Close()
s.mu.Lock()
if s.listener != nil {
s.listener.Close()
s.listener = nil
}
s.mu.Unlock()
if !seq.accept {
continue
@@ -643,7 +666,7 @@ func (s *sequencedTCPServer) Start(t *testing.T) {
lis, err := net.ListenTCP("tcp", s.Addr)
require.NoError(t, err)
listener = lis
var listener net.Listener = lis
if s.TLS {
cert, err := tls.X509KeyPair(localhostCert, localhostKey)
@@ -670,8 +693,18 @@ func (s *sequencedTCPServer) Start(t *testing.T) {
)
}
s.mu.Lock()
s.listener = listener
s.mu.Unlock()
conn, err := listener.Accept()
require.NoError(t, err)
if err != nil {
// Listener was closed during shutdown - this is expected behavior.
if strings.Contains(err.Error(), "use of closed network connection") {
return
}
require.NoError(t, err)
}
t.Cleanup(func() {
_ = conn.Close()
})
+27 -10
@@ -418,7 +418,12 @@ func TestServiceHealthChecker_Launch(t *testing.T) {
targetURL, timeout := test.server.Start(t, cancel)
lb := &testLoadBalancer{RWMutex: &sync.RWMutex{}}
// Create load balancer with event channel for synchronization.
expectedEvents := test.expNumRemovedServers + test.expNumUpsertedServers
lb := &testLoadBalancer{
RWMutex: &sync.RWMutex{},
eventCh: make(chan struct{}, expectedEvents+5),
}
config := &dynamic.ServerHealthCheck{
Mode: test.mode,
@@ -441,18 +446,30 @@ func TestServiceHealthChecker_Launch(t *testing.T) {
wg.Done()
}()
select {
case <-time.After(timeout):
t.Fatal("test did not complete in time")
case <-ctx.Done():
wg.Wait()
// Wait for expected health check events using channel synchronization.
for i := range expectedEvents {
select {
case <-lb.eventCh:
// Event received.
// On the last event, cancel to prevent extra health checks.
if i == expectedEvents-1 {
cancel()
}
case <-time.After(timeout):
t.Fatalf("timeout waiting for health check event %d/%d", i+1, expectedEvents)
}
}
lb.Lock()
defer lb.Unlock()
// Wait for the health checker goroutine to exit before making assertions.
wg.Wait()
assert.Equal(t, test.expNumRemovedServers, lb.numRemovedServers, "removed servers")
assert.Equal(t, test.expNumUpsertedServers, lb.numUpsertedServers, "upserted servers")
lb.RLock()
removedServers := lb.numRemovedServers
upsertedServers := lb.numUpsertedServers
lb.RUnlock()
assert.Equal(t, test.expNumRemovedServers, removedServers, "removed servers")
assert.Equal(t, test.expNumUpsertedServers, upsertedServers, "upserted servers")
assert.InDelta(t, test.expGaugeValue, gauge.GaugeValue, delta, "ServerUp Gauge")
assert.Equal(t, []string{"service", "foobar", "url", targetURL.String()}, gauge.LastLabelValues)
assert.Equal(t, map[string]string{targetURL.String(): test.targetStatus}, serviceInfo.GetAllStatus())
+15
@@ -168,14 +168,29 @@ type testLoadBalancer struct {
numRemovedServers int
numUpsertedServers int
// eventCh is used to signal when a status change occurs, allowing tests
// to synchronize with health check events deterministically.
eventCh chan struct{}
}
func (lb *testLoadBalancer) SetStatus(ctx context.Context, childName string, up bool) {
lb.Lock()
if up {
lb.numUpsertedServers++
} else {
lb.numRemovedServers++
}
lb.Unlock()
// Signal the event if a listener is registered.
if lb.eventCh != nil {
select {
case lb.eventCh <- struct{}{}:
default:
// Don't block if channel is full or no listener.
}
}
}
type MetricsMock struct {
+7 -4
@@ -10,8 +10,9 @@ import (
// connectCert holds our certificates as a client of the Consul Connect protocol.
type connectCert struct {
root []string
leaf keyPair
trustDomain string
root []string
leaf keyPair
}
func (c *connectCert) getRoot() []types.FileOrContent {
@@ -52,7 +53,8 @@ func (c *connectCert) equals(other *connectCert) bool {
}
func (c *connectCert) serversTransport(item itemData) *dynamic.ServersTransport {
spiffeID := fmt.Sprintf("spiffe:///ns/%s/dc/%s/svc/%s",
spiffeID := fmt.Sprintf("spiffe://%s/ns/%s/dc/%s/svc/%s",
c.trustDomain,
item.Namespace,
item.Datacenter,
item.Name,
@@ -72,7 +74,8 @@ func (c *connectCert) serversTransport(item itemData) *dynamic.ServersTransport
}
func (c *connectCert) tcpServersTransport(item itemData) *dynamic.TCPServersTransport {
spiffeID := fmt.Sprintf("spiffe:///ns/%s/dc/%s/svc/%s",
spiffeID := fmt.Sprintf("spiffe://%s/ns/%s/dc/%s/svc/%s",
c.trustDomain,
item.Namespace,
item.Datacenter,
item.Name,
+16 -10
@@ -465,7 +465,7 @@ func (p *Provider) watchConnectTLS(ctx context.Context) error {
}
leafWatcher.HybridHandler = leafWatcherHandler(ctx, leafChan)
rootsChan := make(chan []string)
rootsChan := make(chan caRootList)
rootsWatcher, err := watch.Parse(map[string]any{
"type": "connect_roots",
})
@@ -497,9 +497,9 @@ func (p *Provider) watchConnectTLS(ctx context.Context) error {
}()
var (
certInfo *connectCert
leafCerts keyPair
rootCerts []string
certInfo *connectCert
leafCert keyPair
caRoots caRootList
)
for {
@@ -510,13 +510,14 @@ func (p *Provider) watchConnectTLS(ctx context.Context) error {
case err := <-errChan:
return fmt.Errorf("leaf or roots watcher terminated: %w", err)
case rootCerts = <-rootsChan:
case leafCerts = <-leafChan:
case caRoots = <-rootsChan:
case leafCert = <-leafChan:
}
newCertInfo := &connectCert{
root: rootCerts,
leaf: leafCerts,
trustDomain: caRoots.trustDomain,
root: caRoots.roots,
leaf: leafCert,
}
if newCertInfo.isReady() && !newCertInfo.equals(certInfo) {
log.Ctx(ctx).Debug().Msgf("Updating connect certs for service %s", p.ServiceName)
@@ -546,7 +547,12 @@ func (p *Provider) includesHealthStatus(status string) bool {
return false
}
func rootsWatchHandler(ctx context.Context, dest chan<- []string) func(watch.BlockingParamVal, any) {
type caRootList struct {
trustDomain string
roots []string
}
func rootsWatchHandler(ctx context.Context, dest chan<- caRootList) func(watch.BlockingParamVal, any) {
return func(_ watch.BlockingParamVal, raw any) {
if raw == nil {
log.Ctx(ctx).Error().Msg("Root certificate watcher called with nil")
@@ -566,7 +572,7 @@ func rootsWatchHandler(ctx context.Context, dest chan<- []string) func(watch.Blo
select {
case <-ctx.Done():
case dest <- roots:
case dest <- caRootList{trustDomain: v.TrustDomain, roots: roots}:
}
}
}
@@ -34,7 +34,7 @@ func TestLoadIngresses(t *testing.T) {
desc: "Empty, no IngressClass",
paths: []string{
"services.yml",
"ingresses/01-ingress-with-basicauth.yml",
"ingresses/ingress-with-basicauth.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -57,7 +57,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/11-ingress-with-custom-headers.yml",
"ingresses/ingress-with-custom-headers.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -116,7 +116,7 @@ func TestLoadIngresses(t *testing.T) {
{
desc: "No annotation",
paths: []string{
"ingresses/00-ingress-with-no-annotation.yml",
"ingresses/ingress-with-no-annotation.yml",
"ingressclasses.yml",
"services.yml",
"secrets.yml",
@@ -196,7 +196,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/01-ingress-with-basicauth.yml",
"ingresses/ingress-with-basicauth.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -260,7 +260,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/02-ingress-with-forwardauth.yml",
"ingresses/ingress-with-forwardauth.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -324,7 +324,7 @@ func TestLoadIngresses(t *testing.T) {
"services.yml",
"secrets.yml",
"ingressclasses.yml",
"ingresses/03-ingress-with-ssl-redirect.yml",
"ingresses/ingress-with-ssl-redirect.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -472,7 +472,7 @@ func TestLoadIngresses(t *testing.T) {
"services.yml",
"secrets.yml",
"ingressclasses.yml",
"ingresses/04-ingress-with-ssl-passthrough.yml",
"ingresses/ingress-with-ssl-passthrough.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -518,7 +518,7 @@ func TestLoadIngresses(t *testing.T) {
"services.yml",
"secrets.yml",
"ingressclasses.yml",
"ingresses/06-ingress-with-sticky.yml",
"ingresses/ingress-with-sticky.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -585,7 +585,7 @@ func TestLoadIngresses(t *testing.T) {
"services.yml",
"secrets.yml",
"ingressclasses.yml",
"ingresses/07-ingress-with-proxy-ssl.yml",
"ingresses/ingress-with-proxy-ssl.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -642,7 +642,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/08-ingress-with-cors.yml",
"ingresses/ingress-with-cors.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -708,7 +708,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/09-ingress-with-service-upstream.yml",
"ingresses/ingress-with-service-upstream.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -759,7 +759,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/10-ingress-with-upstream-vhost.yml",
"ingresses/ingress-with-upstream-vhost.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -820,7 +820,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/10-ingress-with-use-regex.yml",
"ingresses/ingress-with-use-regex.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -874,7 +874,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/11-ingress-with-rewrite-target.yml",
"ingresses/ingress-with-rewrite-target.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -936,7 +936,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/18-ingress-with-app-root.yml",
"ingresses/ingress-with-app-root.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -998,7 +998,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/18-ingress-with-app-root-wrong.yml",
"ingresses/ingress-with-app-root-wrong.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1107,7 +1107,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/10-ingress-with-whitelist-single-ip.yml",
"ingresses/ingress-with-whitelist-single-ip.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1168,7 +1168,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/11-ingress-with-whitelist-single-cidr.yml",
"ingresses/ingress-with-whitelist-single-cidr.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1229,7 +1229,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/12-ingress-with-whitelist-multiple-ip-and-cidr.yml",
"ingresses/ingress-with-whitelist-multiple-ip-and-cidr.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1290,7 +1290,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/13-ingress-with-whitelist-empty.yml",
"ingresses/ingress-with-whitelist-empty.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1345,7 +1345,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/14-ingress-with-permanent-redirect.yml",
"ingresses/ingress-with-permanent-redirect.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1408,7 +1408,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/14-ingress-with-permanent-redirect-code-wrong-code.yml",
"ingresses/ingress-with-permanent-redirect-code-wrong-code.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1471,7 +1471,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/14-ingress-with-permanent-redirect-code-correct-code.yml",
"ingresses/ingress-with-permanent-redirect-code-correct-code.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1534,7 +1534,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/16-ingress-with-temporal-and-permanent-redirect.yml",
"ingresses/ingress-with-temporal-and-permanent-redirect.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1597,7 +1597,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/15-ingress-with-temporal-redirect.yml",
"ingresses/ingress-with-temporal-redirect.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1660,7 +1660,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/17-ingress-with-temporal-redirect-code-wrong-code.yml",
"ingresses/ingress-with-temporal-redirect-code-wrong-code.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1723,7 +1723,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/17-ingress-with-temporal-redirect-code-correct-code.yml",
"ingresses/ingress-with-temporal-redirect-code-correct-code.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1786,7 +1786,7 @@ func TestLoadIngresses(t *testing.T) {
paths: []string{
"services.yml",
"ingressclasses.yml",
"ingresses/19-ingress-with-proxy-timeout.yml",
"ingresses/ingress-with-proxy-timeout.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1837,7 +1837,7 @@ func TestLoadIngresses(t *testing.T) {
"services.yml",
"secrets.yml",
"ingressclasses.yml",
"ingresses/20-ingress-with-auth-tls-secret.yml",
"ingresses/ingress-with-auth-tls-secret.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -1941,7 +1941,7 @@ func TestLoadIngresses(t *testing.T) {
"services.yml",
"secrets.yml",
"ingressclasses.yml",
"ingresses/21-ingress-with-auth-tls-verify-client.yml",
"ingresses/ingress-with-auth-tls-verify-client.yml",
},
expected: &dynamic.Configuration{
TCP: &dynamic.TCPConfiguration{
@@ -2357,7 +2357,7 @@ func TestIngressEndpointPublishedService(t *testing.T) {
ingress, err := kubeClient.NetworkingV1().Ingresses(metav1.NamespaceDefault).Get(t.Context(), "foo", metav1.GetOptions{})
require.NoError(t, err)
assert.Equal(t, test.expected, ingress.Status.LoadBalancer.Ingress)
assert.ElementsMatch(t, test.expected, ingress.Status.LoadBalancer.Ingress)
})
}
}
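For context on the assertion change in TestIngressEndpointPublishedService above: `assert.ElementsMatch` compares slices as unordered multisets, whereas `assert.Equal` also requires identical ordering, so the published load-balancer status entries may now arrive in any order. A minimal, hypothetical illustration (not part of the commit):

```go
package example

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

// TestElementsMatchIgnoresOrder shows the semantic difference exploited by the
// assertion change: ElementsMatch passes for the same elements in any order.
func TestElementsMatchIgnoresOrder(t *testing.T) {
	want := []string{"10.0.0.1", "10.0.0.2"}
	got := []string{"10.0.0.2", "10.0.0.1"}

	assert.ElementsMatch(t, want, got) // passes: same elements, different order
	// assert.Equal(t, want, got)      // would fail: order differs
}
```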
@@ -800,53 +800,47 @@ func TestScoreCalculationWithWeights(t *testing.T) {
}
// TestScoreCalculationWithInflight tests that inflight requests are considered in score calculation.
// Uses direct manipulation of response times and nextServer() for deterministic results.
func TestScoreCalculationWithInflight(t *testing.T) {
balancer := New(nil, false)
// We'll manually control the inflight counters to test the score calculation.
// Add two servers with same response time.
// Add two servers with dummy handlers (we test selection logic directly).
balancer.Add("server1", http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
time.Sleep(10 * time.Millisecond)
rw.Header().Set("server", "server1")
rw.WriteHeader(http.StatusOK)
httptrace.ContextClientTrace(req.Context()).GotFirstResponseByte()
}), pointer(1), false)
balancer.Add("server2", http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
time.Sleep(10 * time.Millisecond)
rw.Header().Set("server", "server2")
rw.WriteHeader(http.StatusOK)
httptrace.ContextClientTrace(req.Context()).GotFirstResponseByte()
}), pointer(1), false)
// Build up response time averages for both servers.
for range 2 {
recorder := httptest.NewRecorder()
balancer.ServeHTTP(recorder, httptest.NewRequest(http.MethodGet, "/", nil))
// Pre-fill response times directly (10ms average for both servers).
for _, h := range balancer.handlers {
for i := range sampleSize {
h.responseTimes[i] = 10.0
}
h.responseTimeSum = 10.0 * sampleSize
h.sampleCount = sampleSize
}
// Now manually set server1 to have high inflight count.
// Set server1 to have high inflight count.
balancer.handlers[0].inflightCount.Store(5)
// Make requests - they should prefer server2 because:
// Score for server1: (10 × (1 + 5)) / 1 = 60
// Score for server2: (10 × (1 + 0)) / 1 = 10
recorder := &responseRecorder{save: map[string]int{}}
counts := map[string]int{"server1": 0, "server2": 0}
for range 5 {
// Manually increment to simulate the ServeHTTP behavior.
server, _ := balancer.nextServer()
server, err := balancer.nextServer()
assert.NoError(t, err)
counts[server.name]++
// Simulate ServeHTTP incrementing inflight count.
server.inflightCount.Add(1)
if server.name == "server1" {
recorder.save["server1"]++
} else {
recorder.save["server2"]++
}
}
// Server2 should get all requests
assert.Equal(t, 5, recorder.save["server2"])
assert.Zero(t, recorder.save["server1"])
// Server2 should get all requests since its score (10-50) is always less than server1's (60).
// After each selection, server2's inflight grows: scores are 10, 20, 30, 40, 50 vs 60.
assert.Equal(t, 5, counts["server2"])
assert.Zero(t, counts["server1"])
}
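The expected scores in the comments above follow the least-time metric these tests exercise: average response time, scaled up by in-flight requests and down by the server weight. A hypothetical sketch of that arithmetic (reconstructed from the test comments, not the balancer's actual code):

```go
package example

// score is a hypothetical sketch of the selection metric implied by the test
// comments: avgResponseTime × (1 + inflight) / weight (lower is better).
func score(avgResponseTimeMs float64, inflight int64, weight float64) float64 {
	return avgResponseTimeMs * (1 + float64(inflight)) / weight
}

// With both servers averaging 10ms and weight 1:
//   score(10, 5, 1) == 60 // server1, five requests in flight
//   score(10, 0, 1) == 10 // server2, idle, so it wins every selection
```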
// TestScoreCalculationColdStart tests that new servers (0ms avg) get fair selection
@@ -930,28 +924,20 @@ func TestFastServerGetsMoreTraffic(t *testing.T) {
// TestTrafficShiftsWhenPerformanceDegrades verifies that the load balancer
// adapts to changing server performance by shifting traffic away from degraded servers.
// This tests the adaptive behavior - the core value proposition of least-time load balancing.
// Uses nextServer() directly to avoid timing variations and ensure deterministic results.
func TestTrafficShiftsWhenPerformanceDegrades(t *testing.T) {
balancer := New(nil, false)
// Use atomic to dynamically control server1's response time.
server1Delay := atomic.Int64{}
server1Delay.Store(5) // Start with 5ms
// Add two servers with dummy handlers (we'll test selection logic directly).
balancer.Add("server1", http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
time.Sleep(time.Duration(server1Delay.Load()) * time.Millisecond)
rw.Header().Set("server", "server1")
rw.WriteHeader(http.StatusOK)
httptrace.ContextClientTrace(req.Context()).GotFirstResponseByte()
}), pointer(1), false)
balancer.Add("server2", http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
time.Sleep(5 * time.Millisecond) // Static 5ms
rw.Header().Set("server", "server2")
rw.WriteHeader(http.StatusOK)
httptrace.ContextClientTrace(req.Context()).GotFirstResponseByte()
}), pointer(1), false)
// Pre-fill ring buffers to eliminate cold start effects and ensure deterministic equal performance state.
// Pre-fill ring buffers with equal response times (5ms each).
for _, h := range balancer.handlers {
for i := range sampleSize {
h.responseTimes[i] = 5.0
@@ -960,35 +946,43 @@ func TestTrafficShiftsWhenPerformanceDegrades(t *testing.T) {
h.sampleCount = sampleSize
}
// Phase 1: Both servers perform equally (5ms each).
recorder := &responseRecorder{ResponseRecorder: httptest.NewRecorder(), save: map[string]int{}}
// Phase 1: Both servers have equal performance (5ms each).
// With WRR tie-breaking, traffic should be distributed evenly.
counts := map[string]int{"server1": 0, "server2": 0}
for range 50 {
balancer.ServeHTTP(recorder, httptest.NewRequest(http.MethodGet, "/", nil))
server, err := balancer.nextServer()
assert.NoError(t, err)
counts[server.name]++
}
// With equal performance and pre-filled buffers, distribution should be balanced via WRR tie-breaking.
total := recorder.save["server1"] + recorder.save["server2"]
total := counts["server1"] + counts["server2"]
assert.Equal(t, 50, total)
assert.InDelta(t, 25, recorder.save["server1"], 10) // 25 ± 10 requests
assert.InDelta(t, 25, recorder.save["server2"], 10) // 25 ± 10 requests
assert.InDelta(t, 25, counts["server1"], 1) // Deterministic WRR: 25 ± 1
assert.InDelta(t, 25, counts["server2"], 1) // Deterministic WRR: 25 ± 1
// Phase 2: server1 degrades (simulating GC pause, CPU spike, or network latency).
server1Delay.Store(50) // Now 50ms (10x slower) - dramatic degradation for reliable detection
// Make more requests to shift the moving average.
// Ring buffer has 100 samples, need significant new samples to shift average.
// server1's average will climb from ~5ms toward 50ms.
recorder2 := &responseRecorder{ResponseRecorder: httptest.NewRecorder(), save: map[string]int{}}
for range 60 {
balancer.ServeHTTP(recorder2, httptest.NewRequest(http.MethodGet, "/", nil))
// Phase 2: Simulate server1 degradation by directly updating its ring buffer.
// Set server1's average response time to 50ms (10x slower than server2's 5ms).
for _, h := range balancer.handlers {
if h.name == "server1" {
for i := range sampleSize {
h.responseTimes[i] = 50.0
}
h.responseTimeSum = 50.0 * sampleSize
}
}
// server2 should get significantly more traffic
// With 10x performance difference, server2 should dominate.
total2 := recorder2.save["server1"] + recorder2.save["server2"]
// With 10x performance difference, server2 should get significantly more traffic.
counts2 := map[string]int{"server1": 0, "server2": 0}
for range 60 {
server, err := balancer.nextServer()
assert.NoError(t, err)
counts2[server.name]++
}
total2 := counts2["server1"] + counts2["server2"]
assert.Equal(t, 60, total2)
assert.Greater(t, recorder2.save["server2"], 35) // At least ~60% (35/60)
assert.Less(t, recorder2.save["server1"], 25) // At most ~40% (25/60)
assert.Greater(t, counts2["server2"], 35) // At least ~60% (35/60)
assert.Less(t, counts2["server1"], 25) // At most ~40% (25/60)
}
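Phase 2 above simulates degradation by rewriting the handler's ring buffer of response-time samples directly. A standalone sketch of that moving-average bookkeeping, with hypothetical names mirroring the fields manipulated in the test (`responseTimes`, `responseTimeSum`, `sampleCount`):

```go
package example

const sampleSize = 100 // assumed ring-buffer capacity, mirroring the test's sampleSize

// movingAverage is a hypothetical sketch of the per-server bookkeeping the test
// manipulates: a fixed-size ring buffer whose running sum tracks the window.
type movingAverage struct {
	samples [sampleSize]float64
	sum     float64
	count   int
	next    int
}

// observe records one response time, evicting the oldest sample once the
// buffer is full, so the average shifts when a server starts degrading.
func (m *movingAverage) observe(ms float64) {
	if m.count == sampleSize {
		m.sum -= m.samples[m.next]
	} else {
		m.count++
	}
	m.samples[m.next] = ms
	m.sum += ms
	m.next = (m.next + 1) % sampleSize
}

func (m *movingAverage) avg() float64 {
	if m.count == 0 {
		return 0
	}
	return m.sum / float64(m.count)
}
```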
// TestMultipleServersWithSameScore tests WRR tie-breaking when multiple servers have identical scores.
+6 -25
View File
@@ -5,7 +5,6 @@ import (
"crypto/x509"
"errors"
"fmt"
"net/url"
"os"
"strings"
@@ -160,37 +159,19 @@ func VerifyPeerCertificate(uri string, cfg *tls.Config, rawCerts [][]byte) error
return nil
}
// verifyServerCertMatchesURI is used on tls connections dialed to a server
// to ensure that the certificate it presented has the correct URI.
// verifyServerCertMatchesURI verifies that the given certificate contains the specified URI in its SANs.
func verifyServerCertMatchesURI(uri string, cert *x509.Certificate) error {
if cert == nil {
return errors.New("peer certificate mismatch: no peer certificate presented")
}
// Our certs will only ever have a single URI for now so only check that
if len(cert.URIs) < 1 {
return errors.New("peer certificate mismatch: peer certificate invalid")
for _, certURI := range cert.URIs {
if strings.EqualFold(certURI.String(), uri) {
return nil
}
}
gotURI := cert.URIs[0]
// Override the hostname since we rely on x509 constraints to limit ability to spoof the trust domain if needed
// (i.e. because a root is shared with other PKI or Consul clusters).
// This allows for seamless migrations between trust domains.
expectURI := &url.URL{}
id, err := url.Parse(uri)
if err != nil {
return fmt.Errorf("%q is not a valid URI", uri)
}
*expectURI = *id
expectURI.Host = gotURI.Host
if strings.EqualFold(gotURI.String(), expectURI.String()) {
return nil
}
return fmt.Errorf("peer certificate mismatch got %s, want %s", gotURI, uri)
return fmt.Errorf("peer certificate mismatch: no SAN URI in peer certificate matches %s", uri)
}
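The simplified matcher above is the kind of check that plugs into Go's standard TLS verification hooks. A minimal, hypothetical sketch of wiring a SAN-URI check into a `tls.Config` (assuming a SPIFFE ID such as `spiffe://foo.com`; this is illustrative and not this package's actual `VerifyPeerCertificate`):

```go
package example

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"strings"
)

// tlsConfigForSPIFFE returns a client tls.Config that skips hostname
// verification and instead checks that the leaf certificate carries the
// expected SPIFFE URI SAN. Illustrative sketch only.
func tlsConfigForSPIFFE(expectedURI string, roots *x509.CertPool) *tls.Config {
	return &tls.Config{
		RootCAs:            roots,
		InsecureSkipVerify: true, // hostname check replaced by the URI SAN check below
		VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
			if len(rawCerts) == 0 {
				return errors.New("no peer certificate presented")
			}
			leaf, err := x509.ParseCertificate(rawCerts[0])
			if err != nil {
				return err
			}
			// Chain verification against roots is omitted here for brevity.
			for _, u := range leaf.URIs {
				if strings.EqualFold(u.String(), expectedURI) {
					return nil
				}
			}
			return errors.New("no SAN URI matches " + expectedURI)
		},
	}
}
```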
// verifyChain performs standard TLS verification without enforcing remote hostname matching.
+64
View File
@@ -0,0 +1,64 @@
package tls
import (
"crypto/x509"
"net/url"
"testing"
"github.com/stretchr/testify/require"
)
func Test_verifyServerCertMatchesURI(t *testing.T) {
tests := []struct {
desc string
uri string
cert *x509.Certificate
expErr require.ErrorAssertionFunc
}{
{
desc: "returns error when certificate is nil",
uri: "spiffe://foo.com",
expErr: require.Error,
},
{
desc: "returns error when certificate has no URIs",
uri: "spiffe://foo.com",
cert: &x509.Certificate{URIs: nil},
expErr: require.Error,
},
{
desc: "returns error when no URI matches",
uri: "spiffe://foo.com",
cert: &x509.Certificate{URIs: []*url.URL{
{Scheme: "spiffe", Host: "other.org"},
}},
expErr: require.Error,
},
{
desc: "returns nil when URI matches",
uri: "spiffe://foo.com",
cert: &x509.Certificate{URIs: []*url.URL{
{Scheme: "spiffe", Host: "foo.com"},
}},
expErr: require.NoError,
},
{
desc: "returns nil when one of the URI matches",
uri: "spiffe://foo.com",
cert: &x509.Certificate{URIs: []*url.URL{
{Scheme: "spiffe", Host: "example.org"},
{Scheme: "spiffe", Host: "foo.com"},
}},
expErr: require.NoError,
},
}
for _, test := range tests {
t.Run(test.desc, func(t *testing.T) {
t.Parallel()
err := verifyServerCertMatchesURI(test.uri, test.cert)
test.expErr(t, err)
})
}
}