Description
Hi,
The current setup is: browser -> haproxy(L4) -> ingress-nginx-controller -> app.
When I open the application, the browser console shows:
Information: WebSocket connected to ws://svc1.apps.k8s.dev.utp/afk-asai/login
After 5 minutes, the connection drops with the error:
Error: Connection disconnected with error 'Error: WebSocket closed with status code: 1006 (no reason given).'.
```
ingress-nginx-controller-5b9dfb6b74-4rhl4:/etc/nginx$ nginx -v
nginx version: nginx/1.25.5
```
I have configured all the WebSocket-related timeouts in the Ingress resource via annotations, but it doesn't help:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTP
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
      proxy_cache off;
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-http-version: "1.1"
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "7200"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "7200"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  generation: 4
  name: afk-asai
spec:
  ingressClassName: nginx
  rules:
  - host: svc1.apps.k8s.dev.utp
    http:
      paths:
      - backend:
          service:
            name: afk-asai
            port:
              number: 8080
        path: /afk-asai/(.*)
        pathType: ImplementationSpecific
```
Controller ConfigMap:
```yaml
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  annotations-risk-level: Critical
  client-header-buffer-size: 64k
  enable-underscores-in-headers: "true"
  large-client-header-buffers: 4 64k
  log-format-escape-json: "true"
  proxy-buffer-size: 8k
  ssl-redirect: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.12.1
    helm.sh/chart: ingress-nginx-4.12.1
  name: ingress-nginx-controller
```
ingress-nginx controller args:
```yaml
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --default-backend-service=$(POD_NAMESPACE)/ingress-nginx-defaultbackend
    - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
    - --election-id=ingress-nginx-leader
    - --controller-class=k8s.io/ingress-nginx
    - --ingress-class=nginx
    - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
    - --validating-webhook=:8443
    - --validating-webhook-certificate=/usr/local/certificates/cert
    - --validating-webhook-key=/usr/local/certificates/key
    - --enable-metrics=true
    - --default-ssl-certificate=ingress-nginx/self-ingress-certs
    - --enable-ssl-passthrough
    - --watch-ingress-without-class=true
    image: docker.repo.local/ingress-nginx/controller:v1.12.1
```
What I've tested:
- Using `kubectl port-forward` directly to the pod, the WebSocket connection stays alive for 10+ minutes without issues
- Bypassing HAProxy and connecting directly to the ingress-nginx controller (via NodePort/port-forward), the connection still drops after exactly 5 minutes (a sketch of the bypass Service follows this list)
- The problem only occurs when accessing through the ingress-nginx controller
- The connection consistently drops after exactly 5 minutes
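
For reference, the HAProxy-bypass test went through a NodePort Service along these lines. This is a minimal sketch, not the exact manifest from the cluster; the Service name and nodePort are placeholders:

```yaml
# Sketch of a NodePort Service pointing at the ingress-nginx controller pods,
# used only to bypass HAProxy for testing. Name and nodePort are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller-nodeport   # placeholder name
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80      # plain HTTP port of the controller (the app is reached over ws://)
    nodePort: 30080     # placeholder nodePort
```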
Questions:
- Are there any global timeout settings in the ingress-nginx controller ConfigMap that could override the Ingress annotations? (See the ConfigMap sketch below for the kind of keys I mean.)
- What other configuration options should I check?
Please help me resolve this issue.
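
For question 1, these are the kind of global keys I'm asking about. This is only a sketch using the documented ingress-nginx ConfigMap options `proxy-connect-timeout`, `proxy-read-timeout`, and `proxy-send-timeout`; none of them are set in my ConfigMap above, and the values here are examples:

```yaml
# Sketch only: global timeout keys in the controller ConfigMap.
# These are documented ingress-nginx ConfigMap options; values are examples.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-connect-timeout: "7200"
  proxy-read-timeout: "7200"
  proxy-send-timeout: "7200"
```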