This repository has been archived by the owner on Jan 3, 2023. It is now read-only.

Specifying servicePort by name instead of port number prevents the configuration of the desired health check #196

Open
jcao219 opened this issue May 1, 2018 · 10 comments


@jcao219

jcao219 commented May 1, 2018

Given Kubernetes manifests

apiVersion: v1
kind: Service
metadata:
  name: hello-service
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: hello-port
      nodePort: 30091
 
---
 
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello-container
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: hello-port
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: hello-port
          initialDelaySeconds: 25
          periodSeconds: 5

and then an Ingress spec that specifies servicePort by name instead of port number:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hc-from-probe
  annotations:
    kubernetes.io/ingress.global-static-ip-name: $E2E_KUBEMCI_IP
    kubernetes.io/ingress.class: gce-multi-cluster
spec:
  backend:
    serviceName: hello-service
    servicePort: http

kubemci will fail to recognize the readiness probe with:

Pod [pod name] matching service selectors app=hello (targetport ): lacks a matching HTTP probe for use in health checks. 

It works if you change the last line in the ingress spec to:

  servicePort: 80
@G-Harmon
Contributor

G-Harmon commented May 2, 2018

Interesting. I think this case is supposed to work. Here's where that code is:
https://sourcegraph.com/github.com/GoogleCloudPlatform/k8s-multicluster-ingress/-/blob/app/kubemci/pkg/kubeutils/utils.go#L140

I should be able to take a look at this next week.

@jcao219
Author

jcao219 commented May 2, 2018

Hi, looks like I was wrong about the 'fix' I described earlier.

This problem seems to occur when the Kubernetes Service's port number (80 above) doesn't match the containerPort number in the pod spec, or when the Service's targetPort name doesn't match the containerPort name.

What I see from my side is:

Doesn't work:

==> kube.yaml <==
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - name: http
      protocol: TCP
      port: 8083
      targetPort: hello-port
      nodePort: 30091
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello-container
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: hello-port
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: hello-port
          initialDelaySeconds: 25
          periodSeconds: 5

==> ingress.yaml <==
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hc-from-probe
  annotations:
    kubernetes.io/ingress.class: gce-multi-cluster
spec:
  backend:
    serviceName: hello-service
    servicePort: http

Works:

==> kube.yaml <==
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - name: http
      protocol: TCP
      port: 8080        # Changed
      targetPort: http # Changed
      nodePort: 30091
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello-container
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: http   # Changed
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: http  # Changed
          initialDelaySeconds: 25
          periodSeconds: 5

==> ingress.yaml <==
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hc-from-probe
  annotations:
    kubernetes.io/ingress.class: gce-multi-cluster
spec:
  backend:
    serviceName: hello-service
    servicePort: http

Doesn't work:

==> kube.yaml <==
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: hello-http  # Changed
      nodePort: 30091
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello-container
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: hello-http  # Changed
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: hello-http  # Changed
          initialDelaySeconds: 25
          periodSeconds: 5

==> ingress.yaml <==
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hc-from-probe
  annotations:
    kubernetes.io/ingress.class: gce-multi-cluster
spec:
  backend:
    serviceName: hello-service
    servicePort: http

Works:

==> kube.yaml <==
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: hello-http
      nodePort: 30091
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello-container
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - name: hello-http
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: hello-http
          initialDelaySeconds: 25
          periodSeconds: 5

==> ingress.yaml <==
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hc-from-probe
  annotations:
    kubernetes.io/ingress.class: gce-multi-cluster
spec:
  backend:
    serviceName: hello-service
    servicePort: 8080  # Changed
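Taken together, the four experiments above suggest a simple rule. This is only a guess inferred from these cases, not kubemci's actual code: a numeric servicePort always lets kubemci find the probe, while a named servicePort works only when the Service port name, the targetPort, and the containerPort name are all the same string. A sketch in Python:

```python
def kubemci_finds_probe(service_port_ref, svc_port_name, target_port, container_port_name):
    """Apparent matching rule inferred from the four experiments above.

    service_port_ref is the Ingress backend's servicePort (int or str);
    the other arguments are the Service port name, the Service targetPort,
    and the container port name. In all four cases the readiness probe
    references the containerPort by its name.
    """
    if isinstance(service_port_ref, int):
        # A numeric servicePort worked in every experiment above.
        return True
    # A named servicePort only worked when the same name was used for the
    # Service port, the targetPort, and the containerPort.
    return service_port_ref == svc_port_name == target_port == container_port_name
```

Checked against the four cases in order, this rule reproduces doesn't-work, works, doesn't-work, works.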

@G-Harmon
Contributor

Looks like you figured out that the Service's port.Name needs to match up with the name of the Deployment's containerPort.

It seems like most, if not all, of the problem is setting up the Ingress, Service, and Deployment incorrectly. Do you agree? If so, can we close this as working-as-intended?

@jcao219
Author

jcao219 commented May 14, 2018

I'm not sure I set up the Kubernetes manifests wrong. Is this requirement specific to kubemci? My first example (a Service port.name of 'http' and port 8083, mismatched against the Deployment's containerPort 8080 named 'hello-port') works correctly with GKE's ingress-gce.

@G-Harmon
Contributor

No, we don't have this requirement from Kubemci. (Its main requirement is that the same NodePorts are used across clusters.)

@nicksardo to comment on why/whether things should work as you describe in the previous comment.

@nicksardo

That configuration should be allowed. The non-MCI ingress controller seems to behave as expected: https://github.com/kubernetes/ingress-gce/blob/master/pkg/controller/translator/translator.go#L228-L274.
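The resolution that the linked non-MCI code performs can be sketched roughly like this (Python pseudocode of the Go logic; the function and field names here are illustrative, not the controller's actual identifiers):

```python
def find_probe(service_port_ref, service_ports, containers):
    """Resolve an Ingress backend's servicePort (name or number) through
    the Service's ports to a matching containerPort, then return that
    container's readiness probe, if any."""
    # Step 1: match servicePort against the Service's ports by name or number.
    svc_port = next(
        (p for p in service_ports if service_port_ref in (p.get("name"), p["port"])),
        None,
    )
    if svc_port is None:
        return None
    target = svc_port["targetPort"]  # a port name (str) or number (int)
    # Step 2: match targetPort against container ports by name or number.
    for c in containers:
        for cp in c.get("ports", []):
            if target in (cp.get("name"), cp["containerPort"]):
                return c.get("readinessProbe")
    return None
```

Run against the first "Doesn't work" manifests above (servicePort "http" resolving to targetPort "hello-port", which names containerPort 8080), this two-step resolution does find the probe, consistent with the report that ingress-gce handles that configuration.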

@kiyose

kiyose commented Jun 6, 2018

I am also having issues with this. My configuration is as follows.

service.yaml

...
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 30183
    port: 80
    protocol: TCP
    targetPort: svc-port

deploy.yaml

...
          ports:
          - containerPort: 3001
            name: svc-port
            protocol: TCP
          readinessProbe:
            failureThreshold: 10
            httpGet:
              path: /ping
              port: svc-port
              scheme: HTTP
            initialDelaySeconds: 30
...

ingress.yaml

...
  rules:
  - host: preview.foo.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: site-preview
          servicePort: 80
      - path: "/*"
        backend:
          serviceName: site-preview
          servicePort: 80

Curling the <clusterIp>:<nodePort>/ping returns a 200.

When I manually update the health check with the /ping endpoint, the LB works. This seems to be an issue with recognizing the alternate endpoint from the getProbe call and subsequently updating the health check.

The kubemci log:

Pod site-preview-c6cddcf4b-xt2pj matching service selectors app=site-preview (targetport ): lacks a matching HTTP probe for use in health checks.
Path for healthcheck is /
Ensuring health check for port: {SvcName:web-dev/site-preview SvcPort:{Type:0 IntVal:80 StrVal:} NodePort:30183 Protocol:HTTP SvcTargetPort: NEGEnabled:false}
Health check mci1-hc-30183--site-view-dev exists already. Checking if it matches our desired health check
Updating existing health check mci1-hc-30183--site-view-dev to match the desired state
Health check mci1-hc-30183--site-view-dev updated successfully

@jobrs

jobrs commented Aug 15, 2018

+1

@codefrau

I discovered that the only case in which it works is when the containerPort is named and the servicePort is numeric. It does not work with a numeric containerPort. (I didn't try a named servicePort, since the original report in this issue already covers that.)

@codefrau

I find it suspicious that SvcTargetPort is empty. Maybe that's why it can't match with a numeric containerPort?

Pod xyz-rk6vz matching service selectors app=xyz (targetport ): lacks a matching HTTP probe for use in health checks.
Path for healthcheck is /
Ensuring health check for port: {SvcName:default/xyz-service SvcPort:{Type:0 IntVal:80 StrVal:} NodePort:30080 Protocol:HTTP SvcTargetPort: NEGEnabled:false}
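If the probe lookup compares the probe's port against that empty SvcTargetPort string, nothing can ever match, whether the probe port is a name or a number. A hypothetical illustration of this suspicion (not kubemci's actual code):

```python
def probe_matches(probe_port, svc_target_port):
    """Compare a readiness-probe port (name or number) against the
    service's recorded targetPort string."""
    return str(probe_port) == svc_target_port
```

With SvcTargetPort recorded as the empty string, as in both logs above, every comparison fails, which would explain the "lacks a matching HTTP probe" message regardless of how the probe port is written.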
