
The TLS configuration cannot be placed in the Gateway's CR. #2665

Closed
fanux opened this issue Dec 11, 2023 · 10 comments
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@fanux

fanux commented Dec 11, 2023

In multi-tenant scenarios, different users have different domain names, and each domain name corresponds to a different certificate. If every tenant has to update the Gateway CR, they will inevitably interfere with one another, and with a thousand domain names the Gateway would end up with a thousand listeners. A more reasonable approach would be to configure TLS in the HTTPRoute, or to have a separate CRD that manages certificate configuration.

Now:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    # hostname: "*.example.com"
  - name: https
    port: 443
    protocol: HTTPS
    # hostname: "*.example.com"
    tls: 
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: example-com
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
      sectionName: https
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /

Better way:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    # hostname: "*.example.com"
  - name: https
    port: 443
    protocol: HTTPS
    # hostname: "*.example.com"
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
      sectionName: https
  hostnames:
    - "www.example.com"
  tls:
    mode: Terminate
    certificateRefs:
    - kind: Secret
      name: example-com
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /

OR:

apiVersion: gateway.networking.k8s.io/v1
kind: TLS
metadata:
  name: backend
spec:
  httproute: backend
  tls:
    mode: Terminate
    certificateRefs:
    - kind: Secret
      name: example-com
@fanux fanux added the kind/feature label Dec 11, 2023
@fanux fanux changed the title The tls config should not in Gateway CR The TLS configuration cannot be placed in the Gateway's CR. Dec 11, 2023
@fanux
Author

fanux commented Dec 11, 2023

Is there currently a good way to solve this? In our scenario there are tens of thousands of separate tenants, each of which may have its own domain names and certificates to configure. The Gateway is created centrally by cluster administrators, so tenants cannot modify its listeners, yet each tenant needs to configure the certificate for its own domain name in its own namespace.

@Xunzhuo
Member

Xunzhuo commented Dec 12, 2023

PTAL: https://gateway-api.sigs.k8s.io/api-types/backendtlspolicy
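
For reference, a minimal sketch of the API type linked above. Note that BackendTLSPolicy configures TLS from the Gateway to the backend Service rather than listener termination, and the field names below follow the v1alpha3 shape, which is an assumption; check the docs for the version your cluster serves:

# Sketch only: BackendTLSPolicy tells the Gateway how to validate TLS
# when it connects to the backend Service (not how to terminate client TLS).
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: backend-tls
spec:
  targetRefs:
  - group: ""
    kind: Service
    name: backend
  validation:
    # CA bundle used to verify the backend's serving certificate
    caCertificateRefs:
    - group: ""
      kind: ConfigMap
      name: backend-ca
    # Expected identity (SNI) of the backend
    hostname: backend.example.com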

@shaneutt shaneutt added the needs-triage label Dec 12, 2023
@yinxulai

This may be related to #749.

@youngnick
Contributor

We specifically designed Gateways to limit the number of Listeners to encourage people to have smaller numbers of Listeners in each Gateway, and we decided not to have TLS config in HTTPRoutes because it's rightly a property of the Listener.

For the case where a cluster has thousands of tenants with different domain names, I'd recommend treating the Gateway as part of the deployment and having an individual one per tenant, or sharding the tenants across Gateways.

I suspect that the problem here lies in the cost of handing out an IP (or other load-balancer-related resource) to each Gateway. #1713 is intended to add a standard way to have a single Gateway that holds the IP address and other config, and then merge other Gateways into it. That would make a scenario like the one you describe easier.
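
For illustration, a per-tenant Gateway along those lines might look something like this (the names, namespace, and hostname are made up):

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: tenant-a             # hypothetical tenant-owned Gateway
  namespace: tenant-a
spec:
  gatewayClassName: eg
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    hostname: "app.tenant-a.example"
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: tenant-a-cert  # certificate Secret in the tenant's own namespace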

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label May 29, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jun 28, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Jul 28, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@dghubble

We specifically designed Gateways to limit the number of Listeners to encourage people to have smaller numbers of Listeners in each Gateway, and we decided not to have TLS config in HTTPRoutes because it's rightly a property of the Listener.

This seems like an unfortunate choice. We have thousands of Ingress resources, each of which might have its own hostnames and TLS integrations. These can't be ported directly to HTTPRoutes because of this choice. We generally provide only a few ingress controllers/gateways, and creating thousands of Gateways is unsuitable (they're an infrastructure detail with management and monetary cost). With Ingress it was easy for an app developer to add an Ingress object to express their need for a cert, hostname, and paths mapped to their service. If we adopted this, we'd have app developers trying to extend the Gateway's listeners all the time.

A lot of the Gateway API examples rely heavily on folks using company-wide wildcards.
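
For context, the Ingress pattern described above, where an app developer declares the hostname, certificate, and paths in a single namespaced object, looks roughly like this (names are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: tenant-a
spec:
  ingressClassName: nginx      # whichever shared ingress controller class is in use
  tls:
  - hosts:
    - app.tenant-a.example
    secretName: app-tls        # tenant-managed certificate Secret
  rules:
  - host: app.tenant-a.example
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app
            port:
              number: 80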

@dghubble

Started discussion in #3418
