Ingress ALB #93
There is one example using EKS in the ingress configurations here; not sure if that can help you: https://github.com/AzBuilder/terrakube-helm-chart/blob/main/examples/AzureAuthentication-Example4.md
@alfespa17
Can we connect sometime, @alfespa17?
Not sure if I will be able to help; I don't work with AWS, but as far as I know the only things that need to be updated are the ingress annotations. I mostly use Azure and GCP. Maybe you can check the discussions or other issues in the main repository; someone there might be able to help.
@sagarnitd Any luck on this? I installed the AWS Load Balancer Controller (briefly: I used a Helm chart to install it) and then tried to follow the ingress setup shown in https://github.com/AzBuilder/terrakube-helm-chart/blob/main/examples/AzureAuthentication-Example4.md. The best I was able to do from that was get four network load balancers to deploy, one each for the API, Executor, Registry, and UI services. Furthermore, the listeners in each NLB were not listening on port 443 as I hoped; they were listening on the cluster service ports themselves.

Although I'm still troubleshooting things, I made some progress. I disabled the individual ingresses in the Helm values:

```yaml
## Ingress properties
ingress:
  useTls: false
  includeTlsHosts: false
  ui:
    enabled: false
  api:
    enabled: false
  registry:
    enabled: false
  dex:
    enabled: false
```

I don't know if it matters, but I disabled the TLS settings (`useTls` and `includeTlsHosts`) as well.

Then I deployed a separate ALB ingress that maps to the API, Dex (not Executor), Registry, and UI services. The path settings come from copying over the paths in the example (I had trouble using […]).

Here's my ingress definition, in a separate YAML file:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  namespace: terrakube
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: terrakube-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:<account id>:certificate/<id of your SSL cert in ACM>
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/group.name: alb-group
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: terrakube-ui.<snip>.com
      http:
        paths:
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-ui-service
                port:
                  number: 8080
    - host: terrakube-api.<snip>.com
      http:
        paths:
          - path: "/dex/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-dex
                port:
                  number: 5556
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-api-service
                port:
                  number: 8080
    - host: terrakube-reg.<snip>.com
      http:
        paths:
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-registry-service
                port:
                  number: 8075
```

Finally, I applied it after deploying the Helm chart. I get one ALB deployed, and the listeners have targets that seem to line up with the service ports. The UI login page resolves for me and I can log into Cognito (the IdP for the Dex connector I am using).

I don't think this last part will be fully relevant to you, but you might still encounter errors after deploying the separate ingress. In my case, after authenticating with Cognito, I got a peculiar error during the Dex redirect. It's probably an error specific to my configuration.
I made more progress, and I'm now able to log into the application inside my EKS cluster.

Maybe either of you would be interested in these notes, so I'll leave them here. Right now I'm using these ingress settings inside the Helm values:

```yaml
## Ingress properties
ingress:
  useTls: true
  includeTlsHosts: true
  ui:
    enabled: false
    domain: "terrakube-ui.<snip>.com"
    ingressClassName: "alb"
    tlsSecretName: tls-secret-ui-terrakube
    annotations: {}
  api:
    enabled: false
    domain: "terrakube-api.<snip>.com"
    ingressClassName: "alb"
    tlsSecretName: tls-secret-api-terrakube
    annotations: {}
  registry:
    enabled: false
    domain: "terrakube-reg.<snip>.com"
    ingressClassName: "alb"
    tlsSecretName: tls-secret-reg-terrakube
    annotations: {}
  dex:
    enabled: false
    annotations: {}
```

The ALB ingress specification above allows me to finally log into the UI. The only issue I had at that point was that the provisioned load balancer was not automatically applying a health check based on each pod's readiness probe settings, and I wanted to figure out how to get the ALB controller to apply those health check settings automatically. Unfortunately, I could not figure out how to apply custom health check paths and ports inside one Ingress definition for the ALB. However, I can define the custom health check configurations if I go back to using multiple ingress definitions; then I can apply separate annotations to each ingress. These are the ALB health check annotations that work if applied to each ingress separately:

```yaml
annotations:
  alb.ingress.kubernetes.io/healthcheck-path: <health check path>
  alb.ingress.kubernetes.io/healthcheck-port: <health check port>
```

But oddly enough, when I use separate ingresses, the load balancer no longer satisfies the priority between rules under the Terrakube API service; I specifically have trouble logging into the UI unless the load balancer evaluates the `/dex/*` rule before the `/*` rule:

```yaml
- host: terrakube-api.<snip>.com
  http:
    paths:
      - path: "/dex/*"
        pathType: ImplementationSpecific
        backend:
          service:
            name: terrakube-dex
            port:
              number: 5556
      - path: "/*"
        pathType: ImplementationSpecific
        backend:
          service:
            name: terrakube-api-service
            port:
              number: 8080
```

And this is in spite of using these annotations to try to force the dex Ingress before the api Ingress:

```yaml
alb.ingress.kubernetes.io/priority: <priority number>
alb.ingress.kubernetes.io/group.name: alb-group
alb.ingress.kubernetes.io/load-balancer-name: terrakube-alb
```

Finally, what did work to get one ALB, with the health check settings per path, while maintaining rule priorities in the API subdomain host, was applying these three annotation sets:

```yaml
annotations:
  alb.ingress.kubernetes.io/healthcheck-path: /actuator/health/readiness
  alb.ingress.kubernetes.io/healthcheck-port: '8080'
```

```yaml
annotations:
  alb.ingress.kubernetes.io/healthcheck-path: /actuator/health/readiness
  alb.ingress.kubernetes.io/healthcheck-port: '8075'
```

```yaml
annotations:
  alb.ingress.kubernetes.io/healthcheck-path: /healthz/ready
  alb.ingress.kubernetes.io/healthcheck-port: '5558'
```

I suspect I am overlooking something regarding the ALB controller that would make the load balancer provisioning a bit cleaner, but what I have shown seems to give me everything for my DNS/load balancer settings.
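An aside not from the original thread: if the problem is rule ordering across multiple Ingresses sharing one ALB, the AWS Load Balancer Controller documents an `alb.ingress.kubernetes.io/group.order` annotation (lower values are evaluated first) rather than a `priority` annotation. A minimal sketch of that approach, reusing the `terrakube-dex` and `terrakube-api-service` services from the comment above; the ingress names here are hypothetical:

```yaml
# Sketch only: give the Dex ingress a lower group.order than the API ingress
# so its /dex/* rule is evaluated first on the shared ALB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dex-ingress          # hypothetical name
  namespace: terrakube
  annotations:
    alb.ingress.kubernetes.io/group.name: alb-group
    alb.ingress.kubernetes.io/group.order: '10'   # evaluated before higher values
spec:
  ingressClassName: alb
  rules:
    - host: terrakube-api.<snip>.com
      http:
        paths:
          - path: "/dex/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-dex
                port:
                  number: 5556
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress          # hypothetical name
  namespace: terrakube
  annotations:
    alb.ingress.kubernetes.io/group.name: alb-group
    alb.ingress.kubernetes.io/group.order: '20'   # catch-all rule comes second
spec:
  ingressClassName: alb
  rules:
    - host: terrakube-api.<snip>.com
      http:
        paths:
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-api-service
                port:
                  number: 8080
```

With separate Ingresses like this, the per-ingress `healthcheck-path`/`healthcheck-port` annotations described above can also be applied individually while the ALB rule order stays deterministic.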
Hello @genedev22, I think this comment can help you. DEX and the API are two different components, but to simplify the deployment we just reuse the same API domain for DEX under one specific path. You can deploy those two services using different domains if you want.
You could in theory simply deploy DEX using its own Helm chart on another domain, then change your Terrakube Helm values to disable the internal DEX that Terrakube provides and use the other DEX, like:

```yaml
dex:
  config:
    issuer: https://my-custom-dex.with-other-domain.com
```
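For what it's worth, a sketch of what that might look like in the Terrakube values, based on the keys that appear elsewhere in this thread (`dex.enabled`, `global.security.dexIssuerUri`); the external domain is a placeholder:

```yaml
## Sketch, not from the thread: point Terrakube at an externally deployed Dex
## and skip deploying the chart's bundled one.
global:
  security:
    dexClientId: cognito-app
    dexIssuerUri: "https://my-custom-dex.with-other-domain.com"

dex:
  enabled: false   # do not deploy the chart's internal Dex
```

The external Dex would then carry its own ingress on its own domain, which sidesteps the `/dex/*` vs `/*` rule-ordering issue on the API host entirely.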
@alfespa17 I like your idea. This should let me get around the load balancer priority rules not being satisfied (as I had encountered) and let me bring the UI/API/Registry ingress settings back into the Helm values. Given that I'm ready to try out the application, I'll defer that separate DEX exploration for now, but thank you for the suggestion.
Hi @genedev22! Can you show your full values.yml please? I'm running into the same issues as you and I'm having trouble fixing my UI! I'm not getting through the login page 😢
Quick note: I have a consistent issue with my Executor pod crashing during startup, but a quick […]. I started off using these examples as a reference.

My full values are below. If you see an error in the UI, try opening your browser's dev tools (F12) and check whether it's trying to reach an https or http endpoint. In my case, I have it set up for https. The devs can speak better than me on this, but my crude understanding of the auth workflow (based on my configuration) is: […]

Other: the subdomains don't have to match the defaults shown as far as I can tell, so you should have flexibility in renaming them ... but make sure you're consistent with your subdomains throughout.

```yaml
## Global Name
name: "terrakube"

global:
  imagePullSecrets: []
  security:
    useOpenLDAP: false
    adminGroup: "<name of your AWS Cognito admin group>"
    patSecret: "<PAT secret in base64>"
    internalSecret: "<internal secret in base64>"
    dexClientId: cognito-app
    dexClientScope: "email openid profile offline_access groups"
    dexIssuerUri: "https://terrakube-api.your-domain.com/dex"
    existingSecret: false

## Dex
dex:
  enabled: true
  existingSecret: false
  config:
    issuer: "https://terrakube-api.your-domain.com/dex"
    storage:
      type: memory
    web:
      http: 0.0.0.0:5556
      allowedOrigins: ['*']
    skipApprovalScreen: true
    oauth2:
      responseTypes: ["code", "token", "id_token"]
    connectors:
      - type: oidc
        id: cognito
        name: AWS Cognito
        config:
          issuer: "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_<user pool id>"
          clientID: "<your AWS Cognito app's client id>"
          clientSecret: "<your AWS Cognito app's secret key>"
          redirectURI: "https://terrakube-api.your-domain.com/dex/callback"
          scopes:
            - openid
            - email
            - profile
          insecureSkipEmailVerified: true
          insecureEnableGroups: true
          userNameKey: "cognito:username"
          claimMapping:
            groups: "cognito:groups"
    staticClients:
      - id: cognito-app
        redirectURIs:
          - 'https://terrakube-ui.your-domain.com'
          - '/device/callback'
          - 'http://localhost:10000/login'
          - 'http://localhost:10001/login'
        name: 'cognito-app'
        public: true

## Terraform Storage
storage:
  defaultStorage: false
  aws:
    accessKey: "<the IAM access key for an IAM key pair>"
    secretKey: "<the IAM secret key for an IAM key pair>"
    bucketName: "name-of-bucket-in-your-aws-account"
    region: "your-aws-region"

# Default Redis Configuration
redis:
  architecture: "standalone"
  auth:
    enabled: true
    password: "your-redis-password"

## API properties
api:
  existingSecret: false
  enabled: true
  image: azbuilder/api-server
  version: ""
  replicaCount: "1"
  serviceType: "ClusterIP"
  serviceAccountName: ""
  rbac:
    create: false
    roleName: "terrakube-api-role"
    roleBindingName: "terrakube-api-role-binding"
  secrets:
    - terrakube-api-secrets
  resources: {}
  podLabels: {}
  defaultDatabase: false
  defaultRedis: true
  loadSampleData: false
  terraformReleasesUrl: "https://releases.hashicorp.com/terraform/index.json"
  securityContext: {}
  containerSecurityContext: {}
  imagePullSecrets: []
  initContainers: []
  cache:
    moduleCacheMaxTotal: "128"
    moduleCacheMaxIdle: "128"
    moduleCacheMinIdle: "64"
    moduleCacheTimeout: "600000"
    moduleCacheSchedule: "0 */3 * ? * *"
  properties:
    databaseType: "POSTGRESQL"
    databaseHostname: "<your postgres hostname>"
    databaseName: "terrakube"
    databaseUser: "terrakube"
    databaseSchema: "public"
    databasePassword: "<your DB password>"
    databaseSslMode: "require"
    databasePort: "5432"
    redisHostname: ""
    redisPassword: ""
    ## The database port is only used for mysql databases
    ## SslMode values are disable, allow, prefer, require, verify-ca, verify-full. Default mode is "disable".
    ## Reference: https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/PGProperty.html#SSL_MODE

## Executor properties
executor:
  existingSecret: false
  enabled: true
  image: "azbuilder/executor"
  version: ""
  replicaCount: "1"
  serviceType: "ClusterIP"
  serviceAccountName: ""
  secrets:
    - terrakube-executor-secrets
  resources: {}
  podLabels: {}
  podAnnotations: {}
  apiServiceUrl: "http://terrakube-api-service:8080"
  properties:
    toolsRepository: "https://github.com/AzBuilder/terrakube-extensions"
    toolsBranch: "main"
  securityContext: {}
  containerSecurityContext: {}
  imagePullSecrets: []
  initContainers: []

## Registry properties
registry:
  enabled: true
  existingSecret: false
  image: azbuilder/open-registry
  version: ""
  replicaCount: "1"
  serviceType: "ClusterIP"
  serviceAccountName: ""
  secrets:
    - terrakube-registry-secrets
  resources: {}
  podLabels: {}
  securityContext: {}
  containerSecurityContext: {}
  imagePullSecrets: []
  initContainers: []

## UI Properties
ui:
  enabled: true
  existingSecret: false
  image: azbuilder/terrakube-ui
  version: ""
  replicaCount: "1"
  serviceType: "ClusterIP"
  serviceAccountName: ""
  resources: {}
  podLabels: {}
  securityContext: {}
  containerSecurityContext: {}
  imagePullSecrets: []
  initContainers: []

## Ingress properties
ingress:
  useTls: true
  includeTlsHosts: true
  ui:
    enabled: false
    domain: "terrakube-ui.your-domain.com"
    ingressClassName: "alb"
    tlsSecretName: tls-secret-ui-terrakube
    annotations: {}
  api:
    enabled: false
    domain: "terrakube-api.your-domain.com"
    ingressClassName: "alb"
    tlsSecretName: tls-secret-api-terrakube
    annotations: {}
  registry:
    enabled: false
    domain: "terrakube-reg.your-domain.com"
    ingressClassName: "alb"
    tlsSecretName: tls-secret-reg-terrakube
    annotations: {}
  dex:
    enabled: false
    annotations: {}
```

And this is my one Ingress. I don't claim this is "the" way to set up your Ingress; it's just what finally worked for me, in that it gave me one ALB and the correct mapping of hosts and paths to the Terrakube services. In my case that meant this single Ingress, plus adding the ALB health check annotations in the service definition YAML files so the ALB health checks were reaching out to the correct health check paths and ports.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  namespace: terrakube
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: terrakube-alb
    alb.ingress.kubernetes.io/group.name: alb-group
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:<your account id>:certificate/<your ACM certificate id>
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '15'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/unhealthy-threshold-count: '2'
spec:
  ingressClassName: alb
  rules:
    - host: terrakube-ui.your-domain.com
      http:
        paths:
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-ui-service
                port:
                  number: 8080
    - host: terrakube-api.your-domain.com
      http:
        paths:
          - path: "/dex/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-dex
                port:
                  number: 5556
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-api-service
                port:
                  number: 8080
    - host: terrakube-reg.your-domain.com
      http:
        paths:
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-registry-service
                port:
                  number: 8075
```
Please provide an ingress.yaml for an AWS internal ALB.
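Not an answer from the maintainers, but based on the examples earlier in this thread, the main change for an internal ALB should be the `scheme` annotation (the AWS Load Balancer Controller supports `internal` and `internet-facing`). A sketch reusing the hostnames and the UI service from the comment above; the other host rules would follow the same pattern:

```yaml
# Sketch: same single-ALB ingress as above, but provisioned as an internal ALB.
# Only the scheme annotation changes; an internal ALB is reachable only from
# inside the VPC (or via VPN/peering), so plan DNS accordingly.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  namespace: terrakube
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: terrakube-alb
    alb.ingress.kubernetes.io/scheme: internal        # was: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-1:<your account id>:certificate/<your ACM certificate id>
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: terrakube-ui.your-domain.com
      http:
        paths:
          - path: "/*"
            pathType: ImplementationSpecific
            backend:
              service:
                name: terrakube-ui-service
                port:
                  number: 8080
    # ... API (/dex/* then /*) and Registry host rules as in the
    # internet-facing example earlier in the thread.
```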