Hi, I'm looking at boundary-reference-architecture/deployment/kube/kubernetes/boundary_config.tf, and I'm not sure what to specify for public_cluster_addr in the controller configuration, or for address, controllers, and public_addr in the worker configuration.
The configmap.yaml I'm using is as below. I'm running my Kubernetes cluster in an AWS private subnet, so I have no idea what to specify for the controller's public_cluster_addr. Also, I believe the example runs the controller and worker in the same pod, so I assumed the worker's address, controllers, and public_addr should be localhost. Is that correct? (By the way, I'm implementing the /kubernetes part with a Helm chart I wrote, since the example is in Terraform and I prefer Helm.)
This configuration seems to be wrong, because I get the connection error below when I try to access Redis using the example:
❯ boundary connect -exec redis-cli -target-id ttcp_er1Yy3ROiI -- -h http://boundary.dev.mydomain.cloud -p 80
Could not connect to Redis at http://boundary.dev.mydomain.cloud:80: nodename nor servname provided, or not known
not connected>
The public cluster address is the address advertised to the workers as the means to connect to your controllers. We do this so the controllers can live behind a well-known domain name or elastic IP address, which often translates to a load balancer, for ensuring high availability of the controller nodes: https://www.boundaryproject.io/docs/configuration/controller#public_cluster_addr
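For reference, here is a minimal sketch of what the relevant stanzas might look like (HCL, as in boundary_config.tf). The hostnames are placeholders, and KMS/TLS stanzas are omitted for brevity:

```hcl
controller {
  name        = "kubernetes-controller"
  description = "Boundary controller"

  # Address the controller advertises to workers for the cluster (9201) connection.
  # Typically a stable DNS name or elastic IP in front of the controllers
  # (often an internal load balancer), not localhost.
  public_cluster_addr = "boundary-controller.internal.example.com"

  database {
    url = "env://BOUNDARY_PG_URL"
  }
}

listener "tcp" {
  address = "0.0.0.0"
  purpose = "api"
}

listener "tcp" {
  address = "0.0.0.0"
  purpose = "cluster"
}

worker {
  name        = "kubernetes-worker"
  description = "Boundary worker"

  # Address the worker advertises to clients for session proxying (9202).
  public_addr = "boundary-worker.example.com"

  # Controller addresses the worker dials; if the worker and controller run
  # in the same pod, the loopback address can work here.
  controllers = ["127.0.0.1"]
}

listener "tcp" {
  address = "0.0.0.0"
  purpose = "proxy"
}
```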