New deployment against Vast S3 failing startup checks #8762
-
**Describe the bug**
We're attempting a new Mimir deployment against an S3 endpoint running on a Vast cluster (https://www.vastdata.com/). The pods that interface with S3 buckets are failing with "context deadline exceeded" on the various startup S3 endpoint checks (bucket location lookup, sanity-check-at-startup).

**To Reproduce**
Deploy Mimir with a values.yaml similar to:
AWS_ACCESS_KEY_ID="[redacted]" AWS_SECRET_ACCESS_KEY="[redacted]" helm install mimir grafana/mimir-distributed -f ./mimir.yaml -n lgtm

Logs from the mimir-alertmanager pod:
**Expected behavior**
The pods connect to the existing S3 buckets and enter a ready state.

**Environment**
RKE2 v1.27.2+rke2r1

**Additional context**
The Vast S3 endpoint should be as S3-compatible as MinIO. No known caveats, but we're absolutely willing to experiment!
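For reference, a minimal sketch of the S3 portion such a values.yaml typically carries when pointing the mimir-distributed chart at a non-AWS endpoint. The endpoint hostname is a hypothetical placeholder (the real one was not shared in the report), and the `${...}` references illustrate the env-var approach the reporter was using:

```yaml
# Hypothetical sketch -- endpoint is a placeholder, not the actual deployment value.
mimir:
  structuredConfig:
    common:
      storage:
        backend: s3
        s3:
          endpoint: vast-s3.example.internal:443
          access_key_id: ${AWS_ACCESS_KEY_ID}        # env-var reference, see resolution
          secret_access_key: ${AWS_SECRET_ACCESS_KEY}
```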
Replies: 2 comments
-
Converting to discussion due to lack of evidence of a Mimir bug.
-
Resolved; answering for posterity. This was a case of the $AWS_SECRET_ACCESS_KEY and $AWS_ACCESS_KEY_ID environment variables not making it into the installation. Moving those values onto the helm install command line via --set 'mimir.structuredConfig.common.storage.s3.secret_access_key=[redacted]' and --set 'mimir.structuredConfig.common.storage.s3.access_key_id=[redacted]' resolved the issue.
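A short sketch of the suspected failure mode (an assumption consistent with the fix above, not something the thread confirms explicitly): helm does not expand shell variable references found inside a values file, so a `${AWS_ACCESS_KEY_ID}` placeholder reaches the chart as a literal string. The file path `/tmp/mimir-creds.yaml` is just a throwaway example:

```shell
# Write a values fragment containing an unexpanded env-var reference.
# The quoted 'EOF' heredoc delimiter prevents the shell from expanding it,
# mimicking what helm sees when reading a values file.
cat > /tmp/mimir-creds.yaml <<'EOF'
mimir:
  structuredConfig:
    common:
      storage:
        s3:
          access_key_id: ${AWS_ACCESS_KEY_ID}
EOF

# The placeholder survives verbatim in the file helm would consume.
grep '\${AWS_ACCESS_KEY_ID}' /tmp/mimir-creds.yaml
```

Passing the credentials with `--set`, as in the fix above, avoids the problem because the values are injected directly into the chart's computed values rather than relying on any file-level substitution.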