Ingesters cannot connect to S3 storage - Compactor can #2135
-
The issue you're reporting made me realise that we're not running the sanity check on the compactor (see mimir/pkg/mimir/sanity_check.go, line 131 at b13d2df). This explains why the compactor starts while the ingesters don't, but it still doesn't explain why the sanity check fails.
The "context deadline exceeded" error is a timeout error. It means the sanity check hasn't been able to
I agree this confirms the compactor's config looks good. Question: the exact same sanity check run by the ingesters is also run by queriers, rulers and store-gateways. Do they fail the sanity check too?
-
I'm using 2.1.0, running in an AWS ECS cluster. The EC2 instances have an IAM role attached which allows both listing the bucket and all actions on any object in it.
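Roughly, the role's S3 permissions look like this (sketched in CloudFormation-style YAML; the real role is equivalent but not copied verbatim):

```yaml
# Sketch of the instance role's S3 permissions; bucket name taken from
# the error log below, everything else illustrative.
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action: s3:ListBucket
      Resource: arn:aws:s3:::mimir-bucket
    - Effect: Allow
      Action: s3:*
      Resource: arn:aws:s3:::mimir-bucket/*
```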
If I upload a file from the host to the bucket, the compactor discovers it as a tenant (user), creates a directory with that name, and uploads the file bucket-index.json.gz there. This at least confirms the IAM role works, and in theory the config does too.
The exact same configuration (the Mimir conf is on a shared EFS and mounted as a volume for all containers) doesn't work for the ingesters. The ingester logs show:
level=warn ts=2022-06-19T18:58:27.064578882Z caller=sanity_check.go:116 msg="Unable to successfully connect to configured object storage (will retry)" err="blocks storage: unable to successfully send a request to object storage: Get \"https://mimir-bucket.s3.dualstack.us-east-1.amazonaws.com/sanity-check-at-startup\": context deadline exceeded"
The storage config:
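A minimal sketch of the relevant section is below; the bucket name and dualstack endpoint are taken from the error log above, and the region is assumed from the endpoint, so treat this as an approximation of the real file rather than a verbatim copy:

```yaml
# Sketch of the blocks storage section (values inferred from the
# error log; the actual file may differ).
blocks_storage:
  backend: s3
  s3:
    bucket_name: mimir-bucket                       # from the error URL
    endpoint: s3.dualstack.us-east-1.amazonaws.com  # from the error URL
    region: us-east-1
    # No access_key_id/secret_access_key here: credentials are meant
    # to come from the EC2 instance's IAM role.
```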
It seems strange to me that the compactor has no problem while the ingesters never pass the sanity check. Is there a step I'm missing? From what I can tell from other posts there isn't; that should be it.