One part of the outstanding pull request #6 addresses this.
If a queue is one of many subscribers to a topic, then we want all the queues to have a 'fair go' at processing a message before the max retention period of the topic lapses.
With the current setup, deleting a message from the queue via the extended lib also deletes the underlying S3 bucket. That is fine when the queue is the only consumer of the message (and, implicitly, of the S3 bucket), but in a topic-and-queue setup the first consumer to delete will stop every other subscriber from getting the contents of the S3 bucket.
We can currently work around this by writing custom code that overrides the client's behaviour when deleting a message, but that feels fragile given we don't control the library itself.
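For reference, the workaround is roughly the sketch below. It assumes the Java extended client and AWS SDK for Java v1, and that the extended client embeds the S3 pointer into the receipt handle using the `-..s3BucketName..-` / `-..s3Key..-` markers (worth verifying against the version you run); the class name and marker constants here are illustrative, not part of the library.

```java
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.model.DeleteMessageRequest;

/**
 * Workaround sketch: delete the SQS message but leave the S3 payload in place
 * so other topic subscribers can still fetch it. Assumes the extended client
 * embeds the S3 pointer into the receipt handle with the markers below.
 */
public class KeepPayloadDeleter {

    private static final String BUCKET_MARKER = "-..s3BucketName..-"; // assumed marker format
    private static final String KEY_MARKER = "-..s3Key..-";           // assumed marker format

    private final AmazonSQS plainSqs; // the underlying, non-extended SQS client

    public KeepPayloadDeleter(AmazonSQS plainSqs) {
        this.plainSqs = plainSqs;
    }

    /** Strips the embedded S3 pointer, then deletes only the SQS message. */
    public void deleteMessageKeepPayload(String queueUrl, String receiptHandle) {
        plainSqs.deleteMessage(new DeleteMessageRequest(queueUrl, stripS3Pointer(receiptHandle)));
    }

    private static String stripS3Pointer(String receiptHandle) {
        if (!receiptHandle.startsWith(BUCKET_MARKER)) {
            return receiptHandle; // small message, no S3 pointer embedded
        }
        // Assumed embedded form: bucketMarker + bucket + bucketMarker
        //                      + keyMarker + key + keyMarker + originalReceiptHandle
        int secondKeyMarker = receiptHandle.indexOf(
                KEY_MARKER, receiptHandle.indexOf(KEY_MARKER) + KEY_MARKER.length());
        return receiptHandle.substring(secondKeyMarker + KEY_MARKER.length());
    }
}
```

This keeps the extended client for send/receive and only bypasses it for deletes, which is exactly the part we'd prefer the library to make configurable.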
It would be great to get a configuration option that leaves the message in S3 when a consumer sends a delete message request. This way we could let the topic drive when the S3 bucket is finally cleaned up.
I don't really understand this use case, or the infrastructure setup. Do all the queues use the same underlying bucket? I thought they would use different buckets. I could be wrong, but reading the docs I thought the item in the bucket was deleted, not the bucket itself.