
Prefetched messages cannot be drained when pausing receiving #712

Open
SeanFeldman opened this issue Jun 21, 2024 · 1 comment

Comments

@SeanFeldman
Contributor

Description

Prefetching improves overall throughput and is a desirable setting for most high-load scenarios.
Unfortunately, a design flaw prevents prefetched messages from being processed when a receiver is paused. An example is ServiceBusProcessor, which can be paused and restarted. When a message is prefetched and the processor is paused, the message's lock starts ticking, but the message cannot be received by the processor. The lock eventually expires and the message is redelivered later. The assumption that redelivery fixes the issue does not hold in all scenarios, especially when a processor is stopped and started multiple times. As a result, many messages end up in the dead-letter queue after repeated LockLostExceptions caused by prefetched messages whose locks expired, and the more aggressive the prefetch, the worse the problem becomes.
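To make the failure mode concrete, here is a minimal sketch. The issue is about ServiceBusProcessor, but the same prefetch buffer underlies plain receivers, so the sketch approximates the pause with a receiver from the Python SDK (azure-servicebus); the connection string, queue name, and timings are hypothetical and assume the default 1-minute lock duration.

```python
# Minimal sketch of the failure mode, assuming the Python SDK
# (azure-servicebus). Connection string, queue name, and timings
# are placeholders, not taken from the issue.
import time

from azure.servicebus import ServiceBusClient

client = ServiceBusClient.from_connection_string("<connection-string>")

# Aggressive prefetch: up to 100 messages are pulled into the client-side
# buffer, and each buffered message's lock starts ticking on the broker.
with client.get_queue_receiver("orders", prefetch_count=100) as receiver:
    for msg in receiver.receive_messages(max_message_count=1, max_wait_time=5):
        receiver.complete_message(msg)

    # "Pause": the application stops pulling from the buffer. There is no
    # API to drain or release the prefetched messages, so their locks
    # expire while they sit in the buffer.
    time.sleep(90)  # longer than the default lock duration

    # "Resume": settling a buffered message whose lock has expired fails
    # (in this SDK, with a MessageLockLostError) and the delivery count
    # increments; enough pause/resume cycles dead-letter the message.
    for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
        receiver.complete_message(msg)  # raises for expired-lock messages
```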

Actual Behavior

  1. Processor paused
  2. No prefetched messages are drained
  3. Processor restarted
  4. LockLostExceptions are thrown, and some messages eventually end up in the dead-letter queue after exceeding the maximum delivery count

Expected Behavior

  1. Processor paused
  2. Prefetched messages are drained
  3. Processor restarted
  4. No LockLostExceptions
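Until the SDK offers a way to drain the buffer, an approximate user-level mitigation is possible. The sketch below (again the Python SDK, with a hypothetical helper and names) empties the prefetch buffer on pause by receiving and abandoning the buffered messages, which releases their locks immediately instead of letting them expire. Abandoning still increments the delivery count, so this only softens the problem; setting prefetch_count to 0 remains the only complete workaround when frequent pausing is expected.

```python
# Minimal sketch of a drain-on-pause mitigation, assuming the Python SDK
# (azure-servicebus). drain_prefetch_buffer is a hypothetical helper.
from azure.servicebus import ServiceBusClient

def drain_prefetch_buffer(receiver, batch_size=100):
    """Receive and abandon whatever the client has already buffered.

    A short max_wait_time mostly returns locally buffered messages,
    though it may also pull a few fresh ones, so this is approximate.
    Abandoning releases each lock immediately (at the cost of one
    delivery-count increment) instead of letting it expire.
    """
    while True:
        batch = receiver.receive_messages(
            max_message_count=batch_size, max_wait_time=1
        )
        if not batch:
            break
        for msg in batch:
            receiver.abandon_message(msg)

client = ServiceBusClient.from_connection_string("<connection-string>")
with client.get_queue_receiver("orders", prefetch_count=100) as receiver:
    # ... normal receive/complete loop ...
    drain_prefetch_buffer(receiver)  # call when pausing the receiver
```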
@EldertGrootenboer
Contributor

Thank you for suggesting this feature. We have opened an investigation task for this in our backlog, and will update this issue when we have more information. To help us get this prioritized, it would be helpful to see others vote for and support this feature, as well as explain their scenarios.
