
feat: add fair-share allocation between queues #19

Merged
sylvesterdamgaard merged 6 commits into main from fair-share-allocation on Apr 30, 2026

Conversation

@sylvesterdamgaard
Contributor

Summary

Adds a fairness layer between per-queue demand evaluation and cross-host distribution, resolving the first-queue-wins starvation problem when total demand exceeds cluster capacity.

  • New FairShareAllocator class: min-first then proportional allocation with water-filling iteration to reclaim capacity freed by max clamping
  • Refactored evaluateAndPublishClusterRecommendations() from single evaluate-and-distribute loop into three phases: collect demands → fair-share allocate → distribute adjusted targets
  • Pinned (non-scalable) workloads bypass the allocator and subtract from available capacity
  • No new configuration keys — workers.max remains a safety bound, not a fairness mechanism
  • 16 new tests (14 unit + 2 integration), all 496 tests passing
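The strategy in the bullets above can be sketched as a minimal Python model (the actual FairShareAllocator is PHP; the function name, data shapes, and numbers here are illustrative assumptions, not the real API):

```python
def fair_share(queues: dict[str, dict], capacity: int) -> dict[str, int]:
    """Each queue: {"demand": int, "min": int, "max": int}."""
    # Phase 1: every queue is guaranteed its min, even if mins exceed capacity.
    alloc = {k: q["min"] for k, q in queues.items()}
    remaining = capacity - sum(alloc.values())

    # Phase 2: water-filling. Share the remainder proportionally to unmet
    # demand, clamp each grant at min(demand, max), and loop so capacity
    # freed by clamping flows to queues that still have headroom.
    while remaining > 0:
        # sorted() = deterministic tie-breaking by workload key
        needy = sorted(k for k in queues
                       if alloc[k] < min(queues[k]["demand"], queues[k]["max"]))
        if not needy:
            break  # everyone is at demand or max; leftover capacity stays idle
        total_need = sum(queues[k]["demand"] - alloc[k] for k in needy)
        grants = {k: remaining * (queues[k]["demand"] - alloc[k]) // total_need
                  for k in needy}
        # hand out the units lost to integer floors, first keys first
        for k in needy[:remaining - sum(grants.values())]:
            grants[k] += 1
        for k in needy:
            give = min(grants[k],
                       min(queues[k]["demand"], queues[k]["max"]) - alloc[k])
            alloc[k] += give
            remaining -= give
    return alloc

demo = {
    "emails":  {"demand": 40, "min": 1, "max": 20},  # will hit its max
    "reports": {"demand": 40, "min": 1, "max": 50},
    "idle":    {"demand": 1,  "min": 1, "max": 10},  # yields its capacity
}
print(fair_share(demo, 50))  # {'emails': 20, 'reports': 29, 'idle': 1}
```

Note that "emails" is clamped at 20 and the capacity it frees ends up with "reports" rather than being wasted, while "idle" keeps only what it asked for.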

Key behaviors

  • No contention → no change: if sum(demand) <= capacity, every queue gets exactly what it asked for
  • Mins guaranteed: every queue gets at least workers.min, even under pressure
  • Idle queues yield: capacity flows to queues that need it
  • Max as safety bound: proportional distribution + water-filling ensures max clamping doesn't waste capacity
  • Deterministic: tie-breaking by workload key ensures stable, reproducible allocations
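The proportional split has to produce whole workers, so fractional shares are rounded with a largest-remainder step. A sketch of that rounding (illustrative Python; the helper name and shapes are assumptions, not the PR's code):

```python
from math import floor

def largest_remainder(weights: dict[str, int], total: int) -> dict[str, int]:
    """Split `total` whole units proportionally to `weights`; leftover units
    go to the largest fractional parts, ties broken by key for determinism."""
    wsum = sum(weights.values())
    exact = {k: total * w / wsum for k, w in weights.items()}
    out = {k: floor(v) for k, v in exact.items()}
    leftover = total - sum(out.values())
    # largest fractional part first; key order breaks ties deterministically
    by_remainder = sorted(exact, key=lambda k: (out[k] - exact[k], k))
    for k in by_remainder[:leftover]:
        out[k] += 1
    return out

print(largest_remainder({"a": 1, "b": 1, "c": 1}, 10))  # {'a': 4, 'b': 3, 'c': 3}
```

Because ties break on the key rather than on iteration order, the same inputs always yield the same allocation regardless of input order.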

Closes #16

Test plan

  • No contention returns demands unchanged
  • Equal contention distributes proportionally with largest-remainder
  • Min guarantees hold even when mins exceed capacity
  • Idle queues yield all capacity to busy queues
  • Max clamping redistributes freed capacity via water-filling
  • Multiple queues hitting max in sequence converge correctly
  • Deterministic results regardless of input order
  • Fair-share + distributeClusterTarget integration (multi-host)
  • Full test suite passes (496 tests, 1241 assertions)
  • PHPStan clean apart from 1 pre-existing, unrelated error
  • Pint clean

🤖 Generated with Claude Code

sylvesterdamgaard and others added 6 commits April 30, 2026 13:16
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…Allocator (#16)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ibution (#16)

The proportional distribution now uses demand-based headroom instead of
max-capped headroom. This allows the water-filling iteration to actually
fire: queues may temporarily over-allocate past their max, the clamp step
corrects it, and freed capacity is redistributed in the next iteration.
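A minimal loop isolating just this mechanism (illustrative Python with made-up numbers; the real code is PHP): q1 is over-granted past its max because headroom is demand-based, the clamp corrects it, and the freed units reach q2 on the next pass.

```python
def water_fill(demand: dict[str, int], cap: dict[str, int],
               spare: int) -> dict[str, int]:
    alloc = {k: 0 for k in demand}
    while spare > 0:
        needy = sorted(k for k in demand if alloc[k] < min(demand[k], cap[k]))
        if not needy:
            break
        head = {k: demand[k] - alloc[k] for k in needy}  # demand-based, NOT max-capped
        total = sum(head.values())
        given = 0
        for k in needy:
            grant = spare * head[k] // total                      # may exceed k's max...
            give = min(grant, min(demand[k], cap[k]) - alloc[k])  # ...the clamp fixes it
            alloc[k] += give
            given += give
        if given == 0:        # integer floors stalled: place one unit by key order
            alloc[needy[0]] += 1
            given = 1
        spare -= given        # clamped (freed) units stay in `spare` for the next pass
    return alloc

# q1 is granted 15 but clamped at max 10; the freed 5 units flow to q2 next pass.
print(water_fill({"q1": 100, "q2": 100}, {"q1": 10, "q2": 100}, 30))
# {'q1': 10, 'q2': 20}
```

With max-capped headroom, q1's grant would never exceed 10 in the first place, so the clamp would never free anything and the iteration would never fire.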

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>


Development

Successfully merging this pull request may close these issues.

Cluster cap distribution is first-queue-wins: missing fairness layer between queues
