Summary
Add a new Valkey addon supporting sentinel-based replication topology (3 data pods + 3 sentinel pods).
Features
- Topology: Sentinel-based replication (primary + replicas, 3 sentinels for HA)
- Versions: Valkey 8.x and 9.x
- Operations: All standard KubeBlocks OpsRequests (VScale, HScale, Restart, Stop/Start, Upgrade, Reconfiguring, Expose, Backup/Restore, TLS, ACL, Switchover, RebuildInstance)
- Backup: Physical backup via kopia with pre/post-restore sentinel re-registration
- TLS: Full TLS support, including TLS for sentinel connections and replication
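The pre/post-restore sentinel re-registration can be sketched as dropping each sentinel's stale master entry and re-adding the restored one. A minimal sketch only: `reregister_sentinels`, the variable names, and the overridable `CLI` indirection are all illustrative, not the addon's actual script.

```shell
CLI=${CLI:-valkey-cli}  # overridable so the loop can be exercised without a cluster

# reregister_sentinels: after a physical restore the master may come back under
# a new address, so each sentinel forgets the stale entry and monitors the
# restored master again with the configured quorum.
reregister_sentinels() {
  for s in $SENTINELS; do
    $CLI -h "${s%:*}" -p "${s#*:}" SENTINEL REMOVE "$MASTER_NAME"
    $CLI -h "${s%:*}" -p "${s#*:}" \
      SENTINEL MONITOR "$MASTER_NAME" "$NEW_MASTER_HOST" "$NEW_MASTER_PORT" "$QUORUM"
  done
}
```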
Key design decisions
- Sentinel uses emptyDir (no PVC); its conf is rebuilt from scratch on every restart via the `_background_monitor_discovery` background loop
- `valkey-start.sh` uses a quorum-based sentinel query (a majority of sentinels must agree on the same master) before falling back to the lexicographic heuristic, preventing split-brain during sentinel convergence windows
- `valkey-member-leave.sh` selects the sentinel with the highest config-epoch (avoiding isolated/stale sentinels) and skips `SENTINEL RESET` unless a `SENTINEL FAILOVER` was actually triggered and completed; a premature RESET temporarily zeros `num-slaves` and can cause restarting pods to elect a second master
- `switchover.sh` uses a strict `wait_for_new_master || return 1` for targeted switchover, ensuring KubeBlocks reports failure if the wrong candidate was elected
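The quorum check in `valkey-start.sh` amounts to a majority vote over the sentinels' answers. A minimal sketch, assuming each sentinel's `SENTINEL get-master-addr-by-name` reply has already been collapsed to one `host:port` line per responder; `majority_vote` and the variable names are illustrative, not the addon's actual helpers.

```shell
# majority_vote: read one "host:port" answer per line on stdin and print the
# answer shared by a strict majority of responders; fail if there is no quorum.
majority_vote() {
  answers=$(cat)
  total=$(printf '%s\n' "$answers" | grep -c . || true)
  [ "$total" -eq 0 ] && return 1
  # Most common answer first, as "count answer"
  top=$(printf '%s\n' "$answers" | sort | uniq -c | sort -rn | head -n 1)
  count=$(printf '%s\n' "$top" | awk '{print $1}')
  winner=$(printf '%s\n' "$top" | awk '{print $2}')
  if [ "$count" -gt $((total / 2)) ]; then
    printf '%s\n' "$winner"
  else
    return 1  # no majority yet: caller falls back to the lexicographic heuristic
  fi
}

# Feeding it (illustrative): query every sentinel and join the two-line
# "host\nport" reply into "host:port" before voting.
#   for s in $SENTINELS; do
#     valkey-cli -h "${s%:*}" -p "${s#*:}" \
#       SENTINEL get-master-addr-by-name "$MASTER_NAME" | paste -d: - -
#   done | majority_vote
```

Requiring a strict majority (not just any agreement) is what makes the check safe during the convergence window, when a minority of sentinels may still report the old master.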
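The config-epoch selection in `valkey-member-leave.sh` can be sketched as follows, assuming the epochs have already been gathered as `host:port epoch` lines; the gathering step in the comment and all names here are illustrative.

```shell
# pick_freshest_sentinel: read "host:port epoch" lines on stdin and print the
# address of the sentinel with the highest config-epoch; isolated or stale
# sentinels carry an older epoch and therefore lose this comparison.
pick_freshest_sentinel() {
  sort -k2,2n | tail -n 1 | awk '{print $1}'
}

# Gathering epochs (illustrative): SENTINEL master returns a flat field/value
# list, so take the line that follows "config-epoch".
#   for s in $SENTINELS; do
#     epoch=$(valkey-cli -h "${s%:*}" -p "${s#*:}" SENTINEL master "$MASTER_NAME" \
#       | awk 'prev == "config-epoch" { print; exit } { prev = $0 }')
#     printf '%s %s\n' "$s" "$epoch"
#   done | pick_freshest_sentinel
```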
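The strict wait in `switchover.sh` can be sketched as a bounded poll; `get_current_master`, the retry count, and the interval are assumptions for illustration, not the script's actual values.

```shell
# Assumed helper (illustrative): ask one sentinel which master it reports,
# joining the two-line "host\nport" reply into "host:port".
get_current_master() {
  valkey-cli -h "$SENTINEL_HOST" -p 26379 \
    SENTINEL get-master-addr-by-name "$MASTER_NAME" | paste -d: - -
}

# wait_for_new_master: poll until the reported master equals the switchover
# candidate, or give up so the caller can propagate the failure.
wait_for_new_master() {
  candidate=$1
  retries=${2:-30}
  while [ "$retries" -gt 0 ]; do
    [ "$(get_current_master)" = "$candidate" ] && return 0
    retries=$((retries - 1))
    sleep 1
  done
  return 1  # wrong (or no) candidate elected
}

# switchover.sh then fails the OpsRequest rather than reporting success:
#   wait_for_new_master "$CANDIDATE" || return 1
```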