This repository has been archived by the owner on Nov 18, 2017. It is now read-only.

Fencing and Quorum Support #25

Open
asmartin opened this issue Dec 4, 2015 · 3 comments

@asmartin

asmartin commented Dec 4, 2015

I am interested in Governor, but am curious about how it handles the following HA components:

  • fencing: being able to ensure that a failed node is really dead and won't come back online at some future point is critical for an HA cluster; does Governor provide any mechanism to STONITH or otherwise ensure that a failed node is guaranteed to be in a known state?
  • quorum: I've read how election of a new leader works, but how does Governor handle race conditions or ties? What if two nodes are exactly equal candidates in terms of WAL position - is there a mechanism in place to prevent both of them from becoming master concurrently?
@Winslett
Contributor

Winslett commented Dec 4, 2015

Fencing: take a look at the setting and code around maximum_lag_on_failover. Each time the loop runs, we record the last XLOG position of the primary Postgres. With maximum_lag_on_failover, you can set the maximum number of bytes a follower may lag behind that recorded value and still be considered healthy enough to fail over to.
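
To make that concrete, here is a minimal sketch of the comparison that maximum_lag_on_failover implies -- it is not Governor's actual code, and the function and variable names are made up for illustration:

```python
# Hypothetical sketch of a lag gate in the spirit of maximum_lag_on_failover.
# last_recorded_xlog: the primary's last logged XLOG position (a byte offset)
# my_xlog_position:   this follower's current XLOG position (a byte offset)
def healthy_enough_to_failover(last_recorded_xlog, my_xlog_position,
                               maximum_lag_on_failover=1048576):
    """Return True if this follower lags the recorded primary position
    by no more than the configured number of bytes."""
    lag_in_bytes = last_recorded_xlog - my_xlog_position
    return lag_in_bytes <= maximum_lag_on_failover
```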

Quorum: take a look at this method: https://github.com/compose/governor/blob/master/helpers/etcd.py#L81. It uses the built-in prevExist=false option, which is a check-and-set action built into etcd. We rely on etcd's Raft consensus and that prevExist check to ensure there are no ties.
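
For context, the etcd v2 semantics we lean on look roughly like this -- a sketch rather than the linked method, with an illustrative key path and client code:

```python
import requests

def attempt_to_acquire_leader(etcd_host, scope, member_name, ttl=30):
    """Atomically try to create the leader key.

    PUT with prevExist=false only succeeds if the key does not already
    exist, so at most one node can create it; the others receive
    412 Precondition Failed. The URL layout here is illustrative.
    """
    url = "http://{0}/v2/keys/service/{1}/leader".format(etcd_host, scope)
    response = requests.put(url, data={"value": member_name,
                                       "prevExist": "false",
                                       "ttl": ttl})
    return response.status_code == 201  # 201 Created == we won the race
```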

@asmartin
Author

asmartin commented Dec 4, 2015

Thanks for the fast reply!

Fencing: The maximum_lag_on_failover parameter makes sense in terms of preventing the failed node from becoming a candidate for failover, but what prevents a rogue node from coming back at some point and haproxy starting to send traffic to it? The STONITH story is a good illustration of why the fencing mechanism needs to ensure that any rogue node is reset to a known clean state (e.g. with a reboot via IPMI).

Quorum: okay, so it sounds like etcd's RAFT ensures that a race condition between multiple nodes cannot end in a tie or multiple concurrent masters?

@Winslett
Contributor

Winslett commented Dec 4, 2015

Great questions.

Haproxy status checks rely on etcd as a single source of truth. If a rogue node could take over the leader key, it would start receiving reads and writes. The maximum_lag_on_failover setting is one safeguard against that -- particularly when all nodes are shut down and then started one at a time. Take a look at the gating method is_healthiest_node (https://github.com/compose/governor/blob/master/helpers/postgresql.py#L125), which is called before any node tries to take over as leader.
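
In rough pseudocode, that gate amounts to something like the following -- a sketch of the idea only, with hypothetical names rather than the real method's signature:

```python
def is_healthiest_node(my_xlog, last_leader_xlog, other_members_xlog,
                       maximum_lag_on_failover):
    """Only volunteer for leadership if we are close enough to the last
    known leader position and no visible member is ahead of us."""
    # Too far behind the last recorded leader position: stand down.
    if last_leader_xlog - my_xlog > maximum_lag_on_failover:
        return False
    # Another member has replayed more WAL than us: let it win instead.
    return all(my_xlog >= other_xlog for other_xlog in other_members_xlog)
```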

If an etcd namespace is shared between Postgres Governor clusters, Governor doesn't prevent nodes from comparing themselves to nodes of other clusters. It'd be awesome if you could code something up to prevent rogue clusters from trying to overthrow the current cluster's etcd namespace.
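
One simple mitigation is to give each cluster its own etcd scope, since a cluster's keys live under a scope-derived prefix -- the path format below is illustrative, not the exact one Governor builds:

```python
def cluster_prefix(scope):
    # Each Governor cluster's keys sit under its own scope, so two clusters
    # only step on each other if they are configured with the same scope.
    return "/v2/keys/service/{0}/".format(scope)

assert cluster_prefix("billing_db") != cluster_prefix("reporting_db")
```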

Take a look at the code and throw some scenarios at the code base. If you can reproduce a rogue-takeover scenario, we'd love to solve for that situation.
