Fix wait schema agreement #461

Open: dkropachev wants to merge 1 commit into master from dk/fix-wait_for_schema_agreement
Conversation

dkropachev (Collaborator)

Make schema agreement waiting code renew connection on each iteration

When schema agreement starts, it can happen that the control connection gets disconnected and reconnected; when that happens, the schema agreement code used to keep using the disconnected connection to run all of its queries.
As a result, it could lead to a schema agreement timeout even if all nodes had the schema updated long ago.

This commit refreshes the connection on every iteration and makes the loop keep iterating while the underlying connection is closed.

Fixes: #458
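
Below is a minimal, self-contained sketch of the idea in this change, not the driver's actual code; all names here (get_control_connection, fetch_schema_versions, conn.is_closed) are illustrative assumptions:

```python
import time

def wait_for_schema_agreement(get_control_connection, fetch_schema_versions,
                              timeout=10.0, poll_interval=0.2):
    """Poll until every reachable node reports the same schema version or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # Re-fetch the control connection on every pass, so a reconnect that
        # happened mid-wait is picked up instead of reusing a dead socket.
        conn = get_control_connection()
        if conn is None or conn.is_closed:
            time.sleep(poll_interval)   # control connection is still reconnecting
            continue
        versions = fetch_schema_versions(conn)  # e.g. from system.local / system.peers
        if len(set(versions)) == 1:
            return True                 # all reachable nodes agree
        time.sleep(poll_interval)
    return False                        # timed out without agreement
```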

Pre-review checklist

  • I have split my patch into logically separate commits.
  • All commit messages clearly explain what they change and why.
  • I added relevant tests for new features and bug fixes.
  • All commits compile, pass static checks and pass tests.
  • PR description sums up the changes and reasons why they should be introduced.
  • I have provided docstrings for the public items that I want to introduce.
  • I have adjusted the documentation in ./docs/source/.
  • I added appropriate Fixes: annotations to the PR description.

Comment on lines 4260 to 4278
log.debug("[control connection] Aborting wait for schema match due to shutdown")
return None
else:
raise
elif not error_signaled:
self._signal_error()
error_signaled = True
continue

- schema_mismatches = self._get_schema_mismatches(peers_result, local_result, connection.endpoint)
+ schema_mismatches = self._get_schema_mismatches(peers_result, local_result, current_connection.endpoint)
if schema_mismatches is None:
return True

Lorak-mmk
I see some problems with that approach.

DDL Requests

Schema agreement is not always done on the control connection. After a SCHEMA_CHANGE response to a request we perform the schema agreement wait on the connection that the request was sent to.
If this connection becomes broken during that:

  • old code would raise an error immediately, which is perfectly reasonable in this case
  • new code would trigger a control connection reconnection (for no reason - the control connection may be perfectly fine), and then keep trying to send requests on the defunct connection until timeout. This does not seem optimal.

Multiple failures

This fix only guards us against a single failure, because of the error_signaled guard. If node X (with the control connection) goes down, we signal it and connect the CC to node Y; if that node then goes down, we will not call signal_error again and will just keep trying the defunct connection.

OTOH getting rid of error_signaled is not a good idea - it could result in a reconnection storm / loop.

I am not sure how to address this. Could you check how the Java driver approaches this? I'm asking about Java and not Rust because the Rust driver handles schema agreement very differently, so it is not applicable.
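
A compact, hypothetical illustration of the multiple-failures scenario (none of these names are the driver's real API): because the error_signaled flag allows the reconnect to be signaled at most once, a second dead connection is never reported and the loop spins until timeout.

```python
import time

def wait_with_single_error_signal(get_connection, signal_error, query_versions,
                                  timeout=10.0):
    deadline = time.monotonic() + timeout
    error_signaled = False
    while time.monotonic() < deadline:
        conn = get_connection()
        try:
            versions = query_versions(conn)
        except ConnectionError:
            if not error_signaled:
                signal_error()          # triggers a control connection reconnect (node X -> Y)
                error_signaled = True
            # If the *new* connection (node Y) also dies, nothing re-triggers a
            # reconnect: we just keep polling a defunct connection until timeout.
            time.sleep(0.2)
            continue
        if len(set(versions)) == 1:
            return True
        time.sleep(0.2)
    return False
```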

dkropachev (Collaborator, Author)

> I see some problems with that approach.
>
> DDL Requests
>
> Schema agreement is not always done on the control connection. After a SCHEMA_CHANGE response to a request we perform the schema agreement wait on the connection that the request was sent to. If this connection becomes broken during that:
>
>   • old code would raise an error immediately, which is perfectly reasonable in this case

I disagree here; it is not reasonable. The statement was executed and the driver received a response; the fact that the connection became dead should not impact the process.
If you throw the same error that the schema agreement logic uses, the API user will not be able to distinguish between a schema agreement exception and a statement exception, and therefore will not be able to handle it properly.
The best behavior here would be to keep trying to check schema agreement on any live connection available.

>   • new code would trigger a control connection reconnection (for no reason - the control connection may be perfectly fine), and then keep trying to send requests on the defunct connection until timeout. This does not seem optimal.

True, we better fix that.

> Multiple failures
>
> This fix only guards us against a single failure, because of the error_signaled guard. If node X (with the control connection) goes down, we signal it and connect the CC to node Y; if that node then goes down, we will not call signal_error again and will just keep trying the defunct connection.
>
> OTOH getting rid of error_signaled is not a good idea - it could result in a reconnection storm / loop.
>
> I am not sure how to address this. Could you check how the Java driver approaches this? I'm asking about Java and not Rust because the Rust driver handles schema agreement very differently, so it is not applicable.

In my book the only proper way to address these issues is to make the code iterate over available connections.
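
A rough sketch of that suggestion, with made-up helper names (get_live_connections and query_versions are not real driver APIs): try every live connection on each pass instead of pinning the wait to a single one.

```python
import time

def wait_for_agreement_any_connection(get_live_connections, query_versions,
                                      timeout=10.0, poll_interval=0.2):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for conn in get_live_connections():      # refreshed list on every iteration
            try:
                versions = query_versions(conn)  # schema versions as seen by this node
            except ConnectionError:
                continue                         # dead connection, try the next one
            if len(set(versions)) == 1:
                return True
            break                                # got an answer; sleep, then re-check
        time.sleep(poll_interval)
    return False
```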

Lorak-mmk

> I disagree here; it is not reasonable. The statement was executed and the driver received a response; the fact that the connection became dead should not impact the process.
> If you throw the same error that the schema agreement logic uses, the API user will not be able to distinguish between a schema agreement exception and a statement exception, and therefore will not be able to handle it properly.
> The best behavior here would be to keep trying to check schema agreement on any live connection available.

The point is to provide the following:

  1. User executes a DDL request (let's say it creates a table)
  2. It completes successfully (= no exception is thrown by the driver)
  3. If so, user can execute a request using this new table, and it won't return an error about the table being unknown.

To guarantee this we have to await schema agreement after issuing DDL.
More specifically: we have to await schema agreement against the same node we issued the DDL against. Why? If we check schema agreement on another node, it is possible that it does not know the new schema yet, so the schema agreement check would be falsely successful, violating the guarantee.

This means we can try on other connections, but it has to be against the same node.
If we can't complete schema agreement against this node, we have to throw an exception.
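
A sketch of that guarantee with hypothetical helpers (connect_to, query_versions and conn.is_closed are all made up for illustration): retry only against the node that coordinated the DDL, opening a new connection to that same host if needed, and fail rather than silently falling back to a different node.

```python
import time

def wait_for_agreement_on_coordinator(coordinator_host, connect_to, query_versions,
                                      timeout=10.0, poll_interval=0.2):
    deadline = time.monotonic() + timeout
    conn = None
    while time.monotonic() < deadline:
        try:
            if conn is None or conn.is_closed:
                # Same node as the DDL coordinator, possibly over a fresh socket.
                conn = connect_to(coordinator_host)
            versions = query_versions(conn)      # schema state as seen by that node
            if len(set(versions)) == 1:
                return True
        except ConnectionError:
            conn = None                          # retry, but only against this host
        time.sleep(poll_interval)
    raise TimeoutError("schema agreement not reached via %s" % coordinator_host)
```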

dkropachev (Collaborator, Author)

@Lorak-mmk, please correct me if I am wrong: schema changes are done with QUORUM consistency level, which means that if the driver checks schema agreement on a QUORUM of nodes and they agree, that would be enough to ensure that the whole cluster has the same schema under the given circumstances.
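
Purely as an illustration of the quorum idea floated here (not something the driver currently does), agreement could be declared once a quorum of nodes report the same schema version:

```python
def quorum_agrees(schema_version_by_host, cluster_size):
    """Return True if a strict majority of the cluster reports one schema version."""
    quorum = cluster_size // 2 + 1
    counts = {}
    for version in schema_version_by_host.values():   # only the nodes we could reach
        counts[version] = counts.get(version, 0) + 1
    return any(count >= quorum for count in counts.values())

# Example: 3-node cluster, one node unreachable, the two reachable nodes agree.
print(quorum_agrees({"10.0.0.1": "v2", "10.0.0.2": "v2"}, cluster_size=3))  # True
```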

dkropachev force-pushed the dk/fix-wait_for_schema_agreement branch from a4d6546 to c5016ca on March 31, 2025 at 10:44. Commit message:

When schema agreement starts, it can happen that the control connection gets disconnected and reconnected; when that happens, the schema agreement code used to keep using the disconnected connection to run all of its queries. As a result, it could lead to a schema agreement timeout even if all nodes had the schema updated long ago.

This commit refreshes the connection on every iteration and makes the loop keep iterating while the underlying connection is closed.