Fix wait schema agreement #461
base: master
Conversation
Force-pushed from 3a84457 to a4d6546
log.debug("[control connection] Aborting wait for schema match due to shutdown") | ||
return None | ||
else: | ||
raise | ||
elif not error_signaled: | ||
self._signal_error() | ||
error_signaled = True | ||
continue | ||
|
||
schema_mismatches = self._get_schema_mismatches(peers_result, local_result, connection.endpoint) | ||
schema_mismatches = self._get_schema_mismatches(peers_result, local_result, current_connection.endpoint) | ||
if schema_mismatches is None: | ||
return True | ||
|
I see some problems with that approach.
DDL Requests
Schema agreement is not always done on the control connection. After a SCHEMA_CHANGE response to a request, we perform the schema agreement wait on the connection that the request was sent to.
If this connection becomes broken during that:
- the old code would raise an error immediately, which is perfectly reasonable in this case
- the new code would trigger a control connection reconnection (for no reason - the control connection may be perfectly fine), and then keep trying to send requests on the defunct connection until the timeout. This seems suboptimal.
Multiple failures
This fix only guards us from a single failure, because of the error_signaled guard. If node X (with the control connection) goes down, we signal it and connect the CC to node Y; if Y then goes down, we will not call signal_error again and will just keep trying the defunct connection.
OTOH, getting rid of error_signaled is not a good idea - it could result in a reconnection storm / loop.
I am not sure how to address this. Could you check how the Java Driver approaches this? I'm asking about Java and not Rust because the Rust driver handles schema agreement very differently, so it is not applicable.
> I see some problems with that approach.
> DDL Requests
> Schema agreement is not always done on the control connection. After a SCHEMA_CHANGE response to a request, we perform the schema agreement wait on the connection that the request was sent to. If this connection becomes broken during that:
> - the old code would raise an error immediately, which is perfectly reasonable in this case
I disagree here: no, it is not reasonable. The statement was executed and the driver received a response; the fact that the connection became dead should not impact the process.
If you throw the same error that the schema agreement logic uses, the API user will not be able to distinguish between a schema agreement exception and a statement exception, and therefore will not be able to handle it properly.
The best behavior here would be to keep trying to check schema agreement on any live connection available.
> - the new code would trigger a control connection reconnection (for no reason - the control connection may be perfectly fine), and then keep trying to send requests on the defunct connection until the timeout. This seems suboptimal.

True, we'd better fix that.
> Multiple failures
> This fix only guards us from a single failure, because of the error_signaled guard. If node X (with the control connection) goes down, we signal it and connect the CC to node Y; if Y then goes down, we will not call signal_error again and will just keep trying the defunct connection.
> OTOH, getting rid of error_signaled is not a good idea - it could result in a reconnection storm / loop.
> I am not sure how to address this. Could you check how the Java Driver approaches this? I'm asking about Java and not Rust because the Rust driver handles schema agreement very differently, so it is not applicable.
In my book, the only proper way to address these issues is to make the code iterate over available connections.
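For illustration, here is a rough, hedged sketch of what "iterate over available connections" could look like. This is not the driver's code: get_live_connection, query_schema_versions and schemas_agree are hypothetical stand-ins for the driver's internals (roughly, picking a pooled connection, running the peers/local schema queries, and _get_schema_mismatches returning None).

```python
import time

# Hypothetical sketch, not the driver's actual API: instead of pinning the
# schema agreement wait to one connection object, ask for whatever live
# connection exists on each iteration and retry on another one if it dies.
# The callables below are assumed helpers; they are expected to raise
# ConnectionError when the connection they use is broken.
def wait_for_schema_agreement_any_connection(get_live_connection,
                                             query_schema_versions,
                                             schemas_agree,
                                             total_timeout=10.0,
                                             poll_interval=0.2):
    deadline = time.monotonic() + total_timeout
    while time.monotonic() < deadline:
        conn = get_live_connection()        # may be a different connection each time
        if conn is None:                    # nothing usable yet: wait for a reconnect
            time.sleep(poll_interval)
            continue
        try:
            peers, local = query_schema_versions(conn)
        except ConnectionError:             # this connection broke mid-check: try another
            time.sleep(poll_interval)
            continue
        if schemas_agree(peers, local):     # e.g. _get_schema_mismatches(...) is None
            return True
        time.sleep(poll_interval)
    return False                            # timed out without cluster-wide agreement
```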
> I disagree here: no, it is not reasonable. The statement was executed and the driver received a response; the fact that the connection became dead should not impact the process.
> If you throw the same error that the schema agreement logic uses, the API user will not be able to distinguish between a schema agreement exception and a statement exception, and therefore will not be able to handle it properly.
> The best behavior here would be to keep trying to check schema agreement on any live connection available.
The point is to provide the following:
- User executes a DDL request (let's say it creates a table)
- It completes successfully (= no exception is thrown by the driver)
- If so, the user can execute a request using this new table, and it won't return an error about the table being unknown.
To guarantee this, we have to await schema agreement after issuing the DDL.
More specifically: we have to await schema agreement against the same node we issued the DDL against. Why? If we check schema agreement on another node, it is possible that it does not know the new schema yet, so the schema agreement will be falsely successful, violating the guarantee.
This means we can try on other connections, but it has to be against the same node.
If we can't complete schema agreement against this node, we have to throw an exception.
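As an illustration of that guarantee, here is a minimal, hedged sketch (open_connection_to and node_reports_agreement are hypothetical helpers, not the driver's API): the wait is pinned to the host the DDL was sent to, but not to a single connection object, and it raises if agreement cannot be confirmed through that host.

```python
import time

# Hedged sketch: after a SCHEMA_CHANGE response, keep checking schema
# agreement against the SAME host the DDL was sent to, reopening a
# connection to that host (and only that host) if the current one dies.
# open_connection_to / node_reports_agreement are assumed helpers that
# raise ConnectionError on broken connections.
def await_schema_agreement_on_host(host, open_connection_to,
                                   node_reports_agreement,
                                   total_timeout=10.0, poll_interval=0.2):
    deadline = time.monotonic() + total_timeout
    conn = None
    while time.monotonic() < deadline:
        try:
            if conn is None:
                conn = open_connection_to(host)   # reconnect, but only to the same node
            if node_reports_agreement(conn):      # this node sees a single schema version
                return True
        except ConnectionError:
            conn = None                           # connection broke: reopen to the same host
        time.sleep(poll_interval)
    # We could not verify the guarantee through this node: surface an error.
    raise TimeoutError("schema agreement not confirmed via host %s" % (host,))
```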
@Lorak-mmk, please correct me if I am wrong: schema changes are done with QUORUM consistency level, which means that if the driver checks a QUORUM number of nodes for schema agreement and succeeds, that would be enough to ensure that the whole cluster has the same schema in the given circumstances.
When schema agreement is started, it can happen that the control connection is being disconnected/reconnected; when that happens, the schema agreement code used to keep using the disconnected connection to run all the queries. As a result, it could lead to a schema agreement timeout, even if all nodes got the schema update long ago. This commit updates the connection on every iteration and makes the loop keep iterating when the underlying connection is closed.
Force-pushed from a4d6546 to c5016ca
Make schema agreement waiting code renew connection on each iteration
When schema agreement is started, it can happen that the control connection is being disconnected/reconnected; when that happens, the schema agreement code used to keep using the disconnected connection to run all the queries.
As a result, it could lead to a schema agreement timeout, even if all nodes got the schema update long ago.
This commit updates the connection on every iteration and makes the loop keep iterating when the underlying connection is closed.
Fixes: #458
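In rough terms, the new loop behaves like this hedged sketch (get_control_connection, signal_error and check_agreement are stand-ins for the driver's internals; only the error_signaled guard mirrors the actual diff):

```python
import time

# Hedged sketch of the described behavior, not the driver's actual code:
# re-read the control connection on every iteration, and when it turns out
# to be closed, signal an error once (so the cluster reconnects it) and keep
# looping instead of hammering the dead connection until the timeout.
def wait_for_schema_agreement_renewing(get_control_connection, signal_error,
                                       check_agreement, total_timeout=10.0,
                                       poll_interval=0.2):
    deadline = time.monotonic() + total_timeout
    error_signaled = False
    while time.monotonic() < deadline:
        current_connection = get_control_connection()  # renewed every iteration
        if current_connection is None:                  # closed / not reconnected yet
            if not error_signaled:
                signal_error()                          # request a reconnect, but only once
                error_signaled = True
            time.sleep(poll_interval)
            continue
        if check_agreement(current_connection):         # schema versions match cluster-wide
            return True
        time.sleep(poll_interval)
    return False
```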
Pre-review checklist
- I added relevant tests for new features and bug fixes.
- I have provided docstrings for the public items that I want to introduce.
- I have adjusted the documentation in ./docs/source/.
- I added appropriate Fixes: annotations to the PR description.