txn: make the unique key lock behaviour consistent for all executors #22837
Comments
I want to help resolve this issue. What can I do for it? @cfzjywxk
@zhaoxugang
A query by unique index keys will be executed by PointGetExec or BatchPointGetExec, so I don't know what I can do to resolve this issue. @you06 @cfzjywxk
In short, the current PointGetExec and BatchPointGetExec lock both existing and non-existing keys, while the other executors lock existing keys only. Because the executor is chosen by the planner module, this can be confusing for users, e.g.:
-- id is a unique key, val is not
select * from t where id = 1 and val = 1; -- sql1, locks the non-existing key (id = 1)
select * from t where val = 1; -- sql2, locks no keys if no row with val = 1 exists
Note that this issue doesn't have a well-formed design yet.
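A minimal sketch to make the example above reproducible, assuming a hypothetical schema (the table and values are illustrative, not taken from this issue):
create table t (id int primary key, v int, val int);
explain select * from t where id = 1 and val = 1 for update; -- typically planned as Point_Get
explain select * from t where val = 1 for update;            -- no index on val, so a full table scan via TableReader is typical
-- in a pessimistic transaction, the Point_Get plan locks the key for id = 1 even when no row exists,
-- while the table-scan plan only locks rows it actually reads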
Got it, thanks.
I have an idea: choose which index to lock depending on the access path of the data source. If no index is chosen, lock the whole table.
Locking the whole table sounds terrible; that would introduce a big performance impact.
@you06 It will not lock any non-existing keys. So in this situation, the task of this issue is to lock the non-existing key (id = 1 and v = 1) in …
@dwangxxx It seems your case talks about a table like … I don't mean it needs to lock the non-existing key when there is a predicate like …
@you06
/* t1 */ begin pessimistic;
/* t2 */ begin pessimistic;
/* t1 */ select * from t where id = 1 for update; -- this will lock all keys with id = 1
/* t2 */ insert into t values(1, 2, 3); -- this will be blocked until t1 commits
Yes. |
@you06

/* t1 */ begin pessimistic;
/* t2 */ begin pessimistic;
/* t1 */ select * from t where id = 1 for update;
/* t2 */ insert into t values(1, x, x);

/* t1 */ begin pessimistic;
/* t2 */ begin pessimistic;
/* t1 */ select * from t where id = 1 and v = 1 for update;
/* t2 */ insert into t values(1, 1, x);

/* t1 */ begin pessimistic;
/* t2 */ begin pessimistic;
/* t1 */ select * from t where id = 1 and v = 1 and val = 3 for update;
/* t2 */ insert into t values(1, 1, x);

/* t1 */ begin pessimistic;
/* t2 */ begin pessimistic;
/* t1 */ select * from t where val = 3 for update;
/* t2 */ insert into t values(x, x, 3);

/* t1 */ begin pessimistic;
/* t2 */ begin pessimistic;
/* t1 */ select * from t where id = 1 and val = 3 for update;
/* t2 */ insert into t values(1, x, x);
Thanks for the cases. Suppose the only unique/primary key is id …
@you06 And about the difficulty, is it because the keys that need to be locked may be distributed across different regions?
@dwangxxx It's not keys that need to be locked, it's a range. A range may be distributed across different regions, which makes things complex and expensive.
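To illustrate the difference (a sketch with assumed values, not taken from this thread): a predicate on a non-unique or non-indexed column corresponds to a key range rather than a single key, and guarding it would mean locking the whole range:
select * from t where val between 1 and 100 for update;
-- protecting this read against concurrent inserts would require locking the whole scanned range,
-- which may span several regions, unlike the single key locked by a point get on id = 1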
@you06
@dwangxxx You're right, 2-phase locking harms performance a lot. Do you mean we may not lock any keys or ranges with …
@you06

/* t1 */ begin pessimistic;
/* t2 */ begin pessimistic;
/* t1 */ select * from t where id = 1 for update; -- empty
/* t2 */ insert into t values(2, 2, 1); -- blocked

/* t1 */ begin pessimistic;
/* t2 */ begin pessimistic;
/* t1 */ select * from t where id = 1 and v = 1 for update; -- empty
/* t2 */ insert into t values(2, 2, 1); -- blocked

They are also blocked. So I don't know if we should behave like MySQL?
Development Task
After #21483, we'll try to lock all the related unique keys for pessimistic transactions if the executor is pointGet or batchPointGet. For the other coprocessor-related executors, we should implement this as well and make the lock behaviours consistent. Tasks are:
Another possible solution is to push down the pessimistic lock operations into the KV requests that do the data read; the behaviour differences could then be resolved. Check the RFC for more details.
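A hedged sketch of the intended consistent behaviour, reusing the hypothetical table from above (the statements and expected lock targets are illustrative, not a confirmed design):
-- Point_Get / Batch_Point_Get already lock the searched unique keys even when no matching row exists:
select * from t where id in (1, 2) for update;
-- after this issue, an equivalent read that happens to be served by a coprocessor executor
-- (e.g. a table or index reader with an extra filter) should lock the same unique keys,
-- so the observable locking no longer depends on which plan was chosen:
select * from t where id in (1, 2) and val = 1 for update;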