
[Bug] Validation Error in GeminiTest.test_load_random_with_nemesis: Rows Differ Due to Repeated Entries in Column 'col5' #436

Open
timtimb0t opened this issue Sep 30, 2024 · 14 comments

@timtimb0t

timtimb0t commented Sep 30, 2024

Packages

Scylla version: 6.2.0~rc1-20240919.a71d4bc49cc8 with build-id 2a79c005ca22208ec14a7708a4f423e96b5d861f

Kernel Version: 6.8.0-1016-aws

Issue description

Happens during this job:
https://jenkins.scylladb.com/job/scylla-6.2/job/gemini/job/gemini-3h-with-nemesis-test/4/

2024-09-22 15:35:20.939: (TestFrameworkEvent Severity.ERROR) period_type=one-time event_id=f294216e-d5eb-4a5d-b9d0-947174db18ba, source=GeminiTest.test_load_random_with_nemesis (gemini_test.GeminiTest)() message=Traceback (most recent call last):
File "/home/ubuntu/scylla-cluster-tests/gemini_test.py", line 68, in test_load_random_with_nemesis
self.verify_results()
File "/home/ubuntu/scylla-cluster-tests/gemini_test.py", line 127, in verify_results
self.fail(self.gemini_results['results'])
AssertionError: [{'errors': [{'timestamp': '2024-09-22T15:34:41.040271558Z', 'message': 'Validation failed: rows differ (-map[ck0:0e5bd852e93365ceba7b983b1e7ae82f0e ck1:196.209.17.42 col0:1114235h14m33.527540656s col1:map[428598df-64b6-1ac8-898a-0269c58502ef:4428111412553544617 5c32fb93-feff-158e-898c-0269c58502ef:4779579801187610993 5f827495-0ce5-1082-898b-0269c58502ef:7356596969051568171] col10:d9e375ad-94d0-1fe4-898f-0269c58502ef col11:false col12:0.57534295 col2:8f1bbdd2-d836-1a8b-898d-0269c58502ef col3:1714736h23m51.156602912s col4:map[udt_1507393928_0:d668425f-84e6-1d9f-898e-0269c58502ef udt_1507393928_1:1767-09-21 01:16:18.871345152 +0000 UTC udt_1507393928_2:48254040-10-31 14:06:06.583 +0000 UTC udt_1507393928_3:131653931-08-13 22:35:40.318 +0000 UTC udt_1507393928_4:-6456 udt_1507393928_5:0.12871255 udt_1507393928_6:6936] col5:[7988919290375467742 4376061058815166327 379500719263269858 5009093180938296012 7174863172911282077 6522356728298859312 599653341271145931 3490621354264608579 7939102450213922708] col6:236.236.209.136 col7:3642 col8:0.8230271 col9:505143h58m44.369718516s pk0:8369222297109090103 pk1:5469874807351661720 pk2:-12846] 
+map[ck0:0e5bd852e93365ceba7b983b1e7ae82f0e ck1:196.209.17.42 col0:1114235h14m33.527540656s col1:map[428598df-64b6-1ac8-898a-0269c58502ef:4428111412553544617 5c32fb93-feff-158e-898c-0269c58502ef:4779579801187610993 5f827495-0ce5-1082-898b-0269c58502ef:7356596969051568171] col10:d9e375ad-94d0-1fe4-898f-0269c58502ef col11:false col12:0.57534295 col2:8f1bbdd2-d836-1a8b-898d-0269c58502ef col3:1714736h23m51.156602912s col4:map[udt_1507393928_0:d668425f-84e6-1d9f-898e-0269c58502ef udt_1507393928_1:1767-09-21 01:16:18.871345152 +0000 UTC udt_1507393928_2:48254040-10-31 14:06:06.583 +0000 UTC udt_1507393928_3:131653931-08-13 22:35:40.318 +0000 UTC udt_1507393928_4:-6456 udt_1507393928_5:0.12871255 udt_1507393928_6:6936] col5:[7988919290375467742 7988919290375467742 4376061058815166327 4376061058815166327 379500719263269858 379500719263269858 5009093180938296012 5009093180938296012 7174863172911282077 7174863172911282077 6522356728298859312 6522356728298859312 599653341271145931 599653341271145931 3490621354264608579 3490621354264608579 7939102450213922708 7939102450213922708] col6:236.236.209.136 col7:3642 col8:0.8230271 col9:505143h58m44.369718516s pk0:8369222297109090103 pk1:5469874807351661720 pk2:-12846]):   map[string]any{\n  \t... // 8 identical entries\n  \t"col3": s"1714736h23m51.156602912s",\n  \t"col4": map[string]any{"udt_1507393928_0": gocql.UUID{0xd6, 0x68, 0x42, 0x5f, ...}, "udt_1507393928_1": s"1767-09-21 01:16:18.871345152 +0000 UTC", "udt_1507393928_2": s"48254040-10-31 14:06:06.583 +0000 UTC", "udt_1507393928_3": s"131653931-08-13 22:35:40.318 +0000 UTC", ...},\n  \t"col5": []int64{\n  \t\t7988919290375467742,\n+ \t\t7988919290375467742,\n  \t\t4376061058815166327,\n+ \t\t4376061058815166327,\n  \t\t379500719263269858,\n+ \t\t379500719263269858,\n  \t\t5009093180938296012,\n+ \t\t5009093180938296012,\n  \t\t7174863172911282077,\n+ \t\t7174863172911282077,\n+ \t\t6522356728298859312,\n  \t\t6522356728298859312,\n+ \t\t599653341271145931,\n  \t\t599653341271145931,\n+ \t\t3490621354264608579,\n  \t\t3490621354264608579,\n+ \t\t7939102450213922708,\n  \t\t7939102450213922708,\n  \t},\n  \t"col6": string("236.236.209.136"),\n  \t"col7": int16(3642),\n  \t... // 5 identical entries\n  }\n', 'query': 'SELECT * FROM ks1.table1 WHERE pk0=8369222297109090103 AND pk1=5469874807351661720 AND pk2=-12846 ', 'stmt-type': 'SelectStatement'}], 'write_ops': 29336134, 'write_errors': 0, 'read_ops': 6545513, 'read_errors': 1}]

Seems like the results are duplicated. Not sure, but it may be a driver-related issue.
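For reference, the diff in the error message above looks like go-cmp output from Gemini's row comparison: lines prefixed with `+` are entries that appear in one cluster's row but not the other's. A minimal sketch (not Gemini's actual code; go-cmp is assumed as the diffing library, and the col5 values are copied from the error above) of how a duplicated list column produces exactly this kind of diff:

```go
// Illustrative only: reproduces the shape of the validation diff above with a
// duplicated list column, assuming go-cmp as the comparison library.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	oracleRow := map[string]any{
		"col5": []int64{7988919290375467742, 4376061058815166327, 379500719263269858},
	}
	testRow := map[string]any{
		"col5": []int64{
			7988919290375467742, 7988919290375467742, // every element appears twice
			4376061058815166327, 4376061058815166327,
			379500719263269858, 379500719263269858,
		},
	}

	// Entries present only in the second argument are printed with a leading
	// '+', which is the pattern seen for col5 in the error message.
	if diff := cmp.Diff(oracleRow, testRow); diff != "" {
		fmt.Println("rows differ:\n" + diff)
	}
}
```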

How frequently does it reproduce?

Just once

Installation details

Cluster size: 3 nodes (i4i.2xlarge)

Scylla Nodes used in this run:

  • gemini-with-nemesis-3h-normal-6-2-oracle-db-node-5dffaac7-1 (3.236.186.61 | 10.12.2.136) (shards: 30)
  • gemini-with-nemesis-3h-normal-6-2-db-node-5dffaac7-3 (44.203.33.96 | 10.12.2.71) (shards: 7)
  • gemini-with-nemesis-3h-normal-6-2-db-node-5dffaac7-2 (54.160.20.232 | 10.12.0.122) (shards: 7)
  • gemini-with-nemesis-3h-normal-6-2-db-node-5dffaac7-1 (44.204.255.146 | 10.12.0.163) (shards: 7)

OS / Image: ami-059b505168db98ed8 (aws: undefined_region)

Test: gemini-3h-with-nemesis-test
Test id: 5dffaac7-04b4-473d-a574-c53d59bfd567
Test name: scylla-6.2/gemini/gemini-3h-with-nemesis-test
Test method: gemini_test.GeminiTest.test_load_random_with_nemesis
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 5dffaac7-04b4-473d-a574-c53d59bfd567
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 5dffaac7-04b4-473d-a574-c53d59bfd567

Logs:

Jenkins job URL
Argus

@timtimb0t
Author

Here is a similar issue: scylladb/scylladb#7937

@temichus

Reproduced in the latest 6.2.0:

Packages

Scylla version: 6.2.0-20241013.b8a9fd4e49e8 with build-id a61f658b0408ba10663812f7a3b4d6aea7714fac

Kernel Version: 6.8.0-1016-aws


Installation details

Cluster size: 3 nodes (i4i.2xlarge)

Scylla Nodes used in this run:

  • gemini-with-nemesis-3h-normal-6-2-oracle-db-node-2ce47ce4-1 (3.237.190.119 | 10.12.0.252) (shards: 30)
  • gemini-with-nemesis-3h-normal-6-2-db-node-2ce47ce4-3 (44.204.250.71 | 10.12.0.100) (shards: 7)
  • gemini-with-nemesis-3h-normal-6-2-db-node-2ce47ce4-2 (18.234.247.199 | 10.12.3.106) (shards: 7)
  • gemini-with-nemesis-3h-normal-6-2-db-node-2ce47ce4-1 (44.201.28.152 | 10.12.3.220) (shards: 7)

OS / Image: ami-0cfa45340a2dfa18d (aws: undefined_region)

Test: gemini-3h-with-nemesis-test
Test id: 2ce47ce4-9395-47a3-9b59-97305baf93ca
Test name: scylla-6.2/gemini/gemini-3h-with-nemesis-test
Test method: gemini_test.GeminiTest.test_load_random_with_nemesis
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 2ce47ce4-9395-47a3-9b59-97305baf93ca
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 2ce47ce4-9395-47a3-9b59-97305baf93ca

Logs:

Jenkins job URL
Argus

@aleksbykov
Contributor

@bhalevy @nyh, we have reproduced https://github.com/scylladb/scylla-enterprise/issues/3663 several times on 6.2. As I understand it, this could be expected behavior. I saw that it was fixed in Gemini: scylladb/scylladb#7937.
We need your opinion: can we ignore this issue and reopen the bug in the Gemini repo?

@nyh

nyh commented Oct 21, 2024

This was supposedly fixed in Gemini fd28158 so how did it come back? And why is this specific to 6.2?

scylladb/scylladb#3559 (comment) explains why two appends to a list using the same user-side timestamp would be duplicated if they happen to reach two different coordinators - because the "timeuuid" used as the list key uses not just the timestamp but also the node id.
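To make that concrete, here is a hedged gocql sketch (not what Gemini actually runs; the host, keyspace, table and values are reused from this issue for illustration): the append carries an explicit client-side timestamp, but because the list cell key is a coordinator-generated timeuuid, executing the same append through two different coordinators stores the element twice.

```go
// Sketch of the scenario above, not Gemini's code.
package main

import (
	"log"
	"time"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("10.12.0.163") // a node from this run, for illustration
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	ts := time.Now().UnixMicro() // explicit client-side timestamp in microseconds

	// If this exact statement is executed twice (for example on a retry that
	// reaches a different coordinator), the value ends up in the list twice
	// even though the timestamp is identical, because the element's cell key
	// is a timeuuid that also encodes the coordinator.
	const stmt = `UPDATE ks1.table1 SET col5 = col5 + ? WHERE pk0 = ? AND pk1 = ? AND pk2 = ?`
	err = session.Query(stmt,
		[]int64{7988919290375467742},
		int64(8369222297109090103), int64(5469874807351661720), int16(-12846)).
		WithTimestamp(ts).
		Exec()
	if err != nil {
		log.Fatal(err)
	}
}
```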

@nyh

nyh commented Oct 21, 2024

@aleksbykov somebody will need to investigate the Gemini failure and explain in plain English (not opaque Gemini output) what exactly the problem is. Is it even a list (as in the issues mentioned above) or a map, as the error message suggests? What does Gemini do with this map, and why does it not expect the output it gets?

@aleksbykov
Contributor

@nyh,
The map is just Gemini output; it shows the difference between the rows in the test and oracle clusters. The main difference, which caused the Gemini error, is the duplicated values in the list column:

col5:[7988919290375467742 4376061058815166327 379500719263269858 5009093180938296012 7174863172911282077 6522356728298859312 599653341271145931 3490621354264608579 7939102450213922708]

col5:[7988919290375467742 7988919290375467742 4376061058815166327 4376061058815166327 379500719263269858 379500719263269858 5009093180938296012 5009093180938296012 7174863172911282077 7174863172911282077 6522356728298859312 6522356728298859312 599653341271145931 599653341271145931 3490621354264608579 3490621354264608579 7939102450213922708 7939102450213922708]
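For what it's worth, the second list is exactly the first with every element repeated twice, in place, which is consistent with each append having been applied twice. A purely illustrative check of that pattern (not part of Gemini), using the col5 values above:

```go
// Illustrative only: checks that the test-cluster list is the oracle list with
// every element duplicated in place.
package main

import "fmt"

func isDoubled(oracle, test []int64) bool {
	if len(test) != 2*len(oracle) {
		return false
	}
	for i, v := range oracle {
		if test[2*i] != v || test[2*i+1] != v {
			return false
		}
	}
	return true
}

func main() {
	oracle := []int64{
		7988919290375467742, 4376061058815166327, 379500719263269858,
		5009093180938296012, 7174863172911282077, 6522356728298859312,
		599653341271145931, 3490621354264608579, 7939102450213922708,
	}
	test := []int64{
		7988919290375467742, 7988919290375467742,
		4376061058815166327, 4376061058815166327,
		379500719263269858, 379500719263269858,
		5009093180938296012, 5009093180938296012,
		7174863172911282077, 7174863172911282077,
		6522356728298859312, 6522356728298859312,
		599653341271145931, 599653341271145931,
		3490621354264608579, 3490621354264608579,
		7939102450213922708, 7939102450213922708,
	}
	fmt.Println(isDoubled(oracle, test)) // prints: true
}
```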

@nyh

nyh commented Oct 22, 2024

Somebody will need to debug Gemini to understand why this happens. As I noted here and in various issues mentioned above, we know there are cases where a list item may be added more than once. scylladb/scylladb#3559 (comment) is one such known case: when the same list append gets sent to different coordinator shards (if using server-side timestamps, even the same coordinator will cause this problem).

Although this is arguably a Scylla bug - a bug that should be recognized but will probably never be fixed - I don't know whether Gemini is supposed to "reproduce" this bug every time. It seems to me that fd28158 tried to make this "reproduction" less likely, but I'm not sure it completely solves the problem - is it possible that Gemini or the driver it uses retries an append operation and that is what causes the double append? scylladb/scylladb#3559 suggests maybe the application (Gemini) or its driver is doing this retry, and that list appends are not idempotent and must not be retried. But does Gemini or its driver know that?

Again, somebody who is familiar with Gemini and the driver it uses should look into this; I am not sure how to help here.
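On the retry question: gocql lets the application mark a statement as idempotent via Query.Idempotent, and a list append is not idempotent, so it should not be re-executed after an ambiguous failure. Whether Gemini sets this flag correctly, and how the driver's retry/speculative-execution machinery actually consults it, is part of what needs to be verified. The sketch below (not Gemini's code; schema and values reused from this issue) only illustrates the distinction:

```go
// Illustration of the idempotency distinction; not Gemini's code.
package main

import (
	"log"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("127.0.0.1") // placeholder address
	cluster.RetryPolicy = &gocql.SimpleRetryPolicy{NumRetries: 3}
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// An upsert with fixed values yields the same state no matter how many
	// times it is applied, so re-executing it is harmless.
	upsert := session.Query(
		`INSERT INTO ks1.table1 (pk0, pk1, pk2, col7) VALUES (?, ?, ?, ?)`,
		int64(1), int64(2), int16(3), int16(3642)).
		Idempotent(true)

	// A list append is applied once per execution: re-executing it after an
	// ambiguous failure duplicates the element, which is the symptom in this
	// issue, so it must not be treated as safe to retry.
	listAppend := session.Query(
		`UPDATE ks1.table1 SET col5 = col5 + ? WHERE pk0 = ? AND pk1 = ? AND pk2 = ?`,
		[]int64{42}, int64(1), int64(2), int16(3)).
		Idempotent(false)

	if err := upsert.Exec(); err != nil {
		log.Fatal(err)
	}
	if err := listAppend.Exec(); err != nil {
		log.Fatal(err)
	}
}
```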

@timtimb0t
Author

timtimb0t commented Nov 18, 2024

Because the issue has been spotted on several runs, I changed the label to tier1, but the issue itself is not critical.

@mykaul

mykaul commented Nov 18, 2024

> Because the issue has been spotted on several runs, I changed the label to tier1, but the issue itself is not critical.

@timtimb0t - are we sure it's not a Gemini issue?

@timtimb0t
Author

Reproduced in another run:

Packages

Base Scylla version: 6.0.4-20240919.7d421abec43d with build-id 7cc42fc19bcc383fe3723163554d80edaae8b8a5
Target Scylla version (or git commit hash): 2024.2.0-20241118.614d56348f46 with build-id 2f88720b7edc44997fa233aef003e675dd1a7cee

Kernel Version: 5.15.0-1070-aws


Installation details

Cluster size: 4 nodes (im4gn.2xlarge)

Scylla Nodes used in this run:

  • rolling-upgrade--ubuntu-focal-db-node-fb117102-4 (54.229.150.176 | 10.4.3.133) (shards: 7)
  • rolling-upgrade--ubuntu-focal-db-node-fb117102-3 (34.244.179.254 | 10.4.2.194) (shards: 7)
  • rolling-upgrade--ubuntu-focal-db-node-fb117102-2 (3.250.65.123 | 10.4.2.170) (shards: 7)
  • rolling-upgrade--ubuntu-focal-db-node-fb117102-1 (3.253.5.59 | 10.4.3.130) (shards: 7)

OS / Image: ami-0ac9f799d0f225bd4 (aws: undefined_region)

Test: rolling-upgrade-ami-arm-test
Test id: fb117102-22bf-436c-aa13-3cec8d3fb1db
Test name: enterprise-2024.2/rolling-upgrade/rolling-upgrade-ami-arm-test
Test method: upgrade_test.UpgradeTest.test_rolling_upgrade
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor fb117102-22bf-436c-aa13-3cec8d3fb1db
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs fb117102-22bf-436c-aa13-3cec8d3fb1db

Logs:

Jenkins job URL
Argus

@timtimb0t
Author

> > Because the issue has been spotted on several runs, I changed the label to tier1, but the issue itself is not critical.
>
> @timtimb0t - are we sure it's not a Gemini issue?

We are sure that it's a Gemini issue.


@mykaul

mykaul commented Nov 18, 2024

> > > Because the issue has been spotted on several runs, I changed the label to tier1, but the issue itself is not critical.
> >
> > @timtimb0t - are we sure it's not a Gemini issue?
>
> We are sure that it's a Gemini issue.

Then why is it in this repo?

@timtimb0t
Author

> > > > Because the issue has been spotted on several runs, I changed the label to tier1, but the issue itself is not critical.
> > >
> > > @timtimb0t - are we sure it's not a Gemini issue?
> >
> > We are sure that it's a Gemini issue.
>
> Then why is it in this repo?

I mistakenly created it there because I didn't know at the time that it belongs to a different component. I can recreate it in the proper repo, should I?

@mykaul mykaul transferred this issue from scylladb/scylladb Nov 19, 2024
@CodeLieutenant CodeLieutenant added the bug Something isn't working label Nov 28, 2024
@CodeLieutenant CodeLieutenant self-assigned this Nov 28, 2024
@CodeLieutenant CodeLieutenant added this to the Gemini 1.9 milestone Nov 28, 2024
@CodeLieutenant CodeLieutenant added the good first issue Good for newcomers label Nov 28, 2024