I've tried reverting the migration and running ./install.sh again, without any luck:
docker compose --no-ansi run --rm snuba-api migrations reverse --group replays --migration-id 0019_add_materialization
I also tried reverting the migration before 0019, without any luck:
docker compose --no-ansi run --rm snuba-api migrations reverse --group replays --migration-id 0018_add_viewed_by_id_column
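For reference, the state of each migration group can be listed before and after reverting. This is only a sketch, assuming the standard snuba-api service from the self-hosted docker-compose.yml and that the migrations list subcommand is available alongside reverse:
docker compose --no-ansi run --rm snuba-api migrations list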
Thanks for the assistance.
Expected Result
The ./install.sh script should complete without throwing the missing-column exception.
Actual Result
Clickhouse logs from install.sh
▶ Upgrading Clickhouse ...
Container sentry-self-hosted-clickhouse-1 Running
sentry-self-hosted-clickhouse-1 clickhouse-self-hosted-local "/entrypoint.sh" clickhouse 5 minutes ago Up 5 minutes (healthy) 8123/tcp, 9000/tcp, 9009/tcp
Detected clickhouse version 23.8.11.29.altinitystable. Skipping upgrades!
Snuba error
{"module": "snuba.migrations.runner", "event": "Running migration: 0019_add_materialization", "severity": "info", "timestamp": "2024-07-25T08:44:04.451504Z"}
{"module": "snuba.migrations.operations", "event": "Executing op: CREATE TABLE IF NOT EXISTS repla...", "severity": "info", "timestamp": "2024-07-25T08:44:04.457029Z"}
{"module": "snuba.migrations.operations", "event": "Executing on local node: clickhouse:9000", "severity": "info", "timestamp": "2024-07-25T08:44:04.457120Z"}
{"module": "snuba.migrations.operations", "event": "Executing op: CREATE MATERIALIZED VIEW IF NOT ...", "severity": "info", "timestamp": "2024-07-25T08:44:04.465391Z"}
{"module": "snuba.migrations.operations", "event": "Executing on local node: clickhouse:9000", "severity": "info", "timestamp": "2024-07-25T08:44:04.465480Z"}
{"module": "snuba.migrations.operations", "event": "Failed to execute operation on StorageSetKey.REPLAYS, target: OperationTarget.LOCAL\nCREATE MATERIALIZED VIEW IF NOT EXISTS replays_aggregation_mv TO replays_aggregated_local (project_id UInt64, to_hour_timestamp DateTime, replay_id UUID, retention_days UInt16, browser_name AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), browser_version AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), count_dead_clicks AggregateFunction(sum, UInt64), count_errors AggregateFunction(sum, UInt64), count_infos AggregateFunction(sum, UInt64), count_rage_clicks AggregateFunction(sum, UInt64), count_segments AggregateFunction(count, Nullable(UInt64)), count_urls AggregateFunction(sum, UInt64), count_warnings AggregateFunction(sum, UInt64), device_brand AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), device_family AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), device_model AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), device_name AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), dist AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), environment AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), finished_at AggregateFunction(maxIf, DateTime, UInt8), ip_address_v4 AggregateFunction(any, Nullable(IPv4)), ip_address_v6 AggregateFunction(any, Nullable(IPv6)), is_archived AggregateFunction(sum, Nullable(UInt64)), min_segment_id AggregateFunction(min, Nullable(UInt16)), os_name AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), os_version AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), platform AggregateFunction(anyIf, String, UInt8), sdk_name AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), sdk_version AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), started_at AggregateFunction(min, Nullable(DateTime)), user AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), user_id AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), user_name AggregateFunction(anyIf, Nullable(String), Nullable(UInt8)), user_email AggregateFunction(anyIf, Nullable(String), Nullable(UInt8))) AS \nSELECT\n project_id,\n toStartOfHour(timestamp) as to_hour_timestamp,\n replay_id,\n retention_days,\n anyIfState(browser_name, browser_name != '') as browser_name,\n anyIfState(browser_version, browser_version != '') as browser_version,\n sumState(toUInt64(click_is_dead)) as count_dead_clicks,\n sumState(toUInt64(error_id != '00000000-0000-0000-0000-000000000000' OR fatal_id != '00000000-0000-0000-0000-000000000000')) as count_errors,\n sumState(toUInt64(debug_id != '00000000-0000-0000-0000-000000000000' OR info_id != '00000000-0000-0000-0000-000000000000')) as count_infos,\n sumState(toUInt64(click_is_rage)) as count_rage_clicks,\n countState(toUInt64(segment_id)) as count_segments,\n sumState(length(urls)) as count_urls,\n sumState(toUInt64(warning_id != '00000000-0000-0000-0000-000000000000')) as count_warnings,\n anyIfState(device_brand, device_brand != '') as device_brand,\n anyIfState(device_family, device_family != '') as device_family,\n anyIfState(device_model, device_model != '') as device_model,\n anyIfState(device_name, device_name != '') as device_name,\n anyIfState(dist, dist != '') as dist,\n maxIfState(timestamp, segment_id IS NOT NULL) as finished_at,\n anyIfState(environment, environment != '') as environment,\n anyState(ip_address_v4) as ip_address_v4,\n anyState(ip_address_v6) as ip_address_v6,\n sumState(toUInt64(is_archived)) 
as is_archived,\n anyIfState(os_name, os_name != '') as os_name,\n anyIfState(os_version, os_version != '') as os_version,\n anyIfState(platform, platform != '') as platform,\n anyIfState(sdk_name, sdk_name != '') as sdk_name,\n anyIfState(sdk_version, sdk_version != '') as sdk_version,\n minState(replay_start_timestamp) as started_at,\n anyIfState(user, user != '') as user,\n anyIfState(user_id, user_id != '') as user_id,\n anyIfState(user_name, user_name != '') as user_name,\n anyIfState(user_email, user_email != '') as user_email,\n minState(segment_id) as min_segment_id\nFROM replays_local\nGROUP BY project_id, toStartOfHour(timestamp), replay_id, retention_days\n;\nNone", "severity": "error", "exception": "Traceback (most recent call last):\n File \"/usr/src/snuba/snuba/clickhouse/native.py\", line 200, in execute\n result_data = query_execute()\n ^^^^^^^^^^^^^^^\n File \"/usr/src/snuba/snuba/clickhouse/native.py\", line 183, in query_execute\n return conn.execute( # type: ignore\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py\", line 373, in execute\n rv = self.process_ordinary_query(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py\", line 571, in process_ordinary_query\n return self.receive_result(with_column_types=with_column_types,\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py\", line 204, in receive_result\n return result.get_result()\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/site-packages/clickhouse_driver/result.py\", line 50, in get_result\n for packet in self.packet_generator:\n File \"/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py\", line 220, in packet_generator\n packet = self.receive_packet()\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py\", line 237, in receive_packet\n raise packet.exception\nclickhouse_driver.errors.ServerException: Code: 47.\nDB::Exception: Missing columns: 'error_id' 'click_is_rage' 'fatal_id' 'info_id' 'debug_id' 'warning_id' 'click_is_dead' while processing query: 'SELECT project_id, toStartOfHour(timestamp) AS to_hour_timestamp, replay_id, retention_days, anyIfState(browser_name, browser_name != '') AS browser_name, anyIfState(browser_version, browser_version != '') AS browser_version, sumState(toUInt64(click_is_dead)) AS count_dead_clicks, sumState(toUInt64((error_id != '00000000-0000-0000-0000-000000000000') OR (fatal_id != '00000000-0000-0000-0000-000000000000'))) AS count_errors, sumState(toUInt64((debug_id != '00000000-0000-0000-0000-000000000000') OR (info_id != '00000000-0000-0000-0000-000000000000'))) AS count_infos, sumState(toUInt64(click_is_rage)) AS count_rage_clicks, countState(toUInt64(segment_id)) AS count_segments, sumState(length(urls)) AS count_urls, sumState(toUInt64(warning_id != '00000000-0000-0000-0000-000000000000')) AS count_warnings, anyIfState(device_brand, device_brand != '') AS device_brand, anyIfState(device_family, device_family != '') AS device_family, anyIfState(device_model, device_model != '') AS device_model, anyIfState(device_name, device_name != '') AS device_name, anyIfState(dist, dist != '') AS dist, maxIfState(timestamp, segment_id IS NOT NULL) AS finished_at, anyIfState(environment, environment != '') AS environment, anyState(ip_address_v4) AS ip_address_v4, anyState(ip_address_v6) AS ip_address_v6, 
sumState(toUInt64(is_archived)) AS is_archived, anyIfState(os_name, os_name != '') AS os_name, anyIfState(os_version, os_version != '') AS os_version, anyIfState(platform, platform != '') AS platform, anyIfState(sdk_name, sdk_name != '') AS sdk_name, anyIfState(sdk_version, sdk_version != '') AS sdk_version, minState(replay_start_timestamp) AS started_at, anyIfState(user, user != '') AS user, anyIfState(user_id, user_id != '') AS user_id, anyIfState(user_name, user_name != '') AS user_name, anyIfState(user_email, user_email != '') AS user_email, minState(segment_id) AS min_segment_id FROM default.replays_local GROUP BY project_id, toStartOfHour(timestamp), replay_id, retention_days', required columns: 'segment_id' 'replay_id' 'browser_name' 'click_is_dead' 'warning_id' 'debug_id' 'info_id' 'fatal_id' 'browser_version' 'click_is_rage' 'urls' 'retention_days' 'os_version' 'error_id' 'device_brand' 'environment' 'dist' 'os_name' 'device_model' 'project_id' 'platform' 'timestamp' 'device_name' 'ip_address_v4' 'user_id' 'ip_address_v6' 'is_archived' 'sdk_name' 'sdk_version' 'user_email' 'replay_start_timestamp' 'device_family' 'user' 'user_name', maybe you meant: 'segment_id', 'replay_id', 'browser_name', 'click_node_id', 'browser_version', 'click_tag', 'urls', 'retention_days', 'os_version', 'error_ids', 'device_brand', 'environment', 'dist', 'os_name', 'device_model', 'project_id', 'platform', 'timestamp', 'device_name', 'ip_address_v4', 'user_id', 'ip_address_v6', 'is_archived', 'sdk_name', 'sdk_version', 'user_email', 'replay_start_timestamp', 'device_family', 'user' or 'user_name'. Stack trace:\n\n0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c61ff37 in /usr/bin/clickhouse\n1. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x0000000007156ef1 in /usr/bin/clickhouse\n2. DB::TreeRewriterResult::collectUsedColumns(std::shared_ptr<DB::IAST> const&, bool, bool) @ 0x000000001221ab99 in /usr/bin/clickhouse\n3. DB::TreeRewriter::analyzeSelect(std::shared_ptr<DB::IAST>&, DB::TreeRewriterResult&&, DB::SelectQueryOptions const&, std::vector<DB::TableWithColumnNamesAndTypes, std::allocator<DB::TableWithColumnNamesAndTypes>> const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::TableJoin>) const @ 0x000000001221f801 in /usr/bin/clickhouse\n4. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>)::$_0::operator()(bool) const @ 0x0000000011ed191c in /usr/bin/clickhouse\n5. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x0000000011ec5975 in /usr/bin/clickhouse\n6. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x0000000011f74948 in /usr/bin/clickhouse\n7. 
DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x0000000011ced71c in /usr/bin/clickhouse\n8. DB::InterpreterCreateQuery::execute() @ 0x0000000011cfd920 in /usr/bin/clickhouse\n9. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000122bfe15 in /usr/bin/clickhouse\n10. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000122bb5b5 in /usr/bin/clickhouse\n11. DB::TCPHandler::runImpl() @ 0x0000000013137519 in /usr/bin/clickhouse\n12. DB::TCPHandler::run() @ 0x00000000131498f9 in /usr/bin/clickhouse\n13. Poco::Net::TCPServerConnection::start() @ 0x0000000015b42834 in /usr/bin/clickhouse\n14. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b43a31 in /usr/bin/clickhouse\n15. Poco::PooledThread::run() @ 0x0000000015c7a667 in /usr/bin/clickhouse\n16. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c7893c in /usr/bin/clickhouse\n17. ? @ 0x00007f762c482609 in ?\n18. ? @ 0x00007f762c3a7353 in ?\n\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n File \"/usr/src/snuba/snuba/migrations/operations.py\", line 80, in execute\n connection.execute(self.format_sql(), settings=self._settings)\n File \"/usr/src/snuba/snuba/clickhouse/native.py\", line 283, in execute\n raise ClickhouseError(e.message, code=e.code) from e\nsnuba.clickhouse.errors.ClickhouseError: DB::Exception: Missing columns: 'error_id' 'click_is_rage' 'fatal_id' 'info_id' 'debug_id' 'warning_id' 'click_is_dead' while processing query: 'SELECT project_id, toStartOfHour(timestamp) AS to_hour_timestamp, replay_id, retention_days, anyIfState(browser_name, browser_name != '') AS browser_name, anyIfState(browser_version, browser_version != '') AS browser_version, sumState(toUInt64(click_is_dead)) AS count_dead_clicks, sumState(toUInt64((error_id != '00000000-0000-0000-0000-000000000000') OR (fatal_id != '00000000-0000-0000-0000-000000000000'))) AS count_errors, sumState(toUInt64((debug_id != '00000000-0000-0000-0000-000000000000') OR (info_id != '00000000-0000-0000-0000-000000000000'))) AS count_infos, sumState(toUInt64(click_is_rage)) AS count_rage_clicks, countState(toUInt64(segment_id)) AS count_segments, sumState(length(urls)) AS count_urls, sumState(toUInt64(warning_id != '00000000-0000-0000-0000-000000000000')) AS count_warnings, anyIfState(device_brand, device_brand != '') AS device_brand, anyIfState(device_family, device_family != '') AS device_family, anyIfState(device_model, device_model != '') AS device_model, anyIfState(device_name, device_name != '') AS device_name, anyIfState(dist, dist != '') AS dist, maxIfState(timestamp, segment_id IS NOT NULL) AS finished_at, anyIfState(environment, environment != '') AS environment, anyState(ip_address_v4) AS ip_address_v4, anyState(ip_address_v6) AS ip_address_v6, sumState(toUInt64(is_archived)) AS is_archived, anyIfState(os_name, os_name != '') AS os_name, anyIfState(os_version, os_version != '') AS os_version, anyIfState(platform, platform != '') AS platform, anyIfState(sdk_name, sdk_name != '') AS sdk_name, anyIfState(sdk_version, sdk_version != '') AS sdk_version, minState(replay_start_timestamp) AS started_at, anyIfState(user, user != '') AS user, anyIfState(user_id, user_id != '') AS user_id, anyIfState(user_name, user_name != '') AS user_name, anyIfState(user_email, user_email != '') AS user_email, minState(segment_id) AS min_segment_id FROM 
default.replays_local GROUP BY project_id, toStartOfHour(timestamp), replay_id, retention_days', required columns: 'segment_id' 'replay_id' 'browser_name' 'click_is_dead' 'warning_id' 'debug_id' 'info_id' 'fatal_id' 'browser_version' 'click_is_rage' 'urls' 'retention_days' 'os_version' 'error_id' 'device_brand' 'environment' 'dist' 'os_name' 'device_model' 'project_id' 'platform' 'timestamp' 'device_name' 'ip_address_v4' 'user_id' 'ip_address_v6' 'is_archived' 'sdk_name' 'sdk_version' 'user_email' 'replay_start_timestamp' 'device_family' 'user' 'user_name', maybe you meant: 'segment_id', 'replay_id', 'browser_name', 'click_node_id', 'browser_version', 'click_tag', 'urls', 'retention_days', 'os_version', 'error_ids', 'device_brand', 'environment', 'dist', 'os_name', 'device_model', 'project_id', 'platform', 'timestamp', 'device_name', 'ip_address_v4', 'user_id', 'ip_address_v6', 'is_archived', 'sdk_name', 'sdk_version', 'user_email', 'replay_start_timestamp', 'device_family', 'user' or 'user_name'. Stack trace:\n\n0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c61ff37 in /usr/bin/clickhouse\n1. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x0000000007156ef1 in /usr/bin/clickhouse\n2. DB::TreeRewriterResult::collectUsedColumns(std::shared_ptr<DB::IAST> const&, bool, bool) @ 0x000000001221ab99 in /usr/bin/clickhouse\n3. DB::TreeRewriter::analyzeSelect(std::shared_ptr<DB::IAST>&, DB::TreeRewriterResult&&, DB::SelectQueryOptions const&, std::vector<DB::TableWithColumnNamesAndTypes, std::allocator<DB::TableWithColumnNamesAndTypes>> const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::TableJoin>) const @ 0x000000001221f801 in /usr/bin/clickhouse\n4. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>)::$_0::operator()(bool) const @ 0x0000000011ed191c in /usr/bin/clickhouse\n5. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x0000000011ec5975 in /usr/bin/clickhouse\n6. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x0000000011f74948 in /usr/bin/clickhouse\n7. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x0000000011ced71c in /usr/bin/clickhouse\n8. DB::InterpreterCreateQuery::execute() @ 0x0000000011cfd920 in /usr/bin/clickhouse\n9. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000122bfe15 in /usr/bin/clickhouse\n10. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000122bb5b5 in /usr/bin/clickhouse\n11. DB::TCPHandler::runImpl() @ 0x0000000013137519 in /usr/bin/clickhouse\n12. DB::TCPHandler::run() @ 0x00000000131498f9 in /usr/bin/clickhouse\n13. 
Poco::Net::TCPServerConnection::start() @ 0x0000000015b42834 in /usr/bin/clickhouse\n14. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b43a31 in /usr/bin/clickhouse\n15. Poco::PooledThread::run() @ 0x0000000015c7a667 in /usr/bin/clickhouse\n16. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c7893c in /usr/bin/clickhouse\n17. ? @ 0x00007f762c482609 in ?\n18. ? @ 0x00007f762c3a7353 in ?\n", "timestamp": "2024-07-25T08:44:04.474886Z"}
Traceback (most recent call last):
File "/usr/src/snuba/snuba/clickhouse/native.py", line 200, in execute
result_data = query_execute()
^^^^^^^^^^^^^^^
File "/usr/src/snuba/snuba/clickhouse/native.py", line 183, in query_execute
return conn.execute( # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 373, in execute
rv = self.process_ordinary_query(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 571, in process_ordinary_query
return self.receive_result(with_column_types=with_column_types,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 204, in receive_result
return result.get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/result.py", line 50, in get_result
for packet in self.packet_generator:
File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 220, in packet_generator
packet = self.receive_packet()
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 237, in receive_packet
raise packet.exception
clickhouse_driver.errors.ServerException: Code: 47.
DB::Exception: Missing columns: 'error_id' 'click_is_rage' 'fatal_id' 'info_id' 'debug_id' 'warning_id' 'click_is_dead' while processing query: 'SELECT project_id, toStartOfHour(timestamp) AS to_hour_timestamp, replay_id, retention_days, anyIfState(browser_name, browser_name != '') AS browser_name, anyIfState(browser_version, browser_version != '') AS browser_version, sumState(toUInt64(click_is_dead)) AS count_dead_clicks, sumState(toUInt64((error_id != '00000000-0000-0000-0000-000000000000') OR (fatal_id != '00000000-0000-0000-0000-000000000000'))) AS count_errors, sumState(toUInt64((debug_id != '00000000-0000-0000-0000-000000000000') OR (info_id != '00000000-0000-0000-0000-000000000000'))) AS count_infos, sumState(toUInt64(click_is_rage)) AS count_rage_clicks, countState(toUInt64(segment_id)) AS count_segments, sumState(length(urls)) AS count_urls, sumState(toUInt64(warning_id != '00000000-0000-0000-0000-000000000000')) AS count_warnings, anyIfState(device_brand, device_brand != '') AS device_brand, anyIfState(device_family, device_family != '') AS device_family, anyIfState(device_model, device_model != '') AS device_model, anyIfState(device_name, device_name != '') AS device_name, anyIfState(dist, dist != '') AS dist, maxIfState(timestamp, segment_id IS NOT NULL) AS finished_at, anyIfState(environment, environment != '') AS environment, anyState(ip_address_v4) AS ip_address_v4, anyState(ip_address_v6) AS ip_address_v6, sumState(toUInt64(is_archived)) AS is_archived, anyIfState(os_name, os_name != '') AS os_name, anyIfState(os_version, os_version != '') AS os_version, anyIfState(platform, platform != '') AS platform, anyIfState(sdk_name, sdk_name != '') AS sdk_name, anyIfState(sdk_version, sdk_version != '') AS sdk_version, minState(replay_start_timestamp) AS started_at, anyIfState(user, user != '') AS user, anyIfState(user_id, user_id != '') AS user_id, anyIfState(user_name, user_name != '') AS user_name, anyIfState(user_email, user_email != '') AS user_email, minState(segment_id) AS min_segment_id FROM default.replays_local GROUP BY project_id, toStartOfHour(timestamp), replay_id, retention_days', required columns: 'segment_id' 'replay_id' 'browser_name' 'click_is_dead' 'warning_id' 'debug_id' 'info_id' 'fatal_id' 'browser_version' 'click_is_rage' 'urls' 'retention_days' 'os_version' 'error_id' 'device_brand' 'environment' 'dist' 'os_name' 'device_model' 'project_id' 'platform' 'timestamp' 'device_name' 'ip_address_v4' 'user_id' 'ip_address_v6' 'is_archived' 'sdk_name' 'sdk_version' 'user_email' 'replay_start_timestamp' 'device_family' 'user' 'user_name', maybe you meant: 'segment_id', 'replay_id', 'browser_name', 'click_node_id', 'browser_version', 'click_tag', 'urls', 'retention_days', 'os_version', 'error_ids', 'device_brand', 'environment', 'dist', 'os_name', 'device_model', 'project_id', 'platform', 'timestamp', 'device_name', 'ip_address_v4', 'user_id', 'ip_address_v6', 'is_archived', 'sdk_name', 'sdk_version', 'user_email', 'replay_start_timestamp', 'device_family', 'user' or 'user_name'. Stack trace:
0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c61ff37 in /usr/bin/clickhouse
1. DB::Exception::Exception(PreformattedMessage&&, int) @ 0x0000000007156ef1 in /usr/bin/clickhouse
2. DB::TreeRewriterResult::collectUsedColumns(std::shared_ptr<DB::IAST> const&, bool, bool) @ 0x000000001221ab99 in /usr/bin/clickhouse
3. DB::TreeRewriter::analyzeSelect(std::shared_ptr<DB::IAST>&, DB::TreeRewriterResult&&, DB::SelectQueryOptions const&, std::vector<DB::TableWithColumnNamesAndTypes, std::allocator<DB::TableWithColumnNamesAndTypes>> const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::TableJoin>) const @ 0x000000001221f801 in /usr/bin/clickhouse
4. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>)::$_0::operator()(bool) const @ 0x0000000011ed191c in /usr/bin/clickhouse
5. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x0000000011ec5975 in /usr/bin/clickhouse
6. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x0000000011f74948 in /usr/bin/clickhouse
7. DB::InterpreterCreateQuery::createTable(DB::ASTCreateQuery&) @ 0x0000000011ced71c in /usr/bin/clickhouse
8. DB::InterpreterCreateQuery::execute() @ 0x0000000011cfd920 in /usr/bin/clickhouse
9. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000122bfe15 in /usr/bin/clickhouse
10. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000122bb5b5 in /usr/bin/clickhouse
11. DB::TCPHandler::runImpl() @ 0x0000000013137519 in /usr/bin/clickhouse
12. DB::TCPHandler::run() @ 0x00000000131498f9 in /usr/bin/clickhouse
13. Poco::Net::TCPServerConnection::start() @ 0x0000000015b42834 in /usr/bin/clickhouse
14. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b43a31 in /usr/bin/clickhouse
15. Poco::PooledThread::run() @ 0x0000000015c7a667 in /usr/bin/clickhouse
16. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c7893c in /usr/bin/clickhouse
17. ? @ 0x00007f762c482609 in ?
18. ? @ 0x00007f762c3a7353 in ?
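The exception points at columns (click_is_dead, click_is_rage, error_id, fatal_id, info_id, debug_id, warning_id) that the materialized view expects but that do not exist on replays_local. A quick way to confirm what the table currently contains, sketched against the default clickhouse service and database names from the self-hosted setup:
docker compose exec clickhouse clickhouse-client --query "DESCRIBE TABLE default.replays_local"
If those columns are absent from the output, the table is still on an older replays schema, so creating the 0019 materialized view will keep failing until the intermediate replays migrations are applied.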
Event ID
Error in install/bootstrap-snuba.sh:4.
'$dcr snuba-api migrations migrate --force' exited with status 1
-> ./install.sh:main:36
--> install/bootstrap-snuba.sh:source:4
Looks like you've already sent this error to us, we're on it :)
SherinBloemendaal changed the title from "Missing columns Clickhouse error when upgrading from 23.12.1 to 24.7.1" to "Missing columns Clickhouse error when upgrading from 24.1.2 to 24.7.1" on Jul 25, 2024.
@SherinBloemendaal This is the error you would receive if the database hadn't been migrated from 0013 to 0018. I'm not sure whether there was a bug in the install script or the migrations were simply skipped, but you would need to run the migrations starting from 0014.
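One possible way to do that with the same docker compose setup used above (a sketch only: the exact replays migration IDs between 0014 and 0018 should be taken from the list output, and migrations run is assumed here to be the forward counterpart of the reverse command shown earlier):
docker compose --no-ansi run --rm snuba-api migrations list
docker compose --no-ansi run --rm snuba-api migrations run --group replays --migration-id <pending-migration-id>
docker compose --no-ansi run --rm snuba-api migrations migrate --force
<pending-migration-id> is a placeholder for each pending replays migration, applied in order; the final migrate --force is the same command install/bootstrap-snuba.sh invokes.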
Self-Hosted Version
24.7.1
CPU Architecture
x86_64
Docker Version
27.1.1, build 6312585
Docker Compose Version
2.29.1