[ADAP-803] The existing table '' is in another format than 'delta' or 'iceberg' or 'hudi' #870
Comments
I observe similar behaviour. Tables are registered in the Hive Metastore. This can be reproduced as follows.
Create the test schema:
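A minimal sketch of the schema setup, assuming a hypothetical schema name `test_schema`:

```sql
-- run against the Hive-metastore-backed Spark session (schema name is hypothetical)
CREATE SCHEMA IF NOT EXISTS test_schema;
```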
Then run the following incremental model:
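A sketch of such a model, assuming an Iceberg file format with the merge strategy (model and column names are hypothetical):

```sql
-- models/iceberg_incremental_example.sql (hypothetical)
{{
  config(
    materialized='incremental',
    incremental_strategy='merge',
    file_format='iceberg',
    unique_key='id'
  )
}}

select 1 as id, current_timestamp() as loaded_at
```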
The first time it runs fine, as @roberto-rosero mentioned; the second time it indeed fails. In Spark I defined the Iceberg catalog as follows:
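A sketch of a common way to define an Iceberg catalog on top of the Hive Metastore via the session catalog (an assumed setup, not necessarily the exact configuration used here):

```
# spark-defaults.conf: Iceberg on the Hive Metastore via the session catalog (sketch)
spark.sql.extensions                    org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.spark_catalog         org.apache.iceberg.spark.SparkSessionCatalog
spark.sql.catalog.spark_catalog.type    hive
```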
With logs:
It does work if I explicitly include the catalog in the schema configuration (see the sketch below).
For normal dbt tables it (re)runs fine without explicitly specifying the metastore. I tried diving into the code at the location indicated by the logs. Any help is appreciated!
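A sketch of what that looks like, assuming the catalog name `spark_catalog` and a hypothetical schema:

```sql
-- hypothetical: the model's schema prefixed with the Spark catalog name
{{
  config(
    materialized='incremental',
    incremental_strategy='merge',
    file_format='iceberg',
    unique_key='id',
    schema='spark_catalog.test_schema'
  )
}}

select 1 as id
```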
Encountering a similar issue. When I explicitly include the catalog in the target_schema, it uses a create or replace statement instead of performing a merge on subsequent runs.
The same thing is happening to us; in our case the table is Iceberg, but the provider it uses is Hive. Reviewing impl.py in dbt-spark and debugging our code, we understand that it never meets the condition for the Hive provider even when the table is Iceberg. This can be seen in the definition of the build_spark_relation_list method. We understand this is a bug in impl.py, since the table is of type Iceberg. To work around it, we generated the snapshot macro at the project level and removed the check that validated what type of table it was. The code removed from the snapshot macro is sketched below.
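For reference, this appears to be the format check in dbt-spark's snapshot materialization, paraphrased here (the exact code may differ between versions):

```sql
{%- if target_relation_exists -%}
  {%- if not target_relation.is_delta and not target_relation.is_iceberg and not target_relation.is_hudi -%}
    {% set invalid_format_msg -%}
      The existing table {{ model.schema }}.{{ target_table }} is in another format than 'delta' or 'iceberg' or 'hudi'
    {%- endset %}
    {% do exceptions.raise_compiler_error(invalid_format_msg) %}
  {%- endif %}
{%- endif %}
```

Removing this check lets the snapshot merge proceed even when the adapter fails to recognise the existing table as Iceberg.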
This is how we have managed to obtain the desired snapshot behavior so far.
Environment
Is this a new bug in dbt-spark?
Current Behavior
I ran dbt snapshot and the first run went very well, but on the second run the error in the title of this bug occurs.
Expected Behavior
The snapshot should be created just like the first time.
Steps To Reproduce
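A minimal reproduction based on the description above (snapshot, schema, and table names are hypothetical):

```sql
-- snapshots/my_snapshot.sql (hypothetical)
{% snapshot my_snapshot %}
{{
  config(
    target_schema='test_schema',
    unique_key='id',
    strategy='check',
    check_cols='all',
    file_format='iceberg'
  )
}}
select * from test_schema.source_table
{% endsnapshot %}
```

Running `dbt snapshot` twice against a Spark cluster backed by the Hive Metastore should show the behaviour: the first run succeeds, the second fails with the error in the title.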
Relevant log output
No response
Environment
Additional Context
No response