Description
In ETL pipelines, loading transformed data into various data warehouses is a critical requirement. Currently, the ibis.TableDataset connector in Kedro does not support data insertion into Ibis backends.
Context
Why is this change important to me?
We are developing ETL pipelines in our organization, and inserting records into data warehouses is an essential requirement. At present, without support for data insertion, we must bypass the Kedro DataCatalog and rely on external ORM tools, such as SQLAlchemy or dataset, to handle native data storage operations.
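To illustrate the current workaround, here is a minimal sketch of a node that handles its own I/O with SQLAlchemy instead of returning data to the catalog. The table, columns, and in-memory SQLite URL are illustrative stand-ins for a real warehouse connection, not part of any existing pipeline.

```python
import sqlalchemy as sa

# Stand-in for a warehouse connection string.
engine = sa.create_engine("sqlite:///:memory:")
metadata = sa.MetaData()
orders = sa.Table(
    "orders",
    metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("amount", sa.Float),
)
metadata.create_all(engine)


def load_orders(records: list[dict]) -> int:
    """Node body that performs its own insert, bypassing the DataCatalog."""
    with engine.begin() as conn:
        conn.execute(sa.insert(orders), records)
        count = conn.execute(
            sa.select(sa.func.count()).select_from(orders)
        ).scalar_one()
    return count


print(load_orders([{"id": 1, "amount": 9.5}, {"id": 2, "amount": 3.0}]))
```

The point of the feature request is precisely to remove this kind of hand-rolled storage code from nodes and let the catalog own it.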
How would I use it?
Supporting data insertion in ibis.TableDataset would allow us to maintain a clean and consistent pipeline, avoiding the need for custom load operations within nodes. This would simplify the workflow and allow Kedro to manage the complete I/O process.
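As a sketch of how this might look from the user's side, the catalog entry below assumes a hypothetical insert mode on ibis.TableDataset; the mode key and its value are proposals for illustration, not the current Kedro API.

```yaml
# catalog.yml (hypothetical: "mode: insert" is the proposed behavior)
orders_table:
  type: ibis.TableDataset
  table_name: orders
  connection:
    backend: duckdb
    database: warehouse.db
  save_args:
    mode: insert   # append rows to the existing table instead of recreating it
```

With something like this in place, a node could simply return the transformed table and the catalog would handle the insert.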
How can it benefit other users?
By enabling this feature, users could avoid writing custom loading logic, thereby keeping their pipelines cleaner and more efficient. This would enhance Kedro's usability in scenarios where heavy I/O operations are involved, particularly for teams working with data warehouses or similar storage backends.