[Dataset hub] more than storage: streaming, editing, connectors #2052
@@ -0,0 +1,2 @@
# Datasets connectors

@@ -0,0 +1,105 @@
# Editing datasets

The [Hub](https://huggingface.co/datasets) enables collaborative curation of community and research datasets. We encourage you to explore datasets on the Hub and contribute to dataset curation to help grow the ML community and accelerate progress for everyone. All contributions are welcome!

Start by [creating a Hugging Face Hub account](https://huggingface.co/join) if you don't have one yet.

## Edit using the Hub UI

> [!WARNING]
> This feature is only available for CSV datasets for now.

The Hub's web-based interface allows users without any developer experience to edit a dataset.

Open the dataset page and navigate to the dataset's **Data Studio** to edit the dataset.

Contributor (on lines +12 to +14): (nit) it reads better like this i think

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/data_studio_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/data_studio_button_dark-min.png"/>
</div>

Click on **Toggle edit mode** to enable dataset editing.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/toggle_edit_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/toggle_edit_button_dark-min.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/edit_cell_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/edit_cell_button_dark-min.png"/>
</div>

Edit as many cells as you want, and finally click **Commit** to leave a commit message and commit your changes.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_button-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_button_dark-min.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_message-min.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-edit/commit_message_dark-min.png"/>
</div>

## Using the `huggingface_hub` client library

The rich feature set of the `huggingface_hub` library allows you to manage repositories, including editing dataset files on the Hub. Visit [the client library's documentation](/docs/huggingface_hub/index) to learn more.

Member: should embed at least an example or two (same in the "downloading" doc btw)
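
For instance, here is a minimal sketch of editing a single file in a dataset repository with `huggingface_hub`: download the file, change it locally, and upload it back as a new commit. The repository id and file name below are placeholders.

```python
from huggingface_hub import HfApi, hf_hub_download

api = HfApi()
repo_id = "username/my_dataset"  # placeholder: your dataset repository

# Download one file from the dataset repository
local_path = hf_hub_download(repo_id=repo_id, filename="data.csv", repo_type="dataset")

# ... edit the file at local_path with your favorite tool ...

# Upload the edited file back to the Hub as a new commit
api.upload_file(
    path_or_fileobj=local_path,
    path_in_repo="data.csv",
    repo_id=repo_id,
    repo_type="dataset",
    commit_message="Fix a few values in data.csv",
)
```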

## Integrated libraries

If a dataset on the Hub is tied to a [supported library](./datasets-libraries), loading the dataset, editing it, and pushing your changes can be done in just a few lines. To see how to access a dataset, click the "Use this dataset" button on the dataset page.

For example, [`samsum`](https://huggingface.co/datasets/knkarthick/samsum?library=datasets) shows how to do so with 🤗 Datasets below.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-dark.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-modal.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-modal-dark.png"/>
</div>
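
For illustration, a minimal load-edit-push workflow with 🤗 Datasets might look like the sketch below; the target repository id under your namespace is a placeholder.

```python
from datasets import load_dataset

# Load the dataset from the Hub
ds = load_dataset("knkarthick/samsum", split="train")

# Edit the dataset, e.g. clean up a text column
ds = ds.map(lambda example: {"summary": example["summary"].strip()})

# Push the edited dataset to a repository under your namespace (placeholder repo id)
ds.push_to_hub("username/samsum-cleaned")
```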

### Only upload the new data

Hugging Face's storage uses [Xet](https://huggingface.co/docs/hub/en/xet), which is based on deduplication and in particular enables deduplicated uploads.
Unlike regular cloud storage, Xet doesn't require datasets to be completely re-uploaded to commit changes.

Instead, it automatically detects which parts of the dataset changed and tells the client library to upload only those parts.

To do that, Xet uses a content-defined chunking algorithm to find chunks of 64kB that already exist on Hugging Face.

Here is how it works with Pandas:

```python
import pandas as pd

repo_id = "username/my_dataset"  # your dataset repository on the Hub

# Load the dataset
df = pd.read_csv(f"hf://datasets/{repo_id}/data.csv")

# Edit the dataset
# df = df.apply(...)

# Commit the changes (index=False avoids writing an extra index column)
df.to_csv(f"hf://datasets/{repo_id}/data.csv", index=False)
```

This code first loads a dataset and then edits it.
Once the edits are done, `to_csv()` materializes the file in memory, chunks it, asks Xet which chunks are already on Hugging Face and which chunks changed, and finally uploads only the new data.

### Optimized Parquet editing

The amount of data to re-upload depends on the edits and the file structure.

The Parquet format is columnar and compressed at the page level (pages are around ~1MB).
We optimized Parquet for Xet with [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc), which ensures unchanged data generally results in unchanged pages.

Member: add a visual of how those files and/or datasets are marked on the Hub 😁
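
As an illustrative sketch (mirroring the CSV example above, with placeholder names), editing a Parquet file on the Hub with Pandas follows the same pattern; depending on your library, writing Xet-optimized Parquet may require an extra argument (see the [supported libraries](./datasets-libraries) page):

```python
import pandas as pd

repo_id = "username/my_dataset"  # placeholder: your dataset repository

# Load the Parquet file from the Hub
df = pd.read_parquet(f"hf://datasets/{repo_id}/data.parquet")

# Edit the dataset
# df = df.apply(...)

# Write the file back: with Content Defined Chunking, unchanged pages deduplicate
# against the previous version, so mostly the edited parts are uploaded.
# Some libraries need an extra argument for this, see the supported libraries page.
df.to_parquet(f"hf://datasets/{repo_id}/data.parquet")
```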

Check whether your library supports optimized Parquet on the [supported libraries](./datasets-libraries) page.

### Streaming

For big datasets, we recommend libraries with dataset streaming features that support end-to-end streaming pipelines.
In this case, the dataset processing runs progressively as the existing data arrives and the new data is uploaded to the Hub.
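
For example, here is a rough sketch of a streaming read with 🤗 Datasets (the dataset name and column are placeholders); rows are downloaded and processed progressively instead of materializing the whole dataset first:

```python
from datasets import load_dataset

# Stream the dataset instead of downloading it entirely
ds = load_dataset("username/my_dataset", split="train", streaming=True)

# Transformations are applied lazily, example by example
ds = ds.map(lambda example: {"text": example["text"].lower()})

# Rows are fetched from the Hub as you iterate
for example in ds:
    ...
```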

Check whether your library supports streaming on the [supported libraries](./datasets-libraries) page.

@@ -4,24 +4,52 @@ The Datasets Hub has support for several libraries in the Open Source ecosystem.
Thanks to the [huggingface_hub Python library](/docs/huggingface_hub), it's easy to enable sharing your datasets on the Hub.
We're happy to welcome to the Hub a set of Open Source libraries that are pushing Machine Learning forward.

## Libraries table

The table below summarizes the supported libraries and their level of integration.

| Library | Description | Download from Hub | Push to Hub |
| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------- | ----------- |
| [Argilla](./datasets-argilla) | Collaboration tool for AI engineers and domain experts that value high quality data. | ✅ | ✅ |
| [Daft](./datasets-daft) | Data engine for large scale, multimodal data processing with a Python-native interface. | ✅ +s | ✅ +s +p |
| [Dask](./datasets-dask) | Parallel and distributed computing library that scales the existing Python and PyData ecosystem. | ✅ +s | ✅ +s +p* |
| [Datasets](./datasets-usage) | 🤗 Datasets is a library for accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP). | ✅ +s | ✅ +s +p |
| [Distilabel](./datasets-distilabel) | The framework for synthetic data generation and AI feedback. | ✅ | ✅ |
| [DuckDB](./datasets-duckdb) | In-process SQL OLAP database management system. | ✅ +s | ❌ |
| [Embedding Atlas](./datasets-embedding-atlas) | Interactive visualization and exploration tool for large embeddings. | ✅ +s | ❌ |
| [Fenic](./datasets-fenic) | PySpark-inspired DataFrame framework for building production AI and agentic applications. | ✅ +s | ❌ |
| [FiftyOne](./datasets-fiftyone) | FiftyOne is a library for curation and visualization of image, video, and 3D data. | ✅ +s | ✅ |
| [Pandas](./datasets-pandas) | Python data analysis toolkit. | ✅ | ✅ +p* |
| [Polars](./datasets-polars) | A DataFrame library on top of an OLAP query engine. | ✅ +s | ✅ |
| [PyArrow](./datasets-pyarrow) | Apache Arrow is a columnar format and a toolbox for fast data interchange and in-memory analytics. | ✅ +s | ✅ +p* |
| [Spark](./datasets-spark) | Real-time, large-scale data processing tool in a distributed environment. | ✅ +s | ✅ +s +p |
| [WebDataset](./datasets-webdataset) | Library to write I/O pipelines for large datasets. | ✅ +s | ❌ |

_+s: Supports Streaming_
_+p: Writes optimized Parquet files_
_+p*: Requires passing extra arguments to write optimized Parquet files_

Member: nice!

### Streaming

Dataset streaming allows you to iterate over a dataset on Hugging Face progressively without having to download it completely.
It saves disk space and download time.
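
For instance, a library like DuckDB can query a dataset directly from its `hf://` path without downloading it completely; here is a rough sketch with placeholder repository and file names:

```python
import duckdb

# Query a Parquet file hosted on Hugging Face directly from its hf:// path.
# DuckDB's httpfs extension handles the remote reads, fetching only what the query needs.
df = duckdb.sql("""
    SELECT *
    FROM 'hf://datasets/username/my_dataset/data.parquet'
    LIMIT 100
""").df()
```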

In addition to streaming from Hugging Face, many libraries also support streaming when writing back to Hugging Face.
They can therefore run end-to-end streaming pipelines: streaming from a source and writing to Hugging Face progressively, often overlapping the download, upload, and processing steps.

For more details on streaming, check out the documentation of a library that supports streaming (see the table above), or the [streaming datasets](./datasets-streaming) documentation if you want to stream datasets from Hugging Face yourself.

### Optimized Parquet files

Parquet files on Hugging Face are optimized to improve storage efficiency, accelerate downloads and uploads, and enable efficient dataset streaming and editing:

Member: add a visual of how those files or datasets are marked on the Hub 😁
Member: hint hint @lhoestq =)

* [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc) optimizes Parquet for [Xet](https://huggingface.co/docs/hub/en/xet/index), Hugging Face's storage based on Git. It accelerates uploads and downloads thanks to deduplication and allows efficient file editing.
* A page index accelerates filters when streaming and enables efficient random access, e.g. in the [Dataset Viewer](https://huggingface.co/docs/dataset-viewer).

Some libraries require extra arguments to write optimized Parquet files:

Member: do we want to say which here?

* `content_defined_chunking=True` to enable Parquet Content Defined Chunking, for [deduplication](https://huggingface.co/blog/parquet-cdc) and [editing](./datasets-editing)
* `write_page_index=True` to include a page index in the Parquet metadata, for [streaming and random access](./datasets-streaming)
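
For illustration, here is how these options could be passed when writing a Parquet file with PyArrow. This is a sketch that assumes a recent version exposing the arguments under the exact names listed above; check your library's documentation for the precise keyword names and version requirements.

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"text": ["hello", "world"], "label": [0, 1]})

# Write an optimized Parquet file: content-defined chunking for Xet deduplication,
# and a page index for efficient streaming and random access.
# Assumes the writer accepts the argument names listed above; they may differ per version.
pq.write_table(
    table,
    "data.parquet",
    content_defined_chunking=True,
    write_page_index=True,
)
```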

## Integrating data libraries and tools with the Hub