2 changes: 2 additions & 0 deletions docs/hub/datasets-connectors.md
@@ -0,0 +1,2 @@
# Datasets connectors

2 changes: 2 additions & 0 deletions docs/hub/datasets-editing.md
@@ -0,0 +1,2 @@
# Datasets editing

52 changes: 40 additions & 12 deletions docs/hub/datasets-libraries.md
@@ -4,24 +4,52 @@ The Datasets Hub has support for several libraries in the Open Source ecosystem.
Thanks to the [huggingface_hub Python library](/docs/huggingface_hub), it's easy to enable sharing your datasets on the Hub.
We're happy to welcome to the Hub a set of Open Source libraries that are pushing Machine Learning forward.

## Libraries table

The table below summarizes the supported libraries and their level of integration.

| Library | Description | Download from Hub | Push to Hub |
| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------ | ----------------- | ----------- |
| [Argilla](./datasets-argilla) | Collaboration tool for AI engineers and domain experts that value high quality data. | ✅ | ✅ |
| [Daft](./datasets-daft) | Data engine for large scale, multimodal data processing with a Python-native interface. | ✅ +s | ✅ +s +p |
| [Dask](./datasets-dask) | Parallel and distributed computing library that scales the existing Python and PyData ecosystem. | ✅ +s | ✅ +s +p* |
| [Datasets](./datasets-usage) | 🤗 Datasets is a library for accessing and sharing datasets for Audio, Computer Vision, and Natural Language Processing (NLP). | ✅ +s | ✅ +s +p |
| [Distilabel](./datasets-distilabel) | The framework for synthetic data generation and AI feedback. | ✅ | ✅ |
| [DuckDB](./datasets-duckdb) | In-process SQL OLAP database management system. | ✅ +s | ❌ |
| [Embedding Atlas](./datasets-embedding-atlas) | Interactive visualization and exploration tool for large embeddings. | ✅ +s | ❌ |
| [Fenic](./datasets-fenic) | PySpark-inspired DataFrame framework for building production AI and agentic applications. | ✅ +s | ❌ |
| [FiftyOne](./datasets-fiftyone) | FiftyOne is a library for curation and visualization of image, video, and 3D data. | ✅ +s | ✅ |
| [Pandas](./datasets-pandas) | Python data analysis toolkit. | ✅ | ✅ +p* |
| [Polars](./datasets-polars) | A DataFrame library on top of an OLAP query engine. | ✅ +s | ✅ |
| [PyArrow](./datasets-pyarrow) | Apache Arrow is a columnar format and a toolbox for fast data interchange and in-memory analytics. | ✅ +s | ✅ +p* |
| [Spark](./datasets-spark) | Real-time, large-scale data processing tool in a distributed environment. | ✅ +s | ✅ +s +p |
| [WebDataset](./datasets-webdataset) | Library to write I/O pipelines for large datasets. | ✅ +s | ❌ |

_+s: Supports Streaming_
_+p: Writes optimized Parquet files_
_+p*: Requires passing extra arguments to write optimized Parquet files_

### Streaming

Dataset streaming lets you iterate over a dataset hosted on Hugging Face progressively, without downloading it completely.
It saves disk space and download time.

In addition to streaming from Hugging Face, many libraries also support streaming when writing back to Hugging Face.
Therefore, they can run end-to-end streaming pipelines: streaming from a source and writing to Hugging Face progressively, often overlapping the download, upload, and processing steps.

For more details on how to stream, check out the documentation of a library that supports streaming (see the table above), or the [streaming datasets](./datasets-streaming) documentation if you want to stream datasets from Hugging Face yourself.
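
For instance, here is a minimal sketch of streaming with 🤗 Datasets, using `allenai/c4` purely as an example (any Hub dataset works the same way):

```python
from datasets import load_dataset

# streaming=True returns an IterableDataset: nothing is downloaded up front,
# examples are fetched and decoded on the fly as you iterate
dataset = load_dataset("allenai/c4", "en", split="train", streaming=True)

for example in dataset:
    print(example["text"][:100])
    break  # stop after the first example
```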

### Optimized Parquet files

Parquet files on Hugging Face are optimized to improve storage efficiency, accelerate downloads and uploads, and enable efficient dataset streaming and editing:

* [Parquet Content Defined Chunking](https://huggingface.co/blog/parquet-cdc) optimizes Parquet for [Xet](https://huggingface.co/docs/hub/en/xet/index), Hugging Face's storage based on Git. It accelerates uploads and downloads thanks to deduplication and allows efficient file editing.
* The page index accelerates filters when streaming and enables efficient random access, e.g. in the [Dataset Viewer](https://huggingface.co/docs/dataset-viewer).

Some libraries require extra arguments to write optimized Parquet files (see the sketch after this list):

* `content_defined_chunking=True` to enable Parquet Content Defined Chunking, for [deduplication](https://huggingface.co/blog/parquet-cdc) and [editing](./datasets-editing)
* `write_page_index=True` to include a page index in the Parquet metadata, for [streaming and random access](./datasets-streaming)
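
With PyArrow, for example, this amounts to forwarding those options when writing. Below is a minimal sketch: the repository path is a placeholder, and the exact keyword for content-defined chunking may differ depending on your PyArrow version (some releases expose it as `use_content_defined_chunking`).

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"text": ["hello", "world"]})

# Write an optimized Parquet file directly to a dataset repository (placeholder repo id)
pq.write_table(
    table,
    "hf://datasets/username/my-dataset/data.parquet",
    content_defined_chunking=True,  # Parquet CDC for Xet deduplication (keyword name may vary by version)
    write_page_index=True,          # page index for streaming and random access
)
```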

## Integrating data libraries and tools with the Hub

158 changes: 158 additions & 0 deletions docs/hub/datasets-streaming.md
@@ -0,0 +1,158 @@
# Streaming datasets

## Integrated libraries

If a dataset on the Hub is tied to a [supported library](./datasets-libraries) that allows streaming from Hugging Face, streaming the dataset can be done in just a few lines. To see how to access a dataset, click the "Use this dataset" button on the dataset page. For example, [`samsum`](https://huggingface.co/datasets/Samsung/samsum?library=datasets) shows how to do so with 🤗 Datasets below.

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-dark.png"/>
</div>

<div class="flex justify-center">
<img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-modal.png"/>
<img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/hub/datasets-usage-modal-dark.png"/>
</div>

## Using the Hugging Face Client Library

You can use the [`huggingface_hub`](/docs/huggingface_hub) library to create, delete, and access files from repositories. For example, to stream the `allenai/c4` dataset in Python, install the library (we recommend using the latest version to benefit from recent fixes and improvements) and run the following code:

```bash
pip install -U huggingface_hub
```

```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

repo_id = "allenai/c4"
path_in_repo = "en/c4-train.00000-of-01024.json.gz"

# Stream the file
with fs.open(f"datasets/{repo_id}/{path_in_repo}", "r", compression="gzip") as f:
    print(f.readline())  # read only the first line
    # {"text":"Beginners BBQ Class Taking Place in Missoula!...}
```

See the [HF filesystem documentation](https://huggingface.co/docs/huggingface_hub/en/guides/hf_file_system) for more information.

You can also integrate this into your own library! For example, you can quickly stream a CSV dataset using Pandas in a batched manner.
```py
from huggingface_hub import HfFileSystem
import pandas as pd

fs = HfFileSystem()

repo_id = "YOUR_REPO_ID"
path_in_repo = "data.csv"

batch_size = 5

# Stream the file
with fs.open(f"datasets/{repo_id}/{path_in_repo}") as f:
    for df in pd.read_csv(f, iterator=True, chunksize=batch_size):  # read 5 lines at a time
        print(len(df))  # 5
```

Streaming is especially useful for reading big files on Hugging Face progressively, or for reading only a small portion of them.
For example, `tarfile` can iterate over the files of TAR archives, `zipfile` can read files from ZIP archives, and `pyarrow` can access row groups of Parquet files.
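
As a sketch of the `tarfile` case (the repository and archive name below are placeholders):

```python
import tarfile
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# Iterate over the members of a TAR archive without downloading the whole file
with fs.open("datasets/username/my-dataset/images.tar") as f:
    with tarfile.open(fileobj=f, mode="r|*") as tar:  # "r|*" reads the archive as a stream
        for member in tar:
            if member.isfile():
                data = tar.extractfile(member).read()
                print(member.name, len(data))
                break  # stop after the first file
```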

> [!TIP]
> There is an equivalent filesystem implementation in Rust available in [OpenDAL](https://github.com/apache/opendal).

## Using cURL

Since all files on the Hub are available via HTTP, you can stream files using `cURL`:

```bash
>>> curl -L https://huggingface.co/datasets/fka/awesome-chatgpt-prompts/resolve/main/prompts.csv | head -n 5
"act","prompt"
"An Ethereum Developer","Imagine you are an experienced Ethereum developer tasked with creating...
"SEO Prompt","Using WebPilot, create an outline for an article that will be 2,000 words on the ...
"Linux Terminal","I want you to act as a linux terminal. I will type commands and you will repl...
"English Translator and Improver","I want you to act as an English translator, spelling correct...
```

Use range requests to access a specific portion of a file:

```bash
>>> curl -r 40-88 -L https://huggingface.co/datasets/fka/awesome-chatgpt-prompts/resolve/main/prompts.csv
Imagine you are an experienced Ethereum developer
```

Stream from private repositories using an [access token](https://huggingface.co/docs/hub/en/security-tokens):


```bash
>>> export HF_TOKEN=hf_xxx
>>> curl -H "Authorization: Bearer $HF_TOKEN" -L https://huggingface.co/...
```

## Streaming Parquet

Parquet is a great format for AI datasets. It offers good compression, a columnar structure for efficient processing and projections, and multi-level metadata for fast filtering. It is suitable for datasets of all sizes.

Parquet files are divided into row groups that are often around 100MB each. This lets data loaders and data processing frameworks stream data progressively, iterating over row groups.

### Stream Row Groups

Use PyArrow to stream row groups from Parquet files on Hugging Face:

```python
import pyarrow.parquet as pq

repo_id = "HuggingFaceFW/finewiki"
path_in_repo = "data/enwiki/000_00000.parquet"

# Stream the Parquet file row group per row group
with pq.ParquetFile(f"hf://datasets/{repo_id}/{path_in_repo}") as pf:
    for row_group_idx in range(pf.num_row_groups):
        row_group_table = pf.read_row_group(row_group_idx)
        df = row_group_table.to_pandas()
```

> [!TIP]
> PyArrow supports `hf://` paths out-of-the-box and uses `HfFileSystem` automatically.

Find more information in the [PyArrow documentation](./datasets-pyarrow).

### Efficient random access

Row groups are further divided into columns, and columns into pages. Pages are often around 1MB and are the smallest unit of data in Parquet, since this is where compression is applied. Accessing pages enables loading specific rows without having to load a full row group, and is possible if the Parquet file has a page index. However, not every Parquet framework supports reading at the page level. PyArrow doesn't, for example, but the `parquet` crate in Rust does:

```rust
use std::sync::Arc;
use object_store::path::Path;
use object_store_opendal::OpendalStore;
use opendal::services::Huggingface;
use opendal::Operator;
use parquet::arrow::async_reader::ParquetObjectReader;
use parquet::arrow::ParquetRecordBatchStreamBuilder;
use futures::TryStreamExt;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let repo_id = "HuggingFaceFW/finewiki";
    let path_in_repo = Path::from("data/enwiki/000_00000.parquet");
    let offset = 0;
    let limit = 10;

    // Configure OpenDAL's Hugging Face service for the dataset repository
    let builder = Huggingface::default().repo_type("dataset").repo_id(repo_id);
    let operator = Operator::new(builder)?.finish();
    // Expose the OpenDAL operator as an object_store for the parquet crate
    let store = Arc::new(OpendalStore::new(operator));
    let reader = ParquetObjectReader::new(store, path_in_repo.clone());
    // Build a stream that reads only rows [offset, offset + limit)
    let batch_stream =
        ParquetRecordBatchStreamBuilder::new(reader).await?
            .with_offset(offset as usize)
            .with_limit(limit as usize)
            .build()?;
    let results = batch_stream.try_collect::<Vec<_>>().await?;
    println!("Read {} batches", results.len());
    Ok(())
}
```

> [!TIP]
> In Rust we use OpenDAL's `Huggingface` service, which is equivalent to `HfFileSystem` in Python.

Pass `write_page_index=True` in PyArrow to include the page index that enables efficient random access.
It notably adds "offset_index_offset" and "offset_index_length" to Parquet columns, which you can see in the [Parquet metadata viewer on Hugging Face](https://huggingface.co/blog/cfahlgren1/intro-to-parquet-format).
Page indexes also speed up the [Hugging Face Dataset Viewer](https://huggingface.co/docs/dataset-viewer) and allow it to show data without a row group size limit.
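
A minimal sketch of writing a Parquet file with a page index using PyArrow (the output path is just an example):

```python
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"text": [f"example {i}" for i in range(1000)]})

# Include the page index in the Parquet metadata so that readers which support it
# (like the Rust `parquet` crate above) can locate specific rows
# without scanning a full row group
pq.write_table(table, "data.parquet", write_page_index=True)
```
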
3 changes: 3 additions & 0 deletions docs/hub/index.md
@@ -61,7 +61,10 @@ The Hugging Face Hub is a platform with over 2M models, 500k datasets, and 1M de
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-gated">Gated Datasets</a>
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-adding">Uploading Datasets</a>
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-downloading">Downloading Datasets</a>
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-streaming">Streaming Datasets</a>
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-editing">Editing Datasets</a>
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-libraries">Libraries</a>
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-connectors">Connectors</a>
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-viewer">Dataset Viewer</a>
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-download-stats">Download Stats</a>
<a class="no-underline! hover:opacity-60 transform transition-colors hover:translate-x-px" href="./datasets-data-files-configuration">Data files Configuration</a>