# PubLayNet

PubLayNet is a large dataset of document images whose layout is annotated with both bounding boxes and polygonal segmentations. The source of the documents is the PubMed Central Open Access Subset (commercial use collection). The annotations are automatically generated by matching the PDF and XML formats of the articles in the PubMed Central Open Access Subset. More details are available in our paper "PubLayNet: largest dataset ever for document layout analysis".

## Headlines

07/Mar/2022 - We have released the ground truth of the test set for the ICDAR 2021 Scientific Literature Parsing competition, available here!

04/May/2021 - The report for the ICDAR 2021 Scientific Literature Parsing competition is available here.

07/Aug/2020 - PDFs of the document pages in PubLayNet are released.

20/Jul/2020 - PubLayNet is used in the ICDAR 2021 Competition on Scientific Literature Parsing (Task A on Document Layout Recognition).

26/Apr/2020 - PubLayNet is used by ICLR 2020 to extract all the images in ICLR 2020 papers for promotion.

03/Dec/2019 - Pre-trained Faster-RCNN and Mask-RCNN models are released.

25/Nov/2019 - PubTabNet is released! PubTabNet is a large dataset for image-based table recognition, containing 568k+ images of tabular data annotated with the corresponding HTML representation of the tables. Table regions are identified using the same algorithm that generates PubLayNet.

01/Nov/2019 - Our paper "PubLayNet: largest dataset ever for document layout analysis" receives the best paper award at ICDAR 2019!

31/Oct/2019 - PubLayNet migrates from Box to IBM Data Asset eXchange.

## Updates in progress

## Ground truth of test set

We have released the ground truth of the test set for the ICDAR 2021 Scientific Literature Parsing competition, available here.

## Getting data

Images and annotations can be downloaded here. The training set is quite large, so two download options are offered: the training set is split into 7 batches that can be downloaded separately, or the full set can be downloaded at once.

For the ICDAR competition, the IDs of the image files are available here.

If direct download in the browser is unstable, or you want to download the data from the command line, you can use curl or wget:

```
curl -o <YOUR_TARGET_DIR>/publaynet.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz
wget -O <YOUR_TARGET_DIR>/publaynet.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz
```

To download the PDFs of the document pages contained in PubLayNet:

```
curl -o <YOUR_TARGET_DIR>/PubLayNet_PDF.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/PubLayNet_PDF.tar.gz
wget -O <YOUR_TARGET_DIR>/PubLayNet_PDF.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/PubLayNet_PDF.tar.gz
```
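Once downloaded, the archives are plain gzipped tarballs and can be unpacked with any tar tool. A minimal Python sketch (the archive path here is an assumption; adjust to your target directory):

```python
import tarfile

# Unpack the downloaded dataset archive into the current directory.
# "publaynet.tar.gz" is an assumed local path, not a fixed name.
with tarfile.open("publaynet.tar.gz", "r:gz") as tar:
    tar.extractall(".")
```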

## Annotation format

The annotation files follow the JSON format of the Object Detection task of MS COCO.
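As a quick illustration, the sketch below loads one of the COCO-style annotation files and groups its annotations by image. The path `publaynet/train.json` is an assumption based on the extracted archive layout and may need adjusting:

```python
import json
from collections import defaultdict

# Load a COCO-style annotation file; path assumed from the extracted archive.
with open("publaynet/train.json") as f:
    coco = json.load(f)

# COCO files carry three top-level lists: images, annotations, categories.
print({c["id"]: c["name"] for c in coco["categories"]})

# Group annotations by image id for convenient per-page lookup.
anns_by_image = defaultdict(list)
for ann in coco["annotations"]:
    anns_by_image[ann["image_id"]].append(ann)

# Each annotation has a bounding box [x, y, width, height]
# and a polygonal "segmentation" field.
first_image = coco["images"][0]
for ann in anns_by_image[first_image["id"]]:
    print(ann["category_id"], ann["bbox"])
```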

## Cite us

```
@inproceedings{zhong2019publaynet,
  title={PubLayNet: largest dataset ever for document layout analysis},
  author={Zhong, Xu and Tang, Jianbin and Yepes, Antonio Jimeno},
  booktitle={2019 International Conference on Document Analysis and Recognition (ICDAR)},
  year={2019},
  volume={},
  number={},
  pages={1015-1022},
  doi={10.1109/ICDAR.2019.00166},
  ISSN={1520-5363},
  month={Sep.},
  organization={IEEE}
}
```

## Examples

A Jupyter notebook is provided to generate the following visualization of the annotations of 20 sample pages.

[Visualization of the annotations of 20 sample pages]
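For a rough idea of what the notebook produces, here is a minimal, self-contained sketch that overlays the bounding boxes of a single annotated page using matplotlib. It is not the provided notebook, and the file paths are assumptions based on the extracted archive:

```python
import json

import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from PIL import Image

# Paths assumed from the extracted archive layout; adjust as needed.
with open("publaynet/train.json") as f:
    coco = json.load(f)

# Pick one page and collect its annotations.
img_info = coco["images"][0]
anns = [a for a in coco["annotations"] if a["image_id"] == img_info["id"]]

# Draw the page image with one rectangle per annotated layout element.
fig, ax = plt.subplots()
ax.imshow(Image.open(f"publaynet/train/{img_info['file_name']}"))
for a in anns:
    x, y, w, h = a["bbox"]  # COCO bbox format: [x, y, width, height]
    ax.add_patch(Rectangle((x, y), w, h, fill=False, linewidth=1, edgecolor="red"))
ax.axis("off")
plt.show()
```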