Josh Myers-Dean, Yifei Fan, Brian Price, Wilson Chan, Danna Gurari
- Release Test Set
This repository provides links to download data for the DIG dataset. Setup instructions are provided in GETTING_STARTED.md. Given the size of the dataset, we chunk the data into JSON files containing up to 100 samples each. Each sample corresponds to an image ID with annotations for multiple gesture types (e.g., lasso, click), previous segmentations, ground truths, and other optional metadata. For the test set, we also include annotations for the setting of selecting multiple disconnected regions. For more information on the JSON format, refer to DATA.md.
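As an illustration of the chunked layout, here is a minimal sketch of reading one JSON chunk. The field names below are assumptions for illustration only; DATA.md documents the authoritative schema.

```python
import json
import os
import tempfile

# Hypothetical chunk: one JSON file holds up to 100 samples keyed by
# image ID (field names are assumptions; see DATA.md for the real schema).
chunk = {
    "img_001": {
        "gestures": {"click": [[12, 34]], "lasso": [[5, 6], [7, 8]]},
        "previous_seg": "img_001_prev.png",
        "ground_truth": "img_001_gt.png",
    },
}

def iter_samples(path):
    """Return (image_id, annotation) pairs from one chunk file."""
    with open(path) as f:
        return list(json.load(f).items())

# Demo: write the chunk to disk, then read it back.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(chunk, f)
samples = iter_samples(f.name)
os.unlink(f.name)
```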
We provide a lightweight example of data loading with DIG in dig/dataset.py. Since the data is chunked, we pre-computed index maps for each JSON file to allow for seamless data loading.
```python
from dig.dataset import DIGDataset

ds = DIGDataset(JSON_DIRECTORY, IMG_DIRECTORY, split=SPLIT)
sample = ds[0]  # sample.img, sample.previous_seg, sample.gt, sample.annotation
```
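The index-map idea behind `ds[0]` can be sketched as follows. This is a simplified illustration, not the actual dig/dataset.py implementation: a global sample index is mapped to a (chunk file, within-chunk offset) pair, so indexing only ever touches the chunk that holds the requested sample.

```python
from bisect import bisect_right

# Hypothetical chunk sizes: number of samples in each JSON file.
chunk_sizes = [100, 100, 37]

# Cumulative starts: chunk i covers global indices [starts[i], starts[i+1]).
starts = [0]
for n in chunk_sizes:
    starts.append(starts[-1] + n)

def locate(global_idx):
    """Map a global sample index to (chunk_index, offset_within_chunk)."""
    chunk = bisect_right(starts, global_idx) - 1
    return chunk, global_idx - starts[chunk]
```

For example, with the chunk sizes above, global index 150 falls 50 samples into the second chunk.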
We provide an implementation of our RICE evaluation metric in dig/rice.py.
If you encounter any issues with the data or have any questions, we encourage you to submit an issue in this repository. For all other inquiries, please reach out to josh [dot] myers-dean [at] colorado [dot] edu.
If you find our work useful, please consider citing:
```bibtex
@InProceedings{Myers-Dean_2024_WACV,
    author    = {Myers-Dean, Josh and Fan, Yifei and Price, Brian and Chan, Wilson and Gurari, Danna},
    title     = {Interactive Segmentation for Diverse Gesture Types Without Context},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {7198-7208}
}
```