
CRISP Teleoperated Fruit Picking Dataset

Dataset_Video.mp4

Intro

This dataset contains demonstration data collected with a teleoperation system. The CRISP teleoperated fruit picking dataset provides real-world recordings of teleoperated grasping and manipulation sequences, offering RGB-D, tactile, and kinematic data collected during fruit pick-and-place tasks. Items are placed in the workspace either singly or in a clutter to simulate real-world food manufacturing scenarios.

It comprises 10 recordings for each of 3 objects (Avocado, Banana, Blueberry Box) in 2 scenarios (Single, Clutter), for a total of 60 demonstrations.

The dataset includes 6 activities:

  • move-in is the act of approaching, with the arm, the item the operator wants to grasp or manipulate.
  • move-out is the opposite of the previous activity: the robot arm leaves the workspace, with or without the item in hand.
  • manipulate covers the successful and unsuccessful manoeuvres performed for workspace decluttering.
  • grasp is the act of performing a closure around the item. This activity terminates when the hand lifts, with or without the item.
  • pick-up starts at the end of grasp and corresponds to lifting the item vertically within the workspace.
  • drop terminates every demonstration. It occurs after a move-out while carrying the item, and terminates when the item comes into contact with a surface outside of the workspace.

Data Modalities

The dataset provides the following modalities:

  • RGB-D
  • TF (Robot hand palm, Robot Fingertips, Leap Motion tracked fingertips)
  • Tactile Data
  • Kinematic state of the robot hand and arm

RGB is currently available as JPEGs. If another format of the RGB or Depth data, or a ROS version of the dataset, is required, please email [email protected].

For every item and scenario, there are 10 demonstrated episodes. The dataset is organised as follows:

avocado
    └─ single
         └── 1
             ├── allegro_fingertips.csv
             ├── allegro_joints.csv
             ├── camera_compressed
             │   ├── 1634755101828721375.jpg
             │   ├── 1634755101861649368.jpg
             │   ├── 1634755101893623525.jpg
             │   ├── 1634755101927505682.jpg
             │   ├── .................
             │
             ├── labels
             ├── leap_fingertips.csv
             ├── optoforce_data.csv
             └── ur5_joints.csv

The demonstration activities are manually annotated with the following format: tstamp_begin:tstamp_end;activity. For example, the content of a labels file may look like this:

1634844105833881526:1634844131785567065;move-in
1634844131785567065:1634844140912956220;manipulate
1634844140912956220:1634844148261190867;move-out
1634844148261190867:1634844161656180720;move-in
1634844161656180720:1634844169650791017;manipulate
1634844169650791017:1634844180109491830;move-out
1634844180109491830:1634844215461427984;move-in
1634844215461427984:1634844227849974205;grasp
1634844227849974205:1634844231262897166;pick-up
1634844231262897166:1634844242894455258;move-out
1634844242894455258:1634844247330759033;drop
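As a sketch, the annotation format above can be parsed into a table with a few lines of Python. The sample string below is taken from the example labels; `parse_labels` is a hypothetical helper, not part of any dataset tooling:

```python
import io
import pandas as pd

# Two lines from the example labels file, in tstamp_begin:tstamp_end;activity format
sample = io.StringIO(
    "1634844105833881526:1634844131785567065;move-in\n"
    "1634844131785567065:1634844140912956220;manipulate\n"
)

def parse_labels(fh):
    """Parse a labels file handle into a DataFrame with begin/end timestamps (ns)."""
    rows = []
    for line in fh:
        span, activity = line.strip().split(";")
        begin, end = span.split(":")
        rows.append({"begin": int(begin), "end": int(end), "activity": activity})
    return pd.DataFrame(rows)

df = parse_labels(sample)
```

Note that consecutive activities share a boundary timestamp: the end of one segment equals the begin of the next.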

This format has been chosen to cope with the different framerate of the different modalities.

The images of the RGB modality are saved with their timestamp in the filename inside the camera_compressed folder. In the other modalities, the timestamp is provided in the time column. To synchronise the different modalities, you can use the merge_asof function from the pandas package.
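A minimal sketch of synchronising two modalities with pandas, using toy timestamps rather than real dataset files: for each row of one stream, merge_asof picks the most recent sample of the other stream at or before it.

```python
import pandas as pd

# Toy nanosecond timestamps standing in for two modalities' clocks
tactile = pd.DataFrame({"time": [100, 200, 300], "force": [0.1, 0.5, 0.2]})
joints = pd.DataFrame({"time": [90, 210, 290], "q0": [0.0, 0.3, 0.6]})

# Both frames must be sorted on the key; direction="backward" takes the
# latest joint reading at or before each tactile sample
merged = pd.merge_asof(
    tactile.sort_values("time"),
    joints.sort_values("time"),
    on="time",
    direction="backward",
)
```

With the real data, the time column of each CSV (or the JPEG filenames, parsed to integers) would take the place of the toy timestamps.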

allegro_fingertips.csv - contains the cartesian pose of the palm with respect to the robot base, and the fingertip poses in the frame of the wrist. leap_fingertips.csv - contains the cartesian pose of the palm in the Leap Motion frame, and the fingertip poses in the frame of the wrist. These are organised with the following columns:

pose_x pose_y pose_z pose_qx pose_qy pose_qz pose_qw

The first 3 columns are the cartesian position; the remaining 4 are the orientation, expressed as a quaternion.
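If a rotation matrix is needed instead of a quaternion, the pose columns can be converted with a few lines of NumPy. This is a sketch assuming unit quaternions in x, y, z, w order, matching the column layout above; the helper name is ours:

```python
import numpy as np

def quat_to_matrix(qx, qy, qz, qw):
    """Convert a unit quaternion (x, y, z, w order, as in the pose columns)
    to a 3x3 rotation matrix."""
    x, y, z, w = qx, qy, qz, qw
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)],
        [2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)],
        [2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)],
    ])

# The identity quaternion maps to the identity rotation
R = quat_to_matrix(0.0, 0.0, 0.0, 1.0)
```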

allegro_joints.csv - contains the joint states of the Allegro hand (16 joints: position, velocity, effort). ur5_joints.csv - contains the joint states of the UR5 arm (6 joints: position, velocity, effort). optoforce_data.csv - contains the tactile data obtained with the OptoForce sensors placed on the manipulator fingertips. Tactile data contains the following columns (3 components per finger × 4 fingers):

index_x index_y index_z middle_x middle_y middle_z ring_x ring_y ring_z thumb_x thumb_y thumb_z
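A small sketch of turning the per-axis tactile columns into a per-finger force magnitude, using a hypothetical one-row frame in the documented layout:

```python
import numpy as np
import pandas as pd

# Toy tactile frame using the documented column layout (3 axes per finger)
frame = pd.DataFrame({
    "index_x": [3.0], "index_y": [0.0], "index_z": [4.0],
    "middle_x": [0.0], "middle_y": [0.0], "middle_z": [1.0],
    "ring_x": [0.0], "ring_y": [0.0], "ring_z": [0.0],  # ring finger unused in this dataset
    "thumb_x": [1.0], "thumb_y": [2.0], "thumb_z": [2.0],
})

def finger_magnitudes(df):
    """Euclidean norm of each finger's 3-axis tactile reading, per row."""
    out = {}
    for finger in ("index", "middle", "ring", "thumb"):
        axes = df[[f"{finger}_x", f"{finger}_y", f"{finger}_z"]].to_numpy()
        out[finger] = np.linalg.norm(axes, axis=1)
    return pd.DataFrame(out)

mags = finger_magnitudes(frame)
```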

Important Note (READ HERE!)

Because of a technical fault, the ring finger of the robot was not used during the collection of this dataset. Thus, all the fields referring to the ring finger (i.e., ring_x ring_y ring_z) should be ignored. These columns are kept only for compatibility with future updates of the dataset, which will include the ring finger.


Demo Video

The demonstrations were collected with a teleoperation system of our own design.

blueberry_demo.mp4

Other information about the teleoperation system can be found here:

Watch the video

How to Download:

A compressed version of the dataset has been made available on Zenodo. This is a temporary solution (especially considering the download speed). The following script can be used to download the files:

curl -L "https://zenodo.org/record/6450413/files/single.zip?download=1" --output blueberry_single.zip
curl -L "https://zenodo.org/record/6450413/files/clutter.zip?download=1" --output blueberry_clutter.zip

curl -L "https://zenodo.org/record/6450435/files/single.zip?download=1" --output avocado_single.zip
curl -L "https://zenodo.org/record/6450435/files/clutter.zip?download=1" --output avocado_clutter.zip

curl -L "https://zenodo.org/record/6450439/files/single.zip?download=1" --output banana_single.zip
curl -L "https://zenodo.org/record/6450439/files/clutter.zip?download=1" --output banana_clutter.zip