
This repo contains only the code for container imagination and is suitable for users who want to try the imagination part without running robot experiments. Dependencies have been updated, and the code works with Python 3.


Container Imagination

Can I Pour into it? Robot Imagining Open Containability Affordance of Previously Unseen Objects via Physical Simulations (RA-L with ICRA 2021)

This repo is derived from the original repo by Hongtao Wu and Gregory Chirikjian, but contains the imagination part only (no real robot experiments).

Container Imagination is a method that enables a robot to imagine the open containability affordance of an unseen object. With this understanding of containability, the robot can identify whether the object is able to contain materials (e.g., M&M candies!) and pour a cup of the material into the object if it is identified as an open container.


If you have any questions or find any bugs, please let me know: [email protected]

The following image summarizes our method. Click the image to watch the video:

introductory video

(This repo only contains the code for steps 2 & 3 - Open Containability Imagination and Pouring Imagination)

Abstract

Open containers, i.e., containers without covers, are an important and ubiquitous class of objects in human life. We propose a novel method for robots to "imagine" the open containability affordance of a previously unseen object via physical simulations. We implement our imagination method on a UR5 manipulator. The robot autonomously scans the object with an RGB-D camera. The scanned 3D model is used for open containability imagination which quantifies the open containability affordance by physically simulating dropping particles onto the object and counting how many particles are retained in it. This quantification is used for open-container vs. non-open-container binary classification. If the object is classified as an open container, the robot further imagines pouring into the object, again using physical simulations, to obtain the pouring position and orientation for real robot autonomous pouring. We evaluate our method on open container classification and autonomous pouring of granular material on a dataset containing 130 previously unseen objects with 57 object categories. Although our proposed method uses only 11 objects for simulation calibration (training), its open container classification aligns well with human judgements. In addition, our method endows the robot with the capability to autonomously pour into the 55 containers in the dataset with a very high success rate.
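To make the particle-drop quantification concrete, here is a minimal PyBullet sketch of the idea. It is an illustration, not the repo's implementation: the URDF file name, the sphere grid, and the bounding-box containment test are all assumptions.

import pybullet as p

p.connect(p.DIRECT)                       # headless physics server
p.setGravity(0, 0, -9.8)
obj = p.loadURDF("object_vhacd.urdf", useFixedBase=True)  # placeholder name

# Drop a 10 x 10 grid of small spheres from above the object.
sphere_col = p.createCollisionShape(p.GEOM_SPHERE, radius=0.005)
spheres = []
for i in range(10):
    for j in range(10):
        spheres.append(p.createMultiBody(
            baseMass=0.001,
            baseCollisionShapeIndex=sphere_col,
            basePosition=(-0.05 + 0.01 * i, -0.05 + 0.01 * j, 0.3)))

for _ in range(500):                      # let the spheres settle
    p.stepSimulation()

# Crude containment test: count the spheres that came to rest inside the
# object's axis-aligned bounding box.
aabb_min, aabb_max = p.getAABB(obj)
retained = 0
for body in spheres:
    pos, _ = p.getBasePositionAndOrientation(body)
    if all(aabb_min[k] <= pos[k] <= aabb_max[k] for k in range(3)):
        retained += 1

print("sphere_in_percentage:", retained / len(spheres))
p.disconnect()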

Citation

If you find this code and/or the data useful in your work, please consider citing

@article{wu2020can,
  title={Can I Pour Into It? Robot Imagining Open Containability Affordance of Previously Unseen Objects via Physical Simulations},
  author={Wu, Hongtao and Chirikjian, Gregory S},
  journal={IEEE Robotics and Automation Letters},
  volume={6},
  number={1},
  pages={271--278},
  year={2020},
  publisher={IEEE}
}

Dependencies

The original project was tested on Ubuntu 16.04 with Python 2.7. In this repo, the dependencies have been updated and the code works with Python 3 on later Ubuntu releases.

To install the dependencies, please refer to the instructions here.

Usage

Imagination

(Ash tray example: containability imagination and pouring imagination)

The imagination contains two parts: open containability imagination and pouring imagination. The main script for imagination is main_imagination.py.

python main_imagination.py <root_dir> <data_root_dir> <data_name> <mesh_name> [-p] [-v] [-m]
  • root_dir: root directory of the code
  • data_root_dir: root directory of the data
  • data_name: name of the data in data_root_dir. Note that each data folder corresponds to one capture of a scene; there may be more than one object in the scene.
  • mesh_name: name of the mesh / object for imagination
  • [-p]: if set to True, pouring imagination will be activated
  • [-v]: if set to True, visualization of the imagination will be activated
  • [-m]: if a directory is given, the video of the imagination will be saved there. Note that this option needs to be used together with [-v]. The video of the containability imagination is saved as object_name_contain.mp4; the video of the pouring imagination is saved as object_name_pour.mp4.

An example argument is given as follows:

python main_imagination.py <root_dir> <root_dir>/data Amazon_Accessory_Tray_pour_pca Amazon_Accessory_Tray_pour_pca_mesh_0 -p True -v True -m <root_dir>/data/Amazon_Accessory_Tray_pour_pca
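If you want to run the imagination over many captures, a small driver script can loop over the data folders. This is a convenience sketch, not part of the repo; the root_dir path, the *_mesh_0 suffix, and the folder layout are assumptions based on the example above.

import os
import subprocess

root_dir = "/path/to/container-imagination-only"  # illustrative path
data_root_dir = os.path.join(root_dir, "data")

for data_name in sorted(os.listdir(data_root_dir)):
    if not os.path.isdir(os.path.join(data_root_dir, data_name)):
        continue  # skip stray files
    mesh_name = data_name + "_mesh_0"  # assumed naming, as in the example
    subprocess.run(
        ["python", "main_imagination.py",
         root_dir, data_root_dir, data_name, mesh_name,
         "-p", "True", "-v", "True",
         "-m", os.path.join(data_root_dir, data_name)],
        check=True)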

The directory of the data should be structured as follows:

├── data_root_dir
│   ├── data_name_0
│   │   ├── object_name_0.obj
│   │   ├── object_name_0_vhacd.obj
│   │   ├── object_name_0.urdf
...

V-HACD convex decomposition


V-HACD is used to decompose the mesh reconstructed from TSDF fusion into convex parts for physical simulation in PyBullet. Please follow the setup instructions to install V-HACD.
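As an alternative to building V-HACD from source, recent pybullet releases also bundle a V-HACD binding. A minimal sketch, with illustrative file names:

import pybullet as p

p.connect(p.DIRECT)
# Decompose the TSDF-fusion mesh into convex parts for simulation.
p.vhacd("object_name_0.obj",        # input mesh
        "object_name_0_vhacd.obj",  # output convex decomposition
        "vhacd_log.txt")            # log file (required argument)
p.disconnect()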

Containability Imagination Benchmark

First, download the dataset. To run the containability imagination benchmark, generate the imagination results:

python containability_imagination_benchmark.py <root_dir> <data_root_dir> <result_dir>
  • root_dir: root directory of the code
  • data_root_dir: root directory of the data
  • result_dir: directory to save the results

The imagination result (txt) for each object will be saved in result_dir. The format of the result is:
container <sphere_in_percentage> 0 1 2 3

The "0 1 2 3" does not have any meaning in specific. Then, run the benchmark code to get the classification accuracy and AUC

python benchmark_map.py <result_dir> <gt_dir>
  • result_dir: directory where the imagination results are saved
  • gt_dir: directory of the ground truth

We use the <sphere_in_percentage> from the containability imagination as the confidence score to calculate the classification accuracy and AUC. If <sphere_in_percentage> > 0, we classify the object as an open container. The ground truths are saved in a similar format:

container/noncontainer 0 1 2 3

They are obtained from human annotation. The "0 1 2 3" does not carry any specific meaning.
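For reference, the accuracy and AUC computation could look like the sketch below. This is not the repo's benchmark_map.py; it assumes one result file per object with matching file names in result_dir and gt_dir, and it uses scikit-learn for the AUC.

import os
from sklearn.metrics import roc_auc_score

def load_field(directory, index):
    """Read the whitespace-separated field at `index` from each file."""
    out = {}
    for name in sorted(os.listdir(directory)):
        with open(os.path.join(directory, name)) as f:
            out[name] = f.read().split()[index]
    return out

results = load_field("result_dir", 1)  # "container <sphere_in_percentage> 0 1 2 3"
gts = load_field("gt_dir", 0)          # "container/noncontainer 0 1 2 3"

y_true = [1 if gts[k] == "container" else 0 for k in gts]
y_score = [float(results[k]) for k in gts]
y_pred = [1 if s > 0 else 0 for s in y_score]  # open container if > 0

accuracy = sum(int(t == c) for t, c in zip(y_true, y_pred)) / len(y_true)
print("accuracy:", accuracy)
print("AUC:", roc_auc_score(y_true, y_score))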

Related Work

These are related papers from our group's ongoing work on robot imagination. Please take a look!

  • Is That a Chair? Imagining Affordances Using Simulations of an Articulated Human Body [arxiv] [project page]

For more information about our group, please visit our website at: https://chirikjianlab.github.io/

TODO

  • python 3 version (done in this repo)
