- Given an object, find a grasp that maximizes a binary success metric, subject to uncertainty in the object, environment, and robot state.
- Leverage a large dataset of (object, grasp, grasp quality) tuples to reduce the number of grasp evaluations required to find the optimal grasp.
- Learning to grasp from purely synthetic data.
- They only consider grasping singulated objects in this paper; it is unclear how the approach would work in clutter.
- The main component of the approach is a Grasp Quality CNN (GQ-CNN), which predicts a binary success label for a depth image–grasp configuration pair.
- To select a grasp, they use the cross-entropy method (CEM).
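The CEM loop above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `grasp_quality` is a toy stand-in for the GQ-CNN, and the 4-D grasp parameterization (planar position, depth, angle) is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def grasp_quality(grasps):
    # Stand-in for the GQ-CNN scorer: a toy quadratic bowl whose optimum
    # is at a fixed "best grasp" (x, y, depth, angle).
    target = np.array([0.1, -0.05, 0.6, 0.3])
    return -np.sum((grasps - target) ** 2, axis=1)

def cem_grasp_search(n_iters=10, n_samples=64, n_elite=8):
    # Fit a Gaussian over grasp parameters; repeatedly sample, score,
    # and refit the Gaussian on the top-scoring "elite" samples.
    mean, std = np.zeros(4), np.ones(4)
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(n_samples, 4))
        scores = grasp_quality(samples)
        elite = samples[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

best = cem_grasp_search()
```

With a learned scorer in place of the toy bowl, the same loop concentrates grasp evaluations near high-quality regions instead of scoring a dense grid of candidates.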
- Suction grasping is widely used for pick-and-place tasks in industry and warehouse order fulfillment. Suction has an advantage over parallel-jaw or multi-finger grasping due to its ability to reach into narrow spaces and pick up objects with a single point of contact.
- Applies the approach of Dex-Net 2.0 to suction grasping.
- Proposes a new model that evaluates the robustness of suction grasps by analyzing seal formation and wrench resistance.
- Achieves success rates of 98%, 82%, and 58% on basic (prismatic or cylindrical), typical (more complex geometry), and adversarial (few available suction-grasp points) objects, respectively.
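To make the wrench-resistance idea concrete, here is a deliberately simplified check, not the paper's model: can a single suction contact resist the object's gravity wrench? All thresholds (pull-off, friction cone, rim torque) and parameter values are illustrative assumptions.

```python
# Illustrative wrench-resistance check for a suction contact (not the
# paper's exact compliant-ring model). Units: kg, m, N.
def resists_gravity(mass, com_offset, vacuum_force, mu=0.5,
                    cup_radius=0.01, g=9.81):
    f_grav = mass * g
    # Pull-off: the vacuum must support the full weight on a vertical lift.
    if f_grav > vacuum_force:
        return False
    # Friction cone: worst case, gravity acts tangentially to the seal.
    if mu * vacuum_force < f_grav:
        return False
    # Rim torque: a center-of-mass offset must not peel the cup off.
    if f_grav * com_offset > vacuum_force * cup_radius:
        return False
    return True
```

A grasp that fails any of these checks cannot resist the gravity wrench, which is one way a candidate suction point gets rejected before execution.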
Supersizing_Self_supervision_Learning_to_Grasp_from_50K_Tries_and_700_Robot_Hours
- Autonomous large-scale dataset collection for robotic grasping through trial and error.
- Learning hand-eye coordination for robotic grasping from monocular images, trained from scratch with minimal prior knowledge and manual engineering.
- End-to-end training directly from pixel input to task-space gripper motion output, with minimal human supervision.
- Precise camera calibration is not used.
- The method learns to servo the robotic gripper to positions where grasps are likely to succeed.
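The servoing loop can be sketched as below. This is a toy sketch under loud assumptions: `predict_success` stands in for the learned CNN critic g(I, v) (here the "image" is just gripper and object positions), and the CEM inner loop over candidate end-effector displacements is one common way such a critic is optimized at each step.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_success(image, motion):
    # Stand-in for the learned critic g(I, v): higher when the commanded
    # motion brings the gripper closer to the object.
    gripper, target = image  # toy "image": gripper pose + object pose
    return -np.linalg.norm(gripper + motion - target)

def servo_step(image, n_samples=64, n_elite=6, n_iters=3):
    # CEM over candidate 3-D displacements, scored by the critic.
    mean, std = np.zeros(3), 0.05 * np.ones(3)
    for _ in range(n_iters):
        v = rng.normal(mean, std, size=(n_samples, 3))
        scores = np.array([predict_success(image, vi) for vi in v])
        elite = v[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

gripper = np.array([0.3, -0.2, 0.4])
target = np.array([0.35, -0.1, 0.1])
for _ in range(20):
    gripper = gripper + servo_step((gripper, target))
```

Because the critic is re-queried every step from fresh observations, the loop corrects for calibration error and disturbances rather than relying on an open-loop plan.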
Learning_Synergies_between_Pushing_and_Grasping_with_Self_supervised_Deep_Reinforcement_Learning
- Learns how to combine individually parameterized grasping and pushing policies.
- The two policies map directly from visual observations to actions.
- Both are trained jointly to maximize the number of successful grasps.
- Proposes an interesting discrete parameterization of the action space that enables efficient learning, with translational and rotational invariance and parallel evaluation of actions.
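The action parameterization can be sketched as: one Q-value per heightmap pixel per discrete gripper rotation, with the action chosen by a single argmax. This is a minimal sketch under stated assumptions: `q_maps` is a random stand-in for the fully convolutional critics, and the grid sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

N_ROT, H, W = 16, 32, 32  # 16 discrete gripper rotations over a heightmap

def q_maps(heightmap):
    # Stand-in for the fully convolutional critics: one (H, W) map of
    # Q-values per rotation; here random scores masked to occupied cells.
    return rng.random((N_ROT, H, W)) * (heightmap > 0)

heightmap = np.zeros((H, W))
heightmap[10:20, 12:22] = 0.05  # a toy object sitting on the table

q = q_maps(heightmap)
rot, row, col = np.unravel_index(np.argmax(q), q.shape)
angle = rot * (360.0 / N_ROT)
# Chosen action: act at pixel (row, col) with gripper rotation `angle`.
```

Sharing one convolutional pass across all pixels (and reusing it for each rotated copy of the input) is what gives the translational/rotational invariance and lets every discrete action be scored in parallel.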