
Add quick tutorial using fetch #1294

Draft · wants to merge 2 commits into master
Conversation

HiroIshida

I added a quick tutorial using fetch. I tried to design this tutorial to be minimal but still sufficient to do something with a real robot. Currently the tutorial covers (day1) basic usage with geometric simulation purely inside Euslisp, and (day2) the workflow to control the real robot. I am planning to explain how to make a simple tabletop demo in day3. My DC2 submission is due on 5/8, so I will work on the day3 part in the week after the submission. For now, I am keeping this as a draft.
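For reference, the day1 part centers on a purely geometric session like the one below. This is only a minimal sketch: it assumes the fetcheus package conventions (`fetch.l` model file, `:rarm` for the single arm), and the joint angle and IK target values are illustrative.

```lisp
;; day1 sketch: geometric simulation inside Euslisp only (no ROS required)
(load "package://fetcheus/fetch.l")     ;; fetch robot model from the fetcheus package
(setq *fetch* (fetch))                  ;; instantiate the robot model
(objects (list *fetch*))                ;; open the IRT viewer and display it

(send *fetch* :torso :waist-z :joint-angle 200)  ;; raise the torso lift (mm)
;; solve IK so the end effector reaches an illustrative target pose
(send *fetch* :inverse-kinematics
      (make-coords :pos #f(800 0 800))  ;; target position in mm, robot frame
      :move-target (send *fetch* :rarm :end-coords))
```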

You may wonder why I made this tutorial from scratch rather than taking advantage of pr2eus_tutorials and jsk_fetch_gazebo_demo, so I will explain the reason here.

Actually, making a tutorial for pr2eus_tutorials was my first attempt. But simulating the PR2 is too slow (about 2x slower than fetch), and even when I deleted the camera and depth sensor parts from the urdf file and launched it, it did not make a significant difference (it was a little faster, of course). The bottleneck seems to be in the dynamics simulation rather than in image (depth) rendering, so I imagine that even using a GPU would not change the situation much. Especially for students who cannot use a real robot and have to do everything in the simulator, this slow simulation would be a nightmare. So I gave up on PR2.

Then my second attempt was making a step-by-step tutorial for jsk_fetch_gazebo_demo. It is a great demo, but it is not minimal. In that demo, fetch starts from a location away from the desk, and because it uses MCL (Monte Carlo localization) to navigate, it takes a really long time to reach the desk. (This is why I made this comment.)

I think manipulating an object on a table is a good starting point, so I am making a tabletop demo. Unlike fetch_gazebo_demo, the robot is spawned just in front of the table, ready to start grasping an object; the sketch below shows the intended flow.
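Roughly, the day3 demo should then reduce to something like this. Again a hedged sketch, assuming the fetcheus robot-interface API (`fetch-init`, `*ri*`, `:start-grasp`/`:stop-grasp`); the recognition step is elided.

```lisp
;; day3 sketch: robot already in front of the table, go straight to grasping
(load "package://fetcheus/fetch-interface.l")
(fetch-init)                               ;; sets up *fetch* (model) and *ri* (robot interface)

(send *fetch* :reset-pose)                 ;; a pose from which the arm can reach the table
(send *ri* :angle-vector (send *fetch* :angle-vector) 3000)
(send *ri* :wait-interpolation)
(send *ri* :stop-grasp)                    ;; open the gripper
;; ... recognize the object on the table, solve IK to its pose, move the arm, then:
(send *ri* :start-grasp)                   ;; close the gripper on the object
```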

@HiroIshida
Author

While making the tutorial for day3, I have been struggling to get a reasonable output from the plane detection of OrganizedMultiPlaneSegmentation here. The figure below is the actual output that I got:
[Figure: strange_plane_segmentation]
The point cloud shown in rviz is passed to OrganizedMultiPlaneSegmentation. The blue convex region is the detected plane. It is really strange that the region closer to the robot is ignored.

After some trials (like changing fetch's position), I found that the region where the point cloud is dense is somehow ignored, but I don't know why this occurs. If someone could help me solve this issue, it would be very helpful.
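For anyone trying to reproduce this, here is roughly how I inspect the segmentation output from roseus. This is a sketch under assumptions: the topic name `/organized_multi_plane_segmentation/output_polygon` is a placeholder for whatever your launch file remaps the nodelet's PolygonArray output to.

```lisp
;; sketch: count the planes OrganizedMultiPlaneSegmentation reports
(ros::load-ros-manifest "jsk_recognition_msgs")
(ros::roseus "plane_checker")

(defun polygon-cb (msg)
  ;; each element of :polygons is one detected planar region
  (ros::ros-info "detected ~A planes" (length (send msg :polygons))))

(ros::subscribe "/organized_multi_plane_segmentation/output_polygon"
                jsk_recognition_msgs::PolygonArray #'polygon-cb)
(ros::spin)
```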
