Final version of the readme
Laurenhut committed Dec 9, 2017
1 parent 445d75b commit 990a508
##### Functionality of package
This package is a collection of scripts and message definitions created to have a Baxter robot prepare a cup of coffee using a single-serve Keurig, a K-cup, and a mug. The objects must be set at a specific height and the coffee maker must face Baxter, but the objects can otherwise be placed arbitrarily so long as Baxter can easily reach them.

This package makes use of two features to identify whether an object is the K-cup, the mug, or the coffee maker, and to find the transform from Baxter's base frame to that object. The first feature is a computer vision algorithm that uses the cv_bridge package to convert ROS images to the OpenCV images we use to identify the pink, blue, and yellow Post-it notes placed next to the K-cup, cup, and Keurig respectively.
The other feature is AR tag tracking, provided by the ar_track_alvar package. That package uses an external camera to identify a series of AR tags, which can be generated with the package's own generation function. When the camera detects an AR tag, the package begins publishing transforms from a selected base frame to the tag. We use these tags to find the transforms from Baxter to the items; these transforms are eventually used in the motion planning program.
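As a minimal, dependency-free sketch of the color-segmentation idea (the scripts themselves go through cv_bridge and OpenCV; the HSV bounds and image layout here are illustrative assumptions, not values from the repository):

```python
import numpy as np

# Sketch of the color-segmentation step: given an image already in HSV as a
# numpy array, build a binary mask for a pink hue band and return the blob
# centroid in pixel coordinates. The HSV bounds are assumed values.
PINK_LO = np.array([150, 80, 80])
PINK_HI = np.array([175, 255, 255])

def pink_centroid(hsv):
    """Return (col, row) centroid of pixels inside the pink band, or None."""
    mask = np.all((hsv >= PINK_LO) & (hsv <= PINK_HI), axis=-1)
    ys, xs = np.nonzero(mask)  # rows first, then columns
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Tiny synthetic frame: one pink pixel at row 2, column 3.
frame = np.zeros((5, 5, 3), dtype=np.uint8)
frame[2, 3] = (160, 200, 200)
print(pink_centroid(frame))  # -> (3.0, 2.0)
```

The same thresholding is repeated per color; the centroid gives the pixel position that is later matched against the AR tag transforms.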

Once the tags are identified, a second algorithm takes the transformation data produced from the AR tags and matches the position of each tag with the position of a detected color; it then determines which AR tag corresponds to which item. Once the items and their positions with respect to Baxter are determined, the data is sent to a motion planning program.
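A minimal sketch of that matching idea, using nearest-position pairing (the data layout and function name are assumptions; the real node works with ROS transforms rather than bare tuples):

```python
import math

# Sketch of pairing each detected color blob with the nearest AR tag, then
# labeling that tag with the color's item. Positions are simple (x, y)
# tuples here; color-to-item mapping follows the README.
COLOR_TO_ITEM = {'pink': 'kcup', 'blue': 'cup', 'yellow': 'keurig'}

def match_tags_to_items(tags, colors):
    """tags: {tag_id: (x, y)}; colors: {color: (x, y)} -> {tag_id: item}."""
    pairing = {}
    for color, cpos in colors.items():
        nearest = min(tags, key=lambda tid: math.dist(tags[tid], cpos))
        pairing[nearest] = COLOR_TO_ITEM[color]
    return pairing

tags = {0: (0.1, 0.0), 1: (0.5, 0.1), 2: (0.9, 0.0)}
colors = {'pink': (0.12, 0.02), 'blue': (0.48, 0.08), 'yellow': (0.88, 0.01)}
print(match_tags_to_items(tags, colors))  # -> {0: 'kcup', 1: 'cup', 2: 'keurig'}
```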

The motion planning program uses Baxter's built-in inverse kinematics service to move to desired locations. The objective is to approach the objects horizontally and keep them level to avoid spilling their contents, and then to place the objects at the desired positions.
To open and close the Keurig, two services are called in the middle of the motion planning program.
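The horizontal, level approach can be thought of as a sequence of waypoints that vary only along the approach direction while the gripper orientation stays fixed; a dependency-free sketch (the pose layout and waypoint spacing are assumptions, and the real node sends each pose through Baxter's IK service):

```python
# Sketch of a level, horizontal approach: interpolate x toward the target
# while y, z, and orientation stay fixed, so a held object stays level.
# Each "pose" is a (x, y, z, roll, pitch, yaw) tuple for illustration.
def horizontal_approach(start_x, target, steps=4):
    x_t, y, z = target
    dx = (x_t - start_x) / steps
    return [(start_x + dx * (i + 1), y, z, 0.0, 0.0, 0.0)
            for i in range(steps)]

waypoints = horizontal_approach(0.0, (0.8, 0.1, -0.05))
# The final waypoint lands on the target x with orientation unchanged.
print(waypoints[-1])
```
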
In total, there are three main components of this project: Color Segmentation, AR Tag Tracking, and Motion Planning.

[![baxpicture](./media/demo_screen.png)](https://vimeo.com/246536038)



##### [Scripts][src]
* The [pic_cal_sw.py][src-pic] script launches the node `pic_cal` that subscribes to Baxter's left hand camera topic `/cameras/left_hand_camera/image`. While not actually run in the launch file, this node was used to obtain clear images of the [blue][media-post2], [yellow][media-post1], and [pink][media-post3] post-it-notes that Baxter used to differentiate between the three different objects: the Keurig, cup, and K-cup.

* The [starbax_sw.py][src-starbax] script launches the node `initial` that positions Baxter's left arm so that its camera has a top-down view of the table where the Keurig, cup, and K-cup are located.

* The [artransforms.py][src-artrans] script launches the `ar_info` node, which subscribes to the `ar_pose_marker` topic generated by the `ar_track_alvar` package. The script extracts each AR tag's ID number, its x, y, z position with respect to Baxter's base frame, and its orientation. This data is then republished on the topic `ar_pose_id` as a custom message type called `ar_tag`.

* The [pose_and_item.py][src-pose] script launches the node `pose_item` that subscribes to the `pos_items` and `ar_pose_id` topics to get the relative positions of the objects and their poses in the world. The incoming pose uses the `ar_tag` message definition. The node then sorts the three poses from least to greatest by their 'y' coordinates. The pose with the smallest 'y' coordinate is paired with the leftmost object as one faces Baxter, and so forth. Finally, the node publishes a new `ar_tagstr` message containing the object name and pose to the topic `pose_and_item`.

* The [movement.py][src-movem] script subscribes to the topic `pose_and_item` to find the x and y positions of the items. The process of picking and placing the objects is broken down into several independent tasks, and this node defines each task as a separate function. The main function calls all of the movement functions in the correct order once the x and y positions of the objects are obtained. The node also calls the `open_service` and `close_service` services to open and close the Keurig lid.

* The [open_service.py][src-open] script launches the `opener_node` node, which provides the `/opener` service. This service is intended to be called with a Pose message containing the position of the coffee maker relative to the base frame of Baxter. When called, the node will use the [ExternalTools/left/PositionKinematicsNode/IKService][src-open-ik] to produce a series of joint angle solutions relative to the coffee maker, then command the left arm and gripper in a sequence intended to open the lid of the coffee maker.
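
The sorting-and-pairing step described for pose_and_item.py can be sketched as follows (plain tuples stand in for the `ar_tag` messages, and the left-to-right item order on the table is an assumed example, not taken from the repository):

```python
# Sketch of the pose_and_item pairing rule: sort the three tag poses by
# their y coordinate and pair the smallest-y pose with the leftmost item
# as one faces Baxter. Poses are (tag_id, x, y) tuples here; the node
# itself consumes `ar_tag` messages and publishes `ar_tagstr` messages.
ITEMS_LEFT_TO_RIGHT = ['keurig', 'cup', 'kcup']  # assumed table order

def pair_poses_with_items(poses):
    """poses: list of (tag_id, x, y) -> list of (item, tag_id, x, y)."""
    ordered = sorted(poses, key=lambda p: p[2])  # smallest y first
    return [(item,) + pose for item, pose in zip(ITEMS_LEFT_TO_RIGHT, ordered)]

poses = [(7, 0.55, 0.30), (3, 0.60, -0.10), (5, 0.58, 0.10)]
print(pair_poses_with_items(poses))
# -> [('keurig', 3, 0.6, -0.1), ('cup', 5, 0.58, 0.1), ('kcup', 7, 0.55, 0.3)]
```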


###### Topics

* `/pos_items`: Publishes the names of the objects currently seen

* `/ar_pose_id`: Publishes the ID number, position, and orientation of each AR tag

* `/pose_and_item`: Publishes the position of each AR tag along with the item that position corresponds to

##### Messages
* [ar_tag.msg][msg-tag]: Contains the tag's ID number as a `uint32` and its pose as a `geometry_msgs/Pose`
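
Based on that description, the message definition is presumably along these lines (a sketch; the field names are assumptions, not copied from the repository):

```
# ar_tag.msg (sketch; field names assumed)
uint32 id
geometry_msgs/Pose pose
```
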

##### [Launch Files][launch]

* [baxter_sw.launch][launch-launch1]: This launch file starts the ar_track_alvar AR tag tracking and sets the camera used to identify tags to the left hand camera. It then starts the artransforms.py, starbax_sw.py, table_cam_sw.py, movement.py, pose_and_item.py, close_service.py, and open_service.py Python scripts in that order. There have been some instances of the [pose-and-item][src-pose] node crashing after startup. The suggested workaround is either to restart that node individually or to run the nodes manually.
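
The rough shape of such a launch file is shown below (a sketch only, not the actual file: the package name `final_project` and all omitted arguments are assumptions):

```xml
<launch>
  <!-- Sketch only: package name and argument values are assumptions. -->
  <node pkg="ar_track_alvar" type="individualMarkers" name="ar_track_alvar"
        args="..." output="screen" />
  <node pkg="final_project" type="artransforms.py" name="ar_info" output="screen" />
  <node pkg="final_project" type="starbax_sw.py" name="initial" output="screen" />
  <node pkg="final_project" type="movement.py" name="movement" output="screen" />
</launch>
```
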



[src]:https://github.com/Laurenhut/ME495-final-project/tree/master/src
