1 Integrated Robotics Project

1.1 Scenario:

After a natural or man-made disaster, the first task for the responders is to find and rescue survivors in the collapsed buildings. Even though technologies such as microphone arrays or search cameras help the responders, it is still a dangerous task: the stricken building might collapse completely and bury the rescue team in case of an aftershock. Search and Rescue robots can help to reduce this risk during these Urban Search and Rescue (USAR) missions. The idea is to send these robots into the rubble pile, and let them search for victims using their sensors such as cameras or heat sensors and report to the rescuers once they have found a person.

1.2 Your Mission:

1. Autonomously move the mobile robot in the environment with no initial map, avoiding obstacles, able to dead reckon with or without visual cues (hint: consider calibration methods).

2. Build and store a map of the environment and navigate within it (e.g. SLAM).

3. Locate up to 6 individuals in the environment and as many signs as possible, which would allow the rescue team to plan a safe route to enter and recover the missing workers, and report their positions in the map.

To complete this project, I used the TurtleBot3 and the packages provided with it (turtlebot3_navigation, turtlebot3_slam, turtlebot3_gazebo).

1.3 Technical Description:

  1. Obstacle Avoidance & Path Planning:

To perform the obstacle avoidance and the path planning, I took inspiration from the turtlebot3_slam package, using the frontier exploration SLAM method for the configuration of the move_base node and amcl. This setup can be launched with the command:

$ roslaunch integrated_robotics_project start_navigation_turtlebot3_gmapping.launch

In this launch file, 3 packages are used:

  • Gmapping with the turtlebot3 configuration
  • AMCL for the localization
  • move_base with the planners and cost map parameters

The move_base node is launched with the DWA (Dynamic Window Approach) as local planner. This method discretizes the robot's state and possible velocities, forward-simulates the candidate commands, and evaluates the resulting trajectories against metrics such as distance to obstacles, distance to the global plan, proximity to the goal and speed; the best trajectory is then selected. NavfnROS is the global planner: it is an implementation of Dijkstra's algorithm and one of the most commonly used global planners in navigation applications. The move_base node has various other parameters to tune the costmaps; I kept the default configuration implemented in the turtlebot3_slam package to perform the obstacle avoidance.
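To illustrate this scoring idea, here is a minimal, self-contained sketch of a DWA-style velocity search. It is not the dwa_local_planner code: the velocity limits, cost weights and cost terms below are assumptions chosen for readability.

```python
# Minimal DWA-style sketch: sample (v, w) commands, forward-simulate them,
# and keep the command whose simulated trajectory has the lowest cost.
# This is an illustration, not the dwa_local_planner implementation.
import numpy as np

def simulate(x, y, yaw, v, w, dt=0.1, steps=20):
    """Forward-simulate a constant (v, w) command; return the visited points."""
    points = []
    for _ in range(steps):
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += w * dt
        points.append((x, y))
    return points

def trajectory_cost(points, v, goal, obstacles,
                    w_goal=1.0, w_obstacle=0.3, w_speed=0.2):
    """Lower is better: end close to the goal, stay clear of obstacles, go fast."""
    gx, gy = goal
    ex, ey = points[-1]
    goal_cost = np.hypot(gx - ex, gy - ey)
    clearance = min(np.hypot(ox - px, oy - py)
                    for px, py in points for ox, oy in obstacles)
    obstacle_cost = 1.0 / (clearance + 1e-3)
    return w_goal * goal_cost + w_obstacle * obstacle_cost - w_speed * v

def best_command(x, y, yaw, goal, obstacles):
    """Search the velocity window (TurtleBot3-like limits) for the best command."""
    best, best_cost = (0.0, 0.0), float("inf")
    for v in np.linspace(0.0, 0.26, 6):          # linear velocities [m/s]
        for w in np.linspace(-1.8, 1.8, 13):     # angular velocities [rad/s]
            points = simulate(x, y, yaw, v, w)
            cost = trajectory_cost(points, v, goal, obstacles)
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best

# Example: robot at the origin facing +x, goal ahead, one obstacle on the left.
print(best_command(0.0, 0.0, 0.0, goal=(2.0, 0.0), obstacles=[(1.0, 0.5)]))
```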

To explore the map, I used the explore_lite package (http://wiki.ros.org/explore_lite), which relies on move_base to control the navigation. It subscribes to nav_msgs/OccupancyGrid messages to get the map information and sends new goals to the move_base node to continue the exploration. The figure below presents the architecture used with the explore_lite package.
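Alongside that architecture figure, the interaction can be sketched in code. This is not the explore_lite source: the frontier selection below is intentionally naive (first unknown cell next to a free cell) and the node name is made up; it only illustrates subscribing to the occupancy grid and sending goals through the move_base action interface.

```python
#!/usr/bin/env python
# Naive exploration sketch: read the occupancy grid, pick an unexplored cell
# next to known free space, and send it to move_base as a goal.
import rospy
import actionlib
import numpy as np
from nav_msgs.msg import OccupancyGrid
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

latest_map = None

def map_cb(msg):
    global latest_map
    latest_map = msg

def pick_frontier(grid_msg):
    """Return the (x, y) of one unknown cell adjacent to a free cell, or None."""
    info = grid_msg.info
    grid = np.array(grid_msg.data).reshape(info.height, info.width)
    unknown = grid == -1
    free = (grid >= 0) & (grid < 50)
    # A frontier cell is unknown and has a free neighbour (checked on the left here).
    frontier = unknown[:, 1:] & free[:, :-1]
    rows, cols = np.nonzero(frontier)
    if rows.size == 0:
        return None
    r, c = rows[0], cols[0] + 1
    x = info.origin.position.x + c * info.resolution
    y = info.origin.position.y + r * info.resolution
    return x, y

if __name__ == "__main__":
    rospy.init_node("naive_explorer")
    rospy.Subscriber("/map", OccupancyGrid, map_cb)
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    rate = rospy.Rate(0.2)          # pick a new goal every 5 seconds
    while not rospy.is_shutdown():
        if latest_map is not None:
            target = pick_frontier(latest_map)
            if target is not None:
                goal = MoveBaseGoal()
                goal.target_pose.header.frame_id = "map"
                goal.target_pose.header.stamp = rospy.Time.now()
                goal.target_pose.pose.position.x = target[0]
                goal.target_pose.pose.position.y = target[1]
                goal.target_pose.pose.orientation.w = 1.0
                client.send_goal(goal)
        rate.sleep()
```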

  2. Self-Localization and Mapping:

The localization of the robot is done with the AMCL package (Adaptive Monte Carlo Localization, http://wiki.ros.org/amcl), which uses a particle filter to estimate the pose of the robot. This node takes as input a 2D map, laser scan data and transform frame messages, and provides an estimate of the pose. For the experiments, I used the default parameters defined in the turtlebot3_navigation package to configure the amcl node. The map is built by gmapping and made available through the map_server package, and the communication between the different packages is managed by the move_base node. The map is published on the topic /map and stored in a nav_msgs/OccupancyGrid message, which is reused in the other experiments.
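For reference, the pose estimate computed by amcl can be read directly from the /amcl_pose topic, which carries a geometry_msgs/PoseWithCovarianceStamped message; a minimal listener could look like the following sketch (the node name is illustrative).

```python
#!/usr/bin/env python
# Small sketch: read the pose estimate published by amcl on /amcl_pose
# and log its x, y position and yaw angle.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from tf.transformations import euler_from_quaternion

def pose_cb(msg):
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])[2]
    rospy.loginfo("amcl estimate: x=%.2f y=%.2f yaw=%.2f", p.x, p.y, yaw)

if __name__ == "__main__":
    rospy.init_node("amcl_pose_listener")
    rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, pose_cb)
    rospy.spin()
```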

  3. Object Detection:

The objective of this part is to detect the signs placed in the environment. To perform the sign detection, I used the find_object_2d package (http://wiki.ros.org/find_object_2d). This package uses feature detection algorithms to detect a pattern in an image. Different algorithms are available, but SURF has been used here to perform the detection.

This package publishes a std_msgs/Float32MultiArray message on the topic /objects when an object is detected. The msg.data vector is formatted as [objectId1, objectWidth, objectHeight, h11, h12, h13, h21, h22, h23, h31, h32, h33, objectId2, ...], where objectIdN is the id of the N-th detected object and the hij values are the entries of the associated homography matrix. The homography (https://docs.opencv.org/master/d9/dab/tutorial_homography.html) is used in computer vision to take the perspective of the image into account.
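A minimal listener that unpacks this message could look like the sketch below. The 12-value stride follows the format described above; the ordering of the homography values follows the QTransform convention used in print_objects_detected_node.cpp and is worth double-checking against that file before reusing the matrix.

```python
#!/usr/bin/env python
# Sketch: unpack the /objects message from find_object_2d.
# msg.data = [id, width, height, h11..h33, id, width, height, h11..h33, ...]
import rospy
import numpy as np
from std_msgs.msg import Float32MultiArray

def objects_cb(msg):
    data = msg.data
    for i in range(0, len(data), 12):
        object_id = int(data[i])
        ref_width, ref_height = data[i + 1], data[i + 2]  # size in the reference image
        # The 9 homography values follow Qt's QTransform ordering (as used in
        # print_objects_detected_node.cpp); check that file before reusing H.
        h = np.array(data[i + 3:i + 12]).reshape(3, 3)
        rospy.loginfo("object %d detected (reference size %.0fx%.0f px)",
                      object_id, ref_width, ref_height)

if __name__ == "__main__":
    rospy.init_node("objects_listener")
    rospy.Subscriber("/objects", Float32MultiArray, objects_cb)
    rospy.spin()
```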

By using the PySide2.QtGui and PySide2.QtCore libraries, we can use the homography matrix to find where the sign has been recognized in the image. I took inspiration from the print_objects_detected_node.cpp code of the find_object_2d package to recover the positions of the corners of the detected sign. Once the corners of the sign are known, we want to extract two pieces of information: the sign-to-robot distance and the sign-to-robot angle. To find the angle, I used the datasheet of the camera mounted on the TurtleBot3 (https://emanual.robotis.com/docs/en/platform/turtlebot3/appendix_raspi_cam/). The relevant data are the horizontal field of view (62.2 degrees) and the resolution of the camera, which is 480x640. To find the angle between the robot and the center of the detected object, I assumed a linear relation between that angle and the horizontal position (in pixels) of the object's center in the image. The two extreme points are the minimum angle (-31.1 degrees at 0 px) and the maximum angle (+31.1 degrees at 640 px). Using the properties of a linear relation, we can write:

theta = cameraAngleMin + xPixelPosition × steeringCoefficient

The only missing parameter is the steering coefficient; we can compute it by applying the following equation to a known point:

steeringCoefficient = (theta - cameraAngleMin) / xPixelPosition

Evaluating this at theta = 0 (the center of the image), where the pixel position is 320 px (= 640 / 2):

steeringCoefficient = (0 - (-31.1)) / 320 ≈ 0.097

The steering coefficient is therefore about 0.097 degrees per pixel; it is constant for every pixel/angle pair.

With this steering coefficient, the angle of any detection can be computed by applying the equation above:

theta = cameraAngleMin + xPixelPosition × steeringCoefficient
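This mapping translates into a small helper. The 62.2 degree field of view and the 640 px image width are the datasheet values quoted above; the constant and function names are illustrative.

```python
# Linear pixel-to-angle mapping described above, for the Raspberry Pi camera
# of the TurtleBot3 Waffle Pi (62.2 degree horizontal FOV, 640 px wide image).
CAMERA_ANGLE_MIN = -31.1        # degrees at pixel 0
CAMERA_ANGLE_MAX = 31.1         # degrees at pixel 640
IMAGE_WIDTH_PX = 640

# steeringCoefficient = (theta - cameraAngleMin) / xPixelPosition, evaluated at
# the image centre (theta = 0 at x = 320 px): (0 - (-31.1)) / 320 ≈ 0.097 deg/px
STEERING_COEFFICIENT = (0.0 - CAMERA_ANGLE_MIN) / (IMAGE_WIDTH_PX / 2.0)

def pixel_to_angle(x_pixel):
    """Angle (degrees) between the camera axis and the object centred at x_pixel."""
    return CAMERA_ANGLE_MIN + x_pixel * STEERING_COEFFICIENT

# Example: an object centred at x = 480 px is at about +15.6 degrees.
```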

To find the distance, I used the focal length (3.04 mm) and sensor width (3.68 mm) values from the TurtleBot3 camera datasheet. The focal length in pixels is obtained with this expression:

focalLength_px = (focalLength_mm / sensorWidth_mm) × imageWidth_px

To determine the distance to the object, we need a few extra parameters: the real size of the object and its size in pixels when detected. Looking into the world file, the cube is 1 meter wide and 1 meter high. The pixel size can be determined by using the corner positions (already found to determine the angle) and drawing a rectangle around them with an OpenCV method. The distance (in meters) is then given by:

distance_m = (focalLength_px × objectRealSize_m) / objectHeight_px
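Putting the two expressions together gives the following helper. The 3.04 mm and 3.68 mm values are those quoted above; the constant and function names are illustrative.

```python
# Pinhole-camera distance estimate described above, using the Raspberry Pi
# camera values quoted in the text (3.04 mm focal length, 3.68 mm sensor width).
FOCAL_LENGTH_MM = 3.04
SENSOR_WIDTH_MM = 3.68
IMAGE_WIDTH_PX = 640

# focalLength_px = (focalLength_mm / sensorWidth_mm) * imageWidth_px ≈ 528.7 px
FOCAL_LENGTH_PX = FOCAL_LENGTH_MM / SENSOR_WIDTH_MM * IMAGE_WIDTH_PX

def object_distance_m(object_real_size_m, object_height_px):
    """distance_m = focalLength_px * objectRealSize_m / objectHeight_px"""
    return FOCAL_LENGTH_PX * object_real_size_m / object_height_px

# Example: a 1 m cube that appears 100 px tall is roughly 5.3 m away.
```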

Once the distance and the angle have been found, we store these values in global variables and publish the image with the detected object, the distance and the angle on the topic /image_rectangle. The SURF algorithm in find_object_2d converts the images to grayscale before performing the sign detection, so it cannot tell the “Alive worker” sign from the “Dead worker” sign, which differ only in color. To differentiate these two signs, I performed a color filtering step to check whether green is present on the detected sign. To make the color detection more robust, I converted the image to HSV encoding, which characterizes colors by Hue, Saturation and Value. A representation of these parameters can be seen in the following figure:

After the conversion, I used the OpenCV method inRange to select the range of green colors to detect, from (40, 40, 40) to (70, 255, 255). This method returns a mask where the matching colors are white and the colors outside the range are black. I then performed a contour detection to measure the size of the white areas: if the biggest white area is larger than 10 px (a minimal size set to avoid false detections) in both height and width, we consider that the person is alive; otherwise the person is dead.
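A sketch of this check is shown below, assuming the image is received as a BGR OpenCV image. The HSV range and the 10 px threshold are the values given above; the function name is illustrative, and the indexing trick on findContours keeps the sketch compatible with both OpenCV 3 and 4.

```python
# Sketch of the green-colour check used to tell the "Alive worker" sign from
# the "Dead worker" sign.
import cv2
import numpy as np

GREEN_LOW = np.array([40, 40, 40])
GREEN_HIGH = np.array([70, 255, 255])
MIN_SIZE_PX = 10   # minimal blob size, to avoid false detections

def sign_is_alive(bgr_image):
    """Return True if a green area larger than 10x10 px is found in the image."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LOW, GREEN_HIGH)   # matching colours -> white
    # [-2] keeps the contour list on both OpenCV 3 and OpenCV 4.
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    for contour in contours:
        _, _, w, h = cv2.boundingRect(contour)
        if w > MIN_SIZE_PX and h > MIN_SIZE_PX:
            return True   # enough green: the person is alive
    return False          # no significant green: the person is dead
```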

  4. Grand Challenge:

The Grand Challenge is solved by using the features described in the previous parts simultaneously. The complete application can be launched with these commands:

$ export TURTLEBOT3_MODEL=waffle_pi

$ roslaunch integrated_robotics_project rescue_application.launch

This launch file starts the navigation (gmapping, move_base), the localization (amcl, map_server) and the object detection (find_object_2d) with the parameters needed to perform the complete task.

By using the data from the object detection part (object type, distance and angle from the robot), we can add the corresponding marker (color, shape, size) on the robot map when an object is detected with the camera. We can compute the pose of the object by taking as reference the pose of the robot provided by the /odom topic:

objectPositionX = robotPoseX + distanceObjectRobot × cos(robotYaw - angleObjectRobot)

objectPositionY = robotPoseY + distanceObjectRobot × sin(robotYaw - angleObjectRobot)
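These two equations translate directly into a small helper (angles in radians, so the camera angle computed earlier in degrees must be converted first; the function name is illustrative):

```python
import math

def object_position_in_map(robot_x, robot_y, robot_yaw,
                           distance_object_robot, angle_object_robot):
    """Project a detection into the map frame using the equations above.

    robot_yaw comes from the /odom orientation (converted to an Euler yaw) and
    angle_object_robot is the camera angle of the detection, both in radians.
    """
    object_x = robot_x + distance_object_robot * math.cos(robot_yaw - angle_object_robot)
    object_y = robot_y + distance_object_robot * math.sin(robot_yaw - angle_object_robot)
    return object_x, object_y
```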

The markers can be placed at the positions where the objects are detected, as shown in the figure below. While the same object keeps being detected, each new position estimate is added to a running mean; when another object is detected, the mean position is computed and a mean marker is added to the map. The demonstration of the Grand Challenge is available at this link: https://youtu.be/7DYixZJ19nY
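The running-mean logic and the marker publication can be sketched as follows. Only the averaging behaviour follows the description above: the class structure, the marker topic name /detected_objects_markers and the marker appearance are assumptions.

```python
#!/usr/bin/env python
# Sketch: accumulate detections of the same object and publish a mean
# visualization_msgs/Marker when the detected object changes.
import rospy
from visualization_msgs.msg import Marker

class MeanMarkerPublisher(object):
    def __init__(self):
        self.pub = rospy.Publisher("/detected_objects_markers", Marker,
                                   queue_size=10)
        self.current_id = None
        self.positions = []
        self.marker_count = 0

    def add_detection(self, object_id, x, y):
        """Add one detection; close the running mean when the object changes."""
        if self.current_id is not None and object_id != self.current_id:
            self.publish_mean()
        self.current_id = object_id
        self.positions.append((x, y))

    def publish_mean(self):
        """Publish a marker at the mean of the accumulated positions."""
        if not self.positions:
            return
        mean_x = sum(p[0] for p in self.positions) / len(self.positions)
        mean_y = sum(p[1] for p in self.positions) / len(self.positions)
        marker = Marker()
        marker.header.frame_id = "map"
        marker.header.stamp = rospy.Time.now()
        marker.id = self.marker_count
        marker.type = Marker.CUBE
        marker.action = Marker.ADD
        marker.pose.position.x = mean_x
        marker.pose.position.y = mean_y
        marker.pose.orientation.w = 1.0
        marker.scale.x = marker.scale.y = marker.scale.z = 0.3
        marker.color.r, marker.color.g, marker.color.a = 1.0, 0.0, 1.0
        self.pub.publish(marker)
        self.marker_count += 1
        self.positions = []
```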

  5. Experimentations:

To go further in solving the scenario, I thought it would be interesting to make the map created by the robot, updated in real time, available to every member of the rescue team. I therefore developed an application that allows all users to communicate with the robot. The figure below shows the main page of the application, where a member of the rescue team can select the type of interaction they want to perform with the robot.


  6. Robot Teleoperation

The first type of interaction is teleoperation. This screen provides a directional pad to drive the robot (publishing directly on the topic /cmd_vel) and a video player showing the video stream of the topic /image_rectangle.
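The Flutter client publishes on /cmd_vel through the rosbridge WebSocket described in the next section. The same interaction can be reproduced from Python with the roslibpy client, as in this sketch; the host, port and velocity values are placeholders.

```python
# Sketch: publish a velocity command on /cmd_vel through rosbridge, as the
# mobile app does. Requires a running rosbridge_server.
import time
import roslibpy

client = roslibpy.Ros(host="192.168.1.10", port=9090)   # rosbridge address (placeholder)
client.run()

cmd_vel = roslibpy.Topic(client, "/cmd_vel", "geometry_msgs/Twist")
forward = {
    "linear": {"x": 0.2, "y": 0.0, "z": 0.0},
    "angular": {"x": 0.0, "y": 0.0, "z": 0.0},
}
cmd_vel.publish(roslibpy.Message(forward))               # drive forward
time.sleep(1.0)
cmd_vel.publish(roslibpy.Message({"linear": {"x": 0.0, "y": 0.0, "z": 0.0},
                                  "angular": {"x": 0.0, "y": 0.0, "z": 0.0}}))
client.terminate()
```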

  7. Map Observation

This mode allows the user to visualize the robot's map in the app. The image of the map is generated from the occupancy grid published on the topic /map, converted into a grayscale image using OpenCV (-1 in gray, 0 in white and 100 in black). All the characteristics of the map (height, width, resolution) are available in the MapMetaData part of the nav_msgs/OccupancyGrid message. The black square on the image is the robot, drawn from the data of the /odom topic; its position is converted into the map frame using the resolution of the map. Once this image has been generated from the occupancy grid, it is sent to the application (developed with the Flutter SDK, https://flutter.dev, and tested on an Android device) through the rosbridge_server node (https://wiki.ros.org/rosbridge_server), which provides a WebSocket-based bidirectional communication layer between clients (web/smartphone apps) and the ROS server. The content of the published topics is passed through JSON messages. The web_video_server package (https://wiki.ros.org/web_video_server) is used to provide an HTTP video stream.
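The grid-to-image conversion can be sketched as a small node. The gray levels, the /odom-based robot square and the /image_from_occupancy topic follow the description above and the next paragraph; the node structure and the square size are assumptions.

```python
#!/usr/bin/env python
# Sketch: convert the /map occupancy grid into a grayscale image
# (-1 -> gray, 0 -> white, 100 -> black) with the robot drawn as a black square.
import rospy
import numpy as np
from nav_msgs.msg import OccupancyGrid, Odometry
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()
robot_xy = (0.0, 0.0)

def odom_cb(msg):
    global robot_xy
    p = msg.pose.pose.position
    robot_xy = (p.x, p.y)

def map_cb(msg):
    info = msg.info
    grid = np.array(msg.data, dtype=np.int8).reshape(info.height, info.width)
    img = np.full(grid.shape, 128, dtype=np.uint8)   # unknown (-1) -> gray
    img[grid == 0] = 255                             # free -> white
    img[grid == 100] = 0                             # occupied -> black
    # Convert the /odom position into map cells using the map resolution.
    col = int((robot_xy[0] - info.origin.position.x) / info.resolution)
    row = int((robot_xy[1] - info.origin.position.y) / info.resolution)
    img[max(row - 2, 0):row + 3, max(col - 2, 0):col + 3] = 0   # robot square
    pub.publish(bridge.cv2_to_imgmsg(img, encoding="mono8"))

if __name__ == "__main__":
    rospy.init_node("image_from_occupancy")
    pub = rospy.Publisher("/image_from_occupancy", Image, queue_size=1)
    rospy.Subscriber("/odom", Odometry, odom_cb)
    rospy.Subscriber("/map", OccupancyGrid, map_cb)
    rospy.spin()
```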

The figure below presents the part of the application that visualizes the video stream of the topic /image_rectangle and the image of the map generated from the occupancy grid on the topic /image_from_occupancy.

  8. Navigation Control

The last type of interaction is the navigation control: it allows the user to send a new goal by clicking on the map displayed in the application. In this screen, the map has an overlay which gets the click position, converts it into the robot frame and publishes it on the topic /instructions (shared through the rosbridge node). The navigation_goal_on_click.py program subscribes to this topic, creates an action goal with its frame and sends it to move_base, setting a new objective for the navigation. The demonstration is available at this link: https://youtu.be/MAO-09s4pJ0 In the version presented in the video, mean markers are also added on the map (markers are represented with letters). This method could be used once the exploration with explore_lite is completed, or to perform a more targeted exploration of a certain area. It could be useful to investigate areas where the robot has not detected a sign even though an obstacle (a box) is visible on the map. It could also be useful, in the context of the scenario, to direct the robot's search towards a specific area.
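A sketch of this node is shown below. The actual navigation_goal_on_click.py may differ, and the message type assumed here for /instructions (geometry_msgs/PointStamped) is an assumption.

```python
#!/usr/bin/env python
# Sketch of the goal-on-click behaviour: receive a clicked position
# (assumed as geometry_msgs/PointStamped on /instructions) and forward it
# to move_base as an action goal.
import rospy
import actionlib
from geometry_msgs.msg import PointStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def instruction_cb(msg):
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = msg.header.frame_id or "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = msg.point.x
    goal.target_pose.pose.position.y = msg.point.y
    goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(goal)
    rospy.loginfo("New navigation goal sent: (%.2f, %.2f)", msg.point.x, msg.point.y)

if __name__ == "__main__":
    rospy.init_node("navigation_goal_on_click")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    rospy.Subscriber("/instructions", PointStamped, instruction_cb)
    rospy.spin()
```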

  9. Possible improvements and future experimentations

This system and the application could be improved in many ways. The first would be to improve the accuracy of the object detection and of the distance measurement. The precision of the marker positions could be improved by using a more advanced statistical method to process the robot pose data. Sometimes the map generated from the occupancy grid shifts when a change of resolution occurs; it would be interesting to investigate how to make the map generation more robust. Another improvement would be to make the data available outside of the local network. Indeed, the current network implementation allows the robot to communicate with all devices connected to the local Wi-Fi network, which is not always accessible in a rescue scenario. The use of a real-time database (such as Firebase, https://firebase.google.com/docs/database) would allow the data to be shared outside of the local network. The most interesting improvement would be to build the map using multiple robots, fuse their maps and share the information gathered by every robot to explore the environment faster.
