In partnership with Karelics Oy, we integrated an autonomous exploration algorithm into their existing robotic solution using ROS2 and Docker. Additionally, we connected the robot to the Unity game engine and demonstrated the visualization of robot data (pose, generated map, autonomous exploration goals) in a VR scene that we created.
Artemis Georgopoulou | Fabiano Manschein | Shani Israelov | Yasmina Feriel Djelil (equal contribution)
Code | Blog | Video | Presentation | Report
A screenshot of the map generated by the SAMPO2 robot autonomously exploring an unknown environment. Green spheres are possible goal poses, blue and red lines are detected frontiers.
Overview Video
The XploreR project is the outcome of eight weeks of collaboration with Karelics Oy for the Robotics and XR class at the University of Eastern Finland, taught by Professor Ilkka Jormanainen.
- XploreR
- Table of contents
- Requirements
- First-time setup
- Docker with GUIs
- Running the project
- Unity
- Branches
- Todo list
- Thanks
The project uses the following technology stack:
- ROS 2 Galactic (Gazebo, TurtleBot simulation, RViz)
- Docker
- Unity 2021.3.14f1
As such, to fully run the project, the following is required:
- Docker (and docker-compose): installing Docker Desktop is recommended
- (Windows) VcXsrv Windows X Server: https://sourceforge.net/projects/vcxsrv/ (required for the turtlebot3 Gazebo simulation)
- Unity 2020+: for the Unity scene
For `turtlebot3_gazebo` and similar GUI ROS packages to work with Docker, the following steps are necessary. Follow whichever fits your OS.
- Download and install VcXsrv Windows X Server: https://sourceforge.net/projects/vcxsrv/
- Start XLaunch (VcXsrv Windows X Server). Note: this can probably be replaced with another X server program.
  - Press next until you get to the `Extra settings` tab.
  - Deselect `Native opengl`.
  - Select `Disable access control`.
  - Note: sometimes when running simulations XLaunch might get buggy, so you have to kill the whole process and start it again.
- Get your local IP from `ipconfig`.
  - Note: this can be done by pressing the Windows key, typing `cmd`, selecting the `Command Prompt` app, then typing `ipconfig`. Search for a line like this: `IPv4 Address. . . . . . . . . . . : 10.143.144.69`
- Open the `environment.env` file and paste your IP in the `DISPLAY` variable before `:0`, like so: `DISPLAY=10.143.144.69:0`. Your `environment.env` file should look like this:
DISPLAY=10.143.144.69:0
ROS_DISTRO=galactic
ROS_DOMAIN_ID=1
TURTLEBOT3_MODEL=burger
GAZEBO_MODEL_PATH=/opt/ros/galactic/share/turtlebot3_gazebo/models/
GAZEBO_WORLD_PATH=/opt/ros/galactic/share/turtlebot3_gazebo/worlds/
COMPOSE_DOCKER_CLI_BUILD=0
IMPORTANT: you'll need to update your IP every time it changes!
To run Docker without `sudo`:
sudo groupadd docker
sudo gpasswd -a $USER docker
newgrp docker
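To check that the group change took effect, you can run any simple container without `sudo`; the standard `hello-world` image is just one convenient example:
docker run hello-world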
Run the following command:
xhost +local:`docker inspect --format='{{ .Config.Hostname }}' gazebo`
IMPORTANT: This command is required on every reboot.
In the `environment.env` file, check if `DISPLAY` is correct by opening a terminal and running:
echo $DISPLAY
Write the result to `DISPLAY` (normally, it's either `:0` or `:1`). Now you're ready to run the project!
Note: If the project still doesn't work, you might need to install Nvidia Container Toolkit, then run the following:
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
If you've done the first-time setup, remember to do the following on every system reboot:
- If on Windows, open XLaunch (X server) and configure it as described above. Update your IP in the `environment.env` file.
- If on Linux, remember to run the following on every reboot:
xhost +local:`docker inspect --format='{{ .Config.Hostname }}' gazebo`
Spin up the containers with:
docker-compose up
If this is your first time here, it might take a couple of minutes to build the image. Once it's done, you should see the `explore`, `gazebo`, and `rostcp` containers up and their messages.
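If you want to double-check that all three containers are running, a plain Docker status listing (not specific to this project) works from another terminal:
docker ps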
To shut down, use `CTRL+C` in the terminal running the containers.
NOTE: often, the robot gets stuck at the start of the simulation. To fix this, go to RViz and give it a `Nav2 Pose Goal` (make the robot move and map a bit). If the `explore` node considers the exploration done at the start of the simulation, follow the instructions in Resume exploration.
Alternatively, you can run the containers in detached mode:
docker-compose up -d
This will leave the terminal free while the containers run in the background. To shut down, run the following:
docker-compose down
You can `docker exec` into any of the containers and run `ros2` commands from the get-go (sourcing is done automatically). For example, going into the `explore` container:
docker exec -it explore bash
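Once inside, any `ros2` CLI command should work immediately; for example, listing the active topics is a quick sanity check that the nodes are up:
ros2 topic list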
Sometimes the `explore` node will stop exploration, reporting that there are no more frontiers. This can happen when the simulation takes too long to launch. To resume exploration, `exec` into a container and run the following command:
ros2 topic pub /explore/resume std_msgs/Bool '{data: true}' -1
This publishes a single message to the `/explore/resume` topic, toggling the exploration back on. If the exploration keeps stopping, remove the `-1` so it is constantly resumed.
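For reference, the continuously-publishing variant is the same command without the single-shot flag (it then keeps republishing at a steady rate until you stop it with `CTRL+C`):
ros2 topic pub /explore/resume std_msgs/Bool '{data: true}'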
The map is set by default to `labyrinthe.world`, a complex labyrinth world created for this project. It is divided into four zones, with a central start area:
- red: longer hallways and a spiral
- blue: simple long hallways, a path with many tight corners, and a trident shaped path
- green: furniture room with 2 stairs, a series of 3 small tables, and a big table
- purple: highly chaotic and randomly placed walls
All zones are connected to their neighbor zones and the central area.
To change the world loaded in Gazebo, open `docker-compose.yml` and look for the `gazebo` service. There, under `command`, change the last part of the path in the `world:=` parameter to one of the following (or any worlds added to the `worlds` folder):
- labyrinthe.world
- empty_world.world
- turtlebot3_world.world
- turtlebot3_house.world
- turtlebot3_dqn_stage1.world
- turtlebot3_dqn_stage2.world
- turtlebot3_dqn_stage3.world
- turtlebot3_dqn_stage4.world
For example, to change it to the `turtlebot3_house.world` world, the final command would look like this:
command: ros2 launch nav2_bringup tb3_simulation_launch.py slam:=True world:=/opt/ros/galactic/share/turtlebot3_gazebo/worlds/turtlebot3_house.world
NOTE: except for the labyrinthe, this will load the world without spawning the robot. To add a robot, go to the `Insert` tab and add a `turtlebot` to the world.
NOTE2: the labyrinth is intentionally called labyrinthe, as it was made in France.
The `gazebo` container can be used to create new worlds and models. Follow these steps:
1. Set one of the `.world` worlds as described in Changing the Gazebo map to use it as a base/template and spin up the `gazebo` container
2. In the Gazebo simulation, click on the `Edit` tab in the toolbar, then `Building editor`
3. Create your model (warning: you can't save and edit it later!)
4. Save the model somewhere easy to find in the container file system (e.g., the `root` folder)
5. (requires the `Docker` extension on VS Code) Go to VS Code, access the Docker tab, and search for the model file you saved. Download it to the `models` folder
6. Exit the `Building editor`
7. Go to the `Insert` tab and click on `Add Path`. Search for the folder containing your model's folder and add it
8. Now you can add your model to the world. Add any other models as desired.
9. Once done, go to `File` and `Save world as`. Save it in an easy-to-find folder (e.g., root) as a `.world` file
10. Repeat step 5, but for the `.world` file, and save it in the `worlds` folder.
With this, your world is available for use by following the Changing the Gazebo map subsection. All models saved to the `models` folder will also be available in the container next to the `turtlebot` models.
NOTE: new files in the `models` and `worlds` folders will require the container to be rebuilt with:
docker-compose up --build
This project is designed to communicate with a Unity scene running the ROS-TCP Connector and Unity Robotics Visualizations packages. This is achieved via the `rostcp` container running the ROS-TCP-Endpoint package from the `main-ros2` branch. All of this is based on the tutorials provided by Unity on GitHub:
- Unity-Technologies/Unity-Robotics-Hub/tree/main/tutorials/ros_unity_integration
- Unity-Technologies/Robotics-Nav2-SLAM-Example
To replicate this, follow the `ros_unity_integration` tutorial first.
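For context only: the `rostcp` container essentially runs the endpoint node from the ROS-TCP-Endpoint package. If you ever need to start it by hand inside a container, the Unity tutorial launches it along these lines (the `0.0.0.0` address here is just an example value):
ros2 run ros_tcp_endpoint default_server_endpoint --ros-args -p ROS_IP:=0.0.0.0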
Furthermore, we used a VR scene to visualize data from the robot. The following headsets were tested:
- Samsung Odyssey with HMD Odyssey controllers (WMR)
- Pimax 8KX with Index, Sword, and Vive controllers (SteamVR)
Made with Unity editor version `2021.3.14f1`.
Assets used:
The following branches are available:
- The `main` branch contains the most up-to-date working version of the project. Here, the Unity scene doesn't contain VR content.
- The `vr` branch contains the Unity VR scene.
- The `unity-pc-backup` branch is a backup for the Unity scene without VR.
The Unity scene without VR has only the ROS2 integration packages: communication and visualization. Its purpose is to be used with mouse and keyboard.
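For example, to open the VR version of the project, you would switch branches with standard Git before launching Unity (shown here only as a convenience):
git checkout vr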
Current tasks and planned features include:
- Change the VR scene to a 3D map view, allowing the user to see the map in more detail.
- Add user interaction to both the 2D and 3D maps, where user interaction (tap, touch) could be used to switch from autonomous exploration to manual control, allowing the user to set the navigation target point.
- Add new autonomous exploration strategy implementations, e.g. Next-Best-View exploration, and compare the different strategies.
- Add new Gazebo worlds for testing autonomous exploration (e.g., a labyrinth).
- Remake the VR scene with up-to-date XR plugins, and allow interchangeable use between keyboard+mouse and VR headset.
- Add an Augmented Reality (AR) scene for visualizing ROS2 data in a phone app with AR (e.g., using a QR code).
- Add the Unity project `./xplorer_unity` to Git LFS.
- Fix the video previews so they look like video players instead of a static image.
We would like to sincerely thank Karelics Oy for their support throughout this project.
This project makes use of the following open source libraries:
- m-explore-ros2 for autonomous exploration on ROS2
Many thanks to the authors!