Capstone project of Udacity Self-Driving Car Nanodegree (cf. repo).
Team Hot Wheels
- Unity3D: 3D game engine used for our simulation.
This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.
Please use one of the two installation options: either the native or the Docker installation.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a Virtual Machine to install Ubuntu, use the following configuration as a minimum:
  - 2 CPU
  - 2 GB system memory
  - 25 GB of free hard drive space
The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it.
- Follow these instructions to install ROS:
  - ROS Kinetic if you have Ubuntu 16.04.
  - ROS Indigo if you have Ubuntu 14.04.
- Install Dataspeed DBW. Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
- Download the Udacity Simulator.
Build the Docker container:
```bash
docker build . -t capstone
```
Run the Docker container:
```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```
Install the dependencies to run the Python code on each ROS node:
```bash
cd sdcnd-capstone
pip install -r requirements.txt
wget https://github.com/frgfm/sdcnd-capstone/releases/download/v0.1.0/faster_rcnn_resnet50_coco_finetuned.pb
mv faster_rcnn_resnet50_coco_finetuned.pb ros/src/tl_detector/light_classification/
```
After installing Unity3D, you will need an environment build to run the simulation. Download the appropriate build for your OS and extract it:
If you encounter an issue with the above builds, please refer to the "Available Game Builds" section of this readme.
Now you should be able to build the project and run the styx server:
```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```
- Download the training bag that was recorded on the Udacity self-driving car.
- Unzip the file
```bash
unzip traffic_light_bag_file.zip
```
- Play the bag file
```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```
- Launch your project in site mode
```bash
cd sdcnd-capstone/ros
roslaunch launch/site.launch
```
- Confirm that traffic light detection works on real-life images (a quick check is sketched below)
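One way to sanity-check this while the bag is playing (topic names follow the standard Udacity capstone layout; adjust them if your setup differs):

```bash
# List the topics replayed from the bag and published by the nodes
rostopic list

# tl_detector publishes the waypoint index of the nearest red light (-1 when none is detected)
rostopic echo /traffic_waypoint

# Visualize the replayed camera frames in a GUI image viewer
rosrun rqt_image_view rqt_image_view
```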
Outside of `requirements.txt`, here are the driver/library versions used by the simulator grader and Carla:
|  | Simulator | Carla |
| --- | --- | --- |
| Nvidia driver | 384.130 | 384.130 |
| CUDA | 8.0.61 | 8.0.61 |
| cuDNN | 6.0.21 | 6.0.21 |
| TensorRT | N/A | N/A |
| OpenCV | 3.2.0-dev | 2.4.8 |
| OpenMP | N/A | N/A |
We are working on a fix to line up the OpenCV versions between the two.
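To compare your local setup against these versions, the following commands report the installed driver and library versions (a quick sanity check, not part of the official project instructions):

```bash
nvidia-smi                                        # Nvidia driver version
nvcc --version                                    # CUDA toolkit version
python -c "import cv2; print(cv2.__version__)"    # OpenCV version
# cuDNN version (the header location may differ on your system)
grep -A 2 "define CUDNN_MAJOR" /usr/include/cudnn.h
```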
This project involves an agent (vehicle on a highway) exposed to a continuous state space and continuous action space. The environment can be switched to manual mode to give controls to the user; by default, the only accepted inputs are the controls communicated by ROS through a WSGI application.
This Unity environment gives a large state space with inherent constraints on the agent state.
In manual mode, warnings will be thrown if we violate common highway driving rules. Otherwise, the environment constrains the car position strictly to the values sent by the controller, and it exposes both the agent state and sensor measurements to our codebase.
Please refer to this repository for further details.
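For context, the link between the simulator and ROS is a small WSGI application served over socket.io; the sketch below illustrates that pattern (the event name and handler body follow the usual Udacity styx server layout but are illustrative, not the project's actual code):

```python
import eventlet
import eventlet.wsgi
import socketio
from flask import Flask

sio = socketio.Server()
app = Flask(__name__)


@sio.on('telemetry')
def telemetry(sid, data):
    # Simulator state (pose, velocity, camera image) arrives here and is
    # forwarded to ROS topics by the bridge; controls flow back the other way.
    pass


if __name__ == '__main__':
    # Wrap the Flask app with the socket.io middleware and serve on port 4567,
    # matching the port exposed in the docker run command above.
    # (Older python-socketio releases name this wrapper socketio.Middleware.)
    app = socketio.WSGIApp(sio, app)
    eventlet.wsgi.server(eventlet.listen(('', 4567)), app)
```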
In particular, the agent is expected to:
- Respect traffic light stops
- Comply with speed limits
- Keep the lane along the way
In the `ros/src` folder, you will find the following nodes (a minimal node sketch follows the list):
- `tl_detector`: responsible for object detection on traffic lights
- `twist_controller`: responsible for vehicle controls
- `waypoint_follower`: responsible for trajectory following
- `waypoint_loader`: loads the waypoints on the map (position of traffic lights)
- `waypoint_updater`: selects an appropriate behavior based on `tl_detector` information.
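As a rough illustration of how these nodes are wired together, here is a minimal sketch of a node in the style of `waypoint_updater` (topic and message names follow the standard Udacity capstone layout; the callbacks are placeholders, not the project's actual logic):

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped
from styx_msgs.msg import Lane


class WaypointUpdater(object):
    def __init__(self):
        rospy.init_node('waypoint_updater')
        # Current vehicle pose and the full track waypoints loaded by waypoint_loader
        rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
        rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
        # Final (trimmed and velocity-adjusted) waypoints consumed by waypoint_follower
        self.final_waypoints_pub = rospy.Publisher('final_waypoints', Lane, queue_size=1)
        self.pose = None
        self.base_waypoints = None
        rospy.spin()

    def pose_cb(self, msg):
        self.pose = msg

    def waypoints_cb(self, msg):
        self.base_waypoints = msg


if __name__ == '__main__':
    WaypointUpdater()
```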
#### Perception
This node is responsible for detecting traffic lights in range and classifying their color so that the planning module can act on them. For inference speed, we could have selected a single-shot detector such as SSD or YOLOv3, but as a first attempt we used Faster R-CNN for its detection performance.
It is available in the TensorFlow model zoo and is a well-known architecture used by some self-driving car makers for their own vehicles. Alex Lechner's dataset was used for training, following his instructions for training models on it, to avoid manually labeling a dataset.
For efficiency purposes, we only do model inference when the traffic light is in close range since the information is not actionable beforehand.
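A minimal sketch of this range gating is shown below (the distance threshold and helper names are hypothetical, not taken from the project code; `get_classification` follows the Udacity `tl_classifier` skeleton):

```python
import math

DETECTION_RANGE_M = 80.0  # assumed threshold beyond which the light state is not actionable


def classify_if_in_range(car_pose, light_pose, camera_image, classifier):
    """Run the (expensive) detector only when the next light is close enough."""
    dx = light_pose.position.x - car_pose.position.x
    dy = light_pose.position.y - car_pose.position.y
    if math.hypot(dx, dy) > DETECTION_RANGE_M:
        return None  # too far away: skip inference
    return classifier.get_classification(camera_image)
```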
#### Planning
The planning node publishes waypoints for the vehicle to follow, along with the velocity for each one of them. We had to reduce the number of published waypoints down to 20 to avoid significant lag in the workspace.
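As a sketch, the number of published waypoints boils down to a single constant and a slice of the base waypoints (names follow the standard Udacity skeleton; this is illustrative, not the exact project code):

```python
from styx_msgs.msg import Lane

LOOKAHEAD_WPS = 20  # reduced to 20 to avoid lag in the workspace (see above)


def publish_waypoints(closest_idx, base_waypoints, final_waypoints_pub):
    # Publish only the next LOOKAHEAD_WPS waypoints ahead of the vehicle,
    # each carrying its target velocity.
    lane = Lane()
    lane.waypoints = base_waypoints.waypoints[closest_idx:closest_idx + LOOKAHEAD_WPS]
    final_waypoints_pub.publish(lane)
```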
Udacity provided an Autoware ROS node that publishes the twist commands for linear and angular velocities.
The previously mentioned implementation yields a smooth driving agent able to evolve in the highway environment. For better insight into the perception module, the visualization below includes another window with the output of the object detection model.
The trained object detection model is available for download here.
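If you want to try it outside of ROS, a frozen TensorFlow graph like this one can be loaded with the TensorFlow 1.x API; the tensor names below are the usual TensorFlow Object Detection API exports and are an assumption, not something verified against this specific file:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x API

# Load the frozen inference graph
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("faster_rcnn_resnet50_coco_finetuned.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    image = np.zeros((1, 600, 800, 3), dtype=np.uint8)  # dummy RGB batch
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image},
    )
```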
The full-length lap recording is available for download in the release attachments:
This implementation is of my own design but widely uses the following methods and concepts:
- Object detection papers: Faster-RCNN, SSD, YOLOv3
- Traffic light dataset: Alex Lechner dataset
- Pretrained models: TensorFlow model zoo
Distributed under the MIT License. See `LICENSE` for more information.