Juan David Galvis [email protected]
David Cardozo [email protected]
Carlos Alvarez [email protected]
John Betancourt [email protected]
Andres Rengifo [email protected]
In this project we develop a system that integrates multiple components to drive a car autonomously: drive-by-wire, waypoint generation, steering and throttle control, and traffic light classification. The system is first implemented in simulation and then on a real car (Carla, Udacity's self-driving car).
The following modules are implemented:
Starting from an SSD MobileNet v1 model pretrained on the COCO dataset, we applied transfer learning on data provided by Alex Lechner's group to obtain a deep neural network that detects and classifies traffic lights in images from both the simulator and Carla (Udacity's self-driving car). We use two models, one for real images and one for simulation images; they can be switched in the file `catkin_ws/src/tl_detector/tl_detector.py`, line 27.
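As a rough sketch of the model-switching idea, the node could pick the frozen graph based on whether it is running on site or in simulation. The paths and names below (`MODEL_SIM`, `MODEL_REAL`, `select_model`) are illustrative assumptions, not the actual identifiers used in `tl_detector.py`:

```python
# Hypothetical sketch of environment-based model selection.
# Both paths are placeholders, not the repo's real file layout.
MODEL_SIM = "light_classification/models/ssd_sim/frozen_inference_graph.pb"
MODEL_REAL = "light_classification/models/ssd_real/frozen_inference_graph.pb"

def select_model(is_site):
    """Return the frozen-graph path for the current environment
    (is_site=True when running on Carla, False in the simulator)."""
    return MODEL_REAL if is_site else MODEL_SIM
```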
The main goal of this module is to publish a set of waypoints for the car to follow. We reduced the number of waypoints in lane generation to 50 to lower the computational load. First, the algorithm searches for the waypoint closest to the car and takes the next 50 waypoints from the base waypoints. Then, it checks for a stop signal; if one is present, it generates a deceleration waypoint set based on the formula `10 - 10*exp(-x^2/128)`, producing smooth braking behavior. Finally, the resulting waypoint list is published on a ROS topic.
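The two key steps above can be sketched in plain Python. This is a minimal illustration, not the node's actual code: the real implementation works on ROS waypoint messages, and the function names here are made up for the example. Note that the braking profile evaluates to 0 at the stop line (x = 0) and approaches 10 m/s far from it:

```python
import math

LOOKAHEAD_WPS = 50  # number of waypoints published ahead of the car

def closest_waypoint_index(car_xy, waypoints_xy):
    """Brute-force nearest-waypoint search over (x, y) tuples.
    (A KD-tree would be a common optimization for long tracks.)"""
    cx, cy = car_xy
    return min(range(len(waypoints_xy)),
               key=lambda i: (waypoints_xy[i][0] - cx) ** 2
                           + (waypoints_xy[i][1] - cy) ** 2)

def decel_velocity(distance_to_stop):
    """Target velocity from the braking profile 10 - 10*exp(-x^2/128),
    where x is the distance to the stop line."""
    return 10.0 - 10.0 * math.exp(-(distance_to_stop ** 2) / 128.0)
```

For example, `decel_velocity(0.0)` is 0 (the car is fully stopped at the light), while at large distances the target velocity is close to the 10 m/s cruise value, so the transition is smooth rather than a hard brake.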
The drive-by-wire node implements the controllers needed to make the vehicle follow the target waypoints. Brake and throttle are regulated by a classic PID controller whose reference is the linear velocity taken from the waypoint follower. The throttle control takes the sign of the control output into account and sends the corresponding brake value when necessary. The steering control uses the angular and linear velocities from the waypoint follower's twist message to compute the corresponding steering angle.
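A classic PID step, as used for the throttle, looks roughly like the sketch below. The gains and limits are illustrative, not the tuned values from our node:

```python
class PID:
    """Minimal PID controller sketch (illustrative, not the project's tuned code)."""

    def __init__(self, kp, ki, kd, mn=float("-inf"), mx=float("inf")):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.mn, self.mx = mn, mx          # output saturation limits
        self.int_val = 0.0                 # accumulated integral term
        self.last_error = 0.0

    def step(self, error, dt):
        """One control step: error = target velocity - current velocity."""
        self.int_val += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        val = self.kp * error + self.ki * self.int_val + self.kd * derivative
        return min(max(val, self.mn), self.mx)
```

A positive output maps to throttle, while a negative output is converted to a brake torque command, which matches the sign-based switching described above.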
By integrating all these modules, we can make the car follow waypoints in the road's middle lane while stopping at red lights.
To verify that the traffic light classifier performs correctly, we set the car to manual mode so it observes red, yellow, and green lights:
A video of the simulation is available at this link.
This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction here.
Please use one of the two installation options, either native or docker installation.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
- If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:
  - 2 CPU
  - 2 GB system memory
  - 25 GB of free hard drive space
The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.
- Follow these instructions to install ROS:
  - ROS Kinetic if you have Ubuntu 16.04.
  - ROS Indigo if you have Ubuntu 14.04.
- Install the Dataspeed DBW SDK
  - Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
- Download the Udacity Simulator.
Build the docker container:
```bash
docker build . -t capstone
```

Run the docker container:
```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```
To set up port forwarding, please refer to the instructions from Term 2.
- Clone the project repository
```bash
git clone https://github.com/udacity/CarND-Capstone.git
```
- Install python dependencies
```bash
cd CarND-Capstone
pip install -r requirements.txt
```
- Make and run styx
```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```
- Run the simulator
- Download training bag that was recorded on the Udacity self-driving car.
- Unzip the file
```bash
unzip traffic_light_bag_file.zip
```
- Play the bag file
```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```
- Launch your project in site mode
```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch
```
- Confirm that traffic light detection works on real life images
Outside of `requirements.txt`, here is information on other driver/library versions used in the simulator and Carla:
Specific to these libraries, the simulator grader and Carla use the following:
|  | Simulator | Carla |
| --- | --- | --- |
| Nvidia driver | 384.130 | 384.130 |
| CUDA | 8.0.61 | 8.0.61 |
| cuDNN | 6.0.21 | 6.0.21 |
| TensorRT | N/A | N/A |
| OpenCV | 3.2.0-dev | 2.4.8 |
| OpenMP | N/A | N/A |
We are working on a fix to line up the OpenCV versions between the two.