This project implements behavioral cloning: a neural network is trained to drive like a human. In the real world, a human controls a car with throttle, brake, steering angle, and so on; this project considers only the steering angle.
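As a toy illustration of the idea (not this project's actual network), behavioral cloning can be framed as supervised regression from camera images to the steering angles recorded from a human driver. The sketch below fits a linear model with NumPy; `images` and `angles` are synthetic stand-ins for logged driving data:

```python
import numpy as np

# Hypothetical logged driving data: 100 tiny grayscale frames and the
# steering angle a "human driver" chose for each frame.
rng = np.random.default_rng(0)
images = rng.random((100, 8 * 8))          # flattened 8x8 camera frames
true_w = rng.standard_normal(8 * 8)
angles = images @ true_w                   # pretend human steering angles

# "Clone" the behavior: fit a model mapping frames to angles.
w, *_ = np.linalg.lstsq(images, angles, rcond=None)

# The cloned model now predicts a steering angle for an unseen frame.
new_frame = rng.random(8 * 8)
predicted_angle = new_frame @ w
```

The real project replaces the linear model with a convolutional network, but the training objective is the same: minimize the error between predicted and recorded steering angles.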
This code should run under Python 3.5 or later.
Dependencies are:
- tensorflow 1.4 or later
- keras
- numpy
- sklearn
- tqdm
- opencv-python
train.py implements the neural network architecture, trains the network, and saves the model.
Usage:
python train.py -f FOLDER_OF_DATA
Once the model has been saved, it can be used with drive.py using this command:
python drive.py model.h5
The above command will load the trained model, use it to make predictions on individual images in real time, and send the predicted steering angle back to the server via a websocket connection. The simulator can be found here.
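The per-frame path inside drive.py is essentially: decode the incoming camera image, preprocess it the same way as during training, and run a single prediction. A sketch of such a preprocessing step is below; the crop bounds and scaling are illustrative assumptions, not necessarily what this repo uses:

```python
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Crop off sky/hood rows and scale pixels to [-0.5, 0.5].

    Mirrors a common behavioral-cloning pipeline; the crop bounds
    here (rows 60..135) are illustrative, not this repo's exact values.
    """
    cropped = frame[60:135, :, :]
    return cropped.astype(np.float32) / 255.0 - 0.5

# Example: one simulated 160x320 RGB camera frame.
frame = np.random.randint(0, 256, size=(160, 320, 3), dtype=np.uint8)
batch = preprocess(frame)[np.newaxis]      # model.predict expects a batch axis
# steering = float(model.predict(batch))   # then sent back over the websocket
```

Whatever preprocessing train.py applies must be applied identically here, or the model's predictions will be meaningless at drive time.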
python drive.py model.h5 run1
The fourth argument, run1, is the directory in which to save the images seen by the agent. If the directory already exists, it'll be overwritten.
ls run1
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_424.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_451.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_477.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_528.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_573.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_618.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_697.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_723.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_749.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_817.jpg
...
The image file name is a timestamp of when the image was seen. This information is used by video.py to create a chronological video of the agent driving.
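Because each filename encodes a timestamp down to the millisecond, sorting the filenames lexicographically already yields chronological order; the names can also be parsed back into datetime objects with the standard library. A small sketch, using filenames from the listing above:

```python
from datetime import datetime

def parse_frame_time(filename: str) -> datetime:
    """Parse a '2017_01_09_21_10_23_424.jpg'-style name into a datetime."""
    stem = filename.rsplit(".", 1)[0]           # drop the .jpg extension
    date_part, millis = stem.rsplit("_", 1)     # split off the milliseconds
    ts = datetime.strptime(date_part, "%Y_%m_%d_%H_%M_%S")
    return ts.replace(microsecond=int(millis) * 1000)

frames = ["2017_01_09_21_10_23_451.jpg", "2017_01_09_21_10_23_424.jpg"]
ordered = sorted(frames)                        # chronological order
first = parse_frame_time(ordered[0])
```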
python video.py run1
This creates a video from the images found in the run1 directory. The name of the video will be the name of the directory followed by '.mp4'; in this case, the video will be run1.mp4.
Optionally, one can specify the FPS (frames per second) of the video:
python video.py run1 --fps 48
This will run the video at 48 FPS. The default is 60 FPS.
The pre-trained model.
Video from running the model on the simulator.