AaltoVision/donkeycar-dreamer

Learning to Drive Small Scale Cars from Scratch -- with Dreamer

MIT License

This codebase contains the code to learn to drive a Donkey Car from images using model-based reinforcement learning. The approach follows the Dreamer algorithm, which learns a world model to predict future latent states and learns a policy and value function purely from latent imagination. This implementation learns to follow a track in about 5 minutes of driving, which corresponds to roughly 6000 samples from the environment.
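The latent-imagination idea described above can be sketched as follows. This is a minimal illustrative sketch, not the actual code in models.py or agent.py: the linear transition matrix, linear reward and value heads, and the latent dimension are all assumptions standing in for the learned neural networks, while the lambda-return computation follows the form used in the Dreamer paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the learned world-model components
# (the real models in models.py are neural networks).
LATENT_DIM = 4
A = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM))  # transition model (assumed linear)
w_reward = rng.normal(size=LATENT_DIM)                     # reward predictor head
w_value = rng.normal(size=LATENT_DIM)                      # value function head

def imagine(z0, horizon):
    """Roll the world model forward in latent space; no environment steps needed."""
    zs, rewards = [z0], []
    z = z0
    for _ in range(horizon):
        z = A @ z                      # predicted next latent state
        zs.append(z)
        rewards.append(w_reward @ z)   # predicted reward for that state
    return zs, rewards

def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """Lambda-return targets over the imagined trajectory (Dreamer-style)."""
    returns = [0.0] * len(rewards)
    last = values[-1]
    for t in reversed(range(len(rewards))):
        last = rewards[t] + gamma * ((1 - lam) * values[t + 1] + lam * last)
        returns[t] = last
    return returns

# Imagine a 15-step trajectory from a random latent state and compute
# the value targets the actor and value models would be trained on.
z0 = rng.normal(size=LATENT_DIM)
zs, rewards = imagine(z0, horizon=15)
values = [w_value @ z for z in zs]
targets = lambda_returns(rewards, values)
print(len(targets))  # one return target per imagined step
```

In the real agent, the actor is trained to maximize these imagined returns and the value model to regress toward them, so neither requires additional environment interaction beyond what the world model was trained on.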

Core files

models.py contains all models used in our experiments: the world model, the actor model, and the value model.
agent.py implements the Dreamer agent.
dreamer.py contains the code for driving in the environment with the agent, as well as for training it.

Running the code

Install the required libraries.

To run the experiments, please refer to https://github.com/ari-viitala/donkeycar/tree/master.

References:

[1] Dream to Control: Learning Behaviors by Latent Imagination

[2] TensorFlow implementation (TensorFlow 1.x)

[3] TensorFlow implementation (TensorFlow 2.x)

[4] Learning Latent Dynamics for Planning from Pixels

[5] PlaNet implementation from @Kaixhin

[6] Donkeycar
