irl-benchmark
is a modular library for evaluating various Inverse Reinforcement Learning algorithms. It provides an extensible platform for experimenting with different environments, algorithms and metrics.
To set up a conda environment and install the dependencies:

conda create --name irl-benchmark python=3.6
source activate irl-benchmark
pip install -r requirements.txt
Start by generating expert data:
python generate_expert_data.py
Then run
python main.py
to get an overview of how all the components of irl-benchmark
work together.
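For orientation, here is a minimal, self-contained sketch of the kind of experiment loop such a script wires together: collect expert trajectories, run an IRL algorithm on them, and score the recovered reward with a metric. All functions below are illustrative placeholders written for this example; they are not the actual irl-benchmark API.

```python
import numpy as np

# Illustrative placeholders only -- NOT the irl-benchmark API.


def collect_expert_trajectories(n_trajs=10, horizon=20, n_states=5, n_actions=2, seed=0):
    """Simulate an 'expert' by sampling random (state, action) pairs.

    In a real experiment the expert data would come from a trained agent
    acting in an environment (cf. generate_expert_data.py).
    """
    rng = np.random.default_rng(seed)
    return [
        [(int(rng.integers(n_states)), int(rng.integers(n_actions))) for _ in range(horizon)]
        for _ in range(n_trajs)
    ]


def run_irl(trajectories, n_states=5):
    """Stand-in for an IRL algorithm: score states by expert visitation frequency."""
    counts = np.zeros(n_states)
    for traj in trajectories:
        for state, _action in traj:
            counts[state] += 1
    return counts / counts.sum()


def evaluate(reward_estimate):
    """Stand-in for a metric: entropy of the estimated reward distribution."""
    p = reward_estimate + 1e-12
    return float(-(p * np.log(p)).sum())


if __name__ == "__main__":
    trajs = collect_expert_trajectories()
    reward = run_irl(trajs)
    print("estimated reward per state:", np.round(reward, 3))
    print("metric value:", round(evaluate(reward), 3))
```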
Documentation (work in progress) is available at https://johannesheidecke.github.io/irl-benchmark.
You may find the section on extending the library useful if you are planning to author new algorithms.

The following IRL algorithms are currently implemented (a generic sketch of the Maximum Entropy IRL gradient follows the list):
- Apprenticeship Learning (SVM Based)
- Apprenticeship Learning (Projection Based)
- Maximum Entropy IRL
- Maximum Causal Entropy IRL
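To give a flavor of these methods, below is a generic sketch of the gradient step at the heart of Maximum Entropy IRL with a linear reward r(s) = θᵀφ(s): move θ so that the learner's feature expectations match the expert's. The state-distribution update is a crude softmax stand-in for the soft value iteration used in the full algorithm, and none of this reflects the library's implementation.

```python
import numpy as np

# Generic illustration of the MaxEnt IRL gradient with a linear reward
# r(s) = theta . phi(s); not the implementation used in this library.

n_states, n_features = 4, 4
phi = np.eye(n_states)  # one-hot state features (illustrative assumption)

# Hypothetical expert demonstrations: lists of visited states.
expert_trajs = [[0, 1, 2, 3], [0, 1, 1, 3]]

# Empirical expert feature expectations (average feature vector per trajectory).
mu_expert = np.mean([phi[traj].mean(axis=0) for traj in expert_trajs], axis=0)

theta = np.zeros(n_features)
learning_rate = 0.1
for _ in range(200):
    # State distribution induced by the current reward -- a crude softmax
    # stand-in for the soft value iteration / forward pass of full MaxEnt IRL.
    reward = phi @ theta
    d = np.exp(reward - reward.max())
    d /= d.sum()
    mu_learner = d @ phi
    # MaxEnt gradient: expert feature expectations minus the learner's.
    theta += learning_rate * (mu_expert - mu_learner)

print("learned reward weights per state:", np.round(theta, 2))
```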
Copyright: Adria Garriga-Alonso, Anton Osika, Johannes Heidecke, Max Daniel, and Sayan Sarkar.