
## Introduction

irl-benchmark is a modular library for evaluating Inverse Reinforcement Learning (IRL) algorithms. It provides an extensible platform for experimenting with different environments, algorithms, and metrics.

## Installation

```
conda create --name irl-benchmark python=3.6
source activate irl-benchmark
pip install -r requirements.txt
```

## Getting Started

Start by generating expert data:

```
python generate_expert_data.py
```

Then run

```
python main.py
```

to get an overview of how all the components of irl-benchmark work together.
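The overall flow — collect expert demonstrations, recover a reward function with an IRL algorithm, then score the result with a metric — can be sketched roughly as below. All function names here are hypothetical stand-ins for illustration, not the actual irl-benchmark API.

```python
# Illustrative expert-data -> IRL -> metric pipeline.
# These are toy stand-ins, NOT irl-benchmark's real interfaces.
import random

def collect_expert_trajectories(n=5, length=3):
    """Stand-in for generate_expert_data.py: sample (state, action) pairs."""
    random.seed(0)
    return [[(random.randint(0, 3), random.randint(0, 1)) for _ in range(length)]
            for _ in range(n)]

def irl_recover_reward(trajectories):
    """Toy 'IRL': score each state by how often the expert visits it."""
    counts = {}
    for traj in trajectories:
        for state, _action in traj:
            counts[state] = counts.get(state, 0) + 1
    total = sum(counts.values())
    return {state: c / total for state, c in counts.items()}

def evaluate(reward, trajectories):
    """Toy metric: average recovered reward along the expert trajectories."""
    values = [reward.get(s, 0.0) for traj in trajectories for s, _a in traj]
    return sum(values) / len(values)

trajs = collect_expert_trajectories()
reward = irl_recover_reward(trajs)
score = evaluate(reward, trajs)
```

`main.py` wires together the library's real versions of these three stages.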

## Documentation

Documentation is available as work in progress at: https://johannesheidecke.github.io/irl-benchmark.

You may find the Extending section useful if you plan to author new algorithms.
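Extending a library like this typically means subclassing a common algorithm interface. The sketch below shows the general plug-in pattern; the class and method names are purely illustrative assumptions, not irl-benchmark's actual base classes.

```python
# Hypothetical plug-in pattern for adding a new IRL algorithm.
# Names are illustrative, not the real irl-benchmark API.

class BaseIRLAlgorithm:
    """Minimal interface a new algorithm might implement."""
    def train(self, trajectories):
        raise NotImplementedError

class UniformRewardIRL(BaseIRLAlgorithm):
    """Trivial example: assign every observed state the same reward."""
    def train(self, trajectories):
        states = {s for traj in trajectories for s, _a in traj}
        return {s: 1.0 / len(states) for s in states}

algo = UniformRewardIRL()
reward = algo.train([[(0, 1), (1, 0)], [(1, 1)]])  # two states observed
```

See the documentation's Extending section for the interface the library actually expects.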

- Environments

- Algorithms

- Metrics

Copyright: Adria Garriga-Alonso, Anton Osika, Johannes Heidecke, Max Daniel, and Sayan Sarkar.