rlly - A C++ library for reinforcement learning environments (under development)


The goal of rlly is to implement simple environments for reinforcement learning algorithms in C++, with an interface similar to the OpenAI gym library for Python.

Requirements

  • C++17

  • For rendering:

    • OpenGL
    • freeglut
      $ sudo apt-get install freeglut3-dev
    • Remark: It is possible to use the library without rendering options and avoid these requirements.
  • For test coverage (to run scripts/run_tests_with_coverage.sh):

    • lcov
      $ sudo apt install lcov

How to use the library

Header-only

You can use rlly without installing anything. All you have to do is copy the file rlly.hpp into your project and include it.

The file rlly.hpp is generated by running

$ bash generate_header/run.sh

and does not include rendering classes and functions by default.

To include code for rendering in rlly.hpp, run

$ bash generate_header/run.sh -rendering

Building and installing

You can also install the library using the following commands:

$ mkdir build
$ cd build
$ cmake .. -DCMAKE_INSTALL_PREFIX=<INSTALL_LOCATION>
$ make install

where <INSTALL_LOCATION> should be replaced with the directory in which you wish to install the library (e.g. /home/your_username/my_cpp_packages).

If you have freeglut and OpenGL installed in your system, both rlly and rlly_rendering will be installed. Otherwise, only rlly is installed.

To use the library in your project, add the following to your project's CMakeLists.txt:

find_package(rlly REQUIRED           PATHS <INSTALL_LOCATION>)
find_package(rlly_rendering REQUIRED PATHS <INSTALL_LOCATION>)

then you can link rlly and rlly_rendering to your targets.

Here is a sample code using the installed libraries:

// file rlly_example.cpp
#include <rlly/env.h>
#include <rlly/render.h>

int main() 
{
    rlly::env::CartPole env;
    env.enable_rendering();
    for(int ii = 0; ii < 50; ii++) env.step(env.action_space.sample());
    rlly::render::render_env(env);

    return 0;
}

and the corresponding CMakeLists.txt:

cmake_minimum_required(VERSION 3.0)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
project(rlly_lib_example)

find_package(rlly REQUIRED           PATHS <INSTALL_LOCATION>)
find_package(rlly_rendering REQUIRED PATHS <INSTALL_LOCATION>)

add_executable(example rlly_example.cpp)
target_link_libraries(example rlly rlly_rendering)

Examples

The examples below show how to interact with some rlly environments and how to visualize them. For more examples, see the examples/ directory.

MountainCar

#include <vector>
// the header needs to be generated with -rendering option
#include "rlly.hpp"

int main()
{
    // create environment, set seed and enable rendering
    rlly::env::MountainCar env;
    env.set_seed(123);
    env.enable_rendering();
    // run
    int horizon = 200;
    for(int hh = 0; hh < horizon; hh++)
    {
        auto action = env.action_space.sample();
        auto step_result = env.step(action);
    }
    // render
    rlly::render::render_env(env);
    return 0;
}

[Rendering of the MountainCar environment]

CartPole

#include <iostream>
#include <vector>
#include "rlly.hpp"

int main()
{
    // create environment and enable rendering
    rlly::env::CartPole env;
    env.enable_rendering();
    // run
    int horizon = 200;
    for(int ii = 0; ii < horizon; ii++)
    {
        auto action = env.action_space.sample();
        auto step_result = env.step(action);
        std::cout << "state = "; rlly::utils::vec::printvec(step_result.next_state);
        std::cout << "reward = " << step_result.reward << std::endl;
        if (step_result.done) break;
    }
    // render
    rlly::render::render_env(env);
    return 0;
}

[Rendering of the CartPole environment]

GridWorld

#include <iostream>
#include <vector>
#include "rlly.hpp"

int main(void)
{
    // create environment, set seed and enable rendering
    double fail_prob = 0.0;          // failure probability
    double reward_smoothness = 0.0;  // reward = exp( - distance(next_state, goal_state)^2 / reward_smoothness^2)
    double sigma = 0.1;              // reward noise (Gaussian)
    rlly::env::GridWorld env(5, 10, fail_prob, reward_smoothness, sigma);
    env.set_seed(123);
    env.enable_rendering();
    // run
    int horizon = 50;
    for(int hh = 0; hh < horizon; hh++)
    {
        int action = env.action_space.sample();
        auto step_result = env.step(action);
    }
    // render (graphic)
    rlly::render::render_env(env);
    // render (text)
    env.render();
    return 0;
}

[Rendering of the GridWorld environment]

SquareWorld

SquareWorld is a continuous-state version of GridWorld.

#include <iostream>
#include <vector>
#include "rlly.hpp"

int main()
{
    // create environment and enable rendering
    rlly::env::SquareWorld env;
    env.enable_rendering();
    // run
    int horizon = 50;
    for(int ii = 0; ii < horizon; ii++)
    {
        auto action = env.action_space.sample();
        auto step_result = env.step(action);
        std::cout << "state = "; rlly::utils::vec::printvec(step_result.next_state);
        std::cout << "reward = " << step_result.reward << std::endl;
        if (step_result.done) break;
    }
    // render
    rlly::render::render_env(env);
    return 0;
}

[Rendering of the SquareWorld environment]

Documentation

To view the documentation, run

$ doxygen Doxyfile

and open the file docs/html/index.html.

Testing

Creating a new test

  • Create a file test/my_test.cpp using Catch2.

  • In the file test/CMakeLists.txt, include my_test.cpp in the list of files in add_executable().

  • Run

$ bash scripts/run_tests.sh

Third-party code

All third-party code is in the directory ext, with the respective LICENSE files.
