A toolkit providing easy and unified access to building control environments for reinforcement learning (RL). Compared to other domains, RL environments for building control tend to be more difficult to install and handle. Most environments require the user to either manually install a building simulator (e.g. EnergyPlus) or to manually manage Docker containers. This can be tedious.
Beobench was created to make building control environments easier to use and experiments more reproducible. Beobench uses Docker to manage all environment dependencies in the background so that the user doesn't have to. A standardised API, illustrated in the figure below, allows the user to easily configure experiments and evaluate new RL agents on building control environments.
- Large collection of building control environments: Out-of-the-box, Beobench provides access to environments from BOPTEST, Energym, and Sinergym. Beobench combines the environments from these frameworks into the (to the best of our knowledge) largest single collection of building control environments. See the full environment list below.
- Clean and light-weight installation: Beobench is installed via pip and only requires Docker as an additional non-Python dependency (see installation guide). Without Beobench, most building control environments require manually installing building simulators or directly managing Docker containers.
- Built-in RL agents: Beobench allows the user to apply any agent from the Ray RLlib collection in addition to agents provided by the user directly.
- Easily extendable: Want to use Beobench with an environment not yet included? Support for user-defined Docker contexts makes it easy to use Beobench with any RL environment.
Install Docker on your machine (if on Linux, check the additional installation steps).
Install Beobench using:
```console
pip install beobench
```
Warning: OS support
- Linux: recommended and tested (Ubuntu 20.04).
- Windows: recommended only via the Windows Subsystem for Linux (WSL).
- macOS: experimental support for Apple silicon systems, intended for development purposes only (not for running experiments). Intel-based macOS support is untested.
To get started with our first experiment, we set up an experiment configuration. Experiment configurations can be given as a yaml file or a Python dictionary. The configuration fully defines an experiment, configuring everything from the RL agent to the environment and its wrappers. The figure below illustrates the config structure.
Let's look at a concrete example. Consider this `config.yaml` file:
```yaml
agent:
  # script to run inside experiment container
  origin: ./agent.py
  # configuration that can be accessed by script above
  config:
    num_steps: 100
env:
  # gym framework from which we want to use an environment
  gym: sinergym
  # gym-specific environment configuration
  config:
    # sinergym environment name
    name: Eplus-5Zone-hot-continuous-v1
  wrappers: [] # no wrappers added for this example
general:
  # save experiment data to ./beobench_results directory
  local_dir: ./beobench_results
```
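Since experiment configurations can also be given as Python dictionaries, the same configuration could be expressed roughly as follows (a sketch; the keys mirror the yaml file above one-to-one):

```python
# Python-dictionary version of the config.yaml example above.
config = {
    "agent": {
        # script to run inside experiment container
        "origin": "./agent.py",
        # configuration that can be accessed by the script
        "config": {"num_steps": 100},
    },
    "env": {
        # gym framework from which we want to use an environment
        "gym": "sinergym",
        # gym-specific environment configuration
        "config": {"name": "Eplus-5Zone-hot-continuous-v1"},
        "wrappers": [],  # no wrappers added for this example
    },
    "general": {
        # save experiment data to ./beobench_results directory
        "local_dir": "./beobench_results",
    },
}
```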
The `agent.origin` setting in the configuration file above sets the agent script to be `./agent.py`. The agent script is the main code that is run inside the experiment container. Most of the time this script will define an RL agent, but it could really be anything. Simply put, we can think of Beobench as a tool to (1) build a special Docker container and then (2) execute an agent script inside that container.
Let's create an example agent script, `agent.py`:
```python
from beobench.experiment.provider import create_env, config

# create environment and get starting observation
env = create_env()
observation = env.reset()

for _ in range(config["agent"]["config"]["num_steps"]):
    # sample random action from environment's action space
    action = env.action_space.sample()
    # take selected action in environment
    observation, reward, done, info = env.step(action)

env.close()
```
The only Beobench-specific part of this script is the first line: we import the `create_env` function and the `config` dictionary from `beobench.experiment.provider`. The `create_env` function allows us to create the environment as defined in our configuration. The `config` dictionary gives us access to the full experiment configuration (as defined before). These two imports are only available inside an experiment container.
Note
We can use these two imports regardless of the gym framework we are using. This invariance allows us to create agent scripts that work across frameworks.
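For example, to point the same agent script at an Energym environment from the table below, only the `env` section of the configuration needs to change. The sketch below reuses the `name` key from the Sinergym example for illustration; the exact keys accepted under `config` are framework-specific, so check the Beobench documentation for Energym:

```yaml
env:
  # switch the gym framework; agent.py stays unchanged
  gym: energym
  config:
    # energym environment name (see environment table below)
    name: Apartments2Thermal-v0
  wrappers: []
```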
After these Beobench imports, the `agent.py` script above just takes a few random actions in the environment. Feel free to customize the agent script to your requirements.
Alternatively, there are also a number of pre-defined agent scripts available, including a script for using RLlib.
Given the configuration and agent script above, we can run the experiment either via the command line:
```console
beobench run --config config.yaml
```
or in Python:
```python
import beobench

beobench.run(config="config.yaml")
```
Either command will:
- Build an experiment container with Sinergym installed.
- Execute `agent.py` inside that container.
You have just run your first Beobench experiment.
To learn more about using Beobench, look at the advanced usage section in the documentation: https://beobench.readthedocs.io
| Gym | Environment | Type* | Description |
|---|---|---|---|
| BOPTEST | bestest_air | | original, beobench |
| | bestest_hydronic | | original, beobench |
| | bestest_hydronic_heat_pump | | original, beobench |
| | multizone_residential_hydronic | | original, beobench |
| | singlezone_commercial_hydronic | | original, beobench |
| Energym | Apartments2Thermal-v0 | | original, beobench |
| | Apartments2Grid-v0 | | original, beobench |
| | ApartmentsThermal-v0 | | original, beobench |
| | ApartmentsGrid-v0 | | original, beobench |
| | OfficesThermostat-v0 | | original, beobench |
| | MixedUseFanFCU-v0 | | original, beobench |
| | SeminarcenterThermostat-v0 | | original, beobench |
| | SeminarcenterFull-v0 | | original, beobench |
| | SimpleHouseRad-v0 | | original, beobench |
| | SimpleHouseRSla-v0 | | original, beobench |
| | SwissHouseRSlaW2W-v0 | | original, beobench |
| | SwissHouseRSlaA2W-v0 | | original, beobench |
| | SwissHouseRSlaTank-v0 | | original, beobench |
| | SwissHouseRSlaTankDhw-v0 | | original, beobench |
| Sinergym | Eplus-demo-v1 | | original, beobench |
| | Eplus-5Zone-hot-discrete-v1 | | original, beobench |
| | Eplus-5Zone-mixed-discrete-v1 | | original, beobench |
| | Eplus-5Zone-cool-discrete-v1 | | original, beobench |
| | Eplus-5Zone-hot-continuous-v1 | | original, beobench |
| | Eplus-5Zone-mixed-continuous-v1 | | original, beobench |
| | Eplus-5Zone-cool-continuous-v1 | | original, beobench |
| | Eplus-5Zone-hot-discrete-stochastic-v1 | | original, beobench |
| | Eplus-5Zone-mixed-discrete-stochastic-v1 | | original, beobench |
| | Eplus-5Zone-cool-discrete-stochastic-v1 | | original, beobench |
| | Eplus-5Zone-hot-continuous-stochastic-v1 | | original, beobench |
| | Eplus-5Zone-mixed-continuous-stochastic-v1 | | original, beobench |
| | Eplus-5Zone-cool-continuous-stochastic-v1 | | original, beobench |
| | Eplus-datacenter-discrete-v1 | | original, beobench |
| | Eplus-datacenter-continuous-v1 | | original, beobench |
| | Eplus-datacenter-discrete-stochastic-v1 | | original, beobench |
| | Eplus-datacenter-continuous-stochastic-v1 | | original, beobench |
| | Eplus-IWMullion-discrete-v1 | | original, beobench |
| | Eplus-IWMullion-continuous-v1 | | original, beobench |
| | Eplus-IWMullion-discrete-stochastic-v1 | | original, beobench |
| | Eplus-IWMullion-continuous-stochastic-v1 | | original, beobench |
* Types of environments:
Need help using Beobench or want to discuss the toolkit? Reach out via contact-gh (at) arduin.io and we are very happy to help, either via email or in a call.
If you find Beobench helpful in your work, please consider citing the accompanying paper:
```bibtex
@inproceedings{10.1145/3538637.3538866,
  author    = {Findeis, Arduin and Kazhamiaka, Fiodar and Jeen, Scott and Keshav, Srinivasan},
  title     = {Beobench: A Toolkit for Unified Access to Building Simulations for Reinforcement Learning},
  year      = {2022},
  isbn      = {9781450393973},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3538637.3538866},
  doi       = {10.1145/3538637.3538866},
  booktitle = {Proceedings of the Thirteenth ACM International Conference on Future Energy Systems},
  pages     = {374--382},
  numpages  = {9},
  keywords  = {reinforcement learning, building energy optimisation, building simulation, building control},
  location  = {Virtual Event},
  series    = {e-Energy '22}
}
```
MIT license; see the credits and license page in the docs for more detailed information.