This repository has been archived by the owner on May 6, 2021. It is now read-only.

Releases: JuliaReinforcementLearning/ReinforcementLearningEnvironments.jl

v0.2.1

20 Feb 08:04
aaa04a0

ReinforcementLearningEnvironments v0.2.1

Diff since v0.2.0

Closed issues:

  • Add OpenSpiel (#18)
  • Should seed! be part of the interface? (#24)
  • Expectations on observation type (#25)

Merged pull requests:

  • add OpenSpiel (#33) (@findmyway) (see the sketch after this list)
  • CompatHelper: add new compat entry for "StatsBase" at version "0.32" (#34) (@github-actions[bot])
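
For context, PR #33 makes OpenSpiel games usable through this package. The sketch below is a rough illustration only; the OpenSpielEnv constructor, the game name, and the idea that the wrapper becomes available once OpenSpiel.jl is loaded are assumptions, not confirmed details of this release.

```julia
# A rough sketch, assuming an OpenSpielEnv constructor that wraps a
# registered OpenSpiel game by name. OpenSpiel.jl must be installed.
using ReinforcementLearningEnvironments
using OpenSpiel                       # assumed: loading OpenSpiel.jl activates the wrapper

env = OpenSpielEnv("tic_tac_toe")     # assumed constructor and game name
reset!(env)
obs = observe(env)                    # observations follow the RLBase-style interface of this era
```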

v0.2.0

17 Feb 16:04
4f16777

ReinforcementLearningEnvironments v0.2.0

Diff since v0.1.3

Closed issues:

  • Make Observation mutable? (#20)

Merged pull requests:

  • use RLBase instead (#29) (@findmyway)
  • Install TagBot as a GitHub Action (#30) (@JuliaTagBot)
  • CompatHelper: add new compat entry for "ReinforcementLearningBase" at version "0.6" (#31) (@github-actions[bot])

v0.1.3

02 Nov 09:48

Add more keyword parameters to AtariEnv.
Add upper version bounds for dependencies.
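
A rough sketch of what constructing an AtariEnv with extra keyword parameters might look like after this release. The game name and the specific keyword names below (frame_skip, repeat_action_probability, seed) are assumptions based on common ALE options, not a confirmed list of the parameters added here; ArcadeLearningEnvironment.jl may also need to be installed.

```julia
# A rough sketch; keyword names are assumptions about this release.
using ReinforcementLearningEnvironments

env = AtariEnv("pong";
    frame_skip = 4,                    # assumed keyword: repeat each action for 4 frames
    repeat_action_probability = 0.25,  # assumed keyword: sticky actions
    seed = 123)                        # assumed keyword: seed the underlying emulator
```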

v0.1.2

28 Sep 15:22
033cf4c

Allow the cart pole environment to run until max_step is reached.
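
A rough sketch of an episode that can now end by hitting the step limit rather than only by the pole falling. The interaction calls shown (reset!, interact!, observe, and the terminal field on the observation) are assumptions about the package's v0.1.x interface.

```julia
# A rough sketch, assuming the v0.1.x interaction API.
using ReinforcementLearningEnvironments

function run_episode(env)
    reset!(env)
    steps = 0
    obs = observe(env)
    # `obs.terminal` is an assumed field name; it may be spelled differently in this release.
    while !obs.terminal
        interact!(env, rand(1:2))   # assumed stepping call; actions 1 and 2 push the cart left or right
        obs = observe(env)
        steps += 1
    end
    return steps
end

env = CartPoleEnv()
@show run_episode(env)   # with this release the count can reach max_step
```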

v0.1.1

28 Aug 12:44

Unify the return value of observe(env) across environments.
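
A rough sketch of the unified return value. The field names reward, terminal, and state are assumptions about the observation object at this point in the package's history.

```julia
# A rough sketch; field names are assumptions about this release.
using ReinforcementLearningEnvironments

env = CartPoleEnv()
reset!(env)
obs = observe(env)                         # every built-in environment returns the same kind of object
@show obs.reward obs.terminal obs.state    # assumed field names
```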

v0.1.0

07 Aug 13:55
82a21f3

Register this package so that ReinforcementLearning.jl can drop its many other dependencies.