This repository has been archived by the owner on May 6, 2021. It is now read-only.
Releases · JuliaReinforcementLearning/ReinforcementLearningCore.jl
ReinforcementLearningCore v0.7.4
Merged pull requests:
- Add length and fetch! support for CircularArraySLARTTrajectory (#225) (@ilancoulon)
- Use findmax in Compat (#227) (@findmyway)
ReinforcementLearningCore v0.7.3
Merged pull requests:
ReinforcementLearningCore v0.7.2
Merged pull requests:
- Minor fix with RandomStartPolicy (#212) (@findmyway)
- Refactor StopAfterNoImprovement (#213) (@norci)
- Add a keyword argument stage to DoEveryNEpisode (#214) (@norci)
- Add cache to speed up sampling (#216) (@findmyway)
- Simplify trajectories (#217) (@findmyway)
- Support general trajectories in BatchSampler (#218) (@findmyway)
ReinforcementLearningCore v0.7.1
Merged pull requests:
- Fix doc string to pass doc building (#211) (@findmyway)
ReinforcementLearningCore v0.7.0
Merged pull requests:
- Support rlintro (#200) (@findmyway)
- CompatHelper: bump compat for "FillArrays" to "0.11" (#203) (@github-actions[bot])
- CompatHelper: bump compat for "Adapt" to "3" (#204) (@github-actions[bot])
- Bugfix in StopAfterNoImprovement (#206) (@norci)
- CompatHelper: bump compat for "Functors" to "0.2" (#207) (@github-actions[bot])
- Bugfix with TabularRandomPolicy (#208) (@findmyway)
- Bugfix with TabularRandomPolicy (#209) (@findmyway)
ReinforcementLearningCore v0.6.3
Merged pull requests:
- A more efficient version of QBasedPolicy (#199) (@findmyway)
ReinforcementLearningCore v0.6.2
Merged pull requests:
- Add default checks (#191) (@findmyway)
- CompatHelper: bump compat for "Zygote" to "0.6" (#192) (@github-actions[bot])
- Add documentation in agent.jl (#193) (@norci)
- Decouple the behavior of PreActStage (#195) (@findmyway)
- Add MultiAgent and NamedPolicy (#196) (@findmyway)
ReinforcementLearningCore v0.6.1
Merged pull requests:
- Add StopAfterNoImprovement (#177) (@norci)
- CompatHelper: bump compat for "JLD" to "0.11" (#184) (@github-actions[bot])
- Update GitHub workflows (#185) (@norci)
- Update format_pr (#186) (@norci)
- Update RLBase to the latest version (#188) (@findmyway)
- Add RandomStartPolicy & restrict QBasedPolicy to return index of action (#189) (@findmyway)
ReinforcementLearningCore v0.6.0
Closed issues:
- Is it possible to have multiple progress meters for ComposedStopCondition? (#160)
Merged pull requests:
- Add GitHub Actions CI (#168) (@Sid-Bhatia-0)
- Reorganize code structure to support distributed reinforcement learning (#169) (@findmyway)
- Remove duplicate code in run.jl (#170) (@norci)
- CompatHelper: add new compat entry for "Functors" at version "0.1" (#180) (@github-actions[bot])
- More enhancements (#181) (@findmyway)
- Bugfix with prioritized trajectory (#183) (@findmyway)
ReinforcementLearningCore v0.5.1
Merged pull requests:
- CompatHelper: bump compat for "FillArrays" to "0.10" (#147) (@github-actions[bot])
- Revert auto format related changes (#148) (@findmyway)
- Minor edits to CircularArrayBuffer (#149) (@Sid-Bhatia-0)
- CompatHelper: bump compat for "CUDA" to "2.1" (#151) (@github-actions[bot])
- Fix minor typos (#152) (@Sid-Bhatia-0)
- Updates related to DistributedRL (#153) (@findmyway)
- MassInstallAction: Install the CompatHelper workflow on this repository (#164) (@findmyway)