
Classic environments in separate package? #123

Closed
darsnack opened this issue Jun 22, 2020 · 4 comments

Comments

@darsnack

Opening this up because of JuliaReinforcementLearning/CommonRLInterface.jl#18 (comment). This package has some classic environments implemented, as well as a lot of wrapped environments. In that sense, this package does function as a "glue" package (a one-stop shop as mentioned in the README).

As discussed in the linked comment, there are some features that could be added to the classic environments here. I was considering doing that as a PR, but then I thought it might make more sense to split them into their own package. I could do that in FluxML/Gym.jl (I think we can also take ownership of this repo if preferred), since it already supports the rendering logic, which is the most involved part. Then this package could just be a "glue"/wrapper package.

What are folks' thoughts on this approach?

@jbrea

jbrea commented Jun 22, 2020

In fact, we had these in a separate package some time ago... I think since ReinforcementLearningEnvironments.jl is pretty lightweight without the optionally loaded packages, @findmyway decided to integrate the classic control environments.
I am fine with both options, separating them out or keeping them here...

@findmyway

findmyway commented Jun 22, 2020

That's OK with me. My only concern is that the repo does not seem to be actively maintained.

Another issue is that both packages have their own AbstractSpace implementations. I'm not sure whether they are the same, but we should keep only one. Or we can wait for the next release of CommonRLInterface to reach an agreement on this aspect.

@darsnack

Yeah, if I were to go that route, I would throw out the abstractions in that package and use one of the RL base packages from this org, JuliaPOMDPs, or CommonRLInterface.jl.

@findmyway findmyway transferred this issue from JuliaReinforcementLearning/ReinforcementLearningEnvironments.jl Nov 22, 2020
@findmyway

In the latest version of ReinforcementLearningEnvironments.jl, only a few very simple but representative environments are included, so I think we can just keep them as they are now.

findmyway added a commit to findmyway/ReinforcementLearning.jl that referenced this issue May 3, 2021
findmyway pushed a commit that referenced this issue May 3, 2021
3 participants