Package structure and architectural roadmap #419
-
Hi @pkienscherf ,
Looking forward to your upcoming work!
This is by design; see the earlier discussions. I bet you've seen the project structure in the README:

```
+------------------------------------------------------------------------------+
|                                                                              |
|  ReinforcementLearning.jl                                                    |
|                                                                              |
|  +------------------------------+                                            |
|  | ReinforcementLearningBase.jl |                                            |
|  +----|-------------------------+                                            |
|       |                                                                      |
|       |     +--------------------------------------+                         |
|       +---->+ ReinforcementLearningEnvironments.jl |                         |
|       |     +--------------------------------------+                         |
|       |                                                                      |
|       |     +------------------------------+                                 |
|       +---->+ ReinforcementLearningCore.jl |                                 |
|             +----|-------------------------+                                 |
|                  |                                                           |
|                  |     +-----------------------------+                       |
|                  +---->+ ReinforcementLearningZoo.jl |                       |
|                        +----|------------------------+                       |
|                             |                                                |
|                             |     +-------------------------------------+    |
|                             +---->+ DistributedReinforcementLearning.jl |    |
|                                   +-------------------------------------+    |
|                                                                              |
+------|-----------------------------------------------------------------------+
       |
       |     +-------------------------------------+
       +---->+ ReinforcementLearningExperiments.jl |
       |     +-------------------------------------+
       |
       |     +----------------------------------------+
       +---->+ ReinforcementLearningAnIntroduction.jl |
             +----------------------------------------+
```

Let me explain the goal of each sub-package first, since based on your comment below it seems the docs don't make this point clear.
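To make the layering concrete, here is a minimal sketch of the pattern the diagram implies, using hypothetical toy modules (not the real packages): a Base-style layer declares only abstract types and interface functions, while a Core-style layer builds generic machinery purely against that interface.

```julia
# Hypothetical toy modules illustrating the Base/Core split.

module ToyRLBase
    export AbstractEnv, AbstractPolicy, act!, observe, is_terminated

    abstract type AbstractEnv end
    abstract type AbstractPolicy end

    # Interface stubs: declared here, implemented by concrete types downstream.
    function act! end
    function observe end
    function is_terminated end
end

module ToyRLCore
    using ..ToyRLBase

    export run_episode!

    # Generic episode runner written purely against the ToyRLBase interface;
    # it never needs to know about any concrete environment or policy.
    function run_episode!(policy::AbstractPolicy, env::AbstractEnv; max_steps = 100)
        steps = 0
        while !is_terminated(env) && steps < max_steps
            act!(env, policy(observe(env)))
            steps += 1
        end
        return steps
    end
end
```

Any concrete environment that subtypes `AbstractEnv` and implements the three interface functions can then be driven by `run_episode!` without touching the Core layer, which is the point of keeping the interface in its own package.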
So the answer to your question should now be clear: different packages target different users.
Hope this makes things clearer.
Hmm, I think they are not interchangeable? For interfaces defined in
TBH I do not have a very clear roadmap in my head. Currently we have several OSPP students working on offline RL and MARL, and I hope we can build a more feature-rich ecosystem. On one hand, I wish to polish this package further so that applying its RL algorithms to users' customized problems becomes a more fluent experience. I must admit there is still a large gap compared to many RL packages in the Python world like StableBaselines/RLlib/acme, and I receive feature requests from people every now and then. On the other hand, I plan to bring this package to a new stage with a distributed actor system (some ongoing work in Oolong.jl), hoping that it will see more practical usage.
Of course not. I'd really love to have more people on board. 🤗
-
Hi all,
I recently started re-implementing some RL projects I've been working on within the RL.jl framework (due to its great auxiliary functions, friendly enforcement of APIs, and logging that really helps and saves work!). As I started digging deeper into the package itself, particularly with a view to implementing some MARL applications in the future, some questions about the package architecture occurred to me:
Why is RL.jl currently designed as a wrapper around sub-packages with their own dependencies and versioning? Would it not be easier to use "regular" sub-modules instead? Is the structure mostly legacy, or by design? I created a fork (pkienscherf/ReinforcementLearning.jl) with an alternative sub-module structure that made it easier for me to work with the package in development mode, so I'd be interested to hear what you think about such an approach (obviously it was a quick draft that does not include some of the remaining restructuring, like moving the tests).
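For reference, the sub-module alternative could look roughly like this. This is a hypothetical sketch (toy names, not the fork's actual code): all layers live as nested modules inside a single versioned package, so sibling layers reference each other with relative `using` paths and no separate `Project.toml` or registry release per layer.

```julia
# Hypothetical single-package layout with nested sub-modules.
module MonoRL
    module RLBase
        export AbstractEnv
        abstract type AbstractEnv end
    end

    module RLCore
        # Sibling modules resolve within the one package via relative paths;
        # there is no per-layer dependency or versioning to manage.
        using ..RLBase
        export describe
        describe(::Type{<:AbstractEnv}) = "an environment type"
    end
end
```

The trade-off, as the discussion above suggests, is that separate sub-packages let users depend on (and version) just the layer they need, at the cost of a heavier development workflow.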
My second thought concerns the difference between Core and Base. From code inspection you can obviously find some general patterns, but I find the two terms (semantically) quite interchangeable, especially for RL.jl newbies who might want to contribute to the package (like myself 😊).
Both of these points lead to a more general question: is there a mid- to long-term roadmap for the package and its architecture to orient contributors?
I hope I was not inadvertently hypercritical; I really like RL.jl and I'd love to help move it forward once I understand it better (and have sufficient time on my hands 😊).