Next Release Plan (v0.11) #614
@JuliaRegistrator register

> Registration pull request created: JuliaRegistries/General/61734
>
> After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version. This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the GitHub interface.

@JuliaRegistrator register subdir=src/ReinforcementLearningBase

> Registration pull request created: JuliaRegistries/General/63479

@JuliaRegistrator register subdir=src/ReinforcementLearningEnvironments

> Registration pull request created: JuliaRegistries/General/63531

@JuliaRegistrator register subdir=src/ReinforcementLearningCore

> Registration pull request updated: JuliaRegistries/General/63532

@JuliaRegistrator register subdir=src/ReinforcementLearningBase

> Registration pull request created: JuliaRegistries/General/78892

@JuliaRegistrator register subdir=src/ReinforcementLearningBase

> Registration pull request updated: JuliaRegistries/General/78892

@JuliaRegistrator register subdir=src/ReinforcementLearningCore

> Registration pull request created: JuliaRegistries/General/78898

@JuliaRegistrator register subdir=src/ReinforcementLearningCore

> Registration pull request updated: JuliaRegistries/General/78898

@JuliaRegistrator register subdir=src/ReinforcementLearningEnvironments

> Registration pull request created: JuliaRegistries/General/78919

> Registration pull request created: JuliaRegistries/General/102429
>
> Tip: Did you know you can add release notes too? Just add markdown-formatted text underneath the comment, then re-invoke and the PR will be updated.

@JuliaRegistrator register subdir=src/ReinforcementLearningExperiments

> Registration pull request created: JuliaRegistries/General/102430

Also, note the warning: Version 0.11.0 skips over 0.9.0

> Registration pull request created: JuliaRegistries/General/102431

@JuliaRegistrator register subdir=src/ReinforcementLearningZoo

> Registration pull request created: JuliaRegistries/General/102457

@JuliaRegistrator register subdir=src/ReinforcementLearningExperiments

> Registration pull request updated: JuliaRegistries/General/102431

@JuliaRegistrator register subdir=src/ReinforcementLearningBase

> Registration pull request created: JuliaRegistries/General/103658

@JuliaRegistrator register subdir=src/ReinforcementLearningCore

> Registration pull request created: JuliaRegistries/General/103660

@JuliaRegistrator register subdir=src/ReinforcementLearningEnvironments

> Registration pull request created: JuliaRegistries/General/103665

@JuliaRegistrator register

> Registration pull request created: JuliaRegistries/General/103670

@JuliaRegistrator register subdir=src/ReinforcementLearningFarm

> Registration pull request created: JuliaRegistries/General/103673
So, this is done. Thanks for the great teamwork @HenriDeh!!!
It was all you! I wish I had more time to allocate to RL, but I must focus on something else at the moment.
Goal
Improve the interactions between ReinforcementLearning.jl and other ecosystems in Julia.
Why is it important?
In the early days of this package, the main goal was to reproduce some popular (deep) RL algorithms. It is still important to keep adding newly emerging algorithms, but as an engineer I believe the higher impact comes only when users actually apply those powerful RL algorithms to the problems they care about. In recent years, many important packages across different domains have been developed in Julia, and the whole ecosystem has improved a lot. Although the interfaces defined in this package are loose and flexible, people are still unsure how to use it because concrete examples are lacking. Adding more examples and removing some restrictive assumptions will encourage more people to try this package, and doing so will also improve its quality.
Potential breaking changes
The most important change would be decoupling training-data generation from policy optimization. In many cases the state is assumed by default to be a tensor, which is the main blocking issue when interacting with many other packages. Besides, an async training pipeline will not only improve the performance of existing algorithms on a single node but also provide the foundation for large-scale training in future releases (possibly in v0.12).
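To make the decoupling concrete, here is a minimal sketch (not the package's actual API; all names are illustrative) of separating experience generation from policy optimization with a bounded `Channel`, which is also the natural stepping stone to an async pipeline:

```julia
# Sketch: an actor task that only *produces* experience and a learner task
# that only *consumes* it, communicating through a bounded Channel.
# All names here are illustrative, not the package's actual API.

buffer = Channel{Vector{Float64}}(128)   # bounded queue of transitions

# Actor: interacts with the environment and writes transitions.
actor = @async begin
    for step in 1:1_000
        transition = rand(4)             # stand-in for (s, a, r, s′)
        put!(buffer, transition)
    end
    close(buffer)
end

# Learner: drains the buffer and updates the policy.
learner = @async begin
    n = 0
    for transition in buffer             # iterates until the channel closes
        n += 1                           # stand-in for a gradient step
    end
    n
end

wait(actor)
println(fetch(learner))
```

Because the two sides only share the channel, either can be swapped out (e.g. multiple actor tasks feeding one learner) without touching the other, which is the property the async pipeline needs.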
Key issues to be addressed
The following are some of the existing issues at the top of my mind. Please raise new ones that you wish to see addressed in the next release.
Environments
Still no luck addressing this issue, so I have to remove the OpenSpiel-related part in the next release. (enable OpenSpiel #691)

Refactor Existing Policies
- JuliaRL_BC_CartPole
- JuliaRL_DQN_CartPole (Add JuliaRL_DQN_CartPole #650)
- JuliaRL_PrioritizedDQN_CartPole (add PrioritizedDQN #698)
- JuliaRL_Rainbow_CartPole (add rainbow #724)
- JuliaRL_QRDQN_CartPole (add QRDQN #699)
- JuliaRL_REMDQN_CartPole (add REMDQN #708)
- JuliaRL_IQN_CartPole (add IQN #710)
- JuliaRL_VMPO_CartPole
- JuliaRL_VPG_CartPole (add VPG #733)
- JuliaRL_BasicDQN_MountainCar
- JuliaRL_DQN_MountainCar
- JuliaRL_A2C_CartPole
- JuliaRL_A2CGAE_CartPole
- JuliaRL_PPO_CartPole
- JuliaRL_MAC_CartPole
- JuliaRL_DDPG_Pendulum
- JuliaRL_SAC_Pendulum
- JuliaRL_TD3_Pendulum
- JuliaRL_PPO_Pendulum
- JuliaRL_BasicDQN_SingleRoomUndirected
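For reference, each entry above is a named experiment. If I recall the current `ReinforcementLearningExperiments` interface correctly (this may change with the refactor), one is looked up and run by name:

```julia
# Assumes the ReinforcementLearningExperiments package is installed;
# E`...` looks an experiment up by its registered name.
using ReinforcementLearningExperiments

ex = E`JuliaRL_BasicDQN_CartPole`
run(ex)
```

Keeping every refactored policy runnable through this single entry point is what lets the list above double as a regression-test matrix for the release.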
Add New Policies
- Question: Can ReinforcementLearning.jl handle Partially Observed Markov Processes (POMDPs)? #608
- Rename some functions to help beginners navigate source code #326
- Improve the code structure and docs on general utils when defining a network (Unify common network architectures and patterns #139)
- Add alternatives to Flux.jl (Experimental support of Torch.jl #136)
- Change clip_by_global_norm! into an Optimizer #193
- Derivative-Free Reinforcement Learning #206
- Reinforcement Learning and Combinatorial Optimization #250
- Model-based reinforcement learning #262 and WIP: PETS algorithm from facebook/mbrl #531
- Revisit Support multiple discrete action space #347
- Gain in VPGPolicy does not account for terminal states? #578
- Integrate CFR-related algorithms at https://github.com/WhiffleFish/CounterfactualRegret.jl ?
- Combine transformers and RL #392; borrow some ideas from https://github.com/facebookresearch/salina
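On the clip_by_global_norm! item above: today it mutates gradients in place, while an optimizer-style version would take the full set of gradients and rescale them so their *global* norm (across all arrays, not per array) stays bounded. A minimal sketch of that behavior, with hypothetical names that are not the package's actual API:

```julia
# Hypothetical optimizer-style global-norm clipping (illustrative only).
struct GlobalNormClip
    max_norm::Float64
end

function apply_clip(o::GlobalNormClip, grads)
    # Global norm is taken over *all* gradient arrays together.
    gnorm = sqrt(sum(sum(abs2, g) for g in grads))
    scale = gnorm > o.max_norm ? o.max_norm / gnorm : 1.0
    return [g .* scale for g in grads]
end

grads = [[3.0, 0.0], [0.0, 4.0]]          # global norm = 5
clipped = apply_clip(GlobalNormClip(1.0), grads)
```

The design question in #193 is exactly that this needs all gradients at once, whereas per-parameter optimizer interfaces see one array at a time.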
Training pipeline
Documentation
- TDLearner #580
- MultiThreadEnv in detail.
- MultiThreadEnv with custom (continuous) action spaces fails #596

Utils
Timeline
I'm not sure I can fix them all, but at least I'll take a deep look into them and then tag a new release at the end of this quarter (around the end of June 2022).