Releases: Stable-Baselines-Team/stable-baselines3-contrib
SB3-Contrib v2.3.0: New default hyperparameters for QR-DQN
Breaking Changes:
- Upgraded to Stable-Baselines3 >= 2.3.0
- The default `learning_starts` parameter of `QRDQN` has been changed to be consistent with the other off-policy algorithms
```python
# SB3 < 2.3.0 default hyperparameters, 50_000 corresponded to the Atari default hyperparameters
# model = QRDQN("MlpPolicy", env, learning_starts=50_000)
# SB3 >= 2.3.0:
model = QRDQN("MlpPolicy", env, learning_starts=100)
```
New Features:
- Added `rollout_buffer_class` and `rollout_buffer_kwargs` arguments to MaskablePPO (see the sketch below)
- Log success rate `rollout/success_rate` when available for on-policy algorithms
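A minimal sketch of the new buffer arguments, assuming `MaskableRolloutBuffer` is importable from `sb3_contrib.common.maskable.buffers` and using the toy `InvalidActionEnvDiscrete` environment shipped with SB3-Contrib:

```python
from sb3_contrib import MaskablePPO
from sb3_contrib.common.envs import InvalidActionEnvDiscrete
from sb3_contrib.common.maskable.buffers import MaskableRolloutBuffer  # assumed import path

env = InvalidActionEnvDiscrete(dim=80, n_invalid_actions=60)

# Explicitly pass the rollout buffer class and extra constructor kwargs
model = MaskablePPO(
    "MlpPolicy",
    env,
    rollout_buffer_class=MaskableRolloutBuffer,
    rollout_buffer_kwargs={},  # forwarded to the buffer constructor
    verbose=1,
)
model.learn(5_000)
```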
Others:
- Fixed `train_freq` type annotation for TQC and QRDQN (@Armandpl)
- Fixed `sb3_contrib/common/maskable/*.py` type annotations
- Fixed `sb3_contrib/ppo_mask/ppo_mask.py` type annotations
- Fixed `sb3_contrib/common/vec_env/async_eval.py` type annotations
Documentation:
- Add some additional notes about `MaskablePPO` (evaluation and multi-process) (@icheered)
Full Changelog: v2.2.1...v2.3.0
SB3-Contrib v2.2.1
SB3 Contrib (more algorithms): https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
Stable-Baselines Jax (SBX): https://github.com/araffin/sbx
Breaking Changes:
- Upgraded to Stable-Baselines3 >= 2.2.1
- Switched to `ruff` for sorting imports (isort is no longer needed); black and ruff now require a minimum version
- Dropped `x is False` in favor of `not x`, which means that callbacks that wrongly returned None (instead of a boolean) will cause the training to stop (@iwishiwasaneagle)
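Because returning `None` from a callback now stops training, custom callbacks should return an explicit boolean from `_on_step()`. A minimal sketch (class name and stopping logic are illustrative):

```python
from stable_baselines3.common.callbacks import BaseCallback


class StopAfterStepsCallback(BaseCallback):
    """Illustrative callback that stops training after a fixed number of steps."""

    def __init__(self, max_steps: int = 10_000, verbose: int = 0):
        super().__init__(verbose)
        self.max_steps = max_steps

    def _on_step(self) -> bool:
        # Must return a bool: True to continue training, False to stop.
        # A callback that (implicitly) returns None will now stop training.
        return self.num_timesteps < self.max_steps
```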
New Features:
- Added `set_options` for `AsyncEval`
- Added `rollout_buffer_class` and `rollout_buffer_kwargs` arguments to TRPO
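A sketch of the new TRPO arguments, here simply passing SB3's default `RolloutBuffer` explicitly (any compatible buffer subclass would work):

```python
from stable_baselines3.common.buffers import RolloutBuffer
from sb3_contrib import TRPO

model = TRPO(
    "MlpPolicy",
    "Pendulum-v1",
    rollout_buffer_class=RolloutBuffer,  # or a custom RolloutBuffer subclass
    rollout_buffer_kwargs={},  # extra kwargs forwarded to the buffer constructor
    verbose=1,
)
model.learn(5_000)
```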
Others:
- Fixed `ActorCriticPolicy.extract_features()` signature by adding an optional `features_extractor` argument
- Updated dependencies (accept newer Shimmy/Sphinx versions and remove `sphinx_autodoc_typehints`)
SB3-Contrib v2.1.0
SB3 Contrib (more algorithms): https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
Stable-Baselines Jax (SBX): https://github.com/araffin/sbx
Breaking Changes:
- Removed Python 3.7 support
- SB3 now requires PyTorch >= 1.13
- Upgraded to Stable-Baselines3 >= 2.1.0
New Features:
- Added Python 3.11 support
Bug Fixes:
- Fixed MaskablePPO ignoring the `stats_window_size` argument
Full Changelog: v2.0.0...v2.1.0
SB3-Contrib v2.0.0: Gymnasium Support
Warning
Stable-Baselines3 (SB3) v2.0 will be the last version supporting Python 3.7 (end of life in June 2023).
We highly recommend you upgrade to Python >= 3.8.
SB3 Contrib (more algorithms): https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
Stable-Baselines Jax (SBX): https://github.com/araffin/sbx
To upgrade:
pip install stable_baselines3 sb3_contrib rl_zoo3 --upgrade
or simply (rl zoo depends on SB3 and SB3 contrib):
pip install rl_zoo3 --upgrade
Breaking Changes
- Switched to Gymnasium as the primary backend; Gym 0.21 and 0.26 are still supported via the `shimmy` package (@carlosluis, @arjun-kg, @tlpss) (see the example below)
- Upgraded to Stable-Baselines3 >= 2.0.0
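In practice this means Gymnasium environments can be passed directly; a minimal sketch with TQC:

```python
import gymnasium as gym
from sb3_contrib import TQC

# Gymnasium is now the primary backend
env = gym.make("Pendulum-v1")
model = TQC("MlpPolicy", env, verbose=1)
model.learn(5_000)
```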
Bug fixes
- Fixed QRDQN update interval for multi envs
Others
- Fixed `sb3_contrib/tqc/*.py` type hints
- Fixed `sb3_contrib/trpo/*.py` type hints
- Fixed `sb3_contrib/common/envs/invalid_actions_env.py` type hints
Full Changelog: v1.8.0...v2.0.0
SB3-Contrib v1.8.0
Warning
Stable-Baselines3 (SB3) v1.8.0 will be the last one to use Gym as a backend.
Starting with v2.0.0, Gymnasium will be the default backend (though SB3 will have compatibility layers for Gym envs).
You can find a migration guide here.
If you want to try the SB3 v2.0 alpha version, you can take a look at PR #1327.
RL Zoo3 (training framework): https://github.com/DLR-RM/rl-baselines3-zoo
To upgrade:
pip install stable_baselines3 sb3_contrib rl_zoo3 --upgrade
or simply (rl zoo depends on SB3 and SB3 contrib):
pip install rl_zoo3 --upgrade
Breaking Changes:
- Removed shared layers in `mlp_extractor` (@AlexPasqua)
- Upgraded to Stable-Baselines3 >= 1.8.0
New Features:
- Added `stats_window_size` argument to control smoothing in rollout logging (@jonasreiher)
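A short sketch of the new argument (works for the SB3-Contrib algorithms, TRPO shown here):

```python
from sb3_contrib import TRPO

# Average rollout logs (ep_rew_mean, ep_len_mean, ...) over the last
# 20 episodes instead of the default 100
model = TRPO("MlpPolicy", "Pendulum-v1", stats_window_size=20, verbose=1)
model.learn(5_000)
```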
Bug Fixes:
Deprecations:
Others:
- Moved to pyproject.toml
- Added github issue forms
- Fixed Atari Roms download in CI
- Fixed `sb3_contrib/qrdqn/*.py` type hints
- Switched from `flake8` to `ruff`
Documentation:
- Added warning about potential crashes caused by `check_env` in the `MaskablePPO` docs (@AlexPasqua)
SB3-Contrib v1.7.0: Bug fixes for PPO LSTM and quality of life improvements
Warning
Shared layers in MLP policy (`mlp_extractor`) are now deprecated for PPO, A2C and TRPO.
This feature will be removed in SB3 v1.8.0, and `net_arch=[64, 64]` will then create separate networks with the same architecture, to be consistent with the off-policy algorithms.
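For reference, the new behavior can also be written out explicitly with the dict form of `net_arch` (available from SB3 1.8 onwards); a sketch with TRPO:

```python
from sb3_contrib import TRPO

# With shared layers removed, net_arch=[64, 64] becomes equivalent to
# separate actor and critic networks of the same size:
policy_kwargs = dict(net_arch=dict(pi=[64, 64], vf=[64, 64]))
model = TRPO("MlpPolicy", "Pendulum-v1", policy_kwargs=policy_kwargs, verbose=1)
```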
Note
TRPO models saved with SB3 < 1.7.0 will show a warning about
missing keys in the state dict when loaded with SB3 >= 1.7.0.
To suppress the warning, simply save the model again.
You can find more info in issue #1233
Breaking Changes:
- Removed deprecated `create_eval_env`, `eval_env`, `eval_log_path`, `n_eval_episodes` and `eval_freq` parameters, please use an `EvalCallback` instead (see the sketch below)
- Removed deprecated `sde_net_arch` parameter
- Upgraded to Stable-Baselines3 >= 1.7.0
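A minimal sketch of the `EvalCallback` replacement for the removed parameters (env id and frequencies are illustrative):

```python
import gymnasium as gym  # use `import gym` with SB3 < 2.0
from stable_baselines3.common.callbacks import EvalCallback
from sb3_contrib import TRPO

# EvalCallback replaces the removed eval_env / eval_freq / n_eval_episodes parameters
eval_callback = EvalCallback(gym.make("Pendulum-v1"), eval_freq=1_000, n_eval_episodes=5)
model = TRPO("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(20_000, callback=eval_callback)
```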
New Features:
- Introduced mypy type checking
- Added support for Python 3.10
- Added `with_bias` parameter to `ARSPolicy` (see the sketch after this list)
- Added option to have non-shared features extractor between actor and critic in on-policy algorithms (@AlexPasqua)
- Features extractors now properly support unnormalized image-like observations (3D tensor) when passing `normalize_images=False`
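A sketch of two of the new policy options mentioned above (parameter names as listed, everything else illustrative):

```python
from sb3_contrib import ARS, TRPO

# ARSPolicy: linear policy without bias terms via the new with_bias parameter
ars_model = ARS("LinearPolicy", "Pendulum-v1", policy_kwargs=dict(with_bias=False))

# On-policy algorithms: separate features extractors for actor and critic
trpo_model = TRPO("MlpPolicy", "Pendulum-v1", policy_kwargs=dict(share_features_extractor=False))
```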
Bug Fixes:
- Fixed a bug in `RecurrentPPO` where the LSTM states were incorrectly reshaped for `n_lstm_layers > 1` (thanks @kolbytn)
- Fixed `RuntimeError: rnn: hx is not contiguous` while predicting terminal values for `RecurrentPPO` when `n_lstm_layers > 1`
Deprecations:
- You should now explicitly pass a `features_extractor` parameter when calling `extract_features()`
- Deprecated shared layers in `MlpExtractor` (@AlexPasqua)
Others:
- Fixed flake8 config
- Fixed `sb3_contrib/common/utils.py` type hint
- Fixed `sb3_contrib/common/recurrent/type_aliases.py` type hint
- Fixed `sb3_contrib/ars/policies.py` type hint
- Exposed modules in `__init__.py` with the `__all__` attribute (@ZikangXiong)
- Removed ignores on Flake8 F401 (@ZikangXiong)
- Upgraded GitHub CI/setup-python to v4 and checkout to v3
- Set tensors construction directly on the device
- Standardized the use of `from gym import spaces`
SB3-Contrib v1.6.2: Progress bar
Breaking Changes:
- Upgraded to Stable-Baselines3 >= 1.6.2
New Features:
- Added `progress_bar` argument in the `learn()` method, displayed using TQDM and rich packages
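Usage is a single flag on `learn()` (TQC shown here as an example):

```python
from sb3_contrib import TQC

model = TQC("MlpPolicy", "Pendulum-v1")
# Displays a progress bar during training (requires the tqdm and rich packages)
model.learn(total_timesteps=10_000, progress_bar=True)
```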
Deprecations:
- Deprecated parameters `eval_env`, `eval_freq` and `create_eval_env`
Others:
- Fixed the return type of `.load()` methods so that they now use `TypeVar`
SB3-Contrib v1.6.1: Bug fix release
Breaking Changes:
- Fixed the issue that `predict` does not always return the action as an `np.ndarray` (@qgallouedec)
- Upgraded to Stable-Baselines3 >= 1.6.1
Bug Fixes:
- Fixed the issue of wrongly passing policy arguments when using CnnLstmPolicy or MultiInputLstmPolicy with `RecurrentPPO` (@mlodel)
- Fixed a division by zero error when computing FPS when a small amount of time has elapsed, on operating systems with low-precision timers
- Fixed calling child callbacks in MaskableEvalCallback (@CppMaster)
- Fixed missing verbose parameter passing in the `MaskableEvalCallback` constructor (@BurakDmb)
- Fixed the issue that when updating the target network in QRDQN and TQC, the `running_mean` and `running_var` properties of batch norm layers were not updated (@honglu2875)
Others:
- Changed the default buffer device from `"cpu"` to `"auto"`
SB3-Contrib v1.6.0: RecurrentPPO (aka PPO LSTM) and better defaults for learning from pixels with off-policy algorithms
Breaking changes:
- Upgraded to Stable-Baselines3 >= 1.6.0
- Changed the way policy "aliases" are handled ("MlpPolicy", "CnnPolicy", ...), removing the former `register_policy` helper and `policy_base` parameter, and using `policy_aliases` static attributes instead (@Gregwar)
- Renamed `rollout/exploration rate` key to `rollout/exploration_rate` for QRDQN (to be consistent with SB3 DQN)
- Upgraded to Python 3.7+ syntax using `pyupgrade`
- SB3 now requires PyTorch >= 1.11
- Changed the default network architecture when using `CnnPolicy` or `MultiInputPolicy` with TQC: `share_features_extractor` is now set to False by default and `net_arch=[256, 256]` is used (instead of the previous `net_arch=[]`)
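If the previous behavior is needed, it can presumably be restored via `policy_kwargs`; a sketch (the environment id is just a placeholder for an image-based continuous-control task):

```python
from sb3_contrib import TQC

# Restore the pre-1.6.0 defaults: shared features extractor and no extra MLP layers
policy_kwargs = dict(share_features_extractor=True, net_arch=[])
model = TQC("CnnPolicy", "CarRacing-v2", policy_kwargs=policy_kwargs, buffer_size=100_000)
```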
New Features
- Added `RecurrentPPO` (aka PPO LSTM)
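A minimal usage sketch:

```python
from sb3_contrib import RecurrentPPO

# PPO with an LSTM policy; "CnnLstmPolicy" and "MultiInputLstmPolicy" are also available
model = RecurrentPPO("MlpLstmPolicy", "CartPole-v1", verbose=1)
model.learn(5_000)
```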
Bug Fixes:
- Fixed a bug in `RecurrentPPO` when calculating the masked loss functions (@rnederstigt)
- Fixed a bug in `TRPO` where the KL divergence was not implemented for `MultiDiscrete` spaces