
Conversation


@dependabot dependabot bot commented on behalf of github Sep 18, 2021

Bumps pytorch-lightning from 1.4.3 to 1.4.7.

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.4.7] - 2021-09-14

  • Fixed logging of nan parameters (#9364)
  • Fixed replace_sampler missing the batch size under specific conditions (#9367); see the sketch after this list
  • Pass init args to ShardedDataParallel (#9483)
  • Fixed collision of user argument when using ShardedDDP (#9512)
  • Fixed DeepSpeed crash for RNNs (#9489)
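
The replace_sampler fix above (#9367) concerns how Lightning swaps a DistributedSampler into user DataLoaders under DDP while keeping the original batch size. A minimal sketch of that setup, assuming a 2-GPU machine; the dataset and values below are placeholders, not taken from the release notes:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

# a throwaway dataset purely for illustration
dataset = TensorDataset(torch.randn(64, 3), torch.randn(64, 1))
loader = DataLoader(dataset, batch_size=8, shuffle=True)

# with replace_sampler_ddp=True (the default), Lightning replaces the sampler
# with a DistributedSampler; the fix is about keeping batch_size=8 intact
trainer = pl.Trainer(accelerator="ddp", gpus=2, replace_sampler_ddp=True, max_epochs=1)
# trainer.fit(model, loader) would then shard the 64 samples across both processes
```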

Contributors

@​asanakoy @​awaelchli @​borisdayma @​carmocca @​guotuofeng @​justusschock @​kaushikb11 @​rohitgr7 @​SeanNaren

If we missed anyone because their commit email doesn't match their GitHub account, let us know :]

Standard weekly patch release

[1.4.6] - 2021-09-10

  • Fixed an issue with export to ONNX format when a model has multiple inputs (#8800); see the sketch after this list
  • Removed deprecation warnings being called for on_{task}_dataloader (#9279)
  • Fixed save/load/resume from checkpoint for DeepSpeed Plugin (#8397, #8644, #8627)
  • Fixed EarlyStopping running on train epoch end when check_val_every_n_epoch>1 is set (#9156)
  • Fixed an issue with logger outputs not being finalized correctly after prediction runs (#8333)
  • Fixed the Apex and DeepSpeed plugin closure running after the on_before_optimizer_step hook (#9288)
  • Fixed the Native AMP plugin closure not running with manual optimization (#9288)
  • Fixed a bug where data-loading functions were not getting the correct running stage passed (#8858)
  • Fixed intra-epoch evaluation outputs staying in memory when the respective *_epoch_end hook wasn't overridden (#9261)
  • Fixed error handling in DDP process reconciliation when _sync_dir was not initialized (#9267)
  • Fixed PyTorch Profiler not enabled for manual optimization (#9316)
  • Fixed inspection of other args when a container is specified in save_hyperparameters (#9125)
  • Fixed signature of Timer.on_train_epoch_end and StochasticWeightAveraging.on_train_epoch_end to prevent unwanted deprecation warnings (#9347)
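
The ONNX entry above (#8800) is about LightningModule.to_onnx when forward() takes more than one input. A minimal sketch under that assumption; TwoInputModel and the file name are made-up examples, not something from the release notes:

```python
import torch
import pytorch_lightning as pl


class TwoInputModel(pl.LightningModule):
    """Toy module whose forward() takes two tensors."""

    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(8, 1)

    def forward(self, a, b):
        return self.lin(a) + self.lin(b)


model = TwoInputModel()
# input_sample is a tuple with one entry per forward() argument
model.to_onnx("two_input.onnx", input_sample=(torch.randn(1, 8), torch.randn(1, 8)))
```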

Contributors

@​ananthsub @​awaelchli @​Borda @​four4fish @​justusschock @​kaushikb11 @​s-rog @​SeanNaren @​tangbinh @​tchaton @​xerus

If we missed anyone because their commit email doesn't match their GitHub account, let us know :]

Standard weekly patch release

[1.4.5] - 2021-08-31

  • Fixed reduction using self.log(sync_dist=True, reduce_fx={mean,max}) (#9142); see the sketch after this list
  • Fixed not setting a default value for max_epochs if max_time was specified on the Trainer constructor (#9072)
  • Fixed the CometLogger so it no longer modifies the metrics in place; it now creates a copy of the metrics before performing any operations (#9150)
  • Fixed DDP "CUDA error: initialization error" due to a copy instead of deepcopy on ResultCollection (#9239)
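
The first bullet (#9142) concerns how values logged with self.log are reduced across distributed processes. A minimal sketch of that call pattern; the module and metric name below are made-up examples:

```python
import torch
import pytorch_lightning as pl


class LoggingModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        x = batch[0]  # assumes the DataLoader yields (x,) tuples
        loss = self.layer(x).pow(2).mean()
        # sync_dist gathers the value across processes before reducing;
        # reduce_fx picks the reduction ("mean" is the default, "max" shown here)
        self.log("train_loss_max", loss, sync_dist=True, reduce_fx="max")
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)
```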

Contributors

@​ananthsub @​bamblebam @​carmocca @​daniellepintz @​ethanwharris @​kaushikb11 @​sohamtiwari3120 @​tchaton

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.4.7] - 2021-09-14

  • Fixed logging of nan parameters (#9364)
  • Fixed replace_sampler missing the batch size under specific conditions (#9367)
  • Pass init args to ShardedDataParallel (#9483)
  • Fixed collision of user argument when using ShardedDDP (#9512)
  • Fixed DeepSpeed crash for RNNs (#9489); see the sketch after this list
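
The DeepSpeed entry (#9489) concerns running RNN-based models under the DeepSpeed plugin. A minimal sketch of enabling that plugin, assuming a CUDA machine with the deepspeed package installed; the stage value is just an example:

```python
import pytorch_lightning as pl
from pytorch_lightning.plugins import DeepSpeedPlugin

trainer = pl.Trainer(
    gpus=1,
    precision=16,                      # DeepSpeed is typically run with fp16
    plugins=DeepSpeedPlugin(stage=2),  # ZeRO stage 2 optimizer-state sharding
    max_epochs=1,
)
# trainer.fit(rnn_module) would then train an LSTM/GRU-based LightningModule
```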

[1.4.6] - 2021-09-07

  • Fixed an issue with export to ONNX format when a model has multiple inputs (#8800)

  • Removed deprecation warnings being called for on_{task}_dataloader (#9279)

  • Fixed save/load/resume from checkpoint for DeepSpeed Plugin (#8397, #8644, #8627)

  • Fixed EarlyStopping running on train epoch end when check_val_every_n_epoch>1 is set (#9156)

  • Fixed an issue with logger outputs not being finalized correctly after prediction runs (#8333)

  • Fixed the Apex and DeepSpeed plugin closure running after the on_before_optimizer_step hook (#9288)

  • Fixed the Native AMP plugin closure not running with manual optimization (#9288); see the manual-optimization sketch after this list

  • Fixed a bug where data-loading functions were not getting the correct running stage passed (#8858)

  • Fixed intra-epoch evaluation outputs staying in memory when the respective *_epoch_end hook wasn't overridden (#9261)

  • Fixed error handling in DDP process reconciliation when _sync_dir was not initialized (#9267)

  • Fixed PyTorch Profiler not enabled for manual optimization (#9316)

  • Fixed inspection of other args when a container is specified in save_hyperparameters (#9125)

  • Fixed signature of Timer.on_train_epoch_end and StochasticWeightAveraging.on_train_epoch_end to prevent unwanted deprecation warnings (#9347)

  • Fixed error reporting in DDP process reconciliation when processes are launched by an external agent (#9389)

  • Fixed missing deepspeed distributed call (#9540)
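
Several of the entries above (#9288, #9316) concern manual optimization, where automatic optimization is turned off and the LightningModule drives the optimizer itself. A minimal sketch of that mode; the module below is a made-up example, not taken from the changelog:

```python
import torch
import pytorch_lightning as pl


class ManualOptModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)
        self.automatic_optimization = False  # opt out of Lightning's optimizer loop

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x = batch[0]  # assumes the DataLoader yields (x,) tuples
        loss = self.layer(x).pow(2).mean()
        opt.zero_grad()
        self.manual_backward(loss)  # handles AMP loss scaling when precision=16
        opt.step()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.01)
```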

[1.4.5] - 2021-08-31

  • Fixed reduction using self.log(sync_dist=True, reduce_fx={mean,max}) (#9142)
  • Fixed not setting a default value for max_epochs if max_time was specified on the Trainer constructor (#9072); see the sketch after this list
  • Fixed the CometLogger so it no longer modifies the metrics in place; it now creates a copy of the metrics before performing any operations (#9150)
  • Fixed DDP "CUDA error: initialization error" due to a copy instead of deepcopy on ResultCollection (#9239)
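
The max_time entry (#9072) is about what happens to max_epochs when only a wall-clock budget is given. A minimal sketch of that Trainer configuration; the 30-minute budget is an arbitrary example:

```python
import pytorch_lightning as pl

# train until the time budget is exhausted; with the fix, leaving max_epochs
# unset no longer cuts the run short at the old epoch default
trainer = pl.Trainer(max_time={"minutes": 30})
```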

[1.4.4] - 2021-08-24

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning) from 1.4.3 to 1.4.7.
- [Release notes](https://github.com/PyTorchLightning/pytorch-lightning/releases)
- [Changelog](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)
- [Commits](https://github.com/PyTorchLightning/pytorch-lightning/compare/1.4.3...1.4.7)

---
updated-dependencies:
- dependency-name: pytorch-lightning
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies label Sep 18, 2021

dependabot bot commented on behalf of github Sep 25, 2021

Superseded by #64.

@dependabot dependabot bot closed this Sep 25, 2021
@dependabot dependabot bot deleted the dependabot/pip/python/requirements/tune/pytorch-lightning-1.4.7 branch September 25, 2021 07:09