Conversation

@dependabot dependabot bot commented on behalf of github Dec 4, 2021

Bumps pytorch-lightning from 1.4.3 to 1.5.4.

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.5.4] - 2021-11-30

Fixed

  • Fixed support for --key.help=class with the LightningCLI (#10767)
  • Fixed _compare_version for python packages (#10762)
  • Fixed TensorBoardLogger SummaryWriter not being closed before spawning the processes (#10777)
  • Fixed a consolidation error in Lite when attempting to save the state dict of a sharded optimizer (#10746)
  • Fixed the default logging level for batch hooks associated with training from on_step=False, on_epoch=True to on_step=True, on_epoch=False (#10756)
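
The last item changes what self.log records by default from batch-level training hooks, so an upgrade from 1.4.3 crosses this boundary. A safe pattern is to pin the flags explicitly; a minimal sketch with a toy model (names and shapes are assumptions, not from this PR):

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.mse_loss(self.layer(x), y)
        # Pinning on_step/on_epoch explicitly means the 1.5.4 default
        # change (on_step=False, on_epoch=True -> on_step=True,
        # on_epoch=False for training batch hooks) cannot silently
        # change what gets logged.
        self.log("train_loss", loss, on_step=True, on_epoch=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```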

Removed

Contributors

@awaelchli @carmocca @kaushikb11 @rohitgr7 @tchaton

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.5.3] - 2021-11-24

Fixed

  • Fixed ShardedTensor state dict hook registration to check if torch distributed is available (#10621)
  • Fixed an issue with self.log not respecting a tensor's dtype when applying computations (#10076)
  • Fixed LightningLite _wrap_init popping nonexistent keys from DataLoader signature parameters (#10613)
  • Fixed signals being registered within threads (#10610)
  • Fixed an issue that caused Lightning to extract the batch size even though it was set by the user in LightningModule.log (#10408; a sketch follows this list)
  • Fixed Trainer(move_metrics_to_cpu=True) not moving the evaluation logged results to CPU (#10631)
  • Fixed the {validation,test}_step outputs getting moved to CPU with Trainer(move_metrics_to_cpu=True) (#10631)
  • Fixed an issue with collecting logged test results with multiple dataloaders (#10522)
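
For the batch-size item (#10408) above: since 1.5, self.log accepts a batch_size argument, so the extraction logic never has to guess. A sketch on the same toy model as before (assumptions unchanged):

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = F.mse_loss(self.layer(x), y)
        # An explicit batch_size bypasses Lightning's batch-size
        # extraction (the behaviour #10408 corrects) when the metric
        # is accumulated into an epoch-level average.
        self.log("val_loss", loss, batch_size=x.size(0))
```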

Contributors

@ananthsub @awaelchli @carmocca @jiwidi @kaushikb11 @qqueing @rohitgr7 @shabie @tchaton

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.5.2] - 2021-11-16

Fixed

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.5.4] - 2021-11-30

Fixed

  • Fixed support for --key.help=class with the LightningCLI (#10767)
  • Fixed _compare_version for python packages (#10762)
  • Fixed TensorBoardLogger SummaryWriter not being closed before spawning the processes (#10777)
  • Fixed a consolidation error in Lite when attempting to save the state dict of a sharded optimizer (#10746)
  • Fixed the default logging level for batch hooks associated with training from on_step=False, on_epoch=True to on_step=True, on_epoch=False (#10756)

Removed

[1.5.3] - 2021-11-24

Fixed

  • Fixed ShardedTensor state dict hook registration to check if torch distributed is available (#10621)
  • Fixed an issue with self.log not respecting a tensor's dtype when applying computations (#10076)
  • Fixed LightningLite _wrap_init popping nonexistent keys from DataLoader signature parameters (#10613)
  • Fixed signals being registered within threads (#10610)
  • Fixed an issue that caused Lightning to extract the batch size even though it was set by the user in LightningModule.log (#10408)
  • Fixed Trainer(move_metrics_to_cpu=True) not moving the evaluation logged results to CPU (#10631)
  • Fixed the {validation,test}_step outputs getting moved to CPU with Trainer(move_metrics_to_cpu=True) (#10631)
  • Fixed an issue with collecting logged test results with multiple dataloaders (#10522)

[1.5.2] - 2021-11-16

Fixed

  • Fixed CombinedLoader with max_size_cycle not receiving a DistributedSampler (#10374)
  • Fixed an issue where class or init-only variables of dataclasses were passed to the dataclass constructor in utilities.apply_to_collection (#9702; a sketch follows this list)
  • Fixed isinstance not working with init_meta_context and the materialized model not being moved to the device (#10493)
  • Fixed an issue that prevented the Trainer from shutting down workers when execution was interrupted by a failure (#10463)
  • Squeeze the early stopping monitor to remove empty tensor dimensions (#10461)
  • Fixed sampler replacement logic with overfit_batches to only replace the sampler when SequentialSampler is not used (#10486)
  • Fixed scripting causing false positive deprecation warnings (#10470, #10555)
  • Do not fail if batch size could not be inferred for logging when using DeepSpeed (#10438)
  • Fixed propagation of device and dtype information to submodules of LightningLite when they inherit from DeviceDtypeModuleMixin (#10559)
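
The apply_to_collection dataclass fix (#9702, second item above) is easiest to see with a small example. A sketch with a hypothetical Batch dataclass (the import path is the 1.5.x location):

```python
import dataclasses
import torch
from pytorch_lightning.utilities.apply_func import apply_to_collection

@dataclasses.dataclass
class Batch:  # hypothetical container, not from this PR
    inputs: torch.Tensor
    target: torch.Tensor

batch = Batch(inputs=torch.rand(4, 32), target=torch.rand(4, 1))

# apply_to_collection walks the (possibly nested) structure and applies
# the function to every element of the given dtype; #9702 fixed how
# dataclass instances are rebuilt, so class and init-only variables are
# no longer passed to the constructor.
on_cpu = apply_to_collection(batch, torch.Tensor, lambda t: t.to("cpu"))
```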

[1.5.1] - 2021-11-09

Fixed

  • Fixed apply_to_collection(defaultdict) (#10316)
  • Fixed failure when DataLoader(batch_size=None) is passed (#10345; a sketch follows this excerpt)

... (truncated)
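
The DataLoader(batch_size=None) failure (#10345) concerns loaders with automatic batching disabled, where each dataset item is already a full batch. A minimal sketch of such a loader (toy dataset assumed):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class PreBatchedDataset(Dataset):
    """Toy dataset whose items are already complete batches."""

    def __len__(self):
        return 10

    def __getitem__(self, idx):
        return torch.rand(16, 32), torch.rand(16, 1)

# batch_size=None disables automatic batching, so items pass through
# uncollated; 1.5.1 fixes Lightning failing on such loaders.
loader = DataLoader(PreBatchedDataset(), batch_size=None)
```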

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
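
The ignore commands above can also be expressed as repository configuration. A sketch of an equivalent rule in .github/dependabot.yml, assuming the pip manifest lives where this PR's branch name suggests; treat the directory and schedule as placeholders:

```yaml
# .github/dependabot.yml -- sketch; directory and schedule are assumptions
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/python/requirements/tune"
    schedule:
      interval: "weekly"
    ignore:
      - dependency-name: "pytorch-lightning"
        # Comparable to commenting "@dependabot ignore this minor version"
        update-types: ["version-update:semver-minor"]
```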

Bumps [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning) from 1.4.3 to 1.5.4.
- [Release notes](https://github.com/PyTorchLightning/pytorch-lightning/releases)
- [Changelog](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)
- [Commits](https://github.com/Lightning-AI/pytorch-lightning/compare/1.4.3...1.5.4)

---
updated-dependencies:
- dependency-name: pytorch-lightning
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies label (Pull requests that update a dependency file) Dec 4, 2021

dependabot bot commented on behalf of github Dec 11, 2021

Superseded by #83.

@dependabot dependabot bot closed this Dec 11, 2021
@dependabot dependabot bot deleted the dependabot/pip/python/requirements/tune/pytorch-lightning-1.5.4 branch December 11, 2021 08:06