Conversation

@dependabot dependabot bot commented on behalf of github Nov 20, 2021

Bumps pytorch-lightning from 1.4.3 to 1.5.2.
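
For context, the underlying change is just the version pin in the affected requirements file. The exact file path and pin style are not shown in this PR view; the snippet below is an illustration inferred from the branch name:

    # python/requirements/tune/requirements.txt  (illustrative path, assumed exact pin)
    pytorch-lightning==1.5.2  # previously: pytorch-lightning==1.4.3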

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.5.2] - 2021-11-16

Fixed

  • Fixed CombinedLoader with max_size_cycle not receiving a DistributedSampler (#10374)
  • Fixed an issue where class or init-only variables of dataclasses were passed to the dataclass constructor in utilities.apply_to_collection (#9702) (see the usage sketch after this list)
  • Fixed isinstance not working with init_meta_context, materialized model not being moved to the device (#10493)
  • Fixed an issue that prevented the Trainer from shutting down workers when execution is interrupted due to a failure (#10463)
  • Squeeze the early stopping monitor to remove empty tensor dimensions (#10461)
  • Fixed sampler replacement logic with overfit_batches to only replace the sampler when SequentialSampler is not used (#10486)
  • Fixed scripting causing false positive deprecation warnings (#10470, #10555)
  • Do not fail if batch size could not be inferred for logging when using DeepSpeed (#10438)
  • Fixed propagation of device and dtype information to submodules of LightningLite when they inherit from DeviceDtypeModuleMixin (#10559)
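
A minimal usage sketch of the utilities.apply_to_collection helper referenced above, assuming the 1.5.x import path (the batch contents are placeholders):

    import torch
    from pytorch_lightning.utilities.apply_func import apply_to_collection

    batch = {"images": torch.zeros(4, 3), "meta": {"ids": torch.arange(4)}}
    # Apply a function to every torch.Tensor inside a (possibly nested) collection,
    # preserving the container structure; dataclasses and defaultdicts are handled too.
    as_float = apply_to_collection(batch, torch.Tensor, lambda t: t.float())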

Contributors

@​a-gardner1 @​awaelchli @​carmocca @​justusschock @​Raahul-Singh @​rohitgr7 @​SeanNaren @​tchaton

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Standard weekly patch release

[1.5.1] - 2021-11-09

Fixed

  • Fixed apply_to_collection(defaultdict) (#10316)
  • Fixed failure when DataLoader(batch_size=None) is passed (#10345)
  • Fixed interception of __init__ arguments for sub-classed DataLoader re-instantiation in Lite (#10334)
  • Fixed issue with pickling CSVLogger after a call to CSVLogger.save (#10388)
  • Fixed an import error caused by PostLocalSGD when torch.distributed is not available (#10359)
  • Fixed the logging with on_step=True in epoch-level hooks causing unintended side-effects. Logging with on_step=True in epoch-level hooks will now correctly raise an error (#10409)
  • Fixed deadlocks for distributed training with RichProgressBar (#10428)
  • Fixed an issue where the model wrapper in Lite converted non-floating point tensors to float (#10429)
  • Fixed an issue with inferring the dataset type in fault-tolerant training (#10432)
  • Fixed dataloader workers with persistent_workers being deleted on every iteration (#10434)

Contributors

@​EspenHa @​four4fish @​peterdudfield @​rohitgr7 @​tchaton @​kaushikb11 @​awaelchli @​Borda @​carmocca

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Lightning 1.5: LightningLite, Fault-Tolerant Training, Loop Customization, Lightning Tutorials, LightningCLI v2, RichProgressBar, CheckpointIO Plugin, and Trainer Strategy Flag

The PyTorch Lightning team and its community are excited to announce Lightning 1.5, introducing support for LightningLite, Fault-tolerant Training, Loop Customization, Lightning Tutorials, LightningCLI V2, RichProgressBar, CheckpointIO Plugin, Trainer Strategy flag, and more!

... (truncated)
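
Of the headline 1.5 features above, LightningLite is the piece most likely to affect plain-PyTorch training code. A minimal sketch, assuming pytorch-lightning 1.5.x (the model, data, and hyperparameters below are placeholders):

    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset
    from pytorch_lightning.lite import LightningLite

    class Lite(LightningLite):
        def run(self, epochs: int = 1):
            model = nn.Linear(32, 2)  # placeholder model
            optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
            dataset = TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,)))
            dataloader = self.setup_dataloaders(DataLoader(dataset, batch_size=16))
            model, optimizer = self.setup(model, optimizer)  # device placement / strategy wrapping
            model.train()
            for _ in range(epochs):
                for x, y in dataloader:
                    optimizer.zero_grad()
                    loss = nn.functional.cross_entropy(model(x), y)
                    self.backward(loss)  # replaces loss.backward()
                    optimizer.step()

    Lite(accelerator="cpu").run()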

Changelog

Sourced from pytorch-lightning's changelog.

[1.5.2] - 2021-11-16

Fixed

  • Fixed CombinedLoader with max_size_cycle not receiving a DistributedSampler (#10374)
  • Fixed an issue where class or init-only variables of dataclasses were passed to the dataclass constructor in utilities.apply_to_collection (#9702)
  • Fixed isinstance not working with init_meta_context, materialized model not being moved to the device (#10493)
  • Fixed an issue that prevented the Trainer from shutting down workers when execution is interrupted due to a failure (#10463)
  • Squeeze the early stopping monitor to remove empty tensor dimensions (#10461)
  • Fixed sampler replacement logic with overfit_batches to only replace the sampler when SequentialSampler is not used (#10486)
  • Fixed scripting causing false positive deprecation warnings (#10470, #10555)
  • Do not fail if batch size could not be inferred for logging when using DeepSpeed (#10438)
  • Fixed propagation of device and dtype information to submodules of LightningLite when they inherit from DeviceDtypeModuleMixin (#10559)

[1.5.1] - 2021-11-09

Fixed

  • Fixed apply_to_collection(defaultdict) (#10316)
  • Fixed failure when DataLoader(batch_size=None) is passed (#10345)
  • Fixed interception of __init__ arguments for sub-classed DataLoader re-instantiation in Lite (#10334)
  • Fixed issue with pickling CSVLogger after a call to CSVLogger.save (#10388)
  • Fixed an import error caused by PostLocalSGD when torch.distributed is not available (#10359)
  • Fixed the logging with on_step=True in epoch-level hooks causing unintended side-effects. Logging with on_step=True in epoch-level hooks will now correctly raise an error (#10409)
  • Fixed deadlocks for distributed training with RichProgressBar (#10428)
  • Fixed an issue where the model wrapper in Lite converted non-floating point tensors to float (#10429)
  • Fixed an issue with inferring the dataset type in fault-tolerant training (#10432)
  • Fixed dataloader workers with persistent_workers being deleted on every iteration (#10434)

[1.5.0] - 2021-11-02

Added

  • Added support for monitoring the learning rate without schedulers in LearningRateMonitor (#9786)
  • Added registration of ShardedTensor state dict hooks in LightningModule.__init__ if the PyTorch version supports ShardedTensor (#8944)
  • Added error handling including calling of on_keyboard_interrupt() and on_exception() for all entrypoints (fit, validate, test, predict) (#8819)
  • Added a flavor of training_step that takes dataloader_iter as an argument (#8807)
  • Added a state_key property to the Callback base class (#6886)
  • Added progress tracking to loops:
    • Integrated TrainingEpochLoop.total_batch_idx (#8598)
    • Added BatchProgress and integrated TrainingEpochLoop.is_last_batch (#9657)
    • Avoid optional Tracker attributes (#9320)
    • Reset current progress counters when restarting an epoch loop that had already finished (#9371)
    • Call reset_on_restart in the loop's reset hook instead of when loading a checkpoint (#9561)
    • Use completed over processed in reset_on_restart (#9656)
    • Renamed reset_on_epoch to reset_on_run (#9658)
  • Added batch_size and rank_zero_only arguments for log_dict to match log (#8628) (see the sketch after this changelog excerpt)
  • Added a check for unique GPU ids (#8666)

... (truncated)
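
The 1.5.0 "Added" list above mentions the new batch_size and rank_zero_only arguments for log_dict (#8628). A hedged sketch of how they are used inside a LightningModule (the model and metric are placeholders):

    import torch
    from torch import nn
    import pytorch_lightning as pl

    class LitModel(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(32, 2)

        def training_step(self, batch, batch_idx):
            x, y = batch
            logits = self.layer(x)
            loss = nn.functional.cross_entropy(logits, y)
            acc = (logits.argmax(dim=1) == y).float().mean()
            # batch_size weights the epoch-level aggregation;
            # rank_zero_only restricts logging to rank 0 in distributed runs.
            self.log_dict({"train_loss": loss, "train_acc": acc},
                          batch_size=x.size(0), rank_zero_only=True)
            return loss

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)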

Commits
  • 0865ad1 Fix propagation of device and dtype properties in Lite modules (#10559)
  • a707438 [DeepSpeed] Do not fail if batch size could not be inferred for logging (#10438)
  • ae6da92 1.5.2 release
  • 1ecb962 Change attributes of RichProgressBarTheme dataclass (#10454)
  • 9e45024 Fix scripting causing false positive deprecation warnings (#10555)
  • 122e503 Skip strategy=ddp_spawn, accelerator=cpu, python>=3.9 tests (#10550)
  • 5f4a5fe Fix to_torchscript() causing false positive deprecation warnings (#10470)
  • 53ff840 Resolve instantiation problem with init_meta_context (#10493)
  • 5e6db79 Squeeze the early stopping monitor (#10461)
  • 391e0d6 shutdown workers on failure (#10463)
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning) from 1.4.3 to 1.5.2.
- [Release notes](https://github.com/PyTorchLightning/pytorch-lightning/releases)
- [Changelog](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)
- [Commits](Lightning-AI/pytorch-lightning@1.4.3...1.5.2)

---
updated-dependencies:
- dependency-name: pytorch-lightning
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
@dependabot dependabot bot added the dependencies label (Pull requests that update a dependency file) Nov 20, 2021
dependabot bot commented on behalf of github Nov 27, 2021

Superseded by #81.

@dependabot dependabot bot closed this Nov 27, 2021
@dependabot dependabot bot deleted the dependabot/pip/python/requirements/tune/pytorch-lightning-1.5.2 branch November 27, 2021 08:07