Conversation

dependabot[bot] commented on behalf of GitHub · Nov 13, 2021

Bumps pytorch-lightning from 1.4.3 to 1.5.1.

Release notes

Sourced from pytorch-lightning's releases.

Standard weekly patch release

[1.5.1] - 2021-11-09

Fixed

  • Fixed apply_to_collection(defaultdict) (#10316)
  • Fixed failure when DataLoader(batch_size=None) is passed (#10345)
  • Fixed interception of __init__ arguments for sub-classed DataLoader re-instantiation in Lite (#10334)
  • Fixed issue with pickling CSVLogger after a call to CSVLogger.save (#10388)
  • Fixed an import error being caused by PostLocalSGD when torch.distributed is not available (#10359)
  • Fixed the logging with on_step=True in epoch-level hooks causing unintended side-effects. Logging with on_step=True in epoch-level hooks will now correctly raise an error (#10409)
  • Fixed deadlocks for distributed training with RichProgressBar (#10428)
  • Fixed an issue where the model wrapper in Lite converted non-floating point tensors to float (#10429)
  • Fixed an issue with inferring the dataset type in fault-tolerant training (#10432)
  • Fixed dataloader workers with persistent_workers being deleted on every iteration (#10434)

Contributors

@EspenHa @four4fish @peterdudfield @rohitgr7 @tchaton @kaushikb11 @awaelchli @Borda @carmocca

If we forgot someone due to not matching commit email with GitHub account, let us know :]

Lightning 1.5: LightningLite, Fault-Tolerant Training, Loop Customization, Lightning Tutorials, LightningCLI v2, RichProgressBar, CheckpointIO Plugin, and Trainer Strategy Flag

The PyTorch Lightning team and its community are excited to announce Lightning 1.5, introducing support for LightningLite, Fault-tolerant Training, Loop Customization, Lightning Tutorials, LightningCLI V2, RichProgressBar, CheckpointIO Plugin, Trainer Strategy flag, and more!

Highlights

Lightning 1.5 marks our biggest release yet. Over 60 contributors have worked on features, bugfixes and documentation improvements for a total of 640 commits since v1.4. Here are some highlights:

Fault-tolerant Training

Fault-tolerant Training is a new internal mechanism that enables PyTorch Lightning to recover from a hardware or software failure. This is particularly useful when training in the cloud on preemptible instances, which can shut down at any time. If a Lightning experiment exits unexpectedly, a temporary checkpoint is saved that contains the exact state of all loops and the model. With this new experimental feature, you can restore training mid-epoch at the exact batch and continue as if it had never been interrupted.

PL_FAULT_TOLERANT_TRAINING=1 python train.py

LightningLite

LightningLite enables pure PyTorch users to scale their existing code to any kind of hardware while retaining full control over their own loops and optimization logic.

With just a few lines of code and no large refactoring, you get support for multi-device and multi-node training, different accelerators (CPU, GPU, TPU), native automatic mixed precision (half and bfloat16), and double precision. No special launcher required! Check out our documentation to find out how you can get one step closer to boilerplate-free research!

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

[1.5.1] - 2021-11-09

Fixed

  • Fixed apply_to_collection(defaultdict) (#10316)
  • Fixed failure when DataLoader(batch_size=None) is passed (#10345)
  • Fixed interception of __init__ arguments for sub-classed DataLoader re-instantiation in Lite (#10334)
  • Fixed issue with pickling CSVLogger after a call to CSVLogger.save (#10388)
  • Fixed an import error being caused by PostLocalSGD when torch.distributed is not available (#10359)
  • Fixed the logging with on_step=True in epoch-level hooks causing unintended side-effects. Logging with on_step=True in epoch-level hooks will now correctly raise an error (#10409)
  • Fixed deadlocks for distributed training with RichProgressBar (#10428)
  • Fixed an issue where the model wrapper in Lite converted non-floating point tensors to float (#10429)
  • Fixed an issue with inferring the dataset type in fault-tolerant training (#10432)
  • Fixed dataloader workers with persistent_workers being deleted on every iteration (#10434)

[1.5.0] - 2021-11-02

Added

  • Added support for monitoring the learning rate without schedulers in LearningRateMonitor (#9786)
  • Added registration of ShardedTensor state dict hooks in LightningModule.__init__ if the PyTorch version supports ShardedTensor (#8944)
  • Added error handling including calling of on_keyboard_interrupt() and on_exception() for all entrypoints (fit, validate, test, predict) (#8819)
  • Added a flavor of training_step that takes dataloader_iter as an argument (#8807)
  • Added a state_key property to the Callback base class (#6886)
  • Added progress tracking to loops:
    • Integrated TrainingEpochLoop.total_batch_idx (#8598)
    • Added BatchProgress and integrated TrainingEpochLoop.is_last_batch (#9657)
    • Avoid optional Tracker attributes (#9320)
    • Reset current progress counters when restarting an epoch loop that had already finished (#9371)
    • Call reset_on_restart in the loop's reset hook instead of when loading a checkpoint (#9561)
    • Use completed over processed in reset_on_restart (#9656)
    • Renamed reset_on_epoch to reset_on_run (#9658)
  • Added batch_size and rank_zero_only arguments for log_dict to match log (#8628)
  • Added a check for unique GPU ids (#8666)
  • Added ResultCollection state_dict to the Loop state_dict and added support for distributed reload (#8641)
  • Added DeepSpeed collate checkpoint utility function (#8701)
  • Added a handles_accumulate_grad_batches property to the training type plugins (#8856)
  • Added a warning to WandbLogger when reusing a wandb run (#8714)
  • Added log_graph argument for watch method of WandbLogger (#8662)
  • LightningCLI additions:
    • Added LightningCLI(run=False|True) to choose whether to run a Trainer subcommand (#8751)
    • Added support to call any trainer function from the LightningCLI via subcommands (#7508)
    • Allow easy trainer re-instantiation (#7508)
    • Automatically register all optimizers and learning rate schedulers (#9565)
    • Allow registering custom optimizers and learning rate schedulers without subclassing the CLI (#9565)
    • Support shorthand notation to instantiate optimizers and learning rate schedulers (#9565)
    • Support passing lists of callbacks via command line (#8815)
    • Support shorthand notation to instantiate models (#9588)
    • Support shorthand notation to instantiate datamodules (#10011)

... (truncated)


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning) from 1.4.3 to 1.5.1.
- [Release notes](https://github.com/PyTorchLightning/pytorch-lightning/releases)
- [Changelog](https://github.com/PyTorchLightning/pytorch-lightning/blob/master/CHANGELOG.md)
- [Commits](Lightning-AI/pytorch-lightning@1.4.3...1.5.1)

---
updated-dependencies:
- dependency-name: pytorch-lightning
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <[email protected]>
dependabot[bot] added the dependencies label (Pull requests that update a dependency file) on Nov 13, 2021
dependabot[bot] commented on behalf of GitHub · Nov 20, 2021

Superseded by #79.

dependabot[bot] closed this on Nov 20, 2021
dependabot[bot] deleted the dependabot/pip/python/requirements/tune/pytorch-lightning-1.5.1 branch on November 20, 2021 at 08:07