
Conversation


@dependabot dependabot bot commented on behalf of github Apr 24, 2021

Bumps pytorch-lightning from 1.0.3 to 1.2.10.

Release notes

Sourced from pytorch-lightning's releases.

Quick patch release

Fixes a missing packaging package in the dependencies, which was breaking installation on very bare (blank) systems.
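
As a hedged aside (mine, not from the release notes): the failure presumably shows up because the library imports the packaging module internally, so on a bare environment the import chain breaks until 1.2.10 declares the dependency. A minimal check of the pin from this PR:

    # Assumes pytorch-lightning 1.2.10 is installed per this PR's pin.
    from packaging.version import Version  # the dependency restored in 1.2.10

    import pytorch_lightning as pl

    # Sanity-check that the environment picked up the patched release:
    assert Version(pl.__version__) >= Version("1.2.10"), pl.__version__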

Standard weekly patch release

[1.2.9] - 2021-04-20

Fixed

  • Fixed the call order for world ranks & the root_device property in TPUSpawnPlugin (#7074)
  • Fixed multi-gpu join for Horovod (#6954)
  • Fixed parsing for pre-release package versions (#6999); see the sketch after this list
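
The pre-release fix (#6999) is easy to illustrate with the standard packaging API; this sketch is mine, and the exact call site inside Lightning may differ. Pre-releases sort before their final release, which naive string comparison gets wrong:

    from packaging.version import Version

    # Proper version semantics: an alpha or release candidate precedes the final.
    assert Version("1.9.0a0") < Version("1.9.0")
    assert Version("1.2.0rc1") < Version("1.2.0")

    # Plain string comparison gets the order backwards:
    assert "1.9.0a0" > "1.9.0"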

Contributors

@irasit @Borda @kaushikb11

Standard weekly patch release

[1.2.8] - 2021-04-14

Added

  • Added TPUSpawn + IterableDataset error message (#6875)

Fixed

  • Fixed process rank not being available right away after Trainer instantiation (#6941)
  • Fixed sync_dist for TPUs (#6950)
  • Fixed AttributeError for require_backward_grad_sync when running manual optimization with the sharded plugin (#6915)
  • Fixed the --gpus default for the parser returned by Trainer.add_argparse_args (#6898); see the sketch after this list
  • Fixed TPU Spawn all gather (#6896)
  • Fixed EarlyStopping logic when min_epochs or min_steps requirement is not met (#6705)
  • Fixed csv extension check (#6436)
  • Fixed checkpoint issue when using Horovod distributed backend (#6958)
  • Fixed tensorboard exception raising (#6901)
  • Fixed setting the eval/train flag correctly on accelerator model (#6983)
  • Fixed DDP_SPAWN compatibility with bug_report_model.py (#6892)
  • Fixed bug where BaseFinetuning.flatten_modules() was duplicating leaf node parameters (#6879)
  • Set better defaults for rank_zero_only.rank when training is launched with SLURM and torchelastic:
    • Support SLURM and torchelastic global rank environment variables (#5715)
    • Remove hardcoding of local rank in accelerator connector (#6878)
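
To picture the --gpus fix above (#6898), here is a minimal sketch of the Trainer's standard argparse integration; the flags shown are long-standing Trainer arguments, and the fix only concerns the default that --gpus receives when the flag is omitted:

    from argparse import ArgumentParser

    from pytorch_lightning import Trainer

    parser = ArgumentParser()
    parser = Trainer.add_argparse_args(parser)  # adds --gpus, --max_epochs, ...

    # --gpus is omitted here, so the (now fixed) default applies:
    args = parser.parse_args(["--max_epochs", "3"])
    trainer = Trainer.from_argparse_args(args)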

Contributors

@ananthsub @awaelchli @ethanwharris @justusschock @kandluis @kaushikb11 @liob @SeanNaren @skmatz

If we forgot someone because their commit email doesn't match their GitHub account, let us know :]

Standard weekly patch release

[1.2.7] - 2021-04-06

... (truncated)

Changelog

Sourced from pytorch-lightning's changelog.

Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog.

[1.3.0] - 2021-MM-DD

Added

  • Added a teardown hook to ClusterEnvironment (#6942)

  • Added utils for NaN/Inf detection for gradients and parameters (#6834)

  • Added more explicit exception message when trying to execute trainer.test() or trainer.validate() with fast_dev_run=True (#6667)

  • Added the LightningCLI class to provide simple reproducibility with minimal boilerplate for a training CLI (#4492)

  • Added gradient_clip_algorithm argument to Trainer for gradient clipping by value (#6123); see the combined sketch after this excerpt

  • Added a way to print to terminal without breaking up the progress bar (#5470)

  • Added support to checkpoint after training steps in ModelCheckpoint callback (#6146); also shown in the sketch below

  • Added checkpoint parameter to callback's on_save_checkpoint hook (#6072)

  • Added RunningStage.SANITY_CHECKING (#4945)

  • Added TrainerState.{FITTING,VALIDATING,TESTING,PREDICTING,TUNING} (#4945)

  • Added Trainer.validate() method to perform one evaluation epoch over the validation set (#4948); demonstrated in the sketch below

  • Added LightningEnvironment for Lightning-specific DDP (#5915)

  • Added teardown() hook to LightningDataModule (#4673)

... (truncated)
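
Several of the 1.3.0 additions above are plain API surface and read well together. The sketch below is mine, not from the changelog: it combines gradient clipping by value (#6123), step-based checkpointing in ModelCheckpoint (#6146), and the standalone Trainer.validate() (#4948). The parameter names gradient_clip_algorithm and every_n_train_steps follow the 1.3 documentation as I understand it, and TinyModule is a hypothetical stand-in for any LightningModule:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    import pytorch_lightning as pl
    from pytorch_lightning.callbacks import ModelCheckpoint

    class TinyModule(pl.LightningModule):
        # Hypothetical minimal module; any LightningModule works the same way.
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(4, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            return torch.nn.functional.mse_loss(self.layer(x), y)

        def validation_step(self, batch, batch_idx):
            x, y = batch
            self.log("val_loss", torch.nn.functional.mse_loss(self.layer(x), y))

        def configure_optimizers(self):
            return torch.optim.SGD(self.parameters(), lr=0.1)

    data = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
    train, val = DataLoader(data, batch_size=8), DataLoader(data, batch_size=8)

    trainer = pl.Trainer(
        max_epochs=1,
        gradient_clip_val=0.5,
        gradient_clip_algorithm="value",                     # clip by value (#6123)
        callbacks=[ModelCheckpoint(every_n_train_steps=4)],  # step checkpoints (#6146)
    )
    model = TinyModule()
    trainer.fit(model, train, val)
    trainer.validate(model, val)  # one evaluation epoch over the val set (#4948)

For what it's worth, clipping by value bounds each gradient element directly, whereas the default norm mode rescales the whole gradient vector.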

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

@dependabot dependabot bot added the dependencies label (Pull requests that update a dependency file) on Apr 24, 2021

dependabot bot commented on behalf of github May 8, 2021

Superseded by #19.

@dependabot dependabot bot closed this May 8, 2021
@dependabot dependabot bot deleted the dependabot/pip/python/requirements/pytorch-lightning-1.2.10 branch May 8, 2021 07:04