You have developed great schedulers in timm. One thing that makes them a bit challenging to integrate with typical PyTorch pipelines and Lightning is the 'epoch' argument of the 'step' method. PyTorch schedulers keep the last step as internal state, so calling 'step' automatically advances one step further.
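To illustrate the difference (a minimal sketch; 'torch_sched' and 'timm_sched' are just placeholder names for a torch scheduler and a timm scheduler):

```python
# torch.optim.lr_scheduler schedulers track the epoch internally:
torch_sched.step()        # advances the internal last_epoch counter by one

# timm schedulers expect the caller to track the epoch:
timm_sched.step(epoch)    # the epoch must be passed in explicitly
```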
I understand there are many use cases where you want to resume training from a specific epoch. So I suggest keeping an internal record of the epoch, and letting users who need a different start point override it by passing their intended epoch number.
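Here is a minimal sketch of what I have in mind, written as a wrapper for illustration (the 'TimmSchedulerWrapper' name is hypothetical, not existing timm API; the same logic could live directly in the base Scheduler class):

```python
class TimmSchedulerWrapper:
    """Make a timm scheduler steppable like a torch scheduler by
    tracking the epoch internally, while still allowing an override."""

    def __init__(self, scheduler, last_epoch=-1):
        self.scheduler = scheduler      # any timm scheduler, e.g. CosineLRScheduler
        self.last_epoch = last_epoch    # mirrors torch's internal counter

    def step(self, epoch=None, metric=None):
        # Without an explicit epoch, advance the internal counter by one,
        # matching torch.optim.lr_scheduler behavior.
        if epoch is None:
            epoch = self.last_epoch + 1
        self.last_epoch = epoch
        self.scheduler.step(epoch, metric=metric)
```

With this, 'step()' works without arguments inside a standard training loop or a Lightning module, and passing 'step(epoch=start_epoch)' keeps the current resume-from-checkpoint behavior.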
If you want, I can work on a PR.