I also see this, and I think the implementation would be better suited if `EarlyStopping` took precedence once `min_epochs` is reached. As it stands, it is as if `EarlyStopping` does not exist: training exits once `min_epochs` is reached no matter what.
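The requested behaviour - an early-stopping check whose wait counter resets whenever the monitored metric improves, and which only stops once *both* `min_epochs` has passed *and* patience is exhausted at that moment - can be sketched in plain Python. This is an illustrative sketch of the proposed semantics, not Lightning's actual `EarlyStopping` implementation; the class and parameter names are made up for the example.

```python
class EarlyStopper:
    """Sketch of the requested semantics (not Lightning's implementation):
    a trigger during the min_epochs warm-up does not force a stop at
    epoch == min_epochs if the metric has since improved."""

    def __init__(self, patience: int, min_epochs: int):
        self.patience = patience
        self.min_epochs = min_epochs
        self.best = float("inf")
        self.wait = 0

    def should_stop(self, epoch: int, val_loss: float) -> bool:
        if val_loss < self.best:
            # Improvement: "pick itself back up", even if the patience
            # budget was already exhausted earlier in training.
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
        # Stop only when both conditions hold at the same time.
        return epoch >= self.min_epochs and self.wait >= self.patience
```

With `patience=3` and `min_epochs=5`, a loss that plateaus during epochs 1-3 but improves again at epoch 4 does not cause a stop at epoch 5; training only stops once the loss has stagnated for `patience` epochs *after* the last improvement.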
Bug description
I have a problem where I use `min_epochs` because it can take a while before the training starts to converge. `EarlyStopping` is triggered quite early, but I thought to set `min_epochs` appropriately to 'get over' that initial period. However, even though training is converging by the time we reach `min_epochs`, early stopping will stop training immediately once we reach `min_epochs`, just because it was triggered very early on in training. I think that `EarlyStopping` should pick itself back up if we improve upon the monitored metric before reaching `min_epochs`.
Example

`Trainer` config:

Now imagine `EarlyStopping` triggering at epoch 100, but `val_loss` improving at epoch 101 all the way until epoch 1000 - right now, training will still stop.

What version are you seeing the problem on?
v2.2
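The `Trainer` config from the example above did not survive in this text. A minimal configuration consistent with the scenario might look like the following; every value here (patience, epoch counts) is an assumption for illustration, not the reporter's actual config.

```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import EarlyStopping

# Illustrative values only - the original config was not captured.
early_stop = EarlyStopping(monitor="val_loss", mode="min", patience=10)

trainer = Trainer(
    min_epochs=500,   # long warm-up before training converges
    max_epochs=1000,
    callbacks=[early_stop],
)
```

With a setup like this, `EarlyStopping` can exhaust its patience around epoch 100, and the reported behaviour is that training then stops as soon as epoch 500 is reached, even if `val_loss` has been improving the whole time since.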
How to reproduce the bug
No response
Error messages and logs
No response
Environment
No response
More info
No response