Hello, I encountered the following console warning when running train.py, and I suspect it may be caused by a version conflict. I installed the latest versions of PyTorch and CUDA as specified in your README, but I would like to confirm the exact versions you used. (Training completes, but the resulting checkpoints produce incorrect test results. When I replace my trained checkpoint with the ckpt you provided, the test results are normal, so I suspect this warning is the cause.)
Console warning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.
Hi @ZC-peng, this codebase is intended to be used with the latest version of PyTorch. The network architecture is quite simple and standard, so it's unlikely to have weird dependencies or compatibility issues.
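For anyone hitting this warning: it fires when `lr_scheduler.step()` runs before `optimizer.step()` within an iteration. Below is a minimal sketch of the call order PyTorch expects since 1.1.0; the model, data, and scheduler here are illustrative placeholders, not this repo's actual training loop:

```python
import torch

# Hypothetical stand-ins for the real model and data; only the call
# order of optimizer.step() and scheduler.step() matters here.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

data = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(5)]

for epoch in range(100):
    for x, y in data:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()   # step the optimizer first (PyTorch >= 1.1.0)
    scheduler.step()       # then advance the LR schedule, once per epoch
```

If the order is reversed, the warning appears and the first learning-rate value of the schedule is skipped, which by itself is usually too small an effect to explain broken test results.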