Hi, I'm just wondering what the difference is between the "single_gpu" and "data_parallel" training scripts, since they seem to have the same structure and modules, and use the same API.
By the way, could you explain how to use the distributed one? I'm a little confused about how to set the URL and how to start it.
Thanks.
```python
import os
import torch.nn as nn

os.environ["CUDA_VISIBLE_DEVICES"] = gpu_devices  # e.g. "0,1"
net = nn.DataParallel(net)
```
I'd suggest people take some effort to look at the code themselves before posting questions. I'm not a frequent PyTorch user myself, but this shouldn't be difficult.
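For completeness, here is a minimal self-contained sketch of the `DataParallel` approach above. The model, device list, and batch shapes are illustrative, not taken from the repo's scripts:

```python
import os
import torch
import torch.nn as nn

# Restrict which GPUs this process can see (hypothetical device list).
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

# A toy model standing in for the real network.
net = nn.Linear(16, 4)
if torch.cuda.is_available():
    net = net.cuda()  # DataParallel expects parameters on the first visible GPU

# DataParallel replicates the module, splits each input batch across the
# visible GPUs, and gathers the outputs on the first device. On a
# CPU-only machine it simply falls through to the wrapped module.
net = nn.DataParallel(net)

x = torch.randn(8, 16)  # inputs are scattered to the GPUs automatically
out = net(x)
print(out.shape)
```

This is why the "single_gpu" and "data_parallel" scripts look almost identical: wrapping the model in `nn.DataParallel` is essentially the only change, and the training loop itself stays the same.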