Correct way to train Sequential() model on GPU #19839
LewsTherin511 asked this question in Q&A
Hi, I've been trying to train a model using the GPU on a server, but I'm getting the error:
I found a few similar topics on the forum, and they all seem to point to problems with the installed CUDA versions.
However, on the same machine I can train object detection models on the GPU in the same environment without any errors, so I guess the problem is in my code.
This is what I'm doing so far:
the error occurs even if I try something as simple as:
within the for loop, so I assume there's something wrong here?
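For reference, here is a minimal sketch of the kind of setup described above: a Keras Sequential() model trained with a custom for loop, plus a check that TensorFlow actually sees the GPU. The model, data, and loop body are hypothetical placeholders, not the original code from the question, since that code was not captured here.

```python
import tensorflow as tf
from tensorflow.keras import Sequential, layers

# Confirm TensorFlow can see the GPU before training.
print(tf.config.list_physical_devices('GPU'))

# Hypothetical placeholder data; the actual inputs from the question are not shown.
x_train = tf.random.normal((1024, 32))
y_train = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(32)

model = Sequential([
    layers.Dense(64, activation='relu', input_shape=(32,)),
    layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# A custom training loop of the kind the question hints at ("within the for loop").
# Keras/TensorFlow place these ops on the GPU automatically when one is visible.
for epoch in range(2):
    for x_batch, y_batch in dataset:
        with tf.GradientTape() as tape:
            logits = model(x_batch, training=True)
            loss = loss_fn(y_batch, logits)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print(f"epoch {epoch}: loss {float(loss):.4f}")
```

If this kind of loop runs cleanly, the GPU setup itself is fine and the issue is more likely in the omitted model or data-handling code than in CUDA.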