
GpuArrayException (b'GPU is too old for CUDA version') #573

Open

piyushrungta25 opened this issue Jun 17, 2018 · 5 comments

@piyushrungta25
Hi, I installed libgpuarray and pygpu along with Theano, following the instructions from the docs.
When I run

import pygpu
pygpu.test()

I get the following error

pygpu.gpuarray.GpuArrayException: b'GPU is too old for CUDA version'

I get the same error when importing Theano with `device = cuda0` in `.theanorc`.

My CUDA installation is working fine (verified by building and running deviceQuery from the CUDA samples).

Video card - NVIDIA GeForce 820M
NVIDIA driver - Driver Version: 390.59
CUDA Driver Version / Runtime Version - 9.1 / 8.0
OS - Manjaro Linux

Some searching showed that the 820M supports CUDA 8.0, and my GPU is clearly not too old for CUDA, since the samples run fine. Searching for the exception doesn't turn up any relevant results, so any help here would be appreciated.
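
For reference, here is a minimal diagnostic sketch (not part of pygpu or libgpuarray) that queries the CUDA driver API directly via ctypes and prints the two numbers a compatibility check like this has to work with: the maximum CUDA version the installed driver supports, and the device's compute capability. The library name `libcuda.so` is an assumption for a typical Linux install.

```python
# Minimal diagnostic sketch (independent of pygpu/Theano): ask the CUDA
# driver API which CUDA version the driver supports and what compute
# capability device 0 has. "libcuda.so" is an assumed name; some systems
# only ship "libcuda.so.1".
import ctypes

cuda = ctypes.CDLL("libcuda.so")

def check(res):
    # Every driver API call returns CUDA_SUCCESS (0) on success.
    if res != 0:
        raise RuntimeError("CUDA driver API call failed with code %d" % res)

check(cuda.cuInit(0))

driver_version = ctypes.c_int()
check(cuda.cuDriverGetVersion(ctypes.byref(driver_version)))

device = ctypes.c_int()
check(cuda.cuDeviceGet(ctypes.byref(device), 0))

major, minor = ctypes.c_int(), ctypes.c_int()
# 75 = CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR, 76 = ..._MINOR
check(cuda.cuDeviceGetAttribute(ctypes.byref(major), 75, device))
check(cuda.cuDeviceGetAttribute(ctypes.byref(minor), 76, device))

print("Driver supports up to CUDA %d.%d"
      % (driver_version.value // 1000, (driver_version.value % 1000) // 10))
print("Device 0 compute capability: %d.%d" % (major.value, minor.value))
```

On a GeForce 820M with a 390.xx driver this would typically print a driver CUDA version of 9.1 and a compute capability of 2.1. CUDA 9.x no longer supports compute capability 2.x, while the installed 8.0 runtime still does (deprecated), which would be consistent with a check keyed to the wrong version number firing here.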

@nouiz
Member

nouiz commented Jul 3, 2018 via email

@nuqz

nuqz commented Aug 16, 2018

@nouiz thanks for your tip about driver versions. I finally got Theano working after installing the drivers from the CUDA toolkit .run package.

@susanwl

susanwl commented Sep 19, 2018

Hi,

I have exactly the same issue with the same GPU.

My setup is nvidia-drivers 390.87 and CUDA 8.0.61, and deviceQuery gives:

CUDA Driver Version / Runtime Version 9.1 / 8.0

I believe that the current check for device compatibility is not fully correct, since:

1. The CUDA driver version apparently indicates not the CUDA version currently in use, but the maximum CUDA version the installed driver supports
   ([see](https://devtalk.nvidia.com/default/topic/1032152/cuda-driver-version-9-0-cuda-runtime-version-8-0/)).
   That is, CUDA Driver Version 9.1 supports all CUDA versions from 1 to 9.1.

2. I have manually disabled the compatibility check in `gpuarray_buffer_cuda.c`
   and compiled `libgpuarray` without it. Without that check, `pygpu.test()`
   completes almost without errors:

Ran 7253 tests in 258.780s

FAILED (SKIP=7, errors=5)

Among these 5 errors, 4 are due to a missing libnccl.so, and one is:

File "/usr/lib64/python3.6/site-packages/pygpu/reduction.py", line 263, in call
raise ValueError("Array too big to be reduced along the

Accordingly, I believe the compatibility check gives a false positive for this setup, and should probably be rewritten to check the CUDA runtime version instead of the driver version.
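
To illustrate the distinction being made here, the sketch below (my own hedged illustration, not code from libgpuarray) reads both numbers: `cuDriverGetVersion` from the driver API, which reports the highest CUDA version the driver supports, and `cudaRuntimeGetVersion` from the runtime library, which reports the toolkit version actually in use. The library names `libcuda.so` and `libcudart.so` are assumptions for a typical Linux install.

```python
# Compare what the driver reports (max supported CUDA version) with what
# the runtime reports (CUDA toolkit version in use). Library names are
# assumptions; a versioned name such as libcudart.so.8.0 may be needed.
import ctypes

cuda = ctypes.CDLL("libcuda.so")
cudart = ctypes.CDLL("libcudart.so")

cuda.cuInit(0)

drv, rt = ctypes.c_int(), ctypes.c_int()
cuda.cuDriverGetVersion(ctypes.byref(drv))      # e.g. 9010 -> CUDA 9.1
cudart.cudaRuntimeGetVersion(ctypes.byref(rt))  # e.g. 8000 -> CUDA 8.0

fmt = lambda v: "%d.%d" % (v // 1000, (v % 1000) // 10)
print("Driver supports up to CUDA:", fmt(drv.value))
print("Runtime (toolkit) version: ", fmt(rt.value))
```

On the setup described above this would print 9.1 for the driver and 8.0 for the runtime, which is why a check keyed to the driver version can reject a device that the installed runtime still supports.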

@dsimba

dsimba commented Dec 19, 2018

> @susanwl: I have manually disabled the compatibility check in `gpuarray_buffer_cuda.c` and compiled `libgpuarray` without it. Without that check `pygpu.test()` completes almost without errors.

Hi, would you mind sharing the exact details of the change you made to disable this version check?

thanks

@susanwl

susanwl commented Jan 18, 2019

Hi,

I am not sure how detailed I should be, but I just commented out that check in `src/gpuarray_buffer_cuda.c`. Here is the exact patch I used against v0.7.6:
patch
