Some work with CUDNN #2797
base: master
Conversation
Currently, this assumes that cuDNN is installed in /usr/local/cuda/lib64 and /usr/local/cuda/include, because that's where it is installed on my current machine. This can be cleaned up later.
Double check that we correctly transpose height and width everywhere!
b17 disk is no longer writable.
No longer publicly expose the default constructor. Exposing it would allow someone to write something like `CuDevice device1; CuDevice device2;`. We're not sure what that would cause, but it probably wouldn't be good.
Document data formats expected of each member function.
There may still be edge cases remaining in the configure script. We still don't look up which CUDNN version to download for your CUDA version. Also make sure that non-CUDA builds still compile; this is achieved by the cu-cudnn-helper.h file, which forward-declares the cudnn types. Fortunately these are all pointer types (apart from enums, which don't matter in this case).
Remove support for CUDA versions that don't support CUDNN v7. Remove 32-bit CUDA mode, since no recent version of CUDA supports it. I may not have removed all instances of it.
Automatically download CUDNN as part of configure.
Convert the assertion to a warning, although I am not sure this helps: if no algorithm can be found because of an out-of-memory error, training is likely to fail for the same reason.
Hi Dan, check out danpovey#53 to see some results of my poking at this.
Use cudnnSetTensor4dDescriptor with strides we calculate ourselves instead.
Change filter type back to NCHW, since it supports more algos.
It supports only the precomputed implicit GEMM implementation of convolution.
The ConvolutionComputation class does not know whether or not the bias is optional at initialization time. It will simply avoid using the bias if it is a nullptr.
void ConvolutionComputation::
ConvolveForward(const MatrixBase<BaseFloat> &input,
Is there some reason why we don't make this class own a "ConvolutionComputation" object, and have it defer to the functions like ConvolveForward defined in convolution.h?
The layouts used in convolution.h are different.
Thanks for this! I'll look at it again tomorrow.
Oh, of course. Thanks.
Don't use cudnnConvolutionBiasActivationForward.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed by a bot strictly because of inactivity. This does not mean that we think that this issue is not important! If you believe it has been closed hastily, add a comment to the issue and mention @kkm000, and I'll gladly reopen it.
This issue has been automatically marked as stale by a bot solely because it has not had recent activity. Please add any comment (simply 'ping' is enough) to prevent the issue from being closed for 60 more days if you believe it should be kept open.
Still preliminary; not working yet.