AMICA improves on the Infomax algorithm used in CUDAICA: it is more noise-resistant and handles quasi-nonstationary data (it can fit multiple mixture models with adaptive likelihood across time frames). However, AMICA currently runs only on Intel CPUs and is much slower than CUDAICA.
Compiling AMICA's Fortran source with NVIDIA's CUDA Fortran compiler sounds simple, but it is not.
Required Fortran edits:
Switch all calls to Intel MKL libraries/functions to their CUDA-optimized alternatives (e.g. cuBLAS/cuSOLVER); the function names differ
Add logic to detect available RAM and VRAM, and extend the existing data-chunking logic to account for both (e.g. to handle GPU-memory-constrained machines)
Convert all data to single precision (Jason: 'OK for matrix multiplication') except during precision-critical operations (follow CUDAICA as a guide for where double precision is still needed)
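The chunking and mixed-precision points above can be sketched in NumPy (a hedged illustration only; the sizing rule, the 3x working-memory factor, and both function names are assumptions, not AMICA's actual Fortran logic):

```python
import numpy as np

def plan_chunks(n_samples, n_channels, ram_bytes, vram_bytes, dtype=np.float32):
    """Pick a chunk length (in samples) that fits both host RAM and GPU VRAM.

    Hypothetical sizing rule: keep one chunk plus working copies (~3x)
    under the smaller of the two memory budgets, so GPU-constrained
    machines get smaller chunks automatically.
    """
    bytes_per_sample = n_channels * np.dtype(dtype).itemsize
    budget = min(ram_bytes, vram_bytes) // 3  # leave room for temporaries
    chunk_len = max(1, int(budget // bytes_per_sample))
    return min(chunk_len, n_samples)

def chunked_covariance(data, chunk_len):
    """Accumulate X @ X.T over chunks: single-precision multiplies,
    double-precision accumulator (the precision-critical part)."""
    n_channels, n_samples = data.shape
    acc = np.zeros((n_channels, n_channels), dtype=np.float64)
    for start in range(0, n_samples, chunk_len):
        chunk = data[:, start:start + chunk_len].astype(np.float32)  # fp32 GEMM
        acc += (chunk @ chunk.T).astype(np.float64)                  # fp64 sum
    return acc / n_samples
```

The same pattern (fp32 products, fp64 accumulation) is roughly what CUDAICA does, and is the shape the Fortran edits would need to take.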
Alternative MATLAB-CUDA implementation:
See Jason's e-mail with a MATLAB implementation of AMICA
Add logic for data chunking using gpuArray calls
Add logic for normalizing and integrating the weights across data chunks
Keep it as MATLAB code, or compile it with MATLAB's GPU Coder?
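One way the "normalization & integration of weights across chunks" step could look, sketched in NumPy rather than MATLAB (the sample-count weighting and row normalization are assumptions about what the merge should do, and `combine_chunk_weights` is a hypothetical name, not from Jason's code):

```python
import numpy as np

def combine_chunk_weights(weight_updates, chunk_sizes):
    """Merge per-chunk unmixing-matrix updates into one matrix.

    Hypothetical scheme: weight each chunk's update by its share of the
    total samples, then normalize each row to unit norm so the scale of
    the combined unmixing matrix stays comparable across chunks.
    """
    chunk_sizes = np.asarray(chunk_sizes, dtype=np.float64)
    total = chunk_sizes.sum()
    W = sum(w * (n / total) for w, n in zip(weight_updates, chunk_sizes))
    norms = np.linalg.norm(W, axis=1, keepdims=True)  # per-row normalization
    return W / norms
```

In the MATLAB version, `weight_updates` would be computed on `gpuArray` chunks and gathered back to the host before merging.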
A lot of work, but keep it on the back burner