Installation

You can install tsne-cuda using the conda binaries on a supported system configuration, or by building from source. The most recent release has not yet been pushed to conda, so we recommend compiling from source if you can; however, the old version is still installable via conda, and we are working towards releasing the new version there.

Note: There appear to be some instability issues when using parallel compilation. Running make a second time seems to fix it, as does building without parallel compilation. We believe this was fixed in our most recent release.

Conda (Recommended)

Requirements

If you are installing from conda, you need to be sure that you have the following requirements:

  • CUDA 8.0, 9.0, or 9.1: The binaries are built to support these three versions (obtainable here).
  • Compatible GPU: The binaries use optimized features which require a GPU of compute capability 5.0 or greater. To find your GPU's compute capability, use the list here (a quick command-line check follows this list).
  • All other requirements should be taken care of automatically by conda.
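
As a quick check of both requirements, the standard NVIDIA tools can report your toolkit version and GPU model:

nvcc --version # reports the installed CUDA toolkit version (should be 8.0, 9.0, or 9.1)
nvidia-smi # lists your GPU model, which you can look up in the compute capability table linked above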

Installing

If you are using a supported configuration, you can install with one of

conda install tsnecuda -c cannylab # [DEFAULT] For CUDA 9.0
conda install tsnecuda cuda80 -c cannylab # For CUDA 8.0
conda install tsnecuda cuda91 -c cannylab -c numba # For CUDA 9.1
# cuda80/cuda91 above are conda features; they do not install CUDA for you.

For cuda91, the numba channel is required to obtain cudatoolkit==9.1, so the command will fail if you don't add the channel.
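
To double-check which toolkit conda actually resolved, you can list the installed cudatoolkit package after installation; for example:

conda list cudatoolkit # should report a version matching your system CUDA (e.g. 9.1)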

Building from Source

Requirements

A number of requirements are necessary for building our code from source (a combined apt command for the Ubuntu/Debian packages follows the list).

  • CUDA: You will need a working version of the CUDA toolkit, which can be obtained from here. Our code has been tested compiling with CUDA versions 8.0, 9.0, 9.1, and 10.0. Other versions may not be supported.
  • CMake: Version >= 3.5.1, which can be obtained by running sudo apt install cmake on Ubuntu/Debian systems.
  • MKL/OpenBLAS: If you're using MKL, install it using the Intel-provided installation scripts. If you're using OpenBLAS, install it with sudo apt install libopenblas-dev on Ubuntu/Debian systems.
  • GCC/llvm-clang: This is likely already installed on your system. If not, on Ubuntu you can run sudo apt install build-essential to get a version.
  • OpenMP: On Ubuntu this is likely already installed with your version of GCC. For other distributions, make sure your compiler has OpenMP support.
  • Python (for Python bindings): Python is not required for the core library, but you must install it to build the Python bindings. This library was tested with Python 3; Python 2 may work as well, though it is untested.
  • Doxygen: To build the documentation, a working version of Doxygen is required (obtainable using sudo apt install doxygen on Debian/Ubuntu systems).
  • ZMQ: Necessary for building the interactive visualization. On Ubuntu you can obtain ZMQ using sudo apt install libzmq-dev.
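
For convenience on Ubuntu/Debian, the apt-installable requirements above can be grabbed in a single command (MKL and the CUDA toolkit still need their own installers; the package names are exactly those given in the list):

sudo apt install cmake build-essential libopenblas-dev doxygen libzmq-dev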

Compiling from source

First, clone the repository, and change into the cloned directory using:

git clone https://github.com/rmrao/tsne-cuda.git && cd tsne-cuda

Next, initialize the submodules from the root directory using:

git submodule init
git submodule update
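
Equivalently, git can do this in a single step, either after cloning or during the clone itself:

git submodule update --init # same effect as the two commands above
# or, when cloning:
git clone --recursive https://github.com/rmrao/tsne-cuda.git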

Next, change into the build directory:

cd build/

From the build directory, we can configure our project. There are a number of options that may be necessary:

  • -DBUILD_PYTHON: (DEFAULT ON) Build the Python package. This is necessary for the Python bindings.
  • -DBUILD_TEST: (DEFAULT OFF) Build the test suite. To turn this on, use -DBUILD_TEST=TRUE.
  • -DWITH_MKL: (DEFAULT OFF) Build with MKL support. If your MKL is installed in the default location /opt/intel/mkl, this is the only argument you need to change. If MKL is installed somewhere else, you must also pass the root MKL directory with -DMKL_DIR=<mkl root directory>. If this is off, you must have OpenBLAS installed.
  • -DWITH_ZMQ: (DEFAULT OFF) A bug when using GCC >= 6.0 with nvcc means that ZMQ cannot be compiled properly. Since GCC >= 6.0 ships by default on Ubuntu 17.10 and 18.04, you must keep this off (-DWITH_ZMQ=FALSE) on those systems. CUDA 10.0 recently fixed this, so you may be able to use this feature with CUDA 10.0.
  • -DCMAKE_CXX_COMPILER, -DCMAKE_C_COMPILER: (DEFAULT system default) On newer systems you may get a compatibility error: "NVCC does not support GCC versions greater than 6.4.0". To fix this, install an older compiler and use these CMake options to point the build at it (see the example after this list).
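
For example, if you installed GCC 6 alongside your system compiler, the override might look like the following (the gcc-6/g++-6 paths are illustrative; adjust them to wherever your older compiler lives):

cmake .. -DCMAKE_C_COMPILER=/usr/bin/gcc-6 -DCMAKE_CXX_COMPILER=/usr/bin/g++-6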

To configure, use the following CMAKE command:

cmake .. CMAKE_ARGS

where the CMAKE_ARGS are taken from the options above. Thus, if you wanted to skip the Python bindings but build with ZMQ support, you would use the following command:

cmake .. -DBUILD_PYTHON=FALSE -DWITH_ZMQ=TRUE

Finally, to build the library use:

make

For speedy compilation (using multiple threads), you can use

make -j<num cores>

Using multiple threads may cause compilation errors about nonexistent files (see the note at the top of this page). To fix this, run a single-threaded make again after the parallel build stops.
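
If you want this fallback in a single line (assuming a Linux system where nproc is available), something like the following works:

make -j$(nproc) || make # retry single-threaded if the parallel build fails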

Installing the python bindings

Once the library has been built with the Python bindings enabled, you can install them by changing into the build/python directory and running:

python setup.py install
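
If you don't have write access to the system site-packages (and aren't inside a conda environment or virtualenv), the standard per-user install flag should also work:

python setup.py install --user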

Validating the Install

Unfortunately, the current set of tests does not compile. The best way to verify that everything is working is to run

import tsnecuda
tsnecuda.test()

This runs t-SNE on 5000 points, so it should complete relatively quickly (1-2 seconds). If there are no error messages and it doesn't hang, you should be good to go.
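
Beyond the built-in test, here is a minimal usage sketch, assuming the scikit-learn-style TSNE class described in the project README (the data here is random and purely illustrative):

import numpy as np
from tsnecuda import TSNE

# Embed 1000 random 50-dimensional points into 2 dimensions
X = np.random.randn(1000, 50)
X_embedded = TSNE(n_components=2, perplexity=30).fit_transform(X)
print(X_embedded.shape) # expected: (1000, 2)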
