# TensorFlow
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state of the art in ML and developers easily build and deploy ML-powered applications. For our purposes, this is very useful for transient detection. Properly utilizing TensorFlow requires GPU support; otherwise everything runs on the CPU, which is far too slow. To ensure you have a basic understanding of TensorFlow, experiment with the following commands.
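As a first taste of what TensorFlow code looks like, the short Python snippet below lists the GPUs TensorFlow can see and runs a small computation. This is only a sketch for experimentation; run it inside the GPU-enabled container described below, since TensorFlow is not assumed to be installed directly on the host.

```python
# A minimal TensorFlow sanity check (a sketch; run inside the GPU-enabled
# container described below, since TensorFlow is not installed on the host).
import tensorflow as tf

# Lists the GPUs TensorFlow can see; expect at least one entry once GPU
# support is configured correctly.
print(tf.config.list_physical_devices('GPU'))

# A small computation; TensorFlow places it on the GPU automatically
# when one is available.
x = tf.random.normal([1000, 1000])
print(tf.reduce_sum(tf.matmul(x, x)))
```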
To run TensorFlow with GPU support, we use Docker, which is the preferred way to do this on Linux (and one of the only ways). While the setup is quite involved, you can easily verify that Docker is installed and working using its hello-world image:
$ sudo docker run hello-world
Note that Docker requires sudo permissions by default; we have not changed this since the use case is currently quite limited. If you do not have permission to use sudo, you will not be able to use TensorFlow under the current configuration.
## NVIDIA Docker Support
To utilize the GPU, we are spoiled by NVIDIA and only require one tool, which is documented here: https://github.com/NVIDIA/nvidia-docker. To verify that NVIDIA Docker support is working, run the following:
$ sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
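Once both checks pass, TensorFlow itself can be run the same way by swapping in the official TensorFlow GPU image. As a minimal sketch (assuming the tensorflow/tensorflow:latest-gpu tag; pin a specific version if you need reproducibility), the following runs the same GPU check as the Python snippet above and should print at least one GPU device:

$ sudo docker run --rm --gpus all tensorflow/tensorflow:latest-gpu python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"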
For more information, please see the documentation below:
- https://www.tensorflow.org/install/gpu
- https://www.tensorflow.org/install/docker
- https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker