🎉 CONTRIBUTIONS WELCOME! 🎉
See the TODO section
- Documentation: https://roicat.readthedocs.io/en/latest/
- Discussion forum: https://groups.google.com/g/roicat_support
- Technical support: GitHub Issues
A simple-to-use Python package for automatically classifying images of cells and tracking them across imaging sessions/planes.
Why use ROICaT?
- It's easy to use. You don't need to know how to code. You can use the interactive notebooks or online app to run the pipelines with just a few clicks.
- It's accurate. ROICaT was designed to be better than existing tools. It is capable of classifying and tracking neuron ROIs at accuracies approaching human performance out of the box.
- It's fast and its computational requirements are low. You can run it on a laptop. It was designed to be used with >1M ROIs, and can utilize GPUs to speed things up.
With ROICaT, you can:
- Classify ROIs into different categories (e.g. neurons, dendrites, glia, etc.).
- Track ROIs across imaging sessions/planes (e.g. ROI #1 in session 1 is the same as ROI #7 in session 2).
What data types can ROICaT process?
- ROICaT can accept any imaging data format, including Suite2p, CaImAn, CNMF, NWB, raw/custom ROI data, and more. See below for details on how to use your data type with ROICaT.
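For example, Suite2p outputs can be imported roughly as follows. This is only a sketch based on the importer used in the interactive notebooks; the class and argument names here are assumptions, so check the custom data importing notebook for the exact API:

```python
# Sketch of importing Suite2p outputs (stat.npy / ops.npy) into ROICaT.
# Class and argument names are assumptions; see the custom data importing notebook.
from pathlib import Path
import roicat

dir_data = Path('/folder/with/data/')
paths_stat = sorted(dir_data.glob('**/stat.npy'))      # one stat.npy per session
paths_ops = [p.parent / 'ops.npy' for p in paths_stat]  # matching ops.npy files

data = roicat.data_importing.Data_suite2p(
    paths_statFiles=paths_stat,
    paths_opsFiles=paths_ops,
    um_per_pixel=1.0,  # assumption: set to your imaging resolution
)
```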
- Online App: good for first-time users. Try it out without installing anything.
- Interactive notebook, or run on Google Colab:
- Command line interface script:
roicat --pipeline tracking --path_params /path/to/params.yaml --dir_data /folder/with/data/ --dir_save /folder/save/ --prefix_name_save expName --verbose
- Interactive notebook - Drawing, or run on Google Colab:
- Interactive notebook - Labeling
- Interactive notebook - Train classifier
- Interactive notebook - Inference with classifier
OTHER:
- Custom data importing notebook
- Use the API to integrate ROICaT functions into your own code: Documentation.
- Run the full tracking pipeline using the CLI or roicat.pipelines.pipeline_tracking with default parameters generated from roicat.util.get_default_parameters() and saved as a YAML file (see the sketch below).
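A minimal sketch of that programmatic route, assuming the parameter structure returned by roicat.util.get_default_parameters; the keyword argument, nested parameter keys, and return value shown here are assumptions, so check the documentation for the exact schema:

```python
# Minimal sketch: generate default parameters, save them as YAML for the CLI,
# or run the tracking pipeline directly from Python.
# NOTE: the 'pipeline' keyword, the nested parameter keys, and the return value
# are assumptions for illustration; check the ROICaT documentation for the exact API.
import yaml
import roicat

params = roicat.util.get_default_parameters(pipeline='tracking')
params['data_loading']['dir_outer'] = '/folder/with/data/'  # assumed key names
params['results_saving']['dir_save'] = '/folder/save/'      # assumed key names

# Save the parameters to a YAML file usable with `roicat --path_params ...`
with open('params.yaml', 'w') as f:
    yaml.safe_dump(params, f)

# Or run the full pipeline in-process
results = roicat.pipelines.pipeline_tracking(params)
```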
ROICaT works on Windows, MacOS, and Linux. If you have any issues during the installation process, please open a GitHub issue with the error.
- Anaconda or Miniconda.
- If using Windows: Microsoft C++ Build Tools
- The below commands should be run in the terminal (Mac/Linux) or Anaconda Prompt (Windows).
conda create -n roicat python=3.12
conda activate roicat
You will need to activate the environment with conda activate roicat each time you want to use ROICaT.
pip install roicat[all]
pip install git+https://github.com/RichieHakim/roiextractors
Note on zsh: if you are using a zsh terminal, change the command to: pip3 install --user 'roicat[all]' (the quotes prevent zsh from interpreting the square brackets).
Note on installing GPU support on Windows: see the GPU Troubleshooting documentation.
Note on opencv: The headless version of opencv is installed by default. If
the regular version is already installed, you will need to uninstall it first.
git clone https://github.com/RichieHakim/ROICaT
Then, navigate to the ROICaT/notebooks/jupyter
directory to run the notebooks.
There are 2 parts to upgrading ROICaT: the Python package and the
repository files which contain the notebooks and scripts.
Activate your environment first, then...
To upgrade the Python package, run:
pip install --upgrade roicat[all]
To upgrade the repository files, navigate your terminal to the ROICaT
folder and run:
git pull
- Pass ROIs through ROInet: Images of the ROIs are passed through a neural network which outputs a feature vector for each image describing what the ROI looks like.
- Classification: The feature vectors can then be used to classify ROIs:
- A simple regression-like classifier can be trained using user-supplied labeled data (e.g. an array of images of ROIs and a corresponding array of labels for each ROI). See the sketch after this list.
- Alternatively, classification can be done by projecting the feature vectors into a lower-dimensional space using UMAP and then simply circling the region of space to classify the ROIs.
- Tracking: The feature vectors can be combined with information about the position of the ROIs to track the ROIs across imaging sessions/planes.
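To make the classification step concrete, here is a generic sketch of the "regression-like classifier" idea using scikit-learn. This is not ROICaT's own API; the feature dimensionality and labels below are placeholders for illustration:

```python
# Generic illustration: fit a linear classifier on ROInet feature vectors
# using user-supplied labels. Not ROICaT's API -- just a scikit-learn sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = np.random.rand(1000, 1024)    # placeholder: one feature vector per ROI
labels = np.random.randint(0, 3, 1000)   # placeholder: user-supplied class labels

clf = LogisticRegression(max_iter=1000).fit(features, labels)
predicted_classes = clf.predict(features)           # predicted class per ROI
class_probabilities = clf.predict_proba(features)   # per-class confidence per ROI
```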
Although we recommend transitioning to the notebooks or CLI instead of the app, you can download and run the app locally with the following command:
sudo docker run -it -p 7860:7860 --platform=linux/amd64 --shm-size=10g registry.hf.space/richiehakim-roicat-tracking:latest streamlit run app.py
- Add in method to use more similarity metrics for tracking
- Coordinate descent on each similarity metric
- Add F and Fneu to data_roicat, dFoF and trace quality metric functions
- Add in notebook for demonstrating using temporal similarity metrics (SWT on dFoF)
- Make a standard classifier
- Try other clustering methods
- Make image aligner based on image similarity + RANSAC of centroids or s_SF
- Better post-hoc curation metrics and visualizations
- Improve non-rigid image registration methods (border performance)
- Make non-rigid image registration optional
- Finish ROIextractors integration
- Update automatic regression module (make new repo for it)
- Switch to ONNX for ROInet
- Some more integration tests
- Figure out RNG / OS differences issues for tests
- Add more documentation / tutorials
- Make a GUI
- Add settings to the GUI
- Make a Docker container
- Make Colab demo notebook not require user data
- Make a better CLI
- Switch to pyproject.toml
- Improve params.json / default params system
- Spruce up training code
- Write the paper
- Make tweet about it
- Make a video or two on how to use it
- Maybe use light-the-torch for torch installation
- Better Readme
- More documentation
- Make a regression model for in-plane-ness
- Formalize bounty program