C++ Tutorial #387

Draft
wants to merge 29 commits into
base: branch-24.12
Choose a base branch
from
Draft
Show file tree
Hide file tree
Changes from all commits
Commits
Show all changes
29 commits
Select commit Hold shift + click to select a range
79bc8b4
Adding stub files for vector indexes
cjnolet Sep 9, 2024
0db2eae
Checking inx!
cjnolet Sep 23, 2024
f6731bc
Adding more content
cjnolet Sep 23, 2024
0e4c016
COntinuing to flesh out getting started materials
cjnolet Sep 23, 2024
c17e842
Another updat3
cjnolet Sep 23, 2024
47b03ef
Updates
cjnolet Sep 24, 2024
6598ff8
Migrating the C++ tutorial
cjnolet Sep 24, 2024
c619e26
Adding images and lniks
cjnolet Sep 24, 2024
e8494c4
Moving a few things around
cjnolet Sep 24, 2024
4c81760
Merge branch 'branch-24.10' into doc-2410-index_docs
cjnolet Sep 24, 2024
838b05f
Merge branch 'branch-24.10' into doc-2410-index_docs
cjnolet Sep 27, 2024
9d0e012
More updates to index guides in docs
cjnolet Sep 27, 2024
6d21d50
Merge branch 'doc-2410-index_docs' of github.com:rapidsai/cuvs into d…
cjnolet Sep 27, 2024
6143c2c
Merge branch 'branch-24.10' into doc-2410-index_docs
cjnolet Oct 2, 2024
397e5ff
MOre docs updates
cjnolet Oct 2, 2024
831c2a2
Merge branch 'doc-2410-index_docs' of github.com:rapidsai/cuvs into d…
cjnolet Oct 2, 2024
55f11ee
MOre updates
cjnolet Oct 2, 2024
8c1fa13
Updates
cjnolet Oct 2, 2024
59bc6bd
Use 64 bit types for dataset size calculation in CAGRA graph optimize…
tfeher Oct 3, 2024
5629977
Add a static library for cuvs (#382)
benfred Oct 3, 2024
31cd39b
Finishing tuning guide write-up
cjnolet Oct 3, 2024
d9ef452
More info
cjnolet Oct 3, 2024
325d718
Updating build
cjnolet Oct 3, 2024
4d2d305
Updating readme
cjnolet Oct 3, 2024
82ec71b
Merge branch 'branch-24.10' into doc-2410-index_docs
cjnolet Oct 3, 2024
a84978a
Apply suggestions from code review
cjnolet Oct 3, 2024
35c8029
Removing the cpp_tutorial
cjnolet Oct 3, 2024
b6eb5b8
Merge branch 'doc-2410-index_docs' of github.com:rapidsai/cuvs into d…
cjnolet Oct 3, 2024
ae24662
Adding cpp tutorial back
cjnolet Oct 3, 2024
97 changes: 23 additions & 74 deletions README.md
@@ -1,11 +1,7 @@
# <div align="left"><img src="https://rapids.ai/assets/images/rapids_logo.png" width="90px"/>&nbsp;cuVS: Vector Search and Clustering on the GPU</div>

> [!note]
> cuVS is a new library mostly derived from the approximate nearest neighbors and clustering algorithms in the [RAPIDS RAFT](https://github.com/rapidsai/raft) library of data mining primitives. RAPIDS RAFT currently contains the most fully-featured versions of the approximate nearest neighbors and clustering algorithms in cuVS. We are in the process of migrating the algorithms from RAFT to cuVS, but if you are unsure of which to use, please consider the following:
> 1. RAFT contains C++ and Python APIs for all of the approximate nearest neighbors and clustering algorithms.
> 2. cuVS has growing support for different languages, including C, C++, Python, and Rust. We will be adding more language support to cuVS in the future but will not be improving the language support for RAFT.
> 3. Once all of RAFT's approximate nearest neighbors and clustering algorithms are moved to cuVS, the RAFT APIs will be deprecated and eventually removed altogether. Once removed, RAFT will become a lightweight header-only library. In the meantime, there's no harm in using RAFT if support for additional languages is not needed.

> cuVS is a new library mostly derived from the approximate nearest neighbors and clustering algorithms in the [RAPIDS RAFT](https://github.com/rapidsai/raft) library of machine learning and data mining primitives. As of version 24.10 (released in October 2024), cuVS contains the most fully-featured versions of the approximate nearest neighbors and clustering algorithms from RAFT. The algorithms that have been migrated over to cuVS will be removed from RAFT in version 24.12 (to be released in December 2024).

## Contents

@@ -18,10 +14,11 @@

## Useful Resources

- [Documentation](https://docs.rapids.ai/api/cuvs/): Library documentation.
- [Build and Install Guide](https://docs.rapids.ai/api/cuvs/nightly/build): Instructions for installing and building cuVS.
- [Getting Started Guide](https://docs.rapids.ai/api/cuvs/nightly/getting_started): Guide to getting started with cuVS.
- [Code Examples](https://github.com/rapidsai/cuvs/tree/HEAD/examples): Self-contained Code Examples.
- [API Reference Documentation](https://docs.rapids.ai/api/cuvs/nightly/api_docs): API Documentation.
- [Getting Started Guide](https://docs.rapids.ai/api/cuvs/nightly/getting_started): Getting started with cuVS.
- [Build and Install Guide](https://docs.rapids.ai/api/cuvs/nightly/build): Instructions for installing and building cuVS.
- [RAPIDS Community](https://rapids.ai/community.html): Get help, contribute, and collaborate.
- [GitHub repository](https://github.com/rapidsai/cuvs): Download the cuVS source code.
- [Issue tracker](https://github.com/rapidsai/cuvs/issues): Report issues or request features.
@@ -32,32 +29,32 @@ cuVS contains state-of-the-art implementations of several algorithms for running

## Installing cuVS

cuVS comes with pre-built packages that can be installed through [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html#managing-python). Different packages are available for the different languages supported by cuVS:
cuVS comes with pre-built packages that can be installed through [conda](https://conda.io/projects/conda/en/latest/user-guide/getting-started.html#managing-python) and [pip](https://pip.pypa.io/en/stable/). Different packages are available for the different languages supported by cuVS:

| Python | C/C++ |
|--------|-----------------------------|
| `cuvs` | `libcuvs`, `libcuvs-static` |
| Python | C/C++ |
|--------|-----------|
| `cuvs` | `libcuvs` |

### Stable release

It is recommended to use [mamba](https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html) to install the desired packages. The following command will install the Python package. You can substitute `cuvs` for any of the packages in the table above:
It is recommended to use [mamba](https://conda.github.io/conda-libmamba-solver/user-guide/) to install the desired packages. The following command will install the Python package. You can substitute `cuvs` for any of the packages in the table above:

```bash
mamba install -c conda-forge -c nvidia -c rapidsai cuvs
conda install -c conda-forge -c nvidia -c rapidsai cuvs
```

### Nightlies
If installing a version that has not yet been released, the `rapidsai` channel can be replaced with `rapidsai-nightly`:

```bash
mamba install -c conda-forge -c nvidia -c rapidsai-nightly cuvs=24.10
conda install -c conda-forge -c nvidia -c rapidsai-nightly cuvs=24.10
```

Please see the [Build and Install Guide](https://docs.rapids.ai/api/cuvs/stable/build/) for more information on installing cuVS and building from source.
Please see the [Build and Install Guide](https://docs.rapids.ai/api/cuvs/nightly/build/) for more information on installing cuVS and building from source.

## Getting Started

The following code snippets train an approximate nearest neighbors index for the CAGRA algorithm.
The following code snippets train an approximate nearest neighbors index for the CAGRA algorithm in each of the languages supported by cuVS.
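As a point of reference for what these ANN indexes approximate, below is a minimal exact (brute-force) k-NN search in NumPy. This is purely illustrative and is not part of the cuVS API; CAGRA and the other cuVS algorithms return (approximately) the same neighbors far faster on the GPU for large datasets:

```python
import numpy as np

# Toy dataset: 1000 vectors of dimension 10 (real workloads are far larger).
rng = np.random.default_rng(0)
dataset = rng.random((1000, 10), dtype=np.float32)

# Query with the first 5 dataset vectors so the expected nearest neighbor is known.
queries = dataset[:5]
k = 4

# Squared Euclidean distance between every query and every dataset vector.
dists = ((queries[:, None, :] - dataset[None, :, :]) ** 2).sum(axis=-1)

# Exact top-k: indices of the k smallest distances per query.
neighbors = np.argsort(dists, axis=1)[:, :k]

# Each query's nearest neighbor is itself (distance 0).
print(neighbors[:, 0])  # -> [0 1 2 3 4]
```

An approximate index trades a small amount of recall against this exact result for a large speedup in build and search time.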

### Python API

@@ -85,7 +82,7 @@ cagra::index_params index_params;
auto index = cagra::build(res, index_params, dataset);
```

For more examples of the C++ APIs, refer to the [examples](https://github.com/rapidsai/cuvs/tree/HEAD/examples) directory in the codebase.
For more code examples of the C++ APIs, including drop-in CMake project templates, please refer to the [C++ examples](https://github.com/rapidsai/cuvs/tree/HEAD/examples) directory in the codebase.

### C API

@@ -110,6 +107,8 @@ cuvsCagraIndexParamsDestroy(index_params);
cuvsResourcesDestroy(res);
```

For more code examples of the C APIs, including drop-in CMake project templates, please refer to the [C examples](https://github.com/rapidsai/cuvs/tree/branch-24.10/examples/c) directory in the codebase.

### Rust API

```rust
@@ -171,67 +170,17 @@ fn cagra_example() -> Result<()> {
}
```

For more code examples of the Rust APIs, including a drop-in project template, please refer to the [Rust examples](https://github.com/rapidsai/cuvs/tree/branch-24.10/examples/rust).

## Contributing

If you are interested in contributing to the cuVS library, please read our [Contributing guidelines](docs/source/contributing.md). Refer to the [Developer Guide](docs/source/developer_guide.md) for details on the developer guidelines, workflows, and principles.

## References

When citing cuVS generally, please consider referencing this Github repository.
```bibtex
@misc{rapidsai,
title={Rapidsai/cuVS: Vector Search and Clustering on the GPU.},
url={https://github.com/rapidsai/cuvs},
journal={GitHub},
publisher={Nvidia RAPIDS},
author={Rapidsai},
year={2024}
}
```

If citing CAGRA, please consider the following bibtex:
```bibtex
@misc{ootomo2023cagra,
title={CAGRA: Highly Parallel Graph Construction and Approximate Nearest Neighbor Search for GPUs},
author={Hiroyuki Ootomo and Akira Naruse and Corey Nolet and Ray Wang and Tamas Feher and Yong Wang},
year={2023},
eprint={2308.15136},
archivePrefix={arXiv},
primaryClass={cs.DS}
}
```

If citing the k-selection routines, please consider the following bibtex:
```bibtex
@proceedings{10.1145/3581784,
title = {SC '23: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis},
year = {2023},
isbn = {9798400701092},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
abstract = {Started in 1988, the SC Conference has become the annual nexus for researchers and practitioners from academia, industry and government to share information and foster collaborations to advance the state of the art in High Performance Computing (HPC), Networking, Storage, and Analysis.},
location = {, Denver, CO, USA, }
}
```

If citing the nearest neighbors descent API, please consider the following bibtex:
```bibtex
@inproceedings{10.1145/3459637.3482344,
author = {Wang, Hui and Zhao, Wan-Lei and Zeng, Xiangxiang and Yang, Jianye},
title = {Fast K-NN Graph Construction by GPU Based NN-Descent},
year = {2021},
isbn = {9781450384469},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3459637.3482344},
doi = {10.1145/3459637.3482344},
abstract = {NN-Descent is a classic k-NN graph construction approach. It is still widely employed in machine learning, computer vision, and information retrieval tasks due to its efficiency and genericness. However, the current design only works well on CPU. In this paper, NN-Descent has been redesigned to adapt to the GPU architecture. A new graph update strategy called selective update is proposed. It reduces the data exchange between GPU cores and GPU global memory significantly, which is the processing bottleneck under GPU computation architecture. This redesign leads to full exploitation of the parallelism of the GPU hardware. In the meantime, the genericness, as well as the simplicity of NN-Descent, are well-preserved. Moreover, a procedure that allows to k-NN graph to be merged efficiently on GPU is proposed. It makes the construction of high-quality k-NN graphs for out-of-GPU-memory datasets tractable. Our approach is 100-250\texttimes{} faster than the single-thread NN-Descent and is 2.5-5\texttimes{} faster than the existing GPU-based approaches as we tested on million as well as billion scale datasets.},
booktitle = {Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
pages = {1929–1938},
numpages = {10},
keywords = {high-dimensional, nn-descent, gpu, k-nearest neighbor graph},
location = {Virtual Event, Queensland, Australia},
series = {CIKM '21}
}
```
For the interested reader, many of the accelerated implementations in cuVS are based on research papers that provide much more background. If you use these algorithms in your own research, please cite the corresponding papers:
- [CAGRA: Highly Parallel Graph Construction and Approximate Nearest Neighbor Search](https://arxiv.org/abs/2308.15136)
- [Top-K Algorithms on GPU: A Comprehensive Study and New Methods](https://dl.acm.org/doi/10.1145/3581784.3607062)
- [Fast K-NN Graph Construction by GPU Based NN-Descent](https://dl.acm.org/doi/abs/10.1145/3459637.3482344)
- [cuSLINK: Single-linkage Agglomerative Clustering on the GPU](https://arxiv.org/abs/2306.16354)
- [GPU Semiring Primitives for Sparse Neighborhood Methods](https://arxiv.org/abs/2104.06357)
76 changes: 71 additions & 5 deletions cpp/CMakeLists.txt
@@ -288,7 +288,7 @@ target_compile_options(
)

add_library(
cuvs SHARED
cuvs_objs OBJECT
src/cluster/kmeans_balanced_fit_float.cu
src/cluster/kmeans_fit_mg_float.cu
src/cluster/kmeans_fit_mg_double.cu
@@ -436,12 +436,67 @@ add_library(
src/stats/trustworthiness_score.cu
)

set_target_properties(
cuvs_objs
PROPERTIES CXX_STANDARD 17
CXX_STANDARD_REQUIRED ON
CUDA_STANDARD 17
CUDA_STANDARD_REQUIRED ON
POSITION_INDEPENDENT_CODE ON
)
target_compile_options(
cuvs_objs PRIVATE "$<$<COMPILE_LANGUAGE:CXX>:${CUVS_CXX_FLAGS}>"
"$<$<COMPILE_LANGUAGE:CUDA>:${CUVS_CUDA_FLAGS}>"
)
target_link_libraries(
cuvs_objs PUBLIC raft::raft rmm::rmm ${CUVS_CTK_MATH_DEPENDENCIES}
$<TARGET_NAME_IF_EXISTS:OpenMP::OpenMP_CXX>
)

add_library(cuvs SHARED $<TARGET_OBJECTS:cuvs_objs>)
add_library(cuvs_static STATIC $<TARGET_OBJECTS:cuvs_objs>)

target_compile_options(
cuvs INTERFACE $<$<COMPILE_LANG_AND_ID:CUDA,NVIDIA>:--expt-extended-lambda
--expt-relaxed-constexpr>
)

add_library(cuvs::cuvs ALIAS cuvs)
add_library(cuvs::cuvs_static ALIAS cuvs_static)

set_target_properties(
cuvs_static
PROPERTIES BUILD_RPATH "\$ORIGIN"
INSTALL_RPATH "\$ORIGIN"
CXX_STANDARD 17
CXX_STANDARD_REQUIRED ON
POSITION_INDEPENDENT_CODE ON
INTERFACE_POSITION_INDEPENDENT_CODE ON
EXPORT_NAME cuvs_static
)

target_compile_options(cuvs_static PRIVATE "$<$<COMPILE_LANGUAGE:CXX>:${CUVS_CXX_FLAGS}>")

target_include_directories(
cuvs_objs
PUBLIC "$<BUILD_INTERFACE:${DLPACK_INCLUDE_DIR}>"
"$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>"
INTERFACE "$<INSTALL_INTERFACE:include>"
)

target_include_directories(
cuvs_static
PUBLIC "$<BUILD_INTERFACE:${DLPACK_INCLUDE_DIR}>"
INTERFACE "$<INSTALL_INTERFACE:include>"
)

# ensure CUDA symbols aren't relocated to the middle of the debug build binaries
target_link_options(cuvs_static PRIVATE $<HOST_LINK:${CMAKE_CURRENT_BINARY_DIR}/fatbin.ld>)

target_include_directories(
cuvs_static PUBLIC "$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>"
"$<INSTALL_INTERFACE:include>"
)

target_include_directories(
cuvs PUBLIC "$<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>"
@@ -471,11 +526,17 @@ if(NOT BUILD_CPU_ONLY)
PUBLIC rmm::rmm raft::raft ${CUVS_CTK_MATH_DEPENDENCIES}
PRIVATE nvidia::cutlass::cutlass $<TARGET_NAME_IF_EXISTS:OpenMP::OpenMP_CXX> cuvs-cagra-search
)

target_link_libraries(
cuvs_static
PUBLIC rmm::rmm raft::raft ${CUVS_CTK_MATH_DEPENDENCIES}
PRIVATE nvidia::cutlass::cutlass $<TARGET_NAME_IF_EXISTS:OpenMP::OpenMP_CXX> cuvs-cagra-search
)
endif()

if(BUILD_CAGRA_HNSWLIB)
target_link_libraries(cuvs PRIVATE hnswlib::hnswlib)
target_compile_definitions(cuvs PUBLIC CUVS_BUILD_CAGRA_HNSWLIB)
target_link_libraries(cuvs_objs PRIVATE hnswlib::hnswlib)
target_compile_definitions(cuvs_objs PUBLIC CUVS_BUILD_CAGRA_HNSWLIB)
endif()

# Endian detection
@@ -557,11 +618,16 @@ if(BUILD_C_LIBRARY)
src/neighbors/ivf_flat_c.cpp
src/neighbors/ivf_pq_c.cpp
src/neighbors/cagra_c.cpp
src/neighbors/hnsw_c.cpp
$<$<BOOL:${BUILD_CAGRA_HNSWLIB}>:src/neighbors/hnsw_c.cpp>
src/neighbors/refine/refine_c.cpp
src/distance/pairwise_distance_c.cpp
)

if(BUILD_CAGRA_HNSWLIB)
target_link_libraries(cuvs_c PRIVATE hnswlib::hnswlib)
target_compile_definitions(cuvs_c PUBLIC CUVS_BUILD_CAGRA_HNSWLIB)
endif()

add_library(cuvs::c_api ALIAS cuvs_c)

set_target_properties(
@@ -600,7 +666,7 @@ include(GNUInstallDirs)
include(CPack)

install(
TARGETS cuvs
TARGETS cuvs cuvs_static cuvs-cagra-search
DESTINATION ${lib_dir}
COMPONENT cuvs
EXPORT cuvs-exports
8 changes: 4 additions & 4 deletions cpp/src/neighbors/detail/cagra/graph_core.cuh
@@ -475,12 +475,12 @@ void sort_knn_graph(
{
RAFT_EXPECTS(dataset.extent(0) == knn_graph.extent(0),
"dataset size is expected to match the knn graph index size");
const uint32_t dataset_size = dataset.extent(0);
const uint32_t dataset_dim = dataset.extent(1);
const uint64_t dataset_size = dataset.extent(0);
const uint64_t dataset_dim = dataset.extent(1);
const DataT* dataset_ptr = dataset.data_handle();

const IdxT graph_size = dataset_size;
const uint32_t input_graph_degree = knn_graph.extent(1);
const uint64_t input_graph_degree = knn_graph.extent(1);
IdxT* const input_graph_ptr = knn_graph.data_handle();

auto large_tmp_mr = raft::resource::get_large_workspace_resource(res);
@@ -528,7 +528,7 @@
kernel_sort = kern_sort<DataT, IdxT, numElementsPerThread>;
} else {
RAFT_FAIL(
"The degree of input knn graph is too large (%u). "
"The degree of input knn graph is too large (%lu). "
"It must be equal to or smaller than %d.",
input_graph_degree,
1024);
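The type widening in this hunk matters because the product `dataset_size * dataset_dim` can exceed the 32-bit range for large datasets, silently wrapping around. A short illustrative Python sketch of the failure mode being avoided (hypothetical sizes, not cuVS code):

```python
U32_MASK = 0xFFFFFFFF  # 2**32 - 1

def bytes_needed_u32(n_rows, n_cols, itemsize):
    """Size calculation as 32-bit unsigned arithmetic performs it: wraps on overflow."""
    return (n_rows * n_cols * itemsize) & U32_MASK

def bytes_needed_u64(n_rows, n_cols, itemsize):
    """Same calculation with 64-bit-width arithmetic: no wrap at these sizes."""
    return n_rows * n_cols * itemsize

# 3M float32 vectors of dimension 2000 need 24 GB, well past the 4 GiB uint32 limit.
rows, cols, itemsize = 3_000_000, 2_000, 4
print(bytes_needed_u64(rows, cols, itemsize))  # 24000000000
print(bytes_needed_u32(rows, cols, itemsize))  # wraps to 2525163520
```

With `uint32_t` the wrapped value would be used for allocations and indexing, which is why the graph optimizer's size calculations moved to 64-bit types.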
4 changes: 4 additions & 0 deletions cpp/test/CMakeLists.txt
@@ -174,6 +174,8 @@ if(BUILD_TESTS)

if(BUILD_CAGRA_HNSWLIB)
ConfigureTest(NAME NEIGHBORS_HNSW_TEST PATH neighbors/hnsw.cu GPUS 1 PERCENT 100)
target_link_libraries(NEIGHBORS_HNSW_TEST PRIVATE hnswlib::hnswlib)
target_compile_definitions(NEIGHBORS_HNSW_TEST PUBLIC CUVS_BUILD_CAGRA_HNSWLIB)
endif()

ConfigureTest(
@@ -227,6 +229,8 @@ if(BUILD_C_TESTS)

if(BUILD_CAGRA_HNSWLIB)
ConfigureTest(NAME HNSW_C_TEST PATH neighbors/ann_hnsw_c.cu C_LIB)
target_link_libraries(HNSW_C_TEST PRIVATE hnswlib::hnswlib)
target_compile_definitions(HNSW_C_TEST PUBLIC CUVS_BUILD_CAGRA_HNSWLIB)
endif()
endif()

File renamed without changes.
6 changes: 4 additions & 2 deletions docs/source/api_docs.rst
@@ -2,10 +2,12 @@ API Reference
=============

.. toctree::
:maxdepth: 1
:caption: Contents:
:maxdepth: 3

c_api.rst
cpp_api.rst
python_api.rst
rust_api/index.rst

* :ref:`genindex`
* :ref:`search`
14 changes: 7 additions & 7 deletions docs/source/build.rst
@@ -38,21 +38,21 @@ C, C++, and Python through Conda

The easiest way to install the pre-compiled C, C++, and Python packages is through conda. You can get a minimal conda installation with `miniforge <https://github.com/conda-forge/miniforge>`__.

Use the following commands, depending on your CUDA version, to install cuVS packages (replace `rapidsai` with `rapidsai-nightly` to install more up-to-date but less stable nightly packages). `mamba` is preferred over the `conda` command.
Use the following commands, depending on your CUDA version, to install cuVS packages (replace `rapidsai` with `rapidsai-nightly` to install more up-to-date but less stable nightly packages). `mamba` is preferred over the `conda` command and can be enabled using `this guide <https://conda.github.io/conda-libmamba-solver/user-guide/>`_.

C/C++ Package
~~~~~~~~~~~~~

.. code-block:: bash

mamba install -c rapidsai -c conda-forge -c nvidia libcuvs cuda-version=12.5
conda install -c rapidsai -c conda-forge -c nvidia libcuvs cuda-version=12.5

Python Package
~~~~~~~~~~~~~~

.. code-block:: bash

mamba install -c rapidsai -c conda-forge -c nvidia cuvs cuda-version=12.5
conda install -c rapidsai -c conda-forge -c nvidia cuvs cuda-version=12.5

Python through Pip
^^^^^^^^^^^^^^^^^^
@@ -97,15 +97,15 @@ Conda environment scripts are provided for installing the necessary dependencies

.. code-block:: bash

mamba env create --name cuvs -f conda/environments/all_cuda-125_arch-x86_64.yaml
mamba activate cuvs
conda env create --name cuvs -f conda/environments/all_cuda-125_arch-x86_64.yaml
conda activate cuvs

The process for building from source with CUDA 11 differs slightly in that your host system will also need to have a CUDA toolkit installed that is greater than, or equal to, the version you install into your conda environment. Installing the CUDA toolkit on your host system is necessary because `nvcc` is not provided with Conda's cudatoolkit dependencies for CUDA 11. The following example will create and install dependencies for a CUDA 11.8 conda environment:

.. code-block:: bash

mamba env create --name cuvs -f conda/environments/all_cuda-118_arch-x86_64.yaml
mamba activate cuvs
conda env create --name cuvs -f conda/environments/all_cuda-118_arch-x86_64.yaml
conda activate cuvs

The recommended way to build and install cuVS from source is to use the `build.sh` script in the root of the repository. This script can build both the C++ and Python artifacts and provides CMake options for building and installing the headers, tests, benchmarks, and the pre-compiled shared library.
