
SNMG ANN #231

Merged Oct 3, 2024 · 67 commits

Changes from 50 commits

Commits
cc1e45a
SNMG ANN
viclafargue Jul 18, 2024
279c345
nccl_clique as header
viclafargue Jul 18, 2024
b10d01d
update linking, build system and conda env
viclafargue Jul 18, 2024
d178155
Answered review
viclafargue Jul 19, 2024
4bc9d9c
Merge branch 'branch-24.08' into snmg-ann
viclafargue Jul 19, 2024
1459248
Apply review
viclafargue Jul 22, 2024
f3a65fc
Answer reviews + small changes
viclafargue Jul 25, 2024
ee2dcc3
Adding documentation
viclafargue Jul 26, 2024
5236cc2
Merge branch 'branch-24.08' into snmg-ann
viclafargue Jul 26, 2024
60bd621
removing unnecessary omp barriers
viclafargue Jul 29, 2024
17f62d2
int64_t change
viclafargue Jul 30, 2024
f523251
tree reduction merge implementation
viclafargue Jul 30, 2024
3e79a44
tree merge solidification
viclafargue Jul 31, 2024
d4cabe0
Adding bench code
viclafargue Aug 6, 2024
37f9755
Merge branch 'branch-24.08' into snmg-ann
viclafargue Aug 6, 2024
504b0c3
Auto max throughput for replicated search
viclafargue Aug 9, 2024
2d0a950
improve batching
viclafargue Aug 20, 2024
169eb15
branch-24.10 merge
viclafargue Sep 6, 2024
686f81d
answering reviews 1
viclafargue Sep 6, 2024
c8d3864
Updating params
viclafargue Sep 9, 2024
51291d8
iface free functions
viclafargue Sep 9, 2024
80cf875
free functions
viclafargue Sep 10, 2024
d60e583
NCCL clique from RAFT handle
viclafargue Sep 18, 2024
3419dfa
load balancing mechanism
viclafargue Sep 19, 2024
7970fdc
Merge branch 'branch-24.10' into snmg-ann
viclafargue Sep 19, 2024
6a220b5
update doc
viclafargue Sep 19, 2024
c5e955f
moving iface struct
viclafargue Sep 23, 2024
60fbef1
include fix
viclafargue Sep 23, 2024
5ea9b9b
small fixes
viclafargue Sep 24, 2024
8b0c8c7
RAFT handle update
viclafargue Sep 26, 2024
bcf97c9
RAFT handle update
viclafargue Sep 26, 2024
9418f7e
smallSearchBatchSize as constexpr
viclafargue Sep 27, 2024
fa457f4
Merge branch 'branch-24.10' into snmg-ann
viclafargue Sep 30, 2024
dc2ccdd
add half type
viclafargue Sep 30, 2024
ed68cd8
fix bench
viclafargue Sep 30, 2024
9e659c4
Update build system
viclafargue Oct 2, 2024
f3bc98a
update iface to only expose device-only search function
viclafargue Oct 2, 2024
d9a83e5
Adding replicated search mode (load-balancer and round-robin)
viclafargue Oct 2, 2024
e6a73c6
CAGRA bench consolidation
viclafargue Oct 2, 2024
d68f572
Adding --mg to conda recipes
viclafargue Oct 2, 2024
6a673c3
resolving merge conflict
viclafargue Oct 2, 2024
55fbb36
enable multi-GPU by default, add a CMake option to control it
jameslamb Oct 2, 2024
5649a49
empty commit to re-trigger CI
jameslamb Oct 2, 2024
a208d49
Merge branch 'branch-24.10' into snmg-ann
jameslamb Oct 2, 2024
e0c232a
revert CUVS_EXPLICIT_INSTANTIATE_ONLY re-introduction
jameslamb Oct 2, 2024
1a5a2f2
Merge branch 'snmg-ann' of github.com:viclafargue/cuvs into snmg-ann
jameslamb Oct 2, 2024
fef0fc9
Removing std comms
cjnolet Oct 2, 2024
c028dca
Remove UCP
cjnolet Oct 2, 2024
a43c4f9
Adding nccl to rapids_build
cjnolet Oct 2, 2024
3b2feb7
add back NCCL dependency, pin to NCCL>=2.19
jameslamb Oct 2, 2024
d77a4e9
Revert "Removing std comms"
cjnolet Oct 3, 2024
4af2c2e
Renaming comms source file
cjnolet Oct 3, 2024
cecb372
Merge branch 'snmg-ann' of github.com:viclafargue/cuvs into snmg-ann
cjnolet Oct 3, 2024
f7a73fd
Merge branch 'branch-24.10' into snmg-ann
cjnolet Oct 3, 2024
ceb6287
Adding ucp to cmakelists
cjnolet Oct 3, 2024
ce37b71
Merge branch 'snmg-ann' of github.com:viclafargue/cuvs into snmg-ann
cjnolet Oct 3, 2024
1f0f5e9
More renames
cjnolet Oct 3, 2024
cb8ed0c
Adding libucxx
cjnolet Oct 3, 2024
fe5b6f8
Adding ucxx
cjnolet Oct 3, 2024
e257282
Adding to run time
cjnolet Oct 3, 2024
b6cb776
Adding libucxx to libcuvs
cjnolet Oct 3, 2024
ac26507
use raw nccl calls
viclafargue Oct 3, 2024
4a10a6c
Removing ucp from cmake
cjnolet Oct 3, 2024
c9515d5
changing serialization path and disabling sharded mode testing
viclafargue Oct 3, 2024
d77704c
round robin check improvement + temporary disable of CAGRA
viclafargue Oct 3, 2024
c2c810c
Merge branch 'branch-24.10' into snmg-ann
viclafargue Oct 3, 2024
4e7398a
fix merge
viclafargue Oct 3, 2024
9 changes: 8 additions & 1 deletion build.sh
@@ -18,7 +18,7 @@ ARGS=$*
# scripts, and that this script resides in the repo dir!
REPODIR=$(cd $(dirname $0); pwd)

VALIDARGS="clean libcuvs python rust docs tests bench-ann examples --uninstall -v -g -n --compile-static-lib --allgpuarch --no-nvtx --show_depr_warn --incl-cache-stats --time -h"
VALIDARGS="clean libcuvs python rust docs tests bench-ann examples --uninstall -v -g -n --compile-static-lib --allgpuarch --no-mg --no-nvtx --show_depr_warn --incl-cache-stats --time -h"
HELP="$0 [<target> ...] [<flag> ...] [--cmake-args=\"<args>\"] [--cache-tool=<tool>] [--limit-tests=<targets>] [--limit-bench-ann=<targets>] [--build-metrics=<filename>]
where <target> is:
clean - remove all existing build artifacts and configuration (start over)
@@ -40,6 +40,7 @@ HELP="$0 [<target> ...] [<flag> ...] [--cmake-args=\"<args>\"] [--cache-tool=<to
--limit-tests - semicolon-separated list of test executables to compile (e.g. NEIGHBORS_TEST;CLUSTER_TEST)
--limit-bench-ann - semicolon-separated list of ann benchmark executables to compute (e.g. HNSWLIB_ANN_BENCH;RAFT_IVF_PQ_ANN_BENCH)
--allgpuarch - build for all supported GPU architectures
--no-mg - disable multi-GPU support
--no-nvtx - disable nvtx (profiling markers), but allow enabling it in downstream projects
--show_depr_warn - show cmake deprecation warnings
--build-metrics - filename for generating build metrics report for libcuvs
@@ -65,6 +66,7 @@ CMAKE_LOG_LEVEL=""
VERBOSE_FLAG=""
BUILD_ALL_GPU_ARCH=0
BUILD_TESTS=ON
BUILD_MG_ALGOS=ON
BUILD_TYPE=Release
COMPILE_LIBRARY=OFF
INSTALL_TARGET=install
@@ -261,6 +263,10 @@ if hasArg --allgpuarch; then
BUILD_ALL_GPU_ARCH=1
fi

if hasArg --no-mg; then
BUILD_MG_ALGOS=OFF
fi

if hasArg tests || (( ${NUMARGS} == 0 )); then
BUILD_TESTS=ON
CMAKE_TARGET="${CMAKE_TARGET};${TEST_TARGETS}"
@@ -353,6 +359,7 @@ if (( ${NUMARGS} == 0 )) || hasArg libcuvs || hasArg docs || hasArg tests || has
-DBUILD_C_TESTS=${BUILD_TESTS} \
-DBUILD_CUVS_BENCH=${BUILD_CUVS_BENCH} \
-DBUILD_CPU_ONLY=${BUILD_CPU_ONLY} \
-DBUILD_MG_ALGOS=${BUILD_MG_ALGOS} \
-DCMAKE_MESSAGE_LOG_LEVEL=${CMAKE_LOG_LEVEL} \
${CACHE_ARGS} \
${EXTRA_CMAKE_ARGS}
1 change: 1 addition & 0 deletions conda/environments/all_cuda-118_arch-aarch64.yaml
@@ -37,6 +37,7 @@ dependencies:
- libcusparse=11.7.5.86
- librmm==24.10.*,>=0.0.0a0
- make
- nccl>=2.19
- ninja
- numpy>=1.23,<3.0a0
- numpydoc
1 change: 1 addition & 0 deletions conda/environments/all_cuda-118_arch-x86_64.yaml
@@ -37,6 +37,7 @@ dependencies:
- libcusparse=11.7.5.86
- librmm==24.10.*,>=0.0.0a0
- make
- nccl>=2.19
- ninja
- numpy>=1.23,<3.0a0
- numpydoc
1 change: 1 addition & 0 deletions conda/environments/all_cuda-125_arch-aarch64.yaml
@@ -34,6 +34,7 @@ dependencies:
- libcusparse-dev
- librmm==24.10.*,>=0.0.0a0
- make
- nccl>=2.19
- ninja
- numpy>=1.23,<3.0a0
- numpydoc
1 change: 1 addition & 0 deletions conda/environments/all_cuda-125_arch-x86_64.yaml
@@ -34,6 +34,7 @@ dependencies:
- libcusparse-dev
- librmm==24.10.*,>=0.0.0a0
- make
- nccl>=2.19
- ninja
- numpy>=1.23,<3.0a0
- numpydoc
1 change: 1 addition & 0 deletions conda/environments/bench_ann_cuda-118_arch-aarch64.yaml
@@ -35,6 +35,7 @@ dependencies:
- libcusparse=11.7.5.86
- librmm==24.10.*,>=0.0.0a0
- matplotlib
- nccl>=2.19
- ninja
- nlohmann_json>=3.11.2
- nvcc_linux-aarch64=11.8
1 change: 1 addition & 0 deletions conda/environments/bench_ann_cuda-118_arch-x86_64.yaml
@@ -35,6 +35,7 @@ dependencies:
- libcusparse=11.7.5.86
- librmm==24.10.*,>=0.0.0a0
- matplotlib
- nccl>=2.19
- ninja
- nlohmann_json>=3.11.2
- nvcc_linux-64=11.8
1 change: 1 addition & 0 deletions conda/environments/bench_ann_cuda-125_arch-aarch64.yaml
@@ -32,6 +32,7 @@ dependencies:
- libcusparse-dev
- librmm==24.10.*,>=0.0.0a0
- matplotlib
- nccl>=2.19
- ninja
- nlohmann_json>=3.11.2
- openblas
1 change: 1 addition & 0 deletions conda/environments/bench_ann_cuda-125_arch-x86_64.yaml
@@ -32,6 +32,7 @@ dependencies:
- libcusparse-dev
- librmm==24.10.*,>=0.0.0a0
- matplotlib
- nccl>=2.19
- ninja
- nlohmann_json>=3.11.2
- openblas
3 changes: 3 additions & 0 deletions conda/recipes/libcuvs/conda_build_config.yaml
@@ -22,6 +22,9 @@ cmake_version:
h5py_version:
- ">=3.8.0"

nccl_version:
- ">=2.19"

# The CTK libraries below are missing from the conda-forge::cudatoolkit package
# for CUDA 11. The "*_host_*" version specifiers correspond to `11.8` packages
# and the "*_run_*" version specifiers correspond to `11.x` packages.
4 changes: 4 additions & 0 deletions conda/recipes/libcuvs/meta.yaml
@@ -65,6 +65,7 @@ outputs:
host:
- librmm ={{ minor_version }}
- libraft-headers ={{ minor_version }}
- nccl {{ nccl_version }}
jameslamb (Member) commented:
I just pushed a commit adding a NCCL host dependency for cuVS (which it didn't have before) and pinning it to nccl>=2.19 everywhere.

Context:

[200/318] Building CUDA object CMakeFiles/cuvs.dir/src/neighbors/mg/mg_flat_float_int64_t.cu.o

/include/raft/comms/nccl_clique.hpp:22:10: fatal error: nccl.h: No such file or directory
   22 | #include <nccl.h>
      |          ^~~~~~~~

In the interest of time (we're very close to code freeze), I just added this host dependency in all of libcuvs-examples, libcuvs-static, libcuvs, and libcuvs-tests.

cc @jakirkham @vyasr for awareness

A maintainer (Member) replied:
Makes sense. Thanks James! 🙏

To reflect this, I updated this comment: rapidsai/build-planning#102 (comment)

Please feel free to edit that further

A contributor replied:
Thanks for the heads-up James.
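
An illustrative aside (not part of this PR): the >=2.19 pin is enforced when packages are resolved, but it can also be sanity-checked at runtime with NCCL's own API. A minimal sketch, assuming only that nccl.h is on the include path; ncclGetVersion() encodes the version as major*10000 + minor*100 + patch:

// nccl_version_check.cpp — illustrative sketch, not from this PR
#include <nccl.h>
#include <cstdio>

int main() {
  int version = 0;
  ncclGetVersion(&version);  // e.g. 21903 for NCCL 2.19.3
  std::printf("NCCL %d.%d.%d\n", version / 10000, (version / 100) % 100, version % 100);
  return (version >= 21900) ? 0 : 1;  // non-zero exit if older than the 2.19 pin
}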

- cuda-version ={{ cuda_version }}
{% if cuda_major == "11" %}
- cuda-profiler-api {{ cuda11_cuda_profiler_api_host_version }}
@@ -131,6 +132,7 @@ outputs:
host:
- librmm ={{ minor_version }}
- libraft-headers ={{ minor_version }}
- nccl {{ nccl_version }}
- cuda-version ={{ cuda_version }}
{% if cuda_major == "11" %}
- cuda-profiler-api {{ cuda11_cuda_profiler_api_host_version }}
@@ -197,6 +199,7 @@ outputs:
host:
- librmm ={{ minor_version }}
- libraft-headers ={{ minor_version }}
- nccl {{ nccl_version }}
- {{ pin_subpackage('libcuvs', exact=True) }}
- cuda-version ={{ cuda_version }}
- openblas # required by some CPU algos in benchmarks
@@ -268,6 +271,7 @@ outputs:
host:
- librmm ={{ minor_version }}
- libraft-headers ={{ minor_version }}
- nccl {{ nccl_version }}
- {{ pin_subpackage('libcuvs', exact=True) }}
- cuda-version ={{ cuda_version }}
{% if cuda_major == "11" %}
39 changes: 39 additions & 0 deletions cpp/CMakeLists.txt
@@ -57,6 +57,7 @@ option(BUILD_C_LIBRARY "Build cuVS C API library" OFF)
option(BUILD_C_TESTS "Build cuVS C API tests" OFF)
option(BUILD_CUVS_BENCH "Build cuVS ann benchmarks" OFF)
option(BUILD_CAGRA_HNSWLIB "Build CAGRA+hnswlib interface" ON)
option(BUILD_MG_ALGOS "Build with multi-GPU support" ON)
jameslamb (Member) commented:
Summarizing our offline conversation here (somewhere threaded, that could be marked Resolved and collapsed).

I just pushed 55fbb36, which does the following:

  • makes multi-GPU support on by default for wheel and conda builds
  • adds a build.sh option, --no-mg, to run a build with it not enabled
  • adds a CMake option here, making the default explicit in CMake (so that, for example, wheels are built with this support enabled by default) — a usage sketch follows below
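
As a sketch of how these pieces fit together (commands assume the build.sh and CMake changes shown in this PR; the direct cmake line assumes the usual cpp/ source layout):

# default: multi-GPU algorithms compiled in (BUILD_MG_ALGOS=ON)
./build.sh libcuvs
# opt out: sets BUILD_MG_ALGOS=OFF and passes -DBUILD_MG_ALGOS=OFF to CMake
./build.sh libcuvs --no-mg
# equivalent direct CMake configure, without build.sh
cmake -S cpp -B cpp/build -DBUILD_MG_ALGOS=OFF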

option(CUDA_ENABLE_KERNELINFO "Enable kernel resource usage info" OFF)
option(CUDA_ENABLE_LINEINFO
"Enable the -lineinfo option for nvcc (useful for cuda-memcheck / profiler)" OFF
@@ -287,6 +288,23 @@ target_compile_options(
"$<$<COMPILE_LANGUAGE:CUDA>:${CUVS_CUDA_FLAGS}>"
)

if(BUILD_MG_ALGOS)
set(CUVS_MG_ALGOS
src/neighbors/mg/mg_flat_float_int64_t.cu
src/neighbors/mg/mg_flat_int8_t_int64_t.cu
src/neighbors/mg/mg_flat_uint8_t_int64_t.cu
src/neighbors/mg/mg_pq_float_int64_t.cu
src/neighbors/mg/mg_pq_half_int64_t.cu
src/neighbors/mg/mg_pq_int8_t_int64_t.cu
src/neighbors/mg/mg_pq_uint8_t_int64_t.cu
src/neighbors/mg/mg_cagra_float_uint32_t.cu
src/neighbors/mg/mg_cagra_half_uint32_t.cu
src/neighbors/mg/mg_cagra_int8_t_uint32_t.cu
src/neighbors/mg/mg_cagra_uint8_t_uint32_t.cu
src/neighbors/mg/omp_checks.cu
)
endif()

add_library(
cuvs SHARED
src/cluster/kmeans_balanced_fit_float.cu
@@ -367,6 +385,17 @@ add_library(
src/neighbors/cagra_serialize_half.cu
src/neighbors/cagra_serialize_int8.cu
src/neighbors/cagra_serialize_uint8.cu
src/neighbors/iface/iface_cagra_float_uint32_t.cu
src/neighbors/iface/iface_cagra_half_uint32_t.cu
src/neighbors/iface/iface_cagra_int8_t_uint32_t.cu
src/neighbors/iface/iface_cagra_uint8_t_uint32_t.cu
src/neighbors/iface/iface_flat_float_int64_t.cu
src/neighbors/iface/iface_flat_int8_t_int64_t.cu
src/neighbors/iface/iface_flat_uint8_t_int64_t.cu
src/neighbors/iface/iface_pq_float_int64_t.cu
src/neighbors/iface/iface_pq_half_int64_t.cu
src/neighbors/iface/iface_pq_int8_t_int64_t.cu
src/neighbors/iface/iface_pq_uint8_t_int64_t.cu
src/neighbors/detail/cagra/cagra_build.cpp
src/neighbors/detail/cagra/topk_for_cagra/topk.cu
$<$<BOOL:${BUILD_CAGRA_HNSWLIB}>:src/neighbors/hnsw.cpp>
@@ -428,8 +457,13 @@ add_library(
src/selection/select_k_half_uint32_t.cu
src/stats/silhouette_score.cu
src/stats/trustworthiness_score.cu
${CUVS_MG_ALGOS}
)

if(BUILD_MG_ALGOS)
target_compile_definitions(cuvs PUBLIC CUVS_BUILD_MG_ALGOS)
endif()

target_compile_options(
cuvs INTERFACE $<$<COMPILE_LANG_AND_ID:CUDA,NVIDIA>:--expt-extended-lambda
--expt-relaxed-constexpr>
@@ -459,11 +493,16 @@ if(NOT BUILD_CPU_ONLY)
${CUVS_CUSPARSE_DEPENDENCY} ${CUVS_CURAND_DEPENDENCY}
)

if(BUILD_MG_ALGOS)
set(CUVS_COMMS_DEPENDENCY nccl)
endif()

# Keep cuVS as lightweight as possible. Only CUDA libs and rmm should be used in global target.
target_link_libraries(
cuvs
PUBLIC rmm::rmm raft::raft ${CUVS_CTK_MATH_DEPENDENCIES}
PRIVATE nvidia::cutlass::cutlass $<TARGET_NAME_IF_EXISTS:OpenMP::OpenMP_CXX> cuvs-cagra-search
${CUVS_COMMS_DEPENDENCY}
)
endif()

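Because CUVS_BUILD_MG_ALGOS is declared PUBLIC on the cuvs target above, it propagates to consumers that link against cuvs. A minimal sketch (illustrative only, not code from this PR) of guarding a multi-GPU code path on it:

// mg_guard_example.cpp — illustrative sketch, not from this PR
#include <cstdio>

// Returns true when libcuvs was built with the multi-GPU (SNMG) algorithms.
bool multi_gpu_available() {
#ifdef CUVS_BUILD_MG_ALGOS
  return true;   // mg/ sources were compiled into libcuvs
#else
  return false;  // built with -DBUILD_MG_ALGOS=OFF (e.g. ./build.sh --no-mg)
#endif
}

int main() { std::printf("multi-GPU support: %s\n", multi_gpu_available() ? "yes" : "no"); }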
18 changes: 18 additions & 0 deletions cpp/bench/ann/CMakeLists.txt
@@ -32,6 +32,7 @@ option(CUVS_ANN_BENCH_USE_CUVS_BRUTE_FORCE "Include cuVS brute force knn in benc
option(CUVS_ANN_BENCH_USE_CUVS_CAGRA_HNSWLIB "Include cuVS CAGRA with HNSW search in benchmark" ON)
option(CUVS_ANN_BENCH_USE_HNSWLIB "Include hnsw algorithm in benchmark" ON)
option(CUVS_ANN_BENCH_USE_GGNN "Include ggnn algorithm in benchmark" OFF)
option(CUVS_ANN_BENCH_USE_CUVS_MG "Include cuVS ann mg algorithm in benchmark" ${BUILD_MG_ALGOS})
option(CUVS_ANN_BENCH_SINGLE_EXE
"Make a single executable with benchmark as shared library modules" OFF
)
@@ -55,6 +56,7 @@ if(BUILD_CPU_ONLY)
set(CUVS_ANN_BENCH_USE_CUVS_CAGRA_HNSWLIB OFF)
set(CUVS_ANN_BENCH_USE_GGNN OFF)
set(CUVS_KNN_BENCH_USE_CUVS_BRUTE_FORCE OFF)
set(CUVS_ANN_BENCH_USE_CUVS_MG OFF)
else()
set(CUVS_FAISS_ENABLE_GPU ON)
endif()
@@ -66,6 +68,7 @@ if(CUVS_ANN_BENCH_USE_CUVS_IVF_PQ
OR CUVS_ANN_BENCH_USE_CUVS_CAGRA
OR CUVS_ANN_BENCH_USE_CUVS_CAGRA_HNSWLIB
OR CUVS_KNN_BENCH_USE_CUVS_BRUTE_FORCE
OR CUVS_ANN_BENCH_USE_CUVS_MG
)
set(CUVS_ANN_BENCH_USE_CUVS ON)
endif()
@@ -245,6 +248,21 @@ if(CUVS_ANN_BENCH_USE_CUVS_CAGRA_HNSWLIB)
)
endif()

if(CUVS_ANN_BENCH_USE_CUVS_MG)
ConfigureAnnBench(
NAME
CUVS_MG
PATH
src/cuvs/cuvs_benchmark.cu
$<$<BOOL:${CUVS_ANN_BENCH_USE_CUVS_MG}>:src/cuvs/cuvs_mg_ivf_flat.cu>
$<$<BOOL:${CUVS_ANN_BENCH_USE_CUVS_MG}>:src/cuvs/cuvs_mg_ivf_pq.cu>
$<$<BOOL:${CUVS_ANN_BENCH_USE_CUVS_MG}>:src/cuvs/cuvs_mg_cagra.cu>
LINKS
cuvs
nccl
)
endif()

message("CUVS_FAISS_TARGETS: ${CUVS_FAISS_TARGETS}")
message("CUDAToolkit_LIBRARY_DIR: ${CUDAToolkit_LIBRARY_DIR}")
if(CUVS_ANN_BENCH_USE_FAISS_CPU_FLAT)
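Assuming ConfigureAnnBench(NAME CUVS_MG ...) follows the <NAME>_ANN_BENCH target convention referenced in build.sh's --limit-bench-ann help text (HNSWLIB_ANN_BENCH, RAFT_IVF_PQ_ANN_BENCH), the new benchmark could be built in isolation with something like:

# hypothetical invocation; the CUVS_MG_ANN_BENCH target name is inferred from
# the ConfigureAnnBench NAME above, not confirmed by this PR
./build.sh bench-ann --limit-bench-ann=CUVS_MG_ANN_BENCH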
18 changes: 15 additions & 3 deletions cpp/bench/ann/src/cuvs/cuvs_ann_bench_param_parser.h
@@ -45,7 +45,18 @@ extern template class cuvs::bench::cuvs_cagra<uint8_t, uint32_t>;
extern template class cuvs::bench::cuvs_cagra<int8_t, uint32_t>;
#endif

#ifdef CUVS_ANN_BENCH_USE_CUVS_IVF_FLAT
#ifdef CUVS_ANN_BENCH_USE_CUVS_MG
#include "cuvs_ivf_flat_wrapper.h"
#include "cuvs_mg_ivf_flat_wrapper.h"

#include "cuvs_ivf_pq_wrapper.h"
#include "cuvs_mg_ivf_pq_wrapper.h"

#include "cuvs_cagra_wrapper.h"
#include "cuvs_mg_cagra_wrapper.h"
#endif

#if defined(CUVS_ANN_BENCH_USE_CUVS_IVF_FLAT) || defined(CUVS_ANN_BENCH_USE_CUVS_MG)
template <typename T, typename IdxT>
void parse_build_param(const nlohmann::json& conf,
typename cuvs::bench::cuvs_ivf_flat<T, IdxT>::build_param& param)
@@ -64,7 +75,7 @@ void parse_search_param(const nlohmann::json& conf,
#endif

#if defined(CUVS_ANN_BENCH_USE_CUVS_IVF_PQ) || defined(CUVS_ANN_BENCH_USE_CUVS_CAGRA) || \
defined(CUVS_ANN_BENCH_USE_CUVS_CAGRA_HNSWLIB)
defined(CUVS_ANN_BENCH_USE_CUVS_CAGRA_HNSWLIB) || defined(CUVS_ANN_BENCH_USE_CUVS_MG)
template <typename T, typename IdxT>
void parse_build_param(const nlohmann::json& conf,
typename cuvs::bench::cuvs_ivf_pq<T, IdxT>::build_param& param)
@@ -130,7 +141,8 @@ void parse_search_param(const nlohmann::json& conf,
}
#endif

#if defined(CUVS_ANN_BENCH_USE_CUVS_CAGRA) || defined(CUVS_ANN_BENCH_USE_CUVS_CAGRA_HNSWLIB)
#if defined(CUVS_ANN_BENCH_USE_CUVS_CAGRA) || defined(CUVS_ANN_BENCH_USE_CUVS_CAGRA_HNSWLIB) || \
defined(CUVS_ANN_BENCH_USE_CUVS_MG)
template <typename T, typename IdxT>
void parse_build_param(const nlohmann::json& conf, cuvs::neighbors::nn_descent::index_params& param)
{