
Conversation

@gemini-code-assist
Contributor

Summary of Changes

Hello @DarkSharpness, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the FFI layer by introducing a system for mapping data types between Python and C++. It adds a C++ dtype_trait for direct conversion to DLPack types and a Python utility that translates Python dtypes into their C++ string equivalents, with backend detection for CPU, CUDA, and ROCm. This streamlines data exchange and improves compatibility across hardware platforms within the FFI framework.

Highlights

  • New C++ Dtype Trait: Introduced a new C++ header include/tvm/ffi/extra/dtype.h that defines a dtype_trait template. This trait allows for mapping various C++ data types, including standard integers, floats, booleans, and specialized CUDA/HIP types (like __half, __nv_bfloat16, __hip_bfloat16, and various FP8/FP4 types), to their corresponding DLPack DLDataType representations.
  • Python-side Dtype Mapping Utility: Added a new Python module python/tvm_ffi/cpp/dtype.py which provides a to_cpp_dtype function. This utility converts Python dtype strings (or torch.dtype objects) into their equivalent C++ string representations, supporting CPU, CUDA, and ROCm backends by dynamically detecting the available hardware.
  • Enhanced FFI Interoperability: The changes collectively improve the Foreign Function Interface (FFI) by providing a standardized and extensible way to handle data type conversions between Python and C++, crucial for seamless integration with DLPack and supporting diverse hardware accelerators.
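As an illustrative sketch of the trait pattern described above (the struct name `DLDataTypeSketch` and its fields are stand-ins mirroring the DLPack convention, not the PR's actual header), a `dtype_trait`-style mapping might look like:

```cpp
#include <cstdint>

// Minimal stand-in for DLPack's DLDataType (field layout follows the
// DLPack spec: a type code, bits per lane, and a lane count).
struct DLDataTypeSketch {
  uint8_t code;    // DLPack codes: kDLInt = 0, kDLUInt = 1, kDLFloat = 2, kDLBool = 6
  uint8_t bits;    // bits per lane
  uint16_t lanes;  // 1 for scalar types
};

// Primary template intentionally left undefined: using an unmapped type
// is a compile-time error.
template <typename T>
struct dtype_trait;

template <>
struct dtype_trait<int32_t> {
  static constexpr DLDataTypeSketch value{0, 32, 1};  // kDLInt, 32 bits
};

template <>
struct dtype_trait<float> {
  static constexpr DLDataTypeSketch value{2, 32, 1};  // kDLFloat, 32 bits
};

template <>
struct dtype_trait<bool> {
  static constexpr DLDataTypeSketch value{6, 8, 1};   // kDLBool, stored in 8 bits
};
```

Specializations for vendor types such as __nv_bfloat16 or __hip_fp8_e4m3 would follow the same pattern, guarded by the corresponding CUDA/HIP headers.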



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces C++ dtype_trait for mapping C++ types to DLPack dtypes and a corresponding Python utility to_cpp_dtype to convert Python dtype strings to their C++ equivalents. The changes are well-structured, adding support for CPU, CUDA, and ROCm dtypes. The C++ header uses template specializations effectively, and the Python module correctly detects the backend using torch. My feedback includes a couple of minor suggestions to improve code clarity and maintainability by removing an informal comment and some commented-out code.


// HIP
struct __hip_bfloat16;
struct hip_bfloat16; // i don't know why this is a struct instead of alias...


Severity: medium

This comment appears to be a personal note and is a bit informal for source code. It would be best to remove it to maintain a professional tone in the codebase.

Suggested change
- struct hip_bfloat16; // i don't know why this is a struct instead of alias...
+ struct hip_bfloat16;

Comment on lines +41 to +45
"float8_e4m3fn": "__nv_fp8_e4m3",
# "float8_e4m3fnuz": "__nv_fp8_e4m3",
"float8_e5m2": "__nv_fp8_e5m2",
# "float8_e5m2fnuz": "__nv_fp8_e5m2",
"float8_e8m0fnu": "__nv_fp8_e8m0",


Severity: medium

The commented-out lines for float8_e4m3fnuz and float8_e5m2fnuz can be confusing. If these dtypes are not supported for the CUDA backend, it's cleaner to remove these lines entirely. This avoids ambiguity for future maintainers about whether this is an incomplete feature or an intentional omission.

Suggested change
  "float8_e4m3fn": "__nv_fp8_e4m3",
- # "float8_e4m3fnuz": "__nv_fp8_e4m3",
  "float8_e5m2": "__nv_fp8_e5m2",
- # "float8_e5m2fnuz": "__nv_fp8_e5m2",
  "float8_e8m0fnu": "__nv_fp8_e8m0",

Member

@junrushao junrushao left a comment


I like this change. LGTM!

@junrushao
Member

Looks like there are a bunch of errors from:

[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:124: error: Compound tvm_ffi::dtype_trait< __half > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:161: error: Compound tvm_ffi::dtype_trait< __hip_bfloat16 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:191: error: Compound tvm_ffi::dtype_trait< __hip_fp4_e2m1 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:196: error: Compound tvm_ffi::dtype_trait< __hip_fp4x2_e2m1 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:171: error: Compound tvm_ffi::dtype_trait< __hip_fp8_e4m3 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:176: error: Compound tvm_ffi::dtype_trait< __hip_fp8_e4m3_fnuz > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:181: error: Compound tvm_ffi::dtype_trait< __hip_fp8_e5m2 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:186: error: Compound tvm_ffi::dtype_trait< __hip_fp8_e5m2_fnuz > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:129: error: Compound tvm_ffi::dtype_trait< __nv_bfloat16 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:149: error: Compound tvm_ffi::dtype_trait< __nv_fp4_e2m1 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:154: error: Compound tvm_ffi::dtype_trait< __nv_fp4x2_e2m1 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:134: error: Compound tvm_ffi::dtype_trait< __nv_fp8_e4m3 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:139: error: Compound tvm_ffi::dtype_trait< __nv_fp8_e5m2 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:144: error: Compound tvm_ffi::dtype_trait< __nv_fp8_e8m0 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:117: error: Compound tvm_ffi::dtype_trait< bool > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:166: error: Compound tvm_ffi::dtype_trait< hip_bfloat16 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:125: error: Member value (variable) of struct tvm_ffi::dtype_trait< __half > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:162: error: Member value (variable) of struct tvm_ffi::dtype_trait< __hip_bfloat16 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:192: error: Member value (variable) of struct tvm_ffi::dtype_trait< __hip_fp4_e2m1 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:197: error: Member value (variable) of struct tvm_ffi::dtype_trait< __hip_fp4x2_e2m1 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:172: error: Member value (variable) of struct tvm_ffi::dtype_trait< __hip_fp8_e4m3 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:177: error: Member value (variable) of struct tvm_ffi::dtype_trait< __hip_fp8_e4m3_fnuz > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:182: error: Member value (variable) of struct tvm_ffi::dtype_trait< __hip_fp8_e5m2 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:187: error: Member value (variable) of struct tvm_ffi::dtype_trait< __hip_fp8_e5m2_fnuz > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:130: error: Member value (variable) of struct tvm_ffi::dtype_trait< __nv_bfloat16 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:150: error: Member value (variable) of struct tvm_ffi::dtype_trait< __nv_fp4_e2m1 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:155: error: Member value (variable) of struct tvm_ffi::dtype_trait< __nv_fp4x2_e2m1 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:135: error: Member value (variable) of struct tvm_ffi::dtype_trait< __nv_fp8_e4m3 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:140: error: Member value (variable) of struct tvm_ffi::dtype_trait< __nv_fp8_e5m2 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:145: error: Member value (variable) of struct tvm_ffi::dtype_trait< __nv_fp8_e8m0 > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:118: error: Member value (variable) of struct tvm_ffi::dtype_trait< bool > is not documented.
[~] /home/runner/work/tvm-ffi/tvm-ffi/include/tvm/ffi/extra/dtype.h:167: error: Member value (variable) of struct tvm_ffi::dtype_trait< hip_bfloat16 > is not documented.

You may suppress Doxygen generation via

/// \cond Doxygen_Suppress
/// \endcond

if the documentation would otherwise be redundant.
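The suppression suggested above can be sketched as follows (an illustrative fragment, not the PR's actual header; the trait names mirror the PR but the members are simplified). Doxygen documents the primary template but skips everything between the markers, which silences the "not documented" errors for the specializations:

```cpp
#include <cstdint>

/// \brief Maps a C++ type to its DLPack data type (documented primary template).
template <typename T>
struct dtype_trait;

/// \cond Doxygen_Suppress
template <>
struct dtype_trait<bool> {
  static constexpr uint8_t bits = 8;   // hidden from the generated docs
};

template <>
struct dtype_trait<float> {
  static constexpr uint8_t bits = 32;  // hidden from the generated docs
};
/// \endcond
```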

@junrushao junrushao merged commit c51e519 into apache:main Jan 1, 2026
7 checks passed
@DarkSharpness DarkSharpness deleted the cpp_dtype branch January 3, 2026 07:32