Conversation


@jikechao jikechao commented Oct 21, 2025

This PR addresses the issue where tvm.tir.exp does not support integer types (e.g., int32, int64): applying it to an integer expression causes an InternalError during LLVM code generation. The issue arises because the llvm.exp intrinsic expects floating-point inputs, but no type conversion is performed for integer inputs.

I opened this PR to solve it via type conversion. This change aligns the behavior of tir.exp with libraries like PyTorch and NumPy, which implicitly convert integer inputs to floating-point types for their exponential functions.
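For context, NumPy already performs this implicit promotion: its exp ufunc has no integer loop, so integer input is promoted to a floating-point dtype before the exponential is computed. A quick check (assuming NumPy is installed):

```python
import numpy as np

# np.exp has no integer kernel, so the int32 input is
# promoted to a floating-point dtype before evaluation.
x = np.array([0, 1, 2], dtype=np.int32)
y = np.exp(x)

print(y.dtype)  # float64 -- the result is floating point, not int32
```

PyTorch's `torch.exp` behaves analogously, which is the parity this PR aims for.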

@gemini-code-assist
Summary of Changes

Hello @jikechao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the tvm.tir.exp operator to gracefully handle integer input types. Previously, attempting to use exp with integers would result in an InternalError due to the underlying llvm.exp intrinsic expecting floating-point values. The change introduces an automatic type conversion from integer to float32, aligning the operator's behavior with common numerical libraries and ensuring robust exponential calculations across different data types.

Highlights

  • Integer Support for tir.exp: The tvm.tir.exp operator now supports integer input types by implicitly converting them to float32 before computation, resolving an InternalError during LLVM code generation.
  • Type Conversion Logic: A conditional type cast has been introduced in the exp function within python/tvm/tir/op.py to perform the int to float32 conversion.
  • New Test Case: A new test entry has been added to tests/python/relax/test_transform_legalize_ops_unary.py to specifically verify the correct behavior of R.exp with int32 inputs.
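The conditional cast described in the highlights can be sketched in plain Python. This is an illustrative model only, not the actual code in python/tvm/tir/op.py; the `dtype` strings and the two-value return are hypothetical stand-ins for TVM's expression types:

```python
import math

def exp_sketch(value, dtype):
    """Sketch of the conditional cast: integer dtypes are
    promoted to float32 before the exponential is taken."""
    if dtype.startswith(("int", "uint")):
        dtype = "float32"      # implicit int -> float32 cast
        value = float(value)
    return math.exp(value), dtype

result, out_dtype = exp_sketch(1, "int32")
print(out_dtype)  # float32 -- integer input now yields a float result
```

In the real operator the cast would be applied to the TIR expression (so it is lowered before the llvm.exp intrinsic is emitted), rather than to a Python scalar as above.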

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for integer types to tvm.tir.exp by casting them to a floating-point type. This is a good improvement that aligns tvm.tir.exp with the behavior of libraries like NumPy and PyTorch.

My main feedback is to consider casting to float64 instead of float32 to avoid potential overflow and precision loss, especially when the input is int64. I've also suggested updating the docstring for tir.exp to document the new casting behavior and enhancing the tests to include int64 inputs.
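The precision concern can be demonstrated with the standard library alone: float32 has a 24-bit significand, so integers above 2^24 are no longer represented exactly, while float64 (Python's native float) still holds them. The `to_f32` helper below is only for illustration:

```python
import struct

def to_f32(x):
    """Round-trip a number through IEEE 754 binary32 (float32)."""
    return struct.unpack("f", struct.pack("f", float(x)))[0]

big = 2**24 + 1  # 16777217, easily representable in int32/int64

print(to_f32(big))  # 16777216.0 -- precision lost in float32
print(float(big))   # 16777217.0 -- exact in float64
```

Casting int64 input to float32 therefore silently rounds large values before the exponential is even computed, which motivates the float64 suggestion.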

With these changes, this will be a solid contribution.

jikechao and others added 2 commits October 21, 2025 21:29
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@jikechao
Copy link
Member Author

@tvm-bot rerun

Removed redundant 'exp' test cases for int32 and int64.
@jikechao jikechao requested review from Lunderberg and tqchen October 22, 2025 01:44
@jikechao jikechao closed this by deleting the head repository Oct 24, 2025