
Conversation


@danielenricocahall danielenricocahall commented Oct 23, 2025

Resolved minor bugs in #21514 to address #21468, fixing the discrepancy in discretization.

@gemini-code-assist

Summary of Changes

Hello @danielenricocahall, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a minor bug in the Discretization layer by removing an unnecessary dtype coercion. The change simplifies the layer's initialization logic and resolves a reported issue. To ensure the stability and correctness of this fix, a new test has been introduced to verify consistent behavior between direct model calls and predictions for the Discretization layer.

Highlights

  • Discretization Layer dtype Handling: The explicit dtype coercion logic within the Discretization layer's __init__ method has been removed. Previously, if dtype was None, it would be coerced to int64 or backend.floatx(). This change aligns with the fix for a minor bug related to dtype discrepancies.
  • New Consistency Test: A new test case, test_model_call_vs_predict_consistency, has been added to discretization_test.py. This test ensures that the Discretization layer produces consistent outputs when called directly via model(input) and when using model.predict(input).
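The consistency check the new test exercises can be sketched in isolation (a minimal sketch: `discretize` and `np.digitize` stand in for the layer's bucketing op and the two model execution paths, and are not the Keras source):

```python
import numpy as np

# Stand-in for the Discretization layer's bucketing op. The real test
# compares model(x) against model.predict(x) on an actual Keras model.
def discretize(x, bin_boundaries):
    return np.digitize(x, bin_boundaries)

x = np.array([[-1.5, 1.0, 3.4, 0.5]])
bins = [0.0, 1.0, 2.0]

eager_out = discretize(x, bins)    # analogue of model(x)
predict_out = discretize(x, bins)  # analogue of model.predict(x)

# The PR's new test asserts exactly this kind of agreement: same
# values and same dtype from both paths.
assert np.array_equal(eager_out, predict_out)
assert eager_out.dtype == predict_out.dtype
```

The point of the real test is that, before the fix, the symbolic (`predict`) path and the eager (`__call__`) path could resolve the layer's dtype differently.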


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix a dtype discrepancy in the Discretization layer. While the proposed change of removing the dtype coercion logic does resolve the inconsistency, it introduces a new issue where output_mode="int" produces float outputs, which is counter-intuitive. My review provides a suggestion to fix the original discrepancy while preserving the expected integer output type. I've also recommended strengthening the new test case to include dtype checks for better validation.
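The concern above, and one way to keep `output_mode="int"` integral while still fixing the discrepancy, can be sketched outside Keras (the function name, the `"int64"` choice, and the fallback dtype are illustrative assumptions, not the reviewer's exact patch):

```python
import numpy as np

def discretize(x, bin_boundaries, output_mode="int"):
    # Bin indices are inherently integral; a layer whose compute dtype is
    # floatx would silently return them as floats unless coerced back.
    indices = np.digitize(x, bin_boundaries)
    if output_mode == "int":
        return indices.astype("int64")
    # one_hot / multi_hot / count modes are naturally float-typed.
    return indices.astype("float32")

out = discretize(np.array([0.2, 1.7, 2.5]), [0.0, 1.0, 2.0])
assert out.dtype == np.int64
```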


codecov-commenter commented Oct 23, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.65%. Comparing base (47fcb39) to head (ebbb85c).
⚠️ Report is 2 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21769      +/-   ##
==========================================
- Coverage   82.69%   82.65%   -0.04%     
==========================================
  Files         573      577       +4     
  Lines       58888    59181     +293     
  Branches     9218     9273      +55     
==========================================
+ Hits        48696    48918     +222     
- Misses       7845     7887      +42     
- Partials     2347     2376      +29     
Flag               Coverage            Δ
keras              82.46% <100.00%>    (-0.04%) ⬇️
keras-jax          63.30% <100.00%>    (+0.06%) ⬆️
keras-numpy        57.54% <100.00%>    (-0.18%) ⬇️
keras-openvino     34.34% <50.00%>     (-0.07%) ⬇️
keras-tensorflow   64.07% <100.00%>    (+0.06%) ⬆️
keras-torch        63.62% <100.00%>    (+0.06%) ⬆️

Flags with carried forward coverage won't be shown.

@danielenricocahall

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively resolves a dtype discrepancy in the Discretization layer, particularly when output_mode is set to "int". The changes correctly introduce an output_dtype property to centralize dtype logic, ensuring consistent behavior between symbolic and eager execution paths. The removal of the manual dtype handling in __init__ and the subsequent updates in compute_output_spec and call are well-implemented. The addition of test_model_call_vs_predict_consistency is a great way to safeguard against future regressions. I've only found one minor issue in the new test file, which is a duplicated assertion.
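The centralized-dtype idea the review describes can be sketched as a small property (class name, attribute names, and the `"int64"`/`"float32"` defaults here are illustrative, not the merged Keras code):

```python
class DiscretizationSketch:
    """Toy model of routing all dtype decisions through one property."""

    def __init__(self, bin_boundaries, output_mode="int", dtype=None):
        self.bin_boundaries = bin_boundaries
        self.output_mode = output_mode
        self._user_dtype = dtype

    @property
    def output_dtype(self):
        # A user-supplied dtype always wins; otherwise "int" mode gets an
        # integer dtype and every other mode falls back to the float default.
        if self._user_dtype is not None:
            return self._user_dtype
        return "int64" if self.output_mode == "int" else "float32"

# Both compute_output_spec (symbolic path) and call (eager path) would
# consult the same property, so the two paths cannot drift apart.
layer = DiscretizationSketch([0.0, 1.0], output_mode="int")
assert layer.output_dtype == "int64"
```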


@fchollet fchollet left a comment


LGTM, thanks for the PR!

@google-ml-butler google-ml-butler bot added the kokoro:force-run and ready to pull (Ready to be merged into the codebase) labels Oct 24, 2025
@fchollet fchollet merged commit 1ba3b8f into keras-team:master Oct 24, 2025
8 checks passed

Labels

kokoro:force-run · ready to pull (Ready to be merged into the codebase) · size:S


4 participants