
Conversation

@Shekar-77

This PR improves backend compatibility checks for DatasetAdapter.
It raises clear errors when a backend (e.g., PyTorch) is used with an incompatible dataset type (e.g., TensorFlow or JAX).

Since Keras 3 supports multiple backends through a single set of backend-agnostic operations, this PR ensures that the dataset type matches the active backend.
For example:

- Use tf.data.Dataset with the TensorFlow backend

- Use torch.utils.data.DataLoader with the PyTorch backend

- Use JAX datasets with the JAX backend

Changes

Added backend-type validation to DatasetAdapter.__init__ in:

- tf_dataset_adapter.py

- py_dataset_adapter.py

- torch_data_loader_adapter.py
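The validation added to each adapter's __init__ can be sketched as a small standalone helper (illustrative only: the function name and the compatibility table below are assumptions for this sketch, not the actual Keras adapter code):

```python
# Sketch of the kind of backend/dataset validation this PR describes.
# The function name and compatibility table are hypothetical.
def check_backend_compatibility(dataset_type, active_backend):
    """Raise a clear ValueError on a backend/dataset mismatch."""
    compatible = {
        "tf.data.Dataset": ("tensorflow",),
        "torch.utils.data.DataLoader": ("torch",),
    }
    allowed = compatible.get(dataset_type)
    if allowed is not None and active_backend not in allowed:
        raise ValueError(
            f"{dataset_type} is incompatible with the "
            f"'{active_backend}' backend; compatible backends: "
            f"{', '.join(allowed)}. Set KERAS_BACKEND accordingly."
        )
```

A mismatch such as a torch DataLoader under the JAX backend then fails fast with an actionable message instead of an obscure downstream error.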

Verified behavior using pytest.
When testing, the KERAS_BACKEND environment variable must match the dataset type.
For example:

$env:KERAS_BACKEND = "jax"
pytest keras/src/trainers/data_adapters/py_dataset_adapter_test.py -v

Related Issues

Fixes #21785

@google-cla

google-cla bot commented Oct 27, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Shekar-77, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the robustness of Keras 3's data handling by implementing crucial backend compatibility validations within its DatasetAdapter classes. The primary goal is to prevent runtime errors and guide users toward using the correct data structures for their chosen Keras backend (TensorFlow, PyTorch, or JAX), thereby ensuring a smoother and more performant development experience across different deep learning ecosystems.

Highlights

  • Backend Compatibility Checks: Introduced strict backend compatibility checks within DatasetAdapter implementations to ensure that the chosen dataset type aligns with the active Keras backend.
  • Error Handling for Mismatches: Implemented ValueError exceptions that are raised when an incompatible backend-dataset combination is detected, providing clear guidance to the user.
  • Performance Recommendation for TensorFlow: Added a UserWarning when PyDataset is used with the TensorFlow backend, recommending the use of tf.data.Dataset for better performance.
  • New PyTorch Backend Example: Included a new example script (Issues/Keras_with_pytorch_backend.py) demonstrating basic Keras tensor operations when the PyTorch backend is active.
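The PyDataset performance recommendation described above can be sketched as a standalone helper (the helper name is hypothetical; the real adapter would emit the warning from its own code path):

```python
import warnings

def warn_if_pydataset_on_tensorflow(backend_name):
    # Sketch of the recommendation described above: PyDataset works on
    # the TensorFlow backend, but tf.data.Dataset is usually faster.
    if backend_name == "tensorflow":
        warnings.warn(
            "Using PyDataset with the TensorFlow backend; consider "
            "tf.data.Dataset for better input-pipeline performance.",
            UserWarning,
        )
```

A UserWarning (rather than an error) keeps the combination working while nudging users toward the faster input pipeline.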
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Supported commands:

  • Code Review: /gemini review. Performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary. Provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist. Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help: /gemini help. Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist bot left a comment

Code Review

This pull request adds important backend validation to the dataset adapters, which is a great improvement for user experience by providing clear error messages on backend-dataset incompatibility. The changes in py_dataset_adapter.py, tf_dataset_adapter.py, and torch_data_loader_adapter.py are logical and well-implemented. I've added a few minor suggestions to improve code cleanliness by removing a duplicate import and some development-related comments.

if not isinstance(
dataset, (tf.data.Dataset, tf.distribute.DistributedDataset)
):
# --- ✅ Backend compatibility check ---
Contributor

Severity: medium

This comment appears to be for debugging or development purposes and adds some noise to the code. It would be cleaner to remove it.

"This adapter only supports the TensorFlow backend."
)

# --- ✅ Dataset type validation ---
Contributor

Severity: medium

This comment appears to be for debugging or development purposes and adds some noise to the code. It would be cleaner to remove it.

import torch
import keras

# --- ✅ Backend compatibility check ---
Contributor

Severity: medium

This comment appears to be for debugging or development purposes and adds some noise to the code. It would be cleaner to remove it.

@codecov-commenter

codecov-commenter commented Oct 27, 2025

Codecov Report

❌ Patch coverage is 60.00000% with 6 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.62%. Comparing base (eecd34f) to head (e19485d).
⚠️ Report is 9 commits behind head on master.

Files with missing lines (patch %):
  • ...s/src/trainers/data_adapters/py_dataset_adapter.py: 71.42% (1 missing, 1 partial) ⚠️
  • ...s/src/trainers/data_adapters/tf_dataset_adapter.py: 50.00% (1 missing, 1 partial) ⚠️
  • ...rainers/data_adapters/torch_data_loader_adapter.py: 50.00% (1 missing, 1 partial) ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21789      +/-   ##
==========================================
- Coverage   82.63%   82.62%   -0.02%     
==========================================
  Files         577      577              
  Lines       59316    59430     +114     
  Branches     9300     9317      +17     
==========================================
+ Hits        49018    49106      +88     
- Misses       7910     7916       +6     
- Partials     2388     2408      +20     
Flag coverage Δ:
  • keras: 82.44% <53.33%> (-0.02%) ⬇️
  • keras-jax: 63.32% <40.00%> (-0.01%) ⬇️
  • keras-numpy: 57.56% <40.00%> (+<0.01%) ⬆️
  • keras-openvino: 34.29% <6.66%> (+<0.01%) ⬆️
  • keras-tensorflow: 64.12% <53.33%> (+<0.01%) ⬆️
  • keras-torch: 63.62% <40.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@keerthanakadiri
Contributor

Hi @Shekar-77, can you please sign the CLA? Thank you!

Collaborator

@hertschuh left a comment

Fixes #21785

These changes don't address the problem stated in #21785.

self._use_multiprocessing = use_multiprocessing
self._max_queue_size = max_queue_size
backend_name = backend.backend()
if backend_name not in ("torch", "jax", "tensorflow", "numpy"):
Collaborator

This is a no-op, all the backends are in the list of supported backends.


# --- ✅ Backend compatibility check ---
backend = keras.backend.backend()
if backend not in ("tensorflow", "numpy", "torch", "jax"):
Collaborator

This is a no-op, all the backends are in the list of supported backends.

import keras

backend = keras.backend.backend()
if backend not in ("torch", "tensorflow", "numpy", "jax"):
Collaborator

This is a no-op, all the backends are in the list of supported backends.
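A membership test that lists every supported backend can never fail, so it adds no protection. A guard that can actually fire would restrict each adapter to the backend it belongs to, along these lines (an illustrative sketch, not the PR's code):

```python
# A check that can actually fail: the DataLoader adapter only makes
# sense on the torch backend, so reject everything else.
# (Illustrative helper name; not the actual Keras adapter code.)
def validate_torch_backend(backend_name):
    if backend_name != "torch":
        raise ValueError(
            "torch.utils.data.DataLoader requires the PyTorch backend; "
            f"got '{backend_name}'. Set KERAS_BACKEND='torch'."
        )
```

Unlike the `not in ("torch", "tensorflow", "numpy", "jax")` test, this rejects every backend except the one the adapter actually supports.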


Development

Successfully merging this pull request may close these issues.

keras.ops. in tf.data regardless of backends

5 participants