Releases: keras-team/keras
Keras 3.2.0
What's Changed
- Introduce QLoRA-like technique for LoRA fine-tuning of `Dense` and `EinsumDense` layers (and thereby any LLM) in int8 precision.
- Extend `keras.ops.custom_gradient` support to PyTorch.
- Add `keras.layers.JaxLayer` and `keras.layers.FlaxLayer` to wrap JAX/Flax modules as Keras layers.
- Allow `save_model` & `load_model` to accept a file-like object.
- Add quantization support to the `Embedding` layer.
- Make it possible to update metrics inside a custom `compute_loss` method with all backends.
- Make it possible to access `self.losses` inside a custom `compute_loss` method with the JAX backend.
- Add `keras.losses.Dice` loss.
- Add `keras.ops.correlate` (see the sketch after this list).
- Make it possible to use cuDNN LSTM & GRU with a mask with the TensorFlow backend.
- Better JAX support in `model.export()`: add support for aliases, finer control over `jax2tf` options, and dynamic batch shapes.
- Bug fixes and performance improvements.
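A minimal sketch of two of the additions above: `keras.ops.correlate` follows `np.correlate` semantics, and `keras.losses.Dice` is invoked like any other Keras loss. The shapes and values here are illustrative only.

```python
import numpy as np
import keras

# 1D cross-correlation, numpy-style ("full" keeps all overlap positions).
x = np.array([1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.5])
print(keras.ops.correlate(x, y, mode="full"))

# Dice loss on a toy segmentation-style target.
dice = keras.losses.Dice()
y_true = np.array([[1.0, 1.0, 0.0, 0.0]])
y_pred = np.array([[0.9, 0.8, 0.1, 0.2]])
print(dice(y_true, y_pred))
```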
New Contributors
- @abhaskumarsinha made their first contribution in #19302
- @qaqland made their first contribution in #19378
- @tvogel made their first contribution in #19310
- @lpizzinidev made their first contribution in #19409
- @Murhaf made their first contribution in #19444
Full Changelog: v3.1.1...v3.2.0
Keras 3.1.1
This is a minor bugfix release over 3.1.0.
What's Changed
- Unwrap variable values in all stateless calls. by @hertschuh in #19287
- Fix `draw_seed` causing device discrepancy issue during `torch`'s symbolic execution by @KhawajaAbaid in #19289
- Fix `TestCase.run_layer_test` for multi-output layers by @shkarupa-alex in #19293
- Sine docstring by @grasskin in #19295
- Fix `keras.ops.softmax` for the TensorFlow backend by @tirthasheshpatel in #19300
- Fix mixed precision check in `TestCase.run_layer_test`: compare with output_spec dtype instead of hardcoded float16 by @shkarupa-alex in #19297
- ArrayDataAdapter no longer converts to NumPy and supports sparse tens… by @hertschuh in #19298
- add token to codecov by @haifeng-jin in #19312
- Add Tensorflow support for variable `scatter_update` in optimizers. by @hertschuh in #19313
- Replace `dm-tree` with `optree` by @james77777778 in #19306
- downgrade codecov to v3 by @haifeng-jin in #19319
- Allow tensors in `tf.Dataset`s to have different dimensions. by @hertschuh in #19318
- update codecov setting by @haifeng-jin in #19320
- Set dtype policy for uint8 by @sampathweb in #19327
- Use Value dim shape for Attention compute_output_shape by @sampathweb in #19284
New Contributors
- @tirthasheshpatel made their first contribution in #19300
Full Changelog: v3.1.0...v3.1.1
Keras 3.1.0
New features
- Add support for int8 inference. Just call `model.quantize("int8")` to do an in-place conversion of a bfloat16 or float32 model to an int8 model (see the sketch after this list). Note that only `Dense` and `EinsumDense` layers will be converted (this covers LLMs and all Transformers in general). We may add more supported layers over time.
- Add `keras.config.set_backend(backend)` utility to reload a different backend.
- Add `keras.layers.MelSpectrogram` layer for turning raw audio data into Mel spectrogram representation.
- Add `keras.ops.custom_gradient` decorator (only for JAX and TensorFlow).
- Add `keras.ops.image.crop_images`.
- Add `pad_to_aspect_ratio` argument to `image_dataset_from_directory`.
- Add `keras.random.binomial` and `keras.random.beta` functions.
- Enable `keras.ops.einsum` to run with int8 x int8 inputs and int32 output.
- Add `verbose` argument in all dataset-creation utilities.
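A minimal sketch of the int8 quantization workflow described in the first item; the model architecture here is arbitrary.

```python
import numpy as np
import keras

# A small float32 model; only its Dense layers are eligible for conversion.
inputs = keras.Input(shape=(64,))
x = keras.layers.Dense(128, activation="relu")(inputs)
outputs = keras.layers.Dense(10)(x)
model = keras.Model(inputs, outputs)

model.quantize("int8")  # in-place conversion to int8
preds = model(np.random.rand(2, 64).astype("float32"))
```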
Notable fixes
- Fix Functional model slicing
- Fix for TF XLA compilation error for `SpectralNormalization`
- Refactor `axis` logic across all backends and add support for multiple axes in `expand_dims` and `squeeze` (see the sketch below)
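A minimal sketch of the multi-axis behavior mentioned above; the input array and shapes are illustrative only.

```python
import numpy as np
import keras

x = np.zeros((3, 4))
y = keras.ops.expand_dims(x, axis=(0, 2))  # shape becomes (1, 3, 1, 4)
z = keras.ops.squeeze(y, axis=(0, 2))      # back to (3, 4)
```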
New Contributors
- @mykolaskrynnyk made their first contribution in #19190
- @chicham made their first contribution in #19201
- @joycebrum made their first contribution in #19214
- @EtiNL made their first contribution in #19228
Full Changelog: v3.0.5...v3.1.0
Keras 3.0.5
This release brings many bug fixes and performance improvements, new linear algebra ops, and sparse tensor support for the JAX backend.
Highlights
- Add support for sparse tensors with the JAX backend.
- Add support for saving/loading in bfloat16.
- Add linear algebra ops in `keras.ops.linalg`.
- Support nested structures in the `while_loop` op.
- Add `erfinv` op.
- Add `normalize` op (both new ops are shown in the sketch below).
- Add support for `IterableDataset` to `TorchDataLoaderAdapter`.
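A minimal sketch of the two new ops; the input values are illustrative only.

```python
import numpy as np
import keras

x = np.array([[3.0, 4.0], [1.0, 0.0]])
print(keras.ops.normalize(x, axis=-1))         # rows rescaled to unit L2 norm
print(keras.ops.erfinv(np.array([0.0, 0.5])))  # inverse of the error function
```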
New Contributors
- @frazane made their first contribution in #19107
- @SamanehSaadat made their first contribution in #19111
- @sitamgithub-MSIT made their first contribution in #19142
- @timotheeMM made their first contribution in #19169
Full Changelog: v3.0.4...v3.0.5
Keras 3.0.4
This is a minor release with improvements to the LoRA API required by the next release of KerasNLP.
Full Changelog: v3.0.3...v3.0.4
Keras 3.0.3
This is a minor Keras release.
What's Changed
- Add built-in LoRA (low-rank adaptation) API to all relevant layers (`Dense`, `EinsumDense`, `Embedding`); see the sketch after this list.
- Add `SwapEMAWeights` callback to make it easier to evaluate model metrics using EMA weights during training.
- All `DataAdapter`s now create a native iterator for each backend, improving performance.
- Add built-in prefetching for JAX, improving performance.
- The `bfloat16` dtype is now allowed in the global `set_dtype` configuration utility.
- Bug fixes and performance improvements.
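A minimal sketch of enabling LoRA on a single layer; the rank and shapes here are arbitrary.

```python
import keras

# After enable_lora, the original kernel is frozen and two low-rank
# matrices (the LoRA A/B factors) become the trainable parameters.
layer = keras.layers.Dense(64)
layer.build((None, 128))  # the layer must be built before enabling LoRA
layer.enable_lora(rank=4)
print([w.name for w in layer.trainable_weights])
```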
New Contributors
- @kiraksi made their first contribution in #18977
- @dugujiujian1999 made their first contribution in #19010
- @neo-alex made their first contribution in #18997
- @anas-rz made their first contribution in #19057
Full Changelog: v3.0.2...v3.0.3
Keras 3.0.2
Breaking changes
There are no known breaking changes in this release compared to 3.0.1.
API changes
- Add `keras.random.binomial` and `keras.random.beta` RNG functions (see the sketch after this list).
- Add masking support to `BatchNormalization`.
- Add `keras.losses.CTC` (loss function for sequence-to-sequence tasks) as well as the lower-level operation `keras.ops.ctc_loss`.
- Add `ops.random.alpha_dropout` and `layers.AlphaDropout`.
- Add gradient accumulation support for all backends, and enable optimizer EMA for JAX and torch.
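A minimal sketch of the new RNG functions; the parameter values are arbitrary, and a `SeedGenerator` is used for reproducible draws.

```python
import keras

seed = keras.random.SeedGenerator(1337)
# 10 Bernoulli trials per element with success probability 0.5.
draws = keras.random.binomial(shape=(3,), counts=10, probabilities=0.5, seed=seed)
# Samples from a Beta(2, 5) distribution.
samples = keras.random.beta(shape=(3,), alpha=2.0, beta=5.0, seed=seed)
```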
Full Changelog: v3.0.1...v3.0.2
Keras 3.0.1
This is a minor release focused on bug fixes and performance improvements.
What's Changed
- Bug fixes and performance improvements.
- Add `stop_evaluating` and `stop_predicting` model attributes for callbacks, similar to `stop_training`.
- Add `keras.device()` scope for managing device placement in a multi-backend way (see the sketch after this list).
- Support dict items in `PyDataset`.
- Add `hard_swish` activation and op.
- Fix cuDNN LSTM performance on TensorFlow backend.
- Add a `force_download` arg to `get_file` to force cache invalidation.
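A minimal sketch of the device scope, assuming a `"cpu:0"`-style device string.

```python
import keras

# Run the enclosed ops on CPU regardless of the active backend.
with keras.device("cpu:0"):
    x = keras.ops.ones((2, 2))
    y = keras.ops.matmul(x, x)
```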
Full Changelog: v3.0.0...v3.0.1
Keras 3.0.0
Major updates
See the release announcement for a detailed list of major changes. Main highlights compared to Keras 2 are:
- Keras can now be run on top of JAX, PyTorch, TensorFlow, and even NumPy (note that the NumPy backend is inference-only).
- New low-level `keras.ops` API for building cross-framework components (see the sketch after this list).
- New large-scale model distribution `keras.distribution` based on JAX.
- New stateless API for layers, models, optimizers, and metrics.
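A minimal sketch of backend-agnostic code written against `keras.ops`; the same lines run unchanged on whichever backend is selected via the `KERAS_BACKEND` environment variable.

```python
import numpy as np
from keras import ops

x = np.random.rand(4, 8).astype("float32")
# matmul/transpose/relu dispatch to the active backend (JAX, torch, TF, or NumPy).
h = ops.relu(ops.matmul(x, ops.transpose(x)))
print(ops.shape(h))  # (4, 4)
```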
Breaking changes
See this thread for a complete list of breaking changes, as well as the Keras 3 migration guide.
Keras 2.15.0
What's Changed
- Typo fixes for `StringLookup` documentation by @cw118 in #18333
- Fix `ModelCheckpoint` trained-on batch counting when using `steps_per_execution > 1` by @jasnyj in #17632
- Fix legacy optimizer handling in `compile_from_config()`. by @nkovela1 in #18492
- Remove `options` arg from `ModelCheckpoint` callback for Keras V3 saving, streamline `ModelCheckpoint` saving flow. Parameterize associated tests. by @nkovela1 in #18545
- Use TENSORFLOW_VERSION when available during pip_build script by @sampathweb in #18739
Full Changelog: v2.14.0...v2.15.0