Releases: arogozhnikov/einops
v0.8.0: tinygrad, small fixes and updates
TLDR
- tinygrad backend added
- resolve warning in py3.11 related to docstring
- remove graph break for unpack
- breaking: TF layers were updated to follow new instructions; the new layers are compatible with TF 2.16 and not compatible with old TF (they certainly do not work with TF 2.13)
What's Changed
- Fix invalid escape sequence in einsum docstring by @atwam in #298
- Tinygrad support by @blueridanus in #297
- Coerce bool to int in unpack by @drubinstein in #287
- Remove oneflow from testing by @arogozhnikov in #289
- tests: fix torch installation to force CPU by @arogozhnikov in #288
- Allow anonymous axes in parse_shape, fix #302 by @arogozhnikov in #303
- Codebase standards + update TF layers by @arogozhnikov in #318
- update github actions by @arogozhnikov in #319
New Contributors
- @drubinstein made their first contribution in #287
- @atwam made their first contribution in #298
- @blueridanus made their first contribution in #297
Full Changelog: v0.7.0...v0.8.0
v0.7.0: torch.compile, preserve axis identity, array api
Major changes:
- torch.compile just works: registration of operations happens automatically
- JAX's distributed arrays can use ellipses, and in general ellipsis processing now preserves axis identity. This involved changing internal gears of einops.
- Array API: einops operations can be used with any framework that follows the standard (see einops.array_api)
- Python 3.7 is dead. Goodbye, you were great at the time
- Gluon is dropped, as previously announced
- reduce/repeat/rearrange all accept lists now
PRs list
- Preserve axis identity + drop python 3.7 by @arogozhnikov in #255
- Support array api by @arogozhnikov in #261
- add type-based caching by @arogozhnikov in #262
- Uniform support for list inputs across rearrange/reduce/repeat by @arogozhnikov in #263
- Drop gluon by @arogozhnikov in #264
- automatically register torch ops in torchdynamo by @arogozhnikov in #265
- release 0.7.0rc1, highlight changes in README by @arogozhnikov in #266
- cover dynamic shapes in torch.compile, introduce fallback if shape was not cacheable by @arogozhnikov in #275
- bump version by @arogozhnikov in #276
- fix #279, update description by @arogozhnikov in #281
- Add support for any/all reductions in einops.reduce by @arogozhnikov in #282
- maintenance: cache pip dependencies by @arogozhnikov in #283
- maintenance: update versions of github actions by @arogozhnikov in #285
Full Changelog: v0.6.1...v0.7.0
v0.7.0rc2: allow dynamic shapes in torch.compile
What's Changed
- cover dynamic shapes in torch.compile, introduce fallback if shape was not cacheable by @arogozhnikov in #275
- bump version by @arogozhnikov in #276
Full Changelog: v0.7.0rc1...v0.7.0rc2
v0.7.0rc1: torch.compile, preserve axis identity, array api
Major changes:
- torch.compile just works: registration of operations happens automatically
- JAX's distributed arrays can use ellipses, and in general ellipsis processing now preserves axis identity. This involved changing internal gears of einops.
- Array API: einops operations can be used with any framework that follows the standard (see einops.array_api)
- Python 3.7 is dead. Goodbye, you were great at the time
- Gluon is dropped, as previously announced
- reduce/repeat/rearrange all accept lists now
What's Changed
- Support array api by @arogozhnikov in #261
- add type-based caching by @arogozhnikov in #262
- Uniform support for list inputs across rearrange/reduce/repeat by @arogozhnikov in #263
- Drop gluon by @arogozhnikov in #264
- automatically register torch ops in torchdynamo by @arogozhnikov in #265
- release 0.7.0rc1, highlight changes in README by @arogozhnikov in #266
- Preserve axis identity + drop python 3.7 by @arogozhnikov in #255
Full Changelog: v0.6.1...v0.7.0rc1
v0.6.2rc0: drop python 3.7 + preserve axis identity
This pre-release is published to allow public testing of the new caching logic (pattern analysis now depends on input dimensionality, to preserve axis identity).
What's Changed
- Preserve axis identity + drop python 3.7 by @arogozhnikov in #255
Full Changelog: v0.6.1...v0.6.2rc0
v0.6.1: support paddle, support torch.compile
- einops layers interplay perfectly with torch.compile
- einops operations need registration: run einops._torch_specific.allow_ops_in_compiled_graph() before torch.compile
- paddle is now supported (thanks to @zhouwei25)
- as previously announced, support of mxnet is dropped
What's Changed
- Add PaddlePaddle backend for einops by @zhouwei25 in #242 , #245
- Add allow_ops_in_compiled_graph to support torch.compile by @arogozhnikov in #251
- torch.concat -> torch.cat to support packing for torch < 1.10 by @arogozhnikov in #238
- chores: improve documentation and testing by @arogozhnikov in #246
- Remove mxnet by @arogozhnikov in #231
New Contributors
- @zhouwei25 made their first contribution in #242
Full Changelog: v0.6.0...v0.6.1
v0.6.0: pack and unpack
What's Changed
- Introduce einops.pack and einops.unpack by @arogozhnikov in #222
- Update example to match description by @EPronovost in #217
- Improve type hinting by @arogozhnikov in #221
- Cosmetics for pack/unpack: documentation and comments by @arogozhnikov in #223
- Preparations for 0.6.0 release by @arogozhnikov in #224
New Contributors
- @EPronovost made their first contribution in #217
Announcement
Sunsetting experimental mxnet support: no demand and package is outdated, with numerous deprecations and poor support of corner cases. 0.6.0 will be the last release with mxnet backend.
Full Changelog: v0.5.0...v0.6.0
Einops v0.5.0: einsum + support of flax and oneflow
What's Changed
- Create einsum operation by @MilesCranmer in #197
- Add flax layers by @arogozhnikov in #214
- Add oneflow backend by @arogozhnikov in #181
- Add oneflow backend by @rentainhe in #180
- Fix wrong error message by @arogozhnikov in #196
- Clarify documentation re. default bias on EinMix by @maxeonyx in #201
- corrected spelling mistake: einsops -> einops by @cs-mshah in #205
- add mean-reduction for bfloat16, fix #206 by @arogozhnikov in #209
- add py.typed (adopt PEP 561) by @arogozhnikov in #211
- Delete tensorflow-specific readme file by @arogozhnikov in #212
- Adopt pypa/hatch by @arogozhnikov in #213
New Contributors
- @rentainhe made their first contribution in #180
- @MilesCranmer made their first contribution in #197
- @maxeonyx made their first contribution in #201
- @cs-mshah made their first contribution in #205
Full Changelog: v0.4.1...v0.5.0
Einops v0.4.1: Avoid importing numpy if it is not required
What's Changed
- fix numpy dependency problem by @lucidrains in #176
New Contributors
- @lucidrains made their first contribution in #176
Full Changelog: v0.4.0...v0.4.1
Einops v0.4.0: EinMix and torch.jit.script
Main Changes
- torch.jit.script is supported (in addition to previous torch.jit.trace)
- EinMix (swiss-knife for next-gen MLPs) is added. A much-improved einsum/linear layer is now available.
- einops.repeat in torch avoids creating a copy when possible
Detailed PRs
- Update documentation by @arogozhnikov in #137
- Multiple updates in docs, add Rearrange layer to torch test by @arogozhnikov in #138
- Add support for torch scripting of einops layers by @arogozhnikov in #139
- Introduce EinMix - swiss-knife for next-gen MLPs by @arogozhnikov in #142
- Docs improvements: wording, visual style, EinMix by @arogozhnikov in #143
- Move docs to a separate folder by @arogozhnikov in #144
- Type hinting + add testing for EinMix composition/decomposition by @arogozhnikov in #154
- Reject repeated axes in parse_shape by @dmitriy-serdyuk in #159
- Enable ellipsis in patterns for parse_shape. by @dmitriy-serdyuk in #162
New Contributors
- @dmitriy-serdyuk made their first contribution in #159
Full Changelog: v0.3.2...v0.4.0