
Merge pull request #224 from arogozhnikov/packing
Preparations for 0.6.0 release
arogozhnikov authored Nov 9, 2022
2 parents ebf084e + 506121c commit d6f7910
Showing 2 changed files with 34 additions and 39 deletions.
71 changes: 33 additions & 38 deletions README.md
@@ -1,5 +1,3 @@


<!--
<a href='http://arogozhnikov.github.io/images/einops/einops_video.mp4' >
@@ -12,14 +10,13 @@
</a>
-->


<!-- this link magically rendered as video, unfortunately not in docs -->

https://user-images.githubusercontent.com/6318811/177030658-66f0eb5d-e136-44d8-99c9-86ae298ead5b.mp4

# einops
[![Run tests](https://github.com/arogozhnikov/einops/actions/workflows/run_tests.yml/badge.svg)](https://github.com/arogozhnikov/einops/actions/workflows/run_tests.yml)
[![PyPI version](https://badge.fury.io/py/einops.svg)](https://badge.fury.io/py/einops)
@@ -32,7 +29,8 @@
Supports numpy, pytorch, tensorflow, jax, and [others](#supported-frameworks).

## Recent updates:

- einops 0.6 introduces [packing and unpacking](https://github.com/arogozhnikov/einops/blob/master/docs/4-pack-and-unpack.ipynb)
- einops 0.5: einsum is now a part of einops
- [Einops paper](https://openreview.net/pdf?id=oapKSVM2bcj) is accepted for oral presentation at ICLR 2022 (yes, it's worth reading)
- flax and oneflow backend added
- torch.jit.script is supported for pytorch layers
@@ -99,14 +97,15 @@
Tutorials are the most convenient way to see `einops` in action

- part 1: [einops fundamentals](https://github.com/arogozhnikov/einops/blob/master/docs/1-einops-basics.ipynb)
- part 2: [einops for deep learning](https://github.com/arogozhnikov/einops/blob/master/docs/2-einops-for-deep-learning.ipynb)
- part 3: [packing and unpacking](https://github.com/arogozhnikov/einops/blob/master/docs/4-pack-and-unpack.ipynb)
- part 4: [improve pytorch code with einops](http://einops.rocks/pytorch-examples.html)


## API <a name="API"></a>

`einops` has a minimalistic yet powerful API.

Three core operations provided ([einops tutorial](https://github.com/arogozhnikov/einops/blob/master/docs/)
shows those cover stacking, reshape, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view and numerous reductions)

```python
from einops import rearrange, reduce, repeat
# rearrange elements according to the pattern (here: transposition)
output_tensor = rearrange(input_tensor, 't b c -> b c t')
# combine rearrangement and reduction
output_tensor = reduce(input_tensor, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)
# copy along a new axis
output_tensor = repeat(input_tensor, 'h w -> h w c', c=3)
```
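
As a quick illustration, here is a minimal, self-contained sketch (assuming numpy; the array name and shapes are invented for the example) showing the shapes the three operations produce:

```python
import numpy as np
from einops import rearrange, reduce, repeat

images = np.random.rand(10, 3, 32, 64)  # hypothetical batch: (batch, channel, height, width)

# transposition: move channels last
print(rearrange(images, 'b c h w -> b h w c').shape)  # (10, 32, 64, 3)

# 2x2 mean-pooling over the spatial axes
print(reduce(images, 'b c (h h2) (w w2) -> b c h w', 'mean', h2=2, w2=2).shape)  # (10, 3, 16, 32)

# copy along a new axis
print(repeat(images, 'b c h w -> b r c h w', r=4).shape)  # (10, 4, 3, 32, 64)
```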
And two corresponding layers (`einops` keeps a separate version for each framework) with the same API.

```python
from einops.layers.torch import Rearrange, Reduce
from einops.layers.tensorflow import Rearrange, Reduce
from einops.layers.flax import Rearrange, Reduce
from einops.layers.gluon import Rearrange, Reduce
from einops.layers.keras import Rearrange, Reduce
from einops.layers.chainer import Rearrange, Reduce
```

Layers behave similarly to operations and have the same parameters
(with the exception of the first argument, which is passed during call).

```python
layer = Rearrange(pattern, **axes_lengths)
layer = Reduce(pattern, reduction, **axes_lengths)

# apply created layer to a tensor / variable
x = layer(x)
```
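
To make the correspondence concrete, here is a small sketch (assuming pytorch; the tensor shape is arbitrary) checking that a layer and its functional counterpart agree:

```python
import torch
from einops import rearrange
from einops.layers.torch import Rearrange

x = torch.randn(8, 16, 5, 5)  # arbitrary example shape
layer = Rearrange('b c h w -> b (c h w)')
# the layer applied in a call matches the function applied directly
assert torch.equal(layer(x), rearrange(x, 'b c h w -> b (c h w)'))
```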

Example of using layers within a model:
```python
from torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU
from einops.layers.torch import Rearrange

model = Sequential(
    ...,
    Conv2d(6, 16, kernel_size=5),
    MaxPool2d(kernel_size=2),
    # flattening without needing to write a forward method
    Rearrange('b c h w -> b (c h w)'),
    Linear(16*5*5, 120),
    ReLU(),
    Linear(120, 10),
)
```
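
For comparison, without `Rearrange` the flattening step would typically require a hand-written module (a sketch of the conventional alternative, not part of the README):

```python
import torch.nn as nn

class Flatten(nn.Module):
    """What Rearrange('b c h w -> b (c h w)') replaces."""
    def forward(self, x):
        return x.view(x.size(0), -1)
```

Recent pytorch versions also ship `nn.Flatten`, but the einops layer spells out the axis semantics in the pattern itself.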

Later additions to the family are `einsum`, `pack` and `unpack` functions:

```python
from einops import einsum, pack, unpack
# einsum is like ... einsum, generic and flexible dot-product
# but 1) axes can be multi-lettered 2) pattern goes last 3) works with multiple frameworks
C = einsum(A, B, 'b t1 head c, b t2 head c -> b head t1 t2')

# pack and unpack allow reversibly 'packing' multiple tensors into one.
# Packed tensors may be of different dimensionality:
packed, ps = pack([class_token_bc, image_tokens_bhwc, text_tokens_btc], 'b * c')
class_emb_bc, image_emb_bhwc, text_emb_btc = unpack(transformer(packed), ps, 'b * c')
# Pack/Unpack are more convenient than concat and split, see tutorial
```
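
As a worked example (a sketch assuming numpy; names and shapes are invented for illustration), packing inputs of different dimensionality and unpacking them back:

```python
import numpy as np
from einops import einsum, pack, unpack

# einsum with multi-letter axis names, pattern last
a = np.random.rand(2, 5, 4, 8)  # b t1 head c
b = np.random.rand(2, 7, 4, 8)  # b t2 head c
print(einsum(a, b, 'b t1 head c, b t2 head c -> b head t1 t2').shape)  # (2, 4, 5, 7)

# pack tensors of different dimensionality along one '*' axis
image_rgb = np.random.rand(32, 32, 3)
depth     = np.random.rand(32, 32)  # no channel axis at all
packed, ps = pack([image_rgb, depth], 'h w *')
print(packed.shape)  # (32, 32, 4)

# unpack restores the original shapes exactly
rgb_back, depth_back = unpack(packed, ps, 'h w *')
assert rgb_back.shape == (32, 32, 3) and depth_back.shape == (32, 32)
```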

Last but not least, the `EinMix` layer is available! <br />
`EinMix` is a generic linear layer, perfect for MLP Mixers and similar architectures.
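
A minimal sketch of what `EinMix` looks like in use (assuming the pytorch layer; the axis names and sizes here are illustrative, see the EinMix tutorial for real patterns):

```python
import torch
from einops.layers.torch import EinMix

# mix information across the channel axis only: a per-location linear layer
mix_channels = EinMix('b h w c_in -> b h w c_out',
                      weight_shape='c_in c_out', bias_shape='c_out',
                      c_in=32, c_out=64)
y = mix_channels(torch.randn(2, 8, 8, 32))
print(y.shape)  # torch.Size([2, 8, 8, 64])
```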

## Naming <a name="Naming"></a>

@@ -210,9 +208,7 @@
```python
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')
```
The second line checks that the input has four dimensions,
but you can also specify particular dimensions.
That's opposed to just writing comments about shapes: comments aren't checked, aren't tested,
and without code review they tend to become outdated.
```python
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```
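
To see the check in action, a small sketch (assuming numpy, with the shapes used above): a wrong axis length fails loudly instead of silently reshaping.

```python
import numpy as np
from einops import rearrange

x = np.zeros((64, 256, 19, 19))
rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)  # passes: shapes match

try:
    rearrange(x, 'b c h w -> b (c h w)', h=32)  # wrong height
except Exception as e:
    print(e)  # einops raises a descriptive EinopsError
```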
@@ -260,7 +256,7 @@
```python
y = x.flatten() # or flatten(x)
```
Suppose `x`'s shape was `(3, 4, 5)`, then `y` has shape ...

- numpy, cupy, chainer, pytorch: `(60,)`
- keras, tensorflow.layers, gluon: `(3, 20)`

`einops` works the same way in all frameworks.
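
For instance (a sketch, with `x` the `(3, 4, 5)` array above, assuming numpy), each intended result gets its own unambiguous pattern:

```python
import numpy as np
from einops import rearrange

x = np.zeros((3, 4, 5))
print(rearrange(x, 'a b c -> (a b c)').shape)  # (60,) in every framework
print(rearrange(x, 'a b c -> a (b c)').shape)  # (3, 20) in every framework
```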

@@ -278,7 +274,7 @@
```python
repeat(image, 'h w -> h (tile w)', tile=2) # in numpy
repeat(image, 'h w -> h (tile w)', tile=2) # in pytorch
repeat(image, 'h w -> h (tile w)', tile=2) # in tf
repeat(image, 'h w -> h (tile w)', tile=2) # in jax
repeat(image, 'h w -> h (tile w)', tile=2) # in cupy
... (etc.)
```

@@ -296,7 +292,6 @@
Einops works with ...
- [chainer](https://chainer.org/)
- [gluon](https://gluon.mxnet.io/)
- [tf.keras](https://www.tensorflow.org/guide/keras)
- [oneflow](https://github.com/Oneflow-Inc/oneflow) (experimental)
- [flax](https://github.com/google/flax) (experimental)

2 changes: 1 addition & 1 deletion einops/__init__.py
```diff
@@ -1,5 +1,5 @@
 __author__ = 'Alex Rogozhnikov'
-__version__ = '0.6.0pre'
+__version__ = '0.6.0'


 class EinopsError(RuntimeError):
```
