Commit c34a50a

Merge pull request #14 from arnab39/dev

0.1.1

2 parents cceb6e3 + 3a705c3

File tree: 36 files changed, +496 −305 lines

AUTHORS.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -2,6 +2,6 @@
 * Arnab Mondal [[email protected]](mailto:[email protected])
 * [Siba Smarak Panigrahi](https://sibasmarak.github.io/) [[email protected]](mailto:[email protected])
-* [Danielle Benesch](https://github.com/danibene) [daniellerbenesch+git@gmail.com](mailto:daniellerbenesch+git@gmail.com)
+* [Danielle Benesch](https://github.com/danibene) [[email protected]](mailto:[email protected])
 * [Jikael Gagnon](https://github.com/jikaelgagnon) [[email protected]](mailto:[email protected])
-* [Sékou-Oumar Kaba](https://oumarkaba.github.io)[mailto:[email protected]]
+* [Sékou-Oumar Kaba](https://oumarkaba.github.io) [[email protected]](mailto:[email protected])
```

CHANGELOG.md

Lines changed: 6 additions & 0 deletions

```diff
@@ -15,6 +15,12 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 ### Removed
 
+## [0.1.1] - 2024-03-15
+
+### Changed
+- Operating system classifier in `setup.cfg`.
+- Replaced `escnn` dependency with `e2cnn`.
+
 ## [0.1.0] - 2024-03-14
 
 ### Added
```

README.md

Lines changed: 45 additions & 39 deletions

````diff
@@ -9,17 +9,23 @@
 </h3>
 <br>
 
+# About
+EquiAdapt is a [PyTorch](https://pytorch.org) package that provides a flexible and efficient way to make *any* neural network architecture (including large foundation models) equivariant, instead of redesigning and training from scratch. This is done by learning to canonicalize transformed inputs before feeding them to the prediction model. You can play with this concept in the provided [tutorial](tutorials/images/instance_segmentation_group_equivariant_canonicalization.ipynb) for equivariant adaptation of the Segment-Anything Model (SAM, [Kirillov et al., 2023](https://arxiv.org/abs/2304.02643)) and images from the Microsoft COCO dataset ([Lin et al., 2014](https://arxiv.org/abs/1405.0312)) for instance segmentation.
 
-# Equivariant adaptation with canonicalization
+To learn more about this from a blog, check out: [How to make your foundation model equivariant](https://mila.quebec/en/article/how-to-make-your-foundation-model-equivariant/)
+
+## Equivariant adaptation with canonicalization
 ![Equivariant adaptation of any prediction network](https://raw.githubusercontent.com/arnab39/equiadapt/main/utils/equiadapt_cat.jpeg "Equivariant adaptation of any prediction network")
 
-![Equivariant adaptation of Segment-Anything Network](https://raw.githubusercontent.com/arnab39/equiadapt/main/utils/equiadapt_sam.gif "Equivariant adaptation of any prediction network")
+Read more about this [here](https://proceedings.mlr.press/v202/kaba23a.html)
 
-EquiAdapt is a [PyTorch](https://pytorch.org) package that provides a flexible and efficient way to make *any* neural network architecture (including large foundation models) equivariant, instead of redesigning and training from scratch. This is done by learning to canonicalize transformed inputs, before feeding them to the prediction model.
+## Prior regularized canonicalization
+![Equivariant adaptation of Segment-Anything Model](https://raw.githubusercontent.com/arnab39/equiadapt/main/utils/equiadapt_sam.gif "Equivariant adaptation of Segment-Anything Model")
 
-You can play with this concept in the provided [tutorial](tutorials/images/instance_segmentation_group_equivariant_canonicalization.ipynb) for equivariant adaptation of the Segment-Anything Model (SAM, [Kirillov et. al, 2023](https://arxiv.org/abs/2304.02643)) and images from Microsoft COCO ([Lin et. al, 2014](https://arxiv.org/abs/1405.0312)) dataset for instance segmentation.
+Read more about this [here](https://proceedings.neurips.cc/paper_files/paper/2023/hash/9d5856318032ef3630cb580f4e24f823-Abstract-Conference.html)
 
-# Easy to integrate :rocket:
+# How to use?
+## Easy to integrate :rocket:
 
 Equiadapt enables users to obtain equivariant versions of existing neural networks with a few lines of code changes:
 ```diff
@@ -60,7 +66,7 @@ Equiadapt enables users to obtain equivariant versions of existing neural networ
 optimizer.step()
 ```
 
-# Details on using `equiadapt` library
+## Details on using `equiadapt` library
 
 1. Create a `canonicalization network` (or use our provided networks: for images, in `equiadapt/images/canonicalization_networks/`).
 
@@ -98,8 +104,20 @@ loss = canonicalizer.add_prior_regularizer(loss)
 loss.backward()
 ```
 
-# Setup instructions
-### Setup Conda environment
+# Installation
+
+## Using pypi
+You can install the latest [release](https://github.com/arnab39/equiadapt/releases) using:
+
+```pip install equiadapt```
+
+## Manual installation
+
+You can clone this repository and manually install it with:
+
+```pip install git+https://github.com/arnab39/equiadapt```
+
+## Setup Conda environment for examples
 
 To create a conda environment with the necessary packages:
 
@@ -119,34 +137,19 @@ Note that this might not be a complete list of dependencies. If you encounter an
 
 # Running equiadapt using example code
 
-We provide example code to run equiadapt in different data domains and tasks to achieve equivariance. You can also find a [tutorial](tutorials/images/classification_group_equivariant_canonicalization.ipynb) on how to use equiadapt with minimalistic changes to your own code (for image classification).
-
-Before you jump to the instructions for each of them please follow the setup hydra instructions to create a `.env` file with the paths to store all the data, wandb logs and checkpoints.
-
-
-<table style="border:1px solid white; border-collapse: collapse;">
-<tr>
-<th style="border:1px solid white;" rowspan="2"><div align="center">Image</div></th>
-<td style="border:1px solid white;">Classification</td>
-<td style="border:1px solid white;"><a href="examples/images/classification/README.md">here</a></td>
-</tr>
-<tr>
-<td style="border:1px solid white;">Segmentation</td>
-<td style="border:1px solid white;"><a href="examples/images/segmentation/README.md">here</a></td>
-</tr>
-<tr>
-<th style="border:1px solid white;" rowspan="2"><div align="center">Point Cloud</div></th>
-<td style="border:1px solid white;">Classification</td>
-<td style="border:1px solid white;"><a href="examples/pointcloud/classification/README.md">here</a></td>
-</tr>
-<tr>
-<td style="border:1px solid white;">Part Segmentation</td>
-<td style="border:1px solid white;"><a href="examples/pointcloud/part_segmentation/README.md">here</a></td>
-</tr>
-</table>
-
-### Setup Hydra
-- Create a `.env` file in the root of the project with the following content:
+We provide [examples](examples) to run equiadapt in different data domains and tasks to achieve equivariance.
+
+- Image:
+  - Classification: [Link](examples/images/classification/README.md)
+  - Segmentation: [Link](examples/images/segmentation/README.md)
+- Point Cloud:
+  - Classification: [Link](examples/pointcloud/classification/README.md)
+  - Part Segmentation: [Link](examples/pointcloud/part_segmentation/README.md)
+- Nbody Dynamics: [Link](examples/nbody/README.md)
+
+Our examples use `hydra` to configure hyperparameters. Follow the hydra setup instructions to create a `.env` file with the paths to store all the data, wandb logs and checkpoints.
+
+Create a `.env` file in the root of the project with the following content:
 ```
 export HYDRA_JOBS="/path/to/your/hydra/jobs/directory"
 export WANDB_DIR="/path/to/your/wandb/jobs/directory"
@@ -155,14 +158,17 @@ Before you jump to the instructions for each of them please follow the setup hyd
 export CHECKPOINT_PATH="/path/to/your/checkpoint/directory"
 ```
 
+You can also find [tutorials](tutorials) on how to use equiadapt with minimalistic changes to your own code.
+
 
-# Related papers
+
+
+
+# Related papers and Citations
 
 For more insights on this library refer to our original paper on the idea: [Equivariance with Learned Canonicalization Function (ICML 2023)](https://proceedings.mlr.press/v202/kaba23a.html) and how to extend it to make any existing large pre-trained model equivariant: [Equivariant Adaptation of Large Pretrained Models (NeurIPS 2023)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/9d5856318032ef3630cb580f4e24f823-Abstract-Conference.html).
 
-To learn more about this from a blog, check out: [How to make your foundation model equivariant](https://mila.quebec/en/article/how-to-make-your-foundation-model-equivariant/)
 
-# Citation
 If you find this library or the associated papers useful, please cite the following papers:
 ```
 @inproceedings{kaba2023equivariance,
````
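The integration pattern this README diff describes (canonicalize the input, run the prediction network, then fold the canonicalizer's prior-regularization term into the loss via `add_prior_regularizer` before `loss.backward()`) can be sketched with plain-Python stand-ins. Only the `add_prior_regularizer` call name comes from the diff itself; `MockCanonicalizer`, `predict`, and all numbers below are hypothetical mocks, not the actual equiadapt API:

```python
# Schematic sketch of the training-loop integration shown in the README diff:
# canonicalize inputs before prediction, then add a prior-regularization term
# to the task loss. Everything except the `add_prior_regularizer` method name
# is a hypothetical stand-in for illustration.

class MockCanonicalizer:
    """Stand-in for an equiadapt canonicalizer (not the real API)."""

    def __init__(self, prior_weight: float = 0.1):
        self.prior_weight = prior_weight
        self._last_prior_loss = 0.0

    def __call__(self, x):
        # A real canonicalizer would predict and apply a group element here;
        # this sketch uses the identity and a fixed pretend prior term.
        self._last_prior_loss = 0.5
        return x

    def add_prior_regularizer(self, loss: float) -> float:
        # Augment the task loss with the canonicalization prior term.
        return loss + self.prior_weight * self._last_prior_loss


def predict(x):
    """Stand-in prediction network: sum of the inputs."""
    return sum(x)


canonicalizer = MockCanonicalizer()
x = [1.0, 2.0, 3.0]
out = predict(canonicalizer(x))   # canonicalize, then predict
task_loss = (out - 6.0) ** 2      # toy squared-error loss
loss = canonicalizer.add_prior_regularizer(task_loss)
# loss == 0.0 + 0.1 * 0.5 == 0.05
```

In the real library the regularized `loss` is then passed to `loss.backward()` as usual; the point of the pattern is that the surrounding training loop stays unchanged.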

docs/index.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 # equiadapt
 
-Library that provides metrics to asses representation quality
+Library to make any existing neural network architecture equivariant
 
 ## Contents
 
```

equiadapt/common/basecanonicalization.py

Lines changed: 20 additions & 0 deletions

```diff
@@ -1,3 +1,23 @@
+"""
+This module defines a base class for canonicalization and its subclasses for different types of canonicalization methods.
+
+Canonicalization is a process that transforms the input data into a canonical (standard) form.
+This can be a cheap alternative to building equivariant models, as it can be used to transform the input data into a canonical form and then use a standard model to make predictions.
+Canonicalization allows you to use any existing architecture (even pre-trained ones) for your task without having to worry about equivariance.
+
+The module contains the following classes:
+
+- `BaseCanonicalization`: This is an abstract base class that defines the interface for all canonicalization methods.
+
+- `IdentityCanonicalization`: This class represents an identity canonicalization method, which is a no-op; it doesn't change the input data.
+
+- `DiscreteGroupCanonicalization`: This class represents a discrete group canonicalization method, which transforms the input data into a canonical form using a discrete group.
+
+- `ContinuousGroupCanonicalization`: This class represents a continuous group canonicalization method, which transforms the input data into a canonical form using a continuous group.
+
+Each class has methods to perform the canonicalization, invert it, and calculate the prior regularization loss and identity metric.
+"""
+
 from typing import Any, Dict, List, Optional, Tuple, Union
 
 import torch
```
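The interface the new docstring describes can be illustrated with a minimal sketch. The class names `BaseCanonicalization` and `IdentityCanonicalization` come from the docstring, but the method names (`canonicalize`, `invert_canonicalization`) and their bodies below are assumptions for illustration, not the library's actual signatures:

```python
# Minimal sketch of the class hierarchy named in the docstring above.
# Method names and bodies are illustrative assumptions, not equiadapt's API.
from abc import ABC, abstractmethod
from typing import Any


class BaseCanonicalization(ABC):
    """Abstract interface: map inputs to a canonical form and invert it."""

    @abstractmethod
    def canonicalize(self, x: Any) -> Any:
        """Transform the input into its canonical (standard) form."""

    @abstractmethod
    def invert_canonicalization(self, y: Any) -> Any:
        """Undo the canonicalizing transformation on a model output."""


class IdentityCanonicalization(BaseCanonicalization):
    """No-op canonicalization: the input is treated as already canonical."""

    def canonicalize(self, x: Any) -> Any:
        return x

    def invert_canonicalization(self, y: Any) -> Any:
        return y


canon = IdentityCanonicalization()
roundtrip = canon.invert_canonicalization(canon.canonicalize([1, 2, 3]))
# roundtrip == [1, 2, 3]: the identity method leaves the data unchanged
```

The discrete- and continuous-group subclasses mentioned in the docstring would follow the same shape, but apply an actual group element in `canonicalize` and its inverse in `invert_canonicalization`.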
