
Commit 657ee6f

Merge pull request #15 from arnab39/dev
0.1.2
2 parents f39eb87 + 2b0e434 · commit 657ee6f

File tree

22 files changed: 303 additions, 67 deletions


.github/pull_request_template.md

Lines changed: 20 additions & 0 deletions
@@ -0,0 +1,20 @@
+This is a template for making a pull-request. You can remove the text and sections and write your own thing if you wish, just make sure you give enough information about how and why. If you have any issues or difficulties, don't hesitate to open an issue.
+
+
+# Description
+
+The aim is to add this feature ...
+
+# Proposed Changes
+
+I changed the `foo()` function so that ...
+
+
+# Checklist
+
+Here are some things to check before creating the pull request. If you encounter any issues, don't hesitate to ask for help :)
+
+- [ ] I have read the [contributor's guide](https://github.com/arnab39/equiadapt/blob/main/CONTRIBUTING.md).
+- [ ] The base branch of my pull request is the `dev` branch, not the `main` branch.
+- [ ] I ran the [code checks](https://github.com/arnab39/equiadapt/blob/main/CONTRIBUTING.md#implement-your-changes) on the files I added or modified and fixed the errors.
+- [ ] I updated the [changelog](https://github.com/arnab39/equiadapt/blob/main/CHANGELOG.md).

.github/workflows/ci.yml

Lines changed: 9 additions & 7 deletions
@@ -58,14 +58,16 @@ jobs:
   test:
     needs: prepare
     strategy:
+      fail-fast: false
       matrix:
-        python:
-        - "3.7" # oldest Python supported by PSF
-        - "3.10" # newest Python that is stable
-        platform:
-        - ubuntu-latest
-        # - macos-latest
-        # - windows-latest
+        python: ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
+        platform: [ubuntu-latest, macos-latest, windows-latest]
+        exclude: # Python < v3.8 does not support Apple Silicon ARM64.
+          - python: "3.7"
+            platform: macos-latest
+        include: # So run those legacy versions on Intel CPUs.
+          - python: "3.7"
+            platform: macos-13
     runs-on: ${{ matrix.platform }}
     steps:
       - uses: actions/checkout@v3
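This is not part of the commit, but as a quick illustration of the new test matrix, here is a small Python sketch (a simplified model of how GitHub Actions expands `matrix`, `exclude` and `include`) that enumerates the resulting jobs and confirms that Python 3.7 runs on macos-13 rather than macos-latest:

```python
from itertools import product

# Values copied from the updated ci.yml matrix.
pythons = ["3.7", "3.8", "3.9", "3.10", "3.11", "3.12"]
platforms = ["ubuntu-latest", "macos-latest", "windows-latest"]
exclude = [{"python": "3.7", "platform": "macos-latest"}]
include = [{"python": "3.7", "platform": "macos-13"}]

# Build the cross-product, drop excluded combinations, then append the
# include entries as extra jobs (simplified Actions semantics).
jobs = [
    {"python": py, "platform": plat}
    for py, plat in product(pythons, platforms)
    if {"python": py, "platform": plat} not in exclude
]
jobs.extend(include)

print(len(jobs))  # 18 jobs in total (17 from the filtered grid + 1 legacy job)
print([j for j in jobs if j["python"] == "3.7"])
# [{'python': '3.7', 'platform': 'ubuntu-latest'},
#  {'python': '3.7', 'platform': 'windows-latest'},
#  {'python': '3.7', 'platform': 'macos-13'}]
```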

.pre-commit-config.yaml

Lines changed: 3 additions & 3 deletions
@@ -2,7 +2,7 @@ exclude: '^docs/conf.py'

 repos:
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.5.0
+    rev: v4.6.0
     hooks:
       - id: trailing-whitespace
       - id: check-added-large-files
@@ -40,7 +40,7 @@ repos:
       - id: isort

   - repo: https://github.com/psf/black
-    rev: 24.2.0
+    rev: 24.4.2
     hooks:
       - id: black
         language_version: python3
@@ -66,7 +66,7 @@ repos:

   # Check for type errors with mypy:
   - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: 'v1.9.0'
+    rev: 'v1.10.0'
     hooks:
       - id: mypy
         args: [--disallow-untyped-defs, --ignore-missing-imports]

CHANGELOG.md

Lines changed: 10 additions & 3 deletions
@@ -5,15 +5,22 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

-## [Unreleased]
+## [0.1.2] - 2024-05-29

 ### Added
+- Added canonicalization with optimization approach.
+- Added evaluating transfer learning capabilities of canonicalizer.
+- Added pull request template.
+- Added test for discrete invert canonicalization.

 ### Fixed
+- Fixed segmentation evaluation for non-identity canonicalizers.
+- Fixed minor bugs in inverse canonicalization for discrete groups.

 ### Changed
-
-### Removed
+- Updated `README.md` with [Improved Canonicalization for Model Agnostic Equivariance](https://arxiv.org/abs/2405.14089) ([EquiVision](https://equivision.github.io/), CVPR 2024 workshop) paper details.
+- Updated `CONTRIBUTING.md` with more information on how to run the code checks.
+- Changed the OS used to test Python 3.7 on GitHub actions (macos-latest -> macos-13).

 ## [0.1.1] - 2024-03-15


CONTRIBUTING.md

Lines changed: 19 additions & 7 deletions
@@ -155,17 +155,29 @@ This can easily be done via [Anaconda] or [Miniconda] and detailed [here](https:
 `git log --graph --decorate --pretty=oneline --abbrev-commit --all`
 to look for recurring communication patterns.

+#### Run code checks

-5. Please check that your changes don't break any unit tests with:
+Please make sure to see the validation messages from pre-commit and fix any
+eventual issues. This should automatically use [flake8]/[black] to check/fix
+the code style in a way that is compatible with the project.

-```
-tox
-```
+To run pre-commit manually, you can use:
+
+```
+pre-commit run --all-files
+```
+
+Please also check that your changes don't break any unit tests with:
+
+```
+tox
+```
+
+(after having installed [tox] with `pip install tox` or `pipx`).

-(after having installed [tox] with `pip install tox` or `pipx`).
+You can also use [tox] to run several other pre-configured tasks in the
+repository. Try `tox -av` to see a list of the available checks.

-You can also use [tox] to run several other pre-configured tasks in the
-repository. Try `tox -av` to see a list of the available checks.

 ### Submit your contribution

README.md

Lines changed: 15 additions & 2 deletions
@@ -121,6 +121,8 @@ You can clone this repository and manually install it with:

 ## Setup Conda environment for examples

+The recommended way is to manually create an environment and install the dependencies from the `min_conda_env.yaml` file.
+
 To create a conda environment with the necessary packages:

 ```
@@ -168,7 +170,7 @@ You can also find [tutorials](https://github.com/arnab39/equiadapt/blob/main/tut

 # Related papers and Citations

-For more insights on this library refer to our original paper on the idea: [Equivariance with Learned Canonicalization Function (ICML 2023)](https://proceedings.mlr.press/v202/kaba23a.html) and how to extend it to make any existing large pre-trained model equivariant: [Equivariant Adaptation of Large Pretrained Models (NeurIPS 2023)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/9d5856318032ef3630cb580f4e24f823-Abstract-Conference.html).
+For more insights on this library refer to our original paper on the idea: [Equivariance with Learned Canonicalization Function (ICML 2023)](https://proceedings.mlr.press/v202/kaba23a.html) and how to extend it to make any existing large pre-trained model equivariant: [Equivariant Adaptation of Large Pretrained Models (NeurIPS 2023)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/9d5856318032ef3630cb580f4e24f823-Abstract-Conference.html). An improved approach for designing canonicalization network, which allows non-equivariant and expressive models as equivariant networks is presented in [Improved Canonicalization for Model Agnostic Equivariance (CVPR 2024: EquiVision Workshop)](https://arxiv.org/abs/2405.14089).


 If you find this library or the associated papers useful, please cite the following papers:
@@ -197,6 +199,17 @@ If you find this library or the associated papers useful, please cite the follow
 }
 ```

+```
+@inproceedings{
+panigrahi2024improved,
+title={Improved Canonicalization for Model Agnostic Equivariance},
+author={Siba Smarak Panigrahi and Arnab Kumar Mondal},
+booktitle={CVPR 2024 Workshop on Equivariant Vision: From Theory to Practice},
+year={2024},
+url={https://arxiv.org/abs/2405.14089}
+}
+```
+
 # Contact

 For questions related to this code, please raise an issue and you can mail us at:
@@ -206,7 +219,7 @@ For questions related to this code, please raise an issue and you can mail us at

 # Contributing

-You can check out the [contributor's guide](https://github.com/arnab39/equiadapt/blob/main/CHANGELOG.md).
+You can check out the [contributor's guide](https://github.com/arnab39/equiadapt/blob/main/CONTRIBUTING.md).

 This project uses `pre-commit`, you can install it before making any
 changes::

equiadapt/images/__init__.py

Lines changed: 4 additions & 0 deletions
@@ -22,6 +22,8 @@
     RotationEquivariantConvLift,
     RotoReflectionEquivariantConv,
     RotoReflectionEquivariantConvLift,
+    WideResNet50Network,
+    WideResNet101Network,
     custom_equivariant_networks,
     custom_group_equivariant_layers,
     custom_nonequivariant_networks,
@@ -51,6 +53,8 @@
     "OptimizedGroupEquivariantImageCanonicalization",
     "OptimizedSteerableImageCanonicalization",
     "ResNet18Network",
+    "WideResNet50Network",
+    "WideResNet101Network",
     "RotationEquivariantConv",
     "RotationEquivariantConvLift",
     "RotoReflectionEquivariantConv",

equiadapt/images/canonicalization_networks/__init__.py

Lines changed: 4 additions & 0 deletions
@@ -16,6 +16,8 @@
 from equiadapt.images.canonicalization_networks.custom_nonequivariant_networks import (
     ConvNetwork,
     ResNet18Network,
+    WideResNet50Network,
+    WideResNet101Network,
 )
 from equiadapt.images.canonicalization_networks.escnn_networks import (
     ESCNNEquivariantNetwork,
@@ -34,6 +36,8 @@
     "ESCNNWideBasic",
     "ESCNNWideBottleneck",
     "ResNet18Network",
+    "WideResNet101Network",
+    "WideResNet50Network",
     "RotationEquivariantConv",
     "RotationEquivariantConvLift",
     "RotoReflectionEquivariantConv",

equiadapt/images/canonicalization_networks/custom_nonequivariant_networks.py

Lines changed: 101 additions & 1 deletion
@@ -110,7 +110,7 @@ def __init__(
             out_vector_size (int, optional): The size of the output vector of the network. Defaults to 128.
         """
         super().__init__()
-        self.resnet18 = torchvision.models.resnet18(weights=None)
+        self.resnet18 = torchvision.models.resnet18(weights="DEFAULT")
         self.resnet18.fc = nn.Sequential(
             nn.Linear(512, out_vector_size),
         )
@@ -128,3 +128,103 @@ def forward(self, x: torch.Tensor) -> torch.Tensor:
             torch.Tensor: The output of the network. It has the shape (batch_size, 1).
         """
         return self.resnet18(x)
+
+
+class WideResNet101Network(nn.Module):
+    """
+    This class represents a neural network based on the WideResNetNetwork architecture.
+
+    The network uses a pre-trained WideResNet model. The final fully connected layer of the WideResNet101 model is replaced with a new fully connected layer.
+
+    Attributes:
+        resnet18 (torchvision.models.ResNet): The ResNet-18 model.
+        out_vector_size (int): The size of the output vector of the network.
+    """
+
+    def __init__(
+        self,
+        in_shape: tuple,
+        out_channels: int,
+        kernel_size: int,
+        num_layers: int = 2,
+        out_vector_size: int = 128,
+    ):
+        """
+        Initializes the ResNet18Network instance.
+
+        Args:
+            in_shape (tuple): The shape of the input data. It should be a tuple of the form (in_channels, height, width).
+            out_channels (int): The number of output channels of the first convolutional layer.
+            kernel_size (int): The size of the kernel of the convolutional layers.
+            num_layers (int, optional): The number of convolutional layers. Defaults to 2.
+            out_vector_size (int, optional): The size of the output vector of the network. Defaults to 128.
+        """
+        super().__init__()
+        self.wideresnet = torchvision.models.wide_resnet101_2(weights="DEFAULT")
+        self.wideresnet.fc = nn.Sequential(
+            nn.Linear(2048, out_vector_size),
+        )
+
+        self.out_vector_size = out_vector_size
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        """
+        Performs a forward pass through the network.
+
+        Args:
+            x (torch.Tensor): The input data. It should have the shape (batch_size, in_channels, height, width).
+
+        Returns:
+            torch.Tensor: The output of the network. It has the shape (batch_size, 1).
+        """
+        return self.wideresnet(x)
+
+
+class WideResNet50Network(nn.Module):
+    """
+    This class represents a neural network based on the WideResNetNetwork architecture.
+
+    The network uses a pre-trained WideResNet model. The final fully connected layer of the WideResNet50 model is replaced with a new fully connected layer.
+
+    Attributes:
+        resnet18 (torchvision.models.ResNet): The ResNet-18 model.
+        out_vector_size (int): The size of the output vector of the network.
+    """
+
+    def __init__(
+        self,
+        in_shape: tuple,
+        out_channels: int,
+        kernel_size: int,
+        num_layers: int = 2,
+        out_vector_size: int = 128,
+    ):
+        """
+        Initializes the ResNet18Network instance.
+
+        Args:
+            in_shape (tuple): The shape of the input data. It should be a tuple of the form (in_channels, height, width).
+            out_channels (int): The number of output channels of the first convolutional layer.
+            kernel_size (int): The size of the kernel of the convolutional layers.
+            num_layers (int, optional): The number of convolutional layers. Defaults to 2.
+            out_vector_size (int, optional): The size of the output vector of the network. Defaults to 128.
+        """
+        super().__init__()
+        self.wideresnet = torchvision.models.wide_resnet50_2(weights="DEFAULT")
+        self.wideresnet.fc = nn.Sequential(
+            nn.Linear(2048, out_vector_size),
+        )
+
+        self.out_vector_size = out_vector_size
+
+    def forward(self, x: torch.Tensor) -> torch.Tensor:
+        """
+        Performs a forward pass through the network.
+
+        Args:
+            x (torch.Tensor): The input data. It should have the shape (batch_size, in_channels, height, width).
+
+        Returns:
+            torch.Tensor: The output of the network. It has the shape (batch_size, 1).
+        """
+        return self.wideresnet(x)
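Outside the diff, here is a minimal usage sketch of the new canonicalization networks, assuming the constructor signature shown above (the `in_shape`, `out_channels`, `kernel_size` and `num_layers` arguments appear to be kept only for interface compatibility with the other networks) and noting that `weights="DEFAULT"` downloads pretrained torchvision weights on first use:

```python
import torch

from equiadapt.images import WideResNet50Network

# Arguments mirror the signature in the diff; only out_vector_size
# actually changes the network (it sizes the replacement fc head).
net = WideResNet50Network(
    in_shape=(3, 224, 224),
    out_channels=16,
    kernel_size=3,
    out_vector_size=128,
)

x = torch.randn(4, 3, 224, 224)  # (batch_size, in_channels, height, width)
out = net(x)
print(out.shape)  # torch.Size([4, 128]): out_vector_size features per image
```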

examples/images/classification/configs/canonicalization/opt_group_equivariant.yaml

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 canonicalization_type: opt_group_equivariant
-network_type: cnn # Options for canonization method 1) cnn 2) wideresnet
+network_type: cnn # Options for canonization method 1) cnn 2) non_equivariant_wrn_50 3) non_equivariant_wrn_101 4) non_equivariant_resnet18
 network_hyperparams:
   kernel_size: 7 # Kernel size for the canonization network
   out_channels: 16 # Number of output channels for the canonization network
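For context (not part of the commit), the new `network_type` options line up with the networks added above. A hypothetical lookup like the following illustrates the intended pairing; the actual dispatch inside equiadapt may differ:

```python
from equiadapt.images import (
    ResNet18Network,
    WideResNet50Network,
    WideResNet101Network,
)

# Hypothetical mapping for illustration only; equiadapt's real config
# handling may resolve these option names differently.
NETWORK_TYPES = {
    "non_equivariant_resnet18": ResNet18Network,
    "non_equivariant_wrn_50": WideResNet50Network,
    "non_equivariant_wrn_101": WideResNet101Network,
}

network_cls = NETWORK_TYPES["non_equivariant_wrn_50"]
print(network_cls.__name__)  # WideResNet50Network
```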
