This is a template for making a pull request. You can remove the text and sections and write your own thing if you wish; just make sure you give enough information about how and why. If you have any issues or difficulties, don't hesitate to open an issue.

# Description

The aim is to add this feature ...

# Proposed Changes

I changed the `foo()` function so that ...

# Checklist

Here are some things to check before creating the pull request. If you encounter any issues, don't hesitate to ask for help :)

- [ ] I have read the [contributor's guide](https://github.com/arnab39/equiadapt/blob/main/CONTRIBUTING.md).
- [ ] The base branch of my pull request is the `dev` branch, not the `main` branch.
- [ ] I ran the [code checks](https://github.com/arnab39/equiadapt/blob/main/CONTRIBUTING.md#implement-your-changes) on the files I added or modified and fixed the errors.
- [ ] I updated the [changelog](https://github.com/arnab39/equiadapt/blob/main/CHANGELOG.md).
# CHANGELOG.md
All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.1.2] - 2024-05-29

### Added
- Added canonicalization with an optimization approach.
- Added evaluation of the transfer learning capabilities of the canonicalizer.
- Added a pull request template.
- Added a test for discrete inverse canonicalization.

### Fixed
- Fixed segmentation evaluation for non-identity canonicalizers.
- Fixed minor bugs in inverse canonicalization for discrete groups.

### Changed
- Updated `README.md` with [Improved Canonicalization for Model Agnostic Equivariance](https://arxiv.org/abs/2405.14089) ([EquiVision](https://equivision.github.io/), CVPR 2024 workshop) paper details.
- Updated `CONTRIBUTING.md` with more information on how to run the code checks.
- Changed the OS used to test Python 3.7 on GitHub Actions (macos-latest -> macos-13).
# README.md
## Setup Conda environment for examples

The recommended way is to manually create an environment and install the dependencies from the `min_conda_env.yaml` file.

To create a conda environment with the necessary packages:
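The exact command was truncated from this excerpt. A typical invocation for building an environment from a pinned dependency file like `min_conda_env.yaml` would be the standard `conda env create` workflow; the commands below are a sketch of that workflow, not taken verbatim from the README.

```shell
# Build the environment from the dependency file shipped with the repo
# (assumes the file defines the environment name via its `name:` key)
conda env create -f min_conda_env.yaml

# Activate it; substitute the name defined inside the YAML file
conda activate <env-name-from-yaml>
```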
# Related papers and Citations

For more insights on this library, refer to our original paper on the idea: [Equivariance with Learned Canonicalization Function (ICML 2023)](https://proceedings.mlr.press/v202/kaba23a.html) and how to extend it to make any existing large pre-trained model equivariant: [Equivariant Adaptation of Large Pretrained Models (NeurIPS 2023)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/9d5856318032ef3630cb580f4e24f823-Abstract-Conference.html). An improved approach to designing the canonicalization network, which allows non-equivariant and expressive models to serve as equivariant networks, is presented in [Improved Canonicalization for Model Agnostic Equivariance (CVPR 2024: EquiVision Workshop)](https://arxiv.org/abs/2405.14089).

If you find this library or the associated papers useful, please cite the following papers:
```
@inproceedings{
panigrahi2024improved,
title={Improved Canonicalization for Model Agnostic Equivariance},
author={Siba Smarak Panigrahi and Arnab Kumar Mondal},
booktitle={CVPR 2024 Workshop on Equivariant Vision: From Theory to Practice},
year={2024},
url={https://arxiv.org/abs/2405.14089}
}
```

# Contact

For questions related to this code, please raise an issue and you can mail us at:
# Contributing

You can check out the [contributor's guide](https://github.com/arnab39/equiadapt/blob/main/CONTRIBUTING.md).

This project uses `pre-commit`; you can install it before making any
            torch.Tensor: The output of the network. It has the shape (batch_size, out_vector_size).
        """
        return self.resnet18(x)

class WideResNet101Network(nn.Module):
    """
    This class represents a neural network based on the WideResNet architecture.

    The network uses a pre-trained WideResNet model. The final fully connected layer of the WideResNet101 model is replaced with a new fully connected layer.

    Attributes:
        wideresnet (torchvision.models.ResNet): The WideResNet-101 model.
        out_vector_size (int): The size of the output vector of the network.
    """

    def __init__(
        self,
        in_shape: tuple,
        out_channels: int,
        kernel_size: int,
        num_layers: int = 2,
        out_vector_size: int = 128,
    ):
        """
        Initializes the WideResNet101Network instance.

        Args:
            in_shape (tuple): The shape of the input data. It should be a tuple of the form (in_channels, height, width).
            out_channels (int): The number of output channels of the first convolutional layer.
            kernel_size (int): The size of the kernel of the convolutional layers.
            num_layers (int, optional): The number of convolutional layers. Defaults to 2.
            out_vector_size (int, optional): The size of the output vector of the network. Defaults to 128.
        """
        # ... (__init__ body elided in this excerpt)

    def forward(self, x):
        """
        Args:
            x (torch.Tensor): The input data. It should have the shape (batch_size, in_channels, height, width).

        Returns:
            torch.Tensor: The output of the network. It has the shape (batch_size, out_vector_size).
        """
        return self.wideresnet(x)
181
+
182
+
183
+
classWideResNet50Network(nn.Module):
184
+
"""
185
+
This class represents a neural network based on the WideResNetNetwork architecture.
186
+
187
+
The network uses a pre-trained WideResNet model. The final fully connected layer of the WideResNet50 model is replaced with a new fully connected layer.
188
+
189
+
Attributes:
190
+
resnet18 (torchvision.models.ResNet): The ResNet-18 model.
191
+
out_vector_size (int): The size of the output vector of the network.
192
+
"""
193
+
194
+
def__init__(
195
+
self,
196
+
in_shape: tuple,
197
+
out_channels: int,
198
+
kernel_size: int,
199
+
num_layers: int=2,
200
+
out_vector_size: int=128,
201
+
):
202
+
"""
203
+
Initializes the ResNet18Network instance.
204
+
205
+
Args:
206
+
in_shape (tuple): The shape of the input data. It should be a tuple of the form (in_channels, height, width).
207
+
out_channels (int): The number of output channels of the first convolutional layer.
208
+
kernel_size (int): The size of the kernel of the convolutional layers.
209
+
num_layers (int, optional): The number of convolutional layers. Defaults to 2.
210
+
out_vector_size (int, optional): The size of the output vector of the network. Defaults to 128.