
Commit be37e77

Merge branch 'main' into vision-perturbation
2 parents: 1abb489 + 3094132


3 files changed: +50, -2 lines changed

+37 -1
@@ -1,2 +1,38 @@
-from code_soup.common.vision.models.allconvnet import AllConvNet
+from torchvision.models import (
+    alexnet,
+    densenet121,
+    densenet161,
+    densenet169,
+    densenet201,
+    googlenet,
+    inception_v3,
+    mnasnet0_5,
+    mnasnet0_75,
+    mnasnet1_0,
+    mnasnet1_3,
+    mobilenet_v2,
+    mobilenet_v3_large,
+    mobilenet_v3_small,
+    resnet18,
+    resnet34,
+    resnet50,
+    resnet101,
+    resnet152,
+    resnext50_32x4d,
+    resnext101_32x8d,
+    shufflenet_v2_x0_5,
+    shufflenet_v2_x1_0,
+    shufflenet_v2_x1_5,
+    shufflenet_v2_x2_0,
+    squeezenet1_0,
+    squeezenet1_1,
+    vgg11,
+    vgg13,
+    vgg16,
+    vgg19,
+    wide_resnet50_2,
+    wide_resnet101_2,
+)
+
+from code_soup.common.vision.models.allconvnet import AllConvNet
 from code_soup.common.vision.models.nin import NIN
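The hunk above re-exports the standard torchvision classification architectures alongside the repository's own models. As a rough illustration only, here is a minimal sketch of how those re-exports could be used after the merge; it assumes the edited file is the models package __init__ (the file is not named on this page) and uses the pretrained=True constructor argument that torchvision provided at the time of this commit:

import torch
from code_soup.common.vision.models import resnet18  # hypothetical usage of the re-export

# Load an ImageNet-pretrained ResNet-18 through the torchvision re-export.
model = resnet18(pretrained=True)
model.eval()

# ResNet expects RGB inputs of at least 224 x 224 (see the readme notes below).
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000])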

code_soup/common/vision/models/readme.md

+11
@@ -4,3 +4,14 @@ List of implemented models
 ---
 1. [AllConvNet](allconvnet.py), [Striving for Simplicity: The All Convolutional Net](https://arxiv.org/abs/1412.6806)
 2. [Network in Network](nin.py), [Network In Network](https://arxiv.org/abs/1312.4400)
+
+# Notes on existing Torchvision models
+
+| Model | Input sizes |
+| ------------- | ------------- |
+| AlexNet | Pre-trained to work on RGB images of size 256 x 256. Input to the first layer is a random crop of size 227 x 227 (not 224 x 224 as mentioned in the paper). The required minimum input size of the model is 227 x 227. |
+| VGG-net | Pre-trained to work on RGB images of size 256 x 256, cropped to 224 x 224. The required minimum input size of the model is 224 x 224. |
+| ResNet | Pre-trained to work on RGB images of size 256 x 256, cropped to 224 x 224. The required minimum input size of the model is 224 x 224. |
+| Inception-v3 | Pre-trained to work on RGB images of size 299 x 299. The pre-trained model, with the default aux_logits=True, works for images of size >= 299 x 299 (e.g. ImageNet) but not for smaller images (e.g. CIFAR-10 and MNIST). |
+
+All other pre-trained models require a minimum input size of 224 x 224.
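The Inception-v3 row is the main caveat in this table: with the default aux_logits=True, the pre-trained network only accepts inputs of 299 x 299 or larger. A hedged sketch of what that means in practice; the constructors are torchvision's, while the upsampling step is just one way to satisfy the size constraint and is not part of this commit:

import torch
import torch.nn.functional as F
from torchvision.models import inception_v3

# Pre-trained Inception-v3 (default aux_logits=True) requires inputs of at least 299 x 299.
model = inception_v3(pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 299, 299))  # meets the minimum size, works

# CIFAR-10-sized images (32 x 32) must be upsampled first, for example with
# bilinear interpolation, before they can be fed to the pre-trained model.
small = torch.randn(1, 3, 32, 32)
with torch.no_grad():
    logits = model(F.interpolate(small, size=(299, 299), mode="bilinear", align_corners=False))
print(logits.shape)  # torch.Size([1, 1000])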
@@ -1 +1,2 @@
-from tests.test_common.test_vision.test_models.test_allconv import TestAllConvNet
+from tests.test_common.test_vision.test_models.test_allconv import TestAllConvNet
+from tests.test_common.test_vision.test_models.test_nin import TestNIN

0 commit comments
