pretrained models #28
No, the weights were just converted from the 2D variant. You can train without freezing, I guess.
@ZFTurbo Yes, you're right. I tried both options.
Thank you in advance. I have a related question: when loading the models, do I need to specify a particular input shape for each model to get the ImageNet weights? If so, could you please list the required input shape for each model? For example, in your paper you used the dense model with input shape [96, 128, 128, 3].
If you don't include the top, you can use any input shape (with some limitations, e.g. each spatial dimension divisible by 32).
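The divisibility constraint above can be checked before building a model. This is a minimal sketch; the factor of 32 is an assumption based on the five 2x downsampling stages typical of these backbones, and `check_input_shape` is a hypothetical helper, not part of the library.

```python
def check_input_shape(shape, factor=32):
    """Return True if every spatial dimension of `shape` is divisible
    by `factor`. `shape` is (depth, height, width, channels); the
    channel axis is ignored."""
    return all(dim % factor == 0 for dim in shape[:-1])

print(check_input_shape((128, 128, 128, 3)))  # True
print(check_input_shape((96, 128, 128, 3)))   # True  (96 = 3 * 32)
print(check_input_shape((100, 128, 128, 3)))  # False (100 % 32 != 0)
```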
Yes, I understand that, but I want to confirm that the ImageNet weights load correctly for my custom input shape, so that I can fine-tune the model afterward instead of training it from scratch.
The weights do not depend on the input shape, and they were just converted from the 2D variant, mostly from the 224x224 version. So something like 224x224xN would be best, but that is usually too large for a 3D model; I'd suggest something like 128x128x128 at most.
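To make "converted from the 2D variant" concrete, here is a sketch of one common inflation scheme (the I3D-style approach: repeat the 2D kernel along the new depth axis and rescale). This is an assumption about how such a conversion might work, not the repository's actual conversion code.

```python
import numpy as np

def inflate_2d_kernel(w2d, depth):
    """Inflate a 2D conv kernel of shape (kh, kw, c_in, c_out) to a 3D
    kernel of shape (depth, kh, kw, c_in, c_out) by repeating it along
    the new depth axis and dividing by `depth`, so a constant input
    produces the same activation scale as in 2D."""
    return np.repeat(w2d[np.newaxis, ...], depth, axis=0) / depth

w2d = np.random.rand(3, 3, 64, 128).astype(np.float32)
w3d = inflate_2d_kernel(w2d, depth=3)
print(w3d.shape)  # (3, 3, 3, 64, 128)
# Summing over the depth axis recovers the original 2D kernel:
print(np.allclose(w3d.sum(axis=0), w2d))
```

Because the weights are only tiled along depth, the same converted weights fit any valid input shape, which is why the shape you choose doesn't affect weight loading.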
Thank you so much! That's exactly what I wanted to know. Thanks again!
First, thank you for your effort on this work. I'd like to ask about the nature of these models: are they pretrained the way 2D models are pretrained on ImageNet? In other words, when I want to use them, can I freeze the layers like this:
for params in self.model.parameters():
    params.requires_grad = False
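A common fine-tuning pattern extends the snippet above: freeze everything, then unfreeze only the task head. This is a minimal PyTorch sketch using a tiny stand-in model (the `nn.Sequential` here is hypothetical; the real model would be one of the pretrained 3D backbones).

```python
import torch.nn as nn

# A tiny stand-in model: a 3D "backbone" followed by a classification head.
model = nn.Sequential(
    nn.Conv3d(3, 8, kernel_size=3, padding=1),   # pretrained backbone (to freeze)
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(8, 2),                             # task head (to train)
)

# Freeze everything first, as in the snippet above...
for p in model.parameters():
    p.requires_grad = False

# ...then unfreeze only the final Linear head for fine-tuning.
for p in model[-1].parameters():
    p.requires_grad = True

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the head's parameters remain trainable
```

Passing only the trainable parameters to the optimizer (e.g. `filter(lambda p: p.requires_grad, model.parameters())`) then keeps the frozen backbone fixed during training.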