Pass preprocess_image function to generators in evaluate.py #1290
Conversation
Codecov Report
@@           Coverage Diff           @@
##           master   #1290   +/- ##
=======================================
+ Coverage      23%     23%    +<1%
=======================================
  Files          43      43
  Lines        2535    2539      +4
=======================================
+ Hits          576     577      +1
- Misses       1959    1962      +3
@@ -141,6 +150,7 @@ def main(args=None):
     # load the model
     print('Loading model, this may take a second...')
     model = models.load_model(args.model, backbone_name=args.backbone)
+    generator.compute_shapes = make_shapes_callback(model)
This can also be passed to the generator's constructor. It does mean that the order of creating the generators and loading the model has to be reversed, but I don't think this is an issue.
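The constructor-based approach suggested above can be sketched as follows. This is an illustration only: `Generator` is a stand-in for the keras-retinanet generator classes, and the body of `make_shapes_callback` is stubbed (in the real library it derives the feature-pyramid shapes from the loaded model).

```python
# Sketch of the constructor-based approach (hypothetical stand-ins, not the
# real keras-retinanet API): load the model first, then pass the shapes
# callback when constructing the generator.

def make_shapes_callback(model):
    # Stub: the real callback queries the model for its pyramid output
    # shapes. Here we just halve the image shape once per pyramid level.
    def compute_shapes(image_shape, pyramid_levels):
        return [(image_shape[0] >> l, image_shape[1] >> l)
                for l in pyramid_levels]
    return compute_shapes

class Generator:
    def __init__(self, compute_shapes=None):
        self.compute_shapes = compute_shapes

# Reversed order: model first, generator second.
model = None  # placeholder for models.load_model(...)
generator = Generator(compute_shapes=make_shapes_callback(model))
print(generator.compute_shapes((512, 512), [3, 4, 5]))  # [(64, 64), (32, 32), (16, 16)]
```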
Thank you for the PR! I haven't gotten around to doing this, so it is much appreciated. Does the callback during training work properly? I could imagine it needs a similar treatment.
Do you mean make_shapes_callback, and that it should be passed directly to the generators' constructors for all models except resnet50, instead of this: keras-retinanet/keras_retinanet/bin/train.py, lines 494 to 498 in e27df4c?
I'm not getting high mAP with my backbone during training, so it can indeed be an issue. But I'm not sure, as I've just started and there can be other problems.
Ah, it looks like
Yes I do, when running without the fix. I'm getting 20-30-40% mAP in the evaluation of the first epochs during training, but in evaluate.py restored snapshots always give 0. It looks reasonable to apply the same image preprocessing during evaluation as in training, but I can't see why resnet50 works without it. Maybe the 'tf' preprocessing mode for the mobilenet backbone, instead of 'caffe' for resnet, causes that?
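For reference, the two preprocessing conventions discussed here differ substantially. A minimal numpy sketch, mirroring what keras.applications.imagenet_utils.preprocess_input does for mode='caffe' versus mode='tf' (the constants are the standard ImageNet channel means; treat the exact details as an assumption):

```python
import numpy as np

def preprocess_caffe(x):
    # resnet50 style: RGB -> BGR, then subtract the ImageNet channel means.
    x = x[..., ::-1].astype(np.float64)
    x -= np.array([103.939, 116.779, 123.68])
    return x

def preprocess_tf(x):
    # mobilenet style: scale pixels from [0, 255] to [-1, 1].
    return x.astype(np.float64) / 127.5 - 1.0

pixel = np.array([[[255.0, 0.0, 0.0]]])  # a pure-red RGB pixel
print(preprocess_caffe(pixel))  # roughly [-103.9, -116.8, 131.3]
print(preprocess_tf(pixel))     # [1.0, -1.0, -1.0]
```

Feeding 'tf'-scaled inputs to a model trained with 'caffe' preprocessing (or vice versa) shifts every activation, which is consistent with mAP collapsing to 0 at evaluation time.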
Oh yeah definitely, was just an oversight that this wasn't handled properly.
It's because the default value follows resnet50-style preprocessing. I was asking because I thought you meant that, even with this fix, you get different results depending on whether you evaluate during training or after training. I understand now that this isn't the case for you. Then everything looks fine to me, thanks again for the PR.
@hgaiser Should this apply to vgg and densenet only, or to other backbones as well, besides resnet50?
The shapes of the feature pyramid for resnet50 (and probably some others) are well defined, i.e. you can predict how they will end up. For others, like vgg and densenet, they were more difficult to predict, so the decision was made to write a function that, given a model, extracts the sizes of the feature pyramid. I recall this from memory, so I could be wrong on some details.
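A minimal sketch of the "well defined" case: for resnet50-style backbones, the level-x pyramid shape is just the image shape divided by the stride 2**x, rounded up. This mirrors the guess_shapes default in utils/anchors.py, but treat the exact form as an assumption:

```python
import numpy as np

def guess_shapes(image_shape, pyramid_levels=(3, 4, 5, 6, 7)):
    # Ceiling division of the image shape by the stride 2**x at each level.
    image_shape = np.array(image_shape[:2])
    return [tuple((image_shape + 2 ** x - 1) // (2 ** x))
            for x in pyramid_levels]

# For backbones like vgg and densenet, padding and pooling details break
# this simple rule, hence make_shapes_callback, which measures the model's
# real output shapes instead of guessing them.
print(guess_shapes((512, 512)))  # [(64, 64), (32, 32), (16, 16), (8, 8), (4, 4)]
```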
Wouldn't it be better to change that line? As far as I can see, there is only one such place: keras-retinanet/keras_retinanet/utils/anchors.py, lines 186 to 198 in a81b313.
I'm pretty sure other backbones (like mobilenet and efficientnet) work similarly to resnet50, so you would end up with some list of backbones there either way.
Ok, thank you!
Pass preprocess_image function to generators in evaluate.py
I've encountered a problem similar to this issue: #647 (with a custom backbone, mobilenetv3, not in the "zoo"). In #1055 (comment), @mariaculman18 offered a solution that worked for me and seems reasonable. The author is not going to create a pull request, so I decided to do so, as other users with non-resnet50 backbones can be affected.