### Issue with evaluation while training Mask R-CNN with Detectron2 in Colab
I'm training a Mask R-CNN model on the COCO dataset with Detectron2 in Colab. During evaluation, inference completes, but the run stops right after:

```
[10/24 08:08:00 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.47s)
creating index...
index created!
[10/24 08:08:01 d2.evaluation.fast_eval_api]: Evaluate annotation type bbox
```
No errors are displayed, but the cell's run icon turns red with the message "cell execution was unsuccessful".
Code:
1. CONFIG CELL:
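The original cell was not included in the report. Below is a minimal sketch of what a Detectron2 config cell for this setup typically looks like, reconstructed from the logs (lr 0.0005, 41040 iterations, the val2014 annotations path). The dataset names, the train-split paths, and the batch size are assumptions, not the original code.

```python
# Hedged reconstruction of the config cell -- dataset names, train-split
# paths, and batch size are assumptions; the learning rate, max iterations,
# and val annotations path are taken from the logs below.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances

# Register the COCO splits (the train paths are placeholders).
register_coco_instances(
    "coco_train2014", {},
    "/content/drive/MyDrive/Thesis/coco_dataset/annotations/instances_train2014.json",
    "/content/drive/MyDrive/Thesis/coco_dataset/train2014",
)
register_coco_instances(
    "coco_val2014", {},
    "/content/drive/MyDrive/Thesis/coco_dataset/annotations/instances_val2014.json",
    "/content/drive/MyDrive/Thesis/coco_dataset/val2014",
)

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("coco_train2014",)
cfg.DATASETS.TEST = ("coco_val2014",)
cfg.SOLVER.IMS_PER_BATCH = 2   # assumption; not in the log
cfg.SOLVER.BASE_LR = 0.0005    # lr: 0.0005 in the training log
cfg.SOLVER.MAX_ITER = 41040    # final iteration in the training log
```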
2. TRAINING CELL
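Again a sketch rather than the original code, assuming the stock `DefaultTrainer` is used with the `cfg` from the previous cell:

```python
# Minimal training cell, assuming DefaultTrainer and the cfg built above.
import os
from detectron2.engine import DefaultTrainer

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```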
Results (log):
After training:
```
[10/24 05:22:15 d2.utils.events]: eta: 0:00:11 iter: 41019 total_loss: 0.8259 loss_cls: 0.21 loss_box_reg: 0.3218 loss_mask: 0.1955 loss_rpn_cls: 0.02254 loss_rpn_loc: 0.02922 time: 0.5350 last_time: 0.5728 data_time: 0.0071 last_data_time: 0.0071 lr: 0.0005 max_mem: 3189M
[10/24 05:22:26 d2.utils.events]: eta: 0:00:00 iter: 41039 total_loss: 1.158 loss_cls: 0.3318 loss_box_reg: 0.3927 loss_mask: 0.2567 loss_rpn_cls: 0.04291 loss_rpn_loc: 0.05291 time: 0.5350 last_time: 0.4741 data_time: 0.0077 last_data_time: 0.0012 lr: 0.0005 max_mem: 3189M
[10/24 05:22:33 d2.utils.events]: eta: 0:00:00 iter: 41040 total_loss: 1.158 loss_cls: 0.3318 loss_box_reg: 0.3927 loss_mask: 0.2615 loss_rpn_cls: 0.04291 loss_rpn_loc: 0.06052 time: 0.5350 last_time: 0.5285 data_time: 0.0081 last_data_time: 0.0158 lr: 0.0005 max_mem: 3189M
[10/24 05:22:33 d2.engine.hooks]: Overall training speed: 41039 iterations in 6:05:55 (0.5350 s / it)
[10/24 05:22:33 d2.engine.hooks]: Total training time: 6:06:47 (0:00:51 on hooks)
[10/24 05:22:43 d2.data.datasets.coco]: Loading /content/drive/MyDrive/Thesis/coco_dataset/annotations/instances_val2014.json takes 10.08 seconds.
WARNING [10/24 05:22:43 d2.data.datasets.coco]:
Category ids in annotations are not in [1, #categories]! We'll apply a mapping for you.
```
After evaluation (inference):
```
[10/24 08:07:35 d2.evaluation.evaluator]: Inference done 40442/40504. Dataloading: 0.0031 s/iter. Inference: 0.2349 s/iter. Eval: 0.0061 s/iter. Total: 0.2442 s/iter. ETA=0:00:15
[10/24 08:07:40 d2.evaluation.evaluator]: Inference done 40464/40504. Dataloading: 0.0031 s/iter. Inference: 0.2349 s/iter. Eval: 0.0061 s/iter. Total: 0.2442 s/iter. ETA=0:00:09
[10/24 08:07:45 d2.evaluation.evaluator]: Inference done 40483/40504. Dataloading: 0.0031 s/iter. Inference: 0.2349 s/iter. Eval: 0.0061 s/iter. Total: 0.2442 s/iter. ETA=0:00:05
[10/24 08:07:50 d2.evaluation.evaluator]: Total inference time: 2:44:50.702556 (0.244221 s / iter per device, on 1 devices)
[10/24 08:07:50 d2.evaluation.evaluator]: Total inference pure compute time: 2:38:31 (0.234859 s / iter per device, on 1 devices)
[10/24 08:07:55 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[10/24 08:07:56 d2.evaluation.coco_evaluation]: Saving results to coco_eval2/coco_instances_results.json
[10/24 08:08:00 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.47s)
creating index...
index created!
[10/24 08:08:01 d2.evaluation.fast_eval_api]: Evaluate annotation type bbox
```
Expected behavior:
It should actually produce a log of the following form:
```
index created!
[09/15 19:01:52 d2.evaluation.fast_eval_api]: Evaluate annotation type bbox
[09/15 19:01:52 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.00 seconds.
[09/15 19:01:52 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[09/15 19:01:52 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.00 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 1.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 1.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 1.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
```
That is, the log should continue in this format.
I'm running this in Google Colab.
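For reference, an evaluation step that matches the log above (results saved under `coco_eval2`) would typically look like the sketch below; the dataset name and `trainer.model` are assumptions carried over from the earlier sketches.

```python
# Hedged sketch of the evaluation step; "coco_eval2" matches the output
# directory in the log above, everything else is an assumption.
from detectron2.data import build_detection_test_loader
from detectron2.evaluation import COCOEvaluator, inference_on_dataset

evaluator = COCOEvaluator("coco_val2014", output_dir="coco_eval2")
val_loader = build_detection_test_loader(cfg, "coco_val2014")
print(inference_on_dataset(trainer.model, val_loader, evaluator))
```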