prefetch data error during training #29
Comments
You need to check whether your file names and file paths are correct.
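A minimal sketch of such a check, assuming a Caffe-style list file whose first whitespace-separated token on each line is an image or frame-directory path (the exact list format is an assumption, not taken from this thread):

// Sketch: walk a list file and report entries whose paths do not exist.
#include <filesystem>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main(int argc, char** argv) {
    if (argc != 2) {
        std::cerr << "usage: " << argv[0] << " <list_file>\n";
        return 1;
    }
    std::ifstream list(argv[1]);
    std::string line;
    int missing = 0;
    while (std::getline(list, line)) {
        std::istringstream iss(line);
        std::string path;
        if (!(iss >> path)) continue;  // skip blank lines
        if (!std::filesystem::exists(path)) {
            std::cerr << "missing: " << path << "\n";
            ++missing;
        }
    }
    std::cout << missing << " missing entries\n";
    return missing == 0 ? 0 : 2;
}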
Thanks for your quick reply. I have checked the file names and file paths by adding LOG(INFO) in io.cpp to print the flow image names, and they are valid. Moreover, io.cpp would print log info if any image failed to open, and no such message appears. The dataset works fine in the temporal-segment-networks codebase.
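For readers who want to reproduce this style of debugging, here is a small standalone sketch using glog (the logging library behind Caffe's LOG macros); the load_image() helper is hypothetical and merely stands in for the real loader in io.cpp:

// Sketch: log each image path as it is loaded, and log open failures.
#include <glog/logging.h>
#include <fstream>
#include <string>

bool load_image(const std::string& filename) {
    LOG(INFO) << "loading flow image: " << filename;
    std::ifstream f(filename, std::ios::binary);
    if (!f) {
        LOG(ERROR) << "failed to open: " << filename;
        return false;
    }
    return true;
}

int main(int argc, char** argv) {
    google::InitGoogleLogging(argv[0]);
    FLAGS_logtostderr = 1;  // print to stderr instead of log files
    for (int i = 1; i < argc; ++i) load_image(argv[i]);
    return 0;
}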
This issue was solved by reducing the scale of the UCF101 dataset by 0.3. However, that is a little weird.
This issue is now properly solved. It was a problem with the shot files: some shot files contain no proposals (the shot file is empty).
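For anyone hitting the same crash: a plausible chain is that an empty shot file yields zero proposals, and a later integer division or modulo by that zero count raises the SIGFPE shown in the log below. A small sketch that flags empty shot files before training, assuming one plain-text shot file per video in a single directory (this layout is an assumption, not taken from the repo):

// Sketch: flag shot files that contain no proposals, i.e. no non-blank lines.
#include <filesystem>
#include <fstream>
#include <iostream>
#include <string>

namespace fs = std::filesystem;

// Returns true if the file has at least one non-blank line.
static bool has_proposals(const fs::path& p) {
    std::ifstream in(p);
    std::string line;
    while (std::getline(in, line)) {
        if (line.find_first_not_of(" \t\r") != std::string::npos) return true;
    }
    return false;
}

int main(int argc, char** argv) {
    if (argc != 2) {
        std::cerr << "usage: " << argv[0] << " <shot_dir>\n";
        return 1;
    }
    for (const auto& entry : fs::directory_iterator(argv[1])) {
        if (entry.is_regular_file() && !has_proposals(entry.path()))
            std::cout << "empty shot file: " << entry.path() << "\n";
    }
    return 0;
}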
Dear Limin,
While training the optical flow network, I got an error: training quit while prefetching data in the SequenceDataLayer.
The log is as follows:
*** Aborted at 1540548898 (unix time) try "date -d @1540548898" if you are using GNU date ***
PC: @ 0x7f46cdec8f3d caffe::SequenceDataLayer<>::InternalThreadEntry()
*** SIGFPE (@0x7f46cdec8f3d) received by PID 29760 (TID 0x7f467dc61700) from PID 18446744072869416765; stack trace: ***
@ 0x7f46cce4f4b0 (unknown)
@ 0x7f46cdec8f3d caffe::SequenceDataLayer<>::InternalThreadEntry()
@ 0x7f46cca015d5 (unknown)
@ 0x7f46cc7da6ba start_thread
@ 0x7f46ccf2141d clone
@ 0x0 (unknown)
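For context: on Linux/x86, a SIGFPE like this usually comes from an integer division or modulo by zero, not from a floating-point operation. Below is a minimal sketch of that failure mode and the guard that avoids it; whether SequenceDataLayer actually samples an index with a modulo like this is an assumption, not something confirmed in this thread:

// Sketch (not the actual Caffe code): integer modulo by zero traps as
// SIGFPE on x86, matching the signal in the log above.
#include <cstdlib>
#include <iostream>

int main() {
    int num_proposals = 0;  // e.g., what an empty shot file would yield
    if (num_proposals == 0) {
        // Guard: without this check, the modulo below raises SIGFPE.
        std::cerr << "no proposals to sample from\n";
        return 1;
    }
    int idx = std::rand() % num_proposals;  // SIGFPE when num_proposals == 0
    std::cout << "sampled index " << idx << "\n";
    return 0;
}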
The solver's settings are as follows:
net: "../models/four_class/temporal_102_class_hard_bn_inception_train_val.prototxt"
testing parameter
test_iter: 2710
test_interval: 500
test_initialization: true
output
display: 100
average_loss: 100
snapshot: 500
snapshot_prefix: "../models/four_class/flow_finetune/temporal_untrimmednet_hard_bn_inception_average_seg3_top3"
debug_info: false
learning rate
base_lr: 0.001
lr_policy: "multistep"
gamma: 0.1
stepvalue: [10000, 15000, 20000]
max_iter: 40000
iter_size: 1
parameter of SGD
momentum: 0.9
weight_decay: 0.0005
clip_gradients: 20
GPU setting
solver_mode: GPU
device_id: [1,0]
richness: 200
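As a side note on the schedule above: Caffe's "multistep" policy keeps lr = base_lr * gamma^k, where k is the number of stepvalue entries the current iteration has passed, so the rate here drops to 1e-4 at iteration 10000, 1e-5 at 15000, and 1e-6 at 20000. A small sketch of that computation:

// Sketch of the "multistep" learning-rate policy applied to the
// solver settings above: lr = base_lr * gamma^(steps passed).
#include <cmath>
#include <initializer_list>
#include <iostream>
#include <vector>

double multistep_lr(double base_lr, double gamma,
                    const std::vector<int>& stepvalues, int iter) {
    int current_step = 0;
    for (int sv : stepvalues)
        if (iter >= sv) ++current_step;
    return base_lr * std::pow(gamma, current_step);
}

int main() {
    const std::vector<int> steps = {10000, 15000, 20000};
    for (int iter : {0, 10000, 15000, 20000, 40000})
        std::cout << "iter " << iter << " -> lr "
                  << multistep_lr(0.001, 0.1, steps, iter) << "\n";
    return 0;
}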
I fine-tuned the network from the weights anet1.2_temporal_untrimmednet_hard_bn_inception.caffemodel, with batch_size = 5.
Could you help indicate what's wrong with my settings?
Thanks very much!