
overfitting very fast? #12

Open
asdfqwer2015 opened this issue Feb 21, 2019 · 2 comments
Comments

@asdfqwer2015
Hi, sorry to bother you again, but I'm very interested in your model.

I found that the model overfits very fast during training.

Description:
I tested a trained model, and it seems to have low accuracy when predicting videos. I fed the trained model randomly sampled cached pickle NumPy arrays of shape (10, 8, 112, 112, 3), drawn from both the train set and the test set. It predicts well on samples from the train set, but gives incorrect results for almost all samples from the test set. By the way, the samples cover all 25 classes.
I also evaluated on the test set: the loss is >> 1, while the loss on the train set is almost 0.

So I modified the script to monitor the validation loss while training:

```python
for epoch in range(int(200000 // steps_per_epoch) + 1):
    gesture_classifier.train(input_fn=train_input_fn, steps=steps_per_epoch, hooks=[logging_hook])
    eval_results = gesture_classifier.evaluate(input_fn=eval_input_fn, steps=100)
```

The eval samples are drawn at random from the whole test set.
And I get curves like:

[image: training curves — yellow curve: train loss, blue curve: val loss]

And the edit distance for samples from the test set looks like:
[image: edit-distance curve over training]
The validation error is still very high.
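
For context, the edit distance reported here is presumably the Levenshtein distance between the predicted and ground-truth label sequences (this sketch is an assumption about the metric, not code from this repo):

```python
def edit_distance(pred, target):
    # Levenshtein distance: minimum number of insertions, deletions,
    # and substitutions needed to turn `pred` into `target`.
    m, n = len(pred), len(target)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == target[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

print(edit_distance([3, 7, 7, 2], [3, 7, 2]))  # 1 (one deletion)
```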

Is the model overfitting, or did I miss something?

@breadbread1984
Owner

breadbread1984 commented Feb 21, 2019

One difference between my implementation and the description in the original paper is how the hidden layers are normalized. I used layer normalization so that a small batch could fit on my computing hardware. You can replace layer normalization with batch normalization, increase the batch size, and train C3D and R3DCNN again to test whether the result improves.
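
(The practical difference is the axis over which statistics are computed: batch norm normalizes each feature across the batch, so it degrades with tiny batches, while layer norm normalizes each sample across its own features, independent of batch size. A minimal NumPy sketch of the two, not the repo's TensorFlow code:)

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Normalize each feature using statistics computed across the batch axis.
    # Unreliable estimates when the batch is tiny (e.g. batch size 4).
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def layer_norm(x, eps=1e-5):
    # Normalize each sample using statistics computed across its own features,
    # so the result does not depend on batch size at all.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(4, 16).astype(np.float32)  # batch of 4 samples, 16 features
print(batch_norm(x).shape, layer_norm(x).shape)  # (4, 16) (4, 16)
```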

Besides, the original paper uses multiple channels together to detect hand gestures. I couldn't read some of the channels from the dataset, so I haven't tried that.

Furthermore, the NV gesture dataset is relatively small, so you may need early stopping during training.
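
(Early stopping can be wired around the existing train/evaluate loop. A hypothetical sketch, not code from this repo — `train_step` and `evaluate` stand in for `gesture_classifier.train` and `gesture_classifier.evaluate`:)

```python
def train_with_early_stopping(train_step, evaluate, max_epochs=100, patience=5):
    # Stop once validation loss has not improved for `patience` consecutive
    # epochs, and report the epoch with the best validation loss.
    best_loss, best_epoch, bad_epochs = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_step()
        val_loss = evaluate()
        if val_loss < best_loss:
            best_loss, best_epoch, bad_epochs = val_loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return best_epoch, best_loss

# Toy usage: validation loss falls, then rises (overfitting sets in).
losses = iter([1.0, 0.6, 0.4, 0.45, 0.5, 0.6, 0.7, 0.8, 0.9])
print(train_with_early_stopping(lambda: None, lambda: next(losses)))  # (2, 0.4)
```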

@asdfqwer2015
Author

OK, I'll try BN instead of LN now.
But I only have a single GPU to train the model, so the batch size can be at most 4 in my scenario, which may make BN unstable. I hope to find a mechanism for the tf.estimator API, similar to subdivisions in YOLO, that computes the mean gradient over several small batches to emulate a large batch size.
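
(The gradient-accumulation idea itself is framework-agnostic: run several micro-batches, average their gradients, then apply one update. A minimal NumPy sketch, where the hypothetical `grad_fn` stands in for a real backward pass:)

```python
import numpy as np

def accumulated_step(params, grad_fn, micro_batches, lr=0.1):
    # Average gradients over several micro-batches before a single update,
    # mimicking one step on a batch the size of all micro-batches combined.
    total = np.zeros_like(params)
    for batch in micro_batches:
        total += grad_fn(params, batch)
    mean_grad = total / len(micro_batches)
    return params - lr * mean_grad

# Toy example: gradient of squared error between params and the batch mean.
grad_fn = lambda p, b: 2 * (p - b.mean(axis=0))
params = np.zeros(3)
batches = [np.ones((4, 3)) * v for v in (1.0, 2.0, 3.0)]  # three micro-batches
params = accumulated_step(params, grad_fn, batches)
print(params)  # -> [0.4 0.4 0.4], one step toward the overall mean
```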
