no_action label in test mode #11
You should maintain a buffer of the time sequence and feed it into the model. The buffer serves as a FIFO of the online frames.
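A minimal sketch of what such an online FIFO buffer could look like (plain Python/NumPy; the window length, tensor layout, and `model` call are placeholders, not the repo's actual API):

```python
from collections import deque
import numpy as np

SEQ_LEN = 16                          # assumed window length; use whatever the model was trained with
frame_buffer = deque(maxlen=SEQ_LEN)  # oldest frame is dropped automatically (FIFO)

def on_new_frame(frame, model):
    """Push one incoming frame; once the buffer is full, classify the current window."""
    frame_buffer.append(frame)
    if len(frame_buffer) < SEQ_LEN:
        return None                   # not enough temporal context yet
    clip = np.stack(frame_buffer)     # shape: (SEQ_LEN, H, W, C)
    return model(clip[np.newaxis])    # hypothetical model call on a batch of one clip
```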
Thanks for your quick reply. The issue is how to process the frames without a gesture in an unsegmented input video. I found the model can only output 25 gesture classes, but cannot output a no_action label.
You can feed a data sequence of any length as long as you assign the input parameter 'sequence_lengths' in the input dictionary correctly.
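For illustration, a hedged sketch of assembling such an input dictionary: only the 'sequence_lengths' key comes from the comment above; the 'data' key, clip shapes, and zero-padding scheme are assumptions to be checked against the repo's actual feature names.

```python
import numpy as np

# Two clips of different lengths (T, H, W, C); shapes are made up for the example.
clips = [np.random.rand(37, 112, 112, 3).astype(np.float32),
         np.random.rand(52, 112, 112, 3).astype(np.float32)]

max_len = max(c.shape[0] for c in clips)
padded = np.zeros((len(clips), max_len) + clips[0].shape[1:], dtype=np.float32)
lengths = np.zeros(len(clips), dtype=np.int32)
for i, c in enumerate(clips):
    padded[i, :c.shape[0]] = c
    lengths[i] = c.shape[0]           # true (unpadded) number of frames

inputs = {'data': padded, 'sequence_lengths': lengths}
```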
Hmm, thanks for your reply again. But I'm still confused. Could you please help with these? a. For ... b. Should the class count for CTC equal len(classes) or len(classes) + 1 (an extra blank for the CTC loss)?
For the NV hand gesture dataset, every video clip definitely contains one continuously occurring gesture, so the label sequence contains only one class label (>= 0); there is no label for "no gesture". The output length of CTC equals the number of continuously occurring gestures found in a video clip; one label in the output sequence represents one gesture.
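On question b: CTC always reserves one extra blank symbol on top of the real classes, so the logit dimension is len(classes) + 1. A small illustrative sketch (PyTorch here, which is not necessarily the framework this repo uses):

```python
import torch
import torch.nn as nn

NUM_GESTURES = 25                    # real gesture classes 0..24
NUM_CTC_CLASSES = NUM_GESTURES + 1   # +1 blank symbol required by CTC

T, N = 80, 2                         # frames per clip, batch size
log_probs = torch.randn(T, N, NUM_CTC_CLASSES).log_softmax(dim=2)

# Each trimmed nvgesture clip contains exactly one gesture, so each target
# sequence has length 1.
targets = torch.tensor([[3], [17]])                    # one gesture label per clip
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 1, dtype=torch.long)

ctc = nn.CTCLoss(blank=NUM_GESTURES)  # reserve the last index (25) for the blank
loss = ctc(log_probs, targets, input_lengths, target_lengths)
```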
Hmm, I understand now. Thanks. But I've run into a new issue: overfitting. Could you please take a look? I'll create a new issue.
@asdfqwer2015 |
@asdfqwer2015 @breadbread1984 |
Hi, in ActionRecognition.py, I tested some videos from the nvgesture dataset (untrimmed videos), and it outputs a class number in 0~24 for every frame, i.e. it never outputs any blank or no_action label. If processing untrimmed video is online detection, as opposed to trimmed video being offline detection, how do I do online detection? Did I miss something?
Thanks.