Hi! Thanks for your excellent work. Could I ask how I can apply the model from the Adaptive Frame Rate paper to a new video dataset? The provided notebook doesn't elaborate on this explicitly. Thanks for your help.
Hello! Thanks for your interest!
I provided two examples: one for frame-level labels (see the "Adaptive Frame Rate" subsection in abaw3_train.ipynb) and one for video-level labels (the "Adaptive Frame Rate" subsection in train_emotions-pytorch-afew-vgaf.ipynb).
To speed up experiments, we assume that embeddings have been extracted from all faces in a frame before running our method, but you can easily adapt our code to extract embeddings from your test videos. First, copy the part from the "Train" section of the ABAW notebook to train the classifiers and estimate their threshold. You can use any classifier; we train an MLP and an SVM in different notebooks.
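In case it helps, here is a rough sketch of that training/threshold step, assuming the per-video embeddings and labels are already loaded; the variable names (X_train, y_train, X_val, y_val) and the threshold heuristic are only illustrative, not the exact code from the notebook:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# X_train: (n_samples, embedding_dim) facial embeddings, y_train: emotion labels
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=42)
# clf = SVC(probability=True)  # an SVM works as well, as in the other notebook
clf.fit(X_train, y_train)

# One possible way to estimate a confidence threshold on a validation split:
# take the mean maximal class probability over correctly classified samples.
probs = clf.predict_proba(X_val)
preds = probs.argmax(axis=1)
correct = preds == y_val
threshold = probs[correct].max(axis=1).mean()
print(f"Estimated confidence threshold: {threshold:.3f}")
```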
Second, copy the cell from the "Inference" section with the comment "#Complete example". You can use a single list of strides in all_strides or provide several different stride lists. By the way, the cell before "#Complete example" is not required; it just slightly speeds up the experiments.
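And here is a hedged sketch of what the adaptive inference loop around all_strides does, assuming frame_embeddings is the (n_frames, embedding_dim) array of precomputed embeddings for one test video and clf/threshold come from the training step above; the stride values and the early-exit rule are illustrative, not the notebook's exact code:

```python
import numpy as np

all_strides = [[16], [8], [4], [1]]  # from coarse (few frames) to fine (many frames)

def predict_adaptive(frame_embeddings, clf, threshold, all_strides):
    for strides in all_strides:
        # Sample frames with the given stride(s) and average their embeddings
        sampled = np.concatenate([frame_embeddings[::s] for s in strides])
        video_descriptor = sampled.mean(axis=0, keepdims=True)
        probs = clf.predict_proba(video_descriptor)[0]
        # Stop early if the classifier is confident enough at this frame rate
        if probs.max() >= threshold:
            return probs.argmax(), strides
    # Otherwise, keep the prediction obtained with the finest strides
    return probs.argmax(), strides

label, used_strides = predict_adaptive(frame_embeddings, clf, threshold, all_strides)
print(f"Predicted class {label} using strides {used_strides}")
```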
I recently used this "copy-paste" approach for the VGAF dataset. Hopefully, I will upload the code soon, but it is really similar to what I have for the AFEW dataset.