
Pertinence of evaluation method/metric #2

Open
jerome-labonte-udem opened this issue May 22, 2023 · 0 comments

@jerome-labonte-udem
Hi,
I wanted to use your dataset and evaluation method to compare a similar cropping method I am working on.
I built a baseline by cropping a 1:3 and a 3:1 aspect-ratio window in the middle of each video, and was surprised that the results were better than both AutoFlip and your improved method at the 3:1 aspect ratio (on the best and mean scores, at least).
Here are the results:
baseline (center-crop)
1:3  Worst: 43.052  Best: 49.505  Mean: 45.033
3:1  Worst: 73.059  Best: 77.300  Mean: 75.409
The 1:3 results are also very close to the results reported in your first paper without the clustering part.
I suspect the interesting parts of the videos are heavily concentrated in the middle of the frames.
What do you think that says about the metric/method?
Do you know of any other dataset or method that would better demonstrate the usefulness of your method, or of any other?
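
For clarity, here is a minimal sketch of the center-crop baseline described above, assuming each frame is a NumPy array in H x W x C layout; the function name and the way the video is iterated are illustrative, not taken from the repository.

```python
import numpy as np

def center_crop(frame: np.ndarray, target_aspect: float) -> np.ndarray:
    """Crop the largest centered window with the given width/height
    aspect ratio out of an H x W x C frame."""
    h, w = frame.shape[:2]
    # Largest width and height that fit inside the frame at the target aspect ratio.
    crop_w = min(w, int(round(h * target_aspect)))
    crop_h = min(h, int(round(w / target_aspect)))
    x0 = (w - crop_w) // 2
    y0 = (h - crop_h) // 2
    return frame[y0:y0 + crop_h, x0:x0 + crop_w]

# Usage: crop every frame of a video to 1:3 and 3:1 windows,
# where `video` is any iterable of H x W x C frames.
# crops_1x3 = [center_crop(f, 1 / 3) for f in video]
# crops_3x1 = [center_crop(f, 3 / 1) for f in video]
```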
