In testing. ValueError: need at least one array to stack. #4

Closed
ww1e opened this issue Dec 15, 2023 · 8 comments

Comments

ww1e commented Dec 15, 2023

Hi, thank you for open-sourcing your great work!
When I try to validate on the HR-STC and UBnormal datasets, I get `ValueError: need at least one array to stack`. It could be an error caused by the dataset. What should I do? Thank you!

ww1e (Author) commented Dec 15, 2023

  File "/data/MoCoDAD-main/eval_MoCoDAD.py", line 28, in <module>
    model.test_on_saved_tensors(split_name=args.split)
  File "/data/MoCoDAD-main/models/mocodad.py", line 456, in test_on_saved_tensors
    auc_score = self.post_processing(tensors['prediction'], tensors['gt_data'], tensors['trans'],
  File "/data/MoCoDAD-main/models/mocodad.py", line 410, in post_processing
    clip_score = np.stack(error_per_person, axis=0)
  File "/home/amax/anaconda3/envs/mocodad/lib/python3.11/site-packages/numpy/core/shape_base.py", line 445, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack
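For reference, `np.stack` raises exactly this error whenever it receives an empty sequence, so the traceback only tells us that `error_per_person` ended up empty at that point. A minimal reproduction of the NumPy behaviour (illustrative only, not code from the repository):

```python
import numpy as np

error_per_person = []  # ends up empty when no per-person scores are collected

try:
    clip_score = np.stack(error_per_person, axis=0)  # same call as in post_processing
except ValueError as err:
    print(err)  # prints: need at least one array to stack
```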

ww1e changed the title from "ValueError: need at least one array to stack for testing." to "In testing. ValueError: need at least one array to stack." on Dec 15, 2023
aleflabo (Owner) commented

Hi!

Did you modify the config file for evaluation following the suggestions here?

ww1e (Author) commented Dec 15, 2023

Sure! I followed that document exactly, and it is worth mentioning that the HR-Avenue dataset can be evaluated correctly.

aleflabo (Owner) commented

Ok. That's strange.

Are you using the pretrained models or did you train them from scratch?

ww1e (Author) commented Dec 15, 2023

Both were used for the Avenue dataset, and the pretrained models for the remaining datasets could not be tested correctly. That's what puzzles me.

18764008597 commented

Have you solved your problem? I am running into the same issue.

stdrr (Collaborator) commented Jul 12, 2024

Hi! Thank you for your interest in our work!

The error you're getting is probably due to figs_ids here being empty, causing this for loop to be skipped and the list error_per_person to end up empty as well. This can happen when there is a mismatch between the loaded GT masks and the data being evaluated.

Could you double-check the paths held by all_gts and whether gt here is properly loaded? Also, could you please show the contents of scene_idx and clip_idx here before the error happens?

Stefano

P.S.
I see that the issue reported here originates from test_on_saved_tensors here. This function should be called only after the same evaluation has already been run at least once and the model's output has been saved to disk, which is not the case if you're performing a validation epoch after a training epoch. Can you confirm that the error doesn't originate from here and that load_tensors is set to false in the config file (here and here)?
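For readers hitting the same error, the failure mode described above boils down to the pattern below. This is a simplified sketch, not the actual MoCoDAD implementation; figs_ids, error_per_person, scene_idx, and clip_idx are the names referenced in the comment, and the helper itself is hypothetical:

```python
import numpy as np

def post_processing_sketch(figs_ids, per_fig_errors, scene_idx, clip_idx):
    # Debug print suggested above: inspect which scene/clip is being processed
    # right before the stack call fails.
    print(f"scene_idx={scene_idx}, clip_idx={clip_idx}, n_figs={len(figs_ids)}")

    error_per_person = []
    for fig_id in figs_ids:  # if figs_ids is empty, the loop body never runs...
        error_per_person.append(per_fig_errors[fig_id])

    # ...and stacking an empty list raises "need at least one array to stack".
    return np.stack(error_per_person, axis=0)
```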

stdrr reopened this on Jul 12, 2024
18764008597 commented

Thank you very much for your reply. The causes you mentioned could also lead to this problem, but in my case the cause was that use_hr in the config file had not been changed to true.
Thanks again for your answers!
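For anyone landing here with the same symptom, the fix amounts to enabling the HR flag in the evaluation config before running the test. A quick sanity check could look like the snippet below; the key name use_hr follows the comment above, but its exact spelling and nesting in your config file may differ, and the path is only a placeholder:

```python
import yaml

# Path is illustrative; point it at the evaluation config you pass to the script.
with open("path/to/test_config.yaml") as f:
    cfg = yaml.safe_load(f)

# The HR ground-truth masks are only used when this flag is enabled.
assert cfg.get("use_hr") is True, "set use_hr: true to evaluate on the HR split"
```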
