
Can we evaluate the model performance on the CelebA dataset? #108

Open
Jiagengzhu opened this issue Jan 4, 2022 · 0 comments
Jiagengzhu commented Jan 4, 2022

Hi, thanks for publishing this very convenient tool for running experiments on disentanglement learning.

When training the model, I ran into the following problem:

I pass --evaluate_metric mig sap_score irs factor_vae_metric dci while training a BetaVAE on the CelebA dataset.

However, I get

```
anaconda3/lib/python3.7/site-packages/disentanglement_lib/data/ground_truth/named_data.py", line 65, in get_named_ground_truth_data
    raise ValueError("Invalid data set name.")
ValueError: Invalid data set name.
  In call to configurable 'dataset' (<function get_named_ground_truth_data at 0x7f8eed2a13b0>)
  In call to configurable 'evaluation' (<function evaluate at 0x7f8e6e5558c0>)
```

I checked the named_data.py file and found that 'celebA' is not in the named dataset list (which contains dSprites, 3dshapes, mpi3d, car3d, smallnorb).

Is there any way to evaluate a model trained on 'celebA' with the disentanglement metrics?
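For what it's worth, the metrics all consume a dataset object exposing ground-truth factors (get_named_ground_truth_data just maps a name to such an object), so one possible workaround is to register your own dataset class that mirrors that interface. Below is a minimal, self-contained sketch of such a class; the class name, the choice of CelebA attributes as "factors", and the random placeholder images are all my own assumptions, not part of disentanglement_lib. Note that CelebA has no exhaustive ground-truth factor grid, so metrics that need one would at best be approximated this way.

```python
import numpy as np

class PseudoCelebA:
    """Hypothetical dataset treating a few CelebA-style binary
    attributes as ground-truth 'factors'. Mirrors the interface the
    metrics expect (num_factors, factors_num_values, sample_factors,
    sample_observations_from_factors); it is NOT a drop-in
    disentanglement_lib class.
    """

    def __init__(self, seed=0):
        self.rng = np.random.RandomState(seed)
        # Hypothetical choice of attributes to use as binary factors.
        self.factor_names = ["Smiling", "Male", "Eyeglasses"]

    @property
    def num_factors(self):
        return len(self.factor_names)

    @property
    def factors_num_values(self):
        # Each attribute is binary (0/1).
        return [2] * self.num_factors

    def sample_factors(self, num):
        # Sample `num` random factor configurations.
        return self.rng.randint(0, 2, size=(num, self.num_factors))

    def sample_observations_from_factors(self, factors):
        # Placeholder: random 64x64x3 "images". A real version would
        # look up CelebA images matching each attribute combination.
        return self.rng.rand(factors.shape[0], 64, 64, 3)

data = PseudoCelebA()
factors = data.sample_factors(5)
observations = data.sample_observations_from_factors(factors)
print(factors.shape, observations.shape)  # (5, 3) (5, 64, 64, 3)
```

You would then have get_named_ground_truth_data (or your own gin binding) return an instance of such a class for a new dataset name, so the evaluation pipeline can call it the same way it calls the built-in datasets.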

Thanks
