KeyError: 'fpbpn' when training with PointNet as the image feature extractor #3
Also, I saw that the room mask is mapped into a 64-dimensional vector embedding. To me, saving the room boundary polygon coordinates directly as an \R^{64} vector would be much simpler and more straightforward.
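For illustration, the commenter's idea of storing the boundary polygon directly as an \R^{64} vector could be sketched as below. The function name, the choice of 32 resampled points, and arc-length resampling are my assumptions for this sketch, not code from the repository:

```python
import numpy as np

def polygon_to_vector(polygon, n_points=32):
    """Resample a closed 2D boundary polygon to a fixed number of
    points and flatten it into one feature vector (here R^64)."""
    polygon = np.asarray(polygon, dtype=np.float64)
    # Close the loop so the last segment returns to the start.
    closed = np.vstack([polygon, polygon[:1]])
    # Cumulative arc length along the boundary.
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    # Sample n_points positions evenly spaced in arc length.
    samples = np.linspace(0.0, t[-1], n_points, endpoint=False)
    x = np.interp(samples, t, closed[:, 0])
    y = np.interp(samples, t, closed[:, 1])
    # Interleave as (x0, y0, x1, y1, ...) -> shape (2 * n_points,)
    return np.stack([x, y], axis=1).reshape(-1)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
vec = polygon_to_vector(square)
print(vec.shape)  # (64,)
```

One caveat with this representation: it assumes a single, simply connected boundary polygon, whereas a mask image (and a point set fed to PointNet) can represent holes and multiple components.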
Have you solved this problem? I am running into the same issue.
Well, I suggest you just use ResNet18.
Most likely you are missing the step to convert the room mask image to […]. This script first computes […]
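The maintainer's reply above is truncated, but the gist is that PointNet needs a point-set input derived from the room mask, not the mask image itself. As a hedged sketch only, here is one plausible way such a conversion could look, assuming 'fpbpn' stands for boundary points plus normals of the floor plan; the function name, sampling scheme, and output layout are all my guesses, not the repository's preprocessing script:

```python
import numpy as np

def mask_to_boundary_points_normals(mask, n_points=256):
    """Sample boundary pixels of a binary room mask and attach
    approximate outward normals: output shape (n_points, 4) with
    rows (x, y, nx, ny)."""
    mask = np.asarray(mask, dtype=np.float64)
    padded = np.pad(mask, 1)
    # A boundary pixel is inside the mask but touches the outside.
    inside = padded == 1
    touches_outside = (
        (np.roll(padded, 1, 0) == 0) | (np.roll(padded, -1, 0) == 0) |
        (np.roll(padded, 1, 1) == 0) | (np.roll(padded, -1, 1) == 0)
    )
    boundary = inside & touches_outside
    ys, xs = np.nonzero(boundary[1:-1, 1:-1])
    # Approximate outward normals from the negative mask gradient
    # (the mask increases going inward, so -gradient points outward).
    gy, gx = np.gradient(mask)
    nx, ny = -gx[ys, xs], -gy[ys, xs]
    norm = np.hypot(nx, ny)
    norm[norm == 0] = 1.0  # guard against flat spots
    nx, ny = nx / norm, ny / norm
    pts = np.stack([xs, ys, nx, ny], axis=1)
    # Subsample (with replacement if too few) to a fixed point count.
    rng = np.random.default_rng(0)
    idx = rng.choice(len(pts), n_points, replace=len(pts) < n_points)
    return pts[idx]

mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1  # a square room
pts = mask_to_boundary_points_normals(mask, n_points=64)
print(pts.shape)  # (64, 4)
```

Whatever the real preprocessing script computes, the resulting array would then need to be stored in the dataset under the 'fpbpn' key so the PointNet branch can find it.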
I switched from ResNet18 to PointNet, since your paper says that PointNet better captures the floor boundary. Besides, as presented in the paper, DDPM+PointNet has a lower KL-divergence than DDPM+ResNet, which indicates that PointNet might help in fitting the underlying probability distribution. I was curious how much PointNet helps in MiDiffusion, so I simply switched to PointNet. Unfortunately, KeyError: 'fpbpn' occurred at line 47 of networks\diffusion_scene_layout_mixed.py:
room_feature = sample_params["fpbpn"]
May I ask whether I missed any procedure to preprocess the data so as to train with PointNet as the image feature extractor?
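The bare KeyError here simply means the dataset samples were never given an 'fpbpn' entry; only the mask-image input that ResNet18 consumes is present. A minimal defensive sketch of the accessor, assuming hypothetical extractor names and a hypothetical 'room_layout' key for the mask image (only 'fpbpn' is confirmed by the traceback), could fail with a clearer message:

```python
def get_room_feature(sample_params, feature_extractor_type):
    """Fetch the floor-plan feature for the chosen extractor,
    raising a descriptive error instead of a bare KeyError when
    the PointNet preprocessing step was skipped."""
    if feature_extractor_type == "pointnet":
        if "fpbpn" not in sample_params:
            raise KeyError(
                "sample_params is missing 'fpbpn'. PointNet consumes "
                "precomputed floor-boundary point data, which must be "
                "generated in a separate preprocessing step; ResNet18 "
                "reads the room mask image directly."
            )
        return sample_params["fpbpn"]
    # Hypothetical key for the mask-image path used by ResNet18.
    return sample_params["room_layout"]

params = {"room_layout": "mask-image"}
# The ResNet18 path works without the PointNet preprocessing step.
print(get_room_feature(params, "resnet18"))  # prints "mask-image"
```

This does not fix the underlying issue (the missing preprocessing), but it turns the crash into an actionable message.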