[enhancement] Pix2pix #17
Thank you for your attention. We have recently implemented an enhanced version of pix2pix that is consistent with the paper. We will clean it up and let you know once it has been uploaded in the next couple of days.
Thanks in advance.
There will be a test.py script for inference that shows the result images. If what you want is the FCN-score evaluation script, we are still writing it.
Hi, you can try this: https://github.com/zhouwy19/XNN-Project/tree/main/pix2pix.
I ran your pix2pix code for 200 epochs, but I got many bad results, shown below.
The generated image is in the middle, the ground truth is on the left, and the conditional image is on the right.
I think the reason for the bad results is that the patch size is not set correctly. The meaning of the patch can be found in the discriminator section of the original paper.
However, I couldn't find the patch setting in your code.
Would you please tell me where I can set the patch size?
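For context on what "patch" means here: in pix2pix the patch size is not usually an explicit hyperparameter; it is the receptive field of each output unit of the PatchGAN discriminator, which is controlled by the number of downsampling conv layers. Below is a minimal sketch of a 70x70 PatchGAN discriminator in PyTorch. The class name PatchDiscriminator and the parameter n_layers are illustrative assumptions, not names taken from the linked repository's code.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Sketch of a PatchGAN discriminator as described in the pix2pix paper.

    The "patch size" is the receptive field of each output unit, which is
    determined by the number of stride-2 conv layers (n_layers), not by an
    explicit parameter. n_layers=3 gives the 70x70 patch used in the paper;
    fewer layers give smaller patches.
    """

    def __init__(self, in_channels=6, base_filters=64, n_layers=3):
        super().__init__()
        layers = [
            nn.Conv2d(in_channels, base_filters, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        ]
        nf = base_filters
        # Stride-2 conv blocks: each one roughly doubles the receptive field.
        for i in range(1, n_layers):
            prev, nf = nf, min(base_filters * 2 ** i, 512)
            layers += [
                nn.Conv2d(prev, nf, kernel_size=4, stride=2, padding=1, bias=False),
                nn.BatchNorm2d(nf),
                nn.LeakyReLU(0.2, inplace=True),
            ]
        prev, nf = nf, min(base_filters * 2 ** n_layers, 512)
        layers += [
            nn.Conv2d(prev, nf, kernel_size=4, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(nf),
            nn.LeakyReLU(0.2, inplace=True),
            # 1-channel output map: each value classifies one NxN input patch.
            nn.Conv2d(nf, 1, kernel_size=4, stride=1, padding=1),
        ]
        self.model = nn.Sequential(*layers)

    def forward(self, conditional_image, target_image):
        # The conditional discriminator sees the input and the (real or
        # generated) target concatenated along the channel dimension.
        x = torch.cat([conditional_image, target_image], dim=1)
        return self.model(x)


if __name__ == "__main__":
    d = PatchDiscriminator(in_channels=6, n_layers=3)  # ~70x70 receptive field
    cond = torch.randn(1, 3, 256, 256)
    fake = torch.randn(1, 3, 256, 256)
    print(d(cond, fake).shape)  # torch.Size([1, 1, 30, 30]): 30x30 patch predictions
```

So in most implementations the place to change the effective patch size is the depth of the discriminator (the n_layers argument in this sketch), rather than a standalone "patch size" option.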