Hello, I am using my own code and my own dataset for super-resolution.
I have a question about Table 2 in "Perceptual Losses for Real-Time Style Transfer and Super-Resolution: Supplementary Material": the architectures take a smaller image as input and generate a bigger image as output, meaning the smaller image is the low-resolution one and the bigger image is the high-resolution one.
If the architecture's input size is 72x72, how could the low- and high-resolution images used in training both be 72x72 when the output is 288x288?
This is confusing me about how to build my training sets. Thank you.
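For reference, this is how I am currently trying to build the pairs. It is only a minimal sketch of my own assumption (that the 288x288 crop is the high-resolution target and the 72x72 network input is obtained by downsampling that same crop by 4x); the helper name `make_training_pair` and the bicubic downsampling are mine, not taken from the paper:

```python
from PIL import Image

def make_training_pair(path, hr_size=288, scale=4):
    """My assumed pipeline: crop a 288x288 HR target, downsample it to the 72x72 LR input."""
    img = Image.open(path).convert("RGB")

    # Center-crop a square high-resolution patch
    # (assumes the source image is at least hr_size x hr_size).
    w, h = img.size
    left = (w - hr_size) // 2
    top = (h - hr_size) // 2
    hr = img.crop((left, top, left + hr_size, top + hr_size))

    # Downsample by the scale factor to get the low-resolution
    # network input (288 / 4 = 72).
    lr = hr.resize((hr_size // scale, hr_size // scale), Image.BICUBIC)

    return lr, hr  # lr: 72x72 input, hr: 288x288 target
```

Is this the right way to pair the low- and high-resolution images, or should both really be 72x72 somehow?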