How to train a Restore model? #180
Replies: 3 comments
-
Hi, I don't have the code right now, but if I remember correctly I used random degradations to simulate video artifacts. I also used a small degradation model trained on pairs of high-resolution and degraded images (but this is optional and can be omitted, since its effect is not great for networks as small as those in Anime4K). Applying one degradation works fine, but you also need to occasionally give the network a combination of degradations, e.g. (JPEG -> 0.7x Bicubic Downscale -> JPEG -> 1.43x Bilinear Upscale -> Ringing). These parameters are really hard to tweak because they are mostly perceptual and depend on what effect you want the model to have. If you set the degradations too high, the network will tend to "fix" intentional artistic degradations, which doesn't look great. I'm soon going to work on this project again (after a 1-year hiatus due to my thesis), and I've got some ideas that might fix this issue.
-
Hello! I'm also trying to reproduce a Restore model. Would you like to share your experience?
-
I need to train the Upscale models, not Restore. Is there really no script for training those?
-
Hi, I'm trying to reproduce the Restore model using Train_Model.ipynb and the SYNLA Plus dataset as a starting point (thanks for sharing these openly, by the way). I was wondering if I could get some help or tips on how to do it correctly.
The problem is that the defaults don't work for training a Restore model out of the box, so over the past few days I've been tweaking them and experimenting with how the inputs are processed. I've gotten somewhat close to the "soft" version of the released shader, but it's not quite there yet: I can't achieve the sharp, smooth lines of the non-soft shader, and my models usually introduce a slight chromatic shift in the image.
Pictures are in the order: unprocessed, my result, Anime4K Restore M Soft, Anime4K Restore M (regular)
Currently, to generate degraded images for training, I downscale and then upscale them back, with the downscale factor chosen randomly between 0.5 and 0.8334; the upscale method is bicubic 50% of the time and 'area' the other 50%. Then I apply chroma degradation and JPEG noise; those last two seem to help against noise in the result. I can share the function code or the whole notebook if that makes it easier.