Replies: 1 comment
-
I’ll refer you to the docstring for the extraction method, and the README that describes the parameters passed to the generate methods. The reason it’s done this way is that you can get some FFmpeg-interpolation-based video smoothing and slow-motion effects. Say you have a 30 fps video. Extract 10 fps from it. Restyle it. Then encode with input fps=10, output fps=30, and FFmpeg will interpolate two-thirds of the frames. Or you can get a slow-motion conversion by inverting the ratio: extract all 30 fps but declare a lower input fps (say 10) at encode time, so the same frames stretch over three times the duration with interpolated frames in between. There’s a sketch of the FFmpeg calls after this reply.

When it’s not desirable to play speed-changing games, you may still want to generate a highly stylized long video where you don’t want to restyle all 30 fps of the original. This gives you the option to iterate faster by restyling fewer frames and then using FFmpeg to interpolate between them. It’s just a different look to the output.

Edit: I’ll take a look at exporting video to PNG. That’s probably my preference too; I just didn’t think to set it up that way. In the meantime you can always plug an os.system('ffmpeg -args') call into your script instead of using my canned method. The generate.restyle methods take an input that’s just a list of file paths, so PNGs should work fine.

Edit 2: Given that we’re going to completely mangle these images by running them through VQGAN+CLIP, it doesn’t seem to add much value to put a big emphasis on a high-quality image data pipeline. That’s a different philosophy than if we were editing video.
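For concreteness, here is a minimal sketch of that round trip using subprocess and FFmpeg's minterpolate filter. The file paths, directory layout, and fps values are placeholders, and step 2 stands in for whatever generate.restyle call you are actually making:

```python
import subprocess

SRC = "input_30fps.mp4"  # hypothetical 30 fps source video

# 1. Extract frames at 10 fps (one third of the source rate) as PNGs.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-vf", "fps=10",
    "frames/frame_%05d.png",
], check=True)

# 2. Restyle the extracted frames (stand-in for the VQGAN+CLIP step).
#    generate.restyle takes a plain list of file paths, so PNGs work here.

# 3. Re-encode: declare the restyled frames as 10 fps input, then use
#    motion interpolation to synthesize the missing two-thirds of the
#    frames and reach a smooth 30 fps output.
subprocess.run([
    "ffmpeg", "-framerate", "10", "-i", "restyled/frame_%05d.png",
    "-vf", "minterpolate=fps=30",
    "-pix_fmt", "yuv420p",
    "smoothed_30fps.mp4",
], check=True)
```

For the slow-motion variant, extract at the full 30 fps but still pass `-framerate 10` on the encode: the same frames then cover three times the duration and minterpolate fills in the motion.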
-
I noticed in your code that we define the source frame rate ourselves, and depending on the user they might not know whether it’s 24, 30, 60, etc. I know we can get the fps from ffmpeg with a bit of Python code.
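For example, something like this, shelling out to ffprobe (which ships alongside ffmpeg). The helper name and the example path are just suggestions, but the ffprobe flags and the avg_frame_rate field are standard:

```python
import json
import subprocess

def probe_fps(path: str) -> float:
    """Read the average frame rate of the first video stream via ffprobe."""
    out = subprocess.run([
        "ffprobe", "-v", "error",
        "-select_streams", "v:0",
        "-show_entries", "stream=avg_frame_rate",
        "-of", "json", path,
    ], capture_output=True, text=True, check=True).stdout
    # avg_frame_rate comes back as a ratio string such as "30000/1001".
    num, den = json.loads(out)["streams"][0]["avg_frame_rate"].split("/")
    return int(num) / int(den)

print(probe_fps("input_30fps.mp4"))  # e.g. 29.97
```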