stylizeVideo_deepflow.sh produces video which is not stylized #7
Comments
In the past I often had issues with videos that have a special color format (10 bit etc.). This software and its libraries only work with normal, consumer-ready videos. You could check different videos from different sources.
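One quick way to check for this is `ffprobe` (ships with ffmpeg). A minimal sketch: the ffprobe flags are standard, but the name-based bit-depth classifier below is my own heuristic and only covers common pixel format names.

```shell
# Print the pixel format of the first video stream, e.g. "yuv420p" or "yuv420p10le".
# (Needs ffprobe and a real input file.)
pix_fmt_of() {
  ffprobe -v error -select_streams v:0 \
    -show_entries stream=pix_fmt -of csv=p=0 "$1"
}

# Rough classifier: format names ending in 10/12/14/16 le/be are high bit depth.
# This naming heuristic is an assumption; see `ffmpeg -pix_fmts` for the full list.
is_high_bit_depth() {
  case "$1" in
    *10le|*10be|*12le|*12be|*14le|*14be|*16le|*16be) echo yes ;;
    *) echo no ;;
  esac
}
```

Usage: `is_high_bit_depth "$(pix_fmt_of input.mp4)"` prints `yes` for a 10-bit source.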
Unfortunately, I tried a different video format (from an animation movie) but still get the same single-colored output files. About DeepFlow: are you really sure? Because I did get one error message from the DeepFlow static file:
If this issue were caused by DeepFlow, at least the first frame would have been stylized correctly. The first frame is generated without any dependency on a previous frame or optical flow; in fact, for the first frame the algorithm is identical to fast-neural-style.
For what it's worth, we've had the same issue as @agilebean. We tried many different scenarios.
@manuelruder, it would be great if you could include a simple video in this repository that should work, as a sanity check.
Having the same problem.
I too am seeing this kind of result. I'm using ffmpeg, DeepFlow and half resolution. I would also like a known-good input video, and parameters to run with it, to test that everything is behaving.
To further analyse, you could run fast-neural-style on the extracted video frames. If there is a case where fast-neural-style produces a correct stylization but mine fails, let me know. As an example you could take the five video frames from here. Then run
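The exact command was not captured above, but an invocation along these lines could run fast-neural-style over a handful of extracted frames. This is a sketch: the model path and frame names are assumptions based on the jcjohnson/fast-neural-style CLI, and the helper only prints the command so it can be inspected before running.

```shell
# Assemble a fast-neural-style invocation for one frame.
# The model path is an assumption; substitute whichever .t7 model you use.
fns_cmd() {
  echo "th fast_neural_style.lua -model models/instance_norm/candy.t7 -input_image $1 -output_image $2"
}

# Print the command for each of the five example frames; pipe to `sh` to run them.
for i in 1 2 3 4 5; do
  f=$(printf 'frame_%05d.ppm' "$i")
  fns_cmd "$f" "stylized_$f"
done
```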
@manuelruder Thanks for the example frames. These do not work on my installation. I see results which look very similar to the frames in the first post. Can you suggest any steps to work out what's wrong?
Yes, see my post above...
P.S. I've seen that a lot of people are reporting similar issues for fast-neural-style, see for example here. There it was suggested that a recent torch update (or a package) caused this issue. Unfortunately, there are no official releases or even a simple changelog; instead, if you install torch you'll get whatever the current master is at that time. Therefore I have no idea what I would need to change in order to fix this. (I'm not actively using torch anymore; like probably most other people, I switched to a more recent framework.)
What framework are you using instead? What would it take to port the torch elements to the new framework?
I'm currently using PyTorch; it's more actively developed, although there are also breaking changes from time to time. But at least it has proper versioning. There is existing code for fast-neural-style in PyTorch; one could use that as a base.
I seem to have got this working. I used a clean install of Ubuntu 14.04 and CUDA 7.5. Aside from following the steps in README.md, I did the following:
I'm using the static binaries of DeepFlow and deepmatching. For reference, my previous attempt on a more powerful machine used Ubuntu 16.04, CUDA 9.2 and cudnn 5.0. I had run torch/update.sh on that machine and still got poor results, similar to those in the first post on this issue. I tested this code on the few frames Manuel suggested just above my last post, and I get properly stylized results.
Has anyone tried to set this up on AWS? I have been trying for a few days and can't get an instance up and working. I get the same problems and results as @agilebean. I have gone through all the troubleshooting other people have done here and on https://github.com/jcjohnson/fast-neural-style/issues. I am using Ubuntu 14.04, CUDA 7.5 and cudnn 5.0, and I ran bash update.sh in the torch directory like @AndrewGibb said, and that still gave me erroneous results. If you could share a working AMI, that would also be appreciated.
I am also seeing this issue. I attempted building on Ubuntu 16.04 with various versions of CUDA, then downgraded to CUDA 7.5, which forced me onto Ubuntu 14.04. In the end this may have been the wrong path, because I initially got the exact same results regardless of lib versions. Running through the debugging steps mentioned above, I found that I could get it to work with some elbow grease. The issue appears to be related to how ffmpeg handles the video > ppm conversion. I manually split the frames into pngs, then tested using an input like %05d.png, and it produces stylized frames (although the output is a single png). After sending the frames back through ffmpeg (png > mp4), I get something that works. This is a little odd, because the png > ppm conversion works but the mp4 > ppm one does not. I wonder if there's some missing build flag in my ffmpeg version. For reference, I'm using the following lib versions:
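The workaround described above could look roughly like this. The directory names and frame rate are assumptions; the helpers only assemble the ffmpeg command lines (pipe them to `sh`, or paste them, to actually run the conversion).

```shell
# Build the command that decodes a video to 8-bit PNG frames.
# "frames/" is an assumed output directory; create it first.
split_cmd() { echo "ffmpeg -i $1 frames/%05d.png"; }

# Build the command that re-encodes stylized PNG frames to mp4.
# The 25 fps rate is an assumption; yuv420p keeps the output widely playable.
join_cmd() { echo "ffmpeg -framerate 25 -i stylized/%05d.png -pix_fmt yuv420p $1"; }

split_cmd input.mp4
# ... stylize the PNG frames with the repository's scripts in between ...
join_cmd output.mp4
```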
Here's the ffmpeg build info, in case it helps track down the issue
I have a dockerfile that I can publish once I have time to clean it up a bit.
This is what I observed with some videos having a higher bit depth (see my first post). Converting to png and then to ppm could reduce the bit depth to 8 bit, and this could be the reason why it worked for you. However, people also reported this issue with fast-neural-style, where they found that instance norm was not compatible with a specific CUDA, cudnn or torch version, and they didn't use ffmpeg. Also note that AndrewGibb reported that the example images I provide didn't work for him. I think we have multiple distinct issues here.
I'd love to see that dockerfile... on my initial attempt I could not build flownet using
It's a work in progress, but here it is: https://github.com/positlabs/fast-artistic-videos-docker I was able to get stylized videos from it last Friday, but I tried again today and it failed. I'll keep working on it.
Even though I'd like to use the docker approach, I can report that I was previously getting garbled images like the OP but can now get useful output (at least for the first 100 frames; somehow it stopped producing output images after that...) using CUDA 9.2 and cudnn 7. What I did was to set
My docker build is working now. The trick was to run torch's update.sh script AFTER all of the other dependencies were installed.
I see this issue with flownet, but not with deepflow.
I have found that by re-exporting my videos in .mov format with the PNG codec and 8-bit depth, I completely got rid of the issue.
# produce mov file with 8-bit depth
# produces the ppm frames from the video
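The two commands behind those comments were not captured in the thread, but a hypothetical reconstruction might look like this. The codec and pixel-format flags are assumptions; the helpers just print the command lines so they can be reviewed before running.

```shell
# Re-encode with the 8-bit PNG codec inside a .mov container.
# (-c:v png and -pix_fmt rgb24 force an 8-bit RGB intermediate; assumed flags.)
to_8bit_mov() { echo "ffmpeg -i $1 -c:v png -pix_fmt rgb24 $2"; }

# Extract PPM frames from the re-encoded video.
to_ppm() { echo "ffmpeg -i $1 frame_%05d.ppm"; }

to_8bit_mov input.mp4 output.mov
to_ppm output.mov
```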
I can confirm that this solution works! I had the same issue with CUDA 8, cuDNN 7.1 and Ubuntu 16.04. Then I set up everything from scratch with CUDA 9.2 and cuDNN 7.6 and still had the same poor results until I updated torch as bafonso advised. I also had to install 'cuDNN Library for Linux' in addition to 'cuDNN Runtime Library for Ubuntu16.04 (Deb)' and 'cuDNN Developer Library for Ubuntu16.04 (Deb)', because I couldn't find /usr/local/cuda/lib64/libcudnn.so.7. Now I get the proper results.
stylizeVideo_deepflow.sh runs smoothly without any error message, i.e. it produces the out*.png files and the .mp4 file.
However, the resulting video is not stylized and is mainly a single color (here red), while the original video showed people. What is wrong? Is it due to DeepFlow?
Here are the first three frames for an impression:
Here's the stack: