[Feature request] NVENC support #116
Do you plan to add encoding support via NVENC?

Comments
Originally I had planned to look into it, but I'm not so sure now. I don't know if hardware video encoding is useful for the very few programs that can actually make use of this library.
Sway/NVIDIA user here. This driver is the only working hardware-acceleration solution for me. As a use case, Sunshine comes to mind. It would be amazing to have NVENC.
Since Firefox can use the driver, would supporting encoding also open up hardware-accelerated encoding for screen sharing or sharing webcam video in conferencing solutions such as Jitsi / Microsoft Teams / Google Meet / ...?
It would also be useful for Virgl video encoding: https://www.phoronix.com/news/Virgl-Encode-H264-H265
Any news on this? I'll pitch in $50.
I did take a quick look at it. NVENC looks pretty straightforward to use, but I struggled to understand how to use VA-API to encode. From what I can tell, the mapping between the two isn't as straightforward as it is on the decoding side.
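For anyone curious where the mismatch lies, here is a minimal C sketch of the two sides: the VA-API call sequence is what a client such as ffmpeg's h264_vaapi encoder sends a driver, and the NVENC session setup is what the driver would then have to issue. The type and function names come from the public headers (va/va.h, nvEncodeAPI.h), but the mapping notes are one reading of the problem, not the driver's actual design.

```c
/* Sketch only: contrasts the VA-API encode entrypoint a driver receives
 * with the NVENC session it would have to drive. Assumes H.264 and a
 * CUDA device context; error handling trimmed. */
#include <string.h>
#include <va/va.h>          /* VAEntrypointEncSlice, VAEnc*ParameterBufferType */
#include <nvEncodeAPI.h>    /* NvEncodeAPICreateInstance, NV_ENC_* */

/*
 * What a VA-API client (e.g. ffmpeg's h264_vaapi) sends the driver,
 * per session and then per frame:
 *
 *   vaCreateConfig(dpy, VAProfileH264Main, VAEntrypointEncSlice, ...);
 *   vaCreateBuffer(..., VAEncSequenceParameterBufferType, ...);  // raw SPS fields
 *   vaCreateBuffer(..., VAEncPictureParameterBufferType,  ...);  // raw PPS + DPB state
 *   vaCreateBuffer(..., VAEncSliceParameterBufferType,    ...);  // per-slice control
 *   vaBeginPicture(...); vaRenderPicture(...); vaEndPicture(...);
 *
 * NVENC works the other way around: it writes the SPS/PPS itself from a
 * high-level NV_ENC_CONFIG, so the driver would have to reverse-map
 * low-level header fields onto presets and rate-control knobs. Decoding
 * was close to a 1:1 translation of picture parameters; encoding is not.
 */
static NVENCSTATUS open_nvenc_session(void *cuda_ctx,
                                      NV_ENCODE_API_FUNCTION_LIST *fn,
                                      void **session)
{
    NVENCSTATUS st;

    memset(fn, 0, sizeof(*fn));
    fn->version = NV_ENCODE_API_FUNCTION_LIST_VER;
    st = NvEncodeAPICreateInstance(fn);   /* fills the function table */
    if (st != NV_ENC_SUCCESS)
        return st;

    NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS open = {
        .version    = NV_ENC_OPEN_ENCODE_SESSION_EX_PARAMS_VER,
        .deviceType = NV_ENC_DEVICE_TYPE_CUDA,
        .device     = cuda_ctx,
        .apiVersion = NVENCAPI_VERSION,
    };
    return fn->nvEncOpenEncodeSessionEx(&open, session);
}
```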
I also have a use for this. I run hardware encoding on several different machines, some with NVIDIA GPUs, others with Intel, and others again with AMD, and having VA-API support on NVIDIA would drastically simplify building the required ffmpeg commands.
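To make that concrete, this is roughly what the per-vendor divergence looks like today; the commands are illustrative, not complete pipelines:

```sh
# Today: the encoder and device plumbing differ per vendor
ffmpeg -i in.mkv -c:v h264_nvenc out.mkv                      # NVIDIA
ffmpeg -vaapi_device /dev/dri/renderD128 -i in.mkv \
       -vf 'format=nv12,hwupload' -c:v h264_vaapi out.mkv     # Intel / AMD

# With VA-API encode on NVIDIA, the second form (presumably with
# LIBVA_DRIVER_NAME=nvidia set) would cover all three vendors.
```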
Waypipe could also benefit from this.
I've been running a media server on my k8s cluster, and configuring the current non-free NVIDIA drivers to work with containers/k8s was a nightmare just to get basic hardware encoding working. It requires installing a custom container runtime (nvidia-container-toolkit), re-configuring (and in my case re-installing) Kubernetes, and installing extra operators in the cluster that take up compute resources and aren't documented that well.

With VA-API GPUs, though, it's as simple as binding your workload to the appropriate /dev/dri path on the host, just like binding a directory, and voilà, your GPU is exposed to the container. This is all to say: having full decode and encode support would be HUGE for those utilizing containers. I can't support development financially at this time, but I'm happy to help out in any way I can (seriously, this driver would've saved me days of tinkering, though I may just be bad at managing Linux kernel stuff 😅).
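For comparison, this is roughly the difference being described; the image name and render-node path are placeholders:

```sh
# VA-API GPU (Intel/AMD) today: pass the render node through like a file
docker run --device /dev/dri/renderD128 jellyfin/jellyfin

# NVIDIA today: custom runtime plus capability flags (the 'video'
# capability is what enables NVENC/NVDEC inside the container)
docker run --runtime nvidia \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e NVIDIA_DRIVER_CAPABILITIES=video,utility \
    jellyfin/jellyfin
```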
FWIW, the way forward with containers is CDI (standardised with the OCI, so CRI-agnostic), but presently only NVIDIA has official support for this (there is a third-party generator for AMD, IIRC). With Docker prior to CDI, effectively only NVIDIA was supported officially, and only the cuda (compute) and nvidia-smi (utilities) capabilities were enabled by default; video encoding required enabling an extra capability, and thus more verbose config, similar to what you'd do for AMD.

Anyway, with CDI the host system needs a config that defines the hardware, with mappings of what to mount and do. It is vendor-agnostic. NixOS has a third-party/community-managed package for AMD and maybe Intel, IIRC, and k8s AFAIK has an equivalent via some plugins (I don't use k8s, so I can't assist much there).

I'm not sure how long broader adoption of CDI by GPU vendors will take. I don't think it requires the custom NVIDIA runtime, but if it does, I'm pretty sure that's not intended to stay that way. It does currently require a host package (nvidia-ctk) to generate the CDI config, however.
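A rough sketch of the CDI flow on a single host; the nvidia-ctk subcommand is the documented one, while the podman invocation assumes a CDI-aware engine and may need extra flags (e.g. for SELinux) on some systems:

```sh
# Host side: generate a CDI spec describing the NVIDIA devices
nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Runtime side: any CDI-aware engine can then request devices by name,
# with no vendor-specific runtime needed
podman run --device nvidia.com/gpu=all ubuntu nvidia-smi -L
```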
I'd personally send elFarto 200 bucks if NVENC support could be worked out.