[Feature Request] Hyper V GPU PV - investigate HV Sockets for low latency transport #663
JemmyBubbles asked this question in Sunshine
With the implementation of GPU paravirtualization (GPU-PV) in Hyper-V, it has become feasible for users to create gaming VMs that dynamically share the host's GPU resources. Access to these GPU-accelerated guests is typically provided through Sunshine/Moonlight or Parsec, which rely on network communication. While that approach is understandable for remote clients, for communication between host and guest it introduces a degree of latency.
In the context of a Linux host and a Windows VM, the Looking Glass project leverages IVSHMEM to create a shared-memory buffer between the host and the guest. This buffer is then used to deliver uncompressed frames from the Windows VM to the host with ultra-low latency.
https://looking-glass.io/
While there is no direct equivalent of IVSHMEM in Hyper-V, it seems as though Hyper-V sockets could be investigated to achieve something similar. The X410 paid software and the open-source xpra project are both able to utilise vsock to give the Hyper-V host low-latency access to the desktops and applications of distributions running under WSL2.
https://x410.dev/cookbook/wsl/using-x410-with-wsl2/
Xpra-org/xpra#3666
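As a rough illustration of the mechanism those projects rely on, here is a minimal sketch of how a Linux guest (e.g. a WSL2 distribution) opens a vsock stream to the host. The port number is hypothetical and would be agreed out of band; `AF_VSOCK` is available in Python on Linux only.

```python
import socket

# Well-known vsock context ID (CID) addressing the host
# (see linux/vm_sockets.h).
VMADDR_CID_HOST = 2

# Hypothetical service port for illustration only.
SERVICE_PORT = 5000

def connect_to_host(port: int = SERVICE_PORT) -> socket.socket:
    """Open a stream socket from the guest to a vsock listener on the host."""
    s = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    s.connect((VMADDR_CID_HOST, port))
    return s
```

A host-side service would listen on the matching port; the point is that the traffic never touches the network stack's TCP/IP path, which is what makes the approach attractive for local host-to-guest streaming.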
The Hyper-V host-to-guest equivalent is hv_sock. Could this be investigated as a possible transport between the Hyper-V host and a GPU-PV-accelerated Windows guest?
https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service
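Per the Microsoft doc linked above, a host-side hv_sock service is registered as a registry key under GuestCommunicationServices, where the low 32 bits of the service GUID encode a vsock-compatible port against the fixed VSOCK template GUID. A sketch for the hypothetical port 5000 (0x1388) might look like this; the friendly name is illustrative:

```powershell
# Register a hypothetical hv_sock service on the Hyper-V host.
# GUID = port 5000 (0x00001388) in the VSOCK template xxxxxxxx-facb-11e6-bd58-64006a7986d3.
$friendlyName = "Sunshine GPU-PV transport (example)"
$service = New-Item -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\GuestCommunicationServices" -Name "00001388-facb-11e6-bd58-64006a7986d3"
$service.SetValue("ElementName", $friendlyName)
```

Once registered, a Windows host application can listen on that service GUID via AF_HYPERV, while a Linux guest connects to the corresponding vsock port as in the snippet above.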