Is there any possibility, if I hook the D3D API in a VM guest and forward the calls to another Windows host that has a physical GPU, to copy the result data back to the guest once the host finishes the render task? The guest and host run the same Windows version and the same D3D version, so it seems like it could work.
Thanks for your time.
Sorry for the late reply.
In theory yes: you could serialize the D3D calls, send them to another machine, and run the commands there. This is called API forwarding. The problem is the "copy result data to guest" part:

- serializing D3D API calls to run them as-is on another device is not efficient (latency, verbosity);
- "copy result data to guest" is not trivial.
- If we talk about compute jobs, yes: you could copy memory back by marking all memory areas as non-mappable/non-coherent (see the Vulkan compute proof of concept, which is essentially API forwarding: https://github.com/Keenuts/vulkan-virgl).
- If you decide to allow memory mappings, how do you keep them in sync? You'd need additional support to sync your dirty pages across the Windows guest running on a Linux host and the remote Windows machine.
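As a rough illustration of the dirty-page problem, one naive approach is to hash each page of a mapped region and only transfer pages whose hash changed. This is a toy sketch (the page size and hashing scheme are arbitrary; real implementations track writes through MMU write-protection rather than rescanning memory):

```python
import hashlib

PAGE_SIZE = 4096  # typical x86 page size

def dirty_pages(memory, baseline_hashes):
    """Return (dirty page indices, updated hashes) for a mapped region."""
    dirty, hashes = [], []
    for offset in range(0, len(memory), PAGE_SIZE):
        h = hashlib.sha256(memory[offset:offset + PAGE_SIZE]).digest()
        page = offset // PAGE_SIZE
        hashes.append(h)
        if page >= len(baseline_hashes) or baseline_hashes[page] != h:
            dirty.append(page)  # only these pages need to go over the wire
    return dirty, hashes

mem = bytearray(4 * PAGE_SIZE)
_, base = dirty_pages(mem, [])       # initial snapshot: everything is "dirty"
mem[PAGE_SIZE + 10] = 0xFF           # guest writes into page 1
changed, _ = dirty_pages(mem, base)  # only page 1 is reported dirty
```

Note that even this only solves one direction; keeping mappings coherent between two machines that can both write is the part that gets genuinely hard.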
- If we talk about graphics jobs, it becomes much harder:
  - How do you "present" a framebuffer rendered on another host inside your VM?
  - How do you read said framebuffer? (Not all framebuffers can be read; in the case of DRM-protected content, for example, you are not allowed to do some operations without trashing the whole data.)
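Even setting protected content aside, raw framebuffer readback and transfer is expensive. A back-of-the-envelope estimate for an uncompressed 1080p stream at 60 FPS (assuming 4 bytes per pixel, RGBA8):

```python
width, height = 1920, 1080
bytes_per_pixel = 4   # RGBA8
fps = 60

frame_bytes = width * height * bytes_per_pixel
stream_bytes_per_s = frame_bytes * fps

print(f"{frame_bytes / 2**20:.1f} MiB per frame")          # ~7.9 MiB
print(f"{stream_bytes_per_s / 2**20:.0f} MiB/s sustained")  # ~475 MiB/s
```

Sustaining hundreds of MiB/s between the render host and the guest, every frame, on top of the per-call forwarding latency, is the core reason "copy result data to guest" is not trivial for graphics workloads.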
Hi,
It's me again; I read your article:
https://www.studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html