Theoretically that is possible, but the implementation may not be trivial. Keeping them partitioned is the most straightforward way. Do you have any problem with this design?
No problem with the design. I'm just thinking it would be cool if this solution could work with unmodified guest GPU drivers (i.e., full virtualization for the GPU).
I don't understand why the implementation is not trivial. I think it is a typical save/restore of the fence registers in the GPU context switch. Here is my reasoning:
When a VM takes over the physical GPU, the GPU serves only that VM with its fence registers. At that moment, the other VMs access only their virtual fence registers. This is perfectly safe because the physical GPU is not serving those VMs, so there is no need to touch their fence registers in hardware. Each VM's virtual fence registers are applied to the hardware whenever that VM gets scheduled on the physical GPU.
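For illustration, here is a minimal user-space sketch of that save/restore idea. Everything in it is hypothetical: `struct vgpu`, `vgpu_switch_fences`, and the other names are made up, and the fence register file is modeled as a plain array rather than real MMIO. This is a sketch of the proposal, not GVT-g code.

```c
#include <stdint.h>
#include <string.h>

#define NUM_FENCE_REGS 32  /* e.g., Gen6+ Intel GPUs expose 32 fence registers */

/* Stand-in for the physical fence register file (MMIO on real hardware). */
static uint64_t phys_fence[NUM_FENCE_REGS];

struct vgpu {
    int id;
    uint64_t fence_state[NUM_FENCE_REGS];  /* full per-VM shadow copy */
};

/* Called when @prev is descheduled and @next takes over the physical GPU. */
static void vgpu_switch_fences(struct vgpu *prev, struct vgpu *next)
{
    if (prev)  /* save the outgoing VM's view of all fence registers */
        memcpy(prev->fence_state, phys_fence, sizeof(phys_fence));
    if (next)  /* restore the incoming VM's full set */
        memcpy(phys_fence, next->fence_state, sizeof(phys_fence));
}

/*
 * While a VM is descheduled, its guest MMIO accesses are trapped and
 * serviced from the shadow copy, so it never touches phys_fence directly.
 */
static void vgpu_mmio_write_fence(struct vgpu *v, int reg, uint64_t val,
                                  int is_running)
{
    if (is_running)
        phys_fence[reg] = val;      /* this VM owns the hardware right now */
    else
        v->fence_state[reg] = val;  /* applied later, at context switch */
}
```

The key assumption is that fence register accesses by a descheduled VM can always be trapped and redirected to the shadow copy, so the save/restore only has to happen at the context-switch boundary.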
Currently, the fence registers are partitioned among the VMs, so a single VM sees only a subset of them (see the sketch below).
Is it possible to give a VM the full set of fence registers by saving and restoring them at vGPU context-switch time?
If it is possible, the guest driver would not need to be modified for fence registers, and a VM could also use the full set of fence registers.
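As a point of comparison, here is a rough model of the partitioned design described above, where each vGPU owns a fixed, disjoint window of the physical fence registers and guest indices are offset into that window. The struct and function names are invented for illustration and do not come from the GVT-g source.

```c
#include <assert.h>

#define NUM_FENCE_REGS 32

struct vgpu_fence_alloc {
    int base;  /* first physical fence register owned by this vGPU */
    int num;   /* size of its partition, e.g. 8 when 4 VMs share 32 */
};

/* Translate a guest-visible fence index to the physical register index. */
static int fence_g2p(const struct vgpu_fence_alloc *a, int guest_idx)
{
    /* The guest driver must be modified to stay within its partition. */
    assert(guest_idx >= 0 && guest_idx < a->num);
    return a->base + guest_idx;
}
```

Under this scheme no state moves at context switch, but each guest sees fewer fence registers and its driver has to know about the reduced count, which is exactly the modification the save/restore approach would avoid.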