Basalt plugins have a pretty good pattern for MFR rendering on top of Vulkan, but I want Vulkanator to ultimately use something better.
The easiest way to implement MFR is to give each SmartRender context its own `CommandPool`, `DescriptorPool`, `DescriptorSet`, and `DeviceMemory` for the input, output, and uniforms. This is likely what "version 1" will have.
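A minimal sketch of what that per-context bundle could look like (the struct and field names are illustrative, not actual Vulkanator code):

```cpp
#include <vulkan/vulkan.h>

// "Version 1" layout: every SmartRender context owns a full set of GPU
// objects, so no cross-frame synchronization is needed, at the cost of
// redundant allocations per render.
struct RenderContextResources
{
	VkCommandPool    CommandPool    = VK_NULL_HANDLE;
	VkDescriptorPool DescriptorPool = VK_NULL_HANDLE;
	VkDescriptorSet  DescriptorSet  = VK_NULL_HANDLE;

	// Dedicated allocations for the input, output, and uniform data
	VkDeviceMemory InputMemory   = VK_NULL_HANDLE;
	VkDeviceMemory OutputMemory  = VK_NULL_HANDLE;
	VkDeviceMemory UniformMemory = VK_NULL_HANDLE;
};
```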
The improved pattern will use a pool of each resource type, recycled and protected by a global `VK_KHR_timeline_semaphore` (core in Vulkan 1.2) that indicates whether a particular entry in each pool is available for re-use, allocating a new element when all entries are in use. This lets resource allocation scale with the amount of data that can be processed and acts as a sort of swapchain of GPU resources, growing only as fast as the system can release them. A faster processor and GPU will end up with wide resource pools; a slower, more serial processor and GPU will end up with narrower ones.
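A hedged sketch of how that acquisition step might work, assuming one global timeline semaphore and a per-slot `ReadyValue` recording the timeline value the GPU will signal once it is done with that slot (the `ResourceSlot`, `ResourcePool`, `AcquireSlot`, and `CreateSlot` names are hypothetical, not Vulkanator's actual API):

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

struct ResourceSlot
{
	VkCommandPool    CommandPool    = VK_NULL_HANDLE;
	VkDescriptorPool DescriptorPool = VK_NULL_HANDLE;
	VkDeviceMemory   Memory         = VK_NULL_HANDLE;
	// Timeline value that will be signaled once the GPU has drained this slot
	std::uint64_t    ReadyValue     = 0;
};

struct ResourcePool
{
	VkDevice                  Device   = VK_NULL_HANDLE;
	VkSemaphore               Timeline = VK_NULL_HANDLE; // VK_SEMAPHORE_TYPE_TIMELINE
	std::vector<ResourceSlot> Slots;

	ResourceSlot& AcquireSlot()
	{
		std::uint64_t Completed = 0;
		vkGetSemaphoreCounterValue(Device, Timeline, &Completed);

		// Re-use the first slot whose prior work the GPU has already finished
		for( ResourceSlot& Slot : Slots )
		{
			if( Slot.ReadyValue <= Completed ) return Slot;
		}

		// All slots are still in flight: grow the pool by one entry
		Slots.emplace_back(CreateSlot());
		return Slots.back();
	}

	ResourceSlot CreateSlot(); // pool/memory allocation details omitted
};
```

On submission, the caller would bump a monotonically increasing counter, signal the timeline semaphore with that value in the queue submit, and store the same value into the slot's `ReadyValue` so a later `AcquireSlot()` knows when the slot has drained.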
MoltenVK supports `VK_KHR_timeline_semaphore` (KhronosGroup/MoltenVK#1124) and can safely use this pattern as well.