Support for WebGPU #547
I am considering building on top of the three-mesh-bvh project, but its data format and layout are not well documented. Could more detailed documentation be provided for this? Thank you for this project. @gkjohnson
Hello! Yes, a compute shader is much better suited for this kind of rendering. If you'd like to help make a WebGPU compute shader version of the path tracer in this project that would be great, and I can help that along, as well. As you've mentioned, adding support for raytracing against the three-mesh-bvh structure in a compute shader would be the first step. It would be best to create an issue in the three-mesh-bvh repo for that change so we can track progress there, but here's a breakdown of the flattened BVH data.

One node in the BVH structure is stored using 32 bytes. The first 24 bytes are common to all node types and contain the node's bounding box.

If the node is a parent node then the remaining 8 bytes contain the child information. The left child node is assumed to be the node stored immediately after the parent node.

If the node is a leaf (i.e. it directly contains triangles) then the remaining bytes contain the triangle offset, count, and a "leaf node" flag. The flag must be checked first to determine what kind of node has been encountered.
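Putting that together, decoding a single flattened node might look like the following sketch. The leaf fields match the write-side snippets later in this thread; the parent-node fields (right child location, split axis) are my reading of the three-mesh-bvh source rather than documented API, so verify them against the library before relying on them:

```javascript
// Sketch of decoding one 32-byte flattened BVH node from the layout described
// above. Field positions for parent nodes are assumptions to be verified
// against the three-mesh-bvh source.
const BYTES_PER_NODE = 32;
const IS_LEAFNODE_FLAG = 0xffff;

function decodeNode( buffer, byteOffset ) {

	const float32Array = new Float32Array( buffer );
	const uint32Array = new Uint32Array( buffer );
	const uint16Array = new Uint16Array( buffer );
	const stride4Offset = byteOffset / 4;
	const stride2Offset = byteOffset / 2;

	// common data: the bounding box as six 32-bit floats (min xyz, max xyz)
	const bounds = Array.from( float32Array.slice( stride4Offset, stride4Offset + 6 ) );

	if ( uint16Array[ stride2Offset + 15 ] === IS_LEAFNODE_FLAG ) {

		// leaf node: a contiguous range of triangles
		return {
			isLeaf: true,
			bounds,
			offset: uint32Array[ stride4Offset + 6 ],
			count: uint16Array[ stride2Offset + 14 ],
		};

	} else {

		// parent node: the left child immediately follows this node; the
		// location of the right child is read from the node itself
		return {
			isLeaf: false,
			bounds,
			leftChildByteOffset: byteOffset + BYTES_PER_NODE,
			rightChildIndex32: uint32Array[ stride4Offset + 6 ], // assumed: index into the uint32 view
			splitAxis: uint16Array[ stride2Offset + 14 ], // assumed: 0 = x, 1 = y, 2 = z
		};

	}

}
```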
I studied the following parts: 1. BVH ray intersection. What other parts need to be added? The workload is not small.
It's not a small amount of work, but I think taking this one step at a time is best and perhaps others will start helping, as well. The first step will be getting raytracing working in WebGPU compute shaders in a simple case - something like the three-mesh-bvh normals path tracing example or the lambert one, but with compute shaders. Once that's prepared we can start looking into adding a new WebGPU path tracer in this project and work up to all the same features as the WebGL version. If you're looking for more references on path tracing in general I can recommend Peter Shirley's "Ray Tracing in One Weekend" online book series, and the PBR Book for more advanced concepts.
It seems like there are some issues with the translation. Regarding the data structure, it should probably be like this:

If a geometry is non-indexed, then what would the "offset" refer to?
The data order stored in the CPU array buffer is offset, count, leaf node flag. You can see that data set here:

```js
const offset = node.offset;
const count = node.count;
uint32Array[ stride4Offset + 6 ] = offset;
uint16Array[ stride2Offset + 14 ] = count;
uint16Array[ stride2Offset + 15 ] = IS_LEAFNODE_FLAG;
return byteOffset + BYTES_PER_NODE;
```

This is the representation of the BVH used for storing and traversing the tree on the CPU. When the data is packed into a texture it is stored as you have laid it out - offset, leaf node flag, count. You can see that here:

```js
const count = COUNT( nodeIndex16, uint16Array );
const offset = OFFSET( nodeIndex32, uint32Array );
const mergedLeafCount = 0xffff0000 | count;
contentsArray[ i * 2 + 0 ] = mergedLeafCount;
contentsArray[ i * 2 + 1 ] = offset;
```

This is done because we cannot read Uint32s and Uint16s directly from the same texture, so the leaf node flag and count values are manually packed into a single Uint32. The count is stored in the low two bytes so no extra bit shifting is required to get the count value.
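Unpacking that merged value on the read side is then just a shift and a mask - a minimal sketch with an illustrative function name, not the library's actual helper:

```javascript
// Sketch: unpack the merged leaf-flag/count value described above.
// The leaf flag lives in the high 16 bits and the count in the low 16 bits.
function unpackLeafCount( merged ) {

	const isLeaf = ( merged >>> 16 ) === 0xffff; // high 16 bits hold the leaf flag
	const count = merged & 0xffff;               // low 16 bits hold the triangle count
	return { isLeaf, count };

}
```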
There are two scenarios. The first is that if a BVH is generated for geometry that has no index, then one is automatically generated so that the triangles can be rearranged. Second, when rendering this on the GPU the "indirect" buffer is converted to a regular index buffer (see here). For the initial implementation of WebGPU support I think it's okay to only handle the simple case and not worry about the "indirect" flag.

Feel free to ask other questions - I know some of these things are not obvious. If there are things you think could be more clear in the code, please feel free to make a PR to add some comments, as well.
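As a rough illustration of that conversion - assuming the indirect buffer holds one triangle index per BVH-ordered triangle, which is my reading of the scheme rather than documented behavior - expanding it into a regular index buffer for non-indexed geometry could look like:

```javascript
// Conceptual sketch (not the library's actual helper): expand an "indirect"
// buffer of triangle indices into a regular vertex index buffer for
// non-indexed geometry, where triangle t uses vertices 3*t, 3*t+1, 3*t+2.
function indirectToIndex( indirectBuffer ) {

	const index = new Uint32Array( indirectBuffer.length * 3 );
	for ( let i = 0; i < indirectBuffer.length; i ++ ) {

		const tri = indirectBuffer[ i ];
		index[ 3 * i + 0 ] = 3 * tri + 0;
		index[ 3 * i + 1 ] = 3 * tri + 1;
		index[ 3 * i + 2 ] = 3 * tri + 2;

	}
	return index;

}
```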
I am considering building a two-level BVH. Is there code for that in three-mesh-bvh?
I assume you mean something like a scene-level BVH or a top-level acceleration structure (TLAS)? This is not currently available in three-mesh-bvh, but it would be a good addition to the project. Long term I've wanted to create an abstraction for this. I just ask that the first contributions to three-mesh-bvh be reasonably sized - so focusing on a single feature first would be best.
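To make the two-level idea concrete, here is a dependency-free sketch of the top-level pass: a linear scan over instance bounds that defers to each instance's own bottom-level intersection routine. All names here are illustrative and not three-mesh-bvh API, and a real TLAS would itself be a BVH over the instance bounds rather than a linear scan:

```javascript
// Conceptual two-level traversal sketch. "instances" is an array of
// hypothetical objects with a world-space `bounds` ({ min, max } arrays)
// and an `intersect( ray )` function standing in for a bottom-level BVH.
function intersectScene( ray, instances ) {

	let closest = null;
	for ( const instance of instances ) {

		// cheap reject: skip instances whose bounds the ray cannot hit
		if ( ! rayIntersectsAABB( ray, instance.bounds ) ) continue;

		// descend into this instance's bottom-level BVH
		const hit = instance.intersect( ray );
		if ( hit && ( closest === null || hit.distance < closest.distance ) ) closest = hit;

	}
	return closest;

}

// standard slab test for a ray against an axis-aligned bounding box
function rayIntersectsAABB( ray, box ) {

	let tmin = - Infinity, tmax = Infinity;
	for ( let i = 0; i < 3; i ++ ) {

		const invD = 1 / ray.direction[ i ];
		let t0 = ( box.min[ i ] - ray.origin[ i ] ) * invD;
		let t1 = ( box.max[ i ] - ray.origin[ i ] ) * invD;
		if ( invD < 0 ) [ t0, t1 ] = [ t1, t0 ];
		tmin = Math.max( tmin, t0 );
		tmax = Math.min( tmax, t1 );

	}
	return tmax >= Math.max( tmin, 0 );

}
```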
Just curious - is there any update on this?
Testing the normals example, WebGPU compute shaders turn out to be about 20 times faster than WebGL's GPGPU approach, so rendering with WebGPU would be much quicker.

I also discovered the following project: https://lgltracer.com/editor/index.html. Its rendering speed is somewhat faster than that of three-gpu-pathtracer, but it's not open source, which makes it harder for us to make custom modifications when needed. Therefore, I think support for WebGPU would have significant value.

So, is there any consideration for three-gpu-pathtracer to support WebGPU?