Optimize Occlusion culling jitter #99974
Conversation
@lawnjelly this is the improvement we touched base on earlier. |
So this is effectively jittering the translation rather than the camera rotation, as discussed in #99941 (comment)? I did think of a snag with translation though, so sorry, I may have led you on a wild goose chase here... If my thinking is correct, rotation has the advantage that it automatically compensates for distance: the effective gap that causes the problem described in #53288 depends on distance, because it's the screen-space distance that determines whether two adjacent walls will "merge" in the occlusion buffer. Rotating the camera means the screen-space change of jittered objects stays the same with distance, I think, which is what we want (the absolute jitter in view space will increase). Translation will cause the opposite effect if I'm correct (with perspective projection): closer objects will jitter more on screen and farther objects less.
I suspect we need to make sure the test case works correctly at different distances. If this is correct, then rotation may be better, even if it produces the undesirable effect that there is more jitter in the centre than at the edges, due to perspective. I guess in that case we just calibrate to the centre and live with slightly less occlusion than ideal at the edges. Another alternative, as I suggested in the original PR, is to jitter each ray individually. Even with uniform jitter, you could adjust the magnitude to compensate for the edge effect, although jittering or adjusting per pixel may be overkill, and an unnecessary expense on CPU for a possible benefit that may not be realised. UPDATE: |
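For illustration, here is a small standalone sketch (not engine code; the numbers and the simplified symmetric projection are assumptions) of the effect described above: an offset added in clip space is divided by w, so its screen-space footprint shrinks with depth, whereas a small camera rotation deflects every ray by the same angle and produces a roughly constant screen-space shift.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const float fov_y = 70.0f * 3.14159265f / 180.0f;
    const float proj_scale = 1.0f / std::tan(fov_y * 0.5f); // projection y-scale (assumed symmetric frustum)
    const float clip_offset = 0.001f; // jitter added to clip-space x before the perspective divide
    const float rot_angle = 0.0005f;  // jitter applied as a small camera rotation (radians)

    for (float depth = 1.0f; depth <= 64.0f; depth *= 4.0f) {
        // Clip-space translation: NDC shift = offset / w, so it shrinks with distance.
        const float ndc_shift_translation = clip_offset / depth;
        // Rotation: a point near the view axis moves by roughly tan(angle) * projection scale,
        // independent of how far away it is.
        const float ndc_shift_rotation = std::tan(rot_angle) * proj_scale;
        std::printf("depth %5.1f  clip-translation shift %.6f  rotation shift %.6f\n",
                    depth, ndc_shift_translation, ndc_shift_rotation);
    }
    return 0;
}
```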
So this is the opposite: this PR jitters more like a rotation, while the current behaviour is a translation. You're right that translating moves closer objects more. The current implementation (with my last PR merged) translates the whole scene relative to the rays (equivalently, the rays relative to the scene) in clip space. What this PR does is actually the alternative you mention above. I will take a look at #53288 and see whether this PR solves the problem. |
(force-pushed from 8404d84 to d1192ab)
Rebased on top of #99941 |
If this is the case, this should indeed be an improvement. Yup, some decent testing over the distance range will be good. If I remember right, ideally we would like to see objects jitter by around a pixel (just under, in fact) no matter the distance from the camera. |
(force-pushed from d1192ab to a61bd84)
I updated the PR to raise the jitter magnitude to 0.99 pixels. With these settings I can confirm that the screen-space jitter amplitude stays the same regardless of distance (screenshots: the more to the right, the farther).
I can also confirm that this seems to greatly improve the situation described in #53288. |
That looks great. From memory I suspect the ideal value (less than 1) for the current pattern may be 0.66, so this is worth testing too. The jitter pattern, I think, is one sample in the centre, a medium square and a distant square (9 in total). At 1.0, I suspect it would be mimicking the situation where the whole scene was shifted to the left by a whole pixel, so I'm not absolutely sure we gain as much in terms of resolution.

Effectively, by jittering we are aiming to increase the resolution of the occlusion buffer, in this case by 9x (with 3x the resolution on each axis), therefore the fractions should be 0.0 (centre), 0.333 (mid square), 0.666 (far square). The multiplier we are changing here is for the far square, as the mid square is multiplied by 0.5. Essentially, the way I was thinking is that by regularizing this, we are maximizing the chance that these gaps show up in the occlusion buffer. The alternative of a more random pattern is potentially even better, but suffers from difficulty in repeatability for the timeout (of course you could use a fixed random pattern as an alternative). We can increase the resolution further with more samples, but that increases the timeout required for stability and makes objects more liable to pop in.

A priori I'm not absolutely sure what the best value would be here, as we are in uncharted territory - I'm not sure I've seen anyone else using this technique with raster occlusion culling, so it's just a combination of some theory and some empirical testing. We could equally well say that, say, 0.5 would deal with even smaller gaps (at the risk of missing some larger ones maybe). |
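To make that pattern concrete, here is a hedged sketch (not the engine's actual table; the square layout is one plausible interpretation) of a 9-sample pattern with a centre sample, a mid square at 0.5x the magnitude and a far square at the full magnitude, expressed in occlusion-buffer pixels. With a magnitude of 0.666 the per-axis offsets become 0.0 / 0.333 / 0.666, i.e. a regular 3x supersampling of each axis over the 9-frame cycle.

```cpp
#include <cstdio>

struct Vec2 {
    float x, y;
};

// Build the 9 jitter offsets (in occlusion-buffer pixels): centre, mid square, far square.
static void build_jitter_pattern(float p_magnitude, Vec2 r_pattern[9]) {
    int idx = 0;
    r_pattern[idx++] = { 0.0f, 0.0f }; // centre sample
    const float scales[2] = { 0.5f * p_magnitude, p_magnitude }; // mid square, far square
    const float signs[2] = { -1.0f, 1.0f };
    for (float scale : scales) {
        for (float sx : signs) {
            for (float sy : signs) {
                r_pattern[idx++] = { sx * scale, sy * scale };
            }
        }
    }
}

int main() {
    Vec2 pattern[9];
    build_jitter_pattern(0.666f, pattern); // compare against 0.99f as discussed above
    for (int i = 0; i < 9; i++) {
        std::printf("sample %d: (%+.3f, %+.3f) px\n", i, pattern[i].x, pattern[i].y);
    }
    return 0;
}
```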
(force-pushed from a61bd84 to 1dc3aaf)
Done. You can review now. |
Sounds good. I suppose if the occlusion buffer was a fixed integer fraction of the screen resolution, it should be possible to remove the problem entirely with this supersampling, but the choice of occlusion buffer size is currently pretty complex, tied to how many cores are available. I think we'd need people reporting problems for us to start doing things like altering the occlusion buffer sizes. E.g. at 1920x1080 you would use e.g. a 480x270 buffer, and use 3 extra temporally jittered samples on each axis. Might be worth doing on e.g. a console, but probably not for general use at the moment. |
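As a back-of-the-envelope illustration of that idea (purely hypothetical numbers, not a proposal for the engine): if the buffer is exactly 1/4 of the screen on each axis, a 4x4 temporal pattern (1 base sample plus 3 extra per axis) would recover full-screen effective resolution, at the cost of a 16-frame stabilisation window.

```cpp
#include <cstdio>

int main() {
    const int screen_w = 1920, screen_h = 1080;
    const int buffer_w = 480, buffer_h = 270; // exactly 1/4 of the screen on each axis
    const int fx = screen_w / buffer_w;       // samples needed per axis horizontally (4)
    const int fy = screen_h / buffer_h;       // samples needed per axis vertically (4)
    const int frames = fx * fy;               // frames to cycle the full jitter pattern
    std::printf("%dx%d buffer, %dx%d samples (%d extra per axis), %d-frame pattern -> %dx%d effective\n",
                buffer_w, buffer_h, fx, fy, fx - 1, frames, buffer_w * fx, buffer_h * fy);
    return 0;
}
```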
(force-pushed from 1dc3aaf to c7895ca)
I'm not super familiar with the 4.x release process now, but will bump to 4.4 and add to 4.4 merge queue as this should be a good addition. |
Thanks! |
This PR ensures the jittering is applied in a uniform manner across the whole viewport.
Previously, pixels with farther Z used to get less jitter than closer ones, as a side effect of perspective division. This is achieved by jittering the near plane corners instead of the projection matrix, as only those are ultimately passed to embree anyway.
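To illustrate the general idea (a hedged sketch only, not the PR's actual code; the frustum numbers, buffer size and values here are made up): the jitter offset is expressed as a fraction of an occlusion-buffer pixel on the near plane and added to all four near-plane corners, so every ray interpolated between those corners is deflected by the same sub-pixel amount, independent of screen position or depth.

```cpp
#include <cstdio>

struct Vec3 {
    float x, y, z;
};

int main() {
    // Hypothetical view-space near-plane corners for a symmetric frustum.
    const float near_d = 0.05f, half_w = 0.04f, half_h = 0.0225f;
    Vec3 corners[4] = {
        { -half_w, -half_h, -near_d }, { half_w, -half_h, -near_d },
        { -half_w,  half_h, -near_d }, { half_w,  half_h, -near_d },
    };

    // One sample of the temporal pattern, in occlusion-buffer pixels (assumed values).
    const float jitter_px_x = 0.666f, jitter_px_y = -0.333f;
    const int buffer_w = 480, buffer_h = 270; // assumed occlusion buffer size

    // Convert the pixel offset to near-plane units and shift every corner by it,
    // so each ray generated between the corners is deflected by the same sub-pixel amount.
    const float offset_x = jitter_px_x * (2.0f * half_w / buffer_w);
    const float offset_y = jitter_px_y * (2.0f * half_h / buffer_h);
    for (Vec3 &c : corners) {
        c.x += offset_x;
        c.y += offset_y;
    }

    for (int i = 0; i < 4; i++) {
        std::printf("jittered corner %d: (%.6f, %.6f, %.6f)\n", i, corners[i].x, corners[i].y, corners[i].z);
    }
    return 0;
}
```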
Test project: occlusion_jitter.zip
This improvement supersedes the fix in #99941.
Edit: Resolves #53288