
Can this allow for deferring of camera output on mobile devices? #7

Closed

AndrewJDR opened this issue Aug 30, 2021 · 5 comments
AndrewJDR commented Aug 30, 2021

Hi,

Thanks for putting this together, it's exciting. Can this indirectly also allow for deferring the output of particular camera frames? And if so, what would be the best way to use the API in order to achieve this goal?

There's a full thread about the use case for deferring output of camera frames here: immersive-web/webxr-ar-module#44

But I think @nbutko's use case explanation here is the most succinct I've seen:

> Delayed drawing - on mobile phones an application might need to synchronize the vision with the camera feed.
> -- Need to be able to hold an XR frame for ~2-10 animation frames before it is presented to the user with other results from vision.
>
> -- from immersive-web/proposals#4 (comment)

(With the caveat that we're doing remote rendering and need to sync that to the camera output, rather than syncing CV results; the requirements are the same for both.)

@AndrewJDR AndrewJDR changed the title Can this allow for deferring of camera output Can this allow for deferring of camera output? Aug 30, 2021
@AndrewJDR AndrewJDR changed the title Can this allow for deferring of camera output? Can this allow for deferring of camera output on mobile devices? Aug 30, 2021
nbutko commented Aug 31, 2021

Based on my experience with Chrome's current implementation of this API, I believe it would be possible. You'd need to hold the camera texture and then draw it as an opaque quad at the back plane of clip space, which would effectively occlude the real-time video frame underneath. You'd have to take care that the camera position you use when drawing is the same as the camera position from the time the texture was originally captured.
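The approach described above could be sketched roughly as follows. This is a hypothetical illustration, not code from the API or from Chrome: `FrameRing` and `FAR_PLANE_QUAD` are invented names, and the textures pushed into the ring are assumed to be app-owned copies (not the opaque camera textures themselves), each paired with the view matrix from its capture frame.

```javascript
// Full-screen quad at the far plane of clip space (z = 1.0). Drawn opaque,
// it occludes the live camera feed rendered behind it.
const FAR_PLANE_QUAD = new Float32Array([
  -1, -1, 1,
   1, -1, 1,
  -1,  1, 1,
   1,  1, 1,
]);

// Hypothetical ring buffer pairing each held (copied) camera texture with
// the camera pose from the frame it was captured on.
class FrameRing {
  constructor(delayFrames) {
    this.delay = delayFrames;
    this.slots = []; // oldest first
  }
  // texture: an app-owned WebGL texture copy; viewMatrix: pose at capture.
  // Returns the frame that is now `delay` frames old, or null if the buffer
  // hasn't filled yet.
  push(texture, viewMatrix) {
    this.slots.push({ texture, viewMatrix });
    return this.slots.length > this.delay ? this.slots.shift() : null;
  }
}
```

Each animation frame, the app would push the current copy and, once the buffer has filled, draw the returned slot's texture on the far-plane quad using that slot's stored view matrix rather than the current one.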

AndrewJDR (Author) commented Aug 31, 2021

Thanks @nbutko, I suspected it would work something like that. My hope is that the held texture (which I assume could be a copy into a ring buffer of textures or the like) can remain in WebGL driver-land, rather than requiring a round-trip readback out of WebGL driver-land into JS-land and a subsequent write-back into WebGL driver-land, since performance becomes a concern with that kind of arrangement. If someone could confirm there's a fast path for the texture copy, I'd be interested. The "opaque texture" concept still eludes me a bit, so I'm not sure whether it would hinder a fast copy path.
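One way the GPU-side copy might look, staying entirely in driver-land with no `gl.readPixels()` round-trip through JS: attach the source texture to a framebuffer and use `copyTexImage2D`, which reads from the bound framebuffer on the GPU. This is a sketch under assumptions, not confirmed behavior: whether an implementation permits attaching the opaque camera texture to a framebuffer as a copy source is an open question in this thread, and `copyCameraTexture` is an invented helper name.

```javascript
// Hypothetical GPU-side copy of a camera texture into an app-owned texture.
// Assumes the implementation allows the camera texture as a framebuffer
// color attachment; only standard WebGL calls are used.
function copyCameraTexture(gl, cameraTexture, width, height) {
  // Attach the source texture to a temporary framebuffer...
  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, cameraTexture, 0);

  // ...and copy its contents into a texture the app owns. The copy happens
  // on the GPU; no pixels cross into JS-land.
  const copy = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, copy);
  gl.copyTexImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 0, 0, width, height, 0);

  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.deleteFramebuffer(fb);
  return copy;
}
```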

One slightly unfortunate effect of using raw camera access for this use case is that a UA permission prompt will be required, even though access to the actual pixels isn't needed; only delaying the output of those pixels is. If there were room in this API for delaying camera output without actual access to the camera pixels, that could be quite useful, though I'm not sure it's the best fit for this API. I do see that the spec mentions caching here: https://immersive-web.github.io/raw-camera-access/#xr-web-gl-binding-section

Something like this caching mechanism, combined with a manualPlayout(CachedUnreadableFrameHandle)-type function, could allow controlled, delayed playout of a dev-unreadable cached camera frame texture, which would mean no additional UA permission prompts.

bialpio (Contributor) commented Aug 31, 2021

> Based on my experience with Chrome's current implementation of this API, I believe it would be possible. You'd need to hold the camera texture and then draw it as an opaque quad at the back plane of clip space, which would effectively occlude the real-time video frame underneath. You'd have to take care to make sure that the camera position you use when drawing is the same as the camera position from the time the texture was originally captured.

Correct, this is how I'd imagine this would work, with a small clarification that an explicit texture copy is required since we (= WebXR) want to have full control over the texture lifetime. We could always hand out copies, but this would cause a performance hit for use cases that do not need the copy.

> If someone could confirm there's a fast path for the texture copy, I'd be interested. The "opaque texture" concept still eludes me a bit, so I'm not sure if this would lead to any hindrances to a fast copy path.

I believe that as long as you issue the appropriate WebGL commands from within the requestAnimationFrame callback, the copy should work fine. The "opaque texture" concept is there mainly to signal to developers that they do not own the texture, so WebXR can (and, at least in Chrome, will) pull the rug out from under them if a reference outlives the XRFrame.
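Putting the lifetime constraint concretely: since the opaque texture returned by `XRWebGLBinding.getCameraImage()` (the raw-camera-access API call) is only valid within the current requestAnimationFrame callback, the explicit copy has to be issued there, before the callback returns. A minimal sketch of that per-frame flow, where `copyTexture` (an app-supplied GPU-side copy function) and `queue` (app-supplied storage for delayed frames) are assumed helpers:

```javascript
// Hypothetical per-frame flow. The opaque camera texture dies with this
// XRFrame, so it is copied immediately, together with the pose it was
// captured with, before the callback returns.
function makeOnXRFrame(binding, refSpace, copyTexture, queue) {
  return function onXRFrame(time, frame) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      for (const view of pose.views) {
        if (!view.camera) continue; // no camera image for this view
        // Opaque texture: owned by WebXR, valid only inside this callback.
        const cameraTexture = binding.getCameraImage(view.camera);
        queue.push({
          texture: copyTexture(cameraTexture,
                               view.camera.width, view.camera.height),
          transform: view.transform.matrix, // pose at capture time
        });
      }
    }
    frame.session.requestAnimationFrame(onXRFrame);
  };
}
```

Frames pulled out of `queue` some number of callbacks later would then be drawn with their stored capture-time transform, not the current one.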

> I do see that the Spec mentions caching here - https://immersive-web.github.io/raw-camera-access/#xr-web-gl-binding-section

The caching that you refer to here is there to ensure that we don't perform the same work over and over if the texture is requested for the same camera within the same requestAnimationFrame callback (rAFcb). The spec could have been silent about caching, in which case UAs could still employ it, but I thought that explicitly calling out the parameter / state validation would be helpful to other implementers.
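The kind of per-callback caching described here could be illustrated with a memoization keyed on the (frame, camera) pair, so repeated requests within one rAFcb reuse the first result while a new frame starts fresh. This is a hypothetical sketch of the idea only; `getCameraImageCached` is an invented name, not spec or UA code:

```javascript
// Hypothetical per-frame memoization: within one XRFrame, repeated
// getCameraImage() requests for the same camera return the same texture
// without redoing the work. WeakMap lets old frames' entries be collected.
const cache = new WeakMap(); // XRFrame -> Map<XRCamera, texture>

function getCameraImageCached(binding, frame, camera) {
  let perFrame = cache.get(frame);
  if (!perFrame) cache.set(frame, perFrame = new Map());
  if (!perFrame.has(camera)) {
    perFrame.set(camera, binding.getCameraImage(camera));
  }
  return perFrame.get(camera);
}
```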

AndrewJDR (Author) commented Sep 1, 2021

@bialpio Just a heads-up: I opened a bug over on WebKit - https://bugs.webkit.org/show_bug.cgi?id=229752
Obviously this is still just a draft, but I think it's good to raise visibility across browser vendors even at this early stage. Thanks again.

AndrewJDR (Author) commented:
(posted incorrect link in OP, post edited - https://bugs.webkit.org/show_bug.cgi?id=229752 )
