
Is Volume Viewport CPU memory redundant? #492

Closed
Ouwen opened this issue Mar 18, 2023 · 14 comments

Labels
enhancement New feature or request

Comments

@Ouwen
Contributor

Ouwen commented Mar 18, 2023

This is related to #452

Looking at the code, a 3D texture is created by the following chain of vtk.js calls:

https://github.com/Kitware/vtk-js/blob/00a46c083f4a637eb9433ea06bfc3b63ba262189/Sources/Rendering/OpenGL/Texture/index.js#L1408-L1415

https://github.com/Kitware/vtk-js/blob/00a46c083f4a637eb9433ea06bfc3b63ba262189/Sources/Rendering/OpenGL/Texture/index.js#L1301-L1338

In cornerstone3D, the streaming viewport uses texSubImage3D to upload individual frames at computed offsets:
https://github.com/cornerstonejs/cornerstone3D-beta/blob/47d0671b38e22c6509a1f02cce0489b6512e33fd/packages/core/src/RenderingEngine/vtkClasses/vtkStreamingOpenGLTexture.js#L160-L164

It doesn't seem like the 3D case requires one contiguous memory array in order to display the image. Instead, would it be possible to keep individual slices, the same as the stack viewport, and feed in calculated offsets for the 3D case?

This would keep CPU memory usage relatively light and avoid duplicating data that is already kept in GPU memory.
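
A minimal sketch of that idea in plain WebGL2 (this is not the actual vtkStreamingOpenGLTexture code; fetchFrame, width, height, and depth are placeholder names):

// Allocate the 3D texture once on the GPU, then stream each decoded slice into
// its z offset with texSubImage3D, so only one frame at a time lives on the CPU.
async function uploadVolumeSliceBySlice(gl, width, height, depth, fetchFrame) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_3D, tex);
  // Immutable GPU storage for the whole volume; no contiguous CPU array is needed.
  gl.texStorage3D(gl.TEXTURE_3D, 1, gl.R8, width, height, depth);

  for (let z = 0; z < depth; z++) {
    const frame = await fetchFrame(z); // hypothetical loader returning one Uint8Array slice
    gl.texSubImage3D(
      gl.TEXTURE_3D, 0, // target, mip level
      0, 0, z,          // x, y, z offsets into the existing texture
      width, height, 1, // upload a single slice
      gl.RED, gl.UNSIGNED_BYTE,
      frame
    );
    // frame can be released here; the data now lives only in GPU memory.
  }
  return tex;
}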

@Ouwen
Contributor Author

Ouwen commented Mar 18, 2023

@sedghi @JamesAPetts, would love to get your thoughts on this if you have the time.

@Ouwen
Contributor Author

Ouwen commented Mar 18, 2023


Here is a quick POC on the yarn run example volumeAPI demo. We can run the following in the console and still have most of the volume functionality:

// Grab the cached streaming volume.
const v = cornerstone.cache.getVolume('cornerstoneStreamingImageVolume:CT_VOLUME_ID')
// Release the vtkDataArray backing the vtkImageData point scalars.
v.imageData.getPointData().getScalars().delete()
// Drop the volume's own reference to the scalar array so it can be garbage collected.
delete v.scalarData

@swederik
Member

swederik commented Mar 18, 2023 via email

@JamesAPetts
Member

JamesAPetts commented Mar 18, 2023

Haven't worked on CS3D in a while (hope to come back sometime!), but yeah, there is no reason to keep the whole volume on the CPU.

I think Erik and I actually had this discussion about a year ago. Basically the 3D viewport came first, and the stack viewport came after. Hindsight is 20/20.

I think we should eventually do as you say, and keep slices on the CPU and 3D volumes in VRAM.

Not sure of the timeline for this, but I agree on the long-term goal.

@Ouwen
Contributor Author

Ouwen commented Mar 18, 2023

Thanks for the quick responses guys!

@Ouwen
Contributor Author

Ouwen commented Mar 18, 2023

One idea for volume post processing:

https://js.tensorflow.org/api/latest/?_gl=1*yode7u*_ga*NjU4NjE3MDMwLjE2NjQzOTM1OTg.*_ga_W0YLR4190T*MTY3OTE1OTIyMy4xNC4xLjE2NzkxNTkyMjkuMC4wLjA.#tensor

It seems like tfjs allows us to bind a texture that is already on the GPU and avoid CPU/GPU transfers.
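
As a rough sketch of that (assuming a recent tfjs release where tf.tensor accepts a WebGLData descriptor, and a texture that lives in the same WebGL context as the tfjs backend; volumeTexture, texWidth, and texHeight are placeholders):

import * as tf from '@tensorflow/tfjs';

// Wrap an existing GPU texture as a tensor; no copy back to the CPU happens here.
const volumeTensor = tf.tensor(
  { texture: volumeTexture, height: texHeight, width: texWidth, channels: 'R' },
  [texHeight, texWidth] // logical shape of the data packed into the texture
);

// GPU-resident ops can now run on it; data() or dataToGPU() expose the result
// to the CPU or to another texture only when something actually needs it.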

@sedghi
Member

sedghi commented Mar 21, 2023

@Ouwen interesting idea. The only problem we see is the tools. Rendering should probably work, but segmentation tools, or any other tools that rely on the coordinates of each voxel, would need to be proxied somehow since each slice is separate. What is the original source of the problem that led you to this idea?
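
Purely as an illustration of what such a proxy could look like (hypothetical names, not an existing Cornerstone3D API), tools could index by (i, j, k) through a small shim over per-slice arrays:

// frames is an array of per-slice TypedArrays; frames[k] holds slice k,
// laid out row-major as width * height samples.
function createVoxelAccessor(frames, width) {
  return {
    // Read the voxel at column i, row j, slice k.
    get(i, j, k) {
      return frames[k][j * width + i];
    },
    // Write a voxel, e.g. for labelmap / segmentation edits.
    set(i, j, k, value) {
      frames[k][j * width + i] = value;
    },
  };
}

// Usage: const voxels = createVoxelAccessor(sliceArrays, 512);
//        const value = voxels.get(100, 200, 42);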

@Ouwen
Contributor Author

Ouwen commented Mar 21, 2023

@sedghi, the main problem was memory usage on mobile systems, which have tight limits on CPU memory. It seems like WebGL has around 1 GB of memory available for textures, even on mobile.

@sedghi
Member

sedghi commented Mar 21, 2023

that is too low :(
I would be happy to discuss it more, but we need to keep the tools in mind when developing this.

@Ouwen
Contributor Author

Ouwen commented Mar 21, 2023

Yeah definitely interested!

that is too low :(

1 GB of texture memory on iOS, and likely more on desktop of course (limited by the device GPU). CPU memory seems to be limited to around 300 MB, so 1 GB feels like a big upgrade, ha.

Initial thoughts are to use tfjs to bind the texture. Then any tools/functions that require imaging data can also be GPU accelerated, plus there is a nice interface to pull data from GPU => CPU if needed.

For example, I think prescaling should likely not occur in a worker. If the data is on the GPU, the scale + add can be broadcast in parallel (see the sketch below).
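
As a small illustration of that (tfjs, with made-up rescale values; volumeTensor is assumed to already be a GPU-resident tensor):

// Apply a modality-LUT style rescale to the whole volume on the GPU
// instead of scaling each frame on a CPU worker.
const rescaled = tf.tidy(() =>
  volumeTensor.mul(tf.scalar(1.0)).add(tf.scalar(-1024))
);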

@JamesAPetts
Member

The reason we do scaling in the worker is for cases like PET: each frame needs to be scaled differently to make one cohesive volume. As such, we found it easier to scale the whole thing in one go; otherwise you would have to put a large, complicated array plus the scaling logic in the shader, when you only really need to scale it once.

With that in mind, if you have a good solution for that, I'm all ears.
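
For what it's worth, per-frame scaling does not necessarily require shader logic if the volume lives in a GPU tensor: a [depth, 1, 1] tensor of per-frame factors broadcasts across each slice. A sketch with placeholder names (volumeTensor shaped [depth, height, width]; frameSlopes / frameIntercepts as plain arrays of per-frame rescale values, e.g. from PET), not a claim about how CS3D should implement it:

const scaled = tf.tidy(() => {
  // One slope/intercept pair per frame, shaped so broadcasting applies each
  // pair to its own slice of the [depth, height, width] volume.
  const slopes = tf.tensor(frameSlopes, [frameSlopes.length, 1, 1]);
  const intercepts = tf.tensor(frameIntercepts, [frameIntercepts.length, 1, 1]);
  return volumeTensor.mul(slopes).add(intercepts);
});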

bionoone pushed a commit to bionoone/cornerstone3D-beta that referenced this issue Mar 27, 2023

* fix(webworker):Make webworkers asynchronous to fix errors

Webworkers used a callback, but that failed to leave a webworker
available every time a webworker failed, and that eventually hangs
the client.  This change fixes that by being much more reliable
catching the results.

* fix:Update with most recent changes from master

* PR review changes
@sedghi sedghi added the enhancement New feature or request label Oct 13, 2023
@sedghi
Member

sedghi commented Nov 2, 2023

It seems like these two calls are taking too long in create3DFromRaw:

const pixData = updateArrayDataType(dataType, dataArray, is3DArray);
const scaledData = scaleTextureToHighestPowerOfTwo(pixData);

Maybe we should not run them for an empty volume.
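
One possible shape for that guard, sketched against the two calls quoted above (isEmptyVolume is a hypothetical flag supplied by the caller, not an existing option):

// Skip the expensive type-conversion and power-of-two rescaling passes when the
// texture is being created for an empty, not-yet-streamed volume.
let pixData = dataArray;
if (!isEmptyVolume) {
  pixData = updateArrayDataType(dataType, dataArray, is3DArray);
  pixData = scaleTextureToHighestPowerOfTwo(pixData);
}
// ...continue with the (possibly untouched) pixData as before.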

@sedghi
Member

sedghi commented Jan 2, 2024

This PR kind of addresses this.

@sedghi
Member

sedghi commented Dec 12, 2024

Cornerstone 3D 2.0 addresses this

@sedghi sedghi closed this as completed Dec 12, 2024