Port OpenVDB Python bindings from pybind11 to nanobind #1753
Conversation
Force-pushed from e347344 to b9e51f6.
I have no reason to think we would want to move away from pybind11. Can you explain your thinking here?
Force-pushed from b9e51f6 to 427a2cc.
We discussed this in last week's TSC meeting. To summarize, I'm concurrently developing Python bindings for NanoVDB which expose GPU interop via nanobind, a feature which does not exist in pybind11. Since we need to pass objects between OpenVDB and NanoVDB at the C++ level even under a Python abstraction, it is preferable for both sets of bindings to use the same framework. From the OpenVDB perspective, the port to nanobind makes no change to the Python interface (i.e. Python code that works today will continue to work from the user's standpoint), and nanobind furthermore provides faster runtimes, faster compile times, and smaller library files than pybind11.
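To make the interface-stability claim concrete, here is a minimal sketch (not code from this PR) of the same binding written once with pybind11 and once with nanobind; a hypothetical stand-in `Grid` class is used in place of a real OpenVDB grid type so the sketch is self-contained. The exposed Python surface is identical in both cases, which is why the port does not change user-facing Python code.

```cpp
// grid_stub.h -- hypothetical stand-in for an OpenVDB grid type,
// used only so this sketch compiles without OpenVDB itself.
#include <cstddef>
struct Grid {
    std::size_t activeVoxelCount() const { return 0; }
};

// bindings_pybind11.cpp
#include <pybind11/pybind11.h>
namespace py = pybind11;

PYBIND11_MODULE(openvdb_pb11, m) {
    py::class_<Grid>(m, "Grid")
        .def(py::init<>())
        .def("activeVoxelCount", &Grid::activeVoxelCount);
}

// bindings_nanobind.cpp -- nearly line-for-line the same binding code.
#include <nanobind/nanobind.h>
namespace nb = nanobind;

NB_MODULE(openvdb_nb, m) {
    nb::class_<Grid>(m, "Grid")
        .def(nb::init<>())
        .def("activeVoxelCount", &Grid::activeVoxelCount);
}
```

From Python, both modules expose the same `Grid` class with the same `activeVoxelCount()` method, so user code is unaffected by which framework generated the binding.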
Mainly for @Idclip's benefit - we also discussed what we see as the main issue with this proposed change, which is the dependencies. Nanobind requires robin-map, which is a new dependency relative to pybind11, and unlike pybind11 it is not header-only. While many of the ASWF projects are actively switching from boost python to pybind11, there's not much adoption of nanobind currently, so this proposed migration does introduce some additional barriers. One of the points of discussion was whether it might be feasible to embed a version of nanobind, similar to what we do with half, to reduce this barrier. Of course, Git submodules and similar mechanisms are not as convenient when behind a firewall. Embedding doesn't seem as straightforward a decision as for half, but Matt was going to think a bit more about what that means in terms of symbols, namespaces, CMake integration, etc. Hopefully we can find a good time to discuss with you as well to get your thoughts. The benefits are clear, so it's worth considering how exactly we might go about doing this. We haven't completely ruled out having pybind11 for OpenVDB and nanobind for NanoVDB as well.
Git subtree can be an alternative for managing the dependency. During our TSC meeting last week, we also talked about NanoBind's dependency on robin-map. Moving to NanoBind also seems to future-proof the Python bindings, given the reasons that @matthewdcong explained.
While nanobind for NanoVDB and pybind11 for OpenVDB might be a good interim solution, I'm not a huge fan of it long-term as it prevents a user from using Open and Nano synergistically in Python. For example, I think we'd want to support a workflow where host-side modifications are made in OpenVDB, followed by a conversion to NanoVDB for device-side computation, and a conversion back to OpenVDB for more host-side co-processing if possible.
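A rough C++-level sketch of that round trip, not taken from this PR: it assumes the NanoVDB conversion helpers nanovdb::createNanoGrid and nanovdb::nanoToOpenVDB, whose header locations vary between NanoVDB releases, and it leaves the GPU step as a comment.

```cpp
#include <openvdb/openvdb.h>
// NanoVDB header locations differ between releases (util/ vs tools/).
#include <nanovdb/util/CreateNanoGrid.h>
#include <nanovdb/util/NanoToOpenVDB.h>

int main() {
    openvdb::initialize();

    // Host-side authoring/modification in OpenVDB.
    openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create(/*background=*/0.0f);
    grid->getAccessor().setValue(openvdb::Coord(0, 0, 0), 1.0f);

    // Convert to NanoVDB for device-side computation.
    auto handle = nanovdb::createNanoGrid(*grid);
    // ... upload the handle's buffer to the GPU, launch kernels, download results ...

    // Convert back to OpenVDB for further host-side co-processing.
    openvdb::GridBase::Ptr roundTripped = nanovdb::nanoToOpenVDB(handle);
    return 0;
}
```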
Thank you Dan, this has exactly described my concern. Faster compile times/smaller binaries and "faster" runtime (I'd be surprised to see this having a significant impact for the types of bindings we expose - is the binding lookup really that slow compared to the VDB operations we're invoking? Happy to be proven wrong, but I would like to see results) are great, but IMO they are really not much of a win when compared to the added potential build complexity and the impact on ASWF homogenization. We just moved away from boost python for exactly this reason.
Nanobind is newer, but this does not necessarily mean it's more future-proof, and I don't see enough evidence to reflect this. I do however appreciate the mentioned limitation with GPU interop - @matthewdcong, does this pybind11 issue (pybind/pybind11#3858) reflect the problem, or is it more involved? I would like to understand this more, and whether there are any other solutions available to us with pybind11.
Yeah, the pybind11 issue outlines part of the problem. As they mention in the discussion, backporting a basic typecaster for a DLPack struct to pybind11 is doable, but it becomes much more involved to transparently handle all the frameworks (NumPy, PyTorch, Tensorflow, etc.) in a unified manner with datatype casting, shape checking, and host and device memory management.
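For context, a sketch rather than code from this PR: nanobind's nb::ndarray class is the piece that provides this unified handling, accepting DLPack-compatible arrays from NumPy, PyTorch, TensorFlow, etc. through a single parameter type, with dtype, rank, and device constraints expressed as template annotations. The module and function names below are made up for illustration.

```cpp
#include <nanobind/nanobind.h>
#include <nanobind/ndarray.h>

namespace nb = nanobind;

NB_MODULE(interop_sketch, m) {
    // Accepts any DLPack-compatible rank-3 float32 array residing in CUDA device
    // memory (e.g. a PyTorch CUDA tensor); nanobind checks dtype, rank, and device.
    m.def("voxel_count", [](nb::ndarray<float, nb::ndim<3>, nb::device::cuda> arr) {
        // arr.data() is a raw device pointer; a real kernel launch would go here.
        return arr.shape(0) * arr.shape(1) * arr.shape(2);
    });
}
```

Getting equivalent behavior in pybind11 would require hand-written typecasters per framework, which is the added involvement described above.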
While an additional dependency may introduce a small barrier to adoption, Python-level host and device interoperability with widely-used frameworks such as PyTorch and Tensorflow has the potential to significantly increase the audience for the VDB data structure within the large and growing ML community. |
Force-pushed from 04f10e7 to 363728d.
Force-pushed from a1430d9 to ad2bc2c.
Force-pushed from ad2bc2c to 3adb5c9.
Force-pushed from 2587f02 to bf5713d.
Closing in favor of duplicate PR #1916.