
Support all tensor types #20

Open
radu-matei opened this issue Jun 24, 2021 · 1 comment
Assignees
Labels
help wanted Extra attention is needed

Comments

radu-matei (Member) commented Jun 24, 2021

Currently, if the input and output tensors are not F32, both runtimes panic.
This stems from how we handle and store input and output tensors, particularly in the native ONNX runtime, where both input and output tensors are stored as Vec<Array<f32, Dim<IxDynImpl>>>.

The additional tensor types defined by WASI NN are F16, U8, and I32.

We need to either:

  • make both tensors generic over the data type. This would likely make the entire State struct generic, which could complicate how we manage it.
  • keep storing every tensor as F32, introduce fields for the input and output tensor types (Question: how to handle input model not containing tensor shapes? #12), and convert them just-in-time: input tensors when compute is called, and output tensors when get_output is called. However, this would introduce an additional performance penalty at inference time.
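The second option could be sketched roughly as below: keep raw bytes alongside a type tag and convert to f32 on demand. This is an illustrative, standalone sketch, not the crate's actual API; the names (TensorType, bytes_to_f32) are hypothetical, and F16 is omitted because stable Rust has no native f16 (a real implementation would likely use the half crate).

```rust
/// Hypothetical tag mirroring the WASI NN tensor types we need to support.
/// F16 is left out here; see the note above.
#[derive(Clone, Copy, Debug, PartialEq)]
enum TensorType {
    F32,
    U8,
    I32,
}

/// Just-in-time conversion of a raw little-endian byte buffer into f32s,
/// as would happen when `compute` (inputs) or `get_output` (outputs) is called.
fn bytes_to_f32(data: &[u8], ty: TensorType) -> Vec<f32> {
    match ty {
        TensorType::F32 => data
            .chunks_exact(4)
            .map(|b| f32::from_le_bytes([b[0], b[1], b[2], b[3]]))
            .collect(),
        // U8 widens losslessly to f32.
        TensorType::U8 => data.iter().map(|&b| b as f32).collect(),
        // I32 -> f32 may lose precision for large magnitudes; acceptable
        // only if the runtime's backing store must stay f32.
        TensorType::I32 => data
            .chunks_exact(4)
            .map(|b| i32::from_le_bytes([b[0], b[1], b[2], b[3]]) as f32)
            .collect(),
    }
}

fn main() {
    let raw = 7i32.to_le_bytes();
    assert_eq!(bytes_to_f32(&raw, TensorType::I32), vec![7.0]);
    assert_eq!(bytes_to_f32(&[1, 2, 3], TensorType::U8), vec![1.0, 2.0, 3.0]);
    println!("ok");
}
```

The conversion cost is paid once per inference call, which is the performance penalty the second bullet refers to; the first option (a generic State) would avoid it at the cost of generics spreading through the runtime's state management.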
Mossaka (Member) commented Jun 25, 2021

When I get back from vacation I will look into this.
