[models] Ensure all PyTorch models are ONNX exportable #789
Comments
@charlesmindee classification complete 👍
I'm relatively green when it comes to ML engineering, but I've been looking at exporting the DBNet models to ONNX. I think we're fairly close already and just need to convert the inputs/outputs to a data structure compatible with ONNX (specifically, not numpy.ndarray). Here's some sample code and the resulting error. I'm slowly working on solving this on my end, but I wanted to post it here in case it's an easy problem for someone more familiar with the project.
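For reference, a minimal sketch of the kind of export attempt described above (a hypothetical reconstruction, since the original snippet was not preserved; the `db_resnet50` import path and input shape are assumptions):

```python
# Hypothetical reconstruction of the export attempt (not the original snippet).
import torch
from doctr.models import db_resnet50  # assumed import path for the DBNet model

model = db_resnet50(pretrained=True).eval()
dummy_input = torch.rand(1, 3, 1024, 1024)  # assumed NCHW input size

# At the time, this failed: the forward pass ended in numpy-based
# postprocessing, and torch.onnx.export can only trace tensor operations.
torch.onnx.export(
    model,
    dummy_input,
    "db_resnet50.onnx",
    input_names=["input"],
    opset_version=13,
)
```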
Hi @frytoli 👋, thanks for working on this 🤗 To clarify where the problem is: currently, the model's forward pass includes the postprocessing call.
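To make that concrete, here is a rough sketch of the separation being described (illustrative only, not docTR's actual class layout): keep `forward()` purely tensor-based so it can be traced, and run the numpy-based postprocessing outside the exported graph.

```python
import torch
from torch import nn

class ExportableDetector(nn.Module):
    """Illustrative wrapper: forward() stays tensor-only, so tracing works."""

    def __init__(self, backbone: nn.Module, head: nn.Module) -> None:
        super().__init__()
        self.backbone = backbone
        self.head = head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only tensor operations here: safe for torch.onnx.export
        return self.head(self.backbone(x))

def postprocess(prob_map: torch.Tensor):
    # numpy-based box decoding lives outside the traced graph
    return prob_map.sigmoid().detach().cpu().numpy()  # placeholder for real box extraction
```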
Hi @felixdittrich92 and @frytoli,
@charlesmindee @frytoli Sounds good to me, but we have to take care of the increasing complexity and the differences between the TF and PT implementations. So I would suggest trying it with one model, but on both implementations, so that we can provide an equivalent implementation which solves the ONNX export problem on the PT side and the SavedModel export on the TF side. Wdyt?
Yes, it would be nice to do a POC on one model first, but we may want to make it exportable to ONNX for both PyTorch and TensorFlow.
So if we can handle both with ONNX, do we really need TF's SavedModel as well? Otherwise, I would say let's modify this issue and close the counterpart issue for TF SavedModel 👍 (I will test tf2onnx this week with the classification models)
@charlesmindee wdyt?
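For reference, a hedged sketch of what that tf2onnx check could look like (the stand-in Keras model and shapes are assumptions for illustration, not docTR code):

```python
import tensorflow as tf
import tf2onnx

# Stand-in for a docTR TF classification model (assumption for illustration)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Convert the Keras model to ONNX with a fixed input signature
spec = (tf.TensorSpec((None, 32, 32, 3), tf.float32, name="input"),)
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature=spec, opset=13)
with open("classifier.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```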
Hi all 👋 So, yes, we're very aware of the JIT incompatibility with some data structures & numpy operations. Having conditional executions in call methods could become tricky for maintenance later on. This is a big topic, hence this issue. Here is a suggestion:
What do you think?
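As an illustration of the conditional-execution concern (a hypothetical example, not docTR code): a data-independent flag in `forward()`/`call()` gets frozen to a single branch when the model is traced for export.

```python
import torch
from torch import nn

class FlaggedModel(nn.Module):
    def forward(self, x: torch.Tensor, return_probs: bool = False) -> torch.Tensor:
        logits = x.mean(dim=(2, 3))
        if return_probs:
            # Tracing only records the branch taken at trace time, so the
            # exported graph silently loses the other code path.
            return logits.softmax(-1)
        return logits
```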
@frgfm @charlesmindee
Keep in mind that, as a user, I think most people will also want to use these exported models inside docTR.
I know this would be a lot of refactoring, but maybe it could be a good match with #530. What do you think? 😄
My bad, I should clarify my previous thought:
About the training & alignment with TF, this is a serious problem. Moving between TF & ONNX is much more troublesome than between PyTorch & ONNX. I fully agree that we should do our best to move in the right direction though 😅
Short update: left:
Update: with #941, PyTorch is done ❤️ One ref: microsoft/onnxruntime#10994. The MASTER (PT) model is exportable but doesn't work with onnxruntime until that ticket is closed.
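For context, the "doesn't work with onnxruntime" part refers to the loading/inference step, roughly like this (a sketch; the file names and input shape are assumptions):

```python
import numpy as np
import onnxruntime as ort

# Loading an exported detection model works; loading master.onnx raised the
# Trilu error from microsoft/onnxruntime#10994 at the time of writing.
session = ort.InferenceSession("db_resnet50.onnx", providers=["CPUExecutionProvider"])
dummy = np.random.rand(1, 3, 1024, 1024).astype(np.float32)
outputs = session.run(None, {"input": dummy})
```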
@frgfm @charlesmindee I think this issue is only about the export; I would open another issue for the loading part. Wdyt? :)
@felixdittrich92 yes, precisely 👍
@frgfm @charlesmindee I think it is OK if we close this: all models (without postprocessing) can be exported.
Most users of the library are more interested in using existing pretrained models for inference than in training their own. For this reason, it's important to ensure those trained models can easily be exported.
General
Classification
Text Detection
Text Recognition
NOTE: MASTER is exportable, but due to an onnxruntime internal issue (Could not find an implementation for Trilu node - ORT Model Optimizer, microsoft/onnxruntime#10994), the loading fails.