As a user, after exporting my models to ONNX format, I want to easily load them in docTR.
Something like:
# -> same usage as ocr_predictor
predictor = onnx_ocr_predictor(
    det_model='path/db_mobilenet_v3_small.onnx',
    reco_model='path/crnn_mobilenet_v3_small.onnx',
    provider='gpu',  # default: 'cpu'
)
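For context, this is roughly how the models above get exported in the first place; a minimal sketch assuming the PyTorch backend (the dummy input shape and output file name are illustrative):

import torch
from doctr.models import crnn_mobilenet_v3_small

# Pretrained recognition model, switched to eval mode for export
model = crnn_mobilenet_v3_small(pretrained=True).eval()
# Dummy input in (N, C, H, W) layout; 32x128 crops match docTR's recognition input size
dummy_input = torch.rand(1, 3, 32, 128)
torch.onnx.export(
    model,
    dummy_input,
    "path/crnn_mobilenet_v3_small.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch_size"}, "logits": {0: "batch_size"}},
)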
Thanks for the suggestion!
So I have experimented with that and there are a few things I think we should consider:
exporting to ONNX requires much more than loading it: the original DL backend framework, the architecture definition, the parameter values, and whatever dependencies enable the export
loading ONNX is meant for light environments & fast inference: we don't need the original DL backend framework, only ONNX Runtime and the architecture + parameter file
So my point is that loading ONNX files with docTR is doable, but it will be significantly heavier than loading them in a separate, lighter environment 👍
That being said, even if it's heavier, we could do it so developers can play with it. If so, we need to add a new supported backend 🤷‍♂️
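For comparison, loading the exported file in such a lighter environment needs only onnxruntime and numpy; a minimal sketch (the preprocessing and input layout are assumptions about the exported model above):

import numpy as np
import onnxruntime as ort

# Request the GPU execution provider first, falling back to CPU if unavailable
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
session = ort.InferenceSession("path/crnn_mobilenet_v3_small.onnx", providers=providers)

# A normalized float32 batch of crops in (N, C, H, W) layout
batch = np.random.rand(1, 3, 32, 128).astype(np.float32)
logits = session.run(None, {session.get_inputs()[0].name: batch})[0]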
🚀 The feature
We need to:
- support loading exported ONNX models through an ocr_predictor-style interface (see the sketch after this section)
To solve before:
Other related Issues:
#789 #790
Related discussion:
#981
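To make the discussion concrete, here is a minimal sketch of what such an entry point could look like on top of onnxruntime; onnx_ocr_predictor is the hypothetical API from the pitch, not existing docTR code:

import onnxruntime as ort

def onnx_ocr_predictor(det_model: str, reco_model: str, provider: str = "cpu"):
    # Map the simple 'cpu'/'gpu' switch onto ONNX Runtime execution providers
    providers = ["CUDAExecutionProvider"] if provider == "gpu" else ["CPUExecutionProvider"]
    det_session = ort.InferenceSession(det_model, providers=providers)
    reco_session = ort.InferenceSession(reco_model, providers=providers)
    # A complete predictor would wrap both sessions with docTR's detection and
    # recognition pre-/post-processing; returning the raw sessions keeps the sketch short
    return det_session, reco_session

The real backend would still need the document-splitting, cropping, and decoding steps that ocr_predictor performs around the model calls.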
Motivation, pitch
As a user, after exporting my models to ONNX format, I want to easily load them in docTR; something like the onnx_ocr_predictor example above.