How to use the MODNet demo? #30
Comments
I have not yet figured out how to use MODNet.
@benbro https://drive.google.com/file/d/1Y6B6l72hib9t5ammT-7ya-muBzRKZ7xq/view?usp=sharing
@w-okada does the new model help? I still don't understand how to use it with tfjs.
I tried it, but it is very slow.
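For reference, running a converted MODNet model with tfjs generally looks something like the sketch below. This is not the demo's actual code; the model URL, the 512x512 input size, the NHWC layout, and the [-1, 1] normalization are assumptions about the converted model.

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical path -- point this at the converted MODNet tfjs model.json.
const MODEL_URL = './modnet_tfjs/model.json';
let model: tf.GraphModel;

async function loadModel() {
  model = await tf.loadGraphModel(MODEL_URL);
}

function predictMatte(video: HTMLVideoElement): tf.Tensor {
  return tf.tidy(() => {
    // Grab the current frame, resize to the assumed model input resolution
    // and normalize pixel values to [-1, 1].
    const frame = tf.browser.fromPixels(video).toFloat();
    const resized = tf.image.resizeBilinear(frame, [512, 512]);
    const input = resized.div(127.5).sub(1).expandDims(0); // NHWC: [1, 512, 512, 3]

    // Output is assumed to be a single-channel alpha matte in [0, 1].
    const matte = model.predict(input) as tf.Tensor;
    return matte.squeeze();
  });
}
```

If the converted graph contains control-flow ops, `model.executeAsync(input)` would be needed instead of `predict`.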
Can you please explain how to test the code?
Yes, please wait. The comment above is just a quick report.
I committed the demo. But as I said, it is very heavy. If you have an idea to make it faster, please let me know.
@w-okada thank you for the demo. @PINTO0309 do you have any idea why the worker demo result doesn't look as good as the original demo? Is there something wrong in the model conversion or usage?
@PINTO0309 has created low-resolution models (256x256, 192x192, 128x128). They are much faster than the original one, but the quality is a little lower. Please try them on my demo page. In my environment (with a GTX 1660), …
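To compare the original model with the 256x256 / 192x192 / 128x128 variants, a rough way to measure average per-frame inference time is sketched below; `model` and `input` stand in for the demo's loaded tfjs graph model and a preprocessed frame, not the demo's actual variables.

```typescript
import * as tf from '@tensorflow/tfjs';

// Rough per-frame timing for a loaded tfjs graph model (a sketch).
async function timeInference(model: tf.GraphModel, input: tf.Tensor, runs = 30): Promise<number> {
  // Warm-up run so shader/kernel compilation is not counted.
  const warmup = model.predict(input) as tf.Tensor;
  await warmup.data();
  warmup.dispose();

  const start = performance.now();
  for (let i = 0; i < runs; i++) {
    const out = model.predict(input) as tf.Tensor;
    await out.data(); // force the GPU/WASM work to finish before timing
    out.dispose();
  }
  return (performance.now() - start) / runs; // average ms per frame
}
```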
Thank you for the new models. As you said, they are much faster, but the quality isn't good.
@benbro
@PINTO0309 is there a U^2-Net demo for background matting instead of portrait drawing?
Maybe there is no JavaScript demo. I think it would also be slow, because the portrait drawing model is so slow and the background matting model is basically the same as the portrait one. Can I close this issue?
All models in the MODNet demo give me bad results compared with the original project's demo. Something is probably broken in the model conversion.
Wow! Very interesting!! First of all, I'll download and save the model card!! :P
@benbro
Thank you for adding a demo so fast. I'm getting bad results with this video even with the 256x256 model. Is there any way to improve it? What's the difference between the original model and the pinto_x models? Should I expect better quality or performance?
These models are from https://github.com/PINTO0309/PINTO_model_zoo
In my demo, image resizing is done in WASM with OpenCV. The "Interpolation" option selects which interpolation method is used.
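For reference, a resize with OpenCV.js (the WASM build) and an explicit interpolation flag looks roughly like this; the function and variable names are illustrative, not the demo's actual code.

```typescript
declare const cv: any; // provided globally by opencv.js once the WASM module is loaded

// Resize the contents of a canvas in place; the interpolation flag is what
// the demo's "Interpolation" option would map to.
function resizeFrame(canvas: HTMLCanvasElement, width: number, height: number) {
  const src = cv.imread(canvas);          // RGBA Mat read from the canvas
  const dst = new cv.Mat();
  const size = new cv.Size(width, height);
  // INTER_LINEAR is the usual default; INTER_NEAREST is faster, INTER_CUBIC smoother.
  cv.resize(src, dst, size, 0, 0, cv.INTER_LINEAR);
  cv.imshow(canvas, dst);                 // draw the resized result back
  src.delete();
  dst.delete();
}
```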
Yes. I just forgot to remove it.
If you enable strict, the input image to the model is kept and the output from the model is applied to it.
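If that reading of strict is right, the final composite just takes colours from the original-resolution frame and alpha from the matte upscaled back to frame size. A minimal sketch with illustrative names, not the demo's actual code:

```typescript
// Composite the original frame against the alpha matte. `matte` is assumed to be
// the model output resized to frame size, one byte per pixel in [0, 255].
function applyMatte(frame: ImageData, matte: Uint8ClampedArray): ImageData {
  const out = new ImageData(frame.width, frame.height);
  for (let i = 0, p = 0; i < matte.length; i++, p += 4) {
    out.data[p]     = frame.data[p];     // R from the original frame
    out.data[p + 1] = frame.data[p + 1]; // G
    out.data[p + 2] = frame.data[p + 2]; // B
    out.data[p + 3] = matte[i];          // alpha from the matte
  }
  return out;
}
```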
Thanks!
In addition, I converted Selfie Segmentation to various formats and committed them: TFLite Float32/Float16/INT8, TFJS, TF-TRT, ONNX, CoreML, OpenVINO IR FP32/FP16, and Myriad Inference Blob.
The MODNet demo is missing from the README.
How is the WASM model created? Is there a WebGL version?
I've tried to clone and run the demo locally, but I'm not sure what URL to load in the browser.
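On the WASM vs. WebGL question: with tfjs, the same converted model can run on either backend, switched before the model is loaded. A sketch, assuming the `@tensorflow/tfjs-backend-wasm` package is installed:

```typescript
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the wasm backend

// Pick the backend before loading the model; 'webgl' is usually faster on
// machines with a GPU, 'wasm' is the CPU fallback.
async function initBackend(name: 'webgl' | 'wasm') {
  await tf.setBackend(name);
  await tf.ready();
  console.log('Using backend:', tf.getBackend());
}
```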