Stable Diffusion image-to-image and inpaint using onnx. #552
Conversation
The documentation is not available anymore as the PR was closed or merged.

Think image-to-image is a very nice addition for ONNX. In-paint is a bit hacky at the moment. cc @anton-l
Trying to get this to work. Not sure I understood the instruction: 'one needs to take/copy vae folder from "standard"'. And I'm not sure if I'm using it properly:

```python
from diffusers import StableDiffusionImg2ImgOnnxPipeline

print("starting...")
init_image = Image.open(r"start2.jpg").convert("RGB")
guidance = 7
image = pipe(prompt=prompt, init_image=init_image, strength=0.1, guidance_scale=guidance, num_inference_steps=25).images[0]
#
```
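For reference, the preprocessing that img2img pipelines typically apply to `init_image` before encoding can be sketched in isolation. This is a simplified stand-alone version, not the pipeline's actual internal function; the name `preprocess` and the 32-pixel snapping are illustrative of the usual approach:

```python
import numpy as np
from PIL import Image

def preprocess(image: Image.Image) -> np.ndarray:
    """Resize to multiples of 32, scale to [-1, 1], and move to NCHW layout,
    mirroring what Stable Diffusion img2img pipelines typically do."""
    w, h = image.size
    w, h = (x - x % 32 for x in (w, h))               # snap down to multiples of 32
    image = image.resize((w, h), resample=Image.LANCZOS)
    arr = np.array(image).astype(np.float32) / 255.0  # HWC, values in [0, 1]
    arr = arr[None].transpose(0, 3, 1, 2)             # add batch dim, go to NCHW
    return 2.0 * arr - 1.0                            # rescale to [-1, 1]

# a solid red 513x500 test image gets snapped to 512x480
img = Image.new("RGB", (513, 500), color=(255, 0, 0))
latent_input = preprocess(img)
print(latent_input.shape)  # (1, 3, 480, 512)
```

An array in this shape and range is what the VAE encoder consumes to produce the initial latents.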
@wonderly2, this means that you need to copy
I confirm this works. It just needs to use the onnx vae and add some code to the tests, I guess. Thank you @zledas for doing this. I actually did the exact same thing a day before you, but didn't figure out how to work with the timesteps.
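For anyone else stuck on the timestep handling mentioned above: with strength-based img2img, only the tail end of the schedule is actually denoised. A minimal sketch of that selection logic, simplified from what diffusers-style img2img pipelines generally do (`offset` stands in for the scheduler's steps offset; the function name is illustrative):

```python
def img2img_timestep_window(num_inference_steps: int, strength: float, offset: int = 0):
    """Return (init_timestep, t_start): how many steps' worth of noise to add
    to the encoded init image, and the schedule index where denoising begins.

    strength=1.0 ignores the init image entirely (full schedule);
    strength near 0 keeps it almost unchanged (very few steps run)."""
    # amount of noise to add to the encoded init image
    init_timestep = min(int(num_inference_steps * strength) + offset, num_inference_steps)
    # where in the schedule the denoising loop starts
    t_start = max(num_inference_steps - init_timestep + offset, 0)
    return init_timestep, t_start

# strength=0.1 with 25 steps only runs the last 2 denoising steps
print(img2img_timestep_window(25, 0.1))  # (2, 23)
print(img2img_timestep_window(25, 1.0))  # (25, 0)
```

This is why very low `strength` values return images close to the input: almost the entire schedule is skipped.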
Excellent. I have it working now as well. Thanks @zledas!
This pull request failed the checks. Why has it been left unattended for 20 days?
Very sorry about being so late here! @anton-l could you please take a look?

Hi @zledas, very sorry for the late reply! If you don't mind, I'll add some commits to your PR to make it work again :)
Any updates on this?
After adding the appropriate entry into model_index.json I was able to get |
@zledas I've updated the SD checkpoint to include vae_encoder
now, and brought the pipelines up to date with the latest features from their pytorch counterparts.
Let me know if everything works for you, and if all goes well we can merge it tomorrow! 🚀
@anton-l thanks for looking into this and updating it! I tested the newest commit and I get this error:
I'm using
I can confirm that img2img works like a charm. I simply copied the content of the src folder to my virtualenv/.../diffusers folder, adjusted my scripts to the new pipelines and (for some reason) added vae_encoder to onnx's model_index.json.
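For others hitting the same step: the `vae_encoder` entry added to the ONNX `model_index.json` presumably mirrors the existing `vae_decoder` line. The exact class name depends on the diffusers version you have installed, so treat this fragment as an illustration rather than the canonical file contents:

```json
{
  "vae_decoder": ["diffusers", "OnnxRuntimeModel"],
  "vae_encoder": ["diffusers", "OnnxRuntimeModel"]
}
```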
@zledas looks like it's related to what's discussed here: #791. I'll look out for any developments regarding that bug though, and feel free to open an issue/PR and ping me if you find a potential solution!

@anton-l thanks for the link. Weird that it worked 3 times with the old commit and failed 2 times with the new (while changing commits in-between), but yeah, it looks like a deeper issue. And thanks for cleaning, updating and committing this!
Hello, I found an error in scripts/convert_stable_diffusion_checkpoint_to_onnx.py: we should use OnnxStableDiffusionPipeline instead of StableDiffusionOnnxPipeline.
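Renames like this are typically smoothed over with a deprecation alias so that old scripts keep working while warning the user. A generic sketch of that pattern, with stub classes rather than the actual diffusers definitions:

```python
import warnings

class OnnxStableDiffusionPipeline:
    """Stand-in stub for the renamed pipeline class."""
    def __init__(self, *args, **kwargs):
        self.args = args
        self.kwargs = kwargs

class StableDiffusionOnnxPipeline(OnnxStableDiffusionPipeline):
    """Deprecated alias: warns once on construction, then behaves
    exactly like the new class."""
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "StableDiffusionOnnxPipeline is deprecated, "
            "use OnnxStableDiffusionPipeline instead",
            DeprecationWarning,
        )
        super().__init__(*args, **kwargs)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    pipe = StableDiffusionOnnxPipeline()

print(isinstance(pipe, OnnxStableDiffusionPipeline))  # True
print(caught[0].category is DeprecationWarning)       # True
```

Because the alias subclasses the new name, `isinstance` checks in downstream code continue to pass either way.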
* Stable Diffusion img2img using onnx.
* Stable Diffusion inpaint using onnx.
* Export vae_encoder, upgrade img2img, add test
* updated inpainting pipeline + test
* style

Co-authored-by: anton-l <[email protected]>
I have the same issue. Did this get fixed? I'm using an AMD Radeon RX 570 8GB, so I don't see how I'm lacking resources.
…uggingface#1191)
* Restore compatibility with old ONNX pipeline. I think it broke in huggingface#552.
* Add missing attribute `vae_encoder`
Hi,
I added Stable Diffusion image-to-image and inpaint using onnx. I used the non-onnx versions as "templates" and translated them according to the existing text-to-image onnx pipeline.
The only problem is that this is my first time working with Python, so I was not able to use `vae_encoder` from onnx; these scripts use the `vae` that comes from "standard" Stable Diffusion for encoding. That is why, if one wants to test or use these scripts, one needs to take/copy the `vae` folder from the "standard" version and add it to the "onnx" version's `model_index.json`.
As this is around 2.5x faster on my AMD GPU, I think it might be useful for others as well.
Related issue: #510