Can diffusion models be used for image-to-image translation? #98
Comments
Yes, just concatenate your image to the noised-image input and change the input-channel size.
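A minimal PyTorch sketch of that concatenation idea; the `ConditionalUnet` wrapper and the `unet_cls(in_channels=..., out_channels=...)` signature are illustrative assumptions, not this repo's API:

```python
import torch
import torch.nn as nn

class ConditionalUnet(nn.Module):
    """Concatenates a clean source image to the noised target along the
    channel dimension before the denoising U-Net sees it."""

    def __init__(self, unet_cls, image_channels=3):
        super().__init__()
        # Double the input channels: noised target + source image.
        # `unet_cls` with these keyword arguments is an assumed signature.
        self.unet = unet_cls(in_channels=2 * image_channels,
                             out_channels=image_channels)

    def forward(self, x_noised, t, x_source):
        x = torch.cat([x_noised, x_source], dim=1)  # (B, 2C, H, W)
        return self.unet(x, t)
```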
@lianggaoquan Yeah, what Robert said; I can add it later this week.
That depends. I would say for paired i2i you can do what @robert-graf mentioned. However, if you for example have segmentation maps as one side of the pair, you might be better off adding a SPADE normalization layer into your U-Net instead of attaching the segmentation map as input. For unpaired i2i, I think this current framework most likely will not work, as I can't see how the current training signal would be enough, but maybe I am wrong.
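For reference, a minimal sketch of a SPADE normalization layer (Park et al., 2019), which modulates the normalized features with per-pixel scale and shift predicted from the segmentation map; the hidden width and where you splice it into the U-Net blocks are up to you:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Spatially-adaptive normalization: the segmentation map predicts a
    per-pixel scale and shift for the normalized features instead of
    being concatenated to the network input."""

    def __init__(self, feature_channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(feature_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        # Resize the (one-hot) segmentation map to the feature resolution.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode='nearest')
        h = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)
```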
Hi, any update on paired image translation in the repo?
@robert-graf Where exactly should I perform the concatenation operation? Could you please give more details? I tried to do it at the very beginning of the U-Net forward, but it did not work.
@huseyin-karaca I did it before the forward call of the U-Net and only updated the input size of the first Conv-Block.

```python
# Conditional p(x_0 | y) -> p(x_0) * p(y | x_0) --> just concatenate y to the input
if x_conditional is not None and self.opt.conditional:
    x = torch.cat([x, x_conditional], dim=1)
```

Here is the rest for context: my image2image code is under /img2img2D/diffusion.py. I hope lucidrains is fine with me linking my code here. If you are looking for the referenced paper, the preprint is coming out on Tuesday.
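To make the surrounding training step concrete, here is a sketch of a paired epsilon-prediction DDPM loss with this conditioning; the `p_losses` helper and argument names are assumptions, with `alphas_cumprod` being the usual cumulative schedule of shape `(T,)`:

```python
import torch
import torch.nn.functional as F

def p_losses(model, x0, x_source, alphas_cumprod):
    """One paired-i2i training step: noise the target, predict the noise,
    with the source image passed through for concatenation inside `model`."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(b, 1, 1, 1)
    # q(x_t | x_0) = sqrt(a_bar) * x_0 + sqrt(1 - a_bar) * noise
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    pred_noise = model(x_t, t, x_source)
    return F.mse_loss(pred_noise, noise)
```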
@robert-graf Thank you for your kind reply!
Hi, so to do i2i using this repo, is it okay to use the Unet's self_condition=True, or do we have to do the cat manually and change the code elsewhere?
@heitorrapela You would have to manually change the code in this repo to achieve i2i. By the way, diffusion models often achieve better i2i results when starting from a pre-trained model, so maybe you could take a look at HuggingFace's diffusers: https://github.com/huggingface/diffusers
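For a lighter-weight diffusers route than SD, one possible sketch uses a small pixel-space `UNet2DModel` with doubled input channels for the concatenated source image; the sizes and block choices here are arbitrary:

```python
import torch
import torch.nn.functional as F
from diffusers import UNet2DModel, DDPMScheduler

# Small pixel-space U-Net; in_channels doubled for the concatenated source.
model = UNet2DModel(
    sample_size=64,
    in_channels=6,   # 3 noised-target + 3 source channels
    out_channels=3,
    block_out_channels=(64, 128, 256),
    down_block_types=("DownBlock2D", "DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D", "UpBlock2D"),
)
scheduler = DDPMScheduler(num_train_timesteps=1000)

def train_step(x_target, x_source):
    noise = torch.randn_like(x_target)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (x_target.shape[0],), device=x_target.device)
    x_t = scheduler.add_noise(x_target, noise, t)
    pred = model(torch.cat([x_t, x_source], dim=1), t).sample
    return F.mse_loss(pred, noise)
```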
@FireWallDragonDarkFluid, thanks for the response. I was trying with self_condition, but yes, it was not what I wanted, and in the end it was still adding artifacts to the translation process. I will see if I can implement it myself with this library or with diffusers. With diffusers I have only tried simple things so far, and I still need to train, so I must investigate. Due to my task restrictions, I also cannot use a heavy model such as SD.
I did a quick implementation, but I am not 100% sure about it; I am training some models with it now. Here are my modifications if anyone wants to try them as well:
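(The modifications themselves were not captured above.) As a purely hypothetical stand-in, here is what the matching conditional sampling side could look like, complementing the training step sketched earlier; `alphas`, `alphas_cumprod`, and `betas` are the standard DDPM schedule tensors:

```python
import torch

@torch.no_grad()
def sample(model, x_source, alphas, alphas_cumprod, betas):
    """Reverse DDPM loop; the source image is passed to `model` at every
    step so it can be concatenated to the current noisy sample."""
    x = torch.randn_like(x_source)
    for t in reversed(range(len(betas))):
        tt = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = model(x, tt, x_source)
        a, a_bar, b = alphas[t], alphas_cumprod[t], betas[t]
        # Posterior mean: (x - beta / sqrt(1 - a_bar) * eps) / sqrt(alpha)
        mean = (x - b / (1 - a_bar).sqrt() * eps) / a.sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + b.sqrt() * noise  # sigma_t^2 = beta_t variance choice
    return x
```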