Pretrained stylegan2 models #9
You can download the official StyleGAN2 weights from here: https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/ The official StyleGAN2 weights are in TensorFlow, so I converted the weights into PyTorch weights using this script: https://github.com/rosinality/stylegan2-pytorch/blob/master/convert_weight.py Hope this helps!
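A rough sketch of the steps above (the checkpoint filename, paths, and the `--repo` flag pointing at the official TensorFlow code are assumptions based on the linked conversion script; this requires a TensorFlow environment and network access, so adjust for your setup):

```shell
# Illustrative commands only -- paths and flags may differ in your environment.
# convert_weight.py needs the official StyleGAN2 TF code (via --repo)
# to unpickle the .pkl checkpoint before emitting a PyTorch .pt file.
git clone https://github.com/NVlabs/stylegan2            # official TF code
git clone https://github.com/rosinality/stylegan2-pytorch
wget https://nvlabs-fi-cdn.nvidia.com/stylegan2/networks/stylegan2-car-config-f.pkl
cd stylegan2-pytorch
python convert_weight.py --repo ../stylegan2 ../stylegan2-car-config-f.pkl
```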
Thank you very much! I converted the TensorFlow weights into PyTorch weights on a GTX 2060. Thanks!!! And how could I convert/project a user sketch into fixed_z (generate.py expects a .pth file)?
We first provide certain vehicle sketches with similar poses to train a customized vehicle SketchGAN model, and in the generation stage we must feed a random z, or a real projected image, rather than a user sketch into this customized GAN model to generate real car images with poses similar to the input training sketches. Do different vehicle poses correspond to different customized GAN models? @PeterWang512
At inference time, the new model takes in the latent vector, which can be sampled from a Gaussian or projected from an image. Yes, different vehicle poses correspond to different customized GANs if the training sketches consist of just one pose. We haven't tried training on sketches with multiple poses yet, but it is possible that, for example, you could get a sports-car model by using sports-car sketches of different poses as training inputs. Hope this helps!
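A minimal sketch of the Gaussian-sampling option described above, saving a fixed_z .pth file of the kind generate.py asks for (the sample count of 8 and the latent dimension of 512, StyleGAN2's default, are assumptions):

```python
import torch

# Assumed shapes: 8 samples, StyleGAN2's default 512-d latent space.
n_samples, z_dim = 8, 512

# z ~ N(0, I): each row is one latent vector sampled from a standard Gaussian.
fixed_z = torch.randn(n_samples, z_dim)

# Save in the .pth format that generate.py is described as expecting.
torch.save(fixed_z, "fixed_z.pth")
```

Loading it back with `torch.load("fixed_z.pth")` returns the same `(8, 512)` tensor.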
For projection, we use pix2latent to get the latent z vector. The repo is here: https://github.com/minyoungg/pix2latent
I really appreciate your help!
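The idea behind projection can be sketched as optimizing z so the generator's output matches a target image; pix2latent implements this properly, so the toy version below is not its API. The stand-in linear "generator" and all names here are hypothetical, used only so the sketch runs anywhere:

```python
import torch

torch.manual_seed(0)

# Stand-in "generator": 512-d latent -> flattened 3x16x16 image.
# A real setup would use a pretrained StyleGAN2 generator instead.
G = torch.nn.Linear(512, 3 * 16 * 16)
target = torch.randn(3 * 16 * 16)  # stand-in for the image to project

# Optimize z by gradient descent on the reconstruction loss.
z = torch.randn(512, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
losses = []
for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(G(z), target)
    loss.backward()
    opt.step()
    losses.append(loss.item())

# The reconstruction loss should drop as z converges toward a projection.
```

pix2latent adds hybrid optimizers and transformation search on top of this basic loop, which is why it projects real images far more reliably than plain gradient descent.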
Can you please tell us, if we have our own model trained in StyleGAN2, how we can generate the netG and netD files?
@qingqingisrunning Can you tell me how to put my sketch into the trained model? I have completed training, but I don't know how to generate images from a new sketch.
Thanks for sharing this repo. I would like to train my own SketchGAN for generating pictures of vehicles. Could you provide the PyTorch pretrained StyleGAN2 car (and train, bus, etc.) netD and netG models? Thanks.