(hairstyle) C:\Users\prabhas\Desktop\Style-Your-Hair>python main.py --input_dir ./ffhq_image/ --im_path1 source.png --im_path2 target.png --output_dir ./style_your_hair_output/ --warp_loss_with_prev_list delta_w style_hair_slic_large --save_all --version final --flip_check
Loading StyleGAN2 from checkpoint: pretrained_models/ffhq.pt
torch.Size([512])
Setting up Perceptual loss...
Loading model from: C:\Users\prabhas\Desktop\Style-Your-Hair\losses\lpips\weights\v0.1\vgg.pth
...[net-lin [vgg]] initialized
...Done
flip is better, kp_diff : 29.72156524658203 > kp_diff_flip : 19.58245277404785
Number of images: 2
Images: 0%| | 0/2 [00:00<?, ?it/s]source
Images: 50%|█████████████████████████████████████████████████████████████▌ | 1/2 [04:48<04:48, 288.43s/it]target_flip
Images: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [09:37<00:00, 288.89s/it]
Number of images: 2
Images: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [02:01<00:00, 60.58s/it]
Loading StyleGAN2 from checkpoint: pretrained_models/ffhq.pt
torch.Size([512])
Setting up Perceptual loss...
Loading model from: C:\Users\prabhas\Desktop\Style-Your-Hair\losses\lpips\weights\v0.1\vgg.pth
...[net-lin [vgg]] initialized
...Done
Setting up Perceptual loss...
Loading model from: C:\Users\prabhas\Desktop\Style-Your-Hair\losses\masked_lpips\weights\v0.1\vgg.pth
...[net-lin [vgg]] initialized
...Done
Warp Target Step 1: 0%| | 0/100 [00:00<?, ?it/s]C:\Users\prabhas\anaconda3\envs\hairstyle\lib\site-packages\torch\nn\functional.py:3500: UserWarning: The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
"The default behavior for interpolate/upsample with float scale_factor changed "
Traceback (most recent call last):
File "main.py", line 162, in <module>
main(args)
File "main.py", line 72, in main
align.align_images(im_path1, im_path2, sign=args.sign, align_more_region=False, smooth=args.smooth)
File "C:\Users\prabhas\Desktop\Style-Your-Hair\models\Alignment.py", line 290, in align_images
save_intermediate=save_intermediate, is_downsampled = is_downsampled)
File "C:\Users\prabhas\Desktop\Style-Your-Hair\models\Alignment.py", line 158, in create_target_segmentation_mask
im2, warped_latent_2, warped_down_seg = self.warp_target(img_path2, src_kp_hm, None, img_path1) # Warping !!
File "C:\Users\prabhas\Desktop\Style-Your-Hair\models\Alignment.py", line 566, in warp_target
latent_in, warped_down_seg = self.optimize_warping(pbar, optimizer_warp_w, latent_W_optimized, latent_F_optimized, mode, is_downsampled, src_kp_hm, im_name_1, im_name_2, cur_check_dir, img_path1, img_path2)
File "C:\Users\prabhas\Desktop\Style-Your-Hair\models\Alignment.py", line 782, in optimize_warping
loss.backward()
File "C:\Users\prabhas\anaconda3\envs\hairstyle\lib\site-packages\torch\tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "C:\Users\prabhas\anaconda3\envs\hairstyle\lib\site-packages\torch\autograd\__init__.py", line 147, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: no valid convolution algorithms available in CuDNN
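This `RuntimeError` from cuDNN most often means the GPU ran out of free memory while cuDNN was searching for a convolution algorithm, or that the installed cuDNN/CUDA build does not match the PyTorch wheel. A minimal sketch of common workarounds, assuming the error is memory- or autotuner-related (these are general PyTorch toggles, not part of the Style-Your-Hair code; place them near the top of `main.py` before any models are loaded):

```python
import torch

# Workaround 1: disable the cuDNN autotuner so PyTorch does not benchmark
# multiple convolution algorithms (the benchmarking step itself can exhaust
# GPU memory on cards with limited VRAM).
torch.backends.cudnn.benchmark = False

# Workaround 2: if the error persists, bypass cuDNN entirely and fall back
# to PyTorch's native convolution kernels. This is slower but sidesteps the
# cuDNN algorithm search that is failing here.
torch.backends.cudnn.enabled = False
```

If neither toggle helps, reducing GPU memory pressure (closing other CUDA processes, or running at a lower resolution if the repository supports it) is the usual next step, since "no valid convolution algorithms" frequently surfaces in place of a plain out-of-memory error.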