Where to download the trained model 'weights.26-1.07.h5' #29

Open
YingqianWang opened this issue Apr 19, 2019 · 10 comments

@YingqianWang

In Notebook 5, the model is loaded with:
model.load(r"C:\Users\Mathias Felix Gruber\Documents\GitHub\PConv-Keras\data\logs\imagenet_phase2\weights.26-1.07.h5", train_bn=False)

But where can I download the trained model? I only want to run the test and do not want to perform training.
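
For reference, a minimal inference-only sketch built from the calls quoted in this thread; the import path is an assumption about the repo layout, and the weight path is a placeholder for wherever the downloaded .h5 file is stored:

```python
# Minimal loading sketch based on the calls shown in this thread.
from libs.pconv_model import PConvUnet  # assumed module path in PConv-Keras

model = PConvUnet(vgg_weights='./data/logs/pytorch_vgg16.h5')
model.load(
    "./data/logs/imagenet_phase2/weights.26-1.07.h5",  # placeholder path to downloaded weights
    train_bn=False,  # keep batch-norm layers frozen for inference
)
```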

@ttxxr commented Apr 19, 2019

> In Notebook 5, the model is loaded with:
> model.load(r"C:\Users\Mathias Felix Gruber\Documents\GitHub\PConv-Keras\data\logs\imagenet_phase2\weights.26-1.07.h5", train_bn=False)
>
> But where can I download the trained model? I only want to run the test and do not want to perform training.

Have you got the pre-trained model or weights? I used the pconv_imagenet model for testing, but I got a bad prediction.

@ghost commented Apr 21, 2019

You can find the link to download the weights in the README:

> Pre-trained weights
>
> I've ported the VGG16 weights from PyTorch to Keras; this means the 1/255 pixel scaling can be used for the VGG16 network, just as in PyTorch.
>
> Ported VGG 16 weights
> PConv on Imagenet
> PConv on Places2 [needs training]
> PConv on CelebaHQ [needs training]
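
A rough sketch of what that 1/255 scaling implies for test inputs. The 512x512 size, the mask convention (1 = valid pixel, 0 = hole), the import path, and the predict() signature are assumptions here, not something confirmed in this thread:

```python
# Illustrative input preparation, assuming 512x512 inputs scaled by 1/255
# and a mask where 1 marks valid pixels and 0 marks the holes to inpaint.
import numpy as np
from PIL import Image
from libs.pconv_model import PConvUnet  # assumed module path in PConv-Keras

model = PConvUnet(vgg_weights='./data/logs/pytorch_vgg16.h5')
model.load("./data/logs/imagenet_phase2/weights.26-1.07.h5", train_bn=False)

img = np.array(Image.open("example.jpg").resize((512, 512))) / 255.0
mask = np.ones(img.shape, dtype=np.float32)
mask[100:200, 150:300, :] = 0        # hole region to be inpainted

masked_img = np.copy(img)
masked_img[mask == 0] = 1.0          # white out the holes before prediction

# Assumed signature: a batched [masked image, mask] pair.
pred = model.predict([np.expand_dims(masked_img, 0), np.expand_dims(mask, 0)])
```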

@ttxxr commented Apr 23, 2019

> You can find the link to download the weights in the README: [...]

Thanks, I have tried testing with the pre-trained weights, but I got a bad prediction. Is there any specific requirement for the input?

@leeqiaogithub

> Thanks, I have tried testing with the pre-trained weights, but I got a bad prediction. Is there any specific requirement for the input?

Hello, when I tried to test with the pre-trained weights (pconv_imagenet.h5), I got an error from this code:

model = PConvUnet()
model.load('./data/model/pconv_imagenet.h5')

ValueError: Layer #0 (named "p_conv2d_17" in the current model) was found to correspond to layer p_conv2d_49 in the save file. However, the new layer p_conv2d_17 expects 3 weights, but the saved weights have 2 elements.

It seems like the pre-trained model and PConvUnet() have different structures, but I am not sure. Can you help me figure it out? Thanks.

@ykeremy commented May 16, 2020

> Hello, when I tried to test with the pre-trained weights (pconv_imagenet.h5), I got an error from this code: [...]
> ValueError: Layer #0 (named "p_conv2d_17" in the current model) was found to correspond to layer p_conv2d_49 in the save file. [...]

The possible cause could be the TF version. Which version are you currently using?

@Kirstihly

> Hello, when I tried to test with the pre-trained weights (pconv_imagenet.h5), I got an error from this code: [...]
> ValueError: Layer #0 (named "p_conv2d_17" in the current model) was found to correspond to layer p_conv2d_49 in the save file. [...]

Based on @mrkeremyilmaz's reply, I tried pip install -r requirements.txt with all the versions specified by the author, and the problem no longer exists.
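
One way to check whether the saved weights and the freshly built network really differ, before or after pinning the versions, is to list the layer names on both sides. This is purely illustrative: the .h5 group layout and the .model attribute used below are assumptions about this particular repo.

```python
# Illustrative diagnostic: compare layer names stored in the weight file
# with the layers of a freshly built PConvUnet.
import h5py
from libs.pconv_model import PConvUnet  # assumed module path in PConv-Keras

with h5py.File('./data/model/pconv_imagenet.h5', 'r') as f:
    group = f['model_weights'] if 'model_weights' in f else f
    print('Layers in weight file:', list(group.keys())[:8], '...')

net = PConvUnet()
print('Layers in current model:', [layer.name for layer in net.model.layers[:8]], '...')
```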

@sariva03

> You can find the link to download the weights in the README: [...]

Hello, I have tried to train the model with pconv_imagenet.h5 but got this error:

ValueError                                Traceback (most recent call last)
in ()
      1 # Instantiate the model
      2 model = PConvUnet(vgg_weights='./data/logs/pytorch_vgg16.h5')
----> 3 model.load("/content/gdrive/MyDrive/Partial_Conv/pconv_imagenet.h5", train_bn=False)

in load(self, filepath, train_bn, lr)
    238
    239     # Load weights into model
--> 240     epoch = int(os.path.basename(filepath).split('.')[1].split('-')[0])
    241     assert epoch > 0, "Could not parse weight file. Should include the epoch"
    242     self.current_epoch = epoch

ValueError: invalid literal for int() with base 10: 'h5'

Is there any way to correct this error?

@marza1993

> Hello, I have tried to train the model with pconv_imagenet.h5 but got this error: [...]
> ValueError: invalid literal for int() with base 10: 'h5'
> Is there any way to correct this error?

It seems that the file name of the saved weights must include the epoch number, i.e. it must be of the form "<prefix>.<epoch>-<loss>.h5", such as "weights.26-1.07.h5".
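
This matches the parsing line in the traceback above: load() takes whatever sits between the first '.' and the next '-' in the file name and casts it to int. A quick sketch (the rename suggestion at the end is only an illustration, not something from the repo's docs):

```python
# Reproduces the epoch-parsing logic shown in the traceback above.
import os

def parse_epoch(filepath):
    return int(os.path.basename(filepath).split('.')[1].split('-')[0])

print(parse_epoch("weights.26-1.07.h5"))   # -> 26
# parse_epoch("pconv_imagenet.h5")         # ValueError: invalid literal for int() with base 10: 'h5'

# Illustrative workaround: rename the downloaded file so it carries a dummy
# epoch, e.g. "pconv_imagenet.h5" -> "weights.50-0.00.h5", before calling load().
```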

@theWaySoFar-arch

> Thanks, I have tried testing with the pre-trained weights, but I got a bad prediction. Is there any specific requirement for the input?

My situation is the same as yours. Have you found a solution?

@hariouat

> My situation is the same as yours. Have you found a solution?

Hello, I have the same problem. Did you find a solution? Thanks.
