multilabel configuration #194
modify to see this ...
Thank you sxjzwq! I followed your comments and it works. However, I met another error: Segmentation fault (core dumped). Is there any way to fix this? Thanks a lot!
I guess it is caused by the input data. What is the size of your input? For example, if it is 224*224*3 and some images in your data are smaller than 224, you will hit this problem. You should resize your images when applying im2rec. Check the im2rec help and you will find those parameters.
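As a side note, the undersized images described above can be caught before packing the .rec file. Below is a minimal sketch using Pillow; the function name and paths are illustrative and not part of cxxnet or im2rec:

```python
from PIL import Image

def undersized(paths, min_w, min_h):
    """Return the paths whose image is smaller than the network input."""
    bad = []
    for p in paths:
        w, h = Image.open(p).size  # Pillow reads the header lazily, so this is cheap
        if w < min_w or h < min_h:
            bad.append(p)
    return bad
```

Running this over the files listed in your image list before calling im2rec tells you exactly which images need resizing.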
Hi sxjzwq,
I am not sure. Maybe you should check the format of your image list and re-generate the .rec file using the parameter resize=512. I only met this error when I included a subset of my data. After checking the subset I found that some images were smaller than my network input shape, so I resized them and the error was gone. But there might be other reasons in your case. Please check the input carefully. Good luck!
We will carefully check the input. Thanks a million!
You're welcome! Please let me know your multi-label classification performance if it works. I am also training a multi-label classification network, but it seems that my network parameters do not converge.
Sure! We are trying some simple settings to see what happens. We will let you know the performance if a setting works! Thank you!
Hi Qi, I think it would be good to have an example in cxxnet. |
Hi |
Hi sxjzwq, thank you so much for your suggestion. We tried both l2 and softmax as the loss function. We will definitely try your suggestion and let you know if there is an improvement. Thanks again!
start from vgg16.model
round 0: [2466] 11686 sec elapsed [1] train-logloss:0.092616 train-rmse:6.30993
start from 0006.model
round 6: [2466] 11681 sec elapsed [7] train-logloss:-nan train-rmse:5.33734
start from 0012.model
round 12: [2466] 11686 sec elapsed [13] train-logloss:-nan train-rmse:4.60376
start from 0018.model
round 18: [2466] 11674 sec elapsed [19] train-logloss:-nan train-rmse:3.93583
Using RMSE metric will be helpful.
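The `train-logloss:-nan` seen in logs like the one above is usually the result of evaluating log(0) when a prediction saturates at exactly 0 or 1; clipping the predictions keeps the metric finite. A minimal numpy sketch with illustrative names (this is not cxxnet's implementation):

```python
import numpy as np

def multi_logloss(y_true, y_pred, eps=1e-15):
    """Multi-label logistic loss, averaged over all (sample, label) entries."""
    # Clip predictions away from exactly 0 and 1 so log() never blows up.
    p = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
```

Without the clip, a confident-but-correct prediction of exactly 1.0 for a positive label already produces `0 * log(0) = nan` on the negative term, which then poisons the mean for the whole round.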
Hi Qi, |
Hi Yashu,
Yes, I don't know how to avoid the NaN problem when using the logloss evaluation metric, but the RMSE metric seems to work fine. I finally got train-rmse 1.32312 on my data, and my multi-label classification mAP is above 0.7, much better than using fc7 features + multi-label SVM. I hope this information is helpful.
Best
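For reference, multi-label mAP as reported above is typically computed by ranking the samples per label, taking the average precision of each label, and averaging over labels. A minimal numpy sketch (illustrative names, not cxxnet's evaluator):

```python
import numpy as np

def average_precision(y_true, scores):
    """AP for one label: precision averaged at the rank of each positive."""
    order = np.argsort(-scores)          # sort samples by descending score
    y = np.asarray(y_true)[order]
    hits = np.cumsum(y)                  # positives found up to each rank
    precisions = hits / (np.arange(len(y)) + 1)
    return precisions[y == 1].mean()

def multilabel_map(Y, S):
    """Mean AP over labels; Y and S are (n_samples, n_labels) arrays."""
    return np.mean([average_precision(Y[:, j], S[:, j]) for j in range(Y.shape[1])])
```

A perfect ranking (all positives scored above all negatives) yields AP 1.0 per label, so mAP 1.0 overall.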
Hi Qi,
We've changed to the RMSE metric. However, the speed seems extremely slow.
Thank you very much,
Hi Yashu,
I am using the pre-trained VGGNet16 (trained on ImageNet, of course) as the initial model, and then fine-tune the last FC layer (fc7) and the classification layer (changing 1000 to 256, which is my label width). Also, I changed the loss layer from softmax to multi_logistic. For all the other layers, I keep the learning rate at 0, so those parameters stay fixed as in VGGNet. I start training with learning rate = 0.001 and decrease it when the train-RMSE error stops decreasing. I trained only 36 rounds; because my learning rate had reached 0.000001, I stopped the training. The following is my training log:
start from vgg16.model
round 0: [2466] 11686 sec elapsed [1] train-logloss:0.092616 train-rmse:6.30993
start from 0006.model
round 6: [2466] 11681 sec elapsed [7] train-logloss:-nan train-rmse:5.33734
start from 0012.model
round 12: [2466] 11686 sec elapsed [13] train-logloss:-nan train-rmse:4.60376
start from 0018.model
round 18: [2466] 11674 sec elapsed [19] train-logloss:-nan train-rmse:3.93583
start from 0024.model
round 24: [2466] 11671 sec elapsed [25] train-logloss:-nan train-rmse:3.27728
start from 0030.model
round 30: [2466] 11675 sec elapsed [31] train-logloss:-nan train-rmse:2.81689
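The schedule described above (drop the learning rate whenever train-RMSE stops improving) can be sketched as a simple plateau rule; the function name and default values here are illustrative, not part of cxxnet:

```python
def step_lr(lr, history, factor=0.1, patience=3):
    """Multiply lr by `factor` once the metric (e.g. train-rmse)
    has not improved over the last `patience` rounds."""
    if len(history) > patience and min(history[-patience:]) >= min(history[:-patience]):
        return lr * factor
    return lr
```

Starting from 0.001 and applying a 0.1 factor three times reaches the 0.000001 stopping point mentioned above.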
Hi Qi,
fine-tuning, so we decided to train the net directly. However, the toolbox
Best Regards,
Hi,
We are trying to learn to use cxxnet for a multi-label problem.
We made the following settings:
Thanks a lot,
YS