
I got an enormous loss function #15

Open
oneOfThePeople opened this issue Jan 26, 2017 · 7 comments

Comments

@oneOfThePeople

Hi, I ran DeconvNetPipeline.py and during training I got output like this:

2017-01-26 15:01:00.156146: step 85, loss = 39709622646341632.00 (3.6 examples/sec; 2.809 sec/batch)
2017-01-26 15:01:03.019399: step 86, loss = 34950307108618240.00 (3.5 examples/sec; 2.863 sec/batch)
2017-01-26 15:01:05.870122: step 87, loss = 37934860555255808.00 (3.5 examples/sec; 2.851 sec/batch)

Is this an error?
It also breaks in the middle because of the line:
except tf.errors.OutOfRangeError:
What does that mean?
Thank you

@oneOfThePeople
Author

Maybe it's because you don't have the normalization from DeconvNet.py?
ground_truth = (ground_truth / 255) * 20

@AngusG
Collaborator

AngusG commented Feb 1, 2017

Note that this normalization does occur in read_and_decode in https://github.com/fabianbormann/Tensorflow-DeconvNet-Segmentation/blob/master/utils.py. However, you made me aware of another issue I have been dealing with elsewhere, which is that I don't believe there is a great way to handle void labels (such as 255 in PASCAL VOC) yet in TensorFlow.

@fabianbormann When implementing the pipeline I basically maintained the original functionality of DeconvNet.py, but now that I'm looking at this again, it doesn't make sense to normalize in this way. This was probably from an initial attempt to make the model run without errors, but it squashes all the labels into the range 0-20, when in fact they should be left as plain integers 0-20, with one void label at 255.
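For illustration (a NumPy sketch, not code from the repository): applying `ground_truth / 255 * 20` to integer class IDs and then casting back to integers, as sparse labels require, collapses all 21 real classes into just two, while the void label 255 lands exactly on class 20:

```python
import numpy as np

# PASCAL VOC class IDs 0-20, plus the void label 255.
labels = np.array(list(range(21)) + [255])

# The normalization from DeconvNet.py, followed by the integer
# cast that sparse cross-entropy labels require.
squashed = (labels / 255 * 20).astype(np.int64)

print(np.unique(squashed[:-1]))  # [0 1] -- the 21 real classes collapse to two
print(squashed[-1])              # 20 -- the void label becomes a "real" class
```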

What's really needed is for sparse_softmax_cross_entropy_with_logits to support ignore labels the way the Caffe SoftmaxWithLossLayer used for the original FCN does. Note that as-is, if you leave the void label in, sparse_softmax_cross_entropy_with_logits will complain that the shape of the logits doesn't match the shape of the labels.

I am exploring different ways of dealing with this since I am publishing my own dataset, in some of which I also use void labels. The easiest solution for now if you just want to see this model run properly is to use data without void labels. I could upload some of my own TFRecords that don't use void labels to my fork if you want.
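One common workaround, sketched here in NumPy rather than TensorFlow (the helper name masked_sparse_cross_entropy is hypothetical, not part of this repository), is to mask out the void pixels before averaging the cross-entropy; in TensorFlow the same masking can be done with tf.boolean_mask on the flattened labels and logits before calling sparse_softmax_cross_entropy_with_logits:

```python
import numpy as np

def masked_sparse_cross_entropy(logits, labels, void_label=255):
    """Mean softmax cross-entropy over pixels whose label is not void.

    logits: float array of shape (num_pixels, num_classes)
    labels: int array of shape (num_pixels,)
    """
    valid = labels != void_label            # boolean mask of non-void pixels
    logits, labels = logits[valid], labels[valid]
    # Numerically stable log-softmax.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Four pixels, three classes; the last pixel is void and is ignored.
logits = np.log(np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.8, 0.1],
                          [0.2, 0.2, 0.6],
                          [0.9, 0.05, 0.05]]))
labels = np.array([0, 1, 2, 255])
loss = masked_sparse_cross_entropy(logits, labels)
print(loss)  # mean of -log(0.7), -log(0.8), -log(0.6) over the 3 valid pixels
```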

@oneOfThePeople
Author

There is no need, thank you.
I want to try this on my own dataset, and the step before that is to run your code properly.
My dataset has binary classes, so if I understand correctly I just need to create a {0,1} image (of size 1xhxw) and not ignore any of the labels?
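For a binary dataset that setup can be sketched like this (a hypothetical NumPy example, assuming the foreground is stored as 255 in the raw mask): threshold the ground truth into {0,1} integer class IDs, one per pixel, with nothing ignored:

```python
import numpy as np

# Hypothetical grayscale ground-truth mask where foreground is stored as 255.
raw_mask = np.array([[0, 255],
                     [255, 0]], dtype=np.uint8)

# Binary integer labels {0, 1}, one class ID per pixel, no void label.
ground_truth = (raw_mask > 0).astype(np.int32)

print(ground_truth)  # [[0 1]
                     #  [1 0]]
```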

@zlpsophina

@oneOfThePeople I got the same result and error as you, for example: step 0 finished in 54.10 s with loss of 34369567400656896.0000. Did you solve this problem?
I also ran this on my own dataset, which has binary classes. Also, when I try to print the accuracy of the result alongside the loss in the same way, like:
print('step {} finished in {:.2f} s with accuracy of {:.6f}'.format(
    i, time.time() - start, self.accuracy.eval(session=self.session, feed_dict={self.x: [image], self.y: [ground_truth_pred]})))
the following error appears: ValueError: setting an array element with a sequence.
I have tried to deal with these problems for a long time without success. If you have any advice, please let me know; I would greatly appreciate your help.
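For what it's worth, that ValueError is typically raised when NumPy is asked to build one rectangular array out of elements with mismatched shapes, e.g. a feed_dict value whose entries don't all have the same dimensions (an assumption about this case, not verified against the code above). A minimal reproduction:

```python
import numpy as np

# Two "images" with mismatched shapes cannot form one rectangular batch.
a = np.zeros((2, 2), dtype=np.float32)
b = np.zeros((3, 3), dtype=np.float32)

try:
    batch = np.array([a, b], dtype=np.float32)
except ValueError as err:
    batch = None
    print(err)  # "setting an array element with a sequence" (exact wording varies by NumPy version)
```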

@oneOfThePeople
Author

Hi,
I'm sorry, it was two years ago so I don't remember a lot.
I think (but am not sure) that my solution was to cut the dataset so that my classes would be more balanced, but maybe that was the solution to something else.
Sorry I can't help more.
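As a quick way to check whether a dataset suffers from the class imbalance described above (a hedged NumPy sketch with made-up data, not code from the thread): count the labeled pixels per class across all ground-truth masks before deciding what to cut:

```python
import numpy as np

# Hypothetical stack of 10 binary 8x8 ground-truth masks; class 1 is rare.
masks = np.zeros((10, 8, 8), dtype=np.int64)
masks[0, :2, :2] = 1  # only one image has a few foreground pixels

counts = np.bincount(masks.ravel(), minlength=2)
frequencies = counts / counts.sum()
print(counts)       # [636   4] -- per-class pixel counts
print(frequencies)  # class 0 dominates: a strong imbalance signal
```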

@zlpsophina

@oneOfThePeople It doesn't matter, thank you for your response. I am a student and a novice in the CNN field, so I always run into a lot of problems, but I try to deal with them. In any case, I am grateful for your response.

@fabianbormann
Owner

Please try the latest version, e4d59e9
