Mapping normalized images to real images #3
Thanks for the clarification. May I ask how precisely you define unseen inputs? Are they in the original MNIST format or normalized images? If the latter, are they guaranteed to map back to images in MNIST format that are also unseen?
…On Wed, Sep 19, 2018, 13:04 Augustus Odena ***@***.***> wrote:
Hi, I'm not totally sure I understand your question, but I will try to
answer and you can tell me if it was helpful:
In general the MNIST digits are integers in the range [0,255].
When you train vision models on them, generally you cast those ints to
floats and normalize the floats to live in [-1,1].
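As a minimal sketch of the normalization described above (the exact scaling constants are an assumption; tensorfuzz's own preprocessing may differ slightly):

```python
import numpy as np

# MNIST pixels are uint8 integers in [0, 255].
digits = np.array([0, 128, 255], dtype=np.uint8)

# Cast to float and rescale so values live in [-1, 1].
normalized = digits.astype(np.float32) / 127.5 - 1.0

# The inverse mapping recovers the integers only when the floats still
# lie exactly on the 256-point grid induced by the original pixels.
recovered = np.round((normalized + 1.0) * 127.5).astype(np.uint8)
```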
If you find a disagreement on a particular input and that input uses more
precision than the original MNIST dataset has, then technically you will not
be able to map that input back to any original MNIST digit, but that's ok
because:
a) that was never really the goal, we are concerned about the accuracy of
the quantized model under unseen inputs, which may come with more precision
b) the precision may go away when you feed the input through a quantized
model anyway, depending on the implementation
c) we already checked that no disagreements were found on the test set for
our example model.
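Point (a) can be made concrete with a quick check (an illustrative sketch, not code from the repository): a fuzzer-mutated input may carry precision that no integer-valued MNIST image has.

```python
import numpy as np

# A mutated normalized input with extra precision in its second entry.
mutated = np.array([-0.5, 0.1234567], dtype=np.float32)

# Map back toward pixel space; an exact MNIST image would yield integers.
back = (mutated + 1.0) * 127.5

# False here: -0.5 maps to 63.75 and 0.1234567 also misses the grid,
# so this input corresponds to no uint8 MNIST image.
on_grid = bool(np.allclose(back, np.round(back), atol=1e-6))
```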
The check you found is for a different reason:
The classifier may give outputs that differ between the original and the
quantized version simply due to stochasticity in e.g. the tensorflow matrix
multiply implementation.
Thus, when we find a disagreement, we check that it persists across
multiple tries of the same inference.
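That persistence check might be sketched roughly as follows (the function name, signature, and default try count are illustrative assumptions, not tensorfuzz's actual API):

```python
# Hypothetical helper: a disagreement between the full-precision and
# quantized classifiers only counts if it reproduces on repeated runs of
# the same inference, which filters out one-off nondeterminism in e.g.
# the matrix-multiply implementation.
def disagreement_persists(predict_full, predict_quantized, x, tries=5):
    """Return True only if every repeated run still disagrees."""
    return all(predict_full(x) != predict_quantized(x) for _ in range(tries))
```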
Hi tensorfuzz developers,
Thank you for making this tool public. I have a quick question about the quantization example. It seems that tensorfuzz works on a normalized image, where each entry in the matrix is a floating-point value in [-1, 1]. So it appears to me that a mutated normalized image, despite producing a different prediction, might not map back to any image in the original MNIST format, where entries are integers.
I noticed that there is a piece of code that double-checks the validity of the mutated image. Is it related to this question?
I may be missing something. Please let me know if this makes sense.