
Mapping normalized images to real images #3

Open
shaobo-he opened this issue Sep 18, 2018 · 2 comments

Comments

@shaobo-he

Hi tensorfuzz developers,

Thank you for making this tool public. I have a quick question about the quantization example. It seems that tensorfuzz works on a normalized image where each entry in the matrix is a floating-point value in [-1, 1]. So it appears to me that a mutated normalized image, despite producing a different prediction, might not map back to any image in the original MNIST format, where the entries are integers.
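
For concreteness, here is a small sketch of what I mean (my own illustration, assuming the common `x / 127.5 - 1` normalization; I'm not sure this is exactly what tensorfuzz uses):

```python
import numpy as np

# An MNIST-like image: integers in [0, 255].
original = np.random.randint(0, 256, size=(28, 28)).astype(np.float32)

# Normalize to floats in [-1, 1], as the fuzzer appears to consume them.
normalized = original / 127.5 - 1.0

# A mutation adds continuous noise to the normalized image.
mutated = np.clip(normalized + np.random.normal(0.0, 0.05, size=(28, 28)), -1.0, 1.0)

# Mapping back to pixel space almost never lands on integer values,
# so the mutated input has no exact counterpart in the original format.
denormalized = (mutated + 1.0) * 127.5
print(np.allclose(denormalized, np.round(denormalized)))  # almost surely False
```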

I noticed that there's a piece of code that double-checks the validity of the mutated image. Is that check related to this question?

I may be missing something. Please let me know if this makes sense.

@DoctorTeeth

Hi, I'm not totally sure I understand your question, but I will try to answer and you can tell me if it was helpful:

In general the MNIST digits are integers in the range [0,255].
When you train vision models on them, generally you cast those ints to floats and normalize the floats to live in [-1,1].
If you find a disagreement on a particular input and that input uses more precision than the original MNIST dataset has, then technically you will not be able to map that input back to any original MNIST digit, but that's ok because:

a) that was never really the goal; we are concerned about the accuracy of the quantized model under unseen inputs, which may come with more precision
b) the precision may go away when you feed the input through a quantized model anyway, depending on the implementation (sketched below)
c) we already checked that no disagreements were found on the test set for our example model.
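
To make point (b) concrete, here is a toy sketch (just one possible implementation, not necessarily how our example model is quantized): if the quantized model quantizes its input to 8 bits up front, the extra precision is discarded before it can matter.

```python
import numpy as np

def requantize_input(x, scale=127.5, zero_point=1.0):
    """Hypothetical 8-bit input stage: map a float image in [-1, 1] to uint8 and back."""
    q = np.clip(np.round((x + zero_point) * scale), 0, 255).astype(np.uint8)
    return q.astype(np.float32) / scale - zero_point

# A mutated input with arbitrary float precision...
mutated = np.random.uniform(-1.0, 1.0, size=(28, 28)).astype(np.float32)

# ...collapses onto one of only 256 values per pixel, i.e. it corresponds
# to an integer-valued image again after the input stage.
requantized = requantize_input(mutated)
```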

The check you found is for a different reason:
The classifier may give outputs that differ between the original and the quantized version simply due to stochasticity in, e.g., the TensorFlow matrix multiply implementation.
Thus, when we find a disagreement, we check that it persists across multiple tries of the same inference.
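
Roughly, that check amounts to something like this (hypothetical names, not the actual tensorfuzz code):

```python
def disagreement_persists(image, run_float_model, run_quantized_model, tries=10):
    """Report a disagreement only if it survives several repeated runs of the same input."""
    for _ in range(tries):
        if run_float_model(image) == run_quantized_model(image):
            return False  # the mismatch was transient, e.g. nondeterministic matmul
    return True
```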

@shaobo-he

shaobo-he commented Sep 19, 2018 via email
