
Calculation of variance in the super resolution loss #11

Open · SorourMo opened this issue Jul 29, 2021 · 0 comments

SorourMo commented Jul 29, 2021

Hi,

I have a question about the variance term in sr_loss. Could you please elaborate on why the variance is calculated this way?

```python
# Mean var of predicted distribution:
var = K.sum(masked_probs * (1.0 - masked_probs), axis=(1, 2)) / (
    c_mask_size * c_mask_size
)  # (16x5) / (16,1) --> shape 16x5
```

Basically, in the lines above we compute $\sum x(1-x) / n^2$. However, since the mean ($\mu$) has already been computed on the previous line, we could reuse $\mu$ in the variance calculation.
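For reference, one way I can read that expression (my own interpretation, not something stated in the code): if each pixel $x_i$ is treated as an independent Bernoulli variable with success probability $p_i$, then

$$\operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} p_i\,(1 - p_i),$$

which matches the code, but that is the variance of the mean over the mask rather than the variance across pixels.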

The textbook definition of variance is $\sum (x - \mu)^2 / n$, so I was thinking of something like this:

```python
mean = K.sum(masked_probs, axis=(1, 2)) / c_mask_size  # (16x5) / (16,1) --> shape 16x5
# Broadcast the per-class mean back over the spatial dims before subtracting:
var = K.sum(K.pow(masked_probs - mean[:, None, None, :], 2) * c_mask, axis=(1, 2)) / c_mask_size  # shape 16x5
```

Multiplying by c_mask zeroes out the irrelevant values outside the current class's mask.

So why not stick to the exact variance formula here? Am I missing something?
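For concreteness, here is a small self-contained NumPy sketch of the two computations side by side. The shapes and data are made up; masked_probs, c_mask, and c_mask_size just mirror the names above:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, H, W, classes = 2, 4, 4, 3

# Hypothetical inputs mirroring the shapes implied by the comments above.
c_mask = (rng.random((batch, H, W, 1)) > 0.5).astype(np.float32)  # binary class mask
probs = rng.random((batch, H, W, classes)).astype(np.float32)     # softmax-like scores
masked_probs = probs * c_mask                                     # zero outside the mask
c_mask_size = np.maximum(c_mask.sum(axis=(1, 2)), 1.0)            # (batch, 1), guard empty masks

# Expression used in the repo: sum of p*(1-p), divided by n^2.
# (Pixels outside the mask contribute 0 because masked_probs is 0 there.)
var_repo = (masked_probs * (1.0 - masked_probs)).sum(axis=(1, 2)) / (
    c_mask_size * c_mask_size
)

# Proposed textbook variance: sum of (p - mu)^2 over the mask, divided by n.
mean = masked_probs.sum(axis=(1, 2)) / c_mask_size                # (batch, classes)
sq_dev = (masked_probs - mean[:, None, None, :]) ** 2 * c_mask    # zero out non-mask pixels
var_textbook = sq_dev.sum(axis=(1, 2)) / c_mask_size

print(var_repo)
print(var_textbook)
```

On random inputs the repo's expression comes out roughly a factor of $n$ smaller than the textbook one, since it is normalized by $n^2$ rather than $n$, which is what prompted this question.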

Thanks,
