Implement gradient for UnbinnedNLL #64

Closed · 2 tasks
spflueger opened this issue May 8, 2020 · 0 comments · Fixed by #222
Labels
❔ Question (Discuss this matter in the team)

Comments

@spflueger (Member)

Implement the analytic gradient.

  • Clarify how to use the gradient in the Estimator together with an Intensity that hides the backend (I think this comes back to the "ComPWA math language"). Currently I think it is necessary to have a graph structure that describes the computation and that can be converted into an actual computation, which is the same concept that TensorFlow follows. This would solve many problems at once. A rough sketch of this idea follows after this list.
  • Actually implement the gradient.
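
A minimal sketch of that graph idea, assuming hypothetical Parameter/Add/Mul node classes and a to_tensorflow converter (none of these names exist in ComPWA; they only illustrate how a backend-agnostic expression tree could be turned into an actual TensorFlow computation, after which the gradient comes for free):

import tensorflow as tf

class Parameter:
    def __init__(self, name):
        self.name = name

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

class Mul:
    def __init__(self, left, right):
        self.left, self.right = left, right

def to_tensorflow(node, variables):
    # Recursively convert the abstract expression tree into TF operations.
    # A numpy or jax converter could walk the same tree instead.
    if isinstance(node, Parameter):
        return variables[node.name]
    if isinstance(node, Add):
        return to_tensorflow(node.left, variables) + to_tensorflow(node.right, variables)
    if isinstance(node, Mul):
        return to_tensorflow(node.left, variables) * to_tensorflow(node.right, variables)
    raise TypeError(f"Unknown node type: {type(node).__name__}")

expression = Mul(Parameter("a"), Add(Parameter("b"), Parameter("c")))  # a * (b + c)
variables = {name: tf.Variable(1.0, name=name) for name in ("a", "b", "c")}
with tf.GradientTape() as tape:
    value = to_tensorflow(expression, variables)
gradients = tape.gradient(value, list(variables.values()))
print(gradients)  # d/da = b + c = 2, d/db = d/dc = a = 1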

Some TensorFlow-specific code might look like this (copied from amplitf):

import tensorflow as tf

def gradient(par):
    # float_pars, nll and args come from the enclosing scope, as in amplitf;
    # push the parameter values proposed by the optimizer into the floating
    # fit parameters (amplitf's FitParameter provides the update method)
    for i, p in enumerate(float_pars):
        p.update(par[i])
    with tf.GradientTape() as tape:
        tape.watch(float_pars)
        nll_val = nll(*args)
    # Request zeros instead of None for parameters the NLL does not depend on
    g = tape.gradient(
        nll_val, float_pars, unconnected_gradients=tf.UnconnectedGradients.ZERO
    )
    return [i.numpy() for i in g]
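
Purely to illustrate how such a gradient would be consumed, here is a hypothetical end-to-end sketch for a toy Gaussian unbinned NLL, handing the TensorFlow gradient to SciPy via jac=True so the optimizer no longer has to estimate derivatives numerically (none of the names below come from ComPWA or amplitf):

import numpy as np
import tensorflow as tf
from scipy.optimize import minimize

# Toy data: samples from a Gaussian whose parameters we want to fit back
data = tf.constant(np.random.normal(0.5, 1.2, size=1000))

def nll_and_grad(par):
    mu = tf.Variable(par[0], dtype=tf.float64)
    sigma = tf.Variable(par[1], dtype=tf.float64)
    with tf.GradientTape() as tape:
        log_pdf = (
            -0.5 * ((data - mu) / sigma) ** 2
            - tf.math.log(sigma)
            - 0.5 * np.log(2 * np.pi)
        )
        nll_val = -tf.reduce_sum(log_pdf)
    grad = tape.gradient(nll_val, [mu, sigma])
    # jac=True below means minimize expects (value, gradient) tuples
    return nll_val.numpy(), np.array([g.numpy() for g in grad])

result = minimize(
    nll_and_grad, x0=[0.0, 1.0], jac=True, method="L-BFGS-B",
    bounds=[(None, None), (1e-6, None)],  # keep sigma positive
)
print(result.x)  # should come out close to (0.5, 1.2)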