While reading through and refactoring the code for the three MAML models, I may have come across a bug in the logic: when warmup is used for HyperMAML and gradients are computed for the `fast_parameters` list in `_update_network_weights`, the list is not updated with the gradient values after each gradient step, the way it is in classic MAML's `set_forward` method.
I'm fairly confident this isn't the intended behaviour; I'd like to hear your thoughts on this.
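For reference, the classic MAML inner-loop pattern I mean is sketched below. This is an illustrative, simplified version (the function name and the tiny scalar example are mine, not from the repo): the essential point is that `torch.autograd.grad` is called with `create_graph=True` and the `fast_parameters` list is *replaced* with the stepped weights after every gradient step, so later steps (and the outer loop) see the updated values.

```python
import torch

def maml_inner_step(fast_parameters, loss, inner_lr):
    """One classic-MAML inner-loop step (illustrative sketch, not repo code).

    Gradients are taken w.r.t. the current fast weights with
    create_graph=True so the outer loop can backprop through the update,
    and a NEW list of fast weights is returned. The point of this issue
    is that this replacement should happen after every inner step.
    """
    grads = torch.autograd.grad(loss, fast_parameters, create_graph=True)
    return [p - inner_lr * g for p, g in zip(fast_parameters, grads)]

# Tiny demonstration: two inner steps on a scalar regression loss.
w = torch.tensor([1.0], requires_grad=True)
fast = [w]
x, y = torch.tensor([2.0]), torch.tensor([0.0])
for _ in range(2):
    loss = ((fast[0] * x - y) ** 2).mean()
    # Reassigning `fast` here is the step that seems to be missing in the
    # HyperMAML warmup path of _update_network_weights.
    fast = maml_inner_step(fast, loss, inner_lr=0.1)
```

If the warmup branch computes the gradients but never rebuilds `fast_parameters` like this, every subsequent inner step would be taken from the stale initial weights.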