Bnb/dev #186
Conversation
new_ds[var].standard_name = standard_name
if hasattr(old_var, 'long_name'):
    new_ds[var].long_name = old_var.long_name
for var in keep_vars:
I like this semantic naming much better than working with unnamed list comp's. Nice!
@@ -1421,358 +1418,3 @@ def get_next(self, temporal_weights=None, spatial_weights=None):
            temporal_weights=temporal_weights, spatial_weights=spatial_weights)
        observation = self.data[self.current_obs_index]
        return observation

    @classmethod
    def get_file_times(cls, file_paths, **kwargs):
Where did all these methods go?? I don't see them moved in this PR?
These were redundant since they were being overridden in nc_data_handling.py and were only being used there.
sup3r/utilities/loss_metrics.py
Outdated
x1_div = tf.stack(
    [self._compute_md(x1, fidx=i) for i in range(2 * hub_heights)])
x2_div = tf.stack(
    [self._compute_md(x2, fidx=i) for i in range(2 * hub_heights)])
Shouldn't this be `range(0, 2*hh, 2)`? Right now you're iterating from 0 to 2*hh, and when i=1 I think that `self._compute_md()` will use i=(1,2) as u10m/v10m, but i=(1,2) is really v10m/u100m. Actually, I don't think that's quite right because `uidx = 2 * (fidx // 2)`, but still, won't you be double calculating if you do i in `range(2*hh)`?
Yep! Good catch. Looks like I was essentially weighting this by a factor of 2 by accident.
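For posterity, here's a rough sketch of the stride-2 fix being discussed. It's purely illustrative: it assumes `_compute_md()` pairs channels internally via `uidx = 2 * (fidx // 2)`, and `stack_material_derivatives` is a made-up helper name.

```python
import tensorflow as tf


def stack_material_derivatives(compute_md, x, hub_heights):
    """Compute one material derivative per u/v feature pair by stepping
    through the feature axis with a stride of 2, so each pair is only
    evaluated once instead of twice."""
    return tf.stack(
        [compute_md(x, fidx=i) for i in range(0, 2 * hub_heights, 2)])
```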
    (n_observations, spatial_1, spatial_2, temporal, features)
x2 : tf.tensor
    high resolution data
    (n_observations, spatial_1, spatial_2, temporal, features)
Somewhere you should assert len(shape) == 5.
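Something along these lines could work (a minimal sketch; the helper name and error message are just illustrative):

```python
def _check_shape(x):
    """Assert the input tensor is 5D:
    (n_observations, spatial_1, spatial_2, temporal, features)."""
    msg = ('Expected a 5D tensor (n_observations, spatial_1, spatial_2, '
           f'temporal, features) but got shape {x.shape}')
    assert len(x.shape) == 5, msg
```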
sup3r/utilities/loss_metrics.py
Outdated
https://en.wikipedia.org/wiki/Material_derivative
"""

MAE_LOSS = MeanAbsoluteError()
For some of these loss functions we're still assuming MAE is the go-to metric, but it is not always. Consider making the class attribute simply LOSS_METRIC or ERROR_METRIC. Personally I'm 100% fine with manipulating the class attributes at runtime, but the name is presumptuous.
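Something like this is what I have in mind (just a stub to show the generically named attribute; the class name and body here are made up and not the real implementation):

```python
import tensorflow as tf
from tensorflow.keras.losses import MeanAbsoluteError, MeanSquaredError


class ExampleDerivativeLoss(tf.keras.losses.Loss):
    """Stub loss illustrating a generically named error metric attribute."""

    LOSS_METRIC = MeanAbsoluteError()  # MAE is just the default, not baked in

    def __call__(self, x1, x2):
        # the real loss would compute derivatives of x1/x2 before comparing
        return self.LOSS_METRIC(x1, x2)


# swapping the metric at runtime is still straightforward:
ExampleDerivativeLoss.LOSS_METRIC = MeanSquaredError()
```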
elif axis == 3:
    return tf.concat([x[..., 1:2] - x[..., 0:1],
                      (x[..., 2:] - x[..., :-2]) / 2,
                      x[..., -1:] - x[..., -2:-1]], axis=3)
Ignorant question: why are you doing a [2:] offset to calculate the derivative instead of just doing a 1-pixel difference?
I'm doing something closer to a central difference instead of a backward or forward difference approximation. It's just a little more accurate/stable. N.b. apparently this is exactly what np.gradient does.
Gotcha, that makes sense. Maybe consider just adding this note for posterity?
Yeah good idea
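A sketch of how that note might look (the function name `_derivative` is just illustrative; the body mirrors the axis=3 branch quoted above):

```python
import numpy as np
import tensorflow as tf


def _derivative(x, axis=3):
    """Central difference approximation along ``axis`` with one-sided
    differences at the boundaries. This is the same scheme ``np.gradient``
    uses and is a bit more accurate/stable than a plain forward or
    backward difference."""
    if axis == 3:
        return tf.concat([x[..., 1:2] - x[..., 0:1],
                          (x[..., 2:] - x[..., :-2]) / 2,
                          x[..., -1:] - x[..., -2:-1]], axis=3)
    raise ValueError('Only axis=3 is sketched here.')


# quick sanity check against np.gradient along the last axis
x = tf.random.uniform((2, 4, 4, 8))
np.testing.assert_allclose(_derivative(x).numpy(),
                           np.gradient(x.numpy(), axis=-1),
                           rtol=1e-5, atol=1e-6)
```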
@@ -117,6 +117,103 @@ def __call__(self, x1, x2, sigma=1.0):
        return mmd


class MaterialDerivativeLoss(tf.keras.losses.Loss):
You should add a test for this.
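For example, something along these lines (a sketch only; I'm assuming the default constructor and the 5D input convention from the docstring above):

```python
import numpy as np
import tensorflow as tf

from sup3r.utilities.loss_metrics import MaterialDerivativeLoss


def test_material_derivative_loss():
    """Loss should be ~0 for identical inputs and positive otherwise."""
    loss_obj = MaterialDerivativeLoss()
    # (n_observations, spatial_1, spatial_2, temporal, features)
    # two features, i.e. one u/v pair
    x = tf.random.uniform((4, 10, 10, 6, 2))
    y = tf.random.uniform((4, 10, 10, 6, 2))

    assert np.allclose(loss_obj(x, x), 0)
    assert loss_obj(x, y) > 0
```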
…ith spatial chunks
…fined in DataHandlerDCforNC already.
…ify vars to remove instead of all the ones to keep
While we're adding loss functions...