Adding cumsum_ #10487
Conversation
Hi, thanks for the contribution! I've left a couple of comments requesting changes, let me know if you need anything!
```diff
@@ -525,6 +525,11 @@ def flatten(self, start_dim, end_dim):
     def cumsum(self, dim, dtype):
         return torch_frontend.cumsum(self._ivy_array, dim, dtype=dtype)
 
+    @with_unsupported_dtypes({"1.11.0 and below": ("float16",)}, "torch")
+    def cumsum_(self, dim, dtype):
```
`dtype` should have `None` as the default value (https://pytorch.org/docs/1.11/generated/torch.Tensor.cumsum_.html?highlight=cumsum#torch.Tensor.cumsum_)
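For context, here is a minimal plain-Python sketch of the in-place cumulative-sum behavior being mirrored, with `dtype=None` as the default. The function name, the use of a list, and the elided `dtype` handling are illustrative assumptions, not the actual ivy/torch implementation:

```python
def cumsum_inplace(values, dim=0, *, dtype=None):
    # Illustrative stand-in for torch.Tensor.cumsum_: mutate the container
    # in place, keeping a running total along the (single) dimension.
    # `dim` and `dtype` exist only to mirror the torch signature here.
    total = 0
    for i, v in enumerate(values):
        total += v
        values[i] = total
    return values

data = [1, 2, 3, 4]
cumsum_inplace(data)
print(data)  # [1, 3, 6, 10]
```

The key point of the review comment: since `torch.Tensor.cumsum_` accepts a call without `dtype`, the frontend signature must default it to `None` rather than require it.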
```python
def test_torch_instance_cumsum_(
    dtype_value,
    dim,
    dtypes,
```
seems like this is already generated in `dtype_value`
@guillesanbri So we need to remove `dtype_value` from the arguments, right?
Actually, `dtypes` has to be removed from the args. You are generating `dtype_value` and `dim` in the decorator (which is fine), but the function is expecting `dtype`, `dim` and `dtypes`.
```python
        method_input_dtypes=dtypes,
        method_all_as_kwargs_np={
            "dim": dim,
            "dtype": dtypes[0],
```
`dtypes` is not being generated; instead, use `input_dtype`.
@guillesanbri sorry for the trouble, made the changes but it's still failing!
just left a couple more changes, should be good to go after this! thanks!
```python
    init_tree="torch.tensor",
    method_name="cumsum_",
    dtype_value=helpers.dtype_and_values(
        available_dtypes=helpers.get_dtypes("valid"),
```
seems like the ground truth does not support `bool`, you can change `"valid"` to `"numeric"` here
```diff
@@ -525,6 +525,11 @@ def flatten(self, start_dim, end_dim):
     def cumsum(self, dim, dtype):
         return torch_frontend.cumsum(self._ivy_array, dim, dtype=dtype)
 
+    @with_unsupported_dtypes({"1.11.0 and below": ("float16",)}, "torch")
+    def cumsum_(self, dim, dtype=None):
```
`dtype` has to be a keyword argument; in order for the test to detect this automatically, you should add `*,` between `dim` and `dtype` (i.e. `def cumsum_(self, dim, *, dtype=None):`)
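To illustrate why the `*,` matters: in Python, every parameter after a bare `*` is keyword-only, so an accidental positional `dtype` fails loudly instead of being silently bound. A small standalone sketch (the function and its body are hypothetical, not the ivy code):

```python
def cumsum_(values, dim, *, dtype=None):
    # `dtype` is keyword-only: cumsum_(v, 0, "float32") raises TypeError,
    # while cumsum_(v, 0, dtype="float32") is accepted.
    return values  # body elided; only the signature matters here

cumsum_([1, 2], 0, dtype="float32")  # accepted
try:
    cumsum_([1, 2], 0, "float32")  # positional dtype
except TypeError:
    print("dtype must be passed by keyword")
```

This keyword-only signature is also what lets the test framework detect `dtype` automatically, as the review comment notes.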
@guillesanbri Done! Thanks
lgtm! I'll merge it now, thanks for the contribution! 😄
Co-authored-by: guillesanbri <[email protected]>
#10231