
Support scipy special function with tuple output #3139

Merged
13 commits merged on Jun 16, 2022

Conversation

@RandomY-2 (Contributor) commented Jun 12, 2022

What do these changes do?

We can use ExecutableTuple and TensorTupleElementOp to support Mars execution of scipy special functions that return tuples. Specifically, the following usages are supported:

from mars.tensor import special
import mars.tensor as mt
x = mt.linspace(-3, 3)

S, C = special.fresnel(x)
S.execute()
C.execute()
special.fresnel(x).execute()
(S + C).execute()

Also, since we invoke scipy.special.fresnel during tensor execution, a cache is added in core.py: if we execute multiple elements of the same function's output within one execute() call, we don't need to redo the scipy computation for each element. Later executions can just reuse the cached scipy outputs.
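The caching idea can be sketched as minimal memoization keyed by the function call (a standalone illustration with hypothetical names, not the actual core.py implementation):

```python
# Hypothetical sketch: cache a tuple-returning function so that fetching
# each element of its output does not recompute the whole tuple.
_cache = {}

def cached_call(func, key, *args):
    """Run func(*args) at most once per key; later calls reuse the tuple."""
    if key not in _cache:
        _cache[key] = func(*args)
    return _cache[key]

calls = []

def fresnel_like(x):
    # Stand-in for scipy.special.fresnel: returns a 2-tuple.
    calls.append(x)
    return (x * 0.5, x * 2.0)

s = cached_call(fresnel_like, "fresnel-x", 3.0)[0]
c = cached_call(fresnel_like, "fresnel-x", 3.0)[1]
print(s, c, len(calls))   # the underlying function ran only once
```

Fetching S and C separately hits the cache on the second access, so the expensive computation runs once per execute() call.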

Related issue number

Check code requirements

  • tests added / passed (if needed)
  • Ensure all linting tests pass, see here for how to run them

@RandomY-2 RandomY-2 requested a review from a team as a code owner June 12, 2022 17:48
@wjsi (Member) commented Jun 13, 2022

Given your implementation, it is not hard to use a single operand for two outputs. You may take a look at TensorSVD for an example, where we use the output_limit property to define the number of outputs:

@property
def output_limit(self):
    return 3

use new_tensors(..., kws=[kw1, kw2]) to create multiple output objects:

U, s, V = self.new_tensors(
    [a],
    order=TensorOrder.C_ORDER,
    kws=[
        {"side": "U", "dtype": tiny_U.dtype, "shape": U_shape},
        {"side": "s", "dtype": tiny_s.dtype, "shape": s_shape},
        {"side": "V", "dtype": tiny_V.dtype, "shape": V_shape},
    ],
)
return ExecutableTuple([U, s, V])

and use ctx[out.key] = xxx to store execute results.

with device(device_id):
    u, s, v = xp.linalg.svd(a, full_matrices=False)
    uc, sc, vc = op.outputs
    ctx[uc.key] = u
    ctx[sc.key] = s
    ctx[vc.key] = v
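The three steps above (an output_limit property, new_tensors with kws, and per-output ctx storage) can be sketched in plain Python with minimal stand-ins for the Mars internals (class and attribute names below are illustrative, not the real Mars API):

```python
# Hypothetical stand-ins for the Mars machinery, for illustration only.
import itertools

_key_counter = itertools.count()

class FakeTensor:
    """Stand-in for a Mars tensor with a unique key."""
    def __init__(self, **kw):
        self.key = next(_key_counter)
        self.params = kw

class FresnelLikeOp:
    """Hypothetical two-output operand following the TensorSVD pattern."""
    @property
    def output_limit(self):
        return 2          # two outputs, like scipy.special.fresnel

    def new_tensors(self, inputs, kws):
        # One output object per kw dict, mirroring new_tensors(..., kws=...).
        self.outputs = [FakeTensor(**kw) for kw in kws]
        return tuple(self.outputs)

    def execute(self, ctx, x):
        ret = (x * 0.5, x * 2.0)   # stand-in tuple computation
        for output, val in zip(self.outputs, ret):
            ctx[output.key] = val  # store each result under its output key

op = FresnelLikeOp()
s_t, c_t = op.new_tensors([None], kws=[{"name": "S"}, {"name": "C"}])
ctx = {}
op.execute(ctx, 3.0)
print(ctx[s_t.key], ctx[c_t.key])
```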

@RandomY-2 (Contributor, Author) replied:


Thank you for your feedback! Will take a look.

@wjsi (Member) left a comment

Code structure LGTM. Do not use for i in range(len(list_obj)); using for i, obj in enumerate(obj_list) when indices are needed, or for obj in obj_list otherwise, is more Pythonic.
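As a quick illustration of the idiom (generic example, not code from this PR):

```python
items = ["a", "b", "c"]
results = [10, 20, 30]

# Index-free iteration when the index is not needed:
for item in items:
    print(item)

# enumerate() when the index is actually needed:
for i, item in enumerate(items):
    print(i, item)

# zip() when pairing two sequences, instead of indexing both:
pairs = {item: res for item, res in zip(items, results)}
print(pairs)   # {'a': 10, 'b': 20, 'c': 30}
```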

Comment on lines 167 to 168
for i in range(len(op.outputs)):
    ctx[op.outputs[i].key] = ret[i]
@wjsi (Member):

Loop over op.outputs itself, not indices.

Suggested change
for i in range(len(op.outputs)):
    ctx[op.outputs[i].key] = ret[i]
for output, r in zip(op.outputs, ret):
    ctx[output.key] = r

return ExecutableTuple(res_tensors)

for i in range(len(res_tensors)):
    out[i].data = res_tensors[i].data
@wjsi (Member):
Ditto.

"dtype": res[i].dtype,
"shape": res[i].shape,
}
for i in range(len(res))
@wjsi (Member):

Ditto.

assert isinstance(r, ExecutableTuple)
assert len(r) == 2

for i in range(len(r)):
@wjsi (Member):

Ditto.

assert len(out) == 2
assert len(r_out) == 2

for i in range(len(r_out)):
@wjsi (Member):

Ditto.

result = r.execute().fetch()
expected = sp_func(raw)

for i in range(len(result)):
@wjsi (Member):

Ditto.

@_register_special_op
class TensorFresnel(TensorTupleOp):
_func_name = "fresnel"
_func_outputs = 2
@wjsi (Member):

Rename to _n_outputs might be better.
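One way such a class attribute could drive the output count (a hypothetical sketch under the suggested naming, not the actual TensorTupleOp code):

```python
class TupleOpBase:
    """Hypothetical base class: output count driven by a class attribute."""
    _n_outputs = 1

    @property
    def output_limit(self):
        return self._n_outputs

class FresnelOp(TupleOpBase):
    _func_name = "fresnel"
    _n_outputs = 2   # scipy.special.fresnel returns (S, C)

print(FresnelOp().output_limit)   # 2
```

Subclasses then only declare the function name and its output count, and the framework reads output_limit as before.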

@wjsi added the mod: tensor, type: feature, and to be backported (PR needs to be backported to the stable branch) labels Jun 15, 2022
@wjsi wjsi added this to the v0.10.0a2 milestone Jun 15, 2022
@wjsi (Member) left a comment

LGTM

@hekaisheng (Contributor) left a comment

LGTM

@hekaisheng hekaisheng merged commit a8cb40e into mars-project:master Jun 16, 2022
wjsi pushed a commit to wjsi/mars that referenced this pull request Jun 16, 2022
wjsi added a commit that referenced this pull request Jun 20, 2022
@wjsi added the backported already label and removed the to be backported label Jun 20, 2022