Add backend dispatchers for operations in funsor.ops #61

Closed
neerajprad opened this issue Mar 8, 2019 · 5 comments
Labels
discussion, enhancement

Comments

@neerajprad
Member

It seems to me that the simplest way to support multiple backends in funsor.ops would be to use multiple dispatch as follows:

from functools import singledispatch

import numpy as np
import torch

_builtin_abs = abs  # keep a handle on the builtin before shadowing it

@singledispatch
def abs(x):
    # default: Python numeric types fall back to the builtin
    return _builtin_abs(x)

@abs.register(torch.Tensor)
def abs_torch(x):
    return x.abs()

@abs.register(np.ndarray)
def abs_np(x):
    return np.abs(x)
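
For instance (a hypothetical call pattern, assuming the registrations above):

abs(-3)                     # falls through to the builtin
abs(np.array([-1.0, 2.0]))  # dispatches on np.ndarray
abs(torch.tensor([-1.0]))   # dispatches on torch.Tensor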

What do you think? Would this approach cause any unforeseen complications?

@neerajprad added the discussion and enhancement labels Mar 8, 2019
@eb8680
Member

eb8680 commented Mar 8, 2019

This seems OK as long as ops remains restricted to the subset of functionality for which the backend APIs already overlap exactly; we shouldn't be in the business of codifying a generic tensor API, since that's NumPy's job now.

@neerajprad
Member Author

This only gives us the flexibility to do things like clamping, or other custom-defined behavior, when it might be needed. When the API is exactly the same, e.g. for abs or sqrt, we can just return np.abs(x), where x could be a tensor, an ndarray, or a Python numeric type, and NumPy will dispatch to the correct implementation.
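
A minimal sketch of that case, assuming the backend tensor type cooperates with NumPy's dispatch protocol (e.g. via __array_ufunc__); the op name sqrt here is illustrative:

import numpy as np
import torch

def sqrt(x):
    # NumPy ufuncs handle Python scalars and ndarrays directly, and can
    # defer to types that implement the __array_ufunc__ protocol
    return np.sqrt(x)

sqrt(4.0)                  # Python float -> numpy scalar
sqrt(np.array([4.0]))      # ndarray -> ndarray
sqrt(torch.tensor([4.0]))  # tensor -> tensor, if torch implements the protocol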

@fritzo
Member

fritzo commented Mar 9, 2019

This seems like a good idea. We could also use multipledispatch for binary ops, with an (object, object) default.
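
A sketch of what that could look like with the multipledispatch package (the op name add is illustrative):

from multipledispatch import dispatch
import numpy as np
import torch

@dispatch(object, object)
def add(x, y):
    # generic default, covering Python numeric types
    return x + y

@dispatch(torch.Tensor, torch.Tensor)
def add(x, y):
    return torch.add(x, y)

@dispatch(np.ndarray, np.ndarray)
def add(x, y):
    return np.add(x, y)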

Note that #55 and #63 implement an Op wrapper, and you may need to add a .register and/or .dispatch property to get this to work correctly with singledispatch and multipledispatch. Sorry for the added complexity!
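
For example, a hypothetical Op wrapper could forward .register to the underlying dispatcher (a sketch, not the actual implementation in #55/#63):

from functools import singledispatch

class Op:
    def __init__(self, fn):
        self.fn = singledispatch(fn)
        self.__name__ = fn.__name__

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)

    @property
    def register(self):
        # forward registration to the wrapped singledispatch function
        return self.fn.register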

@fritzo
Member

fritzo commented Mar 9, 2019

Another thing I like about this approach is that it would allow us to avoid importing torch and numpy in the low-level ops module, since we could move backend registrations to the respective backend modules. More generally I think we should be able to avoid importing both numpy and torch in any one module, so they would eventually be optional dependencies, as in opt_einsum.
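
For instance, registrations could live in a hypothetical funsor/torch.py backend module, so that funsor/ops.py never needs to import torch itself:

# funsor/torch.py (hypothetical layout)
import torch

from funsor import ops

@ops.abs.register(torch.Tensor)
def _abs_torch(x):
    return x.abs()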

@eb8680
Member

eb8680 commented Mar 13, 2019

Closed by #74

@eb8680 eb8680 closed this as completed Mar 13, 2019