Adding a function to get device from array (type)? #268
CC @lmh91
Yes, now that we have split the packages between frontend and backend, I think this is feasible.
Thanks! Should I do a PR?
Yes, please.
Will do :-)
@vchuravy should this go into a single PR on this repo, or should I do a PR with the changes to KernelAbstractions first, then you guys tag KernelAbstractions, and then the changes to CUDAKernels and ROCKernels (so CI can run)?
You should be able to do it all at once; CI is run across the commit.
Oh, neat! Oh, here we go: #269
Note that:

```julia
julia> using ArrayInterface

julia> ArrayInterface.device([1.2])
ArrayInterface.CPUPointer()
```
Ah, right! But to write generic code like

```julia
function naive_transpose!(a, b)
    if size(a)[1] != size(b)[2] || size(a)[2] != size(b)[1]
        println("Matrix size mismatch!")
        return nothing
    end
    device = KernelAbstractions.get_device(a)
    n = device isa GPU ? 256 : 4
    kernel! = naive_transpose_kernel!(device, n)
    kernel!(a, b, ndrange=size(a))
end
```

we need the device as a subtype of `KernelAbstractions.Device`. What do you think @vchuravy?
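For reference, a minimal sketch of what the `naive_transpose_kernel!` called above could look like; the kernel body here is an assumption, not taken from this thread:

```julia
using KernelAbstractions

# Hypothetical kernel matching the call above: writes the transpose of b into a.
@kernel function naive_transpose_kernel!(a, @Const(b))
    i, j = @index(Global, NTuple)
    @inbounds a[i, j] = b[j, i]
end
```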
Ah, no, I don't think that could work, currently:

```julia
julia> ArrayInterface.device(cu(rand(5)))
ArrayInterface.GPU()
```
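To illustrate the problem: a hypothetical adapter from `ArrayInterface` traits to KernelAbstractions devices (the `ka_device` name is an assumption) gets stuck on the GPU case, because `ArrayInterface.GPU()` does not say which backend the array belongs to:

```julia
using ArrayInterface, KernelAbstractions

# Hypothetical translation layer (sketch only).
ka_device(x) = ka_device(ArrayInterface.device(x))
ka_device(::ArrayInterface.CPUPointer) = CPU()
# ka_device(::ArrayInterface.GPU) = ???  # CUDADevice()? ROCDevice()? Ambiguous.
```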
Yeah, I am not a fan of taking on another dependency for this. The long-term development goal is to use KA in GPUArrays.
Do you plan to move some code into additional packages like ...?
I've never really done anything with GPUs / didn't really think about what's actually needed. |
Personally, I think it would be nice if these things converged in the longer term, though. Bit OT: why does ArrayInterface actually have a (comparatively) long load time? For an interface package, I mean?
Discussion: JuliaArrays/ArrayInterface.jl#211 (comment)

Personally, I think the library is too big. But a big problem with respect to load times is that it uses Requires.jl to add methods for some other libraries. Requires.jl's load times have been improved drastically as of Julia 1.7, so it'll get better.
What's the benefit of converging?
I think it would be nice to have a single API to query what kind of device an array lives on, across the ecosystem. I think a truly lightweight package that defines "array traits" would be a good thing to have, long term. Packages like CUDA, AMDGPU, and oneAPI could then implement those traits together with their array types. And KernelAbstractions (for example) could implement support for the device types in the vendor-specific kernel packages. IMHO the occurrence of many ... Just long-term ramblings; for now I'll be happy to have `get_device`. :-)
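A minimal sketch of that "array traits" idea, with all names hypothetical (nothing here is an existing package):

```julia
# Tiny interface package: defines only the device types and the query function.
abstract type AbstractComputeDevice end
struct CPUDevice <: AbstractComputeDevice end
struct CUDADevice <: AbstractComputeDevice end

# Return the device an array lives on; plain arrays fall back to the CPU.
compute_device(::AbstractArray) = CPUDevice()

# A vendor package (e.g. CUDA.jl) would then own the method for its array type:
# compute_device(::CuArray) = CUDADevice()
```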
It would be nice to have a function that returns a KernelAbstractions.jl device given an array (type), so that users can write device-agnostic code (see the sketch below for the kind of thing meant). Would that fit into the KernelAbstractions concept?
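The original code examples did not survive extraction; what follows is a hedged sketch of the kind of API being requested, with method signatures inferred from the rest of the thread (PR #269 settled the actual interface):

```julia
using KernelAbstractions

# The requested mapping from array types to KA devices, e.g.:
# get_device(::Array)   = CPU()          # in KernelAbstractions itself
# get_device(::CuArray) = CUDADevice()   # in CUDAKernels

# This enables device-agnostic code such as:
a = rand(Float32, 1024)
device = KernelAbstractions.get_device(a)  # CPU() here; CUDADevice() for a CuArray
n = device isa GPU ? 256 : 4               # pick a workgroup size per device
```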