Inference relies on eltype(::AbstractArray) being correct #20518
Comments
We should just make the inference code more conservative. It can use
Maybe we should merge #12579?
Ah, yes, #12579 is exactly what I meant with option 4. Thanks for the pointer!
In practice, restricting this to
Closes #20518 and improves inference when splatting general iterables.
Demo:
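The demo snippet itself was not preserved in this capture. A minimal sketch of the failure mode described below, assuming the DataArrays.jl API of that era (`DataArray`'s `eltype` omits `NAtype` even though indexing can return `NA`), might have looked like:

```julia
# Hypothetical reconstruction, not the original demo; assumes DataArrays.jl.
using DataArrays

foo(x::Int...) = [x...]   # inference expects this method for Int elements
foo(x...) = 1             # but NA values dispatch here at run time

da = @data [1, NA]        # eltype(da) == Int, yet da[2] isa NAtype

# Inference assumes splatting `da` yields Vararg{Int}, so it predicts
# foo(::Int...)::Vector{Int}; the actual call hits foo(::Any...)::Int,
# and the wrong `sum` specialization ends up being invoked.
bar(a) = sum(foo(a...))
bar(da)
```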
Why is that?
But inference relies on splatting an instance of `T<:AbstractArray` giving a `Vararg{eltype(T)}`. I.e. inference determines that `foo(::Int...)` would be called in the example, inferring a return type of `Vector{Int}`, but the actual call goes to `foo(::Any...)`, returning an `Int` instead. Thus the wrong `sum` specialization gets called.

Now what?
Possible ways forward:

1. Declare that `eltype` must not lie. (Or probably even better, that one has to subtype the correct `AbstractArray`, i.e. `AbstractArray{Union{DataArrays.NAtype, Int64}}` for the above example.) Go and fix DataArrays.
2. Verify while splatting that the elements of an `AbstractArray` are in fact of the expected type. Would throw `TypeError: typeassert: expected Int64, got DataArrays.NAtype` or similar for the example above.
3. Treat a splatted `AbstractArray` as a `Vararg{Any}`, ignore its `eltype`.
4. Make the type inference code itself more conservative, so that it does not rely on `eltype` being exact.

My opinion: I don't like 1. I can't judge 2; I think the behavior would be ok, but I have no idea what an implementation would look like in terms of code complexity and run-time cost. 3 would be an easy fix, but at the cost of worsening inference precision. My favorite is 4, although I'm not sure it's actually feasible with a reasonable amount of code and extra time for inference.
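Option 2 could, as a very rough sketch (the helper name and shape are illustrative, not a proposal from this thread), amount to asserting each element against the declared element type while splatting:

```julia
# Hypothetical sketch of option 2: assert every element against the
# declared eltype during splatting, so a lying eltype fails loudly.
function checked_splat(f, a::AbstractArray{T}) where {T}
    args = Any[x::T for x in a]  # raises TypeError if eltype lies
    return f(args...)
end

checked_splat(+, [1, 2, 3])  # fine: all elements really are Int
```

In the DataArrays example above, the comprehension's `x::T` typeassert would throw `TypeError: typeassert: expected Int64, got DataArrays.NAtype` instead of silently dispatching to the wrong method.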
Thoughts?