RFC: Better inference of `_apply()` (splatting) (#20343)

Conversation
base/inference.jl (Outdated)

```julia
tvnew = Vector{Any}[]
for tv in tp, t in tvs[1]
    push!(tvnew, push!(copy(tv), t))
    if length(tvnew) > 7
```
should probably make this cutoff a named parameter
Definitely. And I have no clue whether 7 is a reasonable choice.
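The enumeration with a cutoff can be sketched in isolation. This is a toy reimplementation, not the actual `inference.jl` code; the constant name and the limit of 8 are taken from a later commit message, and `enumerate_unions` is a hypothetical helper name:

```julia
# Toy sketch of enumerating all combinations of splatted Union members,
# bailing out once the count exceeds a named cutoff (mirrors the loop in
# the diff above; not the actual inference.jl code).
const MAX_APPLY_UNION_ENUM = 8

# `argunions` holds, per splatted argument, the list of Union members to try.
function enumerate_unions(argunions::Vector{Vector{Any}})
    combos = Vector{Any}[Any[]]          # start with one empty combination
    for members in argunions
        next = Vector{Any}[]
        for c in combos, t in members
            push!(next, push!(copy(c), t))
            # give up once the enumeration grows too large
            length(next) > MAX_APPLY_UNION_ENUM && return nothing
        end
        combos = next
    end
    return combos
end
```

Two 2-member `Union`s give 4 combinations; four of them would exceed the limit, and the sketch returns `nothing` (standing in for falling back to a widened signature).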
Force-pushed from e1d95b6 to 30f080b.
This looks like a great start. I'm not quite sure if …

Thanks!

Neither am I, but it was the simplest thing to do to test/showcase the improved …

I'm just refactoring things a bit. Probably it will look something like having a …

Maybe so. Doing only part of what's possible, all tests still pass locally without a significant increase in run time. With one exception, that is: …

Now this does seem a bit excessive. Let me double-check whether I got something wrong...
Force-pushed from 0fa6ada to b1e0de8.
Force-pushed from 958993d to e33f165.
Hm, 32-bit Travis failed on one of the new tests. I wonder whether it's build-environment or test/worker-association dependent. The build log is backed up here; will restart the Travis job.

Looks like the failure is platform dependent. Time to set up a 32-bit VM. Unless someone has a good idea what might be causing this...
If a `Union` is splatted, inference tries the individual types, unless this would exceed `MAX_APPLY_UNION_ENUM = 8` total tries. As a stopgap in that case, a `Union` of several `Tuple`s of the same length is rewritten to a `Tuple` of `Union`s.
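The stopgap rewrite can be sketched as follows. This is an illustrative helper, not the code from `inference.jl`; it assumes `Base.uniontypes` to split the `Union` and that the member `Tuple`s are plain `DataType`s without `Vararg` elements:

```julia
# Illustrative sketch of the stopgap: rewrite a Union of several Tuples of
# the same length into a single Tuple of element-wise Unions. This loses
# cross-element correlations but keeps per-element precision.
function tuple_of_unions(u::Union)
    tuples = Base.uniontypes(u)                        # members of the Union
    all(t -> t isa DataType && t <: Tuple, tuples) || return u
    n = length(first(tuples).parameters)
    all(t -> length(t.parameters) == n, tuples) || return u
    cols = Any[Union{} for _ in 1:n]
    for t in tuples, i in 1:n
        # column i becomes the Union of every member's i-th element type
        cols[i] = Union{cols[i], t.parameters[i]}
    end
    return Tuple{cols...}
end
```

For example, `Union{Tuple{Int,String}, Tuple{Float64,Symbol}}` becomes `Tuple{Union{Float64,Int}, Union{String,Symbol}}`, which also admits the spurious combination `(Int, Symbol)`: that loss of correlation is the price of the rewrite.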
Force-pushed from e33f165 to 6e6a29f.
Amazing that it took me so long to realize that the Travis 32-bit build does not use a 32-bit (virtual) machine but just cross-compiles... Anyway, the bug that caused the failure should be fixed (a missing initialization). Also, I added the specialization for `_apply(..., Union{Tuple...}, ...)`. I'm still unsure about the last commit, which adds the …
base/inference.jl (Outdated)

```julia
end
elseif tti <: AbstractArray && i == n
    result[i] = Any[Vararg{eltype(tti)}]
    # `type` is the inferred type for expression `arg`.
```
`typ` instead of `type`?
Yup, will fix.
This is some great work! The huge union in …

Seems like this fixes #19957? Could you add a test for that?
Loop over the arguments in `abstract_apply` and invoke the modified/renamed `precise_container_type` for each individual argument. Translate some cases where the container type cannot be inferred precisely into `Vararg`s (instead of bailing out and inferring `f(::Any...)`) and fuse `Vararg`s with further arguments into a single trailing `Vararg`.
Closes #20518 and improves inference when splatting general iterables.
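The "fuse into a single trailing Vararg" step can be illustrated with a toy model. `Va` is my own stand-in for `Vararg` and `fuse_trailing` a hypothetical name; this is not the compiler's representation:

```julia
# Toy model of the fusion rule: once some argument can only be described as
# Vararg{T}, every later element type is folded into one trailing Vararg of
# their Union. `Va` is a stand-in for Vararg, not a compiler type.
struct Va
    T::Type
end

function fuse_trailing(types::Vector{Any})
    i = findfirst(t -> t isa Va, types)
    i === nothing && return types         # no Vararg present: nothing to fuse
    U = Union{}
    for t in types[i:end]
        U = Union{U, t isa Va ? t.T : t}  # widen with each trailing element
    end
    return push!(types[1:i-1], Va(U))
end
```

So `Any[Int, Va(Float64), String]` fuses to `Any[Int, Va(Union{Float64,String})]`, matching the `f(::Int, ::Union{Float64,String}...)` shape quoted in the PR description.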
Force-pushed from 6e6a29f to a434dfc.
Let's see whether this has any effect on the benchmarks: …
Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels
Two of the improvements look real, everything else is noise?
the subarray slowdowns look pretty consistent though
Easiest to run again: @nanosoldier …
It looks quite consistent indeed, as the same operations seem to be benchmarked twice: JuliaCI/BaseBenchmarks.jl@da9fda3#commitcomment-20858355
Oh boy, …
Your benchmark job has completed - no performance regressions were detected. A full report can be found here. cc @jrevels
fair enough then |
Presently, something like … will be inferred as `f(::Any...)`, which is likely suboptimal. This PR tries to improve the situation in two ways:

1. If a `Union` is splatted, inference tries the individual types. In case multiple `Union`s are splatted, this would result in many possible combinations; I just limit to an arbitrarily chosen maximum of 7. Otherwise...
2. A `Union` of several `Tuple`s of the same length is rewritten to a `Tuple` of `Union`s, e.g. `Tuple{Union{Foo, Bork}, Union{Bar, Baz}}` for the example above, if it wasn't handled by 1. already.

As an example, I have asserted the return type of `setindex(::Tuple, ...)` to be a large `Union`. This allows …, which are the tightest bounds, as `slicedim` is actually not type-stable. On master, the first one gives `Union{Array{Float64,1},Array{Float64,2},Float64}`, while for more than 2 dimensions it's just `Any`. (The modification to `setindex` on master would just always give `Any` here.)

The code certainly needs some revising, but as I'm completely new to inference, I'd like some feedback on the general idea first. (Blame @vtjnash for encouraging me to even try this!)

Edit: This PR has evolved and grown a bit. It now further enhances cases where general iterables are splatted: `f(1, ones(n::Int)..., "bar")` is now treated like `f(::Int, ::Union{Float64,String}...)`, while on master it is treated like `f(::Any...)`.
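The "Edit" example can be checked against inference directly with `Base.return_types`, which reports the inferred return type for a given argument-type signature. `h` below is a hypothetical function, not part of the PR, and the exact inferred type depends on the Julia version:

```julia
# Ask inference what it makes of a splatted call. With the improvement
# described above, the splatted array contributes Vararg{Float64} instead of
# collapsing the whole signature to f(::Any...).
h(args...) = args                                  # returns the argument tuple
rts = Base.return_types(n -> h(1, ones(n)..., "bar"), (Int,))
```

On a build including this change, the element types coming from `ones(n)` stay visible in the inferred tuple type rather than widening everything to `Any`.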