Add interface to use symmetric eigendecomposition with different lapack method #49355
Conversation
This is super nice to have! It also paves the way for an :auto option if we find a good heuristic for what method to use by default. Is there a good reason to have syevr by default? It's not super accurate, so it's a questionable default unless it brings significant performance benefits (maybe for small matrices?)

Why not have :syevr as the default value of the argument? It'd remove one method, no? Also, it would nicely document what method is used by default (which is not clear from the existing docstrings), although we might want to add that this might be subject to change in the future. I'm slightly concerned about the proliferation of docstrings (one for each method), but that's a generic Julia issue and I don't have a good answer to that...
I agree with the […]
The issue is that which method is better (for speed, ignoring completely memory usage and accuracy) seems to be highly BLAS/hardware-dependent. Ideally, the BLAS library would have more information and be able to choose the right method, but it doesn't look like that's happening anytime soon. MKL helpfully contradicts itself, recommending syevd in its decision chart http://cali2.unilim.fr/intel-xe/mkl/mklman/GUID-A37AAAD0-DBF8-48BD-91AB-446CCAB4537F.htm#GUID-A37AAAD0-DBF8-48BD-91AB-446CCAB4537F, but then saying "Note that for most cases of real symmetric eigenvalue problems the default choice should be ?syevr function as its underlying algorithm is faster and uses less workspace. ?syevd requires more workspace but is faster in some cases, especially for large matrices." in the syevd doc...
function eigvals!(A::RealHermSymComplexHerm{<:BlasReal,<:StridedMatrix}, lapack_method::Symbol=:syevr; sortby::Union{Function,Nothing}=nothing)
    if lapack_method === :syevr
        vals = LAPACK.syevr!('N', 'A', A.uplo, A.data, 0.0, 0.0, 0, 0, -1.0)[1]
    elseif lapack_method === :syev
        vals = LAPACK.syev!('N', A.uplo, A.data)
    elseif lapack_method === :syevd
        vals = LAPACK.syevd!('N', A.uplo, A.data)
    else
This creates type instability in eigvals! due to the type instability of LAPACK.syev! and LAPACK.syevd!.

When computing both the eigenvalues and eigenvectors, all routines return a vector (eigenvalues) and a matrix (eigenvectors). When computing only the eigenvalues, syevr! returns the eigenvalues plus a dummy matrix, which keeps it type-stable:
https://github.com/JuliaLang/julia/blob/master/stdlib/LinearAlgebra/src/lapack.jl#L5263
But syev! and syevd! return only the eigenvalues:
https://github.com/JuliaLang/julia/blob/master/stdlib/LinearAlgebra/src/lapack.jl#L5201

- Option 1: Change the output of LAPACK.syev! and syevd! to return a dummy array so that they are type-stable. (Is that breaking?)
- Option 2: Add a type assertion.
- Option 3: Don't add a lapack_method interface for eigvals.

What do you think? Are there any other solutions?
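For reference, a quick way to see the inference difference from the REPL (just a sketch, not part of the diff; the exact printed types may vary across Julia versions):

using LinearAlgebra: LAPACK

# syevr! is always inferred as a (values, vectors) tuple, so the [1] indexing
# above stays type-stable: inferred as Tuple{Vector{Float64}, Matrix{Float64}}.
Base.return_types(LAPACK.syevr!, (Char, Char, Char, Matrix{Float64},
                                  Float64, Float64, Int, Int, Float64))

# syev! returns only the eigenvalues for jobz = 'N' but a (values, vectors) tuple
# for 'V', so it is inferred as a Union, which then leaks into eigvals!:
# Union{Tuple{Vector{Float64}, Matrix{Float64}}, Vector{Float64}}.
Base.return_types(LAPACK.syev!, (Char, Char, Matrix{Float64}))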
Thanks very much for these timings! I think we should switch to dsyevd by default: dsyevr is inaccurate, and dsyevd appears to be faster in pretty much all cases with MKL. Even on OpenBLAS, it's not too bad, but OpenBLAS's performance is very often erratic, and I don't think we should base any decisions on that...
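For anyone who wants to reproduce such timings locally, something along these lines should do (a rough sketch; BenchmarkTools.jl is assumed, and as discussed above the outcome depends heavily on the BLAS backend, thread count, and matrix size):

using LinearAlgebra: LAPACK, Symmetric
using BenchmarkTools

n = 1000
A = Matrix(Symmetric(randn(n, n)))  # dense symmetric test matrix

# Each routine overwrites its input, so hand it a fresh copy on every evaluation.
@btime LAPACK.syevr!('V', 'A', 'U', B, 0.0, 0.0, 0, 0, -1.0) setup=(B = copy($A)) evals=1
@btime LAPACK.syevd!('V', 'U', B) setup=(B = copy($A)) evals=1
@btime LAPACK.syev!('V', 'U', B) setup=(B = copy($A)) evals=1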
A few minor things. Regarding type stability: isn't the [1] indexing inferable? If not, does a simple type assertion help? Actually, perhaps like this?
vals::Vector{eltype(A)} = if lapack_method === :syevr
    LAPACK.syevr!(...)
elseif ...
    ...
end
# sort if necessary
return vals
end
The problem is not with the [1] indexing itself. I adopted the type assertion, and I also changed the default from :syevr to :syevd.
Very nice, I like the way you did the docstring: it makes clear what the default is and what LAPACK methods are called. Maybe do the same for svd while you're at it?
Hey everyone (involved here)! Should we try to make an effort and push this over the finish line? What exactly is still needed? I saw some discussion on the default algorithm, is that resolved? What is the current status regarding "type instability"?
I think it can just be merged? Syevd is the winner both for accuracy and speed (see @jaemolihm's speed tests, my accuracy comments in the DFTK PR linked, and the paper cited), so the default choice is clear. Not sure about type instability.
One last detail... together with a quick announcement this should finalize the PR.
function eigen!(A::RealHermSymComplexHerm{<:BlasReal,<:StridedMatrix}, alg::Algorithm = default_eigen_alg(A); sortby::Union{Function,Nothing}=nothing)
    if alg === DivideAndConquer()
        Eigen(sorteig!(LAPACK.syevd!('V', A.uplo, A.data)..., sortby)...)
    elseif alg === QRIteration()
Isn't it better to check for the type?

elseif alg isa QRIteration
I don't know. One could perhaps check with @code_typed eigen!(A, QRIteration()) and see if that constant is propagated and the other branches are eliminated. If not, then the type check should help.
I think this is fine:
julia> @code_typed eigen!(A, LinearAlgebra.QRIteration())
CodeInfo(
1 ─ %1 = Base.getfield(A, :uplo)::Char
│ %2 = Base.getfield(A, :data)::Matrix{Float64}
│ %3 = invoke LinearAlgebra.LAPACK.syev!('V'::Char, %1::Char, %2::Matrix{Float64})::Union{Tuple{Vector{Float64}, Matrix{Float64}}, Vector{Float64}}
│ %4 = Core._apply_iterate(Base.iterate, LinearAlgebra.sorteig!, %3, (nothing,))::Tuple{Vector{Float64}, Matrix{Float64}}
│ %5 = Core.getfield(%4, 1)::Vector{Float64}
│ %6 = Core.getfield(%4, 2)::Matrix{Float64}
│ %7 = %new(Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}, %5, %6)::Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}
└── return %7
) => Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}
julia> @code_typed eigen!(A, LinearAlgebra.RobustRepresentations())
CodeInfo(
1 ─ %1 = Base.getfield(A, :uplo)::Char
│ %2 = Base.getfield(A, :data)::Matrix{Float64}
│ %3 = invoke LinearAlgebra.LAPACK.syevr!('V'::Char, 'A'::Char, %1::Char, %2::Matrix{Float64}, 0.0::Float64, 0.0::Float64, 0::Int64, 0::Int64, -1.0::Float64)::Tuple{Vector{Float64}, Matrix{Float64}}
│ %4 = Core.getfield(%3, 1)::Vector{Float64}
│ %5 = Core.getfield(%3, 2)::Matrix{Float64}
│ %6 = %new(Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}, %4, %5)::Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}
└── return %6
) => Eigen{Float64, Float64, Matrix{Float64}, Vector{Float64}}
Even the sorteig! is compiled away.
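For completeness, the algorithm-based interface from the diff above would be used roughly like this (a sketch that assumes the non-mutating eigen wrapper forwards the alg argument; the exported names may still change before the merge):

using LinearAlgebra

A = Symmetric(randn(100, 100))
F_dc = eigen(A, LinearAlgebra.DivideAndConquer())       # LAPACK syevd, the proposed default
F_qr = eigen(A, LinearAlgebra.QRIteration())            # LAPACK syev
F_rr = eigen(A, LinearAlgebra.RobustRepresentations())  # LAPACK syevr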
As a follow-up to #49262, I added an interface to use different LAPACK methods for symmetric/Hermitian eigendecomposition.

Usage: eigen(A::RealHermSymComplexHerm, lapack_method::Symbol), eigvals(A::RealHermSymComplexHerm, lapack_method::Symbol). lapack_method can be :syev, :syevr, or :syevd.

Note that A must be a Symmetric or Hermitian by type, not just by value, to use these (i.e. an A::Matrix with ishermitian(A) == true is not sufficient).
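A minimal usage sketch of the Symbol-based interface as described here (note that later in the review this was replaced by the Algorithm types shown above, so names follow this description rather than the final merged API):

using LinearAlgebra

M = randn(50, 50)
A = Symmetric(M + M')      # Symmetric by type, not just by value
F = eigen(A, :syevd)       # divide-and-conquer
vals = eigvals(A, :syevr)  # multiple relatively robust representations
# A plain Matrix with ishermitian(M + M') == true would not dispatch to these methods.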