Function to return any eigenvector #1009
See JuliaLang/julia#49487, which was my attempt to write something similar to this for making …
There are packages that offer partial eigendecompositions; I am not sure that such iterative methods fit into LinearAlgebra when we already have specialized packages for this.
Definitely true for some of these! Fitting the entirety of KrylovKit.jl into LinearAlgebra (including the GPUArrays dependency) would definitely not be the best idea 😅. But in some cases, I think they'd fit in perfectly. The power method seems basic, small, and well-established enough that I think it'd be a good fit for LinearAlgebra; I was actually very surprised to learn it isn't included.
Also mostly obsolete. Krylov methods are generally strictly better, even with restarting.
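For reference, this is roughly what the package route looks like today. A minimal sketch assuming KrylovKit.jl's `eigsolve` interface (the exact argument order and return values may differ slightly between versions):

```julia
using LinearAlgebra, SparseArrays
using KrylovKit  # external package, one of the specialized options mentioned above

# Random sparse symmetric test matrix.
n = 1_000
A = sprandn(n, n, 5 / n)
A = A + A'

# Largest-magnitude eigenpair via a (restarted) Krylov method.
# `eigsolve` returns vectors of eigenvalues/eigenvectors plus convergence info.
vals, vecs, info = eigsolve(A, 1, :LM)
λ, x = vals[1], vecs[1]

@show λ norm(A * x - λ * x)  # residual should be small when the solve converged
```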
Huh, that's surprising, I thought the power method was still standard for the largest eigenvalue problem. |
If your matrix is symmetric, kindly take a look at
If deemed sufficient, a basic version of the power algorithm can be easily implemented by users themselves. Please take a look at the code in JuliaLang/julia#49487 (mentioned in the earlier comments) for an example implementation. I feel it might be a great idea to add this algorithm to the Julia documentation.
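For readers who don't want to follow the link, here is a minimal power-iteration sketch in the same spirit. It is not the code from JuliaLang/julia#49487; the function name, keyword arguments, and convergence test are illustrative only:

```julia
using LinearAlgebra

# Minimal power iteration returning an estimate (λ, x) of the dominant
# eigenpair of A. Names and defaults are illustrative, not a proposed API.
function poweriter(A::AbstractMatrix, x = randn(size(A, 2));
                   tol = 1e-10, maxiter = 10_000)
    x = x / norm(x)
    λ = zero(eltype(A))
    for _ in 1:maxiter
        y = A * x
        λ = dot(x, y)                       # Rayleigh-quotient eigenvalue estimate
        norm(y - λ * x) ≤ tol * abs(λ) && return λ, x
        x = y / norm(y)                     # renormalize to avoid over/underflow
    end
    return λ, x                             # not converged; a real API should signal this
end

A = [2.0 1.0; 1.0 3.0]
λ, x = poweriter(A)
@show λ  # ≈ (5 + √5)/2 ≈ 3.618, the largest eigenvalue of A
```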
Definitely can be, but didn't @stevengj just say it's obsolete? The largest eigenvalue problem is common enough that I'd think LinearAlgebra should include it, but I'm not sure what algorithm it should use. |
It's often the first algorithm taught in numerical linear algebra classes. The advantages and disadvantages are well known; please see, e.g., the Wikipedia page and references therein. The method works well when the dominant eigenvalue is unique and sufficiently separated in magnitude from the rest of the spectrum. However, since it is only beneficial for a relatively narrow class of matrices and non-convergent (or very slowly convergent) for others, it may not be suitable for …
Edit: R has a slightly modified version with scaling in the …
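To make the separation point concrete: the error after k power-iteration steps shrinks roughly like |λ₂/λ₁|ᵏ, so a ratio close to 1 means very slow convergence. A small self-contained experiment (illustrative only):

```julia
using LinearAlgebra

# Error of k power-iteration steps against the true dominant eigenvector e₁,
# for diagonal matrices with different eigenvalue separation.
function power_error(A, k)
    x = normalize(ones(size(A, 2)))
    for _ in 1:k
        x = normalize(A * x)
    end
    return norm(abs.(x) - [1; zeros(size(A, 2) - 1)])
end

well_separated   = Diagonal([1.0, 0.50, 0.1])  # |λ₂/λ₁| = 0.5
barely_separated = Diagonal([1.0, 0.99, 0.1])  # |λ₂/λ₁| = 0.99

@show power_error(well_separated, 50)    # ≈ 0.5^50, essentially converged
@show power_error(barely_separated, 50)  # still far from the true eigenvector
```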
Yeah, if it's only really useful for educational reasons, it should go in a package. But the largest eigenvalue problem is common enough that I'd guess there should be a good default option in LinearAlgebra.
We used to have an `eigs` function in Base, but it was moved out to a package. I don't really see us reversing this decision and putting Krylov iterative methods back into Base. There are now several good packages for this. The algorithms are qualitatively rather different from the dense linear algebra in LinearAlgebra, or even from the sparse-direct methods in SparseArrays.
Sometimes it's useful to compute just a single eigenvector (e.g. by the power method), and any eigenvector will do; e.g. I have an irreducible Markov chain, where the eigenvector is unique. I couldn't find any functions for this; should we add one to LinearAlgebra?
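For concreteness, the current workaround for the Markov-chain use case is to compute the full decomposition with `eigen` and pick out the eigenpair for eigenvalue 1. A small sketch with a made-up 3-state chain:

```julia
using LinearAlgebra

# Column-stochastic transition matrix of a small (made-up) irreducible chain:
# columns sum to 1, so the stationary distribution is the eigenvector for
# eigenvalue 1, unique up to scaling by Perron–Frobenius.
P = [0.90 0.20 0.10
     0.05 0.70 0.30
     0.05 0.10 0.60]

# No single-eigenvector function exists, so compute the full decomposition
# and pick out the eigenpair with eigenvalue closest to 1.
F = eigen(P)
i = argmax(real.(F.values))
p = real.(F.vectors[:, i])
p ./= sum(p)                    # normalize (and fix the sign) to a distribution

@show p sum(p) norm(P * p - p)  # residual should be ≈ 0
```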