At_mul_B! has different methods for sparse and dense matrices #160
Comments
@andreasnoack has commented on this before. I think it would be good to have …

I thought we were trying to get rid of …
I know it has been complained about, but those complaints are just wrong. 😜 While it is sometimes used to work around the (formerly) high cost of memory allocation, and that might be considered a "performance hack," at its root that is only one of this function's many uses. Here's one I've used myself:

```julia
A = load_10^12-by-2_matrix_from_disk_by_mmap()
B = rand(2, 2)
C = A*B  # oops, crash---not enough memory to hold the result
C = mmap_array(Float64, (10^12, 2), iostream)
A_mul_B!(C, A, B)  # works just dandily
```

Or, what if you want the result to be a SharedArray? A DArray? The issue is that using the inputs to control the type of the output---possibly dispatching to different algorithms---is too powerful a trick to throw away. I'll go farther and say most nontrivial computational routines should be written in …
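The point above carries over to other array libraries: preallocating the destination lets the caller, not the library, decide where the result lives. Here is a minimal Python/NumPy sketch of the same idea (the thread is about Julia; this is only an analogy, and the temp-file memmap is my own illustration), writing a matrix product straight into a memory-mapped file via `np.matmul`'s `out=` argument:

```python
import tempfile

import numpy as np

# A tall-and-skinny product: the caller decides that C lives in a
# memory-mapped file instead of RAM, analogous to passing a preallocated
# destination to A_mul_B!.
A = np.random.rand(10_000, 2)   # stand-in for a huge mmap'd matrix
B = np.random.rand(2, 2)

with tempfile.NamedTemporaryFile() as f:
    C = np.memmap(f.name, dtype=np.float64, mode="w+", shape=(10_000, 2))
    np.matmul(A, B, out=C)      # result written directly into the mmap buffer
    assert np.allclose(C, A @ B)
```

The same mechanism would let the destination be any array type the library can write into, which is exactly the flexibility being defended above.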
To clarify, the issue with … And if you want to do sparse-sparse in-place matmul, then you want a different low-level API: you would need to split the symbolic phase from the numeric phase, since most applications of sparse matrices repeatedly use arrays with the same nonzero structure but different values.
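The symbolic/numeric split can be illustrated in SciPy (again only a Python analogy to the Julia discussion, not an existing Julia API): once a CSR structure exists, later computations with the same sparsity pattern only need the value array overwritten, with no structural recomputation.

```python
import numpy as np
from scipy.sparse import csr_matrix

# "Symbolic" phase: build the index structure (indptr/indices) once.
A = csr_matrix(np.array([[1.0, 0.0],
                         [0.0, 2.0]]))
x = np.array([3.0, 4.0])

# "Numeric" phase: same structure, new values -- just overwrite the
# stored nonzeros in place and reuse the structure for the product.
A.data[:] = [5.0, 6.0]          # A is now [[5, 0], [0, 6]]
y = A @ x                       # uses the updated values
assert np.allclose(y, [15.0, 24.0])
```

A real sparse-sparse in-place matmul API would do the analogous thing for the output: compute its structure once, then refill values on each call.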
I'd be perfectly happy to have …

I personally do not dislike … And I agree that it would be nice to have a unified naming convention for linear algebra. That is what I wanted to point out. Thanks for the quick response.
Ah, sure, name-condensation is a perfectly reasonable thing to wish for. I'd be +1 for that.
Perhaps, although one wonders whether the output argument could be a container that included the results of the symbolic phase. But I haven't thought about this particular case seriously.
Related: #57. I believe that for dense matrices and vectors (and …
@hofmannmartin I think you are completely right and that all the dense multiplication methods should take … @tkelman is right that … Finally, just to clarify the "get rid of" part of …
IMO the most logical signature would be …

+1
I like that one too, but (s)he who writes the code gets at least two votes, three if it's an odd-numbered Thursday.
@simonster Is it intended to be a paradigm to name the variable to be changed first by …?

@ALL Love programming in Julia. Thanks to all.
@hofmannmartin Yes, generally Julia functions that perform operations in place take the destination array first (further discussion in JuliaLang/julia#8246), although function-valued arguments have precedence since they need to come first for …
@hofmannmartin We have had the discussion a couple of places, but in short my arguments are that …

So my conclusion is that having a strict rule for the position of the output is not desirable.
I think it's not even possible to have a strict rule. As a separate point, there's a part of me that gets really annoyed at the two-argument forms …
What about a macro:

…

to call the right function, handling transposes and avoiding temporaries?
See https://github.com/simonbyrne/InplaceOps.jl, but to my knowledge, it is not possible to do the fourth calculation in place.
I think we should do #160 because it aligns with the three-argument version we already have.

We should consider giving the scalar multiples more descriptive names, and possibly making them keyword arguments.

We've done this, right?
Just recently I wanted to implement an algorithm. I used the method

```julia
At_mul_B!(α::Number, A::SparseMatrixCSC{Tv,Ti<:Integer}, x::AbstractArray{T,1}, β::Number, y::AbstractArray{T,1})
```

which works fine with sparse matrices but has no counterpart for dense matrices. Its dense counterpart `BLAS.gemv!` does not work with sparse matrices. As a workaround I dispatched the `At_mul_B!` function. Is this behaviour intended (using 0.3.6), or did I get terribly lost?
In my opinion it would be great to have basic linear algebra working on dense and sparse matrices with the same syntax.
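To make the request concrete, here is a small Python/NumPy+SciPy sketch of the uniform five-argument update the issue asks for (`y ← α·Aᵀx + β·y`), written so the same code path serves both dense and sparse `A`. The helper name `at_mul_b` is my own invention for illustration, not an existing API in either ecosystem:

```python
import numpy as np
from scipy.sparse import csr_matrix


def at_mul_b(alpha, A, x, beta, y):
    """In-place y <- alpha * A' * x + beta * y; A may be dense or sparse."""
    y *= beta
    y += alpha * (A.T @ x)   # A.T @ x works for ndarray and scipy sparse alike
    return y


A_dense = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [5.0, 6.0]])
A_sparse = csr_matrix(A_dense)
x = np.array([1.0, 1.0, 1.0])

# Identical call, identical result, regardless of the storage format of A.
y1 = at_mul_b(2.0, A_dense, x, 0.5, np.ones(2))
y2 = at_mul_b(2.0, A_sparse, x, 0.5, np.ones(2))
assert np.allclose(y1, y2)
```

This is essentially what a shared generic signature would buy: algorithm code written once against the five-argument form, with dispatch on the type of `A` choosing the dense (BLAS) or sparse kernel underneath.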