operator* for Tensor of Tensors (ToT) #282
Comments
The problem is that the expression layer already supports some ToT products, namely where the inner or outer index product is a pure contraction (free and contracted indices only) or a pure Hadamard (fused indices only).
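For concreteness, a minimal sketch of my reading of those two supported cases, assuming TiledArray's usual "outer;inner" annotation syntax for ToT arrays and a Tensor-of-Tensor tile type; the alias and index labels are illustrative, not code from this thread:

```cpp
#include <tiledarray.h>

// illustrative ToT array type (default dense policy)
using ToT = TA::DistArray<TA::Tensor<TA::Tensor<double>>>;

void examples(ToT& a, ToT& b, ToT& c) {
  // outer index product is a pure contraction (i, j free; k contracted),
  // inner index product is a pure Hadamard (m, n fused)
  c("i,j;m,n") = a("i,k;m,n") * b("k,j;m,n");

  // pure Hadamard at both levels (every index fused)
  c("i,j;m,n") = a("i,j;m,n") * b("i,j;m,n");
}
```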
@evaleev I think ToT times ToT can go through operator*, but non-ToT times ToT can't. Regardless, I forgot that operator* already worked for some cases, so my redirection solution won't work.
Basically I was hoping to write a generic orbital transform function which superficially looks like:

template<typename ResultType, typename TransformType, typename TensorType>
auto transform(TransformType&& C, TensorType&& t) {
  // function which works out what the annotations are
  auto [result_annotation, lhs_annotation, rhs_annotation] = make_annotations();
  ResultType result;
  result(result_annotation) = C(lhs_annotation) * t(rhs_annotation);
  return result;
}

I can write it in terms of einsum, but assumed that wouldn't be as efficient for non-ToTs.
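As an illustration only, one way the make_annotations placeholder above could be filled in; nothing below is existing TiledArray API, the index labels assume C transforms the first outer index of t, and a real version would deduce the flag from TensorType (e.g. via a trait) rather than take it explicitly:

```cpp
#include <string>
#include <tuple>

// hypothetical helper (not TiledArray API): pick annotations for
// result = C * t, switching to "outer;inner" ToT annotations when the
// target tensor is a tensor of tensors
template <bool target_is_tot>
auto make_annotations() {
  using namespace std::string_literals;
  if constexpr (target_is_tot)
    // e.g. result("i,b;m,n") = C("a,i") * t("a,b;m,n")
    return std::tuple{"i,b;m,n"s, "a,i"s, "a,b;m,n"s};
  else
    // e.g. result("i,b") = C("a,i") * t("a,b")
    return std::tuple{"i,b"s, "a,i"s, "a,b"s};
}
```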
New einsum defers to operator* all it can, so only mixed Hadamard-contract products go through it. You should be able to use it, no problem.
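For example, the case this issue is about (a plain coefficient tensor times a ToT, which as noted above cannot go through operator*) would be written with einsum roughly as follows; the einsum overload shown (two annotated expressions plus a result annotation) and the mixed ToT/non-ToT operands are assumptions on my part, so check the actual TiledArray signature:

```cpp
#include <tiledarray.h>

using ToT = TA::DistArray<TA::Tensor<TA::Tensor<double>>>;

// transform the first outer index of a ToT by a plain coefficient matrix;
// the TA::einsum overload used here is assumed, not confirmed API
ToT transform_first_index(const TA::TArrayD& C, const ToT& t) {
  return TA::einsum(C("a,i"), t("a,b;m,n"), "i,b;m,n");
}
```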
Presently multiplying ToTs requires calling einsum. Unfortunately that makes it hard to write generic functions. I originally added einsum because I couldn't figure out how to get ToT multiplication to slide into the existing expression layer. I haven't prototyped it, but maybe you have a ToTMultiplication class which is returned when either tensor is a ToT (you can deduce whether either side of operator* is a ToT from the tile types). It could then call einsum when it is assigned to a TsrExpr. The reason I'm thinking of a new class is that the left and right sides of the expression generating the ToTMultiplication instance would have to be just annotated tensors, and you would have to immediately assign it to a TsrExpr (so it doesn't fully participate in the expression layer).
This could be somewhat related to #224 in that with general tensor contractions you may also be restricting non-ToT multiplications in a similar manner.
If the above plan sounds reasonable, I could try taking a stab at this.
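Purely to illustrate the plan (none of this is existing TiledArray code, and the hook into the expression layer is hand-waved): the object returned by operator* would just capture the two annotated operands and dispatch to einsum once a result annotation is known, e.g. on assignment to a TsrExpr.

```cpp
#include <string>
#include <utility>
#include <tiledarray.h>

// hypothetical ToTMultiplication, returned by operator* when a tile type is a
// ToT; it only holds the two annotated operands, and the real expression layer
// would trigger eval() from TsrExpr's assignment operator
template <typename LeftExpr, typename RightExpr>
class ToTMultiplication {
 public:
  ToTMultiplication(LeftExpr left, RightExpr right)
      : left_(std::move(left)), right_(std::move(right)) {}

  // evaluate via einsum into the given result annotation; the einsum overload
  // used here (two annotated expressions + result annotation) is assumed
  auto eval(const std::string& result_annotation) const {
    return TA::einsum(left_, right_, result_annotation);
  }

 private:
  LeftExpr left_;
  RightExpr right_;
};
```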