[ITensors] [ENHANCEMENT] In-place addition of a product A .+= B .* C. #1154
Comments
I would have assumed this was already defined, but it should be easy enough to add. Note that `A .+= B .* C` is just special syntax for an in-place call to `ITensors.contract!`.
I see, thanks. By the way, when I try it with an example that is more relevant to my use case, it returns an error.
I guess that is a different issue, particular to outer products. A workaround for that could be to add dummy (dimension-1) indices shared between the tensors.
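For concreteness, here is a minimal sketch of that dummy-index workaround; the index names and dimensions are illustrative, not taken from the issue:

```julia
using ITensors

i = Index(2, "i")
j = Index(3, "j")
u = Index(1, "u")        # dummy dimension-1 index shared between B and C

B = randomITensor(i, u)
C = randomITensor(u, j)

A = ITensor(0.0, i, j)   # pre-allocate A with explicit zeros
A .+= B .* C             # now a contraction over u rather than a pure outer product
```

The shared index `u` turns what would be an outer product into an ordinary contraction, which the in-place broadcast machinery handles.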
OK, I see, thank you. This should work, though it seems a bit inconvenient in practice.
Yeah, ideally we would fix that limitation, of course.
Presumably I am doing something wrong, but I am still getting an error with this trick.
In the above, the data of tensor `A` is never allocated; try `A = ITensor(0.0, indices)` instead. Though the original way you did it should work, so that looks like another bug, particular to using unallocated tensors (ITensors whose storage has not been allocated). However, it then makes me wonder why you can't just use:

```julia
A = ITensor(indices)
B = randomITensor([indices[1], extra_index])
C = randomITensor([indices[2:3]; extra_index])
ITensors.contract!(A, B, C)
```

if `A` is zero to begin with.
I see, thank you. Well, in general `A` is non-zero. By the way, maybe I missed it, but could you please point me to the documentation of `contract!`? In particular, what are `alpha` and `beta` in its signature?
It is the same convention as Julia's `mul!(C, A, B, α, β)` from `LinearAlgebra`, which computes `C = A*B*α + C*β` in-place.
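For reference, that `mul!` convention can be checked with plain Julia matrices (the values here are purely illustrative):

```julia
using LinearAlgebra

A = [1.0 0.0; 0.0 1.0]   # pre-allocated accumulator
B = [1.0 2.0; 3.0 4.0]
C = [5.0 6.0; 7.0 8.0]

# mul!(A, B, C, alpha, beta) computes A = alpha*(B*C) + beta*A in-place,
# so alpha = beta = 1 gives the accumulating update A .+= B * C.
mul!(A, B, C, 1.0, 1.0)  # A is now I + B*C == [20.0 22.0; 43.0 51.0]
```

With `beta = 0` the previous contents of `A` are overwritten instead of accumulated, which corresponds to `A .= B .* C`.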
**Is your feature request related to a problem? Please describe.**
Reducing memory allocations is one of the key techniques for improving performance. This often requires pre-allocating memory and then doing updates in-place. ITensors already provides functionality like `A .= B .* C`, while `A .+= B .* C`, which would allow adding (rather than writing) the output of `B * C` to `A` in-place, is missing. This would be useful for performant ML applications, in particular for gradient accumulation.

**Describe the solution you'd like**

Support for the in-place accumulating syntax `A .+= B .* C`.
**Describe alternatives you've considered**
A simple alternative would be to introduce a buffer tensor, though that would double the amount of pre-allocated memory.
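A sketch of that buffer-based alternative; the indices and tensors are illustrative, not taken from the issue:

```julia
using ITensors

i, j, k = Index(2), Index(3), Index(4)
A = randomITensor(i, k)      # accumulator, in general non-zero
B = randomITensor(i, j)
C = randomITensor(j, k)

buffer = ITensor(0.0, i, k)  # pre-allocated scratch tensor (doubles the memory held for A)
buffer .= B .* C             # in-place contraction into the buffer (already supported)
A .+= buffer                 # in-place addition of two same-index ITensors
```

Both steps reuse existing in-place operations, but the scratch tensor must live alongside `A`, which is exactly the memory overhead the requested `A .+= B .* C` would avoid.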