
Unifying and generalizing hcat, vcat, hvcat, blockdiag etc. #64

Open
JeffFessler opened this issue Aug 29, 2019 · 4 comments

Comments

@JeffFessler (Member) commented Aug 29, 2019

This is a rough idea that I want to sketch out now, even though I don't have time to work on it.
Suppose A is an M×N linear operator and we think of it as embedded in zeros to make a bigger linear operator B = [0 0 0; 0 A 0; 0 0 0], where the surrounding zeros are matrices, some of which might be of size 0×0. All one needs to specify B is the size of the upper-left 0 and of the lower-right 0. Call this an OffsetMap or such, where the operation y = B * x is simply:

```julia
# M, N = size(A); T = eltype(B); the two offsets are the row/column
# counts of the upper-left zero block, cached at construction time
y = zeros(T, size(B, 1))
rowrange = (1:M) .+ nrow_upper_left_zero
colrange = (1:N) .+ ncol_upper_left_zero
y[rowrange] .= A * x[colrange]
```
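A minimal runnable sketch of this matrix-free multiply, with a hypothetical OffsetMap struct and field names (not the LinearMaps.jl API), might look like:

```julia
# Hypothetical OffsetMap: A embedded at a (rowoff, coloff) offset
# inside an nrows × ncols operator that is zero elsewhere.
struct OffsetMap{T}
    A::Matrix{T}
    rowoff::Int    # rows of the upper-left zero block
    coloff::Int    # cols of the upper-left zero block
    nrows::Int     # total rows of B
    ncols::Int     # total cols of B
end

function offsetmul(B::OffsetMap{T}, x::Vector{T}) where T
    M, N = size(B.A)
    y = zeros(T, B.nrows)
    # the ranges would be cached at construction time in a real type
    y[(1:M) .+ B.rowoff] .= B.A * x[(1:N) .+ B.coloff]
    return y
end

A = [1.0 2.0; 3.0 4.0]
B = OffsetMap(A, 1, 1, 4, 4)   # one zero row above, one zero col left
x = ones(4)
y = offsetmul(B, x)            # only entries 2:3 of y are nonzero
```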

We cache the ranges at construction time, very much in the spirit of how hvcat is done.
All of the hcat, vcat, and hvcat objects (I mean BlockMap objects) are simply sums of these OffsetMap objects with appropriate offsets for each block.
A BlockDiagonalMap is also just a sum of such OffsetMap objects.
One could imagine other block-sparse operators made by summing several such objects.
Instead of having to define a separate type for each pattern, all of these are just sums (LinearCombinationMap) of OffsetMap objects.
Wish I had thought of this earlier. The only question I have is whether the summation in a LinearCombinationMap will be as fast as the loops for a BlockMap. Or am I overlooking any other drawback?
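The reduction of hcat/blockdiag to sums of offset embeddings can be illustrated with plain dense matrices (the `offsetmap` helper below is hypothetical, purely for illustration, not a LinearMaps.jl function):

```julia
# Embed A into an nrows × ncols zero matrix at the given offsets;
# block operators are then sums of such embeddings.
function offsetmap(A, nrows, ncols, rowoff, coloff)
    B = zeros(eltype(A), nrows, ncols)
    M, N = size(A)
    B[rowoff .+ (1:M), coloff .+ (1:N)] .= A
    return B
end

A1 = [1.0 2.0; 3.0 4.0]
A2 = [5.0 6.0; 7.0 8.0]

# blockdiag(A1, A2) as a sum of two offset embeddings
D = offsetmap(A1, 4, 4, 0, 0) + offsetmap(A2, 4, 4, 2, 2)

# hcat(A1, A2) the same way
H = offsetmap(A1, 2, 4, 0, 0) + offsetmap(A2, 2, 4, 0, 2)
```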

@dkarrasch (Member)

This is a great idea! I think the summation loop won't be a problem. A mild challenge is multiply-and-add, because we will rely on LinearCombination multiplication. But we will need to work on that anyway, to make use of 5-arg mul! for matrices, which does multiplication and addition all in one go, allocation-free. Our current generic 5-arg mul! does allocate an intermediate vector, since that is unavoidable for FunctionMaps, for instance. So we should introduce some clever multiple dispatch to do that only when really necessary. We already have a test requiring at most one allocation in LinearCombination multiplication, and we should get that down to zero when all involved linear maps are matrix-based.
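For reference, the 5-arg mul! from LinearAlgebra mentioned above computes y = α*A*x + β*y in place; a minimal illustration:

```julia
using LinearAlgebra

# mul!(y, A, x, α, β) overwrites y with α*(A*x) + β*y; for
# matrix-based maps this fuses multiply and add without an
# intermediate allocation for A*x.
A = [1.0 0.0; 0.0 2.0]
x = [3.0, 4.0]
y = [1.0, 1.0]
mul!(y, A, x, 2.0, 1.0)   # y = 2*A*x + 1*y
```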

I may give it a try when I'm done with the current PRs.

@JeffFessler (Member, Author)

I've got it 90% done. Famous saying about the last 10%... Will send something soon.

@dkarrasch (Member)

🤣 Just like me: I also keep saying I won't have time, and then I'm too curious whether it works and start working on it immediately.

JeffFessler mentioned this issue Aug 31, 2019

@JeffFessler (Member, Author)

Exactly.
