✨ Migrate and Refactor Approximation Functionality from DDSim Package #908


Open · wants to merge 58 commits into main

Conversation

@MatthiasReumann (Contributor) commented Apr 6, 2025

Description

This PR adds DD approximation functionality. Most of the functions originate from the DDSim package. However, instead of simply copying the code over and gluing the packages together, I re-implemented and refactored most of it.

Checklist:

  • The pull request only contains commits that are related to it.
  • I have added appropriate tests and documentation.
  • I have made sure that all CI jobs on GitHub pass.
  • The pull request introduces no new warnings and follows the project's style guidelines.

@burgholzer added the refactor, feature, DD, and c++ labels on Apr 7, 2025
@burgholzer added this to the DD Package Improvements milestone on Apr 7, 2025

codecov bot commented Apr 7, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅


@burgholzer (Member) left a comment:

Just a couple of thoughts because I skimmed over the PR 😌
I also approved the CI, so the corresponding checks should run now.

@github-advanced-security bot left a comment:

CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.

@MatthiasReumann MatthiasReumann marked this pull request as ready for review April 30, 2025 06:42
@burgholzer (Member) left a comment:

I really really like where this is going. Thanks for your continued work on this.
The structure of the approximation has already become way cleaner and clearer.
As you will see from the comments, there are a couple of complexity aspects that are still suboptimal and where I believe improvements make sense.
Most of these should be fairly easy to incorporate. The hardest one will probably be avoiding the two-stage approximation traversal. But, at the same time, it's also the most drastic performance blocker.

I would be tempted to say that some tests that track the complexity of the approximation would be in order, but I know from experience that these are near impossible to write. Or they require some kind of additional tracking, which adds overhead both memory- and runtime-wise.
Some kind of middle ground could be to run the test suite through a profiler and check the call counts for certain critical functions. This would probably reveal the exponential complexity of the current rebuild function.
Maybe these thoughts trigger an idea on your side for how to further improve the tests (which are already really great!)
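For what it's worth, here is a self-contained toy (not this PR's code) of what such call-count tracking can look like; the counter plays the role of the profiler's call count, and the 2^depth blow-up mirrors the exponential behavior of the current rebuild discussed further down:

#include <cstddef>
#include <iostream>

// Hypothetical stand-in for an instrumented critical function: a global
// counter is bumped on every call so a test (or a profiler) can check that
// the call count stays polynomial rather than exponential.
static std::size_t gCalls = 0;

// Toy recursion that visits both children at every level; without
// memoization this makes 2^(depth+1) - 1 calls.
std::size_t visit(std::size_t depth) {
  ++gCalls;
  if (depth == 0) {
    return 1;
  }
  return visit(depth - 1) + visit(depth - 1);
}

int main() {
  visit(10);
  std::cout << "calls: " << gCalls << '\n'; // prints 2047
  return 0;
}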


// Test: If the budget is 0, no approximation will be applied.
//
// |state⟩ = 0.866|0⟩ + 0.5|1⟩
@burgholzer (Member) commented:

That's not the state created by this test circuit 😉

I suspect you might want to change the circuit.

@MatthiasReumann (Contributor, Author) commented:

thanks & sorry 🤦🏼

Comment on lines 56 to 57
auto p = approximate(state, fidelity, *dd);
auto approx = p.first;
@burgholzer (Member) commented:

Probably just a matter of taste, but this could look a little cleaner using structured bindings and an additional assertion that the returned fidelity is 1.
Something like

const auto& [approx, fid] = approximate(state, fidelity, *dd);
EXPECT_EQ(fid, fidelity);

Similar reasoning can be applied to the other tests.


// Test: If the budget is too small, no approximation will be applied.
//
// |state⟩ = 0.866|0⟩ + 0.5|1⟩
@burgholzer (Member) commented:

Also not the state being prepared in the test.

Comment on lines 135 to 139

dd->decRef(p.first);
dd->garbageCollect(true);

EXPECT_EQ(dd->vUniqueTable.getNumEntries(), 0);
@burgholzer (Member) commented:

I'd argue that this could be combined with the previous test.
The description comment is still helpful and could also be copied over.

Comment on lines 193 to 196
qcRef.h(0);
qcRef.x(1);
qcRef.cx(0, 1);
qcRef.cx(1, 0);
@burgholzer (Member) commented:

Maybe I am stupid, but the resulting state is a product state, so single-qubit operations are completely sufficient to construct the reference state.
Specifically,

Suggested change:
- qcRef.h(0);
- qcRef.x(1);
- qcRef.cx(0, 1);
- qcRef.cx(1, 0);
+ qcRef.x(0);
+ qcRef.h(1);

Should just work.

@MatthiasReumann (Contributor, Author) commented:

You are absolutely right.

Comment on lines 62 to 84
if (contribution <= budget) {
  exclude.emplace_front(e);
  budget -= contribution;
} else if (!e->isTerminal()) {
  const vNode* n = e->p;

  for (const vEdge& eChildRef : n->e) {
    const vEdge* eChild = &eChildRef;

    if (!eChild->w.exactlyZero()) {
      const double childContribution =
          contribution * ComplexNumbers::mag2(eChild->w);
      const auto it = std::find_if(
          next.begin(), next.end(),
          [&eChild](const auto& p) { return p.first == eChild; });
      if (it == next.end()) {
        next.emplace_front(eChild, childContribution);
      } else {
        (*it).second += childContribution;
      }
    }
  }
}
@burgholzer (Member) commented:

This might just be me and my personal programming preference, so there is definitely no need for you to change that, but just so that I have said it.

I am a big fan of reducing the levels of nesting in code.
I would probably place a continue statement in the first if block and de-nest the subsequent else if by flipping the if condition and continuing if it is not met.
That removes at least one level of nesting, if not two.
At times, this can really help to make the code a little cleaner and seem less complex.
The exactlyZero check could also be flipped to de-nest the subsequent code, as sketched below.
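For illustration, the de-nested variant could look roughly like this (a sketch assuming the declarations from the quoted snippet and that this body sits inside the outer loop over (edge, contribution) pairs, so continue is available):

// Early-exit on the cheap cases first; the main work ends up at the
// lowest indentation level.
if (contribution <= budget) {
  exclude.emplace_front(e);
  budget -= contribution;
  continue;
}
if (e->isTerminal()) {
  continue;
}
const vNode* n = e->p;
for (const vEdge& eChildRef : n->e) {
  const vEdge* eChild = &eChildRef;
  if (eChild->w.exactlyZero()) {
    continue; // flipped check de-nests everything below
  }
  const double childContribution =
      contribution * ComplexNumbers::mag2(eChild->w);
  const auto it = std::find_if(
      next.begin(), next.end(),
      [&eChild](const auto& p) { return p.first == eChild; });
  if (it == next.end()) {
    next.emplace_front(eChild, childContribution);
  } else {
    (*it).second += childContribution;
  }
}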

Comment on lines 74 to 76
const auto it = std::find_if(
    next.begin(), next.end(),
    [&eChild](const auto& p) { return p.first == eChild; });
@burgholzer (Member) commented:

Thinking about the complexity here, I believe this could also benefit from using an unordered_set.
Iteration speed is similar, and lookup is way faster.
Should be a no-brainer.
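As a sketch of that idea (assuming the types from the quoted snippet): since each queued edge also carries an accumulated contribution, an unordered_map keyed by the edge pointer, a close cousin of the suggested unordered_set, gives the same average constant-time lookup while keeping the value alongside the key:

#include <unordered_map>

// Sketch only: replaces the linear std::find_if scan with average O(1)
// lookup. operator[] value-initializes the mapped double to 0.0 on first
// encounter, so insertion and accumulation collapse into a single line.
std::unordered_map<const vEdge*, double> next;

// ... inside the child loop:
next[eChild] += contribution * ComplexNumbers::mag2(eChild->w);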

}

VectorDD approx = rebuild(state, exclude, dd);
approx.w = dd.cn.lookup(approx.w / std::sqrt(ComplexNumbers::mag2(approx.w)));
@burgholzer (Member) commented:

I was about to write that this computation is likely to be redundant because you could simply set the top weight to one, but that would discard the global phase.

What could work, though:
Before starting the approximation, save the top edge weight of the incoming DD.
Then set the weight to exactly one and start the approximation procedure.
At the end, set approx.w to the initial edge weight.

I am fairly certain that the top edge weight is not affected by approximation, so the above simplification should be valid.
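A rough sketch of that procedure (names follow the quoted snippets; oneWeight() is a hypothetical placeholder for however the package expresses an edge weight of exactly one):

const auto topWeight = state.w; // save the incoming top weight (preserves the global phase)
VectorDD normalized = state;
normalized.w = oneWeight();     // hypothetical: set the top weight to exactly one
VectorDD approx = rebuild(normalized, exclude, dd);
approx.w = topWeight;           // restore at the end; approximation leaves the top weight untouched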

Comment on lines +24 to +26
* @details Traverses the decision diagram layer by layer in a breadth-first
* manner (iterative deepening algorithm) and eliminates edges greedily until
* the budget (1 - @p fidelity) is exhausted.
@burgholzer (Member) commented:

Once all the details of the implementation are clarified, I believe it would be helpful to add a brief description of the complexity of the approximation operation to the docstring here.
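Purely as an illustration of where such a note could go (the concrete bound has to wait for the final implementation), the docstring might gain a line along these lines:

/**
 * @details Traverses the decision diagram layer by layer in a breadth-first
 * manner and eliminates edges greedily until the budget (1 - @p fidelity)
 * is exhausted.
 * @note Complexity: <fill in once finalized>, ideally linear in the number
 * of DD nodes, i.e., each node is visited at most once.
 */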

* @brief Recursively rebuild @p state. Exclude edges contained in @p exclude.
* @return Rebuilt VectorDD.
*/
VectorDD rebuild(const VectorDD& state,
@burgholzer (Member) commented:

At the moment, this method traverses the full DD down to the terminals. And it does so without checking whether a certain successor has already been processed. As such, this function will necessarily scale exponentially in the number of qubits since it will visit every possible path through the decision diagram (which, in a decision diagram without zero terminals, are always exponentially many).
This is undesirable and seems unnecessary.

Furthermore, with this function, there is a second top-down traversal of the decision diagram in the approximation routine. One for collecting the nodes to be eliminated, and another one for eliminating the nodes themselves. I am fairly certain that there is a way to directly eliminate the nodes as part of the main descent through the decision diagram.
This might not be as apparent because the main approximation loop is implemented iteratively instead of recursively (like this function is).
However, I think this can be made to work even in the iterative case.
This would also avoid one of the drawbacks of the current method, namely that it really rebuilds the entire DD, even when there is nothing to eliminate.
Just as an example of a potential optimization (which will probably be obsolete once this two-step pass is avoided): if none of the edges in the exclude set has a variable index smaller than X, then there is no need to descend further than level X+1, because nothing changes below that level.
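To make the complexity argument concrete, here is a self-contained toy (not the PR's code) showing how a cache keyed by node identity collapses the exponential path count to one visit per node, which is the standard remedy for DD traversals:

#include <cstddef>
#include <iostream>
#include <unordered_map>
#include <vector>

// Toy DAG where both outgoing edges of a node alias the same successor,
// the worst case for path enumeration: following every path costs ~2^depth,
// while caching one result per node keeps the work linear.
struct Node {
  Node* next = nullptr;
};

static std::size_t gWork = 0;

int rebuildToy(Node* n, std::unordered_map<Node*, int>& cache) {
  if (n == nullptr) {
    return 0;
  }
  if (const auto it = cache.find(n); it != cache.end()) {
    return it->second; // already rebuilt; reuse the cached result
  }
  ++gWork; // real work happens at most once per node
  const int depth = 1 + rebuildToy(n->next, cache); // first child
  rebuildToy(n->next, cache);                       // second child: cache hit
  cache.emplace(n, depth);
  return depth;
}

int main() {
  std::vector<Node> chain(30);
  for (std::size_t i = 0; i + 1 < chain.size(); ++i) {
    chain[i].next = &chain[i + 1];
  }
  std::unordered_map<Node*, int> cache;
  rebuildToy(&chain[0], cache);
  std::cout << "work: " << gWork << '\n'; // prints 30, not ~2^30
  return 0;
}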

@burgholzer (Member) commented:
Ah, and don't be worried about the failing Python tests in the CI. These are unrelated to your changes and will be more or less automatically resolved soon 😌

Labels: c++, DD, feature, refactor
Project status: In Progress
2 participants