✨ Migrate and Refactor Approximation Functionality from DDSim Package #908
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Just a couple of thoughts because I skimmed over the PR 😌
I also approved the CI so that the corresponding checks should run now.
CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.
I really really like where this is going. Thanks for your continued work on this.
The structure of the approximation has already become way cleaner and clearer.
As you will see from the comments, there are a couple of complexity aspects that are still suboptimal and where I believe improvements make sense.
Most of these should be fairly easy to incorporate. The hardest one will probably be avoiding the two-stage approximation traversal. But, at the same time, it's also the most drastic performance blocker.
I would be tempted to say that some tests that track the complexity of the approximation would be in order, but I know from experience that these are near impossible to write. Or they require some kind of additional tracking, which introduces overhead both memory- and runtime-wise.
Some kind of middle ground could be to run the tests of the test suite through a profiler and check the call counts for certain critical functions. This would probably reveal the exponential complexity of the current rebuild function.
Maybe these thoughts trigger an idea on your side for how to further improve the tests (which are already really great!)
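Purely as an illustration of that profiling idea (everything below is hypothetical and not part of this PR), a test could track the call count of a critical function via a simple counter and assert a loose growth bound between two problem sizes:

#include <atomic>
#include <cstddef>

// Hypothetical instrumentation: a counter that the function under scrutiny
// (e.g., the rebuild routine) would increment on every invocation.
inline std::atomic<std::size_t> rebuildCallCount{0};

// A test could then compare the counts for two input sizes, e.g.:
//   rebuildCallCount = 0;
//   approximate(smallState, fidelity, *dd);
//   const auto callsSmall = rebuildCallCount.load();
//   rebuildCallCount = 0;
//   approximate(largeState, fidelity, *dd);
//   EXPECT_LE(rebuildCallCount.load(), 4 * callsSmall); // illustrative bound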
test/dd/test_approximations.cpp
Outdated
// Test: If the budget is 0, no approximation will be applied.
//
// |state⟩ = 0.866|0⟩ + 0.5|1⟩
That's not the state created by this test circuit😉
I suspect you might want to change the circuit.
thanks & sorry🤦🏼
test/dd/test_approximations.cpp
Outdated
auto p = approximate(state, fidelity, *dd);
auto approx = p.first;
Probably just a matter of taste, but this could look a little cleaner using structured bindings and an additional assertion that the returned fidelity is 1.
Something like
const auto& [approx, fid] = approximate(state, fidelity, *dd);
EXPECT_EQ(fid, fidelity);
Similar reasoning can be applied to the other tests.
test/dd/test_approximations.cpp
Outdated
// Test: If the budget is too small, no approximation will be applied.
//
// |state⟩ = 0.866|0⟩ + 0.5|1⟩
Also not the state being prepared in the test.
test/dd/test_approximations.cpp
Outdated
dd->decRef(p.first);
dd->garbageCollect(true);

EXPECT_EQ(dd->vUniqueTable.getNumEntries(), 0);
I'd argue that this could be combined with the previous test.
The description comment is still helpful and could also be copied over.
test/dd/test_approximations.cpp
Outdated
qcRef.h(0);
qcRef.x(1);
qcRef.cx(0, 1);
qcRef.cx(1, 0);
Maybe I am stupid, but the resulting state is a product state, so single-qubit operations are completely sufficient to construct the reference state.
Specifically,
- qcRef.h(0);
- qcRef.x(1);
- qcRef.cx(0, 1);
- qcRef.cx(1, 0);
+ qcRef.x(0);
+ qcRef.h(1);
Should just work.
You are absolutely right.
src/dd/Approximation.cpp
Outdated
if (contribution <= budget) {
  exclude.emplace_front(e);
  budget -= contribution;
} else if (!e->isTerminal()) {
  const vNode* n = e->p;

  for (const vEdge& eChildRef : n->e) {
    const vEdge* eChild = &eChildRef;

    if (!eChild->w.exactlyZero()) {
      const double childContribution =
          contribution * ComplexNumbers::mag2(eChild->w);
      const auto it = std::find_if(
          next.begin(), next.end(),
          [&eChild](const auto& p) { return p.first == eChild; });
      if (it == next.end()) {
        next.emplace_front(eChild, childContribution);
      } else {
        (*it).second += childContribution;
      }
    }
  }
}
This might just be me and my personal programming preference, so there is definitely no need for you to change that, but just so that I have said it.
I am a big fan of reducing the levels of nesting in code.
I would probably place a continue statement in the first if block and de-nest the subsequent else if by flipping the if condition and continuing when it is not met.
That reduces at least one level of indirection, if not two.
At times, this can really help to make the code a little cleaner and seem less complex.
The exactlyZero check could also be flipped to de-nest the subsequent code.
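For illustration only, a rough sketch of the de-nested shape the quoted loop body could take (it reuses the names from the snippet above and assumes it sits inside the same traversal loop; a sketch, not a drop-in replacement):

// De-nested sketch: early continues instead of nested if/else.
if (contribution <= budget) {
  exclude.emplace_front(e);
  budget -= contribution;
  continue; // nothing more to do for this edge
}
if (e->isTerminal()) {
  continue; // terminals have no children to inspect
}
for (const vEdge& eChildRef : e->p->e) {
  const vEdge* eChild = &eChildRef;
  if (eChild->w.exactlyZero()) {
    continue; // skip zero-weight edges
  }
  const double childContribution =
      contribution * ComplexNumbers::mag2(eChild->w);
  const auto it =
      std::find_if(next.begin(), next.end(),
                   [&eChild](const auto& p) { return p.first == eChild; });
  if (it == next.end()) {
    next.emplace_front(eChild, childContribution);
  } else {
    it->second += childContribution;
  }
}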
src/dd/Approximation.cpp
Outdated
const auto it = std::find_if(
    next.begin(), next.end(),
    [&eChild](const auto& p) { return p.first == eChild; });
Thinking about the complexity here, I believe this could also benefit from using an unordered_set.
Iteration speed is similar. Lookup is way faster.
Should be a no-brainer.
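To make that concrete, a minimal hedged sketch of the lookup: since the quoted code accumulates a contribution per child edge, an unordered_map keyed by the edge pointer (rather than a plain set) is assumed here; the names mirror the snippet and are not the actual implementation.

#include <unordered_map>

// Assumed container for the next layer: maps each child edge to its
// accumulated contribution, giving amortized O(1) lookup and insertion.
std::unordered_map<const vEdge*, double> next;

// Inside the child loop, the linear std::find_if search then collapses to:
next[eChild] += childContribution; // value-initializes to 0.0 on first insert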
src/dd/Approximation.cpp
Outdated
}

VectorDD approx = rebuild(state, exclude, dd);
approx.w = dd.cn.lookup(approx.w / std::sqrt(ComplexNumbers::mag2(approx.w)));
I was about to write that this computation is likely to be redundant because you could simply set the top weight to one, but that would discard the global phase.
What could work though:
Before starting the approximation, save the top edge weight of the incoming DD.
Then set the weight to exactly one and start the approximation procedure.
At the end, set approx.w to the initial edge weight.
I am fairly certain that the top edge weight is not affected by approximation, so the above simplification should be valid.
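A hedged sketch of that save-and-restore idea (the exact Complex/VectorDD API is assumed here; in particular, the unit-weight constant and member names are placeholders rather than verified identifiers):

// Sketch only: remember the incoming top edge weight, run the approximation
// with a unit weight, and restore the original weight (and with it the
// global phase) on the result.
auto root = state;                  // work on a copy of the incoming edge
const auto topWeight = root.w;      // save the original top edge weight
root.w = Complex::one();            // assumption: some unit-weight constant exists
VectorDD approx = rebuild(root, exclude, dd);
approx.w = topWeight;               // restore weight; global phase preserved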
 * @details Traverses the decision diagram layer by layer in a breadth-first
 * manner (iterative deepening algorithm) and eliminates edges greedily until
 * the budget (1 - @p fidelity) is exhausted.
Once all the details of the implementation are clarified, I believe it would be helpful to add a brief description of the complexity of the approximation operation to the docstring here.
src/dd/Approximation.cpp
Outdated
 * @brief Recursively rebuild @p state. Exclude edges contained in @p exclude.
 * @return Rebuilt VectorDD.
 */
VectorDD rebuild(const VectorDD& state,
At the moment, this method traverses the full DD down to the terminals. And it does so without checking whether a certain successor has already been processed. As such, this function will necessarily scale exponentially in the number of qubits since it will visit every possible path through the decision diagram (of which, in a decision diagram without zero terminals, there are always exponentially many).
This is undesirable and seems unnecessary.
Furthermore, with this function, there is a second top-down traversal of the decision diagram in the approximation routine. One for collecting the nodes to be eliminated, and another one for eliminating the nodes themselves. I am fairly certain that there is a way to directly eliminate the nodes as part of the main descent through the decision diagram.
This might not be as apparent because the main approximation loop is implemented iteratively instead of recursively (like this function is).
However, I think this can be made to work even in the iterative case.
This would also avoid one of the drawbacks here that the current method really rebuilds the entire DD; even when there is nothing to eliminate.
Just as an example for a potential optimization (which will probably be obsolete if this two-step pass is avoided): if I look at the set of edges to be excluded and none of them has a variable index smaller than X, then I do not need to descend further than to level X+1 because nothing will change below that level.
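To illustrate the "visit every node only once" aspect in isolation, here is a small self-contained toy model (deliberately not using the real dd types): a memoized path count over a binary DAG. Without the cache, the recursion scales with the exponentially many paths; with it, every node is handled exactly once, and the real rebuild could cache its rebuilt edge per node in the same way.

#include <cstddef>
#include <unordered_map>

// Toy DAG node with two successors, standing in for a decision diagram node.
struct ToyNode {
  const ToyNode* zero = nullptr; // nullptr models the terminal
  const ToyNode* one = nullptr;
};

// Counts root-to-terminal paths. The cache ensures each node is processed
// exactly once, even though the number of paths grows exponentially.
std::size_t countPaths(const ToyNode* n,
                       std::unordered_map<const ToyNode*, std::size_t>& cache) {
  if (n == nullptr) {
    return 1; // reached the terminal along one path
  }
  if (const auto it = cache.find(n); it != cache.end()) {
    return it->second; // already processed: reuse instead of descending again
  }
  const std::size_t result =
      countPaths(n->zero, cache) + countPaths(n->one, cache);
  cache.emplace(n, result);
  return result;
}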
Ah, and don't be worried about the failing Python tests in the CI. These are unrelated to your changes and will be more or less automatically resolved soon 😌
Description
This PR adds DD approximation functionality. Most of the functions originate from the DDSim Package. However, instead of simply copying and gluing the packages together, I re-implemented and refactored most of it.
Checklist: