Conversation

@emlynsg
Contributor

@emlynsg emlynsg commented Dec 9, 2025

This PR corrects issues identified in the discussion of issue #363. It is opened with a couple of remaining todos to allow for discussion of the merits of removing the `keep_inputs` field across the Graphix codebase.

This PR removes the `keep_inputs` field and related code from the Pauli presimulation components of patterns and from the tests.
As a result, any inputs passed through the Pauli presimulator are removed, and new input nodes are generated for the output patterns.

Todo:

  • Discuss the change to the FFT test: the number of nodes has changed compared to the previous code, but the rest of the tests still pass. This is likely an artefact of the change from input nodes to created nodes.
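
As a rough usage sketch of the workflow this change affects (assuming graphix's `Circuit`/`Pattern` API; whether `transpile()` returns the pattern directly or a result object with a `.pattern` attribute depends on the graphix version, so that line is an assumption):

```python
from graphix import Circuit

# Build a small circuit and transpile it to a measurement pattern.
circuit = Circuit(2)
circuit.h(0)
circuit.cnot(0, 1)
pattern = circuit.transpile().pattern  # older graphix versions return the pattern directly

pattern.standardize()
pattern.shift_signals()
print(pattern.input_nodes)  # open input nodes before Pauli presimulation

# After this PR there is no keep_inputs flag: presimulation consumes the inputs
# and the resulting pattern uses freshly prepared nodes instead.
pattern.perform_pauli_measurements()
print(pattern.input_nodes)  # inspect how the input nodes have changed
```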

@codecov

codecov bot commented Dec 9, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 85.02%. Comparing base (c648681) to head (28397fa).
⚠️ Report is 1 commit behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #392      +/-   ##
==========================================
+ Coverage   84.87%   85.02%   +0.14%     
==========================================
  Files          46       46              
  Lines        6639     6649      +10     
==========================================
+ Hits         5635     5653      +18     
+ Misses       1004      996       -8     


Contributor

@matulni matulni left a comment


Thanks @emlynsg for this PR, it addresses a crucial issue!

I made some small code remarks and some comments about the API below. We can discuss them.

We can think of a strongly deterministic pattern $P$ with $m$ inputs and $n$ outputs as a unitary map between Hilbert spaces of $m$ and $n$ qubits,

$$P: \mathcal{H}^{\otimes m} \rightarrow \mathcal{H}^{\otimes n},$$

with $m \leq n$.

With the current API, `Pattern.perform_pauli_measurements` can be understood as a transformation $T$ such that $T[P] = P'$, where

$$P': \tilde{0} \rightarrow \mathcal{H}^{\otimes n},$$

where $\tilde{0}$ denotes the trivial, zero-qubit Hilbert space.

This implies that $P$ and $P'$ represent the same unitary if and only if $m=0$ in the first place.
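
As an illustrative special case: if $P$ realizes the identity on a single qubit, so that $m = n = 1$ and

$$P \lvert \psi \rangle = \lvert \psi \rangle \quad \text{for all } \lvert \psi \rangle \in \mathcal{H},$$

then the presimulated $P'$ has no inputs left and simply prepares one fixed state in $\mathcal{H}$, so the dependence on $\lvert \psi \rangle$ is lost.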

I understand that this is a limitation of the Pauli presimulator, and it's a long-term goal to improve it, but my concern is that this dimensionality reduction occurs inadvertently. In particular, the example in #168 (comment) still fails. At the same time, I think it's natural to expect that the Pauli presimulation preserves the unitary the pattern represents.

I propose two alternative solutions:

  • Defensive approach: `Pattern.perform_pauli_measurements` raises an exception if called on a pattern with a nonzero number of inputs.

  • Flexible approach: `Pattern.perform_pauli_measurements` raises a warning when the number of inputs is reduced.

In either case, this behavior should be clearly explained in the docstring.
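
A minimal sketch of the flexible approach, assuming the check runs before presimulation and that `Pattern.input_nodes` lists the open inputs; the helper name `warn_if_inputs_will_be_dropped` is hypothetical and illustrative only, not code from this PR:

```python
import warnings

def warn_if_inputs_will_be_dropped(pattern) -> None:
    """Warn that Pauli presimulation will consume the pattern's open inputs.

    Illustrative sketch of the 'flexible approach' described above.
    """
    if pattern.input_nodes:
        warnings.warn(
            "Pauli presimulation removes the input nodes; the resulting pattern "
            "no longer represents the same map as the original pattern.",
            stacklevel=2,
        )
```

The defensive approach would be identical, except that the branch raises an exception instead of emitting a warning.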

@emlynsg
Contributor Author

emlynsg commented Dec 10, 2025


Thanks Mateo. I agree, and I was thinking along the same lines (but forgot to mention anything in the commit text for discussion). It seems that in general we have taken the defensive approach for other similar parts of the API. One idea is to add some sort of remove-inputs function that we suggest users apply first, in much the same way as we say that standardization needs to be done before some other operations; see the sketch below.
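
A sketch of how that workflow could look from the user's side; `remove_inputs` is a hypothetical helper name used only to illustrate the suggestion and does not exist in graphix:

```python
# Hypothetical workflow: the user explicitly opts in to dropping the open inputs
# before presimulation, much as standardization is required before other operations.
pattern.standardize()
pattern.shift_signals()
pattern.remove_inputs()               # hypothetical helper, not an existing graphix method
pattern.perform_pauli_measurements()  # no inputs left to be consumed inadvertently
```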

@shinich1
Contributor

Could you also update the tutorial before merging? Thanks.

Comment on lines 1669 to 1670
if pat._pauli_preprocessed is True:
    return pat
Collaborator


This test is fragile:

  • If `copy` is True, `pat._pauli_preprocessed` will always be False, even if the pattern has been Pauli-processed.
  • Even if `copy` is False, if we Pauli-preprocess a pattern and then add an element to it, `_pauli_preprocessed` will be reset, but `results` still contains measurements that will be discarded.

I think properly handling the former contents of `results` is the way to go.
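
To illustrate the second bullet as a sketch (the mutation call and the flag-reset behavior are assumptions taken from the description above, not verified graphix behavior):

```python
# Sketch of the fragility described above; `new_command` stands for any element
# appended after presimulation.
pat.perform_pauli_measurements()  # fills pat.results and sets pat._pauli_preprocessed
pat.add(new_command)              # mutating the pattern resets _pauli_preprocessed
# The guard `if pat._pauli_preprocessed is True: return pat` no longer fires, yet
# pat.results still holds outcomes from the first presimulation, which will be
# discarded when the pattern is processed again.
```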

Contributor

@matulni matulni left a comment


Thanks! In the future we can reconsider removing `self._pauli_preprocessed` altogether, but I think that's outside the scope of this PR.
LGTM!

@thierry-martinez
Collaborator

Regarding e9eeafd (disabling the QASM parser): we can keep the QASM parser enabled with PR TeamGraphix/graphix-qasm-parser#7, which makes graphix-qasm-parser use the graphix master branch until the next version is released. This will fix the graphix CI without disabling any tests. Currently, the graphix-qasm-parser main branch fails to pass the CI, and that issue will be fixed as well.

thierry-martinez added a commit to matulni/graphix that referenced this pull request Jan 6, 2026
This commit updates the baseline while TeamGraphix#392 is not merged.
Collaborator

@thierry-martinez thierry-martinez left a comment


LGTM!

@emlynsg emlynsg merged commit 53c4fde into master Jan 7, 2026
25 checks passed
@emlynsg emlynsg deleted the remove-pp-inputs branch January 7, 2026 03:16