
Conversation


a4894z commented Jun 25, 2025

When using the probe power constraint, because of the way PtyChi does its FFT scaling, the probe's power gets scaled to P / N^2 (where the probe array is N x N) instead of P.

This small fix should account for that; alternatively, we could initialize the probe power constraint to P * N^2 and not use this small fix, but that seems like a clunky way of doing things.

Another way to account for this is to skip the propagate_forward() part of the code, compute the probe norm directly, e.g. torch.sum(probe_composed.abs() ** 2), and then rescale the probe and sample using this power correction.
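
A rough sketch of that alternative is below; the helper name and the way the probe and object get rescaled are illustrative assumptions, not PtyChi's actual API:

    import torch

    def real_space_power_correction(probe_composed: torch.Tensor, probe_power: float) -> torch.Tensor:
        # Compute the probe power directly in real space, so no FFT
        # normalization convention enters the calculation.
        current_power = torch.sum(probe_composed.abs() ** 2)
        return torch.sqrt(probe_power / current_power)

    # The probe would then be multiplied by the correction and the object (sample)
    # divided by it, so the probe-object product stays unchanged.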

a4894z requested review from Copilot and mdw771 June 25, 2025 21:35
Copilot AI left a comment

Pull Request Overview

This PR fixes the probe power constraint by compensating for the unnormalized FFT in the forward propagator, ensuring the probe’s power remains at the intended value.

  • Divide the summed squared magnitude by the number of elements to correct FFT scaling
  • Recompute power_correction using the normalized probe power
Comments suppressed due to low confidence (2)

src/ptychi/data_structures/probe.py:369

  • Add an inline comment explaining that dividing by the number of elements compensates for the FFT’s lack of normalization, preserving the intended total probe power.
        propagated_probe_power = torch.sum(propagated_probe.abs() ** 2) / self.data.size().numel()

src/ptychi/data_structures/probe.py:369

  • Add or update unit tests to verify that constrain_probe_power correctly normalizes the probe power after FFT propagation, covering both the old and new behavior.
        propagated_probe_power = torch.sum(propagated_probe.abs() ** 2) / self.data.size().numel()

  propagated_probe = propagator.propagate_forward(probe_composed)
- propagated_probe_power = torch.sum(propagated_probe.abs() ** 2)
+ propagated_probe_power = torch.sum(propagated_probe.abs() ** 2) / self.data.size().numel()
  power_correction = torch.sqrt(self.probe_power / propagated_probe_power)
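
The added division compensates for the unnormalized forward FFT. As a quick standalone check (not part of the PR, and assuming the propagator boils down to an unnormalized fft2 as described above), Parseval's relation with torch's default norm="backward" gives a factor of N * N between real-space and Fourier-space power:

    import torch

    N = 64
    probe = torch.randn(N, N, dtype=torch.complex64)

    power_real = torch.sum(probe.abs() ** 2)
    power_fourier = torch.sum(torch.fft.fft2(probe).abs() ** 2)

    # With norm="backward" (the default), the forward transform is unnormalized,
    # so power_fourier equals N * N * power_real up to floating-point error.
    assert torch.allclose(power_fourier, power_real * N * N, rtol=1e-4)
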
Copilot AI Jun 25, 2025

Consider using propagated_probe.numel() instead of self.data.size().numel() to clearly reference the propagated tensor’s element count and avoid possible mismatches if shapes differ.

Suggested change
propagated_probe_power = torch.sum(propagated_probe.abs() ** 2) / self.data.size().numel()
propagated_probe_power = torch.sum(propagated_probe.abs() ** 2) / propagated_probe.numel()
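
Both expressions evaluate to the same count as long as the propagated probe keeps the shape of self.data; a tiny illustration with made-up shapes (torch.Size exposes numel(), so data.size().numel() equals data.numel()):

    import torch

    data = torch.zeros(2, 1, 64, 64, dtype=torch.complex64)  # hypothetical probe stack
    propagated_probe = torch.fft.fft2(data)                   # the FFT preserves the shape

    # The suggested form reads the count off the propagated tensor itself,
    # which stays correct even if its shape were ever to differ from self.data.
    assert data.size().numel() == data.numel() == propagated_probe.numel()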


mdw771 commented Aug 8, 2025

Sorry, it slipped out of my sight... merging it now

mdw771 merged commit 7aec2ca into main Aug 8, 2025
3 of 4 checks passed