
Random sampling asymmetry between simulation and measurement in WaveDiff pipeline #176

@jeipollack

Description


Hi (again) @tobias-liaudat,

While writing unit tests for the centroid and tip/tilt functions, I came back to the asymmetry between the computed tip and tilt and the input values. After speaking with PAF, I went through the data-generation code, plugged in values, and noticed that each displacement is remapped with the scale_to_range function:

# Sample raw x-shifts in [0, 1), then remap to the intrapixel shift range
delta_pix_x = np.random.rand(total_n_stars).reshape(-1, 1)
delta_pix_x = scale_to_range(delta_pix_x, [0.0, 1.0], intrapixel_shift_range)
delta_Z1_arr = shift_x_y_to_zk1_2_wavediff(delta_pix_x * pix_sampling)
# Same for the y-shifts
delta_pix_y = np.random.rand(total_n_stars).reshape(-1, 1)
delta_pix_y = scale_to_range(delta_pix_y, [0.0, 1.0], intrapixel_shift_range)
delta_Z2_arr = shift_x_y_to_zk1_2_wavediff(delta_pix_y * pix_sampling)

Perhaps the reason the measured Zernike tip and tilt coefficients are approximately the negative of the input values is that scale_to_range() runs during simulation but not during measurement, creating an asymmetry.

There is this note in the docstring of shift_x_y_to_zk1_2_wavediff:

To apply match the centroid with a `dx` that has a corresponding `zk1`,
the new PSF should be generated with `-zk1`.

which could indicate that the measured values are indeed expected to be the negative of the input values.

Questions:

  1. Should the same random sampling approach be used in both simulation and measurement?
  2. Is the negative relationship between input and measured values expected due to sign conventions, or is it a bug?
  3. Can we simplify the code while maintaining consistency?

Regarding simplification: np.random.rand + scale_to_range works if applied consistently, but it can be expressed more concisely with np.random.uniform(low, high) (https://numpy.org/doc/stable/reference/random/generated/numpy.random.uniform.html), avoiding the rescaling step entirely.
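As a sketch of the equivalence (assuming scale_to_range is a plain linear remap from the old range to the new one, which matches how it is called above; the reimplementation below is hypothetical and the shift range is an example value):

```python
import numpy as np

def scale_to_range(x, old_range, new_range):
    # Hypothetical reimplementation: linear remap from old_range to new_range.
    old_lo, old_hi = old_range
    new_lo, new_hi = new_range
    return new_lo + (x - old_lo) * (new_hi - new_lo) / (old_hi - old_lo)

rng = np.random.default_rng(42)
intrapixel_shift_range = [-0.5, 0.5]  # example range, in pixels

# Current approach: sample in [0, 1), then rescale.
raw = rng.random((5, 1))
scaled = scale_to_range(raw, [0.0, 1.0], intrapixel_shift_range)

# Proposed one-liner: sample directly over the target range.
direct = rng.uniform(*intrapixel_shift_range, size=(5, 1))

# Both draws are uniform over intrapixel_shift_range.
assert scaled.min() >= intrapixel_shift_range[0]
assert scaled.max() < intrapixel_shift_range[1]
```

Either way, applying the same sampling (and the same sign convention) in both simulation and measurement should remove the asymmetry.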

If uniform() replaces np.random.rand + scale_to_range, it could also be applied here:

train_positions = np.random.rand(n_train_stars, 2)
test_positions = np.random.rand(n_test_stars, 2)
# Scale the positions to the field of view
train_positions[:, 0] = scale_to_range(train_positions[:, 0], [0, 1], x_lims)
train_positions[:, 1] = scale_to_range(train_positions[:, 1], [0, 1], y_lims)
test_positions[:, 0] = scale_to_range(test_positions[:, 0], [0, 1], x_lims)
test_positions[:, 1] = scale_to_range(test_positions[:, 1], [0, 1], y_lims)
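For concreteness, the position sampling above could be collapsed to (the star counts and field limits below are example values, not the pipeline's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train_stars, n_test_stars = 4, 2
x_lims, y_lims = [0.0, 1000.0], [0.0, 1000.0]  # example field-of-view limits

# Same distribution as rand + scale_to_range, expressed directly per axis.
train_positions = np.column_stack([
    rng.uniform(*x_lims, size=n_train_stars),
    rng.uniform(*y_lims, size=n_train_stars),
])
test_positions = np.column_stack([
    rng.uniform(*x_lims, size=n_test_stars),
    rng.uniform(*y_lims, size=n_test_stars),
])
```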
