
Conversation

@lucaeg (Contributor) commented Jan 15, 2026

Vectorize the sampling rather than drawing one value at a time. This cosmetic change makes the code cleaner and easier to understand. In our view, drawing all samples for a parameter simultaneously is more natural and better aligned with standard sampling APIs (e.g., scipy).
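
To illustrate the idea, here is a minimal sketch assuming a numpy/scipy-based prior (the names are illustrative, not this PR's actual diff):

    import numpy as np
    from scipy.stats import norm

    # Old style: one value per realization, drawn in a loop.
    rng = np.random.default_rng(1234)
    one_at_a_time = np.array([norm.ppf(rng.uniform()) for _ in range(100)])

    # New style: all samples for the parameter drawn in a single call,
    # mirroring the array-based scipy/numpy sampling APIs.
    rng = np.random.default_rng(1234)
    vectorized = norm.ppf(rng.uniform(size=100))

    # For plain uniform draws the numpy Generator consumes its bit stream
    # the same way in both cases, so the two styles agree.
    assert np.allclose(one_at_a_time, vectorized)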

  • PR title captures the intent of the changes, and is fitting for release notes.
  • Added appropriate release note label
  • Commit history is consistent and clean, in line with the contribution guidelines.
  • Make sure unit tests pass locally after every commit (git rebase -i main --exec 'just rapid-tests')

When applicable

  • When there are user-facing changes: Updated documentation
  • New behavior or changes to existing untested code: Ensured that unit tests are added (See Ground Rules).
  • Large PR: Prepare changes in small commits for more convenient review
  • Bug fix: Add regression test for the bug
  • Bug fix: Add backport label to latest release (format: 'backport release-branch-name')

@lucaeg lucaeg requested a review from oyvindeide January 15, 2026 12:50
@codecov-commenter commented Jan 15, 2026

❌ 1 Tests Failed:

Tests completed    Failed    Passed    Skipped
3234               1         3233      91
View the full list of 1 ❄️ flaky test(s)
tests/ert/unit_tests/gui/simulation/test_run_dialog.py::test_that_exception_in_run_model_is_displayed_in_a_suggestor_window_after_simulation_fails

Flake rate in main: 89.87% (Passed 8 times, Failed 71 times)

Stack Traces | 360s run time
qtbot = <pytestqt.qtbot.QtBot object at 0x7ffb50f20c00>, use_tmpdir = None

    @pytest.mark.integration_test
    @pytest.mark.usefixtures("use_tmpdir")
    def test_that_exception_in_run_model_is_displayed_in_a_suggestor_window_after_simulation_fails(  # noqa E501
        qtbot: QtBot, use_tmpdir
    ):
        config_file = "minimal_config.ert"
        Path(config_file).write_text(
            "NUM_REALIZATIONS 1\nQUEUE_SYSTEM LOCAL", encoding="utf-8"
        )
        args_mock = Mock()
        args_mock.config = config_file
    
        ert_config = ErtConfig.from_file(config_file)
        with patch.object(
            ert.run_models.SingleTestRun,
            "run_experiment",
            MagicMock(side_effect=ValueError("I failed :(")),
        ):
            gui = _setup_main_window(ert_config, args_mock, GUILogHandler(), "storage")
            qtbot.addWidget(gui)
            run_experiment = gui.findChild(QToolButton, name="run_experiment")
    
            handler_done = False
    
            def assert_failure_in_error_dialog(run_dialog):
                nonlocal handler_done
                wait_until(lambda: run_dialog.fail_msg_box is not None, timeout=10000)
                suggestor_termination_window = run_dialog.fail_msg_box
                assert suggestor_termination_window
                text = (
                    suggestor_termination_window.findChild(
                        QWidget, name="suggestor_messages"
                    )
                    .findChild(QLabel)
                    .text()
                )
                assert "I failed :(" in text
                button = suggestor_termination_window.findChild(
                    QPushButton, name="close_button"
                )
                assert button
                button.click()
                handler_done = True
    
            simulation_mode_combo = gui.findChild(QComboBox)
            simulation_mode_combo.setCurrentText("Single realization test-run")
            qtbot.mouseClick(run_experiment, Qt.MouseButton.LeftButton)
            run_dialog = wait_for_child(gui, qtbot, RunDialog)
    
            QTimer.singleShot(100, lambda: assert_failure_in_error_dialog(run_dialog))
            # Capturing exceptions in order to catch an assertion error
            # from assert_failure_in_error_dialog and stop waiting
            with qtbot.captureExceptions() as exceptions:
                qtbot.waitUntil(
                    lambda: run_dialog.is_simulation_done() is True or bool(exceptions),
                    timeout=100000,
                )
                qtbot.waitUntil(lambda: handler_done or bool(exceptions), timeout=100000)
            if exceptions:
>               raise AssertionError(
                    f"Exception(s) happened in Qt event loop: {exceptions}"
                )
E               AssertionError: Exception(s) happened in Qt event loop: [(<class 'Failed'>, Timeout (>360.0s) from pytest-timeout., <traceback object at 0x7ffb5eab9fc0>)]

.../gui/simulation/test_run_dialog.py:652: AssertionError

To view more test analytics, go to the Test Analytics Dashboard

@codspeed-hq bot commented Jan 15, 2026

CodSpeed Performance Report

Merging this PR will not alter performance

Comparing vectorize-sample-prior (4a38d50) with main (f0d9de4)

Summary

✅ 22 untouched benchmarks

@lucaeg lucaeg added this to SCOUT Jan 16, 2026
@lucaeg lucaeg moved this to Ready for Review in SCOUT Jan 16, 2026
@achaikou left a comment

I am sorry to crash the party uninvited, but I was out of stuff to do 😊

I am also missing some context in the commit description about the reason for the change (is there some problem to be solved, or would the code just look nicer?), so I have to guess 😄

@achaikou left a comment

Am I right to assume that all those commits would be squashed?
I am just more accustomed to the fixup commit style 🙂

Btw, I am happy with the changes, but I don't feel enough of an ert-adult yet to click the "Approve" button for this PR 😄

Comment on lines 146 to 147
- num_realizations (int): Total number of realizations to generate the
reproducible sampling.


Got somewhat confused by the wording (thinking it was "number of realizations that would be generated").

Taking words from your other comment, maybe something like

Total number of realizations. Ensures stable sampling for a given global_seed regardless of the currently active realizations.

?
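
For what it's worth, a hypothetical sketch of the reproducibility argument (the function, seeding scheme, and names here are all illustrative, not ert's actual code): drawing for all num_realizations up front fixes the value for realization i by the global seed alone, regardless of which realizations are active.

    import zlib
    import numpy as np

    def sample_parameter(global_seed, param_name, num_realizations, active):
        # Derive a per-parameter seed from the global seed and the
        # parameter name (crc32 keeps it stable across processes,
        # unlike Python's built-in hash()).
        seed = np.random.SeedSequence(
            [global_seed, zlib.crc32(param_name.encode())]
        )
        rng = np.random.default_rng(seed)
        # Always draw for *all* realizations, then select the active ones,
        # so realization i gets the same value no matter which subset runs.
        return rng.standard_normal(num_realizations)[active]

    all_active = sample_parameter(42, "PORO", 10, np.arange(10))
    subset = sample_parameter(42, "PORO", 10, np.array([3, 7]))
    assert np.allclose(all_active[[3, 7]], subset)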

@lucaeg (Contributor, Author) replied

Thanks, I have tried to make it clearer :) Also, yes, all the commits will be squashed!

@github-project-automation github-project-automation bot moved this from Ready for Review to Reviewed in SCOUT Jan 20, 2026