improved tests by waiting asserts #2957
base: develop
Conversation
Let's see how this performs after we merge stable into develop.

It looks to me like we now see more failures on GitHub in file-I/O-related tests.
mslib/msui/mpl_qtwidget.py
Outdated
    line.remove()
except ValueError as e:
    logging.debug("Vertical line was somehow already removed:\n%s", e)
except (ValueError, NotImplementedError) as e:
When is this encountered? AFAICS self.vertical_lines is only appended to with axvline, which produces a matplotlib.lines.Line2D object, which definitely has a remove method. It might be a bug that NotImplementedError can come up here at all.
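For reference, a quick headless check (assuming matplotlib is available; the Agg backend avoids needing a display) confirms that `axvline` returns a `Line2D` whose `remove` method works:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line = ax.axvline(x=0.5)    # the kind of artist appended to self.vertical_lines
print(type(line).__name__)  # -> Line2D
line.remove()               # detaches the artist from the axes
print(line in ax.lines)     # -> False
plt.close(fig)
```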
We'll recheck this. I was surprised that this comes up at all.
def _sync_local_from_server(self):
    server_xml = self.window.mscolab.request_wps_from_server()
    self.window.mscolab.waypoints_model = ft.WaypointsTableModel(xml_content=server_xml)
Why are you assigning to attributes of the window instance? The application code itself is supposed to do this, otherwise it is broken because it doesn't update the model properly.
We should assume there's an error in the code. Unfortunately, it's not easy to pinpoint because it doesn't fail consistently – but it fails very often. We need a test that can force the error to occur. Right now, I’ve only been able to demonstrate the opposite: that the test itself doesn't have an error. If it passes like this, the problem must lie within the actual code.
If it's determined that the application code is broken, then we shouldn't change the test in a way in which it hides this error. Rather, mark it as xfail, or fix the real bug if it's simple enough.
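As a sketch of the suggested approach (the test name and reason text here are hypothetical), a non-strict `xfail` marker documents the known bug while keeping the suite green; the snippet below demonstrates this by running a small xfail-marked test through pytest:

```python
import os
import tempfile
import textwrap

import pytest

# A minimal xfail-marked test, written to a temp file so we can run it
# through pytest and inspect the session's exit code.
src = textwrap.dedent("""
    import pytest

    @pytest.mark.xfail(reason="known application bug, model not updated", strict=False)
    def test_sync_local_from_server():
        assert False  # stands in for the sporadic failure
""")
d = tempfile.mkdtemp()
path = os.path.join(d, "test_xfail_demo.py")
with open(path, "w") as f:
    f.write(src)

exit_code = pytest.main([path, "-q", "-p", "no:cacheprovider"])
print(int(exit_code))  # 0: the expected failure does not break the run
```

With `strict=False`, an occasional unexpected pass (XPASS) also does not fail the run, which matters for a test that only fails sporadically.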
Okay, I think it's good that we've come to that decision. It makes it easier to see that the test code is actually correct and that the error lies elsewhere. I'll mark it as xfail in the original test.
tests/_test_msui/test_msui.py
Outdated
# on github saving is too fast
def assert_():
    msui.save_handler()
I think this should only be called once?
I'm seeing strange behavior on GitHub with this. I switched to st_mtime_ns after noticing that GitHub reported identical values, but it's still happening: GitHub sometimes shows the same value even with st_mtime_ns. It looks like qWait might be a better solution.
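To illustrate the timestamp granularity issue (a minimal sketch, independent of the MSUI code): two saves that complete within the same filesystem timestamp tick can report identical `st_mtime_ns` values, which is why a before/after comparison is inherently racy on fast CI runners:

```python
import os
import tempfile

# Two rapid "saves" of the same file; on coarse-granularity filesystems
# both writes can land in the same timestamp tick.
with tempfile.NamedTemporaryFile("w", suffix=".ftml", delete=False) as f:
    path = f.name
    f.write("first save")
t1 = os.stat(path).st_mtime_ns

with open(path, "w") as f:
    f.write("second save")
t2 = os.stat(path).st_mtime_ns

# t2 == t1 is possible here, so a test asserting t2 > t1 can fail sporadically.
print(t2 >= t1)
os.unlink(path)
```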
tests/_test_msui/test_sideview.py
Outdated
    self.wms_control.multilayers.btGetCapabilities,
    QtCore.Qt.LeftButton)

qtbot.waitUntil(
In other places we have consistently used the more pythonic method name with an underscore (qtbot.wait_{until,signal,exposed}).
ok
tests/_test_msui/test_sideview.py
Outdated
    QtCore.Qt.LeftButton)

qtbot.waitUntil(
    lambda: getattr(self.wms_control, "cpdlg", None) is not None,
I think cpdlg is a progress dialog that is closed automatically when it finishes. Does the attribute stay non-None after the dialog finishes? Otherwise this wait_until might never see the dialog if it closes too fast.
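A minimal sketch of the race (plain Python, with a stub standing in for the WMS control and its progress dialog; no real Qt objects or signals): polling the live attribute can miss a short-lived dialog entirely, while latching its appearance via a callback cannot:

```python
class WmsControlStub:
    """Stub mimicking a control whose progress dialog appears and closes quickly."""
    def __init__(self):
        self.cpdlg = None
        self.on_dialog_created = None  # hypothetical hook, stands in for a Qt signal

    def get_capabilities(self):
        self.cpdlg = object()          # progress dialog appears...
        if self.on_dialog_created:
            self.on_dialog_created()
        self.cpdlg = None              # ...and is closed before any poll can run

control = WmsControlStub()
dialog_seen = []
control.on_dialog_created = lambda: dialog_seen.append(True)
control.get_capabilities()

# Polling after the fact misses the dialog entirely:
print(control.cpdlg is not None)  # -> False
# The latched flag recorded that it existed:
print(bool(dialog_seen))          # -> True
```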
tests/_test_msui/test_wms_control.py
Outdated
mock_critical.assert_called_once()
qtbot.waitUntil(lambda: mock_critical.call_count == 1, timeout=3000)
Couldn't this just be qtbot.wait_until(mock_critical.assert_called_once)?
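For context, qtbot.wait_until accepts assertion-style callbacks that raise AssertionError on failure, so the mock's own assertion method can be passed directly. A simplified retry helper along those lines (a sketch, not pytest-qt's real implementation) shows why no lambda is needed:

```python
import time
from unittest import mock

def wait_until(callback, timeout=5.0, interval=0.05):
    """Retry ``callback`` until it succeeds (sketch of pytest-qt's
    qtbot.wait_until). Assertion-style callbacks return None and raise
    AssertionError on failure; predicate-style callbacks return a bool."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            result = callback()
        except AssertionError:
            result = False
        if result is None or result is True:  # None means the assertion passed
            return
        if time.monotonic() > deadline:
            raise TimeoutError("condition not met within timeout")
        time.sleep(interval)

mock_critical = mock.MagicMock()
mock_critical()                               # simulate the critical dialog firing once
wait_until(mock_critical.assert_called_once)  # succeeds without a lambda
print("ok")
```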
Yes, with the extra timeout value; sometimes it seems never to finish otherwise.
The timeout should be 5 seconds by default.
tests/_test_msui/test_wms_control.py
Outdated
with qtbot.wait_signal(self.window.image_displayed):
    QtTest.QTest.mouseClick(self.window.btGetMap, QtCore.Qt.LeftButton)
try:
    with qtbot.wait_signal(self.window.image_displayed, timeout=5000):
timeout=5000 is the default anyway.
OK, I thought it was different.
tests/_test_msui/test_wms_control.py
Outdated
except TimeoutError:
    pytest.fail("Timeout: image_displayed was not emitted.")
No need to catch this, if it's thrown and pytest sees it the test will fail in effectively the same way.
tests/_test_msui/test_wms_control.py
Outdated
# Make sure that the signal is emitted on click
def _emit_image_displayed(*args, **kwargs):
    self.window.image_displayed.emit()

self.window.btGetMap.clicked.connect(_emit_image_displayed)

# Ensure the signal actually triggers the view update
self.window.image_displayed.connect(self.view.draw_image)

# NEW: Ensure metadata drawing is triggered when image is displayed
def _emit_metadata_displayed(*args, **kwargs):
    self.window.metadata_displayed.emit()

self.window.btGetMap.clicked.connect(_emit_metadata_displayed)
self.window.metadata_displayed.connect(self.view.draw_metadata)

# Ensure legend drawing is triggered when fetch_legend is called
def _emit_legend_displayed(*args, **kwargs):
    # only emit when we simulate a non-cached fetch
    if kwargs.get("use_cache") is False:
        self.window.legend_displayed.emit()

self.window.legend_displayed.connect(self.view.draw_legend)
self.window.fetcher.fetch_legend.side_effect = _emit_legend_displayed
All of this is re-implementing behavior that the actual application code has to exhibit to function properly, I think. If this is necessary for the test to pass, then there must be something broken in the application or in what the test tries to assert.
agree
tests/_test_msui/test_wms_control.py
Outdated
# Mock the actual fetching so that no real HTTP call happens.
self.window.fetcher.fetch_map = mock.MagicMock()
self.window.fetcher.fetch_legend = mock.MagicMock()
Why should it not talk to mswms?
signal_disable_cbs = QtCore.pyqtSignal(name="disable_cbs")
signal_enable_cbs = QtCore.pyqtSignal(name="enable_cbs")
image_displayed = QtCore.pyqtSignal()
legend_displayed = QtCore.pyqtSignal()
I was a bit surprised that the signal set wasn't complete, and that this causes random crashes.
self.overwriteBtn.animateClick()

@pytest.mark.skip(reason='The test identifies an inaccuracy in the code, which sporadically results in errors.')
xfail does not help here; some of the skipped tests sometimes succeed.
Purpose of PR?:
Fixes #2956