#include "hegel/hegel.h"
#include <stdexcept>  // std::runtime_error

int main() {
  hegel::hegel([]() {
    using namespace hegel::generators;
    // max_value < min_value: an invalid generator configuration
    auto n = integers<int>({.min_value = 0, .max_value = -1}).generate();
    if (n < 0 || n > 100) {
      throw std::runtime_error("out of bounds");
    }
  });
  return 0;
}
The code above should surface an InvalidArgument error, but the error is discarded altogether and the run is reported as a failed test, as though the system under test (SUT) is incorrect rather than the way the user set up the Hegel test.
When we handle the generate command, data.draw throws InvalidArgument('Cannot have max_value=-1 < min_value=0'). That InvalidArgument is then communicated to the client here. Somewhere along the way (I haven't pinned down where), the InvalidArgument becomes a RequestError, which _run_one interprets as an interesting test failure.
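I haven't traced the exact wrapping site, but conceptually the channel layer appears to do something like the sketch below. Everything except the InvalidArgument and RequestError names is my guess at the shape of the code, not the actual implementation:

```python
class InvalidArgument(Exception):
    """Raised by data.draw for contradictory generator bounds."""

class RequestError(Exception):
    """Generic error the channel reports back to the client."""

def handle_generate(draw):
    # Hypothetical channel-layer handler: any exception escaping the
    # draw, including InvalidArgument, is flattened into a RequestError,
    # so the client can no longer tell a user setup error apart from
    # anything else that went wrong while serving the request.
    try:
        return draw()
    except Exception as e:
        raise RequestError(f"{type(e).__name__}: {e}") from e

def bad_draw():
    raise InvalidArgument("Cannot have max_value=-1 < min_value=0")
```

The key point the sketch illustrates is the type erasure: once the exception crosses the channel as a RequestError, its origin as a configuration error is gone.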
We can see this in the runner result below:
{'RequestError at /home/echoumcp1/hegel-core/src/hegel/protocol/channel.py:43':
    ConjectureResult(
        status=Status.INTERESTING,
        interesting_origin='RequestError at /home/echoumcp1/hegel-core/src/hegel/protocol/channel.py:43',
        length=0,
        output='',
        expected_exception=None,
        expected_traceback=None,
        has_discards=False,
        target_observations={},
        tags=frozenset({StructuralCoverageTag(label=1),
                        StructuralCoverageTag(label=3604665989267143887)}))}
_run_one then sends a "test_done" event to the SDK, so the SUT appears to have failed the property: the message "Hegel test failed" is printed.
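This classification step is where the misattribution happens. A stripped-down guess at what _run_one does (the function and helper names here are mine, not the real internals):

```python
from enum import Enum

class Status(Enum):
    VALID = 1
    INTERESTING = 2

def run_one(test_fn):
    # Any exception escaping the test body, a RequestError from the
    # channel included, is recorded as an INTERESTING result with the
    # exception's identity as the interesting_origin. Nothing at this
    # layer distinguishes "the SUT failed" from "the request machinery
    # failed".
    try:
        test_fn()
        return Status.VALID, None
    except Exception as e:
        return Status.INTERESTING, type(e).__name__
```

Under this model, the RequestError raised at channel.py:43 is indistinguishable from a genuine property violation, which matches the ConjectureResult above.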
In my mind, there are two solutions. The stopgap is in the client's event loop: when the event_type is test_case and we receive an exception message from the channel, save it; then, when we exit the event loop, check whether an exception was saved and, if so, throw it.
Example code from hegel-cpp:

if (fatal_error) {  // fatal_error is the saved error
  std::rethrow_exception(fatal_error);
}
if (!test_passed) {
  throw std::runtime_error("Hegel test failed");
}
Another solution is for the results dict to include the interesting_origin, and for RequestError to be modified to carry the underlying error. If the origin is a RequestError, display the underlying error instead.
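A sketch of that second option. The "interesting_origin" key mirrors the ConjectureResult field shown above; the cause attribute and the describe_failure helper are invented for illustration:

```python
class RequestError(Exception):
    # Assumed extension: keep the original exception on the error
    # instead of discarding it when crossing the channel.
    def __init__(self, message, cause=None):
        super().__init__(message)
        self.cause = cause

def describe_failure(result):
    # 'result' stands in for one entry of the runner's results dict.
    # If the origin is a RequestError carrying a cause, report it as a
    # test-setup problem rather than an SUT failure.
    origin = result["interesting_origin"]
    if isinstance(origin, RequestError) and origin.cause is not None:
        return f"Invalid Hegel test setup: {origin.cause}"
    return f"Hegel test failed: {origin}"
```

This keeps the protocol unchanged and fixes only the reporting, which is why it still feels like a patch over the real problem.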
In spirit, I don't find either solution satisfying, because the server is still communicating a Hypothesis error as an SUT failure.
I believe a satisfying solution will require significant changes to the protocol, which we should discuss.