10 changes: 10 additions & 0 deletions CHANGELOG.MD
@@ -6,6 +6,16 @@ This project adheres to [Semantic Versioning](https://semver.org/). Version numb
- **MINOR**: New features that are backward-compatible.
- **PATCH**: Bug fixes or minor changes that do not affect backward compatibility.

## [1.12.1]

_released 09-30-2025

### Added
- Added automatic assignment of failed automated test results to users via the `--assign` option

### Fixed
- Fixed an issue where the JUnit parser failed to detect test case IDs at the beginning of test case names or when the names end with parentheses

## [1.12.0]

_released 09-11-2025
54 changes: 50 additions & 4 deletions README.md
@@ -33,7 +33,7 @@ trcli
```
You should get something like this:
```
TestRail CLI v1.12.0
TestRail CLI v1.12.1
Copyright 2025 Gurock Software GmbH - www.gurock.com
Supported and loaded modules:
- parse_junit: JUnit XML Files (& Similar)
@@ -47,7 +47,7 @@ CLI general reference
--------
```shell
$ trcli --help
TestRail CLI v1.12.0
TestRail CLI v1.12.1
Copyright 2025 Gurock Software GmbH - www.gurock.com
Usage: trcli [OPTIONS] COMMAND [ARGS]...

@@ -136,6 +136,8 @@ Options:
--allow-ms Allows using milliseconds for elapsed times.
--special-parser Optional special parser option for specialized JUnit
reports.
-a, --assign Comma-separated list of user emails to assign failed test
results to.
--help Show this message and exit.
```

@@ -266,6 +268,50 @@ case_result_statuses:
```
You can find the status IDs for your project using the following endpoint:
```/api/v2/get_statuses```

### Auto-Assigning Failed Tests

The `--assign` (or `-a`) option allows you to automatically assign failed test results to specific TestRail users. This feature is particularly useful in CI/CD environments where you want to automatically assign failures to responsible team members for investigation.

#### Usage

```shell
# Assign failed tests to a single user
$ trcli parse_junit -f results.xml --assign user@example.com \
--host https://yourinstance.testrail.io --username <your_username> --password <your_password> \
--project "Your Project"

# Assign failed tests to multiple users (round-robin distribution)
$ trcli parse_junit -f results.xml --assign "user1@example.com,user2@example.com,user3@example.com" \
--host https://yourinstance.testrail.io --username <your_username> --password <your_password> \
--project "Your Project"

# Short form using -a
$ trcli parse_junit -f results.xml -a user@example.com \
--host https://yourinstance.testrail.io --username <your_username> --password <your_password> \
--project "Your Project"
```
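
The round-robin behavior mentioned above can be pictured as cycling through the assignee list while walking the failed results. Below is a minimal conceptual sketch only, not trcli's actual implementation; the emails and case IDs are placeholders:

```python
from itertools import cycle

# Conceptual sketch: distribute failed results over a comma-separated
# assignee list in round-robin order (placeholders, not trcli internals).
assignees = cycle("user1@example.com,user2@example.com,user3@example.com".split(","))

failed_cases = ["C102", "C103", "C104", "C105"]  # hypothetical failed case IDs
assignments = {case: next(assignees) for case in failed_cases}

print(assignments)
# {'C102': 'user1@example.com', 'C103': 'user2@example.com',
#  'C104': 'user3@example.com', 'C105': 'user1@example.com'}
```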

#### Example Output

```shell
Parser Results Execution Parameters
> Report file: results.xml
> Config file: /path/to/config.yml
> TestRail instance: https://yourinstance.testrail.io (user: your@email.com)
> Project: Your Project
> Run title: Automated Test Run
> Update run: No
> Add to milestone: No
> Auto-assign failures: Yes (user1@example.com,user2@example.com)
> Auto-create entities: True

Creating test run. Done.
Adding results: 100%|████████████| 25/25 [00:02<00:00, 12.5results/s]
Assigning failed results: 3/3, Done.
Submitted 25 test results in 2.1 secs.
```

### Exploring other features

#### General features
@@ -1039,7 +1085,7 @@ Options:
### Reference
```shell
$ trcli add_run --help
TestRail CLI v1.12.0
TestRail CLI v1.12.1
Copyright 2025 Gurock Software GmbH - www.gurock.com
Usage: trcli add_run [OPTIONS]

@@ -1163,7 +1209,7 @@ providing you with a solid base of test cases, which you can further expand on T
### Reference
```shell
$ trcli parse_openapi --help
TestRail CLI v1.12.0
TestRail CLI v1.12.1
Copyright 2025 Gurock Software GmbH - www.gurock.com
Usage: trcli parse_openapi [OPTIONS]

18 changes: 18 additions & 0 deletions tests/test_data/XML/junit5_parentheses_test.xml
@@ -0,0 +1,18 @@
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="JUnit 5 Test Suite with Parentheses">
<testsuite failures="0" errors="0" skipped="1" tests="4" time="0.05" name="JUnit5ParenthesesTests">
<!-- JUnit 5 style test names with parentheses that should now work -->
<testcase classname="com.example.MyTests" name="test_name_C120013()" time="1.5"/>
<testcase classname="com.example.MyTests" name="testMethod_C123()" time="2.1"/>
<testcase classname="com.example.MyTests" name="complexTest_C456(String param, int value)" time="0.8"/>

<!-- Test case that should still work (at beginning) -->
<testcase classname="com.example.MyTests" name="C789_test_name()" time="1.2"/>

<!-- Test case with brackets (should still work) -->
<testcase classname="com.example.MyTests" name="[C999] test_with_brackets()" time="0.9"/>

<!-- Test case without parentheses (regression test) -->
<testcase classname="com.example.MyTests" name="test_name_C555" time="1.0"/>
</testsuite>
</testsuites>
101 changes: 101 additions & 0 deletions tests/test_data/json/junit5_parentheses_test.json
@@ -0,0 +1,101 @@
{
"name": "JUnit 5 Test Suite with Parentheses",
"testsections": [
{
"name": "JUnit5ParenthesesTests",
"testcases": [
{
"title": "test_name",
"case_id": 120013,
"result": {
"case_id": 120013,
"elapsed": 1.5,
"attachments": [],
"result_fields": {},
"custom_step_results": [],
"status_id": 1,
"comment": ""
},
"custom_automation_id": "com.example.MyTests.test_name_C120013()",
"case_fields": {}
},
{
"title": "testMethod",
"case_id": 123,
"result": {
"case_id": 123,
"elapsed": 2.1,
"attachments": [],
"result_fields": {},
"custom_step_results": [],
"status_id": 1,
"comment": ""
},
"custom_automation_id": "com.example.MyTests.testMethod_C123()",
"case_fields": {}
},
{
"title": "complexTest",
"case_id": 456,
"result": {
"case_id": 456,
"elapsed": 0.8,
"attachments": [],
"result_fields": {},
"custom_step_results": [],
"status_id": 1,
"comment": ""
},
"custom_automation_id": "com.example.MyTests.complexTest_C456(String param, int value)",
"case_fields": {}
},
{
"title": "test_name()",
"case_id": 789,
"result": {
"case_id": 789,
"elapsed": 1.2,
"attachments": [],
"result_fields": {},
"custom_step_results": [],
"status_id": 1,
"comment": ""
},
"custom_automation_id": "com.example.MyTests.C789_test_name()",
"case_fields": {}
},
{
"title": "test_with_brackets()",
"case_id": 999,
"result": {
"case_id": 999,
"elapsed": 0.9,
"attachments": [],
"result_fields": {},
"custom_step_results": [],
"status_id": 1,
"comment": ""
},
"custom_automation_id": "com.example.MyTests.[C999] test_with_brackets()",
"case_fields": {}
},
{
"title": "test_name",
"case_id": 555,
"result": {
"case_id": 555,
"elapsed": 1.0,
"attachments": [],
"result_fields": {},
"custom_step_results": [],
"status_id": 1,
"comment": ""
},
"custom_automation_id": "com.example.MyTests.test_name_C555",
"case_fields": {}
}
]
}
],
"source": null
}
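
The JSON fixture above encodes the case IDs and titles the parser is expected to extract from the XML test names. As a quick illustrative check (assuming the `MatchersParser.parse_name_with_id` helper exercised by the tests below), the expected mappings look like this:

```python
from trcli.data_classes.data_parsers import MatchersParser

# Expected (case_id, title) pairs for a few of the fixture names above,
# mirroring the parametrized cases in tests/test_matchers_parser.py.
print(MatchersParser.parse_name_with_id("test_name_C120013()"))          # (120013, "test_name")
print(MatchersParser.parse_name_with_id("C789_test_name()"))             # (789, "test_name()")
print(MatchersParser.parse_name_with_id("[C999] test_with_brackets()"))  # (999, "test_with_brackets()")
```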
101 changes: 101 additions & 0 deletions tests/test_matchers_parser.py
@@ -0,0 +1,101 @@
import pytest
from trcli.data_classes.data_parsers import MatchersParser


class TestMatchersParser:
"""Test cases for MatchersParser.parse_name_with_id method"""

@pytest.mark.parametrize(
"test_input, expected_id, expected_name",
[
# Basic patterns (existing functionality)
("C123 my test case", 123, "my test case"),
("my test case C123", 123, "my test case"),
("C123_my_test_case", 123, "my_test_case"),
("my_test_case_C123", 123, "my_test_case"),
("module_1_C123_my_test_case", 123, "module_1_my_test_case"),
("[C123] my test case", 123, "my test case"),
("my test case [C123]", 123, "my test case"),
("module 1 [C123] my test case", 123, "module 1 my test case"),

# JUnit 5 patterns with parentheses (new functionality)
("test_name_C120013()", 120013, "test_name"),
("testMethod_C123()", 123, "testMethod"),
("my_test_C456()", 456, "my_test"),
("C789_test_name()", 789, "test_name()"),
("C100 test_name()", 100, "test_name()"),

# JUnit 5 patterns with parameters
("test_name_C120013(TestParam)", 120013, "test_name"),
("test_C456(param1, param2)", 456, "test"),
("complexTest_C999(String param, int value)", 999, "complexTest"),

# Edge cases with parentheses
("myTest_C789()", 789, "myTest"),
("C200_method()", 200, "method()"),
("[C300] test_case()", 300, "test_case()"),
("test [C400] method()", 400, "test method()"),

# Cases that should not match
("test_name_C()", None, "test_name_C()"),
("test_name_123()", None, "test_name_123()"),
("test_name", None, "test_name"),
("C_test_name", None, "C_test_name"),
("test_Cabc_name", None, "test_Cabc_name"),

# Case sensitivity
("c123_test_name", 123, "test_name"),
("test_name_c456", 456, "test_name"),
("[c789] test_name", 789, "test_name"),
]
)
def test_parse_name_with_id_patterns(self, test_input, expected_id, expected_name):
"""Test various patterns of test name parsing including JUnit 5 parentheses support"""
case_id, case_name = MatchersParser.parse_name_with_id(test_input)
assert case_id == expected_id, f"Expected ID {expected_id}, got {case_id} for input '{test_input}'"
assert case_name == expected_name, f"Expected name '{expected_name}', got '{case_name}' for input '{test_input}'"

def test_parse_name_with_id_junit5_specific(self):
"""Specific test cases for JUnit 5 parentheses issue reported by user"""
# The exact examples from the user's issue
junit5_cases = [
("test_name_C120013()", 120013, "test_name"), # Should work now
("test_name_C120013", 120013, "test_name"), # Should still work
("C120013_test_name()", 120013, "test_name()"), # Should work
]

for test_case, expected_id, expected_name in junit5_cases:
case_id, case_name = MatchersParser.parse_name_with_id(test_case)
assert case_id == expected_id, f"JUnit 5 case failed: {test_case}"
assert case_name == expected_name, f"JUnit 5 name failed: {test_case}"

def test_parse_name_with_id_regression(self):
"""Ensure existing functionality still works (regression test)"""
# Test all the patterns mentioned in the docstring
existing_patterns = [
("C123 my test case", 123, "my test case"),
("my test case C123", 123, "my test case"),
("C123_my_test_case", 123, "my_test_case"),
("my_test_case_C123", 123, "my_test_case"),
("module_1_C123_my_test_case", 123, "module_1_my_test_case"),
("[C123] my test case", 123, "my test case"),
("my test case [C123]", 123, "my test case"),
("module 1 [C123] my test case", 123, "module 1 my test case"),
]

for test_case, expected_id, expected_name in existing_patterns:
case_id, case_name = MatchersParser.parse_name_with_id(test_case)
assert case_id == expected_id, f"Regression failed for: {test_case}"
assert case_name == expected_name, f"Regression name failed for: {test_case}"

def test_parse_name_with_id_empty_and_none(self):
"""Test edge cases with empty or None inputs"""
# Empty string
case_id, case_name = MatchersParser.parse_name_with_id("")
assert case_id is None
assert case_name == ""

# String with just spaces
case_id, case_name = MatchersParser.parse_name_with_id(" ")
assert case_id is None
assert case_name == " "
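
For orientation, here is a deliberately narrow illustrative sketch of the new pattern these tests exercise: a `_C<id>` suffix followed by a JUnit 5-style parameter list. It is not the project's actual implementation, which also covers the prefix, bracket, and space-separated patterns in the parametrized cases above:

```python
import re

# Illustrative only: match "<name>_C<id>(<params>)" and strip the parameter list.
SUFFIX_WITH_PARENS = re.compile(r"^(?P<name>.+)_[Cc](?P<id>\d+)\([^)]*\)$")

def parse_suffix_with_parens(test_name):
    match = SUFFIX_WITH_PARENS.match(test_name)
    if not match:
        return None, test_name  # other rules (prefix, brackets, no parens) apply
    return int(match.group("id")), match.group("name")

assert parse_suffix_with_parens("test_name_C120013()") == (120013, "test_name")
assert parse_suffix_with_parens("complexTest_C456(String param, int value)") == (456, "complexTest")
assert parse_suffix_with_parens("test_name_C555") == (None, "test_name_C555")
```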
3 changes: 3 additions & 0 deletions tests/test_results_uploader.py
@@ -36,6 +36,9 @@ def result_uploader_data_provider(self, mocker):
environment.run_id = None
environment.file = "results.xml"
environment.case_matcher = MatchersParser.AUTO
environment.assign_failed_to = None
environment._has_invalid_users = False
environment._validated_user_ids = []

junit_file_parser = mocker.patch.object(JunitParser, "parse_file")
api_request_handler = mocker.patch(
39 changes: 39 additions & 0 deletions tests_e2e/reports_junit/assign_test_failures.xml
@@ -0,0 +1,39 @@
<testsuites>
<testsuite failures="3" errors="1" skipped="1" tests="6" time="15.5" name="[ASSIGNTESTSUITE] Suite">
<!-- Passed test -->
<testcase classname="[ASSIGNTESTSUITE] Suite" name="[C101] test successful operation" time="2.1"/>

<!-- Failed test 1 -->
<testcase classname="[ASSIGNTESTSUITE] Suite" name="[C102] test failed validation" time="1.5">
<failure type="AssertionError" message="Validation failed">
Expected validation to pass, but it failed with error: Invalid input
</failure>
</testcase>

<!-- Failed test 2 -->
<testcase classname="[ASSIGNTESTSUITE] Suite" name="[C103] test failed network call" time="3.2">
<failure type="NetworkError" message="Connection timeout">
Network call failed after 30 seconds timeout
</failure>
</testcase>

<!-- Error test -->
<testcase classname="[ASSIGNTESTSUITE] Suite" name="[C104] test error exception" time="0.8">
<error type="RuntimeError" message="Unexpected runtime error">
An unexpected runtime error occurred during test execution
</error>
</testcase>

<!-- Failed test 3 -->
<testcase classname="[ASSIGNTESTSUITE] Suite" name="[C105] test failed assertion" time="1.9">
<failure type="AssertionError" message="Assertion failed">
Expected value 'expected' but got 'actual'
</failure>
</testcase>

<!-- Skipped test -->
<testcase classname="[ASSIGNTESTSUITE] Suite" name="[C106] test skipped feature" time="0.0">
<skipped type="pytest.skip" message="Feature not implemented"/>
</testcase>
</testsuite>
</testsuites>