Support plan_id when submitting results from pytest #138

@matthcap

Description

What would you like the TestRail CLI to be able to do?

posting on behalf of @lscott in the TestRail Community: https://discuss.testrail.com/t/trcli-not-accepting-test-plan-run-id/23701

Accept a test plan ID (a plan groups multiple test runs) in the --run-id field of trcli.
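Reconstructed from the log below, the failing call has roughly this shape (credentials trimmed); the ask is for the plan ID to be accepted here, either via --run-id itself or via a new dedicated flag:

    trcli -y -h https://OUR-INSTANCE.testrail.io --project "Firmware" parse_junit --title "Temporary Automated Test Plan 2" --run-id 3078 -f ./reports/junit-report.xml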

When attempting to use a plan-id, I just get this error:

TestRail CLI v1.4.3
Copyright 2021 Gurock Software GmbH - www.gurock.com
Parse JUnit Execution Parameters

Report file: ./reports/junit-report.xml
Config file: None
TestRail instance: https://OUR-INSTANCE.testrail.io
Project: Firmware
Run title: Temporary Automated Test Plan 2
Update run: 3078
Add to milestone: No
Auto-create entities: True
Parsing JUnit report.
Processed 65 test cases in 1 sections.
Checking project. Done.
Nonexistent case IDs found in the report file: [30864, 30862, 30863, 30865, 30878, 30877, 30867, 30868, 30869, 30871, 30870, 30873, 30872, 30875, 30874, 30866, 28059, 28060, 30879, 30881, 30880, 30884, 30882, 30883, 30885, 30889, 30887, 30886, 30890, 30891, 30892, 30888, 30894, 30893, 30895, 30906, 30903, 30904, 30900, 30901, 30899, 30904, 30902, 30909, 30907, 30908, 30915, 30916, 30918, 30919, 30920, 30917, 30921, 30925, 30924, 30923, 30922, 30927, 30928, 30929, 30926, 30930, 30933, 30931, 30932]
Error occurred while checking for 'missing test cases': 'Case IDs not in TestRail project or suite were detected in the report file.'
Adding missing sections to the suite.
Updating run: https://OUR-INSTANCE.testrail.io/index.php?/runs/view/3078
Adding results: 0/65
Error during add_results. Trying to cancel scheduled tasks.

Aborting: add_results. Trying to cancel scheduled tasks.
Adding results: 0/65
No attachments found to upload.
Field :run_id is not a valid test run.
Deleted created section
At the moment, I'm bodging this by splitting the tests up per product and calling trcli once for each run in the plan (see the script under "More details" below).
Am I doing something wrong or are test plans simply not supported at this time?
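If it helps anyone debugging the same error: the run IDs that --run-id actually accepts live inside the plan's entries, and they can be listed via TestRail's standard get_plan API endpoint. A rough sketch with requests; the credentials are placeholders, and 3078 is assumed to be the plan ID from the log above:

```python
import requests

BASE = "https://OUR-INSTANCE.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "your-api-key")  # placeholder credentials

plan_id = 3078  # assumed to be the plan ID that --run-id rejected
plan = requests.get(f"{BASE}/get_plan/{plan_id}", auth=AUTH).json()

# A plan holds entries, and each entry holds one or more runs;
# these run IDs are what trcli's --run-id currently expects.
for entry in plan["entries"]:
    for run in entry["runs"]:
        print(run["id"], run["name"])
```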

Why is this feature necessary on the TestRail CLI?

Using a singular --run-id doesn't scale for the multiple products that we make (each with different tests to be run), since we use test plans with multiple test runs to capture the results for each product.

More details

In the spirit of open source, here is the code we currently use to get round this:

#!/usr/bin/python3
# Author: lscott
# Usage: Generate string for pytest and submit results to TestRail using trcli

"""
TODO :
- Copy this file to ~/git/
- '/' is added to stop finding cases like test_AccessAPI.py, it is removed in the report creation stage
"""

import logging
import subprocess
import sys
from time import sleep
import os
import xml.etree.ElementTree as ET
import requests

# If I've done my job right, you should only have to update the information in this box
# But obviously feel free to peruse at your leisure :)

unit = "unit"
title = "Temporary Automated Test Plan 2"

# Update me with appropriate run IDs
# If any new suites are added, they must also be added here with an appropriate ID

test_types = {
    "About/": "3079",
    "Access/": "3080",
    "Analytics/": "3081",
    "API/": "3082",
    "CustomInstall/": "3084",
    "Alarms/": "3085",
    "Airplay/": "3102",
    "Analogue/": "3103",
    "Bluetooth/": "3104",
    "CD/": "3105",
    "CD/": "3106"  # NOTE: duplicate key; Python keeps only this entry and silently drops "3105"
}

# Configure the logger

logging.basicConfig(filename='test.log', level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s')
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logging.getLogger().addHandler(console_handler)

home = os.path.expanduser('~')

# Get IP from ip.xml

tree = ET.parse(f"{home}/code/ip.xml")

# Get the root element

root = tree.getroot()

# Find the ip element and extract its value

ip_element = root.find('ip')
ip_address = ip_element.text
logging.info(f"Using IP: {ip_address}")

# Get the UUT's build information

response = requests.get(f'http://{ip_address}:15081/system')
if response.status_code != 200:
    logging.critical(f"Device NOT reachable at {ip_address}")
    sys.exit(1)

data = response.json()
build = data['build']
logging.info(f'Build: {build}')

folder = ''

# Main loop

for test_type, run_id in test_types.items():
    logging.info(f"Processing test type {test_type}")
    with open(f"{home}/git/SWTestScripts/JenkinsHelpers/config/{unit}", 'r') as tests:
        matching_tests = []
        # Ignore commented tests
        # This is added to NOT break current automated testing in Jenkins
        for test in tests:
            if test.startswith('#'):
                continue
            if test.startswith('!'):
                # Get first word after '!'
                folder = test.strip().split()[1]
            if test_type in test:
                matching_tests.append(f"{home}/git/SWTestScripts/CI/{folder}/{test.strip()}")
    logging.info(f"Found {len(matching_tests)} matching tests")
    if not matching_tests:
        continue
    # Drop the trailing '/' (see the TODO note at the top) before building the report name
    test_type = test_type[:-1]
    command = f"python3 -m pytest --junitxml 'reports/{test_type}-report.xml' {' '.join(matching_tests)}"
    logging.info(f"Running command: {command}")
    # Run the command and block until it exits
    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    stdout, stderr = process.communicate()
    exit_code = process.wait()  # communicate() has already waited; this just fetches the code
    logging.info(f"Command output:\n{stdout.decode('utf-8')}")
    if exit_code == 0:
        logging.info(f"Command completed successfully with exit code {exit_code}")
    else:
        logging.error(f"Command failed with exit code {exit_code}:\n{stderr.decode('utf-8')}")
    sleep(3)

    # Submit to TestRail with results file
    # Suite ID = Generic Automation Suite
    command = f'trcli -y -c config.yaml parse_junit --title "{title}" --case-matcher "name" --run-id {run_id} --suite-id 1671 --result-fields version:{build} --allow-ms -f reports/{test_type}-report.xml'
    logging.info(f"Running command: {command}")
    process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
    stdout, stderr = process.communicate()
    exit_code = process.wait()
    logging.info(f"Command output:\n{stdout.decode('utf-8')}")
    if exit_code == 0:
        logging.info(f"Command completed successfully with exit code {exit_code}")
    else:
        logging.error(f"Command failed with exit code {exit_code}:\n{stderr.decode('utf-8')}")
    sleep(3)
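For example, the "About/" entry expands to a call like the one below, with the version taken from the device's /system response:

    trcli -y -c config.yaml parse_junit --title "Temporary Automated Test Plan 2" --case-matcher "name" --run-id 3079 --suite-id 1671 --result-fields version:<build> --allow-ms -f reports/About-report.xml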

The configs are structured like so:

File name: unit-name

! Features
Inputs/test_inputs.py
#IgnoredTests/
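To spell out the format: a line starting with '!' sets the folder that the following entries live under, a leading '#' comments an entry out, and any remaining entry containing one of the test_types keys is expanded by the loop above into a full path, e.g. Inputs/test_inputs.py under "! Features" becomes:

    ~/git/SWTestScripts/CI/Features/Inputs/test_inputs.py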
As you can see, this is not a fix, but a bodge.

Hope that helps some of you deal with this annoyance 🙂

Interested in implementing it yourself?

Maybe, let's talk!
