# Tests

Comprehensive unit test suite for Charon infrastructure scripts and automation.

## Overview

The test suite provides complete coverage for all Python scripts in the `scripts/` directory, ensuring reliability and preventing regressions in infrastructure automation.

## Test Structure

### Test Files

| Test File | Script Under Test | Coverage |
| --- | --- | --- |
| `test_cleanup_headscale_nodes.py` | `scripts/headscale/cleanup_headscale_nodes.py` | Headscale node cleanup, DNS pruning |
| `test_configure_ldap.py` | `scripts/redmine/configure_ldap.py` | LDAP configuration for Redmine |
| `test_create_cloudflare_secret.py` | `scripts/cert-manager/create_cloudflare_secret.py` | Cloudflare secret creation |
| `test_create_users.py` | `scripts/freeipa/create_users.py` | FreeIPA user creation |
| `test_get_tailscale_key.py` | `scripts/tailscale/get_tailscale_key.py` | Tailscale key generation |
| `test_update_service_dns.py` | `scripts/dns/update_service_dns.py` | DNS record updates |

### Test Categories

- **API Integration Tests** - Cloudflare API, Headscale API, Kubernetes API
- **Environment Variable Tests** - Configuration validation and defaults
- **Error Handling Tests** - Network failures, invalid responses, missing credentials
- **JSON Parsing Tests** - Response validation and error handling
- **Subprocess Tests** - kubectl command execution and mocking

## Test Configuration

### conftest.py

Provides shared test fixtures and configuration:

- **Environment Mocking** - Prevents loading real `.env` files during tests
- **Default Values** - Sets safe test defaults for database and API credentials
- **Custom Markers** - `@pytest.mark.requires_env`, `@pytest.mark.requires_cert`
- **Graceful Skipping** - Skips tests requiring external resources when unavailable
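A minimal sketch of what the default-seeding fixture in `conftest.py` might look like. The variable names and placeholder values here are assumptions for illustration, not the actual fixture code:

```python
# Hypothetical sketch of conftest.py credential seeding; names are assumed.
import os

import pytest

# Safe placeholder values so tests never read real credentials
TEST_DEFAULTS = {
    "DB_PASSWORD": "test-password",
    "CLOUDFLARE_API_TOKEN": "test-token",
}

def apply_defaults(env):
    """Fill in missing variables without clobbering explicitly set ones."""
    for key, value in TEST_DEFAULTS.items():
        env.setdefault(key, value)
    return env

@pytest.fixture(autouse=True)
def safe_env(monkeypatch):
    """Apply the defaults to os.environ for every test automatically."""
    for key, value in apply_defaults(dict(os.environ)).items():
        monkeypatch.setenv(key, value)
```

Using `setdefault` keeps the fixture from overriding values a test sets explicitly, and `monkeypatch` restores the environment after each test.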

## Running Tests

```bash
# Run all tests
pytest

# Run specific test file
pytest tests/test_get_tailscale_key.py

# Run with coverage
pytest --cov=scripts --cov-report=html

# Run tests matching pattern
pytest -k "tailscale"

# Run tests in verbose mode
pytest -v
```

## Test Requirements

All tests use standard Python testing tools:

- **pytest** - Test framework
- **pytest-mock** - Mocking utilities
- **pytest-cov** - Coverage reporting
- No other dependencies - the scripts under test rely only on the Python standard library

## Test Coverage Areas

### ✅ Complete Coverage

- **Script Argument Parsing** - All CLI arguments and environment variables
- **API Error Handling** - Network failures, authentication errors, rate limits
- **JSON Response Validation** - Malformed responses, missing fields, unexpected data
- **Environment Configuration** - Default values, precedence, validation
- **Exit Code Handling** - Success (0) and failure (1) scenarios
- **Idempotent Operations** - "Already exists" error handling
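The idempotency bullet above can be sketched as follows. The CLI command and the exact error text are assumptions for illustration, not the actual script code:

```python
# Hedged sketch of the "already exists" idempotency pattern.
# The ipa command and error string are assumed, not taken from the scripts.
import subprocess

def ensure_user(name):
    """Create a user, treating 'already exists' as success."""
    try:
        subprocess.run(
            ["ipa", "user-add", name],
            capture_output=True, text=True, check=True,
        )
    except subprocess.CalledProcessError as exc:
        if "already exists" in (exc.stderr or ""):
            return "exists"  # idempotent: the desired state already holds
        raise  # any other failure is a real error
    return "created"
```

Treating the duplicate error as success lets the script be re-run safely, which is what the tests assert.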

## Test Patterns

### API Mocking Pattern

```python
def test_api_call_success(self, mocker):
    """Test successful API call with mocked response."""
    mock_run = mocker.patch("subprocess.run")
    mock_run.return_value.stdout = '{"success": true}'

    result = script_function()
    assert result["success"] is True
    mock_run.assert_called_once()
```

### Error Handling Pattern

```python
def test_api_call_failure(self, mocker):
    """Test API call failure handling."""
    mock_run = mocker.patch("subprocess.run")
    mock_run.side_effect = subprocess.CalledProcessError(1, ["cmd"], "error")

    with pytest.raises(SystemExit) as exc_info:
        script_function()
    assert exc_info.value.code == 1
```

### Environment Variable Pattern

```python
def test_environment_defaults(self, mocker):
    """Test default environment variable handling."""
    mocker.patch.dict(os.environ, {}, clear=True)

    # Test uses default values when env vars not set
    result = script_function()
    assert result["namespace"] == "default"
```

## Test Maintenance

### Adding New Tests

1. Create `test_<script_name>.py` in the `tests/` directory
2. Import the script function and required test fixtures
3. Mock all external dependencies (subprocess, API calls, file I/O)
4. Test success paths, error paths, and edge cases
5. Ensure tests run in CI without external dependencies
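The steps above might produce a skeleton like this. The `get_pods` function is a hypothetical stand-in for an imported script function, not a real `scripts/` module:

```python
# Hypothetical skeleton for a new tests/test_<script_name>.py file.
# get_pods() stands in for the imported script function under test.
import json
import subprocess
from unittest import mock

def get_pods():
    """Stand-in for a script function that shells out to kubectl."""
    result = subprocess.run(
        ["kubectl", "get", "pods", "-o", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def test_get_pods_success():
    """Success path: kubectl output is parsed and returned."""
    with mock.patch("subprocess.run") as mock_run:
        mock_run.return_value.stdout = '{"items": []}'
        assert get_pods() == {"items": []}
        mock_run.assert_called_once()

def test_get_pods_failure():
    """Error path: a failing kubectl call propagates."""
    with mock.patch("subprocess.run",
                    side_effect=subprocess.CalledProcessError(1, ["kubectl"])):
        try:
            get_pods()
        except subprocess.CalledProcessError:
            pass
        else:
            raise AssertionError("expected CalledProcessError")
```

Because everything external is mocked, the file runs anywhere CI runs, with no cluster access required.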

### Test Quality Standards

- **100% Function Coverage** - Every function must be tested
- **Error Path Coverage** - All exception handling must be tested
- **Mock Completeness** - No real API calls or file system access
- **Descriptive Names** - Test names should describe what they're testing
- **Independent Tests** - Each test should be runnable in isolation

## CI/CD Integration

Tests are automatically run on:

- Pull request creation and updates
- Push to `main` branch
- Manual workflow dispatch
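A workflow along these lines could implement those triggers. The file name, action versions, and Python version below are assumptions, not the repository's actual workflow:

```yaml
# Hypothetical .github/workflows/tests.yml (names and versions assumed)
on:
  pull_request:
  push:
    branches: [main]
  workflow_dispatch:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest pytest-mock pytest-cov
      - run: pytest --cov=scripts --cov-report=xml
```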

### Test Results

- Coverage reports generated and uploaded
- Test failures block merges
- Slack notifications for failures
- Detailed logs available in GitHub Actions

## Troubleshooting Tests

### Common Issues

**Environment Variables Leaking**

```bash
# Clear the environment before running tests
# (preserve PATH so the pytest executable can still be found)
env -i PATH="$PATH" pytest tests/
```

**Mock Not Working**

```python
# Check the import path in the test file
from scripts.module import function_to_test
```

**Coverage Not Reported**

```bash
# Run with specific source directory
pytest --cov=scripts --cov-report=html
```

### Debug Mode

```bash
# Run single test with debugging
pytest tests/test_example.py::TestClass::test_method -v -s

# Drop into debugger on failure
pytest --pdb
```

## Contributing

When adding new scripts:

  1. Create corresponding unit tests immediately
  2. Follow existing test patterns and naming conventions
  3. Ensure tests pass without external dependencies
  4. Update this README with new test coverage information
  5. Run full test suite before submitting PR

## Related Documentation
