@llbbl llbbl commented Sep 1, 2025

Set up Python Testing Infrastructure

Summary

This PR establishes a comprehensive testing infrastructure for the FLUX training project, providing a complete foundation for writing and running tests.

Changes Made

Package Management

  • Poetry Setup: Created pyproject.toml with Poetry configuration
  • Dependency Migration: Migrated all dependencies from requirements.txt to Poetry format
  • Python Version: Updated to require Python 3.10+ to support all dependencies
  • Test Dependencies: Added pytest, pytest-cov, and pytest-mock as development dependencies
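
For orientation, here is a minimal sketch of the Poetry sections such a pyproject.toml typically contains; the package name, version, and dependency pins are illustrative assumptions, not copied from the PR (the real file carries the full list migrated from requirements.txt):

# Sketch only -- names and versions below are placeholders.
[tool.poetry]
name = "flux-training"              # hypothetical package name
version = "0.1.0"
description = "FLUX training project"

[tool.poetry.dependencies]
python = "^3.10"                    # matches the new 3.10+ requirement
# ... runtime dependencies migrated from requirements.txt ...

[tool.poetry.group.dev.dependencies]
pytest = "*"
pytest-cov = "*"
pytest-mock = "*"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"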

Testing Configuration

  • Pytest Configuration: Comprehensive pytest setup in pyproject.toml including:
    • Test discovery patterns for files, classes, and functions
    • Coverage reporting with 80% threshold
    • HTML and XML coverage output formats
    • Custom markers for unit, integration, and slow tests
    • Strict configuration and warning handling
  • Coverage Settings: Configured to exclude test files, build artifacts, and virtual environments
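
As a rough illustration, the pytest and coverage sections described above might look like the following; apart from the 80% threshold, the markers, and the report formats, the specific option values are assumptions rather than the actual file contents:

# Sketch of the pytest/coverage configuration; exact values may differ.
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = "--strict-markers --cov --cov-report=term-missing --cov-report=html --cov-report=xml --cov-fail-under=80"
markers = [
    "unit: fast, isolated tests",
    "integration: tests spanning multiple components",
    "slow: long-running tests",
]
filterwarnings = ["error"]

[tool.coverage.run]
omit = ["tests/*", ".venv/*", "build/*"]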

Directory Structure

tests/
├── __init__.py
├── conftest.py              # Shared fixtures
├── test_setup_validation.py # Infrastructure validation tests
├── unit/
│   └── __init__.py
└── integration/
    └── __init__.py

Shared Fixtures (conftest.py)

  • temp_dir: Temporary directory for file operations
  • mock_torch_device: Mock PyTorch device for CPU-only testing
  • mock_model_config / mock_training_config: Configuration mocks
  • sample_data: Sample data for testing ML components
  • mock_wandb: Mock W&B logging for testing
  • mock_transformers / mock_diffusers: Mock ML model components
  • disable_gpu: Automatically disable GPU for all tests
  • File operation mocks: Mock file system operations
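
To show how these are consumed, here is a hedged sketch of two of the fixtures listed above; the names match the list, but the bodies are assumptions about a likely implementation rather than the actual conftest.py:

# Sketch of two conftest.py fixtures (bodies are illustrative).
import pytest

@pytest.fixture
def temp_dir(tmp_path):
    """Temporary directory for file-operation tests, cleaned up by pytest."""
    return tmp_path

@pytest.fixture(autouse=True)
def disable_gpu(monkeypatch):
    """Keep every test CPU-only by hiding CUDA devices."""
    monkeypatch.setenv("CUDA_VISIBLE_DEVICES", "")

A test simply declares the fixture as an argument (e.g. def test_writes_file(temp_dir): ...), and the autouse fixture applies to every test without being requested.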

Infrastructure Validation

  • Validation Test Suite: test_setup_validation.py verifies:
    • Pytest functionality
    • Python version compatibility
    • Project structure integrity
    • Fixture availability and functionality
    • Test markers working correctly
    • Import capabilities
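
A couple of representative checks, sketched here for illustration (the real tests/test_setup_validation.py contains ten tests and may be written differently):

# Illustrative validation-style tests.
import sys
import pytest

def test_python_version_is_supported():
    """The project requires Python 3.10 or newer."""
    assert sys.version_info >= (3, 10)

@pytest.mark.unit
def test_shared_fixture_is_available(temp_dir):
    """Shared fixtures from conftest.py are usable from any test module."""
    probe = temp_dir / "probe.txt"
    probe.write_text("ok")
    assert probe.read_text() == "ok"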

Configuration Updates

  • .gitignore: Added Claude Code settings exclusion
  • Lock File: poetry.lock is tracked (not excluded) so builds are reproducible

Running Tests

Install Dependencies

poetry install

Run Tests

# Run all tests
poetry run pytest

# Run with verbose output
poetry run pytest -v

# Run specific test file
poetry run pytest tests/test_setup_validation.py

# Run tests with specific markers
poetry run pytest -m unit
poetry run pytest -m integration
poetry run pytest -m "not slow"
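
The -m flags select tests by the markers registered in pyproject.toml; a test opts in by decorating itself. The test below is a hypothetical example for illustration only:

# Hypothetical test showing marker usage; selected by `poetry run pytest -m integration`.
import pytest

@pytest.mark.integration
def test_training_artifacts_roundtrip(temp_dir):
    """Placeholder integration test demonstrating marker selection."""
    artifact = temp_dir / "artifact.txt"
    artifact.write_text("smoke")
    assert artifact.exists()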

Coverage Reports

  • Terminal: Coverage summary displayed after test runs
  • HTML: Available in htmlcov/index.html
  • XML: Available in coverage.xml for CI systems

Notes

  • No Production Code Tests: This PR only sets up the infrastructure; unit tests for the ML codebase itself are not included yet
  • Ready for Development: Developers can now immediately start writing tests using the provided fixtures and structure
  • CI/CD Ready: Coverage reporting formats are compatible with most CI systems
  • Extensible: Easy to add more fixtures and test categories as needed

Validation

✅ Dependencies install successfully
✅ All validation tests pass (10/10)
✅ Coverage reporting generates correctly
✅ Test discovery finds validation tests
✅ Fixtures work as expected
✅ Markers are properly configured

- Add Poetry configuration with migrated dependencies from requirements.txt
- Configure pytest with coverage reporting, markers, and strict settings
- Create testing directory structure with unit/integration separation
- Add comprehensive shared fixtures in conftest.py for mocking and data
- Include validation tests to verify infrastructure functionality
- Update .gitignore with Claude Code settings
- Set up 80% coverage threshold with HTML/XML reporting
