`.lad/.copilot-instructions.md` — 134 changes: 38 additions & 96 deletions
# Global Copilot Instructions

* Prioritize **minimal scope**: only edit code directly implicated by the failing test.
* Protect existing functionality: do **not** delete or refactor code outside the immediate test context.
* Before deleting any code, follow the "Coverage & Code Safety" guidelines below.

Copilot, do not modify any files under .lad/.
All edits must occur outside .lad/, or in prompts/ when explicitly updating LAD itself.

Coding & formatting
* Follow PEP 8; formatting enforced by pre-commit hooks (black, isort).
* Use type hints everywhere.
* External dependencies limited to numpy, pandas, requests.
* Target Python 3.11.
* Respect existing project dependencies declared in pyproject.toml.

Testing & linting
* Write tests using component-appropriate strategy (see Testing Strategy below).
* Write tests using pytest; run via `tox -e py3` or `python -m pytest dandi`.
* Tests requiring the DANDI archive use Docker Compose fixtures.
* Mark AI-generated tests with `@pytest.mark.ai_generated`.
* Every function/class **must** include a **NumPy-style docstring** (Sections: Parameters, Returns, Raises, Examples).
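For reference, a minimal docstring in that shape — the function itself is illustrative, not taken from the codebase:

```python
def normalize_identifier(raw: str) -> str:
    """Normalize a dandiset identifier to its six-digit form.

    Parameters
    ----------
    raw : str
        Identifier as entered by the user, e.g. ``"1"`` or ``"000001"``.

    Returns
    -------
    str
        Zero-padded six-digit identifier.

    Raises
    ------
    ValueError
        If ``raw`` is not a decimal number.

    Examples
    --------
    >>> normalize_identifier("1")
    '000001'
    """
    if not raw.isdecimal():
        raise ValueError(f"not a decimal identifier: {raw!r}")
    return raw.zfill(6)
```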

## Testing Strategy by Component Type

**CLI Commands:**
* Use click's `CliRunner` for testing CLI entry points
* Test argument parsing, output formatting, and error messages
* Mock API calls and filesystem where appropriate
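Under these guidelines, a CLI test might be sketched as follows — the `greet` command is a hypothetical stand-in, not an actual dandi entry point:

```python
import click
from click.testing import CliRunner


@click.command()
@click.argument("name")
@click.option("--shout", is_flag=True, help="Uppercase the greeting.")
def greet(name: str, shout: bool) -> None:
    """Print a greeting for NAME."""
    message = f"Hello, {name}!"
    if shout:
        message = message.upper()
    click.echo(message)


def test_greet_formats_output() -> None:
    # Invoke the command in-process and check parsing plus output formatting.
    runner = CliRunner()
    result = runner.invoke(greet, ["dandi", "--shout"])
    assert result.exit_code == 0
    assert result.output.strip() == "HELLO, DANDI!"
```

Only the in-process invocation is exercised here; real API calls would be patched with `unittest.mock` or a pytest fixture.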

**API Client Operations (upload, download, move, etc.):**
* Use **integration testing** with Docker Compose fixtures for archive interactions
* Mock only external services not under test
* Test actual HTTP interactions, authentication, and error handling

**File Processing & Utilities:**
* Use **unit testing** with minimal dependencies
* Use test data fixtures (tmp_path, simple NWB files) for predictable inputs
* Focus on input/output correctness and error handling

## Regression Prevention

**Before making changes:**
* Run full test suite to establish baseline: `tox -e py3` or `python -m pytest dandi`
* Identify dependencies: `grep -r "function_name" . --include="*.py"`
* Understand impact scope before modifications

**During development:**
* Run affected tests after each change: `python -m pytest dandi/tests/test_modified_module.py`
* Preserve public API interfaces or update all callers
* Make minimal changes focused on the failing test

**Before commit:**
* Run full test suite: `tox -e py3`
* Verify no regressions introduced
* Ensure test coverage maintained or improved

## Code Quality Setup

**This project already has quality tooling configured.** Do not create new config files; use existing ones.

**Verify setup:**
```bash
pre-commit install # Install pre-commit hooks if not present
tox -e lint # Run linting
tox -e typing # Run type checking
python -m pytest dandi # Run tests
```
**Existing configuration locations:**
- **Linting/formatting**: `.pre-commit-config.yaml` (black, isort, flake8)
- **Pytest config**: `tox.ini` under `[pytest]` section
- **Type checking**: `tox.ini` under `[testenv:typing]`
- **Dependencies**: `pyproject.toml`
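For orientation only, a `[pytest]` section in `tox.ini` commonly looks something like the following — the contents here are illustrative, so defer to the project's actual file:

```ini
[pytest]
addopts = --tb=short
markers =
    ai_generated: test authored by an AI assistant
    integration: requires the Docker Compose archive fixture
```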

Coverage & Code Safety
* For safety checks, do **not** run coverage inside VS Code.
Instead, ask the user:
> "Please run in your terminal:
> ```bash
> coverage run -m pytest [test_files] -q && coverage html
> ```
> then reply **coverage complete**."

* Before deleting code, verify:
1. 0% coverage via `coverage report --show-missing`
2. No references found via grep

If both hold, prompt for confirmation before deletion.

Commits
* Follow existing project conventions for commit messages.
* pre-commit hooks will auto-fix formatting; if the commit fails due to auto-fixes, re-run the commit.

Docs
* High-level docs live under the target project's `docs/` directory (Sphinx RST format).

* After completing each **main task** (top-level checklist item), run:
  • `tox -e lint`
  • `python -m pytest dandi -q --maxfail=1`
If either step fails, pause for user guidance.

`.lad/CLAUDE.md` — 19 changes: 11 additions & 8 deletions
*Auto-updated by LAD workflows - current system understanding*

## Code Style Requirements
- **Docstrings**: NumPy-style for public APIs
- **Formatting**: Black (line length 100), isort (profile="black"), enforced via pre-commit
- **Linting**: `tox -e lint`; type checking: `tox -e typing`
- **Testing**: pytest via `tox -e py3`; Docker Compose for integration tests

## Communication Guidelines
**Objective, European-Style Communication**:
- **Progress tracking**: Update both TodoWrite and plan.md files consistently

## Testing Strategy Guidelines
- **CLI Commands**: click CliRunner + mocked API calls
- **API Client Operations**: Integration testing with Docker Compose fixtures
- **File Processing & Utilities**: Unit testing (tmp_path + test data fixtures)
- **AI-generated tests**: Mark with `@pytest.mark.ai_generated`

## Project Structure Patterns
*Learned from exploration - common patterns and conventions*

### Token Optimization for Large Codebases
**Standard test commands:**
- **Full suite**: `tox -e py3` or `python -m pytest dandi`
- **Single test**: `python -m pytest dandi/tests/test_file.py::test_function -v`
- **Large test suites**: Use `2>&1 | tail -n 100` for pytest commands to capture only final results/failures
- **Coverage reports**: Use `tail -n 150` for comprehensive coverage output to include summary
- **Keep targeted tests unchanged**: Single test runs (`pytest -xvs`) don't need redirection
- *No anti-patterns logged*

---
*Last updated by Claude Code LAD Framework*