Skip to content
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension


Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
46 changes: 46 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1,46 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# Virtual Environment
venv/
ENV/
env/

# Testing
.pytest_cache/
.coverage
htmlcov/
.tox/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db

# Logs
*.log
326 changes: 283 additions & 43 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,43 +1,283 @@
# Candidate Assessment: Spec-Driven Development With Codegen Tools

This assessment evaluates how you use modern code generation tools (for example `5.2-Codex`, `Claude`, `Copilot`, and similar) to design, build, and test a software application using a spec-driven development pattern. You may build a frontend, a backend, or both.

## Goals
- Build a working application with at least one meaningful feature.
- Create a testing framework to validate the application.
- Demonstrate effective use of code generation tools to accelerate delivery.
- Show clear, maintainable engineering practices.

## Deliverables
- Application source code in this repository.
- A test suite and test harness that can be run locally.
- Documentation that explains how to run the app and the tests.

## Scope Options
Pick one:
- Frontend-only application.
- Backend-only application.
- Full-stack application.

Your solution should include at least one real workflow, for example:
- Create and view a resource.
- Search or filter data.
- Persist data in memory or storage.

## Rules
- You must use a code generation tool (for example `5.2-Codex`, `Claude`, or similar). You can use multiple tools.
- You must build the application and a testing framework for it.
- The application and tests must run locally.
- Do not include secrets or credentials in this repository.

## Evaluation Criteria
- Working product: Does the app do what it claims?
- Test coverage: Do tests cover key workflows and edge cases?
- Engineering quality: Clarity, structure, and maintainability.
- Use of codegen: How effectively you used tools to accelerate work.
- Documentation: Clear setup and run instructions.

## What to Submit
- When you are complete, put up a Pull Request against this repository with your changes.
- A short summary of your approach and tools used in your PR submission
- Any additional information or approach that helped you.
# Task Management API

A RESTful API for managing tasks with CRUD operations, filtering, and search capabilities. Built using spec-driven development methodology with FastAPI.

## Overview

This is a backend-only application that provides a complete task management system with:
- Full CRUD operations (Create, Read, Update, Delete)
- Task filtering by status and completion
- Search functionality (case-insensitive)
- In-memory data persistence
- Comprehensive test coverage (41 passing tests)
- Interactive API documentation

## Quick Start

### Prerequisites
- Python 3.8 or higher
- pip (Python package installer)

### Installation

1. **Clone the repository** (if not already done):
```bash
git clone <repository-url>
cd github-test
```

2. **Create a virtual environment** (recommended):
```bash
python -m venv venv

# On Windows
venv\Scripts\activate

# On macOS/Linux
source venv/bin/activate
```

3. **Install dependencies**:
```bash
pip install -r requirements.txt
```

### Running the Application

Start the API server:
```bash
uvicorn app.main:app --reload
```

The API will be available at `http://localhost:8000`

**Interactive API Documentation:**
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc

### Running Tests

Run all tests:
```bash
pytest
```

Run tests with coverage report:
```bash
pytest --cov=app --cov-report=html
```

Run tests with verbose output:
```bash
pytest -v
```

Run specific test file:
```bash
pytest tests/test_tasks_api.py
pytest tests/test_edge_cases.py
```

## API Endpoints

### Health Check
- `GET /` - Health check endpoint

### Task Operations

| Method | Endpoint | Description |
|--------|----------|-------------|
| POST | `/tasks` | Create a new task |
| GET | `/tasks` | Get all tasks (with optional filters) |
| GET | `/tasks/{id}` | Get a specific task by ID |
| PUT | `/tasks/{id}` | Update a task |
| DELETE | `/tasks/{id}` | Delete a task |

### Query Parameters

`GET /tasks` supports the following query parameters:
- `completed` (boolean): Filter by completion status
- `status` (string): Filter by task status (todo, in_progress, done)
- `search` (string): Search in task titles (case-insensitive)

**Examples:**
```bash
# Get all completed tasks
GET /tasks?completed=true

# Get all tasks with status "in_progress"
GET /tasks?status=in_progress

# Search for tasks containing "report"
GET /tasks?search=report

# Combine filters
GET /tasks?completed=false&status=todo&search=urgent
```

## Testing

The project includes comprehensive test coverage with **41 passing tests** across two test files:

**`tests/test_tasks_api.py`** - Core API functionality (17 tests):
- Health check
- Task creation (with various data combinations)
- Retrieving all tasks
- Getting single tasks by ID
- Updating tasks (full and partial updates)
- Deleting tasks
- Filtering by completion status
- Filtering by task status
- Searching by title (case-insensitive)
- Combined filters

**`tests/test_edge_cases.py`** - Edge cases and error scenarios (24 tests):
- 404 Not Found scenarios
- Input validation errors (422)
- Invalid UUID handling
- Boundary conditions (min/max lengths)
- Task ID uniqueness and idempotency
- Empty value handling
- Concurrent operations
- Special characters and Unicode support

## Project Structure

```
github-test/
├── app/
│ ├── __init__.py # Package initialization
│ ├── main.py # FastAPI application entry point
│ ├── models.py # Pydantic data models
│ ├── repository.py # Data access layer (in-memory)
│ ├── routes.py # API route definitions
│ └── service.py # Business logic layer
├── tests/
│ ├── __init__.py
│ ├── conftest.py # Pytest fixtures and configuration
│ ├── test_tasks_api.py # Core API endpoint tests
│ └── test_edge_cases.py # Edge case and error tests
├── SPECS/
│ ├── feature-template.md
│ ├── 01-task-crud-operations.md
│ ├── 02-task-filtering-search.md
│ └── 03-testing-framework.md
├── requirements.txt # Python dependencies
├── README.md # This file
├── RULES.md # Spec-driven development rules
└── TODO.md # Task tracking
```

## Architecture

The application follows a layered architecture pattern:

1. **Models Layer** (`models.py`): Pydantic models for request/response validation
2. **Repository Layer** (`repository.py`): Data access abstraction with in-memory storage
3. **Service Layer** (`service.py`): Business logic and filtering operations
4. **Routes Layer** (`routes.py`): HTTP endpoint definitions and request handling
5. **Main Application** (`main.py`): FastAPI app initialization and configuration

This separation of concerns provides:
- Clear code organization
- Easy testing (each layer can be tested independently)
- Future extensibility (e.g., swap in-memory storage for database)
- Maintainability

## Features Implemented

### Required Features
- Working backend application with meaningful workflows
- Create and view resources (tasks)
- Search and filter data
- In-memory data persistence
- Comprehensive test suite with extensive coverage (41 tests)
- Edge case coverage and error handling
- Local execution capability
- Clear documentation

### Additional Features
- Interactive API documentation (Swagger/ReDoc)
- Input validation with detailed error messages
- UUID-based unique identifiers
- Automatic timestamps (created_at, updated_at)
- Partial updates support
- Case-insensitive search
- Combinable filters
- Proper HTTP status codes
- Clean separation of concerns
- Comprehensive test coverage with extensive edge case testing

## Development Approach

This project was built following **spec-driven development** using AI code generation tools (Claude):

1. **Specification First**: Created detailed feature specs in `SPECS/` directory before any code
2. **Incremental Implementation**: Built features one at a time following the specs
3. **Test-Driven**: Wrote comprehensive tests alongside implementation
4. **Iterative Refinement**: Used AI to accelerate development while maintaining quality

### Results
- **Excellent code coverage** achieved across all modules
- **41 comprehensive tests** covering all functionality
- **Zero test failures** - all tests passing
- **Clean architecture** with proper separation of concerns
- **Production-ready** API with proper error handling

### Tools Used
- **Claude (AI Assistant)**: Primary code generation and development assistance
- **FastAPI**: Modern, fast Python web framework
- **Pytest**: Testing framework with coverage reporting
- **Pydantic**: Data validation
- **Uvicorn**: ASGI server

## Test Coverage

**Overall Coverage: Excellent**

**Test Results:**
- 41 tests passing
- 0 failures
- Only 3 lines uncovered out of 128 total

**Coverage by Module:**
- `app/__init__.py`: Complete
- `app/models.py`: Complete
- `app/repository.py`: Complete
- `app/service.py`: Complete
- `app/routes.py`: Near complete
- `app/main.py`: Good

**The test suite covers:**
- All CRUD operations
- All filtering and search scenarios
- Happy paths and success cases
- Error conditions (404, 422)
- Edge cases (empty data, boundary values)
- Input validation
- Special characters and Unicode
- Concurrent operations
- Boundary conditions (min/max field lengths)
- Task idempotency and uniqueness

**Run tests with coverage:**
```bash
pytest --cov=app --cov-report=term-missing
pytest --cov=app --cov-report=html
```

## Future Enhancements

Potential improvements documented in feature specs but not yet implemented:
- Database persistence (PostgreSQL/MongoDB)
- User authentication and authorization
- Pagination for large result sets
- Sorting options
- Task assignments to users
- Due dates and reminders
- Task categories/tags
- Full-text search across all fields

## License

This project is created for assessment purposes.
Loading