aravv27/AgileAI

User Story Refinement System - Hybrid Approach Implementation

🎯 Overview

This is a complete implementation of a hybrid approach for the User Story Refinement application. The system uses Google's Gemini AI to help Product Owners create INVEST-compliant user stories through conversational interviews.

Key Features

✅ Real-time Conversational Interface - WebSocket-based chat with Business Analyst agent
✅ Structured Interview Process - Guides through Initial Collection → Deep-Dive → Final Summary
✅ Automatic Story Extraction - Background AI agent extracts structured data from conversations
✅ Database Persistence - SQLite storage for sessions, stories, criteria, and dependencies
✅ JIRA Integration - Export validated stories directly to JIRA
✅ Status Tracking - Monitor extraction progress with dedicated API endpoints
✅ Error Handling - Robust error handling with retry mechanisms


πŸ—οΈ Architecture

System Flow

1. WebSocket Connection
   ↓
2. Initial Collection Phase
   - PO shares all feature ideas
   - Raw stories saved to database
   ↓
3. Sequential Deep-Dive Phase
   - Validate each story against INVEST
   - Track conversation per story
   ↓
4. Final Summary Phase
   - Present structured summary
   - PO confirms
   ↓
5. Background Extraction (Automatic)
   - Extract structured UserStory objects
   - Parse acceptance criteria
   - Resolve dependencies
   - Save to database
   ↓
6. JIRA Export (Optional)
   - Export stories to JIRA project
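
The hand-off from the Final Summary phase (step 4) to Background Extraction (step 5) can be sketched with asyncio background tasks. This is a minimal illustration of the pattern, not the actual main.py code; `extract_stories` and the session dict are hypothetical stand-ins for the real extraction agent and session model.

```python
import asyncio

async def extract_stories(session: dict) -> dict:
    """Hypothetical stand-in for the background extraction agent."""
    session["status"] = "extraction_pending"
    await asyncio.sleep(0)  # the real agent would call Gemini here
    session["status"] = "extraction_complete"
    return session

async def run_session() -> dict:
    session = {"session_id": "demo", "status": "in_progress"}
    # Fire-and-forget: the WebSocket handler returns while extraction
    # continues in the background; clients poll the status endpoint.
    task = asyncio.create_task(extract_stories(session))
    await task
    return session

session = asyncio.run(run_session())
print(session["status"])  # extraction_complete
```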

Components

  • main.py - FastAPI server with WebSocket and REST endpoints
  • agent.py - PromptAgent for conversational interview
  • story_extractor.py - StoryExtractionAgent for parsing conversations
  • database.py - SQLite database with ORM-like interface
  • jira_exporter.py - JIRA API integration

🚀 Quick Start

Prerequisites

  • Python 3.9+ (the Dockerfile below uses python:3.9-slim)
  • A Google Gemini API key (set as GEMINI_API_KEY, see Configuration)

Option 1: Automated Setup (Recommended)

Linux/Mac:

chmod +x quickstart.sh
./quickstart.sh

Windows:

quickstart.bat

Option 2: Manual Setup

  1. Create virtual environment:
python -m venv venv
source venv/bin/activate  # Linux/Mac
venv\Scripts\activate     # Windows
  2. Install dependencies (the quotes keep the uvicorn extras from being expanded by the shell):
pip install fastapi "uvicorn[standard]" websockets google-generativeai python-dotenv pydantic requests
  3. Configure environment:
# Create .env file
echo "GEMINI_API_KEY=your_api_key_here" > .env
  4. Replace files:
# Backup originals
cp database.py database.py.backup
cp agent.py agent.py.backup
cp main.py main.py.backup

# Replace with updated versions
mv database_updated.py database.py
mv agent_updated.py agent.py
mv main_updated.py main.py
  5. Run the server:
python main.py

📡 API Reference

WebSocket Endpoint

Connect: ws://localhost:8000/ws/{session_id}

Client Message:

{
  "content": "I want to build a user authentication system"
}

Server Response:

{
  "type": "agent_response",
  "content": "Great! Tell me more about...",
  "timestamp": "2025-11-02T20:30:00",
  "progress": {
    "phase": "initial_collection",
    "total_stories": 0,
    "current_story": 0,
    "completed_stories": 0
  },
  "session_completed": false
}
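
A client can parse this response envelope and decide when to switch from chatting to polling the extraction status. A minimal sketch (field names taken from the example payload above; the returned summary dict is a hypothetical convenience, not part of the API):

```python
import json

def handle_server_message(raw: str) -> dict:
    """Parse one agent_response frame and summarise what the client should do next."""
    msg = json.loads(raw)
    progress = msg.get("progress", {})
    return {
        "phase": progress.get("phase"),
        "done": f'{progress.get("completed_stories", 0)}/{progress.get("total_stories", 0)} stories',
        "next_action": "poll_extraction_status" if msg.get("session_completed") else "keep_chatting",
    }

frame = json.dumps({
    "type": "agent_response",
    "content": "Great! Tell me more about...",
    "timestamp": "2025-11-02T20:30:00",
    "progress": {"phase": "initial_collection", "total_stories": 0,
                 "current_story": 0, "completed_stories": 0},
    "session_completed": False,
})
print(handle_server_message(frame))
# {'phase': 'initial_collection', 'done': '0/0 stories', 'next_action': 'keep_chatting'}
```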

REST Endpoints

Check Extraction Status

GET /sessions/{session_id}/extraction-status

Response:

{
  "session_id": "abc-123",
  "status": "extraction_complete",
  "total_stories": 5,
  "extracted_stories": 5,
  "errors": null
}

Status Values:

  • in_progress - Conversation ongoing
  • extraction_pending - Extraction queued/running
  • extraction_complete - All stories extracted
  • jira_exported - Exported to JIRA
  • failed - Extraction failed

Retry Extraction

POST /sessions/{session_id}/retry-extraction

Use this if extraction fails due to transient errors.
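
The status and retry endpoints combine naturally into a polling loop. The sketch below keeps the HTTP call behind a `fetch_status` callable so the decision logic stands alone; in practice `fetch_status` would `GET /sessions/{session_id}/extraction-status` (e.g. with requests) and return the `status` field.

```python
from typing import Callable

TERMINAL = {"extraction_complete", "jira_exported"}

def next_step(status: str) -> str:
    """Map a status value from the endpoint to a client action."""
    if status in TERMINAL:
        return "done"
    if status == "failed":
        return "retry"   # POST /sessions/{session_id}/retry-extraction
    return "wait"        # in_progress / extraction_pending

def poll(fetch_status: Callable[[], str], max_polls: int = 10) -> str:
    for _ in range(max_polls):
        action = next_step(fetch_status())
        if action != "wait":
            return action
    return "timeout"

# Simulated sequence of responses instead of real HTTP calls:
responses = iter(["extraction_pending", "extraction_pending", "extraction_complete"])
print(poll(lambda: next(responses)))  # done
```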

Get Session Stories

GET /sessions/{session_id}/stories

Returns the complete SessionSummary with all extracted stories.

Export to JIRA

POST /sessions/{session_id}/export-to-jira
Content-Type: application/json

{
  "jira_url": "https://your-domain.atlassian.net",
  "jira_project_key": "PROJ",
  "jira_email": "user@example.com",
  "jira_api_token": "your_api_token"
}

Note: Extraction must be complete before JIRA export.
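
For example, the export can be triggered from Python with requests (already a project dependency). The base URL and credentials below are placeholders; `missing_fields` is a hypothetical client-side sanity check, not part of the API.

```python
REQUIRED = ("jira_url", "jira_project_key", "jira_email", "jira_api_token")

payload = {
    "jira_url": "https://your-domain.atlassian.net",
    "jira_project_key": "PROJ",
    "jira_email": "user@example.com",
    "jira_api_token": "your_api_token",
}

def missing_fields(p: dict) -> list:
    """Required fields the endpoint expects but the payload lacks."""
    return [f for f in REQUIRED if not p.get(f)]

def export_to_jira(session_id: str, base: str = "http://localhost:8000") -> dict:
    import requests  # listed in the install step above
    resp = requests.post(f"{base}/sessions/{session_id}/export-to-jira", json=payload)
    resp.raise_for_status()
    return resp.json()

print(missing_fields(payload))  # [] -- all required fields present
```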

Health Check

GET /health

🧪 Testing

Manual Testing Flow

  1. Start server:
python main.py
  2. Connect with a WebSocket client (e.g., Postman, wscat):
wscat -c ws://localhost:8000/ws/test-session-123
  3. Initial Collection:
{"content": "I need a login feature and a dashboard"}
  4. Deep-Dive Questions: Answer the agent's questions about each story.

  5. Final Summary:

{"content": "yes, that looks good"}
  6. Check extraction status:
curl http://localhost:8000/sessions/test-session-123/extraction-status
  7. View extracted stories:
curl http://localhost:8000/sessions/test-session-123/stories

Automated Testing

See IMPLEMENTATION_GUIDE.md for comprehensive testing checklist covering:

  • Initial Collection Phase
  • Sequential Deep-Dive Phase
  • Final Summary Phase
  • Background Extraction
  • API Endpoints
  • Error Scenarios

πŸ—„οΈ Database Schema

Sessions Table

CREATE TABLE sessions (
    session_id TEXT PRIMARY KEY,
    created_at TEXT NOT NULL,
    completed_at TEXT,
    user_personas TEXT,
    total_stories INTEGER DEFAULT 0,
    raw_stories TEXT,              -- JSON array
    story_conversations TEXT,       -- JSON array
    status TEXT DEFAULT 'in_progress',
    extraction_error TEXT
);

User Stories Table

CREATE TABLE user_stories (
    id TEXT PRIMARY KEY,
    session_id TEXT,
    title TEXT NOT NULL,
    description TEXT NOT NULL,
    user_persona TEXT NOT NULL,
    business_value TEXT NOT NULL,
    priority INTEGER NOT NULL,
    estimation_notes TEXT,
    is_independent BOOLEAN DEFAULT 0,
    is_negotiable BOOLEAN DEFAULT 0,
    is_valuable BOOLEAN DEFAULT 0,
    is_estimable BOOLEAN DEFAULT 0,
    is_small BOOLEAN DEFAULT 0,
    is_testable BOOLEAN DEFAULT 0,
    FOREIGN KEY (session_id) REFERENCES sessions (session_id)
);

Acceptance Criteria Table

CREATE TABLE acceptance_criteria (
    id TEXT PRIMARY KEY,
    story_id TEXT,
    criteria_text TEXT NOT NULL,
    priority INTEGER NOT NULL,
    FOREIGN KEY (story_id) REFERENCES user_stories (id)
);

Story Dependencies Table

CREATE TABLE story_dependencies (
    id TEXT PRIMARY KEY,
    dependent_story_id TEXT,
    depends_on_story_id TEXT,
    dependency_type TEXT NOT NULL,
    FOREIGN KEY (dependent_story_id) REFERENCES user_stories (id),
    FOREIGN KEY (depends_on_story_id) REFERENCES user_stories (id)
);
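
The schema can be exercised directly with Python's built-in sqlite3 module. The sketch below uses a trimmed subset of the user_stories columns and made-up sample values; it creates the tables in memory, inserts one story, and joins its acceptance criteria:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_stories (
    id TEXT PRIMARY KEY,
    session_id TEXT,
    title TEXT NOT NULL,
    description TEXT NOT NULL,
    user_persona TEXT NOT NULL,
    business_value TEXT NOT NULL,
    priority INTEGER NOT NULL
);
CREATE TABLE acceptance_criteria (
    id TEXT PRIMARY KEY,
    story_id TEXT,
    criteria_text TEXT NOT NULL,
    priority INTEGER NOT NULL,
    FOREIGN KEY (story_id) REFERENCES user_stories (id)
);
""")
conn.execute("INSERT INTO user_stories VALUES ('s1', 'abc-123', 'Login', "
             "'As a user, I want to log in', 'End user', 'Secure access', 1)")
conn.execute("INSERT INTO acceptance_criteria VALUES ('c1', 's1', "
             "'Valid credentials open the dashboard', 1)")

rows = conn.execute("""
    SELECT s.title, c.criteria_text
    FROM user_stories s JOIN acceptance_criteria c ON c.story_id = s.id
    ORDER BY c.priority
""").fetchall()
print(rows)  # [('Login', 'Valid credentials open the dashboard')]
```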

🔧 Configuration

Environment Variables

Create a .env file:

GEMINI_API_KEY=your_gemini_api_key_here

Gemini Model Configuration

The system uses two Gemini models:

  • agent.py: gemini-2.5-flash - For conversational interview
  • story_extractor.py: gemini-2.0-flash-exp - For structured extraction

You can change models in the respective files:

self.model = genai.GenerativeModel('gemini-2.5-flash')

📊 Monitoring & Debugging

Enable Detailed Logging

# In main.py, change logging level
logging.basicConfig(level=logging.DEBUG)

Check Database Contents

sqlite3 user_stories.db

-- View sessions
SELECT session_id, status, total_stories FROM sessions;

-- View raw stories
SELECT session_id, json_extract(raw_stories, '$') FROM sessions;

-- View extracted stories
SELECT title, user_persona, priority FROM user_stories;

Common Issues

See IMPLEMENTATION_GUIDE.md → Troubleshooting section for detailed solutions to:

  • Background extraction not starting
  • Extraction returning None
  • Dependencies not resolving
  • Database schema errors
  • JIRA export failing

🎨 Frontend Integration

The backend is designed to work with any frontend that supports WebSockets. Example frontend integration:

const ws = new WebSocket('ws://localhost:8000/ws/my-session-id');

// Wait for the connection to open before sending the first message;
// calling send() earlier throws an InvalidStateError.
ws.onopen = () => {
  ws.send(JSON.stringify({
    content: 'I want to build a user management feature'
  }));
};

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log('Agent:', data.content);
  console.log('Progress:', data.progress);

  if (data.session_completed) {
    console.log('Session completed, extraction starting...');
    checkExtractionStatus();
  }
};

async function checkExtractionStatus() {
  const response = await fetch('http://localhost:8000/sessions/my-session-id/extraction-status');
  const status = await response.json();
  console.log('Extraction status:', status.status);
}

📚 Documentation

  • README.md (this file) - Overview and quick start
  • IMPLEMENTATION_GUIDE.md - Detailed implementation, testing, and troubleshooting
  • API Docs - Auto-generated at http://localhost:8000/docs (FastAPI Swagger UI)

🔄 Migration from Original System

If you're migrating from the original system:

  1. Backup your database:
cp user_stories.db user_stories.db.backup
  2. Choose a migration option:

    Option A - Fresh Start (Recommended for Testing):

    rm user_stories.db  # New schema will be created automatically

    Option B - Manual Migration (For Production):

    ALTER TABLE sessions ADD COLUMN raw_stories TEXT;
    ALTER TABLE sessions ADD COLUMN story_conversations TEXT;
    ALTER TABLE sessions ADD COLUMN status TEXT DEFAULT 'in_progress';
    ALTER TABLE sessions ADD COLUMN extraction_error TEXT;
  3. Replace files (see Quick Start above)

  4. Test thoroughly (see Testing section)
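
Option B can also be automated with an idempotent snippet that only adds the columns that are missing, which is safe to run more than once. A sketch using the stdlib sqlite3 module, demonstrated here against an in-memory copy of a pre-migration sessions table:

```python
import sqlite3

NEW_COLUMNS = {
    "raw_stories": "TEXT",
    "story_conversations": "TEXT",
    "status": "TEXT DEFAULT 'in_progress'",
    "extraction_error": "TEXT",
}

def migrate_sessions(conn: sqlite3.Connection) -> list:
    """Add any missing columns to the sessions table; returns the ones added."""
    existing = {row[1] for row in conn.execute("PRAGMA table_info(sessions)")}
    added = []
    for name, decl in NEW_COLUMNS.items():
        if name not in existing:
            conn.execute(f"ALTER TABLE sessions ADD COLUMN {name} {decl}")
            added.append(name)
    return added

# Demo: an in-memory database with only the pre-migration columns
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (session_id TEXT PRIMARY KEY, created_at TEXT NOT NULL)")
print(migrate_sessions(conn))  # all four new columns added
print(migrate_sessions(conn))  # [] -- already migrated, nothing to do
```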


🚀 Deployment

Development

python main.py

Production

With Gunicorn (Linux/Mac):

pip install gunicorn
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000

With PM2 (Node.js process manager):

npm install -g pm2
pm2 start "uvicorn main:app --host 0.0.0.0 --port 8000" --name story-refinement

Docker:

FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

🤝 Contributing

This implementation follows the detailed requirements in the project specification. Key design decisions:

  1. Hybrid Approach - Conversation tracking + post-processing extraction
  2. Background Tasks - Non-blocking extraction using asyncio
  3. Status Tracking - Clear status progression for monitoring
  4. Error Handling - Graceful degradation with retry mechanisms
  5. Modular Design - Separate concerns (agent, extractor, database)

πŸ“ License

This implementation is provided as-is for the User Story Refinement project.


📞 Support

For issues and questions:

  1. Check IMPLEMENTATION_GUIDE.md → Troubleshooting
  2. Review logs in console output
  3. Test with minimal example (see Testing section)
  4. Check database contents directly

🎉 Success Criteria

The implementation is successful when:

✅ Stories discussed in Deep-Dive are fully captured with all details
✅ Background extraction reliably creates structured UserStory objects
✅ Database contains complete, accurate story data after session
✅ JIRA export works seamlessly with extracted stories
✅ System handles errors gracefully without data loss
✅ Status endpoints provide clear visibility into extraction progress


Built with ❤️ using FastAPI, Gemini AI, and SQLite
