# User Story Refinement

This is a complete implementation of a hybrid approach for the User Story Refinement application. The system uses Google's Gemini AI to help Product Owners create INVEST-compliant user stories through conversational interviews.
## Features

✅ Real-time Conversational Interface - WebSocket-based chat with the Business Analyst agent
✅ Structured Interview Process - Guides through Initial Collection → Deep-Dive → Final Summary
✅ Automatic Story Extraction - Background AI agent extracts structured data from conversations
✅ Database Persistence - SQLite storage for sessions, stories, criteria, and dependencies
✅ JIRA Integration - Export validated stories directly to JIRA
✅ Status Tracking - Monitor extraction progress with dedicated API endpoints
✅ Error Handling - Robust error handling with retry mechanisms
## Workflow

1. WebSocket Connection

2. Initial Collection Phase
   - PO shares all feature ideas
   - Raw stories saved to database

3. Sequential Deep-Dive Phase
   - Validate each story against INVEST
   - Track conversation per story

4. Final Summary Phase
   - Present structured summary
   - PO confirms

5. Background Extraction (Automatic)
   - Extract structured UserStory objects
   - Parse acceptance criteria
   - Resolve dependencies
   - Save to database

6. JIRA Export (Optional)
   - Export stories to JIRA project
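The phases and statuses above map naturally onto small enums. The sketch below is illustrative, not the actual code: the class names and the `deep_dive`/`final_summary` values are assumptions, while `initial_collection` and the status strings come from the documented API payloads.

```python
# Illustrative model of the interview phases and session statuses described
# above. The enum classes are assumptions; most string values mirror the
# documented API, but the middle two phase names are guesses.
from enum import Enum

class InterviewPhase(str, Enum):
    INITIAL_COLLECTION = "initial_collection"
    DEEP_DIVE = "deep_dive"
    FINAL_SUMMARY = "final_summary"

class SessionStatus(str, Enum):
    IN_PROGRESS = "in_progress"
    EXTRACTION_PENDING = "extraction_pending"
    EXTRACTION_COMPLETE = "extraction_complete"
    JIRA_EXPORTED = "jira_exported"
    FAILED = "failed"
```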
## File Structure

- main.py - FastAPI server with WebSocket and REST endpoints (see the sketch after this list)
- agent.py - PromptAgent for the conversational interview
- story_extractor.py - StoryExtractionAgent for parsing conversations
- database.py - SQLite database with an ORM-like interface
- jira_exporter.py - JIRA API integration
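To make the wiring concrete, here is a minimal sketch of how main.py might expose the WebSocket endpoint. It is not the actual implementation: StubAgent stands in for PromptAgent, and the in-memory session registry is an assumption.

```python
# Minimal sketch of the WebSocket wiring in main.py. StubAgent stands in
# for the real PromptAgent; the in-memory registry is an assumption.
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

class StubAgent:
    """Placeholder for PromptAgent in agent.py."""
    async def handle_message(self, content: str) -> str:
        return f"Tell me more about: {content}"

sessions = {}  # session_id -> StubAgent

@app.websocket("/ws/{session_id}")
async def websocket_endpoint(websocket: WebSocket, session_id: str):
    await websocket.accept()
    agent = sessions.setdefault(session_id, StubAgent())
    try:
        while True:
            message = await websocket.receive_json()
            reply = await agent.handle_message(message.get("content", ""))
            await websocket.send_json({
                "type": "agent_response",
                "content": reply,
                "session_completed": False,
            })
    except WebSocketDisconnect:
        sessions.pop(session_id, None)
```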
## Prerequisites

- Python 3.8+
- Google Gemini API key (get one from Google AI Studio)
## Quick Start

Linux/Mac:

```bash
chmod +x quickstart.sh
./quickstart.sh
```

Windows:

```bash
quickstart.bat
```

Or set up manually:

1. Create virtual environment:
```bash
python -m venv venv
source venv/bin/activate   # Linux/Mac
venv\Scripts\activate      # Windows
```

2. Install dependencies:
```bash
pip install fastapi "uvicorn[standard]" websockets google-generativeai python-dotenv pydantic requests
```

3. Configure environment:
```bash
# Create .env file
echo "GEMINI_API_KEY=your_api_key_here" > .env
```

4. Replace files:
```bash
# Backup originals
cp database.py database.py.backup
cp agent.py agent.py.backup
cp main.py main.py.backup

# Replace with updated versions
mv database_updated.py database.py
mv agent_updated.py agent.py
mv main_updated.py main.py
```

5. Run the server:
```bash
python main.py
```

## API Reference

### WebSocket Endpoint

Connect: `ws://localhost:8000/ws/{session_id}`
Client Message:
```json
{
  "content": "I want to build a user authentication system"
}
```

Server Response:

```json
{
  "type": "agent_response",
  "content": "Great! Tell me more about...",
  "timestamp": "2025-11-02T20:30:00",
  "progress": {
    "phase": "initial_collection",
    "total_stories": 0,
    "current_story": 0,
    "completed_stories": 0
  },
  "session_completed": false
}
```

### REST Endpoints

GET /sessions/{session_id}/extraction-status

Response:
```json
{
  "session_id": "abc-123",
  "status": "extraction_complete",
  "total_stories": 5,
  "extracted_stories": 5,
  "errors": null
}
```

Status Values:
- `in_progress` - Conversation ongoing
- `extraction_pending` - Extraction queued/running
- `extraction_complete` - All stories extracted
- `jira_exported` - Exported to JIRA
- `failed` - Extraction failed
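In practice a client polls this endpoint until a terminal value appears. A minimal sketch, assuming the server runs on localhost:8000:

```python
# Poll the extraction-status endpoint until a terminal status is reached.
# Assumes the server is running locally on port 8000.
import time
import requests

def wait_for_extraction(session_id, timeout=120.0):
    terminal = {"extraction_complete", "jira_exported", "failed"}
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(
            f"http://localhost:8000/sessions/{session_id}/extraction-status"
        )
        resp.raise_for_status()
        status = resp.json()
        if status["status"] in terminal:
            return status
        time.sleep(2)  # back off between polls
    raise TimeoutError(f"extraction not finished after {timeout}s")
```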
POST /sessions/{session_id}/retry-extraction

Use this if extraction fails due to transient errors.
GET /sessions/{session_id}/stories

Returns the complete SessionSummary with all extracted stories.
POST /sessions/{session_id}/export-to-jira
Content-Type: application/json

```json
{
  "jira_url": "https://your-domain.atlassian.net",
  "jira_project_key": "PROJ",
  "jira_email": "user@example.com",
  "jira_api_token": "your_api_token"
}
```

Note: Extraction must be complete before JIRA export.
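The same call from Python, as a sketch (the session id and all JIRA values are placeholders):

```python
# Trigger the JIRA export for a completed session.
# Every JIRA value below is a placeholder; the session id is illustrative.
import requests

resp = requests.post(
    "http://localhost:8000/sessions/abc-123/export-to-jira",
    json={  # requests sets Content-Type: application/json
        "jira_url": "https://your-domain.atlassian.net",
        "jira_project_key": "PROJ",
        "jira_email": "user@example.com",
        "jira_api_token": "your_api_token",
    },
)
resp.raise_for_status()
print(resp.json())
```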
GET /health

## Testing

1. Start server:
```bash
python main.py
```

2. Connect with a WebSocket client (e.g., Postman, wscat):
```bash
wscat -c ws://localhost:8000/ws/test-session-123
```

3. Initial Collection:
{"content": "I need a login feature and a dashboard"}-
Deep-Dive Questions: Answer the agent's questions about each story.
-
Final Summary:
{"content": "yes, that looks good"}- Check extraction status:
```bash
curl http://localhost:8000/sessions/test-session-123/extraction-status
```

7. View extracted stories:
```bash
curl http://localhost:8000/sessions/test-session-123/stories
```

See IMPLEMENTATION_GUIDE.md for a comprehensive testing checklist covering:
- Initial Collection Phase
- Sequential Deep-Dive Phase
- Final Summary Phase
- Background Extraction
- API Endpoints
- Error Scenarios
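The manual walkthrough above can also be scripted. A minimal smoke-test sketch, assuming a locally running server and `pip install websockets requests`:

```python
# Smoke test: send one message over the WebSocket, then query the
# extraction-status endpoint. Assumes a local server on port 8000.
import asyncio
import json

import requests
import websockets

SESSION_ID = "test-session-123"

async def first_turn():
    async with websockets.connect(f"ws://localhost:8000/ws/{SESSION_ID}") as ws:
        await ws.send(json.dumps({"content": "I need a login feature and a dashboard"}))
        reply = json.loads(await ws.recv())
        print("Agent:", reply["content"])
        print("Progress:", reply["progress"])

asyncio.run(first_turn())

status = requests.get(
    f"http://localhost:8000/sessions/{SESSION_ID}/extraction-status"
).json()
print("Status:", status["status"])
```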
## Database Schema

```sql
CREATE TABLE sessions (
session_id TEXT PRIMARY KEY,
created_at TEXT NOT NULL,
completed_at TEXT,
user_personas TEXT,
total_stories INTEGER DEFAULT 0,
raw_stories TEXT, -- JSON array
story_conversations TEXT, -- JSON array
status TEXT DEFAULT 'in_progress',
extraction_error TEXT
);

CREATE TABLE user_stories (
id TEXT PRIMARY KEY,
session_id TEXT,
title TEXT NOT NULL,
description TEXT NOT NULL,
user_persona TEXT NOT NULL,
business_value TEXT NOT NULL,
priority INTEGER NOT NULL,
estimation_notes TEXT,
is_independent BOOLEAN DEFAULT 0,
is_negotiable BOOLEAN DEFAULT 0,
is_valuable BOOLEAN DEFAULT 0,
is_estimable BOOLEAN DEFAULT 0,
is_small BOOLEAN DEFAULT 0,
is_testable BOOLEAN DEFAULT 0,
FOREIGN KEY (session_id) REFERENCES sessions (session_id)
);

CREATE TABLE acceptance_criteria (
id TEXT PRIMARY KEY,
story_id TEXT,
criteria_text TEXT NOT NULL,
priority INTEGER NOT NULL,
FOREIGN KEY (story_id) REFERENCES user_stories (id)
);

CREATE TABLE story_dependencies (
id TEXT PRIMARY KEY,
dependent_story_id TEXT,
depends_on_story_id TEXT,
dependency_type TEXT NOT NULL,
FOREIGN KEY (dependent_story_id) REFERENCES user_stories (id),
FOREIGN KEY (depends_on_story_id) REFERENCES user_stories (id)
);
```
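With this schema, extracted stories and their acceptance criteria can be read back with a single join; for example, using Python's built-in sqlite3 module:

```python
# List each extracted story with its acceptance criteria (built-in sqlite3).
import sqlite3

conn = sqlite3.connect("user_stories.db")
conn.row_factory = sqlite3.Row  # access columns by name
rows = conn.execute(
    """
    SELECT s.title, s.priority, c.criteria_text
    FROM user_stories AS s
    LEFT JOIN acceptance_criteria AS c ON c.story_id = s.id
    ORDER BY s.priority, c.priority
    """
).fetchall()
for row in rows:
    print(f"[P{row['priority']}] {row['title']}: {row['criteria_text']}")
conn.close()
```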
## Configuration

Create a .env file:

```bash
GEMINI_API_KEY=your_gemini_api_key_here
```

The system uses two Gemini models:
- agent.py: `gemini-2.5-flash` - For the conversational interview
- story_extractor.py: `gemini-2.0-flash-exp` - For structured extraction
You can change models in the respective files:
```python
self.model = genai.GenerativeModel('gemini-2.5-flash')
```

## Debugging

Enable verbose logging:

```python
# In main.py, change logging level
logging.basicConfig(level=logging.DEBUG)
```

Inspect the database directly:

```bash
sqlite3 user_stories.db
```
```sql
-- View sessions
SELECT session_id, status, total_stories FROM sessions;

-- View raw stories
SELECT session_id, json_extract(raw_stories, '$') FROM sessions;

-- View extracted stories
SELECT title, user_persona, priority FROM user_stories;
```

## Troubleshooting

See IMPLEMENTATION_GUIDE.md → Troubleshooting section for detailed solutions to:
- Background extraction not starting
- Extraction returning None
- Dependencies not resolving
- Database schema errors
- JIRA export failing
## Frontend Integration

The backend is designed to work with any frontend that supports WebSockets. Example integration:

```javascript
const ws = new WebSocket('ws://localhost:8000/ws/my-session-id');
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
console.log('Agent:', data.content);
console.log('Progress:', data.progress);
if (data.session_completed) {
console.log('Session completed, extraction starting...');
checkExtractionStatus();
}
};
// Send the first message once the connection is open
ws.onopen = () => {
  ws.send(JSON.stringify({
    content: 'I want to build a user management feature'
  }));
};
async function checkExtractionStatus() {
const response = await fetch('/sessions/my-session-id/extraction-status');
const status = await response.json();
console.log('Extraction status:', status.status);
}
```

## Documentation

- README.md (this file) - Overview and quick start
- IMPLEMENTATION_GUIDE.md - Detailed implementation, testing, and troubleshooting
- API Docs - Auto-generated at http://localhost:8000/docs (FastAPI Swagger UI)
## Migration

If you're migrating from the original system:
1. Backup your database:

```bash
cp user_stories.db user_stories.db.backup
```

2. Choose migration option:

Option A - Fresh Start (Recommended for Testing):

```bash
rm user_stories.db  # New schema will be created automatically
```

Option B - Manual Migration (For Production):

```sql
ALTER TABLE sessions ADD COLUMN raw_stories TEXT;
ALTER TABLE sessions ADD COLUMN story_conversations TEXT;
ALTER TABLE sessions ADD COLUMN status TEXT DEFAULT 'in_progress';
ALTER TABLE sessions ADD COLUMN extraction_error TEXT;
```

3. Replace files (see Quick Start above)

4. Test thoroughly (see Testing section)
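Option B can also be applied as an idempotent script that adds each column only if it is missing; a sketch, not part of the shipped code:

```python
# Idempotent version of the Option B migration: each column is added to the
# sessions table only if it does not already exist.
import sqlite3

NEW_COLUMNS = {
    "raw_stories": "TEXT",
    "story_conversations": "TEXT",
    "status": "TEXT DEFAULT 'in_progress'",
    "extraction_error": "TEXT",
}

conn = sqlite3.connect("user_stories.db")
existing = {row[1] for row in conn.execute("PRAGMA table_info(sessions)")}
for name, definition in NEW_COLUMNS.items():
    if name not in existing:
        conn.execute(f"ALTER TABLE sessions ADD COLUMN {name} {definition}")
conn.commit()
conn.close()
```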
## Deployment

Development:

```bash
python main.py
```

With Gunicorn (Linux/Mac):
```bash
pip install gunicorn
gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000
```

With PM2 (Node.js process manager):
```bash
npm install -g pm2
pm2 start "uvicorn main:app --host 0.0.0.0 --port 8000" --name story-refinement
```

Docker:
```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```
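Assuming the image is tagged story-refinement and your .env file holds GEMINI_API_KEY, build and run with:

```bash
docker build -t story-refinement .
docker run -p 8000:8000 --env-file .env story-refinement
```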
## Design Decisions

This implementation follows the detailed requirements specified in the prompt. Key design decisions:

- Hybrid Approach - Conversation tracking + post-processing extraction
- Background Tasks - Non-blocking extraction using asyncio (see the sketch after this list)
- Status Tracking - Clear status progression for monitoring
- Error Handling - Graceful degradation with retry mechanisms
- Modular Design - Separate concerns (agent, extractor, database)
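As a sketch of the background-task pattern (run_extraction stands in for the real StoryExtractionAgent call; this is not the actual code):

```python
# Sketch of the non-blocking extraction kickoff. run_extraction stands in
# for the real StoryExtractionAgent invocation and database writes.
import asyncio
import logging

async def run_extraction(session_id: str) -> None:
    ...  # call StoryExtractionAgent, persist stories, update session status

def start_background_extraction(session_id: str) -> asyncio.Task:
    """Kick off extraction without blocking the WebSocket handler."""
    task = asyncio.create_task(run_extraction(session_id))

    def on_done(t):
        if not t.cancelled() and t.exception() is not None:
            # Leave the session in a failed state so /retry-extraction applies.
            logging.error("extraction failed for %s: %s", session_id, t.exception())

    task.add_done_callback(on_done)
    return task
```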
## License

This implementation is provided as-is for the User Story Refinement project.
## Support

For issues and questions:
- Check IMPLEMENTATION_GUIDE.md → Troubleshooting
- Review logs in console output
- Test with minimal example (see Testing section)
- Check database contents directly
## Success Criteria

The implementation is successful when:

✅ Stories discussed in Deep-Dive are fully captured with all details
✅ Background extraction reliably creates structured UserStory objects
✅ Database contains complete, accurate story data after the session
✅ JIRA export works seamlessly with extracted stories
✅ System handles errors gracefully without data loss
✅ Status endpoints provide clear visibility into extraction progress
Built with ❤️ using FastAPI, Gemini AI, and SQLite