An AI cover letter agent that helps PMs land more interviews through intelligent feedback, reusable content, and objective level assessment.
Live Demo: narrata.co
- Intelligent Feedback System: Get AI-powered feedback on your cover letters
- Reusable Content Library: Build and manage your professional content
- Level Assessment: Objective assessment based on your work history and stories
- Gap Analysis: Identify the highest-impact changes to achieve your goals
- React + TypeScript
- Tailwind CSS
- Supabase
- AI-powered feedback system
- Clone the repository
- Install dependencies: `npm install`
- Set up environment variables (see `.env.example`)
- Run the development server: `npm run dev`
We welcome contributions! Please feel free to submit a Pull Request.
This project is licensed under the MIT License.
The Cover Letter Agent is a production-ready system that intelligently selects relevant case studies for job applications. It combines:
- Hybrid LLM + Tag Matching: Two-stage selection with fast tag filtering and intelligent LLM scoring
- Work History Context Enhancement: Tag inheritance and semantic matching from work history
- Human-in-the-Loop (HIL) Approval: Interactive CLI for case study selection with feedback collection
- Rule of Three Compliance: Always selects 3 case studies when possible for better storytelling
- Comprehensive Testing: End-to-end validation with real-world scenarios
- Production-Ready Infrastructure: Configuration management, error handling, and logging
- Intelligent Case Study Selection: Hybrid approach combining tag-based filtering with LLM semantic scoring
- Work History Integration: Enhances case studies with context from work history
- Human-in-the-Loop Approval: Interactive CLI for case study selection with progress tracking
- Rule of Three: Always selects 3 relevant case studies when possible
- Cost Control: Efficient LLM usage with <$0.10 per application
- Performance: <0.001s average processing time
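The per-application cost target can be enforced with a simple budget guard. A minimal sketch — the `CostGuard` class is illustrative, not the repo's actual implementation, and it uses integer cents to avoid floating-point drift:

```python
class CostGuard:
    """Caps LLM spend per application. Illustrative sketch; the real
    system reads these limits from agent_config.yaml."""

    def __init__(self, cost_per_call_cents=1, max_cost_cents=10):
        self.cost_per_call = cost_per_call_cents
        self.max_cost = max_cost_cents
        self.spent = 0

    def can_afford_call(self):
        # Only allow a call if it keeps us at or under budget
        return self.spent + self.cost_per_call <= self.max_cost

    def record_call(self):
        self.spent += self.cost_per_call


guard = CostGuard()
calls = 0
while guard.can_afford_call():
    guard.record_call()  # stand-in for one LLM scoring call
    calls += 1

print(calls)  # -> 10: ten $0.01 calls fit in a $0.10 budget
```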
- Progress Tracking: Shows "X/3 case studies added" for clear progress
- Full Case Study Display: Shows complete case study paragraphs for informed decisions
- Dynamic Alternatives: Shows next best candidate when rejecting suggestions
- Targeted Feedback: Prompts for feedback only when rejecting AI suggestions and approving alternatives
- Search vs Add New: After every 3 rejections, asks whether the user wants to keep searching or add a new case study
- Session Insights: Aggregates discrepancy statistics for continuous improvement
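Collected feedback can be persisted as JSON Lines, one record per decision, following the `hil_feedback.jsonl` path pattern from the configuration. This is a hedged sketch — the record fields and helper name are illustrative, not the CLI's actual schema:

```python
import json
from pathlib import Path


def append_feedback(user_id, record, base_dir="users"):
    """Append one feedback record to the user's JSONL file.
    Path mirrors the configured users/{user_id}/hil_feedback.jsonl
    pattern; field names below are illustrative."""
    path = Path(base_dir) / user_id / "hil_feedback.jsonl"
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


append_feedback("peter", {
    "job_id": "duke_2025_pm",
    "case_study_id": "enact",
    "decision": "rejected",
    "reason": "not relevant to cleantech",
})
```

Appending (rather than rewriting) keeps every session's decisions available for later aggregation into session insights.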
- Configuration Management: Centralized settings with YAML configuration
- Error Handling: Comprehensive error tracking and recovery
- Logging: Detailed logging for debugging and monitoring
- Testing: Full integration test suite with 100% success rate
- Modular Architecture: Clean separation of concerns
```bash
# Clone the repository
git clone https://github.com/ycb/cover-letter-agent.git
cd cover-letter-agent

# Install dependencies
pip install -r requirements.txt

# Run tests to verify installation
python3 tests/test_integration.py
```

The system uses a centralized configuration file at `config/agent_config.yaml`:
```yaml
# Hybrid Case Study Selection
hybrid_selection:
  max_llm_candidates: 10
  confidence_threshold: 1.0  # Rule of three threshold
  llm_cost_per_call: 0.01
  max_total_time: 2.0  # seconds
  max_cost_per_application: 0.10  # dollars

# Work History Context Enhancement
work_history:
  suppressed_inheritance_tags:
    - frontend
    - backend
    - marketing
    # ... more tags

# HIL CLI Configuration
hil_cli:
  feedback_file: "users/{user_id}/hil_feedback.jsonl"
  session_insights_file: "users/{user_id}/session_insights.jsonl"
  max_rejections_before_add_new: 3
```

```python
from agents.hybrid_case_study_selection import HybridCaseStudySelector
from agents.work_history_context import WorkHistoryContextEnhancer
from agents.hil_approval_cli import HILApprovalCLI

# Initialize components
enhancer = WorkHistoryContextEnhancer()
selector = HybridCaseStudySelector()
hil_cli = HILApprovalCLI()

# Enhance case studies with work history context
enhanced_case_studies = enhancer.enhance_case_studies_batch(case_studies)

# Select relevant case studies
result = selector.select_case_studies(
    enhanced_case_studies,
    job_keywords=['product manager', 'cleantech', 'leadership'],
    job_level='L5',
    job_description='Senior PM at cleantech startup'
)

# Human-in-the-Loop approval
approved_case_studies, feedback_list = hil_cli.run_approval_workflow(
    result.selected_case_studies,
    result.all_ranked_candidates,
    job_id='duke_2025_pm',
    user_id='peter'
)

# Access results
print(f"Approved {len(approved_case_studies)} case studies")
print(f"Collected {len(feedback_list)} feedback items")
```

```bash
# Run HIL approval workflow
python3 test_hil_peter_real.py

# Expected output:
# 📋 Case Study 1
# Progress: 0/3 case studies added
# 📋 Full Case Study Paragraph: [complete text]
# 🤖 LLM Score: 6.5
# Do you want to use this case study? (y/n): y
# Rate the relevance (1-10): 8
# ✅ Approved case study: ENACT Case Study
```

```python
from agents.end_to_end_testing import EndToEndTester

# Run comprehensive tests
tester = EndToEndTester()
results = tester.run_all_tests()
report = tester.generate_test_report(results)
print(f"Success rate: {report['summary']['success_rate']:.1%}")
```

- Success Rate: 100% (HIL CLI workflow)
- Performance: <0.001s average time
- Cost Control: $0.050 average cost per test
- Quality: 0.80 average confidence
- User Experience: Clean, efficient HIL workflow
- Progress Tracking: Clear "X/3 case studies added" display
- Feedback Collection: Targeted prompting for meaningful insights
- Dynamic Alternatives: Automatic next-best candidate selection
- Session Insights: Ranking discrepancy analysis and aggregation
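The discrepancy analysis above can be computed by folding the JSONL feedback log into per-session statistics. A minimal sketch — the record schema (`decision`, `llm_rank`, `user_rank`) is illustrative, not the system's actual format:

```python
import json
from collections import Counter


def summarize_session(jsonl_lines):
    """Aggregate decisions and ranking discrepancies from feedback
    records. Illustrative schema: each record carries a 'decision'
    and optionally the LLM's rank vs. the user's preferred rank."""
    decisions = Counter()
    discrepancies = []
    for line in jsonl_lines:
        rec = json.loads(line)
        decisions[rec["decision"]] += 1
        if "llm_rank" in rec and "user_rank" in rec:
            discrepancies.append(abs(rec["llm_rank"] - rec["user_rank"]))
    avg = sum(discrepancies) / len(discrepancies) if discrepancies else 0.0
    return {"decisions": dict(decisions), "avg_rank_discrepancy": avg}


log = [
    '{"decision": "approved", "llm_rank": 1, "user_rank": 1}',
    '{"decision": "rejected", "llm_rank": 2, "user_rank": 5}',
    '{"decision": "approved", "llm_rank": 3, "user_rank": 2}',
]
summary = summarize_session(log)
print(summary["decisions"])  # {'approved': 2, 'rejected': 1}
```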
- L5 Cleantech PM: 3 case studies selected ✅
- L4 AI/ML PM: 2 case studies selected (limited candidates)
- L3 Consumer PM: 3 case studies selected ✅
- Stage 1: Fast tag-based filtering for pre-selection
- Stage 2: LLM semantic scoring for top candidates only
- Cost Control: Only top 10 candidates sent to LLM
- Fallback: Graceful degradation if LLM fails
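The two stages above can be sketched as follows. This is an illustrative outline, not the repo's `HybridCaseStudySelector`: `score_with_llm` is a placeholder for the real LLM call, and the dict shapes are assumptions:

```python
def tag_filter(case_studies, job_keywords, max_candidates=10):
    """Stage 1: cheap tag-overlap pre-selection (no LLM cost)."""
    scored = [(len(set(cs["tags"]) & set(job_keywords)), cs)
              for cs in case_studies]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [cs for overlap, cs in scored if overlap > 0][:max_candidates]


def score_with_llm(case_study, job_description):
    """Stage 2 stand-in: a real system would prompt an LLM here."""
    return len(set(case_study["tags"])) / 10  # placeholder heuristic


def select_case_studies(case_studies, job_keywords, job_description, top_n=3):
    candidates = tag_filter(case_studies, job_keywords)  # fast, free
    ranked = sorted(candidates,
                    key=lambda cs: score_with_llm(cs, job_description),
                    reverse=True)
    return ranked[:top_n]  # Rule of Three: take the top 3 when available


case_studies = [
    {"id": "enact", "tags": ["cleantech", "leadership"]},
    {"id": "app", "tags": ["consumer"]},
    {"id": "ml", "tags": ["product manager", "ml"]},
]
picked = select_case_studies(case_studies,
                             ["cleantech", "product manager"], "Senior PM")
print([cs["id"] for cs in picked])  # ['enact', 'ml']
```

Because only the Stage-1 survivors (at most `max_candidates`) reach Stage 2, LLM spend stays bounded regardless of library size.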
- Tag Inheritance: Relevant tags from work history
- Semantic Matching: Expanded tags based on context
- Tag Suppression: Prevents irrelevant tag inheritance
- Confidence Scoring: Quality assessment for enhancements
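Tag inheritance with suppression can be sketched like this. The suppression set mirrors the `suppressed_inheritance_tags` entries in the YAML configuration; the function name is illustrative:

```python
# Mirrors suppressed_inheritance_tags from agent_config.yaml
SUPPRESSED_TAGS = {"frontend", "backend", "marketing"}


def inherit_tags(case_study_tags, work_history_tags):
    """Merge work-history tags into a case study's tags, skipping
    suppressed ones so irrelevant context never leaks in.
    Illustrative sketch of the inheritance + suppression idea."""
    inherited = {t for t in work_history_tags if t not in SUPPRESSED_TAGS}
    return set(case_study_tags) | inherited


tags = inherit_tags({"cleantech"},
                    {"leadership", "frontend", "product manager"})
print(sorted(tags))  # ['cleantech', 'leadership', 'product manager']
```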
- Progress Tracking: Clear visual progress indicators
- Full Content Display: Complete case study paragraphs
- Dynamic Alternatives: Next-best candidate selection
- Targeted Feedback: Smart prompting for meaningful insights
- Session Insights: Discrepancy analysis and aggregation
- Search vs Add New: User choice for gap-filling strategy
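The approval loop's control flow can be sketched as below. This is not the repo's `HILApprovalCLI`: `ask` stands in for real interactive prompts, and the every-3-rejections branch models the search-vs-add-new choice:

```python
def approval_loop(suggestions, alternatives, ask,
                  target=3, max_rejections_before_add_new=3):
    """Sketch of the HIL control flow: approve suggestions, fall back
    to the next-best alternative on rejection, and after every 3
    rejections ask whether to keep searching or add a new case study.
    `ask(prompt)` stands in for input() and returns 'y' or 'n'."""
    approved, rejections = [], 0
    queue = list(suggestions) + list(alternatives)  # dynamic next-best fallback
    while queue and len(approved) < target:
        candidate = queue.pop(0)
        print(f"Progress: {len(approved)}/{target} case studies added")
        if ask(f"Use '{candidate}'? (y/n): ") == "y":
            approved.append(candidate)
        else:
            rejections += 1
            if rejections % max_rejections_before_add_new == 0:
                if ask("Keep searching? (y/n): ") == "n":
                    break  # user will add a new case study instead
    return approved


# Scripted answers instead of real input(), for demonstration
answers = iter(["y", "n", "y", "y"])
picked = approval_loop(["A", "B"], ["C", "D"],
                       ask=lambda prompt: next(answers))
print(picked)  # ['A', 'C', 'D']
```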
- ConfigManager: Centralized configuration management
- ErrorHandler: Comprehensive error tracking and recovery
- Safe Execution: Wrapper functions for error handling
- Logging: Detailed logging for debugging
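A safe-execution wrapper of the kind described might look like this. It is a sketch, not the repo's actual `ErrorHandler`: failures are logged and a fallback value is returned so the pipeline degrades gracefully instead of crashing:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("cover_letter_agent")


def safe_execute(fallback=None):
    """Decorator sketch: log the exception with traceback, then return
    a fallback instead of propagating (e.g. when the LLM call fails)."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                logger.exception("Error in %s; using fallback", func.__name__)
                return fallback
        return wrapper
    return decorator


@safe_execute(fallback=[])
def llm_score_candidates(candidates):
    raise RuntimeError("LLM unavailable")  # simulate an API failure


print(llm_score_candidates(["a", "b"]))  # -> []: degraded, not crashed
```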
```bash
# Run all integration tests
python3 tests/test_integration.py

# Expected output:
# Tests run: 8
# Success rate: 100.0%
# ✅ All integration tests passed!
```

```bash
# Test HIL CLI with real user data
python3 test_hil_peter_real.py

# Test HIL CLI with mock data
python3 test_phase6_hil_system.py

# Expected output:
# 📋 Case Study 1
# Progress: 0/3 case studies added
# ✅ Approved case study: ENACT Case Study
# 🎉 All 3 case studies selected!
```

```bash
# Test hybrid selection
python3 test_phase4_hybrid_selection.py

# Test work history enhancement
python3 test_work_history_context.py

# Test end-to-end pipeline
python3 agents/end_to_end_testing.py
```

- Simple tag-based case study selection
- Basic relevance scoring
- Improved tag matching algorithms
- Better relevance scoring
- Tag inheritance from work history
- Semantic tag matching
- Tag provenance and weighting system
- Tag suppression rules
- 0.90 average confidence score
- Two-stage selection pipeline
- LLM semantic scoring for top candidates
- Cost-controlled LLM usage
- Integration with Phase 3 enhancements
- <0.001s performance, <$0.04 cost per application
- End-to-end testing with real-world scenarios
- Comprehensive validation metrics
- Performance and cost validation
- Quality assurance
- 66.7% success rate with room for optimization
- Interactive CLI: User-friendly approval workflow
- Progress Tracking: Clear "X/3 case studies added" display
- Full Content Display: Complete case study paragraphs
- Dynamic Alternatives: Next-best candidate selection on rejection
- Targeted Feedback: Smart prompting for meaningful insights
- Session Insights: Ranking discrepancy analysis and aggregation
- Search vs Add New: User choice for gap-filling strategy
- Real User Data: Testing with Peter's actual case studies
- 100% Success Rate: All HIL workflows working perfectly
- Identify missing case studies
- Suggest gap-filling strategies
- Prioritize gaps by importance
- Manual case study input with LLM proofing and enhancement
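One way this planned gap detection could work is to diff the job's required tags against the tags covered by the case study library, then rank the gaps by importance. A hedged sketch — `find_gaps` and the weight map are hypothetical, since Phase 7 is not yet built:

```python
def find_gaps(required_tags, case_studies, weights=None):
    """Return job tags not covered by any case study, highest
    weight first. `weights` (importance per tag) is an illustrative
    prioritization input, not an existing config field."""
    covered = set()
    for cs in case_studies:
        covered |= set(cs["tags"])
    gaps = [t for t in required_tags if t not in covered]
    weights = weights or {}
    return sorted(gaps, key=lambda t: weights.get(t, 0), reverse=True)


gaps = find_gaps(
    ["cleantech", "leadership", "pricing", "growth"],
    [{"tags": ["cleantech", "leadership"]}],
    weights={"pricing": 3, "growth": 1},
)
print(gaps)  # ['pricing', 'growth']
```

Ranked gaps like these could then feed the manual-input flow, where an LLM proofs and enhances the new case study.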
- ✅ Configuration Management: Centralized YAML configuration
- ✅ Error Handling: Comprehensive error tracking and recovery
- ✅ Logging: Detailed logging for debugging
- ✅ Code Organization: Proper module structure and imports
- ✅ Comprehensive Testing: Full integration test suite
- ✅ Performance Optimization: Better error handling and validation
- ✅ Advanced Documentation: Comprehensive README and docstrings
- ✅ Code Style Improvements: Better organization and maintainability
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Run the test suite
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- ✅ Phase 6: Human-in-the-Loop (HIL) CLI System - COMPLETED
- Phase 7: Gap Detection & Gap-Filling
- Production Deployment: Web interface and user management
- Advanced Features: Multi-modal matching, dynamic prompts
- Performance Optimization: Caching and batch processing