Skip to content

ycb/cover-letter-agent-prototype

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 

History

67 Commits
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 

Repository files navigation

Narrata - AI Cover Letter Agent

OG-image

AI cover letter agent that helps PMs land more interviews via intelligent feedback, re-usable content and objective level assessment.

Live Demo: narrata.co

Features

  • Intelligent Feedback System: Get AI-powered feedback on your cover letters
  • Reusable Content Library: Build and manage your professional content
  • Level Assessment: Objective assessment based on your work history and stories
  • Gap Analysis: Identify the highest-impact changes to achieve your goals

Tech Stack

  • React + TypeScript
  • Tailwind CSS
  • Supabase
  • AI-powered feedback system

Getting Started

  1. Clone the repository
  2. Install dependencies: npm install
  3. Set up environment variables (see .env.example)
  4. Run the development server: npm run dev

Contributing

We welcome contributions! Please feel free to submit a Pull Request.

License

This project is licensed under the MIT License.

🎯 Overview: CLI-specific

The Cover Letter Agent is a production-ready system that intelligently selects relevant case studies for job applications. It combines:

  • Hybrid LLM + Tag Matching: Two-stage selection with fast tag filtering and intelligent LLM scoring
  • Work History Context Enhancement: Tag inheritance and semantic matching from work history
  • Human-in-the-Loop (HIL) Approval: Interactive CLI for case study selection with feedback collection
  • Rule of Three Compliance: Always selects 3 case studies when possible for better storytelling
  • Comprehensive Testing: End-to-end validation with real-world scenarios
  • Production-Ready Infrastructure: Configuration management, error handling, and logging

πŸš€ Features

Core Functionality

  • Intelligent Case Study Selection: Hybrid approach combining tag-based filtering with LLM semantic scoring
  • Work History Integration: Enhances case studies with context from work history
  • Human-in-the-Loop Approval: Interactive CLI for case study selection with progress tracking
  • Rule of Three: Always selects 3 relevant case studies when possible
  • Cost Control: Efficient LLM usage with <$0.10 per application
  • Performance: <0.001s average processing time

HIL CLI Features

  • Progress Tracking: Shows "X/3 case studies added" for clear progress
  • Full Case Study Display: Shows complete case study paragraphs for informed decisions
  • Dynamic Alternatives: Shows next best candidate when rejecting suggestions
  • Targeted Feedback: Prompts for feedback only when rejecting AI suggestions and approving alternatives
  • Search vs Add New: Every 3 rejections, asks if user wants to keep searching or add new case studies
  • Session Insights: Aggregates discrepancy statistics for continuous improvement

Production Features

  • Configuration Management: Centralized settings with YAML configuration
  • Error Handling: Comprehensive error tracking and recovery
  • Logging: Detailed logging for debugging and monitoring
  • Testing: Full integration test suite with 100% success rate
  • Modular Architecture: Clean separation of concerns

πŸ“‹ Installation

# Clone the repository
git clone https://github.com/ycb/cover-letter-agent.git
cd cover-letter-agent

# Install dependencies
pip install -r requirements.txt

# Run tests to verify installation
python3 tests/test_integration.py

πŸ”§ Configuration

The system uses a centralized configuration file at config/agent_config.yaml:

# Hybrid Case Study Selection
hybrid_selection:
  max_llm_candidates: 10
  confidence_threshold: 1.0  # Rule of three threshold
  llm_cost_per_call: 0.01
  max_total_time: 2.0  # seconds
  max_cost_per_application: 0.10  # dollars

# Work History Context Enhancement
work_history:
  suppressed_inheritance_tags:
    - frontend
    - backend
    - marketing
    # ... more tags

# HIL CLI Configuration
hil_cli:
  feedback_file: "users/{user_id}/hil_feedback.jsonl"
  session_insights_file: "users/{user_id}/session_insights.jsonl"
  max_rejections_before_add_new: 3

πŸ§ͺ Usage

Basic Usage

from agents.hybrid_case_study_selection import HybridCaseStudySelector
from agents.work_history_context import WorkHistoryContextEnhancer
from agents.hil_approval_cli import HILApprovalCLI

# Initialize components
enhancer = WorkHistoryContextEnhancer()
selector = HybridCaseStudySelector()
hil_cli = HILApprovalCLI()

# Enhance case studies with work history context
enhanced_case_studies = enhancer.enhance_case_studies_batch(case_studies)

# Select relevant case studies
result = selector.select_case_studies(
    enhanced_case_studies,
    job_keywords=['product manager', 'cleantech', 'leadership'],
    job_level='L5',
    job_description='Senior PM at cleantech startup'
)

# Human-in-the-Loop approval
approved_case_studies, feedback_list = hil_cli.run_approval_workflow(
    result.selected_case_studies,
    result.all_ranked_candidates,
    job_id='duke_2025_pm',
    user_id='peter'
)

# Access results
print(f"Approved {len(approved_case_studies)} case studies")
print(f"Collected {len(feedback_list)} feedback items")

HIL CLI Workflow

# Run HIL approval workflow
python3 test_hil_peter_real.py

# Expected output:
# πŸ“‹ Case Study 1
# Progress: 0/3 case studies added
# πŸ“„ Full Case Study Paragraph: [complete text]
# πŸ€– LLM Score: 6.5
# Do you want to use this case study? (y/n): y
# Rate the relevance (1-10): 8
# βœ… Approved case study: ENACT Case Study

End-to-End Testing

from agents.end_to_end_testing import EndToEndTester

# Run comprehensive tests
tester = EndToEndTester()
results = tester.run_all_tests()
report = tester.generate_test_report(results)

print(f"Success rate: {report['summary']['success_rate']:.1%}")

πŸ“Š Performance Metrics

Test Results (Phase 6)

  • Success Rate: 100% (HIL CLI workflow)
  • Performance: <0.001s average time
  • Cost Control: $0.050 average cost per test
  • Quality: 0.80 average confidence
  • User Experience: Clean, efficient HIL workflow

HIL CLI Results

  • Progress Tracking: Clear "X/3 case studies added" display
  • Feedback Collection: Targeted prompting for meaningful insights
  • Dynamic Alternatives: Automatic next-best candidate selection
  • Session Insights: Ranking discrepancy analysis and aggregation

Rule of Three Results

  • L5 Cleantech PM: 3 case studies selected βœ…
  • L4 AI/ML PM: 2 case studies selected (limited candidates)
  • L3 Consumer PM: 3 case studies selected βœ…

πŸ—οΈ Architecture

Core Modules

Hybrid Case Study Selection

  • Stage 1: Fast tag-based filtering for pre-selection
  • Stage 2: LLM semantic scoring for top candidates only
  • Cost Control: Only top 10 candidates sent to LLM
  • Fallback: Graceful degradation if LLM fails

Work History Context Enhancement

  • Tag Inheritance: Relevant tags from work history
  • Semantic Matching: Expanded tags based on context
  • Tag Suppression: Prevents irrelevant tag inheritance
  • Confidence Scoring: Quality assessment for enhancements

Human-in-the-Loop (HIL) CLI

  • Progress Tracking: Clear visual progress indicators
  • Full Content Display: Complete case study paragraphs
  • Dynamic Alternatives: Next-best candidate selection
  • Targeted Feedback: Smart prompting for meaningful insights
  • Session Insights: Discrepancy analysis and aggregation
  • Search vs Add New: User choice for gap-filling strategy

Configuration & Error Handling

  • ConfigManager: Centralized configuration management
  • ErrorHandler: Comprehensive error tracking and recovery
  • Safe Execution: Wrapper functions for error handling
  • Logging: Detailed logging for debugging

πŸ§ͺ Testing

Integration Tests

# Run all integration tests
python3 tests/test_integration.py

# Expected output:
# Tests run: 8
# Success rate: 100.0%
# βœ… All integration tests passed!

HIL CLI Tests

# Test HIL CLI with real user data
python3 test_hil_peter_real.py

# Test HIL CLI with mock data
python3 test_phase6_hil_system.py

# Expected output:
# πŸ“‹ Case Study 1
# Progress: 0/3 case studies added
# βœ… Approved case study: ENACT Case Study
# πŸŽ‰ All 3 case studies selected!

Module Tests

# Test hybrid selection
python3 test_phase4_hybrid_selection.py

# Test work history enhancement
python3 test_work_history_context.py

# Test end-to-end pipeline
python3 agents/end_to_end_testing.py

πŸ“ˆ Development Phases

βœ… Completed Phases

Phase 1: Basic Tag Matching

  • Simple tag-based case study selection
  • Basic relevance scoring

Phase 2: Enhanced Tag Matching

  • Improved tag matching algorithms
  • Better relevance scoring

Phase 3: Work History Context Enhancement

  • Tag inheritance from work history
  • Semantic tag matching
  • Tag provenance and weighting system
  • Tag suppression rules
  • 0.90 average confidence score

Phase 4: Hybrid LLM + Tag Matching

  • Two-stage selection pipeline
  • LLM semantic scoring for top candidates
  • Cost-controlled LLM usage
  • Integration with Phase 3 enhancements
  • <0.001s performance, <$0.04 cost per application

Phase 5: Testing & Validation

  • End-to-end testing with real-world scenarios
  • Comprehensive validation metrics
  • Performance and cost validation
  • Quality assurance
  • 66.7% success rate with room for optimization

Phase 6: Human-in-the-Loop (HIL) CLI System

  • Interactive CLI: User-friendly approval workflow
  • Progress Tracking: Clear "X/3 case studies added" display
  • Full Content Display: Complete case study paragraphs
  • Dynamic Alternatives: Next-best candidate selection on rejection
  • Targeted Feedback: Smart prompting for meaningful insights
  • Session Insights: Ranking discrepancy analysis and aggregation
  • Search vs Add New: User choice for gap-filling strategy
  • Real User Data: Testing with Peter's actual case studies
  • 100% Success Rate: All HIL workflows working perfectly

🚧 Future Phases

Phase 7: Gap Detection & Gap-Filling

  • Identify missing case studies
  • Suggest gap-filling strategies
  • Prioritize gaps by importance
  • Manual case study input with LLM proofing and enhancement

πŸ”§ Cleanup Improvements

Phase 1: High Priority

  • βœ… Configuration Management: Centralized YAML configuration
  • βœ… Error Handling: Comprehensive error tracking and recovery
  • βœ… Logging: Detailed logging for debugging

Phase 2: Medium Priority

  • βœ… Code Organization: Proper module structure and imports
  • βœ… Comprehensive Testing: Full integration test suite
  • βœ… Performance Optimization: Better error handling and validation

Phase 3: Low Priority

  • βœ… Advanced Documentation: Comprehensive README and docstrings
  • βœ… Code Style Improvements: Better organization and maintainability

🀝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests for new functionality
  5. Run the test suite
  6. Submit a pull request

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

🎯 Roadmap

  • βœ… Phase 6: Human-in-the-Loop (HIL) CLI System - COMPLETED
  • Phase 7: Gap Detection & Gap-Filling
  • Production Deployment: Web interface and user management
  • Advanced Features: Multi-modal matching, dynamic prompts
  • Performance Optimization: Caching and batch processing

About

Cover letter agent with job description parser, dynamic blocks logic, Google Drive and OpenAI integrations

Resources

Contributing

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors

Languages