# CareerPathAI

CareerPathAI is a comprehensive AI-powered career guidance system that helps job seekers analyze their skills, get personalized career recommendations, and develop learning plans to achieve their career goals.
## Features

- Resume Parsing: Extract skills and information from PDF/DOCX resumes using NLP
- Career Recommendations: Get personalized job recommendations based on your skills
- Skill Gap Analysis: Identify missing skills for your target roles
- Learning Plans: Get curated learning resources for skill development
- Personalized Advice: Receive tailored career guidance and next steps
- Skill Visualization: Interactive charts and metrics for skill analysis
- Manual Skills Input: Don't have a resume? Input skills manually
## Tech Stack

- FastAPI: Modern, fast web framework for building APIs
- spaCy: Advanced NLP for text processing and entity extraction
- scikit-learn: Machine learning for clustering and analysis
- Sentence Transformers: State-of-the-art text embeddings
- PDFMiner: PDF text extraction
- python-docx: DOCX file processing
- Streamlit: Beautiful, interactive web application
- Plotly: Interactive data visualizations
- Requests: HTTP client for API communication
- NLP: Named Entity Recognition, skill extraction
- Clustering: Career path clustering
- Similarity Matching: Skill-to-job matching algorithms
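The skill-to-job matching listed above can be illustrated with a simple overlap score. This is a hedged, simplified sketch, not the project's actual algorithm (which may rely on Sentence Transformer embeddings); the job profile, skill names, and the 0.8/0.2 weights below are hypothetical.

```python
# Simplified sketch of skill-to-job matching. The real engine may use
# embeddings; set overlap is enough to illustrate the idea.

def match_score(candidate_skills, required_skills, preferred_skills=()):
    """Score a candidate against a job profile on a 0-100 scale."""
    candidate = {s.lower() for s in candidate_skills}
    required = {s.lower() for s in required_skills}
    preferred = {s.lower() for s in preferred_skills}
    # Required skills dominate the score; preferred skills add a bonus.
    required_hit = len(candidate & required) / len(required) if required else 1.0
    preferred_hit = len(candidate & preferred) / len(preferred) if preferred else 0.0
    return round(100 * (0.8 * required_hit + 0.2 * preferred_hit), 1)

def missing_skills(candidate_skills, required_skills):
    """List required skills the candidate does not have."""
    candidate = {s.lower() for s in candidate_skills}
    return sorted(s for s in required_skills if s.lower() not in candidate)

# Hypothetical example profile and candidate:
profile = {'required_skills': ['python', 'sql', 'algorithms'],
           'preferred_skills': ['docker', 'aws']}
skills = ['Python', 'SQL', 'Docker']
print(match_score(skills, profile['required_skills'], profile['preferred_skills']))  # 63.3
print(missing_skills(skills, profile['required_skills']))  # ['algorithms']
```

The `missing_skills` output is what feeds a learning plan: each missing skill can be looked up in a resources table like the `LEARNING_RESOURCES` dictionary described later in this README.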
## Quick Start

### Prerequisites

- Python 3.8+
- pip (Python package manager)
- Git

### Setup

```bash
git clone <repository-url>
cd CareerPathAI
python -m venv venv
```

Activate the virtual environment:

Windows:
```bash
venv\Scripts\activate
```

macOS/Linux:
```bash
source venv/bin/activate
```

Install dependencies:
```bash
pip install -r requirements.txt
python -m spacy download en_core_web_sm
```

### Run the Backend

```bash
cd app
uvicorn main:app --reload
```

The API will be available at http://localhost:8000

### Run the Frontend

```bash
cd frontend
streamlit run app.py
```

The web app will be available at http://localhost:8501
## Project Structure

```
CareerPathAI/
├── app/
│   ├── main.py                    # FastAPI backend
│   ├── models/                    # ML models (future)
│   ├── services/
│   │   ├── resume_parser.py       # Resume parsing service
│   │   └── career_recommender.py  # Career recommendation engine
│   └── utils/                     # Helper utilities
├── frontend/
│   └── app.py                     # Streamlit frontend
├── data/
│   ├── resumes/                   # Sample resumes
│   └── jobs/                      # Job data
├── notebooks/                     # Jupyter notebooks for development
├── requirements.txt               # Python dependencies
└── README.md                      # This file
```
## Usage

### Upload a Resume

1. Open the web application at http://localhost:8501
2. Navigate to "Upload Resume" in the sidebar
3. Upload your PDF or DOCX resume
4. View the analysis results:
   - Extracted skills by category
   - Career recommendations with match scores
   - Learning plan for missing skills
   - Personalized career advice

### Manual Skills Input

1. Navigate to "Manual Skills Input" in the sidebar
2. Select your skills from the dropdown menus
3. Click "Analyze Skills"
4. Get the same comprehensive analysis without a resume
## API Endpoints

- `GET /`: API information and available endpoints
- `GET /health`: Health check
- `POST /upload_resume`: Upload and parse a resume file
- `POST /parse_text`: Parse a resume from text input
- `POST /analyze_skills`: Analyze skills without a resume
- `GET /job_profiles`: Get available job profiles
- `GET /learning_resources`: Get learning resources
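As a rough sketch, `/analyze_skills` could be called from Python using the Requests client listed in the tech stack. The request body shape here (skills grouped by category) is an assumption, not the documented schema; check the interactive FastAPI docs at http://localhost:8000/docs for the real one.

```python
# Hypothetical client sketch for POST /analyze_skills; the payload shape
# is an assumption based on the example response in this README.
import json

def build_payload(skills_by_category):
    """Assemble the (assumed) request body: skill lists keyed by category."""
    return {"skills": {cat: sorted(s) for cat, s in skills_by_category.items()}}

def analyze_skills(skills_by_category, base_url="http://localhost:8000"):
    import requests  # listed in requirements.txt
    resp = requests.post(f"{base_url}/analyze_skills",
                         json=build_payload(skills_by_category))
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Show the payload without needing a running backend.
    print(json.dumps(build_payload({"programming": ["python", "java"]})))
```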
## Example Response

```json
{
  "parsed_resume": {
    "name": "John Doe",
    "email": "john.doe@email.com",
    "phone": "(555) 123-4567",
    "skills": {
      "programming": ["python", "javascript", "java"],
      "web_development": ["react", "node.js", "django"],
      "databases": ["mysql", "postgresql", "mongodb"],
      "cloud_platforms": ["aws", "docker", "kubernetes"]
    }
  },
  "career_analysis": {
    "recommendations": [
      {
        "job_title": "Software Engineer",
        "match_score": 85.5,
        "description": "Design, develop, and maintain software applications",
        "missing_skills": ["algorithms", "data structures"]
      }
    ],
    "learning_plan": {
      "algorithms": {
        "courses": ["https://www.coursera.org/learn/algorithms"],
        "books": ["Introduction to Algorithms"],
        "practice": ["LeetCode", "HackerRank"]
      }
    }
  }
}
```

## Customization

Edit `app/services/resume_parser.py` to add new skill categories:

```python
SKILL_KEYWORDS = {
    'new_category': [
        'skill1', 'skill2', 'skill3'
    ]
}
```

Edit `app/services/career_recommender.py` to add new job profiles:
```python
JOB_PROFILES = {
    'New Job Title': {
        'required_skills': ['skill1', 'skill2'],
        'preferred_skills': ['skill3', 'skill4'],
        'description': 'Job description here'
    }
}
```

Update the `LEARNING_RESOURCES` dictionary in `career_recommender.py`:
```python
LEARNING_RESOURCES = {
    'new_skill': {
        'courses': ['course_url'],
        'books': ['book_title'],
        'practice': ['practice_platform']
    }
}
```

## Deployment

### Local Development

- Follow the Quick Start guide above
- Both backend and frontend will run on localhost
### Cloud Deployment

- Backend: deploy to cloud platforms like:
  - Heroku
  - Railway
  - AWS EC2
  - Google Cloud Run
- Frontend: deploy to:
  - Streamlit Cloud
  - Vercel
  - Netlify
### Docker

```dockerfile
# Backend Dockerfile
FROM python:3.9
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app/ .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

### Deploying to Streamlit Cloud

- Push this repository to a GitHub repo.
- Go to https://share.streamlit.io and connect your GitHub account.
- Add an app with the following settings:
- Repository: your-repo
- Main file path: streamlit_app.py
- (Optional) Set the environment variable API_BASE_URL in Streamlit Cloud to your hosted backend (e.g. https://my-backend.example.com). If not set, the app will attempt to use http://localhost:8000 (useful for local development).
- Ensure the repository has the requirements.txt at the root (already present).
- Deploy: Streamlit Cloud will install dependencies and run streamlit_app.py automatically.
Notes:
- If you host the FastAPI backend separately, point API_BASE_URL in the Streamlit Cloud app settings to that backend URL.
- If you want both frontend and backend in the same deployment, consider deploying the backend to a separate service (Heroku/Railway/GCP/AWS) and set API_BASE_URL accordingly.
## Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature-name`
3. Make your changes
4. Add tests if applicable
5. Commit your changes: `git commit -m 'Add feature'`
6. Push to the branch: `git push origin feature-name`
7. Submit a pull request
Made with ❤️ by Dasini Jayakody for job seekers worldwide