An intelligent resume analysis and job matching system that leverages AI/LLM technology to provide real-time feedback and suggestions for resume optimization.
- Multi-format Support: Parse resumes in PDF, DOCX, and TXT formats
- AI-Powered Analysis: Leverage Ollama with local LLM inference for privacy
- Real-time Streaming: Get instant feedback with Server-Sent Events (SSE)
- Skill Extraction: Automatically identify and extract skills from resumes
- Job Description Matching: Compare resume content against job requirements
- Modern Web Interface: Beautiful React-based UI with Material-UI components
- Docker Ready: Easy deployment with Docker Compose
The project consists of four main services:
```
┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│  Frontend   │    │   Backend   │    │ ML Service  │    │   Ollama    │
│  (React)    │───▶│    (Go)     │───▶│  (Python)   │───▶│   (LLM)     │
│  Port 3000  │    │  Port 8080  │    │  Port 8000  │    │ Port 11434  │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘
```
- Frontend: React 18 + Vite + Material-UI, responsive web interface
- Backend: Go + Gin framework, handles file uploads and API routing
- ML Service: Python + FastAPI, processes resumes and integrates with the LLM (see the sketch after this list)
- Ollama: Local LLM inference engine with GPU acceleration support
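The streaming path is the interesting part: the ML service reads the model's output from Ollama incrementally and relays each chunk to the browser as an SSE event. Below is a minimal, illustrative sketch of that relay; the endpoint name, model, and prompt are assumptions and not the project's actual code (the real streaming endpoint, `/api/score_file_stream`, accepts an uploaded file rather than form text).

```python
# Illustrative relay: a FastAPI endpoint that streams Ollama output as SSE.
# Endpoint name, model, and prompt are assumptions, not the project's code.
import json

import httpx
from fastapi import FastAPI, Form
from fastapi.responses import StreamingResponse

app = FastAPI()
OLLAMA_GENERATE = "http://ollama:11434/api/generate"  # Ollama URL from the diagram

@app.post("/api/score_text_stream")  # hypothetical, text-only variant
async def score_text_stream(jd_text: str = Form(...), resume_text: str = Form(...)):
    async def events():
        payload = {
            "model": "llama3",  # assumed model name
            "prompt": f"Match this resume to the job description.\nJD: {jd_text}\nResume: {resume_text}",
            "stream": True,
        }
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream("POST", OLLAMA_GENERATE, json=payload) as resp:
                async for line in resp.aiter_lines():
                    if not line:
                        continue
                    chunk = json.loads(line)  # Ollama emits one JSON object per line
                    # Forward each token to the browser as an SSE "data:" event
                    yield f"data: {json.dumps({'token': chunk.get('response', '')})}\n\n"
        yield "data: [DONE]\n\n"
    return StreamingResponse(events(), media_type="text/event-stream")
```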
- Docker & Docker Compose: For containerized deployment
- Git: For cloning the repository
- Modern Web Browser: Chrome, Firefox, Safari, or Edge
```bash
git clone https://github.com/chenjunlin110/resume-match.git
cd resume-match

# Navigate to infrastructure directory
cd infra

# Start all services
docker compose -f docker-compose.mac.yml up -d

# Check service status
docker compose ps

# View logs
docker compose logs -f
```

- Open your browser and navigate to http://localhost:3000
- Upload a resume file (PDF, DOCX, or TXT format)
- Enter job description text in the provided field
- Submit and watch real-time AI analysis
- Review results including skill extraction and matching suggestions
The backend exposes two upload endpoints:

```
POST /api/upload
Content-Type: multipart/form-data

Parameters:
- jd_text: Job description text
- resume_file: Resume file upload
```

```
POST /api/upload_stream
Content-Type: multipart/form-data

Parameters:
- jd_text: Job description text
- resume_file: Resume file upload

Response: Server-Sent Events (SSE) stream
```
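Outside the browser, the streaming endpoint can be consumed from any HTTP client that reads the response incrementally. A minimal Python sketch, using the sample files described below; the payload format of each `data:` event is an assumption:

```python
# stream_client.py - consume the SSE stream from the backend.
# Assumes the stack is running locally on the default ports.
import requests

url = "http://localhost:8080/api/upload_stream"
with open("docs/samples/resume_sample.txt", "rb") as f:
    resp = requests.post(
        url,
        data={"jd_text": "Go programming"},
        files={"resume_file": f},
        stream=True,  # keep the connection open and read events as they arrive
    )
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line.startswith("data:"):
            print(line[len("data:"):].strip())
```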
Test the system with sample files in the docs/samples/ directory:

- resume_sample.txt - Sample resume content
- jd_sample.txt - Sample job description
The services are configured through environment variables:

```bash
ML_URL=http://ml:8000/api/score_file_stream   # ML service endpoint
OLLAMA_BASE_URL=http://ollama:11434           # Ollama service URL
OLLAMA_GPU_LAYERS=35                          # GPU acceleration layers
OLLAMA_FLASH_ATTENTION=true                   # Flash attention optimization
OLLAMA_METAL=1                                # Apple Metal GPU support (macOS)
```

The docker-compose.mac.yml file includes:

- Port mappings for all services
- Volume mounts for persistent data
- Environment variable configurations
- Service dependencies and health checks
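When running the ML service outside Docker, the OLLAMA_BASE_URL variable above can be picked up from the environment the usual way; a minimal sketch, where the fallback value is an assumption:

```python
import os

# Resolve the Ollama endpoint from the environment, falling back to a
# local default when running outside Docker (the fallback is an assumption).
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
GENERATE_URL = f"{OLLAMA_BASE_URL}/api/generate"
```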
```bash
# Test the streaming endpoint directly
curl -v -X POST http://localhost:8080/api/upload_stream \
  -F "jd_text=Go programming" \
  -F "resume_file=@../docs/samples/resume_sample.txt" \
  --no-buffer

# Test ML service
curl http://localhost:8000/health

# Test backend
curl http://localhost:8080/health

# Test Ollama
curl http://localhost:11434/api/tags
```
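Since Ollama can take a while to load a model, it can help to wait for all three services before running the curl tests above. A small Python helper hitting the same endpoints; the timeout values are arbitrary:

```python
# wait_healthy.py - poll the health endpoints until every service responds.
import time

import requests

ENDPOINTS = {
    "ml": "http://localhost:8000/health",
    "backend": "http://localhost:8080/health",
    "ollama": "http://localhost:11434/api/tags",
}

def wait_until_healthy(timeout_s: float = 120.0) -> None:
    deadline = time.monotonic() + timeout_s
    pending = dict(ENDPOINTS)
    while pending and time.monotonic() < deadline:
        for name, url in list(pending.items()):
            try:
                if requests.get(url, timeout=2).ok:
                    print(f"{name}: up")
                    del pending[name]
            except requests.RequestException:
                pass  # service not ready yet; retry on the next pass
        if pending:
            time.sleep(2)
    if pending:
        raise RuntimeError(f"services not healthy in time: {sorted(pending)}")

if __name__ == "__main__":
    wait_until_healthy()
```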
To run each service locally without Docker:

```bash
# Frontend
cd frontend
npm install
npm run dev

# Backend
cd backend
go mod tidy
go run main.go

# ML service
cd ml_service
pip install -r requirements.txt
uvicorn app:app --reload --host 0.0.0.0 --port 8000
```
The repository is organized as follows:

```
resume-match/
├── frontend/                  # React frontend application
│   ├── src/
│   │   ├── components/        # React components
│   │   └── config.ts          # API configuration
│   ├── Dockerfile             # Frontend container
│   └── package.json
├── backend/                   # Go backend service
│   ├── main.go                # Main application
│   └── Dockerfile             # Backend container
├── ml_service/                # Python ML service
│   ├── app.py                 # FastAPI application
│   ├── requirements.txt       # Python dependencies
│   └── Dockerfile             # ML service container
├── infra/                     # Infrastructure configuration
│   ├── docker-compose.mac.yml # Docker Compose for macOS
│   └── docker-compose.yml     # Standard Docker Compose
├── docs/                      # Documentation and samples
│   └── samples/               # Sample files for testing
└── README.md                  # This file
```
- Local LLM: All AI processing happens locally via Ollama
- No External APIs: No data sent to third-party services
- File Handling: Secure file upload and processing
- CORS Configuration: Proper cross-origin resource sharing setup (see the sketch below)
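As an example of the last point, FastAPI's CORSMiddleware can restrict browser access to the frontend's origin. A sketch, assuming the origins used in the architecture above; the actual configuration may differ:

```python
# cors_sketch.py - one way the ML service could allow the frontend origin.
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # the React app's origin
    allow_methods=["POST", "GET"],
    allow_headers=["*"],
)
```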
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Ollama - Local LLM inference
- FastAPI - Python web framework
- Gin - Go web framework
- React - Frontend framework
- Material-UI - UI component library
- Issues: GitHub Issues
- Connect: Gmail
Made with ❤️ by Junlin Chen
Transform your resume with AI-powered insights and real-time feedback.