
# CP-Helper

A Competitive Programming assistant powered by Retrieval-Augmented Generation (RAG) with local LLM capabilities. This project provides intelligent assistance for CP problems using VNOI/USACO documentation.

πŸ—οΈ Architecture

This is a full-stack application with two main components:

  • Backend (api-rag): FastAPI-based RAG engine with local LLM processing
  • Frontend (frontend): Next.js web interface using assistant-ui

πŸš€ Features

  • Semantic Chunking: Context-aware document splitting using Semantic Router
  • Local LLM: Meta Llama 3 running via Ollama for privacy and cost efficiency
  • Vector Search: Pinecone serverless vector database with high-quality embeddings
  • Real-time Chat: Streaming responses with modern React UI
  • Multi-language Support: Optimized for Vietnamese CP documentation
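The rolling-window chunking idea can be sketched in a few lines: walk the document sentence by sentence and start a new chunk whenever adjacent sentences stop looking similar. The word-overlap similarity below is a toy stand-in for the embedding-based similarity Semantic Router actually uses:

```python
def similarity(a, b):
    # Jaccard word overlap -- a toy stand-in for embedding similarity.
    wa = set(a.lower().replace(".", "").split())
    wb = set(b.lower().replace(".", "").split())
    return len(wa & wb) / len(wa | wb)

def rolling_window_chunks(sentences, threshold=0.2):
    # Start a new chunk when consecutive sentences drift apart semantically.
    chunks, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if similarity(prev, cur) < threshold:
            chunks.append(current)
            current = []
        current.append(cur)
    chunks.append(current)
    return chunks

doc = [
    "Binary search splits the search range in half.",
    "Binary search needs a sorted range.",
    "A segment tree answers range queries in log time.",
]
print(rolling_window_chunks(doc))
# The two binary-search sentences stay together; the segment-tree one starts a new chunk.
```

The threshold and similarity function here are illustrative; the real pipeline tunes these against actual embeddings.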

πŸ› οΈ Tech Stack

Backend

  • Framework: FastAPI (Python 3.10+)
  • LLM: Meta Llama 3 (local via Ollama)
  • Vector DB: Pinecone Serverless Index
  • Embeddings: nomic-ai/nomic-embed-text-v1.5 (768 dim)
  • Chunking: Semantic Router with rolling window splitting
  • Processing: LangChain integration
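Conceptually, the retrieval step finds stored chunks whose embeddings are closest to the question's embedding. Pinecone performs this server-side over the 768-dimensional nomic embeddings; the sketch below shows the same idea with toy 3-dimensional vectors:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query, index, k=2):
    # index maps chunk id -> embedding; return the k ids most similar to query.
    ranked = sorted(index, key=lambda cid: cosine(query, index[cid]), reverse=True)
    return ranked[:k]

index = {
    "binary-search": [0.9, 0.1, 0.0],
    "dfs":           [0.1, 0.9, 0.1],
    "segment-tree":  [0.8, 0.2, 0.1],
}
print(top_k([1.0, 0.0, 0.0], index))  # → ['binary-search', 'segment-tree']
```

The chunk ids and vectors are made up for illustration; in the real system, `indexing.py` upserts embedded chunks and Pinecone answers the top-k query.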

### Frontend

- **Framework**: Next.js 16 with React 19
- **UI Components**: assistant-ui, Radix UI
- **Styling**: Tailwind CSS
- **AI Integration**: Vercel AI SDK
- **TypeScript**: Full type safety

## 📋 Prerequisites

- Python 3.10+
- Node.js 18+
- Ollama installed and running
- A Pinecone account and API key

πŸ› οΈ Installation & Setup

Backend Setup

  1. Navigate to the backend directory:
cd api-rag
  1. Create a virtual environment:
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  1. Install dependencies:
pip install -r requirements.txt
  1. Set up environment variables:
cp .env.example .env
# Edit .env with your Pinecone API key and other configurations
  1. Start Ollama and pull Llama 3:
ollama serve
ollama pull llama3
  1. Run the backend server:
python server.py
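Once Ollama is serving, the backend reaches it over HTTP on port 11434. For reference, a call to Ollama's `/api/generate` endpoint is roughly shaped like this; the helper below only builds the request (it does not connect), and the model name matches the `llama3` pull above:

```python
import json
import urllib.request

OLLAMA_BASE_URL = "http://localhost:11434"

def build_generate_request(prompt, model="llama3", stream=True):
    # Ollama's /api/generate expects a JSON body with model, prompt, and stream.
    body = {"model": model, "prompt": prompt, "stream": stream}
    return urllib.request.Request(
        f"{OLLAMA_BASE_URL}/api/generate",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("Explain binary search in one sentence.")
print(req.full_url)                    # http://localhost:11434/api/generate
print(json.loads(req.data)["model"])   # llama3
```

In the actual backend this plumbing is handled by LangChain's Ollama integration rather than raw `urllib`.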

### Frontend Setup

1. Navigate to the frontend directory:

   ```bash
   cd frontend
   ```

2. Install dependencies:

   ```bash
   npm install
   # or
   pnpm install
   ```

3. Set up environment variables:

   ```bash
   cp .env.example .env.local
   # Edit with your API keys if needed
   ```

4. Run the development server:

   ```bash
   npm run dev
   # or
   pnpm dev
   ```

5. Open http://localhost:3000 in your browser.

πŸ“ Project Structure

CP-Helper/
β”œβ”€β”€ api-rag/                  # Backend RAG engine
β”‚   β”œβ”€β”€ server.py            # FastAPI server with chat endpoints
β”‚   β”œβ”€β”€ indexing.py          # Document indexing and chunking
β”‚   β”œβ”€β”€ multiquery.py        # RAG query processing
β”‚   β”œβ”€β”€ requirements.txt     # Python dependencies
β”‚   └── .env.example         # Environment variables template
β”œβ”€β”€ frontend/                 # Next.js frontend
β”‚   β”œβ”€β”€ app/                 # App router pages
β”‚   β”œβ”€β”€ components/          # React components
β”‚   β”œβ”€β”€ package.json        # Node.js dependencies
β”‚   β”œβ”€β”€ tsconfig.json       # TypeScript configuration
β”‚   └── tailwind.config.js  # Tailwind CSS config
β”œβ”€β”€ .gitignore              # Git ignore file
└── README.md              # This file

## 🔧 Configuration

### Backend Environment Variables

- `PINECONE_API_KEY`: Your Pinecone API key
- `PINECONE_INDEX_NAME`: Name of your Pinecone index
- `OLLAMA_BASE_URL`: Ollama server URL (default: `http://localhost:11434`)
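A small sketch of how the backend might read these variables, failing loudly only on the one that has no safe fallback (the `"cp-helper"` default index name is hypothetical, chosen here just for illustration):

```python
import os

def load_config(env=os.environ):
    # PINECONE_API_KEY has no sensible default, so a missing value is an error.
    api_key = env.get("PINECONE_API_KEY")
    if not api_key:
        raise RuntimeError("PINECONE_API_KEY is not set")
    return {
        "pinecone_api_key": api_key,
        # Hypothetical default index name -- use whatever your index is called.
        "pinecone_index_name": env.get("PINECONE_INDEX_NAME", "cp-helper"),
        "ollama_base_url": env.get("OLLAMA_BASE_URL", "http://localhost:11434"),
    }

cfg = load_config({"PINECONE_API_KEY": "pc-demo-key"})
print(cfg["ollama_base_url"])  # http://localhost:11434
```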

### Frontend Environment Variables

None are strictly required for local development; copy `.env.example` to `.env.local` and add API keys there if needed.

## 🤖 Usage

  1. Start both backend and frontend servers
  2. Open the web interface at http://localhost:3000
  3. Ask questions about competitive programming concepts
  4. Get intelligent responses based on VNOI/USACO documentation

## 📚 Documentation Sources

The system is designed to work with competitive programming documentation from:

- VNOI (Vietnam Olympiad in Informatics)
- USACO (USA Computing Olympiad)
- CP-Algorithms
- Other CP learning resources

## 🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ” API Endpoints

POST /api/chat

Main chat endpoint for querying the RAG system.

Request:

{
  "question": "How does binary search work?"
}

Response: Streaming response with RAG-enhanced answers.
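A minimal Python client would POST that JSON body and consume the response incrementally. The sketch below only builds the body and decodes byte chunks as they would arrive; the backend URL and port are assumptions (8000 is FastAPI's usual default):

```python
import json

API_URL = "http://localhost:8000/api/chat"  # assumed backend address

def chat_body(question):
    # Request body matching the endpoint spec above.
    return json.dumps({"question": question}).encode("utf-8")

def iter_text(raw_chunks, encoding="utf-8"):
    # Decode a stream of byte chunks into text pieces as they arrive.
    # With a real HTTP response, raw_chunks would be the response's chunk iterator.
    for chunk in raw_chunks:
        if chunk:
            yield chunk.decode(encoding)

body = chat_body("How does binary search work?")
pieces = list(iter_text([b"Binary search ", b"halves the range."]))
print("".join(pieces))  # Binary search halves the range.
```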

πŸ› οΈ Development

Running Tests

# Backend
cd api-rag
pytest

# Frontend  
cd frontend
npm test

Code Formatting

# Backend
cd api-rag
black .
isort .

# Frontend
cd frontend
npm run prettier:fix
npm run lint

πŸ› Troubleshooting

Common Issues

  1. Ollama connection failed: Make sure Ollama is running and Llama 3 is downloaded
  2. Pinecone connection error: Verify your API key and index configuration
  3. CORS issues: Check the CORS settings in server.py
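On the CORS point: `server.py`'s actual settings may differ, but a typical FastAPI configuration that allows requests from the Next.js dev server looks like this sketch:

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],  # Next.js dev server
    allow_methods=["*"],
    allow_headers=["*"],
)
```

If the frontend runs on a different origin, it must appear in `allow_origins`.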

### Getting Help

- Check the logs in both the backend and frontend
- Ensure all environment variables are properly set
- Verify that Ollama and Pinecone services are accessible

*Built with ❤️ for the Competitive Programming community*
