StudyBuddy is your personal AI-powered study companion that transforms your lecture notes and study materials into interactive learning tools! 📚 Whether you need flashcards for quick revision, practice exams to test your knowledge, or visual summaries with AI-generated diagrams, StudyBuddy has you covered. Built with a self-hosted backend running state-of-the-art open-source models (Llama, Mistral, Stable Diffusion), it keeps your data private while delivering powerful AI capabilities. Plus, you can chat with an AI tutor that understands your study materials! 🤖💡
- FAQ
- Changelog
## ✨ Features

- Summary 📚: Generate comprehensive, well-structured Markdown summaries of your study materials, complete with AI-generated diagrams and illustrations. The AI identifies key concepts that benefit from visual aids and creates custom images on the fly to enhance understanding. Perfect for condensing dense textbooks or lecture notes into digestible visual guides!
- Flashcards 🃏: Automatically extract key concepts, definitions, and facts from your materials and transform them into question-answer flashcard pairs. Ideal for memorization and quick review sessions. The AI focuses on the most important information, so you don't waste time on trivial details.
- Exam Questions 📝: Generate realistic multiple-choice practice exams based on your study content. Each question comes with four options and a verified correct answer. Great for testing your knowledge before the real exam and identifying gaps in your understanding!
- Chat 💬: Have an interactive conversation with an AI tutor that has full context of your study materials. Ask questions, request clarifications, or explore topics in depth. The AI adapts its responses to your learning style and provides clear, actionable explanations.
## 🖥️ Backend

The backend runs locally using PyTorch and Hugging Face models. The application was developed and tested on an RTX 4090 with 24GB VRAM. Choose AI models based on your available VRAM (see section: ⚙️ Configuration Options).
**Installation:**

```bash
cd backend
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

**Running the Server:**

```bash
# From the project root
./run_backend.sh

# Alternatively, if you want to change the IP or port
./run_backend.sh --host <HOST> --port <PORT>
```

The API will be available at http://localhost:8000 (or your specified host/port). Check <HOST>:<PORT>/health to verify it's running!
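For example, with the default host and port you can verify the server from another terminal; the exact response body isn't specified here, but a healthy server answers with a successful HTTP status:

```bash
# Quick sanity check against the default host/port
curl http://localhost:8000/health
```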
## ⚙️ Configuration Options

You can customize the AI models via environment variables or a `.env` file in the `backend/` directory. `backend/.env.example` provides a skeleton you can copy and rename to `.env`; all settings are explained in detail there.
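For illustration, a minimal `.env` could look like the sketch below. The variable names here are placeholders, not necessarily the real keys; check `backend/.env.example` for the actual settings:

```bash
# Hypothetical example: see backend/.env.example for the real variable names
TEXT_MODEL=Qwen/Qwen2.5-7B-Instruct
IMAGE_MODEL=stabilityai/sdxl-turbo
```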
### 🎯 Model Recommendations

Different models have different strengths and hardware requirements. Here are some great options:

**Text Models** (choose based on your GPU):

- `Qwen/Qwen2.5-7B-Instruct` - Best for structured output & JSON (7B, ~8GB VRAM) ⭐
- `meta-llama/Llama-3.1-8B-Instruct` - Excellent all-rounder (8B, ~8GB VRAM)
- `mistralai/Mistral-Nemo-Instruct-2407` - Great quality (12B, ~12GB VRAM)

**Image Models** (for summary diagrams):

- `stabilityai/sdxl-turbo` - Fast & good quality (recommended!) ⭐
- `stabilityai/stable-diffusion-xl-base-1.0` - Best quality, slower
- `ByteDance/SDXL-Lightning` - Ultra-fast, great for real-time
💡 Why choose different models? Larger models are smarter but need more VRAM and run slower. Smaller models are faster and use less memory but may struggle with complex reasoning. Pick what fits your hardware and patience level!
💡 Disable Image Generation: If you don't need generated images in your summaries, consider disabling image generation and using the saved VRAM for a more capable text model.
🔧 Use APIs: If you are an advanced user and want to use external text or image tools via an API, you can override the generative functions in `llm.py` to hook them up; a rough sketch is shown below. If there is demand, we might add common APIs like Google or OpenAI natively in the future.
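To make that concrete, here is what such an override could look like. The function name, signature, endpoint, and environment variable are illustrative assumptions, not the actual `llm.py` interface:

```python
# Hypothetical sketch: the real function names and signatures live in backend/llm.py.
import os

import requests


def generate_text(prompt: str) -> str:
    """Illustrative replacement of a local-model call with an external text API."""
    response = requests.post(
        "https://api.example.com/v1/generate",  # placeholder endpoint, not a real provider
        headers={"Authorization": f"Bearer {os.environ['EXTERNAL_API_KEY']}"},  # assumed env var
        json={"prompt": prompt},
        timeout=60,
    )
    response.raise_for_status()
    # Adjust the parsing to whatever schema your provider actually returns
    return response.json()["text"]
```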
## 🖼️ Frontend

The frontend is a React + TypeScript app built with Vite.
**Installation:**

```bash
cd frontend
npm install
```

**Running the Dev Server:**

```bash
# From the project root
./run_frontend.sh
```

The app will be available at http://localhost:5173 (or the port shown in your terminal).
By default, the frontend expects the backend at http://localhost:8000. If your backend runs on a different machine or port:
- Edit `frontend/services/geminiService.ts`
- Change the `API_BASE_URL` constant:

```ts
const API_BASE_URL = 'http://your-backend-ip:8000';
```
## 🚀 Fullstack

If you just want to run StudyBuddy locally without managing the front- and backend separately, you can simply run:

```bash
# From the project root
./run_fullstack.sh
```

This will automatically set up both components, connect them, and give you an access link to the application.
## ⚠️ Security Notice

This backend has no authentication, rate limiting, or API keys. It's designed for local, personal use. If you expose it to the internet:

- Anyone can access it and consume your GPU resources
- Anyone can send arbitrary content to your models
- Consider adding authentication (e.g., API keys, OAuth) before deployment
- Use a reverse proxy (nginx, Caddy) with rate limiting for production; see the sketch after this list
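As a minimal sketch, assuming nginx sits in front of the backend on its default port (the domain, zone size, and rate are placeholders to adapt):

```nginx
# Illustrative nginx reverse proxy with per-IP rate limiting
limit_req_zone $binary_remote_addr zone=studybuddy:10m rate=5r/s;

server {
    listen 80;
    server_name studybuddy.example.com;  # placeholder domain

    location / {
        limit_req zone=studybuddy burst=10 nodelay;
        proxy_pass http://127.0.0.1:8000;  # default backend host/port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```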
For personal use on localhost or a trusted local network, you're fine! 😊
🔧 Should there be demand, we might consider adding key-based API authentication in the future.
## 🤝 Contributing

Contributions are welcome and appreciated! 🎉 Whether you want to add features, fix bugs, improve documentation, or suggest better models, feel free to:
- 🐛 Open an issue for bugs or feature requests
- 🔧 Submit pull requests (please follow the existing code style)
- 💡 Share your ideas in discussions
- ⭐ Star the repo if you find it useful!
This is a hobby project built for fun and learning, so don't be shy - all skill levels are welcome! Let's make studying suck less together. 📚✨
## 📖 Citation

If you use StudyBuddy in academic work, research, etc., please provide attribution:
```
StudyBuddy AI - An open-source AI-powered study companion
GitHub: https://github.com/SvenPfiffner/StudyBuddy
Author: Sven Pfiffner
Year: 2025
```
## ⚖️ License

For commercial use, please ensure compliance with the licenses of the underlying models:
- Llama 3.1: Meta's License Agreement
- Mistral Models: Apache 2.0
- Stable Diffusion: CreativeML Open RAIL-M License
- Qwen Models: Apache 2.0
This project itself is provided as-is for educational and personal use. 🎓




