EmoLearn Nexus is an AI-powered platform for real-time emotion detection and adaptive learning. By combining facial expression, voice tone, and mouse interaction analysis, it gauges user engagement and mood and turns them into actionable insights. Whenever an emotion is detected, the AI assistant proactively appears with support, guidance, or encouragement tailored to the learner's current state, whether they are confused, bored, or motivated. Beyond motivation, the assistant offers interactive games and targeted learning support, helping users grasp concepts and guiding them toward their best learning condition. Through real-time, personalized interventions such as explanations, motivational messages, and engaging activities, EmoLearn Nexus keeps help timely and relevant, making learning more effective, supportive, and adaptive to each individual's needs.
- Learners (Students): Learn more effectively, stay focused, and maintain the best emotional state for studying.
- Tutors & Educators: Monitor students’ emotions in real time and adapt lectures to keep classes engaging.
- Parents: Gain insights into their child’s emotional well-being and learning journey.
- Gamers & Game Developers: Discover which parts of a game are most engaging or frustrating to improve game design and user experience.
- Corporate Trainers: Track employee engagement and emotional responses during professional development.
- Therapists & Counselors: Monitor emotional states during digital or remote therapy sessions.
- UX/UI Researchers: Gather valuable data on user emotions to enhance product interfaces and digital experiences.
- Facial Emotion Detection: Uses OpenCV, MediaPipe, and DeepFace for real-time webcam-based emotion analysis.
- Voice Emotion Detection: Analyzes pitch, zero-crossing rate (ZCR), and spoken keywords from microphone input using librosa, PyAudio, and SpeechRecognition.
- Mouse Interaction Emotion Detection: Tracks mouse movement, clicks, and idle patterns to infer engagement or frustration.
- Emotion Combiner: Fuses all three modalities with priority logic and logs results to both CSV and JSON.
- Customizable Emotion Weights: Tune detection sensitivity via `emotion_weights.json`.
- FastAPI Server: The `/api/emotion` endpoint returns the current combined emotion.
- CORS Enabled: Ready for frontend integration.
- Structured Logging: Logs all emotion events for analytics and visualization.
- Live Emotion Dashboard: Visualizes current mood, trends, and emotion history.
- AI Learning Assistant: Context-aware chatbot that adapts to your mood, offers explanations, quizzes, resources, and stress-relief exercises.
- Interactive Controls: Toggle camera/mic, launch breathing exercises, and access quick actions.
- Progress Badges & Stats: Earn achievements and track learning streaks, focus, and mastery.
- Motivational Tools: Breathing exercises, lofi music, and motivational quotes.
- Modern UI/UX: Responsive, accessible, and visually engaging.
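The priority fusion performed by the Emotion Combiner can be sketched as follows. The emotion labels and priority ordering here are illustrative assumptions for this sketch, not the shipped logic in `modules/emotion_combiner.py`:

```python
# Illustrative priority-based fusion of the three modalities.
# The priority table below is an assumption for this sketch; the real
# ordering lives in modules/emotion_combiner.py.
PRIORITY = {"Confused": 3, "Bored": 2, "Sleepy": 2, "Happy": 1, "Neutral": 0}

def combine_emotions(facial: str, voice: str, interaction: str) -> str:
    """Return the highest-priority detected emotion; ties favour the facial signal."""
    return max(
        [facial, voice, interaction],
        key=lambda e: (PRIORITY.get(e, 0), e == facial),
    )
```

Under these assumptions, `("Bored", "Sleepy", "Neutral")` resolves to `Bored`: boredom and sleepiness tie on priority, and the facial signal breaks the tie.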
```
[ Webcam ]        [ Microphone ]       [ Mouse ]
    |                   |                  |
    v                   v                  v
[Facial Module]   [Speech Module]   [Mouse Module]
    |                   |                  |
    +-------[ Emotion Combiner ]-----------+
                        |
                        v
        [ FastAPI Backend /api/emotion ]
                        |
                        v
        [ React Frontend Dashboard ]
```
```
emo-learn-nexus/
│
├── backend/
│   ├── main.py                  # FastAPI backend server
│   ├── emotion_log.csv          # CSV log of emotions
│   ├── emotion_log.json         # JSON log with timestamp & all emotion sources
│   ├── emotion_weights.json     # Customizable keyword weights
│   ├── modules/
│   │   ├── facial_emotion.py    # Facial emotion detection
│   │   ├── mouse_emotion.py     # Mouse activity emotion detection
│   │   ├── speech_emotion.py    # Voice tone and keyword emotion detection
│   │   └── emotion_combiner.py  # Combines all modalities and logs final result
│   ├── static/                  # Optional: legacy HTML/CSS/JS
│   ├── templates/               # Optional: Jinja HTML templates
│   └── requirements.txt         # Python dependencies
│
├── src/                         # React frontend
│   ├── components/              # UI components (dashboard, assistant, etc.)
│   ├── pages/                   # Main pages (Index, NotFound)
│   ├── api/                     # API integration (Groq GPT, etc.)
│   ├── assets/                  # Static assets (audio, images)
│   └── ...
└── README.md
```
- Create and activate a virtual environment

```bash
cd backend
python -m venv venv
venv\Scripts\activate   # Windows; use `source venv/bin/activate` on macOS/Linux
```

- Install dependencies

```bash
pip install -r requirements.txt
```

- Start the FastAPI server

```bash
uvicorn main:app --reload
```

The API will be available at http://localhost:8000/api/emotion
- Install dependencies

```bash
npm install
```

- Start the development server

```bash
npm run dev
```

The app will be available at http://localhost:5173
- Facial Emotion Detection

```bash
python modules/facial_emotion.py
```

- Voice Emotion Detection

```bash
python modules/speech_emotion.py
```

- Mouse Emotion Detection (Simulated)

```bash
python modules/mouse_emotion.py
```

- Combined Emotion Detection

```bash
python modules/emotion_combiner.py
```
| Method | Endpoint | Description |
|---|---|---|
| GET | /api/emotion | Returns current combined emotion |
- JSON (`backend/emotion_log.json`):

```json
[
  {
    "time": "2025-06-28 15:00:04",
    "facial_emotion": "Bored",
    "voice_emotion": "Sleepy",
    "interaction_emotion": "Bored",
    "final_emotion": "Bored",
    "value": 0.45
  }
]
```

- CSV (`backend/emotion_log.csv`):

```csv
timestamp,emotion,source
2025-06-25 22:43:38,Bored,mouse
2025-06-25 22:44:08,Sleepy,mouse
...
```
- Live Mood Tracker: Real-time display of detected emotion (😊 🤔 😫 💤 😐)
- AI Assistant: Context-aware chatbot with interventions (explain, quiz, resources, breathing)
- Control Panel: Toggle camera/mic, launch breathing, quick actions (quiz, focus, DND)
- Mood Graph: Visualizes mood trends over time
- Progress Badges: Earn achievements for focus, calm, curiosity, etc.
- Breathing Exercise: Timer, lofi music, motivational quotes, reflex mini-game
- Voice Input: Ask questions by voice (browser speech recognition)
- Notifications: Toasts and alerts for feedback
- Emotion Weights: Edit `backend/emotion_weights.json` to tune keyword sensitivity for each emotion.
- Motivational Quotes: Add or edit quotes in `src/components/quotes.json` for the breathing exercise.
- Lofi Music: Replace `src/assets/lofi.mp3` with your own audio for relaxation.
- Fork the repo and create a feature branch.
- Follow code style and add clear comments.
- Submit a pull request with a detailed description.
- Backend: FastAPI, OpenCV, MediaPipe, DeepFace, librosa, PyAudio, SpeechRecognition, torch
- Frontend: React, TypeScript, TailwindCSS, Radix UI, Sonner, Recharts
- AI Assistant: Groq API (Llama3-70B)
