A powerful Chrome extension that lets you upload course materials, then uses an open-source LLM from Hugging Face to answer questions about deadlines, grading policies, learning objectives, and more.
- Drag-and-Drop Uploads: Upload PDFs and documents directly from the popup
- AI-Powered Answers: Uses an open-source LLM via Hugging Face to answer questions about your course materials
- Interactive Chat: Conversational interface to ask questions about:
  - Assignment and exam deadlines
  - Grading policies and rubrics
  - Course learning objectives
  - Course requirements and policies
  - And more!
- Drive-Backed Storage: Uploads files to Google Drive for later retrieval-augmented generation (RAG)
- Optional Vector API: Store embeddings in MongoDB Atlas and query the top-K chunks
- Private: Processes files locally; only text summaries are sent to the LLM API
- Chrome Browser (version 88 or later)
- Hugging Face API Key (free tier available)
  - Sign up at huggingface.co
  - Get your API key from huggingface.co/settings/tokens
  - Accept the access terms for the selected model
- Google Drive OAuth Client ID
  - Add your client ID to `manifest.json` (`oauth2.client_id`); see the sketch after this list
  - Scope used: `https://www.googleapis.com/auth/drive.file`
- (Optional) Vector API Server
  - Node.js 18+ recommended for the MongoDB driver
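For reference, the `oauth2` block in `manifest.json` looks roughly like this (the client ID below is a placeholder; replace it with your own):

```json
{
  "oauth2": {
    "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",
    "scopes": ["https://www.googleapis.com/auth/drive.file"]
  }
}
```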
Locate the extension folder on disk, for example:

```bash
cd /Users/parkhiagarwal/Downloads/LMS
```

- Open Chrome and go to `chrome://extensions/`
- Enable Developer mode (toggle in the top right)
- Click "Load unpacked"
- Select the LMS extension folder
- The extension should appear in your Chrome extensions list
- Click the extension icon in the Chrome toolbar
- Enter your Hugging Face API key in the "Hugging Face API Key" field
- Click Save
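Internally, the popup persists the key with `chrome.storage`. A minimal sketch; the storage key name below is an assumption, so check `popup.js` for the exact name used:

```js
// Hypothetical storage key; popup.js may use a different one.
const STORAGE_KEY = 'hfApiKey';

// Save the key when the user clicks Save.
async function saveApiKey(value) {
  await chrome.storage.local.set({ [STORAGE_KEY]: value });
}

// Read it back later (e.g., in background.js before calling the API).
async function loadApiKey() {
  const stored = await chrome.storage.local.get(STORAGE_KEY);
  return stored[STORAGE_KEY] || '';
}
```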
- Click the extension popup icon
- Click Authorize Drive when prompted
- Drag and drop files into the upload area
- Files are uploaded to Google Drive immediately
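Under the hood, the upload roughly follows the standard `chrome.identity` + Drive multipart upload flow. A minimal sketch (folder creation and error handling omitted; the real `popup.js` may differ):

```js
// Get an OAuth token for the drive.file scope declared in manifest.json.
function getToken() {
  return new Promise((resolve, reject) =>
    chrome.identity.getAuthToken({ interactive: true }, (token) =>
      token ? resolve(token) : reject(chrome.runtime.lastError)
    )
  );
}

// Upload one dropped File object to Google Drive (multipart upload).
async function uploadToDrive(file, folderId) {
  const token = await getToken();
  const metadata = { name: file.name, parents: folderId ? [folderId] : [] };

  const body = new FormData();
  body.append('metadata', new Blob([JSON.stringify(metadata)], { type: 'application/json' }));
  body.append('file', file);

  const res = await fetch(
    'https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart',
    { method: 'POST', headers: { Authorization: `Bearer ${token}` }, body }
  );
  return res.json(); // contains the new Drive file's id
}
```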
- Once files are uploaded, type your question in the text area
- Click "Ask Question" or press Shift+Enter
- The extension will:
  - Extract text from the uploaded files
  - Send it to the LLM with your question
  - Display the answer in the chat
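Behind the "Ask Question" button, the popup hands the question to the service worker over Chrome's messaging API. A simplified sketch; the message shape and DOM selectors are assumptions, not the extension's exact protocol:

```js
// popup.js (sketch): send the question and render the answer in the chat.
async function askQuestion(question) {
  appendChatMessage('user', question);

  // { type, question } is illustrative; see popup.js/background.js for the real shape.
  const response = await chrome.runtime.sendMessage({ type: 'ASK_QUESTION', question });

  appendChatMessage('assistant', response?.answer ?? 'No answer received.');
}

// Hypothetical helper that adds a bubble to the chat area in popup.html.
function appendChatMessage(role, text) {
  const bubble = document.createElement('div');
  bubble.className = `chat-message ${role}`;
  bubble.textContent = text;
  document.querySelector('#chat').appendChild(bubble);
}
```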
Use the suggested questions to quickly ask common queries:
- Upcoming deadlines - Get all assignment and exam dates
- Grading policy - Learn how your grade is calculated
- Learning objectives - Understand course goals
```
LMS/
├── manifest.json     # Extension configuration
├── background.js     # Service worker (LLM communication)
├── popup.html        # Extension UI
├── popup.js          # Popup logic
├── popup.css         # Styling
├── utils.js          # Text extraction utilities
├── README.md         # This file
└── icons/            # Extension icons
    ├── icon16.png
    ├── icon48.png
    └── icon128.png
```
- Background Service Worker (`background.js`):
  - Receives questions from the popup
  - Fetches and extracts text from uploaded files
  - Creates a context-aware prompt
  - Calls the Hugging Face API with the selected model
  - Returns answers to the popup
- Popup UI (`popup.html`, `popup.js`, `popup.css`):
  - User interface for interacting with the extension
  - Uploads files to Google Drive
  - Displays uploaded files
  - Chat interface for questions and answers
  - Stores the API key in Chrome storage
The extension uses a small language model (SLM) by default via the Hugging Face Inference API: `google/gemma-2-2b-it`.

You can modify `background.js` to use alternative models (see the sketch below):

- `meta-llama/Llama-3.1-8B-Instruct` (balanced speed/quality)
- `mistralai/Mistral-7B-Instruct-v0.3` (strong general-purpose)
- Other open-source text-only models available on Hugging Face
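A sketch of what the model call in `background.js` might look like, with the model id pulled into a constant so it can be swapped for one of the alternatives above. The endpoint shown is Hugging Face's OpenAI-compatible chat-completions router; the actual code may use a different endpoint or payload:

```js
// background.js (sketch): answer a question against the extracted course text.
const MODEL = 'google/gemma-2-2b-it'; // swap for e.g. 'mistralai/Mistral-7B-Instruct-v0.3'

async function answerQuestion(apiKey, courseText, question) {
  const res = await fetch('https://router.huggingface.co/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: MODEL,
      messages: [
        { role: 'system', content: 'Answer using only the provided course materials.' },
        { role: 'user', content: `Course materials:\n${courseText}\n\nQuestion: ${question}` },
      ],
      max_tokens: 512,
    }),
  });

  const data = await res.json();
  return data.choices?.[0]?.message?.content ?? 'No answer returned.';
}
```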
The extension uses a lightweight RAG pipeline to ground answers in your course files:
- Embeddings model:
Qwen/Qwen3-Embedding-8Bvia Hugging Face Router - Chunking: 1200 characters with 150 overlap
- Retrieval: top 8 chunks per query
- Vector store: saved to
vector-store.jsonin your Google Drive folderBrightspace LLM Assistant
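Conceptually, the chunk-and-retrieve step looks like the sketch below. The `embed()` call is a hypothetical stand-in for however the extension invokes `Qwen/Qwen3-Embedding-8B`; the numbers mirror the settings above:

```js
// Split extracted text into overlapping chunks (1200 chars, 150 overlap).
function chunkText(text, size = 1200, overlap = 150) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
  }
  return chunks;
}

// Cosine similarity between two embedding vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Retrieve the top-K chunks for a question. `store` is the parsed
// vector-store.json, assumed to be [{ text, embedding }, ...]; embed() is
// assumed to return the question's embedding from the same model as the chunks.
async function topKChunks(store, question, k = 8) {
  const queryEmbedding = await embed(question); // hypothetical helper
  return store
    .map((entry) => ({ ...entry, score: cosine(queryEmbedding, entry.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}
```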
You can store embeddings in MongoDB Atlas and retrieve top-K chunks via a local API server.
- Set up a vector index in MongoDB Atlas on the `embedding` field.
- Configure the server:
  - Copy `server/.env.example` to `server/.env`
  - Set `MONGODB_URI`, `MONGODB_DB`, `MONGODB_COLLECTION`, and `VECTOR_INDEX_NAME`
- Start the server:

  ```bash
  cd server && npm install
  npm start
  ```

- In the extension popup, set the Vector API URL (e.g., `http://localhost:3000/api`).
- If you set `API_KEY` in `server/.env`, set the same value in the Vector API key field in the popup.
When the Vector API URL is set, the extension stores embeddings in MongoDB and queries top-K chunks from the API.
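On the server side, the top-K query maps onto an Atlas `$vectorSearch` aggregation. A minimal sketch using the official MongoDB Node.js driver and the environment variables from `server/.env` (the actual server code may structure this differently):

```js
// server (sketch): retrieve the top-K chunks for a query embedding.
import { MongoClient } from 'mongodb';

const client = new MongoClient(process.env.MONGODB_URI);
const chunks = client
  .db(process.env.MONGODB_DB)
  .collection(process.env.MONGODB_COLLECTION);

export async function searchChunks(queryEmbedding, k = 8) {
  return chunks
    .aggregate([
      {
        $vectorSearch: {
          index: process.env.VECTOR_INDEX_NAME,
          path: 'embedding',           // the field the Atlas vector index covers
          queryVector: queryEmbedding, // embedding of the user's question
          numCandidates: k * 10,       // oversample, then keep the best k
          limit: k,
        },
      },
      { $project: { _id: 0, text: 1, score: { $meta: 'vectorSearchScore' } } },
    ])
    .toArray();
}
```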
For completely private inference without API keys:
- Install Ollama
- Run `ollama pull llama3.2:3b-instruct`
- Start Ollama: `ollama serve`
- Enable local mode in the popup and set the model name
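In local mode, the extension only needs to hit Ollama's HTTP API on localhost instead of Hugging Face. A rough sketch of what that call could look like (the actual local-mode code in `background.js` may differ):

```js
// Ask the locally running Ollama server (default port 11434) a question.
async function askOllama(model, courseText, question) {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model, // e.g. 'llama3.2:3b-instruct'
      stream: false,
      messages: [
        { role: 'system', content: 'Answer using only the provided course materials.' },
        { role: 'user', content: `Course materials:\n${courseText}\n\nQuestion: ${question}` },
      ],
    }),
  });

  const data = await res.json();
  return data.message?.content ?? 'No answer returned.';
}
```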
- Verify your Hugging Face API key is correct
- Make sure you have accepted the selected model terms on Hugging Face
- Check that your free tier account has API access active
- Check your internet connection
- Verify the file wasn't too large (token limits apply)
- Try a simpler question first
- Check Hugging Face service status
- Make sure you authorized Google Drive in the popup
- Check browser console (F12) for OAuth errors
- Verify your OAuth client ID in `manifest.json`
- Invalid API key: Set the Vector API key in the popup to match `API_KEY` in `server/.env`, or remove `API_KEY` to disable auth.
- Failed to connect to MongoDB: Double-check your `MONGODB_URI` credentials and the Atlas IP allowlist.
- ✅ Files are processed locally for text extraction
- ✅ Uploaded files are stored in your "Brightspace LLM Assistant" Google Drive folder
- ✅ The vector index is stored in `vector-store.json` in the same Drive folder (if the Vector API URL is not set)
- ✅ Vector API mode stores embeddings in MongoDB Atlas
- ✅ The API key is stored only in Chrome local storage
- ✅ Only text content is sent to the LLM API
- ⚠️ Your Hugging Face API key is visible to Hugging Face services
- ⚠️ Don't share your API key with others
- Token Limits: LLM has context window limits (~2000 tokens for free tier)
- File Types: Currently best with text-based files (PDF, TXT, DOCX); requires pdf.js for PDF extraction
- Accuracy: LLM responses are AI-generated and may contain errors
- Rate Limits: Free tier Hugging Face has rate limits
- Large Files: Very large files may be truncated to fit token limits
- Add support for multiple file formats (images, audio)
- Implement local PDF.js for better PDF extraction
- Add mammoth.js for better DOCX support
- Support for document summarization
- Custom prompt templates for different question types
- Integration with other learning platforms (Canvas, Moodle, etc.)
- Multi-language support
- Offline mode with local LLM
Feel free to extend and modify this extension:
- Add better file parsing (pdf.js, mammoth.js)
- Support more file types
- Improve the UI/UX
- Add more LLM providers
- Optimize token usage
This project is provided as-is for educational purposes.
For issues or questions:
- Check the troubleshooting section
- Review browser console errors (F12)
- Check Hugging Face API documentation
- Review Chrome extension documentation
This extension is not affiliated with Brightspace, Meta, or Hugging Face. Use it responsibly and in accordance with your institution's policies.