A browser extension for Chrome and Firefox that captures web pages as bookmarks, extracts readable content, and enables semantic search using RAG (Retrieval-Augmented Generation). All data is stored locally in your browser.
Download for Firefox or Chrome
| Popup | Library | Search |
|---|---|---|
| ![]() | ![]() | ![]() |
- One-click capture — Save URL, title, and full DOM HTML with a click or keyboard shortcut
- Content extraction — Extracts the main content with Mozilla's Readability and converts it to Markdown
- Q&A generation — LLM generates question-answer pairs for each bookmark
- Semantic search — Search bookmarks by meaning using embeddings
- Tag organization — Flat organization with tags; click to filter, type to create
- Stumble mode — Randomly surface bookmarks you may have forgotten
- Health indicator — Visual status shows processing state; click for diagnostics
- Jobs dashboard — Monitor processing jobs with status and progress
- Bulk URL import — Import multiple bookmarks at once with validation
- Import/Export — Backup and restore bookmarks as JSON files
- WebDAV sync — Sync bookmarks across devices using your own WebDAV server
- Configurable API — Use OpenAI or any compatible endpoint (local models included)
- Node.js 18+ and npm
- Chrome or Firefox browser
- OpenAI API key (or compatible API endpoint)
- Install dependencies: `npm install`
- Build the extension: `npm run build`

This creates a `dist/` folder with the built extension.
- Open Chrome and go to `chrome://extensions/`
- Enable "Developer mode" in the top right
- Click "Load unpacked"
- Select the `dist` folder
- Open Firefox and go to `about:debugging#/runtime/this-firefox`
- Click "Load Temporary Add-on"
- Navigate to the `dist` folder and select `manifest.json`
A standalone web version (`npm run dev:web`) is available for testing, but it is of limited use because it cannot make generalized cross-origin requests.

- Click the extension icon and select "Settings"
- Configure your API settings:
  - API Base URL: Default is `https://api.openai.com/v1`
  - API Key: Your OpenAI API key
  - Chat Model: Model for Q&A generation (e.g., `gpt-4o-mini`)
  - Embedding Model: Model for embeddings (e.g., `text-embedding-3-small`)
- Click "Test Connection" to verify your settings
- Click "Save Settings"
The extension works with any OpenAI-compatible API server (vLLM, llama.cpp, etc.):
- Set API Base URL to your local endpoint (e.g., `http://localhost:1234/v1`)
- Set API Key to any non-empty string (many local servers don't require a key)
- Set model names that your local server supports
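Requests to a local server use the same shape as requests to OpenAI, which is why only the base URL needs to change. A minimal sketch of building such a chat request (the `buildChatRequest` helper and all values are illustrative, not the extension's actual code):

```javascript
// Build a chat-completions request for any OpenAI-compatible server.
// buildChatRequest is an illustrative helper, not part of the extension's API.
function buildChatRequest(baseUrl, apiKey, model, prompt) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // many local servers ignore the key
      },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

const req = buildChatRequest("http://localhost:1234/v1", "sk-anything", "my-local-model", "Hello");
// req.url === "http://localhost:1234/v1/chat/completions"
```

You would pass `req.url` and `req.options` straight to `fetch`; the response format then matches OpenAI's as well.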
Method 1: Extension Icon
- Navigate to any web page
- Click the extension icon
- Click "Save This Page"
Method 2: Keyboard Shortcut
- Windows/Linux: `Ctrl+Shift+B`
- Mac: `Cmd+Shift+B`
- Click the extension icon
- Click "Explore Bookmarks"
- View bookmarks with their processing status:
- Pending: Waiting to be processed
- Processing: Currently being processed
- Complete: Ready to search
- Error: Processing failed (can retry)
- Open the Search view
- Enter a search query
- Results are ranked by semantic similarity
- Click any result to view the full bookmark details
- Click any tag pill to filter bookmarks by that tag
- In bookmark details, type in the tag input to add new tags
- Tags are flat (no hierarchy) and use lowercase with hyphens
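A tag typed as free text can be normalized to the flat lowercase-with-hyphens form roughly like this (`normalizeTag` is an illustrative helper; the extension's actual normalization rules may differ):

```javascript
// Normalize a user-typed tag to lowercase-with-hyphens.
// Illustrative sketch, not the extension's actual code.
function normalizeTag(raw) {
  return raw
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse spaces/punctuation into hyphens
    .replace(/^-+|-+$/g, "");    // strip leading/trailing hyphens
}

normalizeTag("  Machine Learning! "); // "machine-learning"
```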
- Open the Stumble view
- See 10 random bookmarks with a Q&A preview
- Click "Shuffle" for a new random selection
- Filter by tags to stumble within a topic
- Open Settings
- Scroll to "Bulk Import URLs"
- Paste a list of URLs (one per line)
- Click "Import URLs"
- Monitor progress in the Jobs dashboard
- Open Settings
- Enable WebDAV sync
- Enter your WebDAV server URL, username, and password
- Set a sync path (default: `/bookmarks`)
- Sync happens automatically when bookmarks change
Export:
- Open Settings
- Click "Export All Bookmarks"
- A JSON file will be downloaded
Import:
- Open Settings
- Click "Choose File to Import"
- Select a previously exported JSON file
- Duplicate URLs are skipped automatically
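The duplicate-skipping step amounts to a set-based merge on the bookmark URL. A sketch under the assumption that each bookmark record carries a `url` field (`mergeBookmarks` is an illustrative helper, not the extension's actual import code):

```javascript
// Merge imported bookmarks into an existing set, skipping duplicate URLs.
// Illustrative sketch, not the extension's actual import logic.
function mergeBookmarks(existing, imported) {
  const seen = new Set(existing.map((b) => b.url));
  const added = [];
  for (const b of imported) {
    if (seen.has(b.url)) continue; // duplicate URL: skip
    seen.add(b.url);               // also de-duplicates within the import file
    added.push(b);
  }
  return existing.concat(added);
}
```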
When you save a bookmark:
- Capture: Full page HTML is captured and saved locally
- Extract: Readability extracts the main content and converts to Markdown
- Generate Q&A: LLM generates 5-10 question-answer pairs about the content
- Embed: Each Q&A pair is converted to embeddings (question, answer, and combined)
- Index: Everything is stored in IndexedDB
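The five stages above can be pictured as a chain that moves a bookmark from Pending through Processing to Complete, falling back to Error (retryable) if any stage throws. A sketch with the stages injected as functions (`runPipeline` and the stage interface are illustrative, not the extension's actual code):

```javascript
// Run a bookmark through processing stages in order, tracking its status.
// runPipeline is an illustrative sketch, not the extension's actual code.
async function runPipeline(bookmark, stages) {
  let state = { ...bookmark, status: "processing" };
  try {
    for (const stage of stages) {
      state = await stage(state); // each stage returns the enriched bookmark
    }
    return { ...state, status: "complete" };
  } catch (err) {
    // Failed bookmarks keep their partial state so a retry can resume.
    return { ...state, status: "error", error: String(err) };
  }
}
```

In this picture, capture, extract, Q&A generation, embedding, and indexing would each be one `stage` function.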
When you search:
- Your query is converted to an embedding
- Cosine similarity is computed against all stored Q&A embeddings
- Results are ranked by similarity score
- Top results are grouped by bookmark and displayed
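The ranking step is plain cosine similarity over the stored vectors. A sketch with toy two-dimensional vectors (real embeddings have hundreds or thousands of dimensions; function names are illustrative):

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored Q&A entries against the query embedding, best match first.
// Illustrative sketch, not the extension's actual search code.
function rank(queryEmbedding, entries) {
  return entries
    .map((e) => ({ ...e, score: cosineSimilarity(queryEmbedding, e.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```

Grouping the top entries by their parent bookmark then yields the displayed result list.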
- All bookmarks are stored locally in your browser's IndexedDB
- Only the extracted Markdown content is sent to your configured API for processing
- No data is sent to any third-party servers (except your configured LLM API)
- WebDAV sync sends bookmark data to your own server
- Export your data anytime as JSON files
```bash
# Development mode (Chrome)
npm run dev:chrome

# Development mode (Firefox)
npm run dev:firefox

# Development mode (Web)
npm run dev:web

# Run unit tests
npm run test:unit

# Run all tests
npm run test:all
```

Send us an email with the title of this section at localforge.org. We don't have a GPG key configured, but we will monitor and answer quickly (as of late 2025).
Here are some attack vectors we considered from web content pages:
- XSS via bookmark title: always use `textContent` when displaying the title
- XSS via bookmark content: sanitize with DOMPurify
- URL injection: only allow `http`/`https` protocols
- Content script injection: captured HTML goes through Readability + DOMPurify
- Stored XSS via IndexedDB: unlikely, but avoid `innerHTML` at all costs, prefer DOM APIs even for already-stored content, and never display captured HTML directly
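The URL-injection check can be done with the standard `URL` constructor, which rejects `javascript:` and `data:` schemes before anything is stored (`isSafeBookmarkUrl` is an illustrative helper, not the extension's actual code):

```javascript
// Accept only http/https URLs before storing a bookmark.
// Illustrative sketch, not the extension's actual validation code.
function isSafeBookmarkUrl(raw) {
  try {
    const url = new URL(raw);
    return url.protocol === "http:" || url.protocol === "https:";
  } catch {
    return false; // not a parseable URL at all
  }
}

isSafeBookmarkUrl("https://example.com"); // true
isSafeBookmarkUrl("javascript:alert(1)"); // false
```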
MIT
Built using:
- Dexie.js for IndexedDB
- @mozilla/readability for content extraction
- Turndown for HTML to Markdown conversion
- Vite and @crxjs/vite-plugin for building


