A comprehensive, production-ready intelligent bookmark management platform with AI-powered categorization, RAG-based conversational search, and advanced organization features.
- Smart Bookmark Management: Create, organize, and manage bookmarks with folders and tags
- AI-Powered Categorization: Automatic categorization using OpenAI, Anthropic, Google Gemini, or local Ollama models
- RAG-Powered Chat: Ask questions about your bookmarks using semantic search and AI
- Health Monitoring: Automatic detection of dead/offline links with status tracking
- Thumbnail Generation: Automatic screenshot capture for visual bookmark browsing
- Multi-Provider AI Support: OpenAI GPT-4, Anthropic Claude, Google Gemini, or local Ollama
- Vector Embeddings: Semantic search using pgvector for PostgreSQL (see the sketch after this list)
- Secure Authentication: JWT with refresh token rotation, 2FA via TOTP
- Role-Based Access Control: Admin, Manager, and User roles
- RESTful API: Comprehensive API with OpenAPI/Swagger documentation
- Docker-Ready: Complete containerization with docker-compose
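To make the vector-search feature concrete, the sketch below shows how a pgvector similarity query could be issued from the backend through Prisma. The `bookmarks` table layout, the `embedding` column, and the `embedQuery` helper are assumptions for illustration, not the project's actual schema:

```typescript
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

// Placeholder embedding helper: the real service would call the configured
// provider (OpenAI, Ollama, ...) from backend/src/services/ai.
async function embedQuery(_text: string): Promise<number[]> {
  return new Array(1024).fill(0); // stand-in vector for the sketch
}

// Find the bookmarks most semantically similar to a question.
// `<=>` is pgvector's cosine-distance operator, so smaller = closer.
async function searchBookmarks(question: string, limit = 5) {
  const vector = `[${(await embedQuery(question)).join(",")}]`;
  return prisma.$queryRaw`
    SELECT id, title, url, embedding <=> ${vector}::vector AS distance
    FROM bookmarks
    ORDER BY distance
    LIMIT ${limit}
  `;
}
```

The RAG chat can then hand the retrieved titles and URLs to the selected LLM as grounding context for its answer.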
Backend:
- Node.js 20+ with TypeScript
- Express.js for REST API
- Prisma ORM with PostgreSQL
- pgvector for vector embeddings
- Bull for background job processing (see the queue sketch after this section)
- Redis for caching and queues

Frontend:
- React 18 with TypeScript
- Vite for build tooling
- Tailwind CSS for styling
- shadcn/ui component library
- React Query for server state
- Zustand for client state

Infrastructure:
- Docker & Docker Compose
- Nginx reverse proxy
- PostgreSQL 16 with pgvector
- Redis 7
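As an illustration of how Bull and Redis could fit together for the link health-monitoring feature, here is a minimal sketch; the queue name, job payload, and HEAD-request check are assumptions for the example, not the project's actual code:

```typescript
import Bull from "bull";

// Queue backed by the Redis instance from REDIS_URL.
const healthQueue = new Bull<{ bookmarkId: string; url: string }>(
  "bookmark-health",
  process.env.REDIS_URL ?? "redis://localhost:6379"
);

// Worker: probe each URL and record whether the link is alive.
healthQueue.process(async (job) => {
  const res = await fetch(job.data.url, { method: "HEAD" });
  return { status: res.status, alive: res.ok };
});

// Producer: enqueue a check with retries and exponential backoff.
export function scheduleHealthCheck(bookmarkId: string, url: string) {
  return healthQueue.add(
    { bookmarkId, url },
    { attempts: 3, backoff: { type: "exponential", delay: 5_000 } }
  );
}
```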
Prerequisites:
- Node.js 20 or higher
- Docker and Docker Compose
- Git
- Clone the repository

  ```bash
  git clone https://github.com/yourusername/bookmarkbrain.git
  cd bookmarkbrain
  ```

- Copy environment file

  ```bash
  cp .env.example .env
  ```

- Configure environment variables

  Edit `.env` and set:
  - `JWT_SECRET` and `JWT_REFRESH_SECRET` (generate secure random strings, e.g. with `openssl rand -base64 32`)
  - At least one AI provider API key (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GEMINI_API_KEY`)

- Start with Docker Compose (recommended)

  ```bash
  docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
  ```

- Run database migrations

  ```bash
  docker-compose exec backend npx prisma migrate deploy
  ```

- Access the application
  - Frontend: http://localhost:5173
  - Backend API: http://localhost:3001
  - API Docs: http://localhost:3001/api/docs
To develop without the full Docker stack (running only PostgreSQL and Redis in containers):

- Install dependencies

  ```bash
  npm install
  ```

- Start PostgreSQL and Redis

  ```bash
  docker-compose up -d postgres redis
  ```

- Run migrations

  ```bash
  cd backend
  npx prisma migrate dev
  ```

- Start development servers

  ```bash
  npm run dev
  ```
Project structure:

```
bookmarkbrain/
├── backend/                  # Express.js API server
│   ├── prisma/               # Database schema and migrations
│   ├── src/
│   │   ├── config/           # Configuration files
│   │   ├── controllers/      # Request handlers
│   │   ├── middleware/       # Express middleware
│   │   ├── routes/           # API routes
│   │   ├── services/         # Business logic
│   │   │   ├── ai/           # AI provider integrations
│   │   │   └── rag/          # RAG system (embeddings, chat)
│   │   └── index.ts          # Entry point
│   └── Dockerfile
├── frontend/                 # React SPA
│   ├── src/
│   │   ├── components/       # UI components
│   │   ├── lib/              # Utilities and API client
│   │   ├── pages/            # Page components
│   │   └── stores/           # State management
│   └── Dockerfile
├── docker/                   # Docker configuration
│   ├── nginx/                # Nginx reverse proxy
│   ├── postgres/             # Database initialization
│   └── puppeteer/            # Screenshot service
├── docker-compose.yml        # Production compose file
├── docker-compose.dev.yml    # Development overrides
└── package.json              # Root monorepo config
```
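The `docker/puppeteer/` service backs the thumbnail feature. At its core, capturing a screenshot with Puppeteer looks roughly like this (a minimal sketch, not the project's actual service code):

```typescript
import puppeteer from "puppeteer";

// Capture a thumbnail-sized screenshot of a bookmarked page.
async function captureThumbnail(url: string, outPath: string) {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.setViewport({ width: 1280, height: 800 });
    await page.goto(url, { waitUntil: "networkidle2", timeout: 30_000 });
    await page.screenshot({ path: outPath, type: "png" });
  } finally {
    await browser.close();
  }
}
```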
The API is fully documented using OpenAPI/Swagger. Access the interactive documentation at:
- Development: http://localhost:3001/api/docs
- Production: https://your-domain.com/api/docs
Key endpoints:

| Endpoint | Method | Description |
|---|---|---|
| `/api/auth/register` | POST | Create new account |
| `/api/auth/login` | POST | Authenticate user |
| `/api/bookmarks` | GET | List bookmarks with filtering |
| `/api/bookmarks` | POST | Create bookmark |
| `/api/folders/tree` | GET | Get folder hierarchy |
| `/api/tags` | GET | List all tags |
| `/api/chat` | POST | Send message to RAG chat |
| `/api/health/check-all` | POST | Check all bookmark health |
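As a quick illustration of the endpoints above, the following sketch logs in and creates a bookmark. The request and response field names (`email`, `password`, `accessToken`, `url`, `title`) are assumptions for the example; check the Swagger docs for the actual schema:

```typescript
const API = "http://localhost:3001";

async function createBookmarkExample() {
  // Authenticate (field names assumed; see /api/docs for the real schema).
  const loginRes = await fetch(`${API}/api/auth/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email: "you@example.com", password: "secret" }),
  });
  const { accessToken } = await loginRes.json();

  // Create a bookmark with the issued JWT.
  const res = await fetch(`${API}/api/bookmarks`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({ url: "https://example.com", title: "Example" }),
  });
  console.log(await res.json());
}
```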
Key environment variables:

| Variable | Description | Default |
|---|---|---|
| `DATABASE_URL` | PostgreSQL connection string | Required |
| `REDIS_URL` | Redis connection string | `redis://localhost:6379` |
| `JWT_SECRET` | JWT signing secret | Required |
| `JWT_REFRESH_SECRET` | Refresh token secret | Required |
| `OPENAI_API_KEY` | OpenAI API key | Optional |
| `ANTHROPIC_API_KEY` | Anthropic API key | Optional |
| `GEMINI_API_KEY` | Google Gemini API key | Optional |
| `OLLAMA_URL` | Ollama server URL | `http://localhost:11434` |
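To show how `JWT_SECRET` and `JWT_REFRESH_SECRET` might work together for refresh-token rotation, here is a simplified sketch using the `jsonwebtoken` package. The payload shape and expiry values are illustrative, and a real implementation also has to persist and revoke refresh tokens server-side, which is omitted here:

```typescript
import jwt from "jsonwebtoken";

const ACCESS_TTL = "15m"; // illustrative expiries
const REFRESH_TTL = "7d";

function issueTokens(userId: string) {
  const accessToken = jwt.sign({ sub: userId }, process.env.JWT_SECRET!, {
    expiresIn: ACCESS_TTL,
  });
  const refreshToken = jwt.sign({ sub: userId }, process.env.JWT_REFRESH_SECRET!, {
    expiresIn: REFRESH_TTL,
  });
  return { accessToken, refreshToken };
}

// Rotation: each refresh verifies the old token and issues a brand-new pair,
// so a leaked refresh token cannot be replayed indefinitely.
function rotate(refreshToken: string) {
  const payload = jwt.verify(refreshToken, process.env.JWT_REFRESH_SECRET!) as {
    sub: string;
  };
  // A real implementation would mark the old token as used/revoked here.
  return issueTokens(payload.sub);
}
```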
BookmarkBrain supports multiple AI providers for categorization and chat:
- OpenAI (Default): Best quality, requires API key
- Anthropic: Claude models, requires API key
- Google Gemini: Gemini Pro, requires API key
- Ollama: Local models, no API key needed
Set your preferred provider in Settings > AI & Categorization.
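One way to picture the multi-provider design is a single interface that every provider implements; this is an illustrative sketch only, and the actual interface in `backend/src/services/ai/` may differ:

```typescript
// Illustrative only - names and signatures are assumptions.
interface AIProvider {
  categorize(title: string, content: string): Promise<string[]>; // suggested tags
  chat(prompt: string, context: string[]): Promise<string>;      // RAG answer
  embed(text: string): Promise<number[]>;                        // vector for pgvector
}

// OpenAI, Anthropic, Gemini, and Ollama each implement the interface,
// so the rest of the app can switch providers via a single setting.
function getProvider(name: "openai" | "anthropic" | "gemini" | "ollama"): AIProvider {
  switch (name) {
    // ...construct and return the matching implementation
    default:
      throw new Error(`Unknown AI provider: ${name}`);
  }
}
```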
Recommended local models, based on current benchmarks and community consensus:
LLM Models (for chat & categorization):
| Model | VRAM | Best For |
|---|---|---|
| `qwen3:14b` | 12GB | Best balance - recommended default |
| `qwen3:8b` | 8GB | Laptop-friendly, good all-rounder |
| `qwen3:30b-a3b` | 20GB | MoE model, excellent quality |
| `qwen3:4b` | 4GB | Edge devices, low resources |
| `llama3.3:70b` | 40GB+ | Maximum quality |
Embedding Models (for RAG/semantic search):
| Model | Dimensions | Best For |
|---|---|---|
| `mxbai-embed-large` | 1024 | Highest accuracy - recommended |
| `nomic-embed-text` | 768 | Long documents, good speed |
| `bge-m3` | 1024 | Multilingual, long context |
| `snowflake-arctic-embed` | 1024 | Code & technical docs |
Why Qwen3? It outperforms Gemma 3 on reasoning and coding benchmarks, offers a 128K context window and an Apache 2.0 license, has excellent multilingual support, and provides dual-mode reasoning (thinking/non-thinking).
To deploy to production:

- Configure production environment

  ```bash
  cp .env.example .env
  # Edit .env with production values
  ```

- Generate SSL certificates (using Let's Encrypt)

  ```bash
  certbot certonly --webroot -w /var/www/certbot -d your-domain.com
  ```

- Enable HTTPS in Nginx config

  Edit `docker/nginx/conf.d/default.conf` and uncomment the HTTPS server block.

- Deploy

  ```bash
  docker-compose up -d
  docker-compose exec backend npx prisma migrate deploy
  ```
To use local AI models with Ollama:
- Start Ollama service

  ```bash
  docker-compose --profile ollama up -d
  ```

- Pull recommended models

  ```bash
  # LLM model (pick one based on your hardware)
  docker-compose exec ollama ollama pull qwen3:14b
  # or for lighter hardware:
  docker-compose exec ollama ollama pull qwen3:8b

  # Embedding model
  docker-compose exec ollama ollama pull mxbai-embed-large
  ```

- Configure in Settings

  Select "Ollama" as your AI provider and choose the model.
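To sanity-check the Ollama setup from code, a small script like the following can list the pulled models and request a test embedding. It assumes the default `OLLAMA_URL` and uses Ollama's `/api/tags` and `/api/embeddings` endpoints:

```typescript
const OLLAMA = process.env.OLLAMA_URL ?? "http://localhost:11434";

async function checkOllama() {
  // List locally available models.
  const tags = await fetch(`${OLLAMA}/api/tags`).then((r) => r.json());
  console.log("models:", tags.models?.map((m: { name: string }) => m.name));

  // Request a test embedding from the recommended embedding model.
  const res = await fetch(`${OLLAMA}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "mxbai-embed-large", prompt: "hello" }),
  });
  const { embedding } = await res.json();
  console.log("embedding dimensions:", embedding.length); // 1024 for mxbai-embed-large
}

checkOllama().catch(console.error);
```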
Run the test suites:

```bash
# Backend tests
cd backend
npm test

# Frontend tests
cd frontend
npm test
```

Lint, type-check, and format:

```bash
# Lint
npm run lint

# Type check
npm run typecheck

# Format
npm run format
```

Database migrations:

```bash
# Create migration
cd backend
npx prisma migrate dev --name your_migration_name

# Apply migrations
npx prisma migrate deploy

# Reset database (development only)
npx prisma migrate reset
```

To contribute:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.