This document provides detailed technical diagrams of Local Deep Research's architecture.
```mermaid
flowchart TB
    subgraph USER["👤 User Interface"]
        WEB[Web Browser<br/>localhost:5000]
    end

    subgraph FLASK["🌐 Flask Backend"]
        API[REST API + WebSocket]
        AUTH[Authentication<br/>CSRF Protection]
        ROUTES[Research Routes]
    end

    subgraph RESEARCH["🔬 Research Engine"]
        STRAT[Strategy Selector<br/>20+ strategies]
        QGEN[Question Generator]
        EXEC[Search Executor]
        SYNTH[Report Synthesizer]
    end

    subgraph LLM["🤖 LLM Providers"]
        direction TB
        LOCAL_LLM[/"🏠 Local LLMs"\]
        OLLAMA[Ollama]
        LMSTUDIO[LM Studio]
        CLOUD_LLM[/"☁️ Cloud LLMs"\]
        OPENAI[OpenAI]
        ANTHROPIC[Anthropic]
        GEMINI[Google Gemini]
        OPENROUTER[OpenRouter<br/>100+ models]
    end

    subgraph SEARCH["🔍 Search Engines"]
        direction TB
        LOCAL_SEARCH[/"🏠 Local Search"\]
        SEARXNG[SearXNG]
        ELASTIC[Elasticsearch]
        LIBRARY[Document Library]
        WEB_SEARCH[/"🌐 Web Search"\]
        TAVILY[Tavily]
        BRAVE[Brave]
        DDG[DuckDuckGo]
        ACADEMIC[/"📚 Academic"\]
        ARXIV[ArXiv]
        PUBMED[PubMed]
        SEMANTIC[Semantic Scholar]
    end

    subgraph STORAGE["💾 Storage Layer"]
        SQLCIPHER[(SQLCipher DB<br/>AES-256 Encrypted)]
        VECTORS[(Vector Store<br/>Embeddings)]
        FILES[(File Storage<br/>PDFs & Docs)]
    end

    subgraph OUTPUT["📄 Output"]
        MD[Markdown]
        PDF[PDF Export]
        LATEX[LaTeX]
        QUARTO[Quarto]
        RIS[RIS/BibTeX]
    end

    WEB <--> API
    API --> AUTH
    AUTH --> ROUTES
    ROUTES --> STRAT
    STRAT --> QGEN
    QGEN --> EXEC
    EXEC --> SYNTH
    EXEC <--> OLLAMA & LMSTUDIO
    EXEC <--> OPENAI & ANTHROPIC & GEMINI & OPENROUTER
    EXEC <--> SEARXNG & ELASTIC & LIBRARY
    EXEC <--> TAVILY & BRAVE & DDG
    EXEC <--> ARXIV & PUBMED & SEMANTIC
    SYNTH --> SQLCIPHER
    LIBRARY <--> VECTORS
    LIBRARY <--> FILES
    SYNTH --> MD & PDF & LATEX & QUARTO & RIS
    LOCAL_LLM ~~~ OLLAMA
    CLOUD_LLM ~~~ OPENAI
    LOCAL_SEARCH ~~~ SEARXNG
    WEB_SEARCH ~~~ TAVILY
    ACADEMIC ~~~ ARXIV

    style USER fill:#e1f5fe
    style FLASK fill:#fff3e0
    style RESEARCH fill:#f3e5f5
    style LLM fill:#e8f5e9
    style SEARCH fill:#fce4ec
    style STORAGE fill:#fff8e1
    style OUTPUT fill:#e0f2f1
```
```mermaid
flowchart LR
    subgraph INPUT["1️⃣ Input"]
        Q[Research Query]
    end

    subgraph ITERATE["2️⃣ Iterative Research"]
        direction TB
        GEN[Generate<br/>Questions]
        SEARCH[Parallel<br/>Search]
        ANALYZE[Analyze<br/>Results]
        GEN --> SEARCH --> ANALYZE
        ANALYZE -.->|"Need more info?"| GEN
    end

    subgraph SYNTHESIZE["3️⃣ Synthesis"]
        REPORT[Generate<br/>Report]
        CITE[Add<br/>Citations]
        REPORT --> CITE
    end

    subgraph OUTPUT["4️⃣ Output"]
        RESULT[Final Report<br/>with Sources]
    end

    Q --> GEN
    ANALYZE --> REPORT
    CITE --> RESULT

    style INPUT fill:#e3f2fd
    style ITERATE fill:#f3e5f5
    style SYNTHESIZE fill:#e8f5e9
    style OUTPUT fill:#fff3e0
```
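The "Parallel Search" step in the loop above fans generated questions out to search engines concurrently. A minimal sketch of that pattern using `concurrent.futures` from the standard library (`search_one` is a placeholder for an engine call, not LDR's actual API):

```python
# Sketch: run several generated questions against search engines in parallel.
# `search_one` stands in for a real I/O-bound engine call; names are illustrative.
from concurrent.futures import ThreadPoolExecutor


def search_one(question: str) -> list[str]:
    # Placeholder for an I/O-bound call to a search engine.
    return [f"result for: {question}"]


def parallel_search(questions: list[str], max_workers: int = 4) -> dict[str, list[str]]:
    # Threads suit I/O-bound search calls; one in-flight question per worker.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(zip(questions, pool.map(search_one, questions)))
```

Results come back keyed by question, so the analysis step can attribute findings to the question that produced them.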
```mermaid
flowchart TB
    subgraph FULL_LOCAL["🏠 Fully Local (Maximum Privacy)"]
        direction LR
        L_LDR[LDR] <--> L_OLLAMA[Ollama]
        L_LDR <--> L_SEARX[SearXNG]
        L_LDR <--> L_DB[(Encrypted DB)]
    end

    subgraph HYBRID["⚡ Hybrid (Balanced)"]
        direction LR
        H_LDR[LDR] <--> H_OLLAMA[Ollama]
        H_LDR <-->|"Web Search"| H_CLOUD[Cloud APIs]
        H_LDR <--> H_DB[(Encrypted DB)]
    end

    subgraph CLOUD["☁️ Cloud-Powered (Maximum Speed)"]
        direction LR
        C_LDR[LDR] <--> C_OPENAI[OpenAI/Claude]
        C_LDR <--> C_TAVILY[Tavily/Brave]
        C_LDR <--> C_DB[(Encrypted DB)]
    end

    style FULL_LOCAL fill:#e8f5e9
    style HYBRID fill:#fff3e0
    style CLOUD fill:#e3f2fd
```
```mermaid
mindmap
  root((Local Deep<br/>Research))
    Research
      Quick Summary
      Detailed Reports
      Follow-up Questions
      20+ Strategies
    Search Sources
      Web Search
        SearXNG
        Tavily
        Brave
        DuckDuckGo
      Academic
        ArXiv
        PubMed
        Semantic Scholar
      Local
        Document Library
        Collections
        Elasticsearch
    LLM Support
      Local
        Ollama
        LM Studio
      Cloud
        OpenAI
        Anthropic
        Google
        OpenRouter
    Output
      Markdown
      PDF
      LaTeX
      Quarto
      RIS/BibTeX
    Features
      News Subscriptions
      Cost Analytics
      Benchmarking
      RAG Search
      Per-User Encryption
```
| Provider | Type | Description |
|---|---|---|
| Ollama | Local | Self-hosted open-source models |
| LM Studio | Local | Desktop app for local models |
| OpenAI | Cloud | GPT-4, GPT-3.5 |
| Anthropic | Cloud | Claude 3 family |
| Google Gemini | Cloud | Gemini models |
| OpenRouter | Cloud | 100+ models via single API |
| Engine | Type | Best For |
|---|---|---|
| SearXNG | Local/Self-hosted | Privacy, aggregated results |
| Tavily | Cloud | AI-optimized search |
| ArXiv | Academic | Physics, CS, Math papers |
| PubMed | Academic | Biomedical research |
| Semantic Scholar | Academic | Cross-discipline papers |
| Wikipedia | Knowledge | General knowledge |
| Your Documents | Local | Private document search |
| Format | Use Case |
|---|---|
| Markdown | Default, web display |
| PDF | Sharing, printing |
| LaTeX | Academic papers |
| Quarto | Reproducible documents |
| RIS/BibTeX | Reference managers |
One of LDR's powerful features is the ability to build a personal knowledge base that improves future research.
```mermaid
flowchart TB
    subgraph RESEARCH["1️⃣ Research"]
        Q[Your Question] --> Engine[Research Engine]
        Engine --> Results[Results + Sources]
    end

    subgraph DOWNLOAD["2️⃣ Download"]
        Results --> Track[Track Sources]
        Track --> Queue[Download Queue]
        Queue --> Extract[Download & Extract Text]
    end

    subgraph LIBRARY["3️⃣ Library"]
        Extract --> Store[(Document Storage)]
        Store --> Organize[Organize into Collections]
    end

    subgraph INDEX["4️⃣ Index"]
        Organize --> Chunk[Chunk Documents]
        Chunk --> Embed[Generate Embeddings]
        Embed --> FAISS[(Vector Index)]
    end

    subgraph REUSE["5️⃣ Reuse"]
        FAISS --> SearchEngine[Collection as Search Engine]
        SearchEngine --> Q
    end

    style RESEARCH fill:#e3f2fd
    style DOWNLOAD fill:#fff3e0
    style LIBRARY fill:#e8f5e9
    style INDEX fill:#f3e5f5
    style REUSE fill:#fce4ec
```
1. Research Completes → Sources are tracked in the `ResearchResource` table
2. Download Sources → Click "Get All Research PDFs" to queue downloads
   - Smart downloaders for ArXiv, PubMed, Semantic Scholar, etc.
   - Automatic text extraction from PDFs
3. Build Library → Documents stored in encrypted database
   - Deduplication via content hash
   - Multiple storage modes: database (encrypted), filesystem, text-only
4. Create Collections → Organize documents by topic/project
   - Each collection can have different embedding settings
   - Documents can belong to multiple collections
5. Index for Search → Generate vector embeddings
   - Configurable chunk size and overlap
   - FAISS index for fast similarity search
6. Use in Future Research → Select collection as search engine
   - RAG search finds relevant passages
   - Results cite back to your documents
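The chunking-with-overlap step described above can be sketched as follows. This is a simplified illustration rather than LDR's actual implementation: the function below splits on characters, whereas a real chunker would more likely split on tokens or sentence boundaries.

```python
# Sketch: split a document into overlapping chunks before embedding.
# Chunk size and overlap are configurable, mirroring the settings above.
def chunk_document(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap  # how far each chunk's start advances
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break  # last chunk already covers the end of the text
    return chunks
```

The overlap ensures a passage that straddles a chunk boundary still appears intact in at least one chunk, which matters for retrieval quality.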
| Component | Purpose |
|---|---|
| `DownloadService` | Manages PDF downloads with source-specific strategies |
| `LibraryService` | Queries and manages the document library |
| `LibraryRAGService` | Creates vector indices for semantic search |
| `CollectionSearchEngine` | Searches collections using RAG |
| `Document` | Stores text content, metadata, file references |
| `DocumentChunk` | Stores indexed text chunks with embeddings |
| `Collection` | Groups documents with shared embedding settings |
| Mode | Security | Use Case |
|---|---|---|
| Database | AES-256 encrypted | Default, maximum security |
| Filesystem | Unencrypted | Need external tool access |
| Text Only | Encrypted text, no PDFs | Minimal storage |
Last updated: December 2024
This section provides a comprehensive technical analysis of the codebase, including quality metrics, architecture patterns, and project health indicators.
| Metric | Count |
|---|---|
| Test Classes | 809+ |
| Search Engine Implementations | 25 |
| LLM Provider Implementations | 9 |
| Search Strategies | 20+ |
| Abstract Base Classes | 26 |
| CI/CD Workflows | 57 |
| Security Scanners in CI | 22+ |
| Core Dependencies | 63 |
The codebase follows a consistent pattern for extensibility:
```
┌─────────────────────────────────────────────────────────────┐
│                    Abstract Base Classes                    │
├─────────────────────────────────────────────────────────────┤
│ BaseSearchEngine      │ Common interface for 25+ engines    │
│ BaseSearchStrategy    │ Strategy pattern for research       │
│ BaseCitationHandler   │ Citation processing abstraction     │
│ BaseQuestionGenerator │ Question generation interface       │
│ BaseExporter          │ Export format abstraction           │
└─────────────────────────────────────────────────────────────┘
```
New search engines can be added by:

1. Creating a class inheriting from `BaseSearchEngine`
2. Placing it in `web_search_engines/engines/`
3. Auto-discovery handles registration

```python
# Example: Adding a new search engine
from typing import Dict, List


class SearchEngineCustom(BaseSearchEngine):
    def run(self, query: str) -> List[Dict]:
        # Implementation
        pass
```

Supports 9 LLM providers with auto-discovery:
- Ollama (local)
- LM Studio (local)
- OpenAI
- Anthropic
- Google Gemini
- OpenRouter (100+ models)
- DeepSeek
- Mistral
- Groq
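Provider auto-discovery can be implemented several ways; one common Python pattern builds a registry from the subclasses of a shared base class. A simplified sketch (the class names here are illustrative, and LDR's actual registration mechanism may differ):

```python
# Sketch: provider registry built from subclasses of a common base.
# A real system would also import every module in the providers package
# first, so that all subclasses are defined before discovery runs.
class BaseProvider:
    name = "base"


class OllamaProvider(BaseProvider):
    name = "ollama"


class OpenAIProvider(BaseProvider):
    name = "openai"


def discover_providers() -> dict[str, type[BaseProvider]]:
    # __subclasses__() lists every class that inherits from BaseProvider.
    return {cls.name: cls for cls in BaseProvider.__subclasses__()}
```

With this pattern, adding a provider means defining one subclass; no central list needs editing.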
```mermaid
pie title Project Health Ratings
    "Code Quality (86)" : 86
    "Security (93)" : 93
    "Extensibility (93)" : 93
    "Performance (85)" : 85
    "UX/DX (83)" : 83
    "Maintenance (89)" : 89
```
| Category | Score | Highlights |
|---|---|---|
| Code Quality | 86/100 | 809+ test classes, ruff/mypy enforcement, comprehensive pre-commit hooks |
| Security | 93/100 | SQLCipher AES-256 encryption, full SHA pinning in CI, 22+ security scanners |
| Extensibility | 93/100 | 26 abstract base classes, plugin architecture, strategy pattern throughout |
| Performance | 85/100 | Adaptive rate limiting, cache stampede protection, parallel search execution |
| UX/Developer Experience | 83/100 | Real-time WebSocket updates, in-tool documentation, comprehensive error handling |
| Maintenance | 89/100 | 57 CI/CD workflows, automated security scanning, structured changelog |
| Overall | 88/100 | Production-ready with excellent security and extensibility |
```mermaid
flowchart TB
    subgraph SECURITY["🔐 Security Layers"]
        direction TB
        AUTH[Flask Session Auth<br/>CSRF Protection]
        ENCRYPT[SQLCipher<br/>AES-256 Encryption]
        SCAN[22+ Security Scanners<br/>CodeQL, Semgrep, Bandit...]
        PIN[Full SHA Pinning<br/>All 57 CI Workflows]
    end

    AUTH --> ENCRYPT
    ENCRYPT --> SCAN
    SCAN --> PIN

    style SECURITY fill:#e8f5e9
```
Security Features:
- Per-user encrypted databases (SQLCipher with AES-256)
- Full GitHub Action SHA pinning (not tag-based)
- Comprehensive CI security scanning:
- CodeQL (Python, JavaScript)
- Semgrep (custom rulesets)
- Bandit (Python security)
- Trivy (container scanning)
- Dependency review
- Secret scanning
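Per-user encrypted databases imply that each user's database key is derived from their own credentials. A minimal sketch of password-based key derivation using only the standard library; the KDF choice and parameters below are illustrative assumptions, not LDR's actual scheme:

```python
# Sketch: derive a per-user database key from a password.
# The algorithm and iteration count are illustrative, not LDR's settings.
import hashlib
import os


def derive_db_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # PBKDF2-HMAC-SHA256 yields a 32-byte key, the size AES-256 expects.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations, dklen=32)


salt = os.urandom(16)  # stored alongside the user record; need not be secret
key = derive_db_key("correct horse battery staple", salt)
```

Because the key never exists outside a login session, the server cannot decrypt a user's research data at rest.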
| Feature | Implementation |
|---|---|
| Parallel Search | `concurrent.futures.ThreadPoolExecutor` for multi-question search |
| Rate Limiting | Adaptive system with `learning_rate=0.3` |
| Cache Protection | `fetch_events` + `fetch_locks` for stampede prevention |
| Progress Streaming | SocketIO for real-time UI updates |
| Cross-Engine Filtering | LLM-powered relevance scoring and deduplication |
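Cache stampede protection means that when many requests miss the same cache key at once, only one performs the expensive fetch while the rest wait for its result. A minimal sketch using per-key locks; the name `fetch_locks` echoes the table above, but this implementation is illustrative, not LDR's:

```python
# Sketch: per-key locks so only one caller computes a missing cache entry.
import threading

cache: dict[str, str] = {}
fetch_locks: dict[str, threading.Lock] = {}
registry_lock = threading.Lock()  # guards the fetch_locks dict itself


def get_or_fetch(key: str, fetch) -> str:
    if key in cache:
        return cache[key]  # fast path: no locking on a hit
    with registry_lock:
        lock = fetch_locks.setdefault(key, threading.Lock())
    with lock:
        # Re-check: another thread may have filled the cache while we waited.
        if key not in cache:
            cache[key] = fetch(key)
    return cache[key]
```

The double-check inside the per-key lock is what collapses N concurrent misses into a single call to `fetch`.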
While the project scores highly overall, these areas have room for growth:
- Integration Testing - More end-to-end tests for full research workflows
- API Documentation - OpenAPI/Swagger spec for REST endpoints
- Metrics Dashboard - Prometheus/Grafana integration for monitoring
- Container Optimization - Multi-stage Docker builds for smaller images
- Async Architecture - Migration to async/await for I/O-bound operations
| Component | Location | Purpose |
|---|---|---|
| Research Engine | `src/local_deep_research/search_system.py` | Main `AdvancedSearchSystem` class |
| Strategies | `src/local_deep_research/advanced_search_system/strategies/` | 20+ research strategies |
| Search Engines | `src/local_deep_research/web_search_engines/engines/` | 25 search engine implementations |
| Report Generation | `src/local_deep_research/report_generator.py` | `IntegratedReportGenerator` |
| Web API | `src/local_deep_research/web/routes/` | Flask routes and WebSocket handlers |
| Database | `src/local_deep_research/web/database/` | SQLCipher models and migrations |
| Settings | `src/local_deep_research/config/` | Configuration and LLM setup |
When extending the system:
- Adding Search Engines: Inherit from `BaseSearchEngine`, implement the `run()` method
- Adding Strategies: Inherit from `BaseSearchStrategy`, implement the `analyze_topic()` method
- Adding LLM Providers: Add to `config/llm_config.py` with proper initialization
- Adding Export Formats: Inherit from the base exporter pattern in `utilities/`
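As an illustration of the strategy extension point, a custom strategy skeleton might look like the following. Note that `BaseSearchStrategy` below is a simplified stand-in defined for this sketch; the real class lives under `advanced_search_system/strategies/` and its exact signature may differ.

```python
# Sketch: a custom research strategy. The base class here is a simplified
# stand-in for LDR's BaseSearchStrategy, not the actual class.
from abc import ABC, abstractmethod


class BaseSearchStrategy(ABC):
    @abstractmethod
    def analyze_topic(self, query: str) -> dict:
        """Run the research loop for `query` and return findings."""


class BreadthFirstStrategy(BaseSearchStrategy):
    def analyze_topic(self, query: str) -> dict:
        # A real strategy would generate questions, dispatch searches,
        # and synthesize findings; this stub only shapes the result.
        questions = [f"{query}: overview", f"{query}: recent work"]
        return {"query": query, "questions": questions, "findings": []}
```

Once placed in the strategies package, such a class would be picked up by the same auto-discovery described for search engines.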
See CONTRIBUTING.md for detailed guidelines.