中文 | English
A privacy-first, open-source alternative to NotebookLM
An AI-powered knowledge management application that lets you create intelligent notebooks from your documents.
Project URL: https://github.com/smallnest/notex
- Python clone: pynotex
- Multiple Source Types - Upload PDFs, text files, Markdown, DOCX, and HTML documents
- AI-Powered Chat - Ask questions and get answers based on your sources
- Multiple Transformations - Generate summaries, FAQs, study guides, outlines, timelines, glossaries, quizzes, mindmaps, infographics, and podcast scripts
- Infographic Generation - Create beautiful, hand-drawn style infographics from your content using Google's Gemini Nano Banana
- Podcast Generation - Create engaging podcast scripts from your content
- Full Privacy - Local SQLite storage, optional cloud backends
- Multi-Model Support - Works with OpenAI, Ollama, and other compatible APIs
- Academic Brutalist Design - Distinctive, research-focused interface
- Go 1.23 or later
- An LLM API key (OpenAI) or Ollama running locally
```bash
# Clone the repository
git clone https://github.com/smallnest/notex.git
cd notex

# Install dependencies
go mod tidy

# Run the server
go run . -server
```

Open your browser to http://localhost:8080.
Notex uses environment variables for configuration. The recommended way to configure the application is to create a .env file.
Copy the example configuration file to create your local configuration:

```bash
cp .env.example .env
```

Edit the `.env` file and configure one of the following LLM providers:
OpenAI provides high-quality models but requires an API key and charges per usage.
- Get an API key from https://platform.openai.com/api-keys
- Edit `.env` and configure:

```bash
# OpenAI Configuration
OPENAI_API_KEY=sk-your-actual-api-key-here
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_MODEL=gpt-4o-mini
EMBEDDING_MODEL=text-embedding-3-small
```

Available OpenAI Models:

- `gpt-4o-mini` - Fast and cost-effective (recommended)
- `gpt-4o` - Most capable
- `gpt-3.5-turbo` - Legacy option
Tips:
- You can also use compatible OpenAI APIs like Azure OpenAI or other providers by changing `OPENAI_BASE_URL`
- For example, to use DeepSeek, set `OPENAI_BASE_URL=https://api.deepseek.com/v1` and `OPENAI_MODEL=deepseek-chat`
Ollama runs locally on your machine and is completely free, but requires a capable computer.
- Install Ollama from https://ollama.com
- Pull a model (e.g., `ollama pull llama3.2`)
- Start Ollama: `ollama serve`
- Edit `.env` and configure:

```bash
# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=llama3.2
```

Available Ollama Models:

- `llama3.2` - Recommended balance of speed and quality
- `qwen2.5` - Excellent for Chinese content
- `mistral` - Good English performance
- `codellama` - Specialized for code
Tips:
- Ollama models run entirely on your machine - no data leaves your computer
- Make sure Ollama is running before starting Notex
- Larger models require more RAM and CPU
To use the infographic generation feature with Google's Gemini Nano Banana:
```bash
GOOGLE_API_KEY=your-google-api-key-here
```

Get your key from https://makersuite.google.com/app/apikey.
After configuring your .env file, simply run:
```bash
go run . -server
```

The application will automatically load your `.env` configuration and start at http://localhost:8080.
If you prefer to build a binary instead of using go run:
```bash
go build -o notex .
./notex -server
```

To create your first notebook:

- Click "New Notebook" in the header
- Enter a name and optional description
- Click "Create Notebook"
You can add content to your notebook in three ways:
File Upload
- Click the "+" button in the Sources panel
- Drag and drop or browse for files
- Supported: PDF, TXT, MD, DOCX, HTML
Paste Text
- Select the "Text" tab
- Enter a title and paste your content
From URL
- Select the "URL" tab
- Enter the URL and optional title
To chat with your sources:

- Switch to the "CHAT" tab
- Ask questions about your content
- Responses include references to relevant sources
Click any transformation card to generate:
| Transformation | Description |
|---|---|
| Summary | Condensed overview of your sources |
| FAQ | Common questions and answers |
| Study Guide | Educational material with learning objectives |
| Outline | Hierarchical structure of topics |
| Podcast | Conversational script for audio content |
| Timeline | Chronological events from sources |
| Glossary | Key terms and definitions |
| Quiz | Assessment questions with answer key |
| Infographic | Hand-drawn style visual representation of your content |
| Mindmap | Visual hierarchical diagram of your sources using Mermaid.js |
Or use the custom prompt field for any other transformation.
For advanced users, the .env file supports additional configuration options:
```bash
# Server Configuration
SERVER_HOST=0.0.0.0
SERVER_PORT=8080

# Vector Store (default: sqlite)
# Options: sqlite, memory, supabase, postgres, redis
VECTOR_STORE_TYPE=sqlite
```
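The interchangeable backends behind `VECTOR_STORE_TYPE` suggest a small storage interface that each option implements. A hedged sketch of what such an abstraction could look like (all type and method names are hypothetical, not Notex's actual API):

```go
package main

import "fmt"

// Chunk is one embedded slice of a source document.
type Chunk struct {
	SourceID string
	Text     string
	Vector   []float32
}

// VectorStore is the minimal contract a backend (sqlite, memory,
// supabase, postgres, redis) would need to satisfy.
type VectorStore interface {
	Add(chunks []Chunk) error
	Search(query []float32, topK int) ([]Chunk, error)
}

// memoryStore is the simplest possible backend: an in-memory slice.
type memoryStore struct{ chunks []Chunk }

func (m *memoryStore) Add(cs []Chunk) error {
	m.chunks = append(m.chunks, cs...)
	return nil
}

func (m *memoryStore) Search(q []float32, topK int) ([]Chunk, error) {
	// A real backend ranks by vector similarity; this stub returns the
	// first topK chunks just to keep the sketch short.
	if topK > len(m.chunks) {
		topK = len(m.chunks)
	}
	return m.chunks[:topK], nil
}

func main() {
	var s VectorStore = &memoryStore{}
	s.Add([]Chunk{{SourceID: "doc1", Text: "hello"}})
	got, _ := s.Search(nil, 5)
	fmt.Println(len(got)) // prints 1
}
```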
```bash
# RAG Processing
MAX_SOURCES=5      # Maximum sources to retrieve for context
CHUNK_SIZE=1000    # Document chunk size for processing
CHUNK_OVERLAP=200  # Overlap between chunks
```
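`CHUNK_SIZE` and `CHUNK_OVERLAP` describe a standard sliding-window split: each chunk is `CHUNK_SIZE` characters long and starts `CHUNK_SIZE - CHUNK_OVERLAP` characters after the previous one, so neighboring chunks share context. A sketch of that logic (illustrative only, not Notex's implementation, and byte-based where a real splitter would respect rune and sentence boundaries):

```go
package main

import "fmt"

// chunkText splits text into overlapping windows of size characters,
// advancing by size-overlap between windows.
func chunkText(text string, size, overlap int) []string {
	if size <= overlap {
		return []string{text}
	}
	step := size - overlap
	var chunks []string
	for start := 0; start < len(text); start += step {
		end := start + size
		if end > len(text) {
			end = len(text)
		}
		chunks = append(chunks, text[start:end])
		if end == len(text) {
			break
		}
	}
	return chunks
}

func main() {
	chunks := chunkText("abcdefghij", 4, 2) // size 4, overlap 2
	fmt.Println(chunks)                     // prints [abcd cdef efgh ghij]
}
```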
```bash
# Document Conversion
ENABLE_MARKITDOWN=true  # Use Microsoft markitdown for better PDF/DOCX conversion

# Podcast Generation
ENABLE_PODCAST=true
PODCAST_VOICE=alloy  # Options: alloy, echo, fable, onyx, nova, shimmer

# Feature Flags
ALLOW_DELETE=true
ALLOW_MULTIPLE_NOTES_OF_SAME_TYPE=true
```

```bash
# Test
go test -v ./...

# Build
go build -o notex .

# Format
go fmt ./...

# Lint
golangci-lint run

# Vet
go vet ./...
```

Contributions are welcome! Please feel free to submit a Pull Request.
Apache License 2.0 - see LICENSE for details.
- Inspired by Google's NotebookLM
- Built with LangGraphGo
- Inspired by open-notebook
- Report issues on GitHub
- Join discussions in the Notex community
