A premium AI-powered chat interface with long-term memory capabilities. Built with Next.js 16 and Memvid, this application allows you to upload documents and have intelligent conversations based on their content.
- Memvid Integration: Persistent knowledge base with semantic search
- Vector Embeddings: Automatic embedding generation for uploaded documents
- RAG (Retrieval Augmented Generation): Answers questions based on your uploaded content
- PDF Support: Upload and process PDF files with automatic text extraction
- DOCX Support: Extract content from Word documents
- Auto-Indexing: Automatic lexical and vector indexing for fast retrieval
- Real-time Streaming: Fast, responsive chat with streaming responses
- Context-Aware: Answers include source citations from your documents
- User-Friendly Messages: Helpful guidance when no information is found
- File Upload: Drag-and-drop or click to upload documents
- Dark Mode: Beautiful dark theme optimized for readability
- Responsive Design: Works seamlessly on desktop and mobile
- Sidebar Navigation: Easy access to chat history and new conversations
- Premium Aesthetics: Clean, modern interface with smooth animations
| Category | Technology |
|---|---|
| Framework | Next.js 16 (App Router) |
| Language | TypeScript 5.x |
| Styling | Tailwind CSS v4 |
| Memory Engine | Memvid SDK v2.0.151 |
| AI/LLM | Memvid Built-in LLM |
| Document Processing | pdf-parse, mammoth |
| UI Components | React 19.2.3 |
- Node.js 18 or higher
- npm or pnpm
- Memvid API Key
- Gemini API Key (Optional, for custom integrations)
To set up the project locally:

- Clone the repository:

      git clone https://github.com/omartood/Chat-yourself.git
      cd Chat-yourself

- Install dependencies:

      npm install

- Configure environment variables: create a `.env` file in the root directory:

      OPENAI_API_KEY=
      MEMVID_API_KEY=your_memvid_api_key_here
      GEMINI_API_KEY=your_gemini_api_key_here

- Initialize the memory file:

      npx memvid create knowledge.mv2

- Run the development server:

      npm run dev

- Open your browser: navigate to http://localhost:3000
- Click the attachment button in the chat input
- Select a PDF or DOCX file
- Wait for the upload confirmation message
- Start asking questions about your document!
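Behind the scenes, the upload is handled by `app/api/upload/route.ts`, which extracts text before indexing. The sketch below shows one plausible shape of that route using the documented `pdf-parse` and `mammoth` packages; `storeInMemvid` is a hypothetical placeholder for the project's actual Memvid indexing helper in `lib/memory.ts`, not the SDK's real API:

```ts
// app/api/upload/route.ts -- illustrative sketch, not the actual implementation.
import { NextResponse } from "next/server";
import pdf from "pdf-parse";
import mammoth from "mammoth";

// Hypothetical placeholder: the real app indexes into knowledge.mv2 via the
// Memvid SDK (see lib/memory.ts). This stub only marks where that happens.
async function storeInMemvid(_text: string, _filename: string): Promise<void> {
  // placeholder only
}

export async function POST(req: Request) {
  const form = await req.formData();
  const file = form.get("file") as File | null;
  if (!file) {
    return NextResponse.json({ error: "No file provided" }, { status: 400 });
  }

  const buffer = Buffer.from(await file.arrayBuffer());
  const name = file.name.toLowerCase();
  let text: string;

  if (name.endsWith(".pdf")) {
    // pdf-parse resolves to an object whose .text field holds the extracted text.
    text = (await pdf(buffer)).text;
  } else if (name.endsWith(".docx")) {
    // mammoth extracts the raw text content of a Word document.
    text = (await mammoth.extractRawText({ buffer })).value;
  } else {
    return NextResponse.json({ error: "Unsupported file type" }, { status: 415 });
  }

  await storeInMemvid(text, file.name);
  return NextResponse.json({ ok: true, filename: file.name });
}
```

In the real route, the extracted text is presumably chunked and embedded via the Memvid SDK before the success confirmation is returned.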
Once you've uploaded documents, you can ask questions like:
- "What is this document about?"
- "Summarize the main points"
- "What does it say about [specific topic]?"
The AI will search your uploaded documents and provide answers with source citations.
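For reference, here is how a client could call the streaming chat endpoint and render the answer as it arrives. The `/api/chat` path comes from the project structure; the `{ message }` request body is an assumed shape for illustration:

```ts
// Illustrative client call -- the { message } body shape is assumed, not confirmed.
async function askQuestion(message: string): Promise<string> {
  const res = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.ok || !res.body) throw new Error(`Chat request failed: ${res.status}`);

  // Read the streamed response chunk by chunk as it arrives.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let answer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    answer += decoder.decode(value, { stream: true });
  }
  return answer;
}
```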
You can inspect the memory file with the Memvid CLI.

Check memory statistics:

    npx memvid stats knowledge.mv2

View uploaded documents:

    npx memvid timeline knowledge.mv2

Search your memory:

    npx memvid find knowledge.mv2 --query "your search term"

The project is laid out as follows:

    Chat-yourself/
    ├── app/
    │   ├── api/
    │   │   ├── chat/
    │   │   │   └── route.ts       # Chat API with RAG
    │   │   └── upload/
    │   │       └── route.ts       # Document upload & embedding
    │   ├── globals.css            # Global styles
    │   ├── layout.tsx             # Root layout
    │   └── page.tsx               # Main chat interface
    ├── components/
    │   ├── ChatBox.tsx            # Message display component
    │   ├── InputBar.tsx           # Chat input with file upload
    │   ├── Message.tsx            # Individual message bubble
    │   └── Sidebar.tsx            # Navigation sidebar
    ├── lib/
    │   └── memory.ts              # Memvid SDK initialization
    ├── public/                    # Static assets
    ├── .env                       # Environment variables
    ├── knowledge.mv2              # Memory database file
    └── package.json
| Variable | Description | Required |
|---|---|---|
| `MEMVID_API_KEY` | Memvid API key for memory management | Yes |
| `GEMINI_API_KEY` | Google Gemini API key (future use) | Optional |
| `OPENAI_API_KEY` | OpenAI API key (legacy support) | No |
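A minimal sketch of how `lib/memory.ts` might fail fast when the required key is missing (the commented-out import is a placeholder; the Memvid SDK's actual exports may differ):

```ts
// lib/memory.ts -- illustrative sketch; the Memvid SDK's real API may differ.
// import { MemvidClient } from "memvid"; // hypothetical import, check the SDK docs

export function getMemvidApiKey(): string {
  const apiKey = process.env.MEMVID_API_KEY;
  if (!apiKey) {
    // Fail fast with a clear message instead of a cryptic runtime error later.
    throw new Error("MEMVID_API_KEY is not set. Add it to your .env file.");
  }
  return apiKey;
}
```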
The memory file (`knowledge.mv2`) stores:
- Uploaded document content
- Vector embeddings (384 dimensions)
- Lexical indexes for fast text search
- Metadata and timestamps
Capacity: 50 MB on the free tier (about 194 documents at ~270 KB each)
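Taken together, a stored entry can be pictured as a record like the following. This is an illustrative TypeScript shape, not the actual `.mv2` schema:

```ts
// Illustrative shape of a stored entry -- not the actual .mv2 on-disk format.
interface MemoryRecord {
  id: string;
  content: string;      // original document chunk text
  embedding: number[];  // 384-dimensional vector (bge-small)
  source: string;       // originating filename
  createdAt: string;    // ISO timestamp
}
```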
Documents are automatically converted to vector embeddings using the bge-small model (default). This enables semantic search: finding relevant information based on meaning, not just keywords.
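Semantic search typically ranks stored vectors by cosine similarity to the query vector. Memvid handles this internally; the sketch below is only to make "search by meaning" concrete:

```ts
// Cosine similarity: 1 = same direction, 0 = unrelated, -1 = opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```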
When you ask a question:
- Your query is converted to a vector embedding
- Similar document chunks are retrieved from memory
- The LLM generates an answer based on the retrieved context
- Sources are cited in the response
The system uses adaptive retrieval to automatically determine how many document chunks to use based on relevance scores, ensuring high-quality answers without unnecessary context.
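The adaptive part can be pictured as a relevance cutoff rather than a fixed top-k. In the sketch below, `searchMemory` is a hypothetical stand-in for the Memvid SDK's search call; only the filtering logic is the point:

```ts
// Illustrative adaptive retrieval -- searchMemory is a hypothetical stand-in
// for the Memvid SDK's vector search over knowledge.mv2.
interface ScoredChunk { text: string; source: string; score: number }

async function searchMemory(_query: string): Promise<ScoredChunk[]> {
  return []; // placeholder: the real app queries the memory file via the SDK
}

async function retrieveContext(query: string, minScore = 0.6): Promise<ScoredChunk[]> {
  const candidates = await searchMemory(query);
  // Instead of always taking a fixed top-k, keep every chunk that clears the
  // relevance threshold: strong matches yield rich context, weak matches yield
  // none (which triggers the "no information found" message).
  return candidates
    .filter((c) => c.score >= minScore)
    .sort((a, b) => b.score - a.score);
}
```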
- ✅ Fixed document upload with embedding generation
- ✅ Improved "no information found" messages
- ✅ Added upload success confirmations
- ✅ Optimized chat error handling
- ✅ Enhanced user guidance and help messages
| Command | Description |
|---|---|
| `npm run dev` | Start development server |
| `npm run build` | Build for production |
| `npm run start` | Start production server |
| `npm run lint` | Run ESLint |
Contributions are welcome! Here's how you can help:
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Guidelines:
- Follow the existing code style
- Add comments for complex logic
- Update documentation for new features
- Test thoroughly before submitting
Distributed under the MIT License. See LICENSE for more information.
Omar Jibril Abdulkhadir (Omar Tood)
- Portfolio: omartood.com
- GitHub: @omartood
- Memvid - For the amazing memory engine
- Next.js - The React framework for production
- Tailwind CSS - For beautiful styling
- Vercel - For hosting and deployment
If you encounter any issues or have questions:
- Open an issue on GitHub
- Check the Memvid documentation
- Review the troubleshooting section below
Issue: "Chat request failed"
- Solution: Check that your Memvid API key is correctly set in `.env`
Issue: "No relevant information found"
- Solution: Upload documents first using the attachment button
Issue: Upload fails
- Solution: Ensure the file is a valid PDF or DOCX file
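A small client-side check before uploading can catch that last case early. This is extension-based and illustrative only:

```ts
// Illustrative pre-upload validation for the attachment button.
function isSupportedDocument(file: File): boolean {
  const name = file.name.toLowerCase();
  return name.endsWith(".pdf") || name.endsWith(".docx");
}
```

A component like `InputBar.tsx` could run this check before POSTing to `/api/upload`.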
Built with ❤️ by Omar Tood