Lorph is an advanced AI chat application designed for interactive communication with various cloud-based Large Language Models (LLMs) via the Ollama framework. This project integrates robust file processing capabilities, enabling the extraction of textual content from diverse document types, and incorporates web search functionalities to provide contextually rich and accurate responses.
- Multi-Model AI Chat: Seamless interaction with a selection of powerful cloud LLMs through Ollama.
- Comprehensive File Processing: Supports text extraction from images (OCR), PDF documents, Microsoft Word (.docx), and Excel (.xlsx) files, enhancing contextual understanding for AI responses.
- Dynamic Web Search Integration: Leverages real-time web search to augment AI responses with current and relevant information.
- Intuitive User Interface: Features a responsive chat interface with chat history management, dynamic model selection, and file attachment capabilities.
Follow these steps to get the Lorph application running on your local machine.
```bash
git clone https://github.com/AL-MARID/Lorph.git
cd Lorph
```

Lorph requires the Ollama client to connect to cloud models and an API key for authentication.
- Ollama Account: Create an account at Ollama.
- Email Verification: Verify your registered email address.
- Login Credentials: Have your Ollama login credentials readily available.
Install the Ollama client on your local machine.
- Linux & macOS:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```

- Windows: Download the installer from ollama.com/download/windows
- Start Ollama Server: In a new terminal, initiate the server process:

```bash
ollama serve
```

- Device Pairing & Login: In a separate terminal, authenticate your device:

```bash
ollama signin
```

Follow the on-screen instructions to open the authentication URL and connect your device.
To enable Lorph to connect with Ollama's cloud models, an API key must be configured.
- Generate API Key: After completing the device pairing, generate a new API key from your Ollama settings: ollama.com/settings/keys.
- Create `.env.local` file: In the root of the Lorph project directory, create a new file named `.env.local`.
- Add API Key: Insert the generated key into the `.env.local` file. The environment variable name must be `OLLAMA_CLOUD_API_KEY`:

```
OLLAMA_CLOUD_API_KEY=your_api_key_here
```
Use your preferred package manager to install the necessary project dependencies.
```bash
# Using npm
npm install

# Or using pnpm
pnpm install

# Or using yarn
yarn install
```

- Build the project:
```bash
pnpm build
```

- Run the application:

```bash
pnpm start
```
Access the application by navigating to http://localhost:3000 in your web browser.
The system is configured to interact with a diverse range of powerful, cloud-based LLMs through the Ollama framework.
| Model Name | Description |
|---|---|
| `deepseek-v3.1:671b-cloud` | A large-scale model for general tasks. |
| `gpt-oss:20b-cloud` | Open-source GPT variant (20B parameters). |
| `gpt-oss:120b-cloud` | Open-source GPT variant (120B parameters). |
| `kimi-k2:1t-cloud` | A massive model known for its context handling. |
| `qwen3-coder:480b-cloud` | Specialized model for coding and development tasks. |
| `glm-4.6:cloud` | General language model. |
| `glm-4.7:cloud` | General language model (updated version). |
| `minimax-m2:cloud` | High-performance model. |
| `mistral-large-3:675b-cloud` | A powerful model from the Mistral family. |
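The dynamic model selection feature can be pictured as a lookup over a typed registry of the models above. The following TypeScript sketch is illustrative only: the `ModelInfo` type and `findModel` helper are assumptions for this example, not part of Lorph's actual source.

```typescript
// Hypothetical typed registry mirroring (a subset of) the model table above.
interface ModelInfo {
  id: string;          // Ollama model tag, e.g. "gpt-oss:20b-cloud"
  description: string; // short human-readable summary
}

const MODELS: ModelInfo[] = [
  { id: "deepseek-v3.1:671b-cloud", description: "Large-scale model for general tasks" },
  { id: "gpt-oss:20b-cloud", description: "Open-source GPT variant (20B parameters)" },
  { id: "qwen3-coder:480b-cloud", description: "Specialized model for coding tasks" },
];

// Resolve a user-facing name (with or without the ":tag" suffix) to a registry entry.
function findModel(name: string): ModelInfo | undefined {
  return MODELS.find((m) => m.id === name || m.id.split(":")[0] === name);
}
```

Keeping model metadata in one typed list like this lets the UI populate its selector and validate user input from the same source of truth.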
Lorph leverages a modern and robust technology stack to deliver its features:
| Category | Technology | Version |
|---|---|---|
| Framework | React | 19.2.4 |
| Language | TypeScript | 5.8.2 |
| Build Tool | Vite | 7.3.1 |
| Styling | Tailwind CSS | CDN (Runtime) |
| Icons | Lucide React | 0.563.0 |
| Markdown Rendering | React Markdown | 10.1.0 |
| Markdown Extensions | remark-gfm | 4.0.1 |
| Syntax Highlighting | React Syntax Highlighter | 16.1.0 |
| PDF Processing | PDF.js | 3.11.174 |
| OCR Engine | Tesseract.js | 5.0.4 |
| Word Document Parsing | Mammoth | 1.6.0 |
| Excel File Parsing | read-excel-file | 5.7.1 |
The application extracts text content from attached files and includes it in the conversation context, enabling the AI to process and respond based on the document's information.
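In outline, including a document's extracted text in the conversation context amounts to prepending it to the user's message. The TypeScript sketch below is a simplified illustration under that assumption; `buildContext` and its exact formatting are hypothetical, not Lorph's actual implementation.

```typescript
// Hypothetical helper: merge extracted file text into the prompt context.
function buildContext(
  userMessage: string,
  files: { name: string; text: string }[]
): string {
  if (files.length === 0) return userMessage;
  const sections = files.map((f) => `--- Attached file: ${f.name} ---\n${f.text}`);
  // File contents precede the question so the model reads them before answering.
  return `${sections.join("\n\n")}\n\n${userMessage}`;
}
```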
| Format | Processing Method |
|---|---|
| Images (JPEG, PNG) | OCR text extraction via Tesseract.js (English language) |
| PDF | Multi-page text extraction via PDF.js with page-by-page output |
| Word (DOCX) | Raw text extraction via Mammoth |
| Excel (XLSX) | Row-column parsing with pipe-delimited output via read-excel-file |
| Plaintext (TXT, MD, JSON, CSV, JS, TS, TSX, JSX, PY) | Direct file read |
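The routing in the table above can be sketched as an extension-based dispatcher. This TypeScript fragment is illustrative only; the `ProcessingMethod` union and `methodFor` function are assumptions about how such routing might look, not Lorph's code.

```typescript
type ProcessingMethod = "ocr" | "pdf" | "docx" | "xlsx" | "plaintext";

const PLAINTEXT_EXTS = new Set(["txt", "md", "json", "csv", "js", "ts", "tsx", "jsx", "py"]);

// Map a filename to the processing method listed in the table above.
function methodFor(filename: string): ProcessingMethod | undefined {
  const ext = filename.split(".").pop()?.toLowerCase() ?? "";
  if (ext === "jpeg" || ext === "jpg" || ext === "png") return "ocr";
  if (ext === "pdf") return "pdf";
  if (ext === "docx") return "docx";
  if (ext === "xlsx") return "xlsx";
  if (PLAINTEXT_EXTS.has(ext)) return "plaintext";
  return undefined; // unsupported format
}
```

A dispatcher of this shape keeps the per-format extractors (Tesseract.js, PDF.js, Mammoth, read-excel-file) decoupled from the attachment UI.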
The web search functionality in Lorph is engineered with a multi-layered architecture to ensure efficient and accurate information retrieval:
- Data Acquisition Layer: Handles raw HTTP requests and employs proxy rotation (using services like allorigins, corsproxy.io, and codetabs) to overcome potential access restrictions and ensure reliable data fetching.
- Processing Layer: Focuses on transforming raw HTML data into a queryable Document Object Model (DOM), including URL sanitization and resolution of redirects to obtain clean and usable links.
- Knowledge Extraction Layer: Extracts relevant entities and information from various sources, including structured data from the Wikipedia OpenSearch API and unstructured content from DuckDuckGo HTML DOM.
- Orchestration Layer: Manages the overall search workflow, performing parallel data fetching, merging results from different sources, deduplicating entries based on normalized URLs, and classifying results by type (e.g., web, video).
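The deduplication step in the orchestration layer can be sketched as follows. `normalizeUrl` and `dedupe` are hypothetical helpers showing one way to merge results by normalized URL, not Lorph's actual functions.

```typescript
interface SearchResult {
  url: string;
  title: string;
  type: "web" | "video";
}

// Normalize a URL so near-duplicates compare equal:
// lowercase host, strip "www.", drop the trailing slash and fragment.
function normalizeUrl(raw: string): string {
  const u = new URL(raw);
  const host = u.hostname.toLowerCase().replace(/^www\./, "");
  const path = u.pathname.replace(/\/$/, "");
  return `${host}${path}${u.search}`;
}

// Keep the first result seen for each normalized URL.
function dedupe(results: SearchResult[]): SearchResult[] {
  const seen = new Map<string, SearchResult>();
  for (const r of results) {
    const key = normalizeUrl(r.url);
    if (!seen.has(key)) seen.set(key, r);
  }
  return [...seen.values()];
}
```

Normalizing before comparing matters because the same page often arrives from different sources with cosmetic URL differences (`www.` prefix, trailing slash, fragment).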
Contributions to the Lorph project are welcome. For suggestions, feature enhancements, or bug fixes, please adhere to the following workflow:
- Fork the repository.
- Create a new branch (`git checkout -b feature/YourFeature`).
- Implement your changes and commit them (`git commit -m 'Add some feature'`).
- Push your branch to your fork (`git push origin feature/YourFeature`).
- Open a Pull Request to the main repository.
This project is distributed under the MIT License. Refer to the LICENSE file for complete details.