4 changes: 4 additions & 0 deletions LocalMind-Backend/.env.example
@@ -45,3 +45,7 @@ BACKEND_URL=http://localhost:5000
OPENAI_API_KEY=your_openai_api_key_here
GOOGLE_API_KEY=your_google_api_key_here
GROQ_API_KEY=your_groq_api_key_here

# Ollama Configuration (Local LLM)
OLLAMA_HOST=http://localhost:11434
OLLAMA_DEFAULT_MODEL=llama3
LocalMind-Backend/src/api/v1/Ai-model/Ollama/Ollama.controller.ts
@@ -2,9 +2,11 @@ import { Request, Response } from 'express'
import { SendResponse } from '../../../../utils/SendResponse.utils'
import OllamaService from './Ollama.service'
import OllamaUtils from './Ollama.utils'
import axios from 'axios'
import { env } from '../../../../constant/env.constant'

class OllamaController {
  async ChartWithOllama(req: Request, res: Response) {
  async ChatWithOllama(req: Request, res: Response) {
    try {
      const { prompt, model } = req.body

@@ -20,6 +22,85 @@ class OllamaController
      SendResponse.error(res, 'Failed to generate AI response', 500, err)
    }
  }

  async checkOllamaStatus(req: Request, res: Response) {
    try {
      const response = await axios.get(`${env.OLLAMA_HOST}/api/tags`)

      const models = response.data.models || []

      SendResponse.success(
        res,
        'Ollama is running and accessible',
        {
          status: 'online',
          host: env.OLLAMA_HOST,
          models: models.map((m: { name: string; size: number; modified_at: string }) => ({
            name: m.name,
            size: m.size,
            modified: m.modified_at,
          })),
          totalModels: models.length,
        },
        200
      )
    } catch (error: any) {
      if (error.code === 'ECONNREFUSED' || error.code === 'ECONNRESET') {
        SendResponse.error(
          res,
          'Ollama server is not running. Please start it using: ollama serve',
          503,
          { host: env.OLLAMA_HOST }
        )
      } else {
        SendResponse.error(res, 'Failed to connect to Ollama', 500, error)
      }
    }
Comment on lines +47 to +58 (Contributor, severity: high)

Catching errors as any is not type-safe in modern TypeScript. More importantly, sending the entire error object in the response can leak sensitive information such as stack traces or system paths in a production environment. It's recommended to type the error as unknown, perform type checks (e.g. using axios.isAxiosError), log the full error on the server for debugging, and send a more generic, safe error message to the client:

    } catch (error: unknown) {
      if (axios.isAxiosError(error) && (error.code === 'ECONNREFUSED' || error.code === 'ECONNRESET')) {
        SendResponse.error(
          res,
          'Ollama server is not running. Please start it using: ollama serve',
          503,
          { host: env.OLLAMA_HOST }
        )
      } else {
        console.error('Failed to connect to Ollama:', error); // Log the full error for server-side debugging
        SendResponse.error(res, 'Failed to connect to Ollama. Check server logs for details.', 500);
      }
    }

  }

  async listModels(req: Request, res: Response) {
    try {
      const models = await OllamaUtils.listAvailableModels()

      SendResponse.success(res, 'Models retrieved successfully', { models, count: models.length }, 200)
    } catch (error: any) {
      if (error.code === 'ECONNREFUSED' || error.code === 'ECONNRESET') {
        SendResponse.error(
          res,
          'Ollama server is not running. Please start it using: ollama serve',
          503,
          { host: env.OLLAMA_HOST }
        )
      } else {
        SendResponse.error(res, 'Failed to list models', 500, error)
      }
    }
  }

  async testModel(req: Request, res: Response) {
    try {
      const { model } = req.params

      // Test with a simple prompt
      const testPrompt = 'Say hello in one sentence'

      const response = await OllamaService.generateText(testPrompt, model)
Comment on lines +80 to +87 (Copilot AI, Jan 4, 2026)

The testModel method doesn't validate whether the model parameter is provided, nor check that the model exists, before attempting to test it. This could lead to unhelpful error messages if a user provides an invalid or non-existent model name. Consider adding validation similar to the ChatWithOllama method, which calls OllamaUtils.isModelAvailable (renamed to assertModelAvailable in this PR) to check model availability before use.
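One possible shape for that guard, as a sketch only (it assumes the renamed OllamaUtils.assertModelAvailable from this PR, which still resolves to a boolean, and the existing SendResponse helper; adjust names to whatever ships):

      // Sketch: reject missing model names and confirm the model is installed before testing it.
      const { model } = req.params

      if (!model) {
        SendResponse.error(res, 'Model name is required', 400)
        return
      }

      const available = await OllamaUtils.assertModelAvailable(model)
      if (!available) {
        SendResponse.error(res, `Model '${model}' is not installed. Pull it with: ollama pull ${model}`, 404)
        return
      }

      // ...continue with the existing test-prompt logic below.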

      SendResponse.success(
        res,
        `Model '${model}' is working correctly`,
        {
          model,
          testPrompt,
          response,
          latency: '< 1s', // Could be measured accurately
Comment from Contributor (severity: medium)

The latency is hardcoded as '< 1s'. While the inline comment acknowledges this, it would be more useful to measure and return the actual latency of the model's response, which provides more accurate diagnostic information. You can achieve this by recording the time before and after the OllamaService.generateText call, for example using performance.now().

        },
Comment on lines +96 to +97 (Copilot AI, Jan 4, 2026)

The testModel endpoint returns a hardcoded latency value of "< 1s", which is misleading and not accurate. The comment on line 87 acknowledges this could be measured accurately. Consider either removing the latency field entirely or implementing actual latency measurement by recording the start and end time of the model invocation.
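A minimal sketch of measuring the round trip instead of hardcoding it (performance.now() is available globally in recent Node.js releases, or via node:perf_hooks; the latency field format here is illustrative):

      const startTime = performance.now()
      const response = await OllamaService.generateText(testPrompt, model)
      const latencyMs = Math.round(performance.now() - startTime)

      SendResponse.success(
        res,
        `Model '${model}' is working correctly`,
        {
          model,
          testPrompt,
          response,
          latency: `${latencyMs}ms`, // measured around the generateText call
        },
        200
      )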
        200
      )
    } catch (error: any) {
      SendResponse.error(res, `Model '${req.params.model}' test failed`, 500, error)
    }
  }
}

export default new OllamaController()
12 changes: 11 additions & 1 deletion LocalMind-Backend/src/api/v1/Ai-model/Ollama/Ollama.routes.ts
@@ -4,6 +4,16 @@ import OllamaController from './Ollama.controller'

const router: Router = Router()

router.post('/v1/chat-with-ollama', OllamaController.ChartWithOllama)
// Chat endpoint
router.post('/v1/chat-with-ollama', OllamaController.ChatWithOllama)

// Health check and status
router.get('/v1/ollama/status', OllamaController.checkOllamaStatus)

// List all available models
router.get('/v1/ollama/models', OllamaController.listModels)

// Test a specific model
router.get('/v1/ollama/test/:model', OllamaController.testModel)

export { router as OllamaRouter }
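For a quick manual check of the new endpoints once the backend is running, a throwaway client script (the path prefix depends on where OllamaRouter is mounted in the app, so adjust it to your setup; BACKEND_URL matches the default in .env.example):

    import axios from 'axios'

    const BACKEND_URL = process.env.BACKEND_URL ?? 'http://localhost:5000'

    async function smokeTestOllamaRoutes() {
      // Confirm the backend can reach the local Ollama server
      const status = await axios.get(`${BACKEND_URL}/v1/ollama/status`)
      console.log(status.data)

      // List the models the backend can see
      const models = await axios.get(`${BACKEND_URL}/v1/ollama/models`)
      console.log(models.data)
    }

    smokeTestOllamaRoutes().catch((err) => console.error('Smoke test failed:', err.message))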
LocalMind-Backend/src/api/v1/Ai-model/Ollama/Ollama.service.ts
@@ -1,13 +1,14 @@
import { OllamaEmbeddings, Ollama } from '@langchain/ollama'
import AiTemplate from '../../../../Template/v1/Ai.template'
import { env } from '../../../../constant/env.constant'

class OllamaService {
  public async getVector(data: any): Promise<number[] | undefined> {
    try {
      const embeddings = new OllamaEmbeddings({
        model: 'koill/sentence-transformers:paraphrase-multilingual-minilm-l12-v2',
        maxRetries: 2,
        baseUrl: 'http://localhost:11434',
        baseUrl: env.OLLAMA_HOST,
      })

      const vector = await embeddings.embedDocuments(data)
@@ -24,7 +25,7 @@ class OllamaService
      const promptTemplate = await AiTemplate.getPromptTemplate()

      const ollama = new Ollama({
        baseUrl: 'http://localhost:11434',
        baseUrl: env.OLLAMA_HOST,
        model: model,
        maxRetries: 2,
        cache: false,
7 changes: 4 additions & 3 deletions LocalMind-Backend/src/api/v1/Ai-model/Ollama/Ollama.utils.ts
@@ -1,9 +1,10 @@
import axios from 'axios'
import { env } from '../../../../constant/env.constant'

class OllamaUtils {
  async isModelAvailable(modelName: string): Promise<boolean> {
  async assertModelAvailable(modelName: string): Promise<boolean> {
    try {
      const response = await axios.get('http://localhost:11434/api/tags')
      const response = await axios.get(`${env.OLLAMA_HOST}/api/tags`)

      if (!response.data || !response.data.models || !Array.isArray(response.data.models)) {
        throw new Error('Please start the Ollama server to check model availability')
@@ -25,7 +26,7 @@

  async listAvailableModels(): Promise<string[]> {
    try {
      const response = await axios.get('http://localhost:11434/api/tags')
      const response = await axios.get(`${env.OLLAMA_HOST}/api/tags`)

      if (!response.data || !response.data.models || !Array.isArray(response.data.models)) {
        throw new Error('Unexpected response format from Ollama API')
3 changes: 3 additions & 0 deletions LocalMind-Backend/src/validator/env.ts
@@ -52,4 +52,7 @@ export const EnvSchema = z.object({
  GOOGLE_API_KEY: z.string().optional(),
  OPENAI_API_KEY: z.string().optional(),
  BACKEND_URL: z.string().default('http://localhost:5000'),

  OLLAMA_HOST: z.string().default('http://localhost:11434'),

})
2 changes: 1 addition & 1 deletion README.md
@@ -253,7 +253,7 @@ Ensure you have the following installed:
| **Node.js** | 18.x or higher | [nodejs.org](https://nodejs.org/) |
| **npm** | 9.x or higher | Included with Node.js |
| **Git** | Latest | [git-scm.com](https://git-scm.com/) |
| **Ollama** (optional) | Latest | [ollama.ai](https://ollama.ai/) |
| **Ollama** (optional) | Latest | [ollama.ai](https://ollama.ai/) - [Setup Guide](docs/OLLAMA_SETUP.md) |
Comment from Copilot AI (Jan 4, 2026)

There is an inconsistency in the Ollama URL. The README.md uses "ollama.ai" while the documentation file (OLLAMA_SETUP.md) uses "ollama.com" for official links (installation, downloads, docs, library). Given the documentation's extensive use of ollama.com, the URL in README.md should likely be updated to https://ollama.com to maintain consistency across the project.

Suggested change
| **Ollama** (optional) | Latest | [ollama.ai](https://ollama.ai/) - [Setup Guide](docs/OLLAMA_SETUP.md) |
| **Ollama** (optional) | Latest | [ollama.com](https://ollama.com/) - [Setup Guide](docs/OLLAMA_SETUP.md) |

#### Verify Installation
