
Where in the code can you change the inference from Ollama to something else? #8

@zxcvxzcv-johndoe

Description

I have managed to get this running on Windows with WSL, but the last step fails with the error below.

Where in the code can I change it from using Ollama to something else, such as the OpenAI API (koboldcpp in practice)?

Thanks!

[llm/error] [1:llm:Ollama] [6ms] LLM run errored with error: "Unexpected end of JSON input"

  error SyntaxError: Unexpected end of JSON input
    at JSON.parse (<anonymous>)
    at parseJSONFromBytes (node:internal/deps/undici/undici:4553:19)
    at successSteps (node:internal/deps/undici/undici:4527:27)
    at fullyReadBody (node:internal/deps/undici/undici:1307:9)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async specConsumeBody (node:internal/deps/undici/undici:4536:7)
    at async createOllamaStream (webpack-internal:///(sc_server)/./node_modules/langchain/dist/util/ollama.js:26:21)
    at async Ollama._call (webpack-internal:///(sc_server)/./node_modules/langchain/dist/llms/ollama.js:313:26)
    at async Promise.all (index 0)
    at async Ollama._generate (webpack-internal:///(sc_server)/./node_modules/langchain/dist/llms/base.js:311:29)
    at async Ollama._generateUncached (webpack-internal:///(sc_server)/./node_modules/langchain/dist/llms/base.js:174:22)
    at async Ollama.call (webpack-internal:///(sc_server)/./node_modules/langchain/dist/llms/base.js:242:34)
    at async POST (webpack-internal:///(sc_server)/./src/app/api/qa-pg-vector/route.ts:61:24)
    at async eval (webpack-internal:///(sc_server)/./node_modules/next/dist/server/future/route-modules/app-route/module.js:242:37)
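For anyone searching: per the trace above, the Ollama instance is called from the POST handler in src/app/api/qa-pg-vector/route.ts, so that is where to swap it out. Below is a minimal sketch of the swap, not the project's actual code. It assumes a 0.0.x-era langchain JS release (matching the import paths in the trace), whose OpenAI constructor accepts OpenAI SDK configuration as a second argument, and koboldcpp serving its OpenAI-compatible API at its default http://localhost:5001/v1; the port and the dummy key are assumptions, so check your koboldcpp startup log.

```ts
// Minimal sketch, assuming a 0.0.x-era langchain JS release and
// koboldcpp's OpenAI-compatible endpoint at http://localhost:5001/v1.
import { OpenAI } from "langchain/llms/openai";

// Replace the existing `new Ollama({ ... })` with something like:
const model = new OpenAI(
  {
    temperature: 0.7,
    // koboldcpp typically ignores the API key, but the OpenAI client
    // refuses to start without one, so pass a dummy value.
    openAIApiKey: "sk-not-used",
  },
  {
    // Point the OpenAI client at koboldcpp instead of api.openai.com.
    basePath: "http://localhost:5001/v1",
  }
);

// The instance is then called the same way the Ollama one was:
async function main() {
  const answer = await model.call("Say hello");
  console.log(answer);
}
main();
```

In this sketch the only setting that should actually matter is basePath; koboldcpp generally accepts any model name and key on its OpenAI-compatible endpoint.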
