
Poor Summarization When Using Thinking Models With Ollama #450

@JoshuaKimsey

Description


Variant

Desktop app (Linux)

Affected area

AI Insights / World Brief

Bug description

It appears that the "Summarize This Panel" feature, when used with a local back-end like Ollama, does a very poor job of summarizing what is shown in the panel: it generates wildly inaccurate summaries of what is going on, or outputs nonsensical gibberish. I am using competent models (Qwen3-vl:8b, and also the new Qwen3.5:35b-A3B; both are thinking models), so I don't believe this is a model issue. I wasn't sure it was even using Ollama until I checked the Ollama instance in my terminal, which does show the model being loaded when called.
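
Since both models are thinking models, one plausible cause is that their reasoning tokens (the <think>...</think> block Qwen3-family models emit before the answer) are leaking into, or being mistaken for, the summary. For reference, here is a minimal sketch of how a client could guard against that, assuming the app talks to Ollama's default /api/chat endpoint; the summarize helper and prompt wording below are hypothetical, not the app's actual code:

```python
import re
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint

def summarize(model: str, panel_text: str) -> str:
    """Hypothetical helper: summarize panel text via a local Ollama model."""
    resp = requests.post(OLLAMA_URL, json={
        "model": model,
        "messages": [{"role": "user",
                      "content": f"Summarize this news panel:\n\n{panel_text}"}],
        "stream": False,
        # Recent Ollama releases accept a "think" flag for thinking-capable
        # models, which keeps reasoning out of message.content entirely.
        "think": False,
    }, timeout=300)
    resp.raise_for_status()
    content = resp.json()["message"]["content"]
    # Fallback for models or Ollama versions that still emit reasoning inline:
    # strip any <think>...</think> block before using the text as a summary.
    return re.sub(r"<think>.*?</think>", "", content, flags=re.DOTALL).strip()
```

If a safeguard like the tag-stripping fallback is missing, a thinking model's chain-of-thought could show up verbatim in the panel, which would look a lot like the gibberish in the second screenshot below.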

Steps to reproduce

1. Use Ollama as the back-end for the LLMs.
2. Summarize some of the news panels.
3. Observe that some are poorly summarized or output gibberish.

Expected behavior

Summarization should accurately reflect the contents of the news panels, and it should never output gibberish.

Screenshots / Console errors

Very poorly summarized:

[screenshot]

Gibberish:

[screenshot]

Browser & OS

Bazzite 43 (Fedora-based Linux) with Ollama 0.17.2 via Docker
