OOMKill when using conversation.process and a large context in 2025.9 #77

@lJaffy

Description

When triggering conversation.process with a larger context (~2000 tokens) and the custom conversation integration, memory usage in Home Assistant spikes dramatically (3 GB -> 6 GB in about 10 seconds) and the whole HA instance appears to be OOM-killed.

I'm not sure whether this originates in Home Assistant itself or in the libraries shipped with the integration, but I can't reproduce it when calling the Ollama integration directly.
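For anyone trying to reproduce: a minimal sketch of the trigger described above, as a Home Assistant script calling the standard `conversation.process` action. The `agent_id` is a placeholder for whatever custom conversation agent is configured, and the templated filler text is only an approximation of a ~2000-token context, not the reporter's actual prompt.

```yaml
# Reproduction sketch (assumptions: agent_id is hypothetical; filler text
# stands in for the ~2000-token context from the report).
script:
  reproduce_conversation_oom:
    sequence:
      - action: conversation.process
        data:
          agent_id: conversation.my_custom_agent  # placeholder entity id
          text: >-
            {{ 'lorem ipsum dolor sit amet ' * 400 }}
```

Watching `ha host stats` (or container memory) while running this script should show whether the spike is tied to the service call itself.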
