chat vs. generate #38

@dhicks

I've been playing around with this today, and having a lot of fun trying out some beyond-text-mining stuff on a text dataset.

I noticed that m_backend_submit.mall_ollama() uses ollamar::chat() to access the LLM, rather than ollamar::generate(). As I understand it, Ollama's chat API endpoint "remembers" the conversation so far, while generate is for a single, isolated prompt and reply. So would using chat() mean the model's response to row n of the dataset depends on the messages and replies for rows 1 through n − 1?
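To make the distinction concrete, here is a minimal sketch of the two calls as I understand the ollamar interface (model name and `output` argument are assumptions; check them against your installed ollamar version, and a local Ollama server must be running):

```r
library(ollamar)

# generate(): one isolated prompt per call. No conversation state is
# carried between calls, so each dataset row is handled independently.
resp_gen <- generate("llama3",
                     prompt = "Classify the sentiment: 'great movie'",
                     output = "text")

# chat(): the caller supplies the full message history explicitly.
# Earlier turns only influence the reply if the calling code appends
# them to `messages`; a fresh single-message list per row is just as
# isolated as a generate() call.
msgs <- list(
  list(role = "user",
       content = "Classify the sentiment: 'great movie'")
)
resp_chat <- chat("llama3", messages = msgs, output = "text")
```

Note that Ollama's chat endpoint itself is stateless: it only "remembers" whatever history the client resends in `messages`. So if mall builds a fresh one-message list for each row, chat() should behave like generate() for isolation purposes; the question is whether it does.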
