I've been playing around with this today, and having a lot of fun trying out some beyond-text-mining stuff on a text dataset.
I noticed that `m_backend_submit.mall_ollama()` uses `ollamar::chat()` to access the LLM, rather than `ollamar::generate()`. As I understand it, Ollama's chat API endpoint "remembers" the conversation so far, while generate is for a single, isolated prompt and reply. So using `chat()` would mean the model's response to row n of the dataset could depend on the messages and replies for rows 1 to n-1?
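For illustration, here's a minimal sketch of the two ollamar calls (the model name `"llama3.2"` is a placeholder for whatever is pulled locally, and the prompt is made up). `generate()` takes a single prompt per call, while `chat()` takes an explicit messages list, so whether row n "sees" earlier rows depends on whether that list is rebuilt fresh for each row or accumulated across rows:

```r
library(ollamar)

# /api/generate: one isolated prompt and reply, no conversation state
# is carried between calls.
resp_generate <- generate(
  model  = "llama3.2",  # placeholder; any locally pulled model
  prompt = "Classify the sentiment of: 'great product'",
  output = "text"
)

# /api/chat: the caller supplies the message history explicitly. If this
# list is rebuilt per row, each row is still isolated; if it is appended
# to across rows, row n's reply can depend on rows 1 to n-1.
msgs <- list(
  list(role = "user",
       content = "Classify the sentiment of: 'great product'")
)
resp_chat <- chat(
  model    = "llama3.2",
  messages = msgs,
  output   = "text"
)
```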