bug: Git issue agent fails with Ollama models after crewai 1.10.1 upgrade #173

@mrsabath

Description

Problem

After the crewai upgrade from 0.203.1 to 1.10.1 (PR #172), the git-issue-agent no longer works with local Ollama models. The agent connects to the MCP tool and lists available tools, but the LLM's responses are not properly translated into tool calls that crewai can execute.

Symptoms

Three distinct failure modes were observed across the Ollama models tested:

1. ibm/granite4:latest — ReAct text output instead of function call

The model generates a text-based ReAct response that crewai treats as the final answer rather than executing as a tool call:

{"Thought": "The user wants to list issues...", "Action": "list_issues",
 "Action Input": {"owner": "kagenti", "repo": "kagenti"}, "Observation": null}

The MCP tool receives initialize and tools/list but never tools/call.

2. granite3.3:8b — Instructor multiple tool calls error

Instructor does not support multiple tool calls, use List[Model] instead

The model produces correct JSON as text content (finish_reason='stop', tool_calls=None) rather than as a structured tool call:

content='{"owner": "kagenti", "repo": "kagenti", "issue_numbers": []}'
tool_calls=None

3. llama3.2:3b-instruct-fp16 — String-serialized arrays + Pydantic validation

The model generates proper tool_calls sometimes, but serializes arrays as strings:

arguments='{"owner": "kagenti", "repo": "kagenti", "issue_numbers": "[1, 2, 3]"}'

This fails Pydantic v2 validation:

1 validation error for IssueSearchInfo
issue_numbers
  Input should be a valid array [type=list_type, input_value='[1, 2, 3]', input_type=str]
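The validation failure is easy to reproduce in isolation. The model below is a hypothetical stand-in for the project's IssueSearchInfo schema (the real field types may differ); the point is that Pydantic v2 does not coerce a string-encoded array into a list:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical stand-in for the project's IssueSearchInfo schema.
class IssueSearchInfo(BaseModel):
    owner: str
    repo: str
    issue_numbers: list[int]

# The LLM serialized the array as a string, so validation fails
# with the list_type error shown above.
try:
    IssueSearchInfo(owner="kagenti", repo="kagenti", issue_numbers="[1, 2, 3]")
except ValidationError as e:
    print(e.errors()[0]["type"])  # list_type
```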

On retries, it also hits the same "Instructor does not support multiple tool calls" error.

Root Cause

crewai 1.10.1 uses litellm + instructor for function calling. The Ollama models (via litellm's Ollama provider) don't consistently produce responses in the format crewai's instructor integration expects:

  • Some models output tool call intent as text content instead of structured tool_calls
  • Some models serialize array arguments as strings
  • The instructor integration rejects responses with multiple tool calls

This worked with crewai 0.203.1, which likely used different (more lenient) tool-call parsing.

Environment

  • crewai 1.10.1, crewai-tools 1.10.1
  • litellm (via crewai[litellm])
  • Ollama models: granite4, granite3.3:8b, llama3.2:3b-instruct-fp16
  • Kubernetes with AuthBridge (confirmed networking/token exchange is working)

Workarounds

  1. Use OpenAI — function calling works reliably with gpt-4o-mini and above
  2. Partial fix for string arrays — Add a Pydantic field_validator to IssueSearchInfo that coerces string-encoded lists (json.loads on string inputs). This fixes symptom 3 but not 1 or 2.
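A minimal sketch of workaround 2, assuming the schema looks roughly like the IssueSearchInfo stand-in below (the real field set may differ). A `mode="before"` validator runs before type validation, so it can json.loads a string-encoded array before Pydantic checks the list type:

```python
import json
from pydantic import BaseModel, field_validator

class IssueSearchInfo(BaseModel):
    owner: str
    repo: str
    issue_numbers: list[int] = []

    @field_validator("issue_numbers", mode="before")
    @classmethod
    def coerce_string_list(cls, v):
        # Some Ollama models serialize arrays as strings: '"[1, 2, 3]"'.
        # Decode them before list validation runs.
        if isinstance(v, str):
            return json.loads(v)
        return v

info = IssueSearchInfo(owner="kagenti", repo="kagenti", issue_numbers="[1, 2, 3]")
print(info.issue_numbers)  # [1, 2, 3]
```

As noted, this only addresses symptom 3; symptoms 1 and 2 fail before the arguments ever reach Pydantic.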

Suggested Fixes

  1. Add a field_validator to IssueSearchInfo.issue_numbers to handle string-encoded arrays (quick fix for Pydantic validation)
  2. Investigate crewai 1.10.1's instructor configuration for Ollama — there may be settings to improve function calling compatibility
  3. Consider adding a fallback ReAct text parser for models that don't support native function calling
  4. Test with a broader set of Ollama models to find which ones work reliably with crewai 1.10.1
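For suggested fix 3, a fallback parser could look roughly like this. This is a hypothetical helper (not part of crewai) that extracts a tool-call intent from the JSON-ish ReAct blob models like granite4 emit as plain text:

```python
import json

def parse_react_text(text: str):
    """Best-effort extraction of a tool call from ReAct-style text output.

    Hypothetical helper: handles the case where the model returns its
    Action/Action Input as text content instead of a structured tool_calls
    entry. Returns None if the text is not a recognizable ReAct blob.
    """
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return None
    action = data.get("Action")
    args = data.get("Action Input")
    if action and isinstance(args, dict):
        return {"name": action, "arguments": args}
    return None

call = parse_react_text(
    '{"Thought": "The user wants to list issues...", "Action": "list_issues", '
    '"Action Input": {"owner": "kagenti", "repo": "kagenti"}, "Observation": null}'
)
print(call)  # {'name': 'list_issues', 'arguments': {'owner': 'kagenti', 'repo': 'kagenti'}}
```

A real implementation would also need to handle ReAct output embedded in surrounding prose or markdown fences, which models emit inconsistently.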
