
Conversation

dpaluy (Owner) commented on Jan 6, 2026

Summary

BREAKING CHANGE: Removes provider-specific mappers and uses RubyLLM format exclusively.

RubyLLM abstracts all LLM providers into a consistent Message format with top-level input_tokens/output_tokens fields. Since we only support RubyLLM, there's no need for provider-specific mappers.
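
For context, a rough sketch of what that looks like from the caller's side (the model name is only an example; input_tokens and output_tokens are fields on RubyLLM's Message):

```ruby
require "ruby_llm"

# Every provider goes through the same interface, so token accounting
# never depends on which backend served the request.
chat     = RubyLLM.chat(model: "gemini-2.0-flash")
response = chat.ask("Summarize this changelog in one sentence.")

response.input_tokens  # prompt tokens, same field for every provider
response.output_tokens # completion tokens, same field for every provider
response.content       # the reply text
```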

Removed (-850 lines):

  • lib/tracebook/mappers/openai.rb
  • lib/tracebook/mappers/anthropic.rb
  • lib/tracebook/mappers/gemini.rb
  • lib/tracebook/mappers/ollama.rb
  • lib/tracebook/mappers/base.rb
  • Provider-specific tests

Simplified:

  • lib/tracebook/mappers.rb - single normalize method for the RubyLLM format (sketched below)
  • lib/tracebook/adapters/active_agent.rb - uses the unified Mappers.normalize
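
A minimal sketch of what the unified mapper might look like (module name inferred from the file path; the returned keys are assumptions, not TraceBook's actual schema):

```ruby
# Hypothetical shape of lib/tracebook/mappers.rb after this change.
# The token fields come straight from RubyLLM's normalized Message;
# the stored keys here are illustrative only.
module Tracebook
  module Mappers
    # Accepts a RubyLLM::Message (or anything exposing the same readers).
    def self.normalize(response)
      {
        input_tokens:  response.input_tokens,
        output_tokens: response.output_tokens,
        response_text: response.content.to_s
      }
    end
  end
end
```

The adapter can then call Tracebook::Mappers.normalize(response) regardless of which provider RubyLLM talked to.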

Test plan

  • Gemini provider with RubyLLM format - tokens captured
  • OpenAI provider with RubyLLM format - tokens captured
  • Anthropic provider with RubyLLM format - tokens captured
  • ActiveAgent adapter works with RubyLLM format
  • Request/response text extraction works
  • Metadata (project, session_id, tags, latency) stored correctly
  • Full test suite passes (124 tests, 373 assertions); a sketch of one such check follows
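
Roughly what one of these token-capture checks could look like, assuming a Minitest-style suite (the test name, require path, and the normalize return shape sketched earlier are all assumptions):

```ruby
require "minitest/autorun"
require "ostruct"
require "tracebook" # assumed require path for the gem

class MappersNormalizeTest < Minitest::Test
  def test_tokens_captured_from_rubyllm_message
    # Stand-in for a RubyLLM::Message; it only needs the same readers.
    message = OpenStruct.new(content: "hi", input_tokens: 12, output_tokens: 34)

    result = Tracebook::Mappers.normalize(message)

    assert_equal 12, result[:input_tokens]
    assert_equal 34, result[:output_tokens]
  end
end
```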

Closes #25

dpaluy added 4 commits on January 6, 2026 at 14:36

- Create Gemini-specific mapper to handle Google Gemini API response format
- Extract token counts from usageMetadata (promptTokenCount, candidatesTokenCount)
- Support explicit token passing via meta hash (takes priority over response)
- Route "gemini" and "google" providers to the new Gemini mapper
- Add comprehensive tests for Gemini token extraction and meta override
- Fixes issue where the RubyLLM adapter could not capture tokens from Gemini API responses

The Gemini mapper now checks for token counts at the top level of the
response hash (input_tokens, output_tokens), which is where RubyLLM
places them after normalizing the Gemini API response.

Token extraction priority (sketched after this message):
1. Explicit meta (highest priority)
2. Top-level response (RubyLLM format)
3. usageMetadata (raw Gemini API format)

Closes #25
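
A minimal sketch of that priority chain from the interim Gemini fix (superseded by the unified mapper in the next commit; helper names and hash access style are assumptions):

```ruby
# Hypothetical helpers illustrating the three-level fallback.
def extract_input_tokens(response, meta)
  meta[:input_tokens] ||                            # 1. explicit meta wins
    response[:input_tokens] ||                      # 2. RubyLLM top-level field
    response.dig(:usageMetadata, :promptTokenCount) # 3. raw Gemini API format
end

def extract_output_tokens(response, meta)
  meta[:output_tokens] ||
    response[:output_tokens] ||
    response.dig(:usageMetadata, :candidatesTokenCount)
end
```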
Instead of patching each provider mapper to handle RubyLLM's normalized
format, add a single RubyLLM mapper that handles all providers consistently.

The mapper routing now auto-detects the RubyLLM format by checking for
top-level input_tokens/output_tokens fields (see the routing sketch
below this message). This approach:

- Works for all providers (OpenAI, Anthropic, Gemini, etc.)
- Keeps provider-specific mappers focused on raw API formats
- Preserves the original provider name in the interaction record

Reverts the band-aid fix to the Gemini mapper since the RubyLLM
mapper now handles this case properly.
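
A rough sketch of that auto-detection (the method and constant names here are hypothetical, not TraceBook's actual API):

```ruby
# Route to the RubyLLM mapper whenever the response carries RubyLLM's
# top-level token fields; otherwise fall back to the raw-API mappers.
def self.mapper_for(provider, response)
  rubyllm_format =
    response.respond_to?(:input_tokens) ||
    (response.is_a?(Hash) && response.key?(:input_tokens))

  if rubyllm_format
    Mappers::RubyLLM # one mapper for every provider; provider name still recorded
  else
    PROVIDER_MAPPERS.fetch(provider, Mappers::Base) # provider-specific raw formats
  end
end
```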
BREAKING CHANGE: TraceBook now only supports RubyLLM normalized responses.
Provider-specific mappers (OpenAI, Anthropic, Gemini, Ollama) have been removed.

RubyLLM abstracts all providers into a consistent Message format with
top-level input_tokens/output_tokens fields. This simplifies the codebase
significantly: one mapper handles all providers.

Removed:
- lib/tracebook/mappers/openai.rb
- lib/tracebook/mappers/anthropic.rb
- lib/tracebook/mappers/gemini.rb
- lib/tracebook/mappers/ollama.rb
- lib/tracebook/mappers/base.rb
- Provider-specific tests

Updated:
- lib/tracebook/mappers.rb - single normalize method for RubyLLM format
- lib/tracebook/adapters/active_agent.rb - use unified Mappers.normalize

dpaluy merged commit d8c3a23 into master on Jan 7, 2026 (2 checks passed)

dpaluy deleted the feature/fix-rubyllm-adapter-token-counts branch on January 7, 2026 at 01:21

Development

Successfully merging this pull request may close these issues:

  • fix: RubyLLM adapter not capturing token counts