Summary

This PR adds two major features to the Simple MCP Client:

  1. System Prompt Configuration - Allows users to define custom system prompts per LLM configuration to guide assistant behavior
  2. Gemini Function Calling - Full tool support for Google Gemini models with proper function declaration conversion

Changes

System Prompt Feature (Commit 1)

Backend:

  • Added system_prompt field to llm_configs database table with migration support
  • Updated add_llm_config() and added update_llm_system_prompt() database methods
  • Added system_prompt to Pydantic models (LLMConfigCreate, LLMConfig, LLMConfigUpdate)
  • Modified chat endpoint to inject system prompt as first message with role="system" (sketched below)
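
For illustration, a minimal sketch of the injection step. The `build_messages` helper and the simplified `LLMConfig` stand-in are hypothetical names for this example; the actual logic lives in the chat endpoint in routes.py:

```python
from dataclasses import dataclass

@dataclass
class LLMConfig:
    """Simplified stand-in for the Pydantic LLMConfig model."""
    system_prompt: str | None = None

def build_messages(config: LLMConfig, conversation: list[dict]) -> list[dict]:
    """Prepend the configured system prompt, if any, to the chat history."""
    messages = []
    # The field is optional, so configs without a system prompt
    # behave exactly as before.
    if config.system_prompt:
        messages.append({"role": "system", "content": config.system_prompt})
    messages.extend(conversation)
    return messages
```

Injecting at the message level keeps the prompt per-configuration and lets it be edited from the Settings UI without touching stored conversations.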

Frontend:

  • Added system_prompt field to TypeScript interfaces (LLMConfigCreate, LLMConfig)
  • Added textarea in Settings UI for configuring system prompts during LLM config creation
  • Added collapsible section in existing configs to view/edit system prompts
  • Implemented handleUpdateSystemPrompt() function for updating system prompts

Gemini Function Calling Support (Commit 2)

Backend:

  • Rewrote _generate_gemini_response() to support function calling
  • Added conversion from OpenAI tool format to Gemini function declarations (see the sketch after this list)
  • Implemented proper Gemini message format handling (user/model/tool roles)
  • Added _map_type_to_gemini() for JSON schema to protobuf type mapping
  • Enhanced debug logging in MCP parameter corrector for better troubleshooting
  • Added importlib-metadata>=7.0.0 dependency to fix compatibility issue
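
A rough sketch of the conversion and the type mapping, using plain dicts instead of the protobuf objects the real code in llm_service.py builds (the function bodies and the `search_logs` example are illustrative; nested object/array schemas are omitted for brevity):

```python
# Gemini's protobuf schema uses upper-case type names; map JSON-schema
# scalar types onto them, falling back to STRING for unknown types.
_JSON_TO_GEMINI_TYPE = {
    "string": "STRING",
    "number": "NUMBER",
    "integer": "INTEGER",
    "boolean": "BOOLEAN",
    "array": "ARRAY",
    "object": "OBJECT",
}

def map_type_to_gemini(json_type: str) -> str:
    return _JSON_TO_GEMINI_TYPE.get(json_type, "STRING")

def openai_tool_to_gemini(tool: dict) -> dict:
    """Convert one OpenAI-format tool definition into a Gemini
    function declaration (dict form, flat parameter schemas only)."""
    fn = tool["function"]
    params = fn.get("parameters", {"type": "object", "properties": {}})
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        "parameters": {
            "type": map_type_to_gemini(params.get("type", "object")),
            "properties": {
                name: {**prop, "type": map_type_to_gemini(prop.get("type", "string"))}
                for name, prop in params.get("properties", {}).items()
            },
            "required": params.get("required", []),
        },
    }

# Example: a hypothetical MCP search tool in OpenAI format.
tool = {
    "type": "function",
    "function": {
        "name": "search_logs",
        "description": "Search Elasticsearch logs",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}
print(openai_tool_to_gemini(tool)["parameters"]["properties"])
# {'query': {'type': 'STRING'}}
```

On the message side, Gemini expects user/model roles rather than OpenAI's user/assistant, and tool results are returned as function-response parts instead of role="tool" messages, which is what the role handling above refers to.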

Benefits

  • System Prompts: Users can now customize LLM behavior per configuration (e.g., "You are a data analysis expert specializing in Elasticsearch queries")
  • Gemini Support: Gemini models can now properly use MCP tools with function calling, matching OpenAI capabilities
  • Better Debugging: Enhanced logging helps troubleshoot MCP parameter validation issues

Testing

  • Tested system prompt configuration and injection in chat flow
  • Verified Gemini function calling with MCP tools
  • Confirmed backward compatibility (system prompt is optional; see the sketch below)
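
As a concrete example of the backward-compatibility check, a test against the hypothetical `build_messages` sketch from the Changes section above (test names and assertions are illustrative, not the PR's actual test suite):

```python
def test_system_prompt_is_optional():
    # Uses the LLMConfig / build_messages sketch from above: with no
    # system prompt configured, the conversation passes through untouched.
    history = [{"role": "user", "content": "hi"}]
    assert build_messages(LLMConfig(system_prompt=None), history) == history

def test_system_prompt_is_injected_first():
    config = LLMConfig(system_prompt="You are a data analysis expert")
    messages = build_messages(config, [{"role": "user", "content": "hi"}])
    assert messages[0] == {"role": "system",
                           "content": "You are a data analysis expert"}
```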

Files Changed

  • database.py - Database schema and methods
  • schemas.py - Pydantic models
  • routes.py - API endpoints
  • llm_service.py - Gemini implementation
  • mcp_parameter_corrector.py - Debug logging
  • requirements.txt - Dependencies
  • api.ts - TypeScript types
  • useStore.ts - Store interface
  • SettingsPage.tsx - UI components

Commit Messages

Commit 1: System prompt configuration

- Add system_prompt field to llm_configs database table
- Add system_prompt parameter to LLM config creation and update APIs
- Inject system prompt as first message with role='system' in chat endpoint
- Add system prompt textarea in Settings UI for creating and editing configs
- Update TypeScript types to include system_prompt field

The system prompt allows users to guide the LLM's behavior by providing
context that is prepended to every conversation. This is optional and
can be configured per LLM configuration.

Commit 2: Gemini function calling support

- Implement full Gemini function calling with proper tool support
- Convert OpenAI tool format to Gemini function declarations
- Handle Gemini-specific message format and tool responses
- Add debug logging to MCP parameter corrector for better troubleshooting
- Map JSON schema types to Gemini protobuf types
- Add importlib-metadata dependency to fix compatibility issue

This enables Gemini models to properly use MCP tools with function calling,
matching the capabilities available with OpenAI-compatible models.