
Fix eval prompt collision with model flag #25

Open

KevinMeisel wants to merge 1 commit into huggingface:main from KevinMeisel:fix/eval-prompt-model-collision

Conversation

@KevinMeisel

Summary

  • Prevent FastAgent from consuming upskill CLI args by setting parse_cli_args=False.
  • Normalize provider-prefixed model IDs when --provider is used.

Root Cause

FastAgent parses CLI args and treats -m/--message as a user prompt. Upskill’s -m model option collides with this, so the model string is injected into the chat request.
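The collision can be reproduced in miniature with two argparse parsers reading the same argv (a hypothetical sketch to illustrate the mechanism, not fast-agent's actual parser):

```python
import argparse

# Upskill's CLI: -m selects the model to evaluate.
upskill = argparse.ArgumentParser(prog="upskill")
upskill.add_argument("-m", "--model")
args, leftover = upskill.parse_known_args(["-m", "qwen3", "eval"])

# A second parser that also claims -m, the way fast-agent does for
# --message: if it re-reads the same argv, the model id is silently
# consumed as a chat prompt instead.
agent = argparse.ArgumentParser(prog="fast-agent")
agent.add_argument("-m", "--message")
agent_args, _ = agent.parse_known_args(["-m", "qwen3", "eval"])

print(args.model)          # upskill reads "qwen3" as a model id
print(agent_args.message)  # the agent reads the same "qwen3" as a prompt
```

Both parsers accept the flag without error, which is why the failure is silent: nothing crashes, the model string just ends up in the chat request.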

Changes

  • src/upskill/cli.py: set parse_cli_args=False on FastAgent
  • src/upskill/cli.py: provider/model normalization helper

Files

  • src/upskill/cli.py

@KevinMeisel
Author

Fixes #24

@sysradium

Oh yeah, I had that problem as well. Took me some time to figure out why -m "qwen3" just silently fails.

