## Bug Description
When running `upskill eval ./skills/my-skill -m qwen3:4b -v`, the `-m` (model) flag is not parsed correctly by the `eval` command. Instead, the model name (e.g., `qwen3:4b`) is passed as a chat prompt to the default fast-agent, resulting in a chat response instead of an evaluation.
## Environment
- upskill version: 0.2.1
- fast-agent-mcp version: 0.4.41+
## Steps to Reproduce

1. Create a skill with test cases in `skill_meta.json`.
2. Run:
   ```
   upskill eval ./skills/my-skill -m qwen3:4b -v
   ```
3. Observe that the model name is treated as a chat prompt.
## Expected Behavior

The `-m` flag should configure the model used for evaluation.
## Actual Behavior

The model name is passed to the default agent as a chat message, resulting in output like:

```
|> default
qwen3:4b
|< default gpt-5-mini
Qwen3 is the latest iteration in the Qwen series...
```
## Root Cause Analysis

In `cli.py`, the `_fast_agent_context()` function creates a `FastAgent` with `ignore_unknown_args=True`:

```python
@asynccontextmanager
async def _fast_agent_context() -> AsyncIterator[object]:
    fast = FastAgent(
        "upskill",
        ignore_unknown_args=True,  # <-- This causes the issue
    )
```
This setting causes fast-agent to consume unknown CLI arguments (such as `-m qwen3:4b`) instead of leaving them for Click to parse on behalf of the `eval` command.
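The hazard can be reproduced in miniature with `argparse` standing in for both sides (illustrative only; the real code paths are Click and fast-agent's internal argument handling):

```python
import argparse

argv = ["./skills/my-skill", "-m", "qwen3:4b", "-v"]

# Intended behavior: the eval command's own parser owns -m/--model.
eval_cmd = argparse.ArgumentParser(add_help=False)
eval_cmd.add_argument("skill_path")
eval_cmd.add_argument("-m", "--model")
eval_cmd.add_argument("-v", "--verbose", action="store_true")
args = eval_cmd.parse_args(argv)
print(args.model)  # qwen3:4b

# Failure mode: a second, tolerant parser scans the arguments first.
# It does not recognize -m, so the model name falls into its free-text
# positional -- the equivalent of being sent onward as a chat prompt.
tolerant = argparse.ArgumentParser(add_help=False)
tolerant.add_argument("prompt", nargs="*")
taken, leftover = tolerant.parse_known_args(["-m", "qwen3:4b", "-v"])
print(taken.prompt)  # ['qwen3:4b']
```

Whichever parser touches `sys.argv` first wins, which is why the flag must be claimed before fast-agent initializes.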
## Workaround

Run `upskill eval` without the `-m` flag and configure the model via `fastagent.config.yaml` instead:
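A minimal config sketch, assuming fast-agent's documented `default_model` key (the exact model string, including any provider prefix, depends on your provider setup and installed fast-agent-mcp version):

```yaml
# fastagent.config.yaml (sketch) -- sets the model fast-agent falls back
# to when none is supplied on the command line.
default_model: qwen3:4b
```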
Then run:

```
upskill eval ./skills/my-skill -v
```
## Suggested Fix

The `--model`/`-m` and `--base-url` flags need to be properly parsed before fast-agent initialization, or fast-agent-mcp's argument parsing should be disabled for these specific CLI commands.
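One way to implement the first option is to pull the model flag out of argv before fast-agent ever sees it. A sketch, with the helper name and wiring invented for illustration (not upskill's actual code):

```python
from typing import List, Optional, Tuple


def extract_model_flag(argv: List[str]) -> Tuple[Optional[str], List[str]]:
    """Remove -m/--model from argv so an inner framework that parses
    sys.argv itself never sees it. Handles both the separated form
    (-m qwen3:4b) and the equals form (--model=qwen3:4b)."""
    model: Optional[str] = None
    remaining: List[str] = []
    i = 0
    while i < len(argv):
        arg = argv[i]
        if arg in ("-m", "--model") and i + 1 < len(argv):
            model = argv[i + 1]  # consume the flag and its value
            i += 2
        elif arg.startswith("--model="):
            model = arg.split("=", 1)[1]
            i += 1
        else:
            remaining.append(arg)
            i += 1
    return model, remaining


model, rest = extract_model_flag(
    ["eval", "./skills/my-skill", "-m", "qwen3:4b", "-v"]
)
print(model)  # qwen3:4b
print(rest)   # ['eval', './skills/my-skill', '-v']
```

The eval command would then pass `model` to fast-agent programmatically and hand `rest` to Click, so neither parser fights over the same tokens.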