Merged
8 changes: 2 additions & 6 deletions README.md
@@ -63,8 +63,6 @@ upskill generate TASK [OPTIONS]
 - `-o, --output PATH` - Output directory for skill
 - `--no-eval` - Skip evaluation and refinement
 - `--eval-model MODEL` - Different model to evaluate skill on
-- `--eval-provider [anthropic|openai|generic]` - API provider for eval model
-- `--eval-base-url URL` - Custom API endpoint for eval model
 - `--runs-dir PATH` - Directory for run logs (default: ./runs)
 - `--log-runs / --no-log-runs` - Log run data (default: enabled)

@@ -83,10 +81,8 @@ upskill generate "add more error handling examples" --from ./skills/api-errors/
 # Generate from an agent trace file (auto-detected as file)
 upskill generate "document the pattern" --from ./trace.json

-# Evaluate on local model (llama.cpp server)
-upskill generate "parse YAML" \
-  --eval-model "unsloth/GLM-4.7-Flash-GGUF:Q4_0" \
-  --eval-base-url http://localhost:8080/v1
+# Skip evaluation during generation (evaluate separately with upskill eval)
+upskill generate "parse YAML" --no-eval
 ```

 **Output:**
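With the provider and base-URL flags removed from `upskill generate`, the workflow this PR points to is two-step: generate with `--no-eval`, then evaluate as a separate command. A minimal sketch of that workflow; the output path is illustrative, and the `upskill eval` argument shape is an assumption (the diff only names the subcommand, not its flags):

```shell
# Generate the skill without the built-in eval/refine loop
upskill generate "parse YAML" --no-eval -o ./skills/parse-yaml

# Evaluate it as a separate step; the exact arguments here are an
# assumption -- check `upskill eval --help` for the real interface
upskill eval ./skills/parse-yaml
```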
5 changes: 2 additions & 3 deletions src/upskill/cli.py
@@ -220,10 +220,9 @@ def generate(

     upskill generate "extract patterns" --from trace.json

-    # Evaluate on a local model (Ollama):
+    # Skip evaluation (evaluate separately with upskill eval)

-    upskill generate "parse YAML" --eval-model llama3.2:latest \\
-        --eval-base-url http://localhost:11434/v1
+    upskill generate "parse YAML" --no-eval

     upskill generate "document code" --no-log-runs
     """