This is a skill I use to govern how models are applied in any task created by OpenClaw. It's meant to optimize token consumption while directing the more powerful models to the tasks that actually need them. It encodes my current model stack: gpt-5.2 >> mercury-2 >> gpt-5-nano. For a full understanding of the methodology, read this article: https://www.theneuron.ai/explainer-articles/i-was-bleeding-tokens-in-openclaw-heres-the-system-that-fixed-it/
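A minimal sketch of the tiered-routing idea behind the stack above. The model names come from this README; the 0-10 complexity scale, the thresholds, and the `pick_model` helper are assumptions for illustration, not part of the actual skill:

```python
# Hypothetical sketch of tiered model routing. Model names are from the
# stack in this README; the complexity thresholds are made-up examples.
TIERS = [
    ("gpt-5-nano", 0),  # cheap default: formatting, summaries, glue work
    ("mercury-2", 5),   # mid tier: multi-step edits, moderate reasoning
    ("gpt-5.2", 8),     # top tier: architecture, debugging, hard problems
]

def pick_model(complexity: int) -> str:
    """Return the highest tier whose threshold the task complexity meets.

    complexity: a rough 0-10 estimate of how demanding the task is.
    """
    chosen = TIERS[0][0]
    for model, threshold in TIERS:
        if complexity >= threshold:
            chosen = model
    return chosen

print(pick_model(2))  # a simple task stays on the cheap tier
print(pick_model(9))  # a hard task escalates to the top model
```

The point of ordering the check from cheapest to most expensive is that every task defaults to the low-cost model and only escalates when its estimated complexity clears a higher threshold.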
cnoles1980/Model-Policy-Skill