Rebuilt your taxonomy using positive routing instead of restriction prompting #10

@Chaosman-One

Description

I used your diagnostic taxonomy as the foundation for a restructured skill that approaches the same problem from the opposite direction.

Your taxonomy is accurate — those are real AI tells and you catalogued them well. The issue I ran into is architectural: when those patterns are loaded into an LLM's context window as a "do not" list, the model orients around the banned tokens at inference time. It spends generation effort resisting patterns that would not have been attractive had they never been injected. The blocklist primes the very basin it is trying to suppress.

I rebuilt the skill using positive routing constraints instead. The model never sees the failure patterns. It completes a derivation (reader profile, target register, scene anchor, texture constraint) before generating, then writes under six simultaneous presence tests: human subject in every sentence, material anchor in every claim, single-pass assertion, rhythm variation, register lock, and earned emphasis. The slop patterns become structurally impossible without being named.
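To make the gating concrete, here is a minimal sketch of the routing layer. All names (`Derivation`, `build_prompt`) and the prompt wording are illustrative assumptions, not the actual Stop-Slop-v2 implementation; only the four derivation fields and six presence tests come from the description above.

```python
from dataclasses import dataclass

@dataclass
class Derivation:
    """Completed before generation; the model never sees failure patterns."""
    reader_profile: str      # who the text is for
    target_register: str     # e.g. "plain engineering prose"
    scene_anchor: str        # the concrete situation the text lives in
    texture_constraint: str  # e.g. "material nouns, no abstractions"

    def is_complete(self) -> bool:
        # Generation is gated on every routing field being filled in.
        return all([self.reader_profile, self.target_register,
                    self.scene_anchor, self.texture_constraint])


def build_prompt(d: Derivation, task: str) -> str:
    """Assemble a generation prompt from positive constraints only.

    Note what is absent: no banned-pattern list is ever injected
    into the context, so there is nothing for the model to orient
    around or resist."""
    if not d.is_complete():
        raise ValueError("complete the derivation before generating")
    return (
        f"Reader: {d.reader_profile}\n"
        f"Register: {d.target_register}\n"
        f"Scene anchor: {d.scene_anchor}\n"
        f"Texture: {d.texture_constraint}\n"
        f"Task: {task}\n"
        "Write with a human subject in every sentence, a material anchor "
        "in every claim, single-pass assertions, varied rhythm, a locked "
        "register, and emphasis only where it is earned."
    )
```

The design point is that the constraint surface is entirely affirmative: a slop pattern fails one of the presence tests, so it never needs to be named.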

The diagnostic taxonomy from your original skill is preserved as a human-facing editorial reference, separated from the generation context. Full attribution in the README.

Repo: https://github.com/Chaosman-One/Stop-Slop-v2

I'd welcome your thoughts on the approach. Your original work made this possible — I needed the taxonomy before I could build the routing layer.

P.S. This response was structured using Stop-Slop-v2; it is a first pass.
