Guidelines for AI agents working on the csstokens repository.
This project is an open-source CLI tool that extracts design tokens from existing frontend codebases and generates structured tokens and reports.
Primary goals:
- Fast CLI
- Deterministic output
- Minimal dependencies
- Excellent DX
- Simple architecture
- Easy maintenance by a solo developer
Agents should prioritize simplicity and predictability over cleverness.
Given the same input files, output must be identical.
Always:
- Sort object keys
- Sort arrays
- Use stable naming
- Avoid randomness
- Avoid timestamps in output
Never:
- Depend on file system ordering
- Depend on object iteration order
This ensures snapshot tests remain stable.
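The determinism rules above can be sketched as a small utility that sorts object keys recursively before serialization. This is illustrative only; the names are not the repo's actual API (though `utils/sortStable.ts` suggests something similar exists):

```typescript
// Sketch: sort keys recursively before serializing so emitted JSON never
// depends on insertion or iteration order. Illustrative, not the real util.
function sortKeysDeep(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sortKeysDeep);
  if (value && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    return Object.fromEntries(
      Object.keys(obj)
        .sort()
        .map(k => [k, sortKeysDeep(obj[k])] as const)
    );
  }
  return value;
}
```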
The core package contains pure logic only.
Allowed:
- parsing
- normalization
- grouping
- token generation
- emitters
Not allowed:
- console output
- CLI flag handling
- process.exit
- filesystem writes (except through passed interfaces)
Core must be reusable as a library.
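One way to satisfy "filesystem writes only through passed interfaces" is dependency injection: core accepts a writer, and tests substitute an in-memory one. A minimal sketch, with hypothetical names (`TokenWriter`, `emitTokens` are not the real API):

```typescript
// Sketch: core never touches the filesystem directly; a writer is injected.
interface TokenWriter {
  write(path: string, contents: string): void;
}

// Emits tokens with sorted keys through whatever writer it is given.
function emitTokens(tokens: Record<string, string>, writer: TokenWriter): void {
  const keys = Object.keys(tokens).sort(); // deterministic key order
  const body = JSON.stringify(
    Object.fromEntries(keys.map(k => [k, tokens[k]] as const)),
    null,
    2
  );
  writer.write("tokens.json", body + "\n");
}

// In tests, an in-memory writer replaces the real fs-backed one.
class MemoryWriter implements TokenWriter {
  files = new Map<string, string>();
  write(path: string, contents: string): void {
    this.files.set(path, contents);
  }
}
```

The CLI would supply an fs-backed implementation of the same interface, keeping core importable as a library.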
The CLI package is responsible for:
- command parsing
- user messages
- file IO
- configuration
- exit codes
CLI must be a thin wrapper around core.
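A sketch of the "thin wrapper" shape: the CLI turns argv into a core call and maps the result to an exit code, keeping user messages out of core. `runExtract` is a stand-in stub, not the real core entry point:

```typescript
// Sketch: the CLI layer is only argv -> core call -> message + exit code.
interface ExtractResult {
  ok: boolean;
  message: string;
}

// Stub standing in for a core entry point; the real one does the work.
function runExtract(dir: string): ExtractResult {
  return dir.length > 0
    ? { ok: true, message: `extracted ${dir}` }
    : { ok: false, message: "missing directory argument" };
}

function toExitCode(result: ExtractResult): number {
  return result.ok ? 0 : 1;
}

function main(argv: string[]): number {
  const dir = argv[2] ?? "";
  const result = runExtract(dir);
  console.log(result.message); // user messages live in the CLI, not core
  return toExitCode(result);
}
```

With commander or yargs, each command's action would be this thin: parse, call core, print, set the exit code.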
The tool should handle medium repositories quickly.
Targets:
- <5 seconds for small repos
- <10 seconds for medium repos
Avoid:
- full AST parsing unless absolutely necessary
- loading entire repo into memory if avoidable
- unnecessary object copying
Prefer:
- streaming where reasonable
- simple regex extraction
- linear passes
Keep dependencies minimal.
Preferred:
- fast-glob
- commander or yargs
- picocolors or chalk (optional)
- vitest
Avoid:
- heavy AST frameworks
- webpack
- babel
- runtime transpilers
Justify any new dependency.
Goals:
- readable
- small functions
- predictable structure
Prefer:
- pure functions
- explicit types
- early returns
Avoid:
- deep abstraction
- unnecessary generics
- class hierarchies
This is a utility tool, not a framework.
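An example of the preferred shape: a small pure function with explicit types and early returns. Illustrative only, not the repo's actual `normalizeColor` (full validation omitted for brevity):

```typescript
// Sketch: small, pure, early returns. Normalizes hex colors to lowercase
// 6-digit form; returns null for anything it doesn't recognize.
function normalizeHex(input: string): string | null {
  const value = input.trim().toLowerCase();
  if (!value.startsWith("#")) return null;
  const hex = value.slice(1);
  if (hex.length === 3) {
    // expand shorthand #abc to #aabbcc
    return "#" + [...hex].map(c => c + c).join("");
  }
  if (hex.length === 6) return value;
  return null;
}
```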
Core structure:
packages/core/src/
  extractors/
    colors.ts
    lengths.ts
    shadows.ts
    cssVars.ts
  grouping/
    colors.ts
    spacing.ts
    radius.ts
    shadow.ts
  emitters/
    json.ts
    css.ts
    ts.ts
    report.ts
  model/
    RawValue.ts
    TokenCandidate.ts
  utils/
    normalizeColor.ts
    sortStable.ts
CLI structure:
packages/cli/src/
  commands/
    analyze.ts
    extract.ts
  config.ts
  logger.ts
  index.ts
Names must be stable.
Example:
- color.blue.500
- space.2
- radius.1
- shadow.1
Never rename tokens without strong reason.
Naming changes break users.
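One way names like `space.2` stay stable: derive indices from the sorted set of unique values, so identical inputs always yield identical names. A sketch under that assumption (not necessarily the repo's actual scheme):

```typescript
// Sketch: token names come from sorted unique values, never from scan order,
// so re-running on the same codebase produces the same names.
function nameSpacingTokens(pxValues: number[]): Record<string, string> {
  const unique = [...new Set(pxValues)].sort((a, b) => a - b);
  const out: Record<string, string> = {};
  unique.forEach((px, i) => {
    out[`space.${i + 1}`] = `${px}px`;
  });
  return out;
}
```

Note the flip side the guideline warns about: adding a new smallest value shifts every index, which is exactly the kind of rename that breaks users, so renumbering needs a strong reason.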
Snapshot tests are critical.
When changing extraction logic:
- Update snapshots intentionally
- Verify output is still reasonable
Required tests:
- color normalization
- spacing sorting
- deterministic output
Golden test:
extract examples/messy-ui → snapshot tokens.json → snapshot tokens.css → snapshot report.md
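The determinism the golden test relies on can be checked directly: extracting the same inputs in any order must produce byte-identical output. A self-contained sketch with a stub extractor (the real test would run the actual pipeline under vitest and compare snapshots):

```typescript
// Sketch: end-to-end determinism check. `extractAll` is a stub that takes
// file *contents* and emits sorted, serialized results.
function extractAll(fileContents: string[]): string {
  const colors = new Set<string>();
  for (const css of fileContents) {
    for (const m of css.matchAll(/#[0-9a-f]{6}/g)) colors.add(m[0]);
  }
  return JSON.stringify([...colors].sort());
}

const forward = extractAll([".a{color:#111111}", ".b{color:#222222}"]);
const reversed = extractAll([".b{color:#222222}", ".a{color:#111111}"]);
```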
Output formats are public API.
Do not change:
- tokens.json structure
- CSS variable naming
- report sections
without also updating:
- README
- tests
- snapshots
Errors must be clear.
CLI must feel lightweight.
Good:
✔ Found 128 colors
✔ Generated tokens.json
✔ Generated report.md
Avoid:
- verbose logs by default
- debug noise
Optional later:
--verbose
This is a token extractor, not a platform.
Do NOT implement:
- UI dashboards
- web servers
- cloud sync
- login systems
Allowed future features:
- refactor command
- tailwind export
- dtcg format
When implementing a feature:
- Modify core
- Add tests
- Update CLI if needed
- Run tests
- Verify deterministic output
Always ensure:
pnpm install
pnpm build
pnpm test
all pass.
If two approaches exist:
Pick the simpler one.
Simple > Clever.
Readable > Abstract.
csstokens should feel:
- small
- sharp
- fast
- reliable
Similar spirit to:
- prettier
- eslint (early versions)
- simple unix tools
Not like:
- enterprise platforms
- design systems SaaS