Token-optimized JSON compression for GPT-4, Claude, and other large language models. Reduce LLM API costs by 20-60% with lossless compression. Ideal for RAG systems, function calling, analytics data, and any structured arrays sent to LLMs. ASON 2.0 combines tabular arrays, semantic references, and pipe delimiters.
🎮 Try Interactive Playground • 📊 View Benchmarks • 📖 Read Documentation
- ✅ Sections (`@section`) - Organize related data
- ✅ Tabular Arrays (`[N]{fields}`) - CSV-like format with explicit count
- ✅ Semantic References (`$email`, `&address`) - Human-readable variable names
- ✅ Pipe Delimiter (`|`) - More token-efficient than commas
- ✅ Advanced Optimizations - Inline objects, dot notation in schemas, array fields
- ✅ Lexer-Parser Architecture - Robust parsing with proper AST
```bash
npm install @ason-format/ason
```

```javascript
import { SmartCompressor } from '@ason-format/ason';

const compressor = new SmartCompressor();

const data = {
  users: [
    { id: 1, name: "Alice", email: "alice@ex.com" },
    { id: 2, name: "Bob", email: "bob@ex.com" }
  ]
};

// Compress
const ason = compressor.compress(data);
console.log(ason);
// Output:
// @users [2]{id,name,email}
// 1|Alice|alice@ex.com
// 2|Bob|bob@ex.com

// Decompress (perfect round-trip)
const original = compressor.decompress(ason);
```
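The tabular-array layout in the output above is the core trick: one schema header, then pipe-delimited rows. A toy sketch of the idea (an illustration only, not the library's actual implementation) looks like this:

```javascript
// Toy tabular-array encoder: a uniform array of objects becomes one
// "@name [count]{fields}" header plus pipe-delimited value rows.
function toTabular(name, rows) {
  const fields = Object.keys(rows[0]);
  const header = `@${name} [${rows.length}]{${fields.join(",")}}`;
  const lines = rows.map(r => fields.map(f => r[f]).join("|"));
  return [header, ...lines].join("\n");
}

console.log(toTabular("users", [
  { id: 1, name: "Alice", email: "alice@ex.com" },
  { id: 2, name: "Bob", email: "bob@ex.com" }
]));
// @users [2]{id,name,email}
// 1|Alice|alice@ex.com
// 2|Bob|bob@ex.com
```

The savings come from stating the field names once instead of repeating them in every object.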
```bash
# Compress JSON to ASON
npx ason input.json -o output.ason

# Decompress ASON to JSON
npx ason data.ason -o output.json

# Show token savings with --stats
npx ason data.json --stats
# 📊 COMPRESSION STATS:
# ┌───────────┬────────┬────────┬───────────┐
# │ Format    │ Tokens │ Size   │ Reduction │
# ├───────────┼────────┼────────┼───────────┤
# │ JSON      │ 59     │ 151 B  │ -         │
# │ ASON 2.0  │ 23     │ 43 B   │ 61.02%    │
# └───────────┴────────┴────────┴───────────┘
# ✓ Saved 36 tokens (61.02%) • 108 B (71.52%)

# Pipe from stdin
echo '{"name": "Ada"}' | npx ason
cat data.json | npx ason > output.ason
```

Benchmarks use the GPT-5 o200k_base tokenizer. Results vary by model and tokenizer.
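The Reduction column is plain percent-saved arithmetic against the JSON baseline, for tokens and bytes alike:

```javascript
// Percentage saved relative to a baseline count (tokens or bytes).
function reductionPct(baseline, compressed) {
  return ((baseline - compressed) / baseline) * 100;
}

console.log(reductionPct(59, 23).toFixed(2));  // "61.02" (tokens)
console.log(reductionPct(151, 43).toFixed(2)); // "71.52" (bytes)
```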
Tested on 5 real-world datasets. Token counts per format (lower is better); percentages are token reduction vs the JSON baseline, so positive means fewer tokens:

| Dataset | JSON (baseline) | ASON | Toon |
|---|---|---|---|
| Shipping Record | 164 | 148 (+9.76%) | 178 (-8.54%) |
| E-commerce Order | 293 | 263 (+10.24%) | 296 (-1.02%) |
| Analytics Time Series | 307 | 235 (+23.45%) | 260 (+15.31%) |
| GitHub Repositories (non-uniform) | 347 | 384 (-10.66%) | 415 (-19.60%) |
| Deeply Nested Structure (non-uniform) | 186 | 201 (-8.06%) | 223 (-19.89%) |

**Overall (5 datasets):**

- ASON average: +4.94% token reduction
- Toon average: -6.75% (6.75% more tokens than JSON)
- ASON wins on 3 out of 5 datasets
- ASON performs better on uniform arrays and mixed structures
- Both formats struggle with non-uniform, deeply nested data (but ASON loses less)
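The overall averages can be recomputed directly from the per-dataset token counts above (positive means fewer tokens than JSON):

```javascript
// Per-dataset token counts from the benchmarks above.
const datasets = [
  { json: 164, ason: 148, toon: 178 }, // Shipping Record
  { json: 293, ason: 263, toon: 296 }, // E-commerce Order
  { json: 307, ason: 235, toon: 260 }, // Analytics Time Series
  { json: 347, ason: 384, toon: 415 }, // GitHub Repositories
  { json: 186, ason: 201, toon: 223 }, // Deeply Nested Structure
];

// Mean token reduction vs the JSON baseline, in percent.
const avgReduction = (key) =>
  (datasets.reduce((sum, d) => sum + (d.json - d[key]) / d.json, 0) /
    datasets.length) * 100;

console.log(avgReduction("ason").toFixed(2)); // "4.94"
console.log(avgReduction("toon").toFixed(2)); // "-6.75"
```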
| Format | Best For | Token Efficiency |
|---|---|---|
| ASON | Uniform arrays, nested objects, mixed data | ⭐⭐⭐⭐⭐ (+4.94% avg) |
| Toon | Flat tabular data only | ⭐⭐⭐ (-6.75% avg) |
| JSON | Non-uniform, deeply nested | ⭐⭐ (baseline) |
| CSV | Simple tables, no nesting | ⭐⭐⭐⭐⭐⭐ (best for flat data) |
- ✅ 100% Automatic - Zero configuration, detects patterns automatically
- ✅ Lossless - Perfect round-trip fidelity
- ✅ Up to 23% Token Reduction - Saves money on LLM API calls (+4.94% average)
- ✅ Object References - Deduplicates repeated structures (`&obj0`)
- ✅ Inline-First Dictionary - Optimized for LLM readability
- ✅ TypeScript Support - Full `.d.ts` type definitions included
- ✅ CLI Tool - Command-line interface with `--stats` flag
- ✅ ESM + CJS - Works in browser and Node.js
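The object-reference feature deduplicates repeated structures. The idea, illustrated here with a toy (not ASON's actual algorithm or wire syntax), is to keep an identical sub-object on first occurrence and replace later occurrences with a reference such as `&obj0`:

```javascript
// Toy structural dedup: identical sub-objects under `key` are kept on
// first occurrence and replaced by a "&objN" reference afterwards.
function dedupe(items, key) {
  const seen = new Map(); // serialized sub-object -> reference name
  return items.map(item => {
    const s = JSON.stringify(item[key]);
    if (seen.has(s)) return { ...item, [key]: seen.get(s) };
    seen.set(s, `&obj${seen.size}`);
    return item;
  });
}

const orders = [
  { id: 1, address: { city: "Paris", zip: "75001" } },
  { id: 2, address: { city: "Paris", zip: "75001" } },
  { id: 3, address: { city: "Lyon", zip: "69001" } },
];
console.log(dedupe(orders, "address")[1].address); // "&obj0"
```

Repeated addresses, user records, or metadata blobs compress especially well under this scheme.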
- 🎮 Interactive Playground - Try ASON in your browser with real-time token counting
- 📖 Complete Documentation - Format specification, API guide, and best practices
- 📊 Benchmarks & Comparisons - ASON vs JSON vs TOON vs YAML performance tests
- 🔧 API Reference - Detailed Node.js API documentation
- 🔢 Token Counter Tool - Visual token comparison across formats
- 📦 Release Guide - How to publish new versions
- 📝 Changelog - Version history and updates
```javascript
import { SmartCompressor } from '@ason-format/ason';
import OpenAI from 'openai';

const compressor = new SmartCompressor({ indent: 1 });
const openai = new OpenAI();

const largeData = await fetchDataFromDB();
const compressed = compressor.compress(largeData);

// Saves ~33% of input tokens = ~33% lower input cost
const response = await openai.chat.completions.create({
  model: "gpt-4o", // illustrative model name
  messages: [{
    role: "user",
    content: `Analyze this data: ${compressed}`
  }]
});
```

```javascript
// Save to Redis/localStorage with less space
const compressor = new SmartCompressor({ indent: 1 });
localStorage.setItem('cache', compressor.compress(bigObject));

// Retrieve
const data = compressor.decompress(localStorage.getItem('cache'));
```
```javascript
// Compress document metadata before sending to LLM
import { SmartCompressor } from '@ason-format/ason';

const compressor = new SmartCompressor();
const docs = await vectorDB.similaritySearch(query, 10);
const compressed = compressor.compress(docs.map(d => ({
  content: d.pageContent,
  score: d.metadata.score,
  source: d.metadata.source
})));

// 50-60% token reduction on document arrays
const response = await llm.invoke(`Context: ${compressed}\n\nQuery: ${query}`);
```
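Compression only pays off when the payload is large, so a rough pre-check can skip it for small inputs. The ~4 characters per token heuristic below is a common rule of thumb for English-ish JSON, not part of the ASON API; use a real tokenizer for exact counts:

```javascript
// Crude token estimate: roughly 4 characters per token (rule of thumb,
// not a real tokenizer).
const approxTokens = (s) => Math.ceil(s.length / 4);

const payload = JSON.stringify({ rows: Array(200).fill({ id: 1, v: 3.14 }) });
if (approxTokens(payload) > 500) {
  // large enough that ASON compression is likely worth the extra step
}
```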
```javascript
// Reduce token overhead in OpenAI function calling
const users = await db.query('SELECT id, name, email FROM users LIMIT 100');
const compressed = compressor.compress(users);

await openai.chat.completions.create({
  model: "gpt-4o", // illustrative model name
  messages: [/* conversation messages */],
  tools: [{
    type: "function",
    function: {
      name: "process_users",
      parameters: {
        type: "object",
        properties: {
          users: { type: "string", description: "User data in ASON format" }
        }
      }
    }
  }],
  tool_choice: { type: "function", function: { name: "process_users" } }
});
```
```javascript
// 65% token reduction on metrics/analytics
const metrics = await getHourlyMetrics(last24Hours);
const compressed = compressor.compress(metrics);

// Perfect for dashboards, logs, financial data
const analysis = await llm.analyze(compressed);
```

```javascript
// Serve pre-compressed payloads from an API endpoint
app.get('/api/data/compact', (req, res) => {
  const data = getDataFromDB();
  const compressed = compressor.compress(data);
  res.json({
    data: compressed,
    format: 'ason',
    savings: '33%'
  });
});
```
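Rather than hardcoding `savings: '33%'`, the field can be computed from actual byte sizes. A sketch, assuming a Node.js server where `Buffer` is available:

```javascript
// Byte-level savings of the compressed string vs the JSON encoding.
function byteSavings(original, compressed) {
  const before = Buffer.byteLength(JSON.stringify(original));
  const after = Buffer.byteLength(compressed);
  return `${(((before - after) / before) * 100).toFixed(1)}%`;
}

console.log(byteSavings({ x: 1 }, "x")); // "85.7%"
```

Token savings differ from byte savings; for token-accurate figures, count with the target model's tokenizer.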
```bash
# Clone repository
git clone https://github.com/ason-format/ason.git
cd ason

# Install dependencies
cd nodejs-compressor
npm install

# Run tests
npm test

# Run benchmarks
npm run benchmark

# Build for production
npm run build

# Test CLI locally
node src/cli.js data.json --stats
```

- 💬 GitHub Discussions - Ask questions, share use cases
- 🐛 Issue Tracker - Report bugs or request features
- 🔧 Tools & Extensions - MCP Server, npm packages, CLI
We welcome contributions! Please see:
- CONTRIBUTING.md - Contribution guidelines
- CODE_OF_CONDUCT.md - Community standards
- SECURITY.md - Security policies
MIT ยฉ 2025 ASON Project Contributors
LLM optimization • GPT-4 cost reduction • Claude API • Token compression • JSON optimization • RAG systems • Function calling • OpenAI API • Vector database • LangChain • Semantic kernel • AI cost savings • ML engineering • Data serialization • API optimization
🎮 Try Interactive Playground
Reduce LLM API costs by 20-60%. Used in production by companies processing millions of API calls daily.
