English | 中文
Zero-allocation JSON library for Go — up to 24× faster parsing, SIMD-accelerated scanning (SSE2/AVX2 · NEON), lazy path queries, drop-in encoding/json replacement.
go get github.com/uniyakcom/yakjson

Requires Go 1.25+
- Zero-allocation parsing — strings reference the original JSON bytes directly, no copies
- SWAR/SIMD acceleration — amd64 AVX2/SSE2 and arm64 NEON assembly paths for escape scanning
- Pool-backed reuse — Parser/Writer/Decoder all use sync.Pool; goroutine-safe
- Lazy Get queries — no DOM construction; scanning stops as soon as the path is matched
- encoding/json compatible — Marshal/Unmarshal + Encoder/Decoder behavior aligned with stdlib
- Built-in safety limits — depth, key length, string length, array length, object key count — all configurable at runtime to guard against malformed-input DoS
Environment: Intel Xeon E-2186G @ 3.80GHz · Go 1.26.1 · Linux 6.17.0-14-generic · amd64
Source: bench_linux_6c12t.txt (median of 3 runs per group)
| Benchmark | yakjson | encoding/json | Speedup | yakjson allocs | std allocs |
|---|---|---|---|---|---|
| Parse (DOM, ~130 B) | 160 ns/op · 814 MB/s | 3,831 ns/op · 34 MB/s | 24× | 0 | 35 |
| Marshal (struct) | 98 ns/op | 409 ns/op | 4.2× | 0 | 1 |
| Unmarshal (struct) | 306 ns/op | 1,002 ns/op | 3.3× | 2 | 6 |
| Writer (manual build, nested) | 164 ns/op | — | — | 0 | — |
| EachLineString (in-memory NDJSON) | 8,951 ns/op | — | — | 0 | — |
| Delete | 72 ns/op | — | — | 1 | — |
| Benchmark | ns/op | B/op | allocs | Notes |
|---|---|---|---|---|
| AppendMarshal (single *struct) | 36 | 0 | 0 | Append to existing slice |
| AppendMarshal ([]T as any) | ~530 | 24 | 1 | 24 B from []T interface boxing; use *[]T to avoid |
| AppendMarshal (*[]T fast path) | ~450 | 0 | 0 | Pass &slice — same perf as AppendMarshalSlice |
| AppendMarshalSlice (20 structs) | 440 | 0 | 0 | Generic API; use when caller owns typed slice |
| AppendMarshalIndent | 300 | 48 | 1 | With indentation formatting |
| Writer_World (2-field struct) | 46 | 0 | 0 | Simple Writer vs AppendMarshal |
| Writer (nested obj+array) | 164 | 0 | 0 | Complex multi-field Writer payload |
func Marshal(v any) ([]byte, error)
func AppendMarshal(dst []byte, v any) ([]byte, error)
func AppendMarshalSlice[T any](dst []byte, items []T) ([]byte, error)
func Unmarshal(data []byte, v any) error
func UnmarshalAny(data []byte, pa *any) error // zero-alloc hot-path for `any` targets
// buffer pool (high-throughput scenarios)
func AcquireBuf() *[]byte
func ReleaseBuf(bp *[]byte)

When to use: Serialize structs/maps to JSON or deserialize JSON into known types. Drop-in replacement for encoding/json.Marshal/Unmarshal — change the import alias only.
// Serialize
type User struct {
Name string `json:"name"`
Age int `json:"age,omitempty"`
}
data, err := json.Marshal(User{Name: "yak", Age: 3})
// → {"name":"yak","age":3}
// Deserialize
var u User
err = json.Unmarshal(data, &u)
// Reuse buffer — avoids per-Marshal allocation in high-throughput HTTP handlers
buf := json.AcquireBuf()
defer json.ReleaseBuf(buf)
*buf, err = json.AppendMarshal(*buf, User{Name: "yak"})
// Generic batch serialization of an entire slice
users := []User{{Name: "a"}, {Name: "b"}}
out, err := json.AppendMarshalSlice(nil, users)
// → [{"name":"a"},{"name":"b"}]

Differences from stdlib:
- AppendMarshal/AppendMarshalSlice — append mode not available in stdlib; eliminates intermediate allocations
- map keys are sorted lexicographically by default (SortMapKeys: true), matching stdlib; disable via SetOptions
- NaN/Inf defaults to null output (configurable to error); stdlib always errors
- Embedded struct promoted fields with the ,string tag serialize correctly regardless of nesting depth (stdlib behavior preserved)
func Get(json, path string) Res
func GetBytes(json []byte, path string) Res
func GetOrDefault(json, path string, def string) string
func GetBytesOrDefault(json []byte, path string, def string) string
func GetAll(json, path string) []Res
func GetAllBytes(json []byte, path string) []Res

Res methods:
// Value access
(r Res) String() string
(r Res) Int() int64
(r Res) Float64() float64
(r Res) Float64Err() (float64, error)
(r Res) Bool() bool
(r Res) Raw() string // raw JSON fragment
(r Res) Type() Type
// Type checks
(r Res) Exists() bool
(r Res) IsNull() bool
(r Res) IsArray() bool
(r Res) IsObject() bool
// Iterate object / array
(r Res) Each(fn func(key string, val Res) bool)

When to use: Extract one or two fields from JSON without allocating a full object. Ideal for hot paths in logging, event processing, and API proxies.
raw := `{"user":{"name":"yak","scores":[10,20,30]},"active":true}`
name := json.Get(raw, "user.name").String() // "yak"
score0 := json.Get(raw, "user.scores.0").Int() // 10
active := json.Get(raw, "active").Bool() // true
// Default value when path is absent
city := json.GetOrDefault(raw, "user.city", "unknown") // "unknown"
// Wildcard * — collect all array elements
scores := json.GetAll(raw, "user.scores.*")
for _, s := range scores {
fmt.Println(s.Int()) // 10 20 30
}
// Iterate object fields
json.Get(raw, "user").Each(func(key string, val json.Res) bool {
fmt.Println(key, "=", val.Raw())
return true
})

Differences from stdlib:
- No equivalent in stdlib; the closest is Unmarshal into map[string]any, which builds a full DOM
- Get is zero-allocation; scanning stops at the matched path — significantly faster than a full parse
- Res.Exists() distinguishes "field present with null value" from "field absent"
func Set(json, path string, newValue any) (string, error)
func SetBytes(json []byte, path string, newValue any) ([]byte, error)
func SetMany(json string, ops ...SetOp) (string, error)
func SetManyBytes(json []byte, ops ...SetOp) ([]byte, error)
func Delete(json, path string) (string, error)
func DeleteBytes(json []byte, path string) ([]byte, error)
func DeleteMany(json string, paths ...string) (string, error)
func DeleteManyBytes(json []byte, paths ...string) ([]byte, error)
type SetOp struct {
Path string
Value any
}
type RawMessage []byte // inject a pre-encoded JSON fragment

When to use: Modify, add, or delete fields in a JSON document without deserializing the whole thing. Useful in API gateways, config patching, and incremental event updates.
orig := `{"name":"yak","meta":{"v":1}}`
// Update an existing field
s, _ := json.Set(orig, "name", "yak2")
// → {"name":"yak2","meta":{"v":1}}
// Add a field — path is created automatically if absent
s, _ = json.Set(orig, "meta.env", "prod")
// → {"name":"yak","meta":{"v":1,"env":"prod"}}
// Array element
arr := `{"ids":[1,2,3]}`
s, _ = json.Set(arr, "ids.1", 99)
// → {"ids":[1,99,3]}
// Batch update (single scan — faster than multiple Set calls)
s, _ = json.SetMany(orig,
json.SetOp{Path: "name", Value: "yak3"},
json.SetOp{Path: "meta.v", Value: 2},
)
// Inject a pre-encoded JSON fragment
s, _ = json.Set(orig, "extra", json.RawMessage(`{"a":1}`))
// → {...,"extra":{"a":1}}
// Delete
s, _ = json.Delete(orig, "meta.v")
// → {"name":"yak","meta":{}}
// Batch delete
s, _ = json.DeleteMany(orig, "name", "meta.v")

Differences from stdlib:
- No in-place modification API in stdlib; the equivalent requires Unmarshal → modify → Marshal (two full traversals)
- Set/Delete splice bytes directly — no intermediate DOM, minimal allocations
- Set creates the path automatically when absent; Delete silently ignores missing paths
// Pool helpers
func AcquireParser() *Parser
func ReleaseParser(p *Parser)
// Parser methods
func (p *Parser) Parse(s string) (*Value, error)
func (p *Parser) ParseBytes(b []byte) (*Value, error)
// Value access
func (v *Value) Type() Type
func (v *Value) IsNull() bool
func (v *Value) IsObject() bool
func (v *Value) IsArray() bool
func (v *Value) Get(keys ...string) *Value
func (v *Value) GetString(keys ...string) string
func (v *Value) GetStringBytes(keys ...string) []byte
func (v *Value) GetInt(keys ...string) int
func (v *Value) GetInt64(keys ...string) int64
func (v *Value) GetFloat64(keys ...string) float64
func (v *Value) GetBool(keys ...string) bool
func (v *Value) Len() int
func (v *Value) Values() []*Value // array elements
func (v *Value) KVs() []KV // object key-value pairs
func (v *Value) ArrayEach(fn func(i int, val *Value) bool)
func (v *Value) ObjectEach(fn func(key string, val *Value) bool)
func (v *Value) Raw() string
func (v *Value) Clone() *Value
type KV struct {
Key string
Value *Value
}

When to use: When you need to access multiple fields in the same JSON document. Building a DOM once and querying it repeatedly is faster than multiple Get calls.
p := json.AcquireParser()
defer json.ReleaseParser(p)
v, err := p.Parse(`{"users":[{"id":1,"name":"a"},{"id":2,"name":"b"}]}`)
if err != nil {
panic(err)
}
users := v.Get("users")
fmt.Println(users.Len()) // 2
users.ArrayEach(func(i int, u *json.Value) bool {
fmt.Printf("%d: %s\n", u.GetInt("id"), u.GetString("name"))
return true
})
// Safe access — type mismatch returns zero value, no panic
missing := v.GetString("not", "exist") // ""
// Detach from Parser lifetime for long-lived use
snapshot := v.Clone()
json.ReleaseParser(p) // safe to release — snapshot is independent

Differences from stdlib:
- Unmarshal into map[string]any copies all strings; the yakjson DOM references the original JSON bytes
- Value preserves field insertion order (map does not)
- Clone() makes the Value independent of the Parser's internal buffer — safe to pass across goroutines
- AcquireParser/ReleaseParser pool pattern has no stdlib equivalent
func AcquireWriter() *Writer
func ReleaseWriter(w *Writer)
// Output
func (w *Writer) Bytes() []byte
func (w *Writer) String() string
func (w *Writer) Len() int
func (w *Writer) Reset()
func (w *Writer) AppendTo(dst []byte) []byte
// Object fields
func (w *Writer) Object(fn func(w *Writer))
func (w *Writer) Field(key, value string)
func (w *Writer) FieldBytes(key string, value []byte)
func (w *Writer) FieldInt(key string, value int)
func (w *Writer) FieldInt64(key string, value int64)
func (w *Writer) FieldUint64(key string, value uint64)
func (w *Writer) FieldFloat(key string, value float64)
func (w *Writer) FieldBool(key string, value bool)
func (w *Writer) FieldNull(key string)
func (w *Writer) FieldObject(key string, fn func(w *Writer))
func (w *Writer) FieldArray(key string, fn func(w *Writer))
func (w *Writer) FieldRaw(key string, rawJSON []byte)
// Array items
func (w *Writer) Array(fn func(w *Writer))
func (w *Writer) Item(value string)
func (w *Writer) ItemInt(value int)
func (w *Writer) ItemFloat(value float64)
func (w *Writer) ItemBool(value bool)
func (w *Writer) ItemNull()
func (w *Writer) ItemObject(fn func(w *Writer))
func (w *Writer) ItemArray(fn func(w *Writer))

When to use: Manually construct JSON on hot paths (logging, metrics export, API responses) with full control over field order and allocation.
w := json.AcquireWriter()
defer json.ReleaseWriter(w)
w.Object(func(w *json.Writer) {
w.Field("service", "api")
w.FieldInt("code", 200)
w.FieldBool("ok", true)
w.FieldArray("tags", func(w *json.Writer) {
w.Item("go")
w.Item("json")
})
w.FieldObject("meta", func(w *json.Writer) {
w.FieldFloat("latency", 1.23)
w.FieldNull("error")
})
})
fmt.Println(w.String())
// {"service":"api","code":200,"ok":true,"tags":["go","json"],"meta":{"latency":1.23,"error":null}}

Differences from stdlib:
- No direct equivalent in stdlib (json.NewEncoder(w).Encode(v) requires a Go value first)
- Writer appends directly to []byte — no intermediate io.Writer layer, zero heap allocations on the hot path
- Small integers (0–9999) use an inline fast-write path, no strconv call
- Exceeding MaxMarshalDepth outputs null rather than panicking
// Iterate a JSON object or array (zero DOM)
func Each(json, path string, fn func(key string, value Res) bool)
func EachBytes(json []byte, path string, fn func(key string, value Res) bool)
// NDJSON / JSON Lines
func EachLine(r io.Reader, fn func(i int, raw string) bool) error
func EachLineBytes(data []byte, fn func(i int, raw string) bool) error
func EachLineString(s string, fn func(i int, raw string) bool)

When to use:
- Each: iterate all keys of a JSON object or all elements of an array without deserializing unwanted values
- EachLine: process NDJSON / JSON Lines log files where each line is an independent JSON value
// Iterate array — return false to stop early
data := `{"items":[{"id":1},{"id":2},{"id":3}]}`
json.Each(data, "items", func(key string, val json.Res) bool {
id := json.Get(val.Raw(), "id").Int()
if id == 2 {
return false
}
fmt.Println(id)
return true
})
// Output: 1
// NDJSON file
f, _ := os.Open("events.ndjson")
defer f.Close()
json.EachLine(f, func(i int, raw string) bool {
event := json.Get(raw, "event").String()
fmt.Printf("line %d: %s\n", i, event)
return true
})
// In-memory NDJSON string (zero allocation)
ndjson := "{\"id\":1}\n{\"id\":2}\n{\"id\":3}"
json.EachLineString(ndjson, func(i int, raw string) bool {
fmt.Println(json.Get(raw, "id").Int())
return true
})
// Output: 1 2 3

Differences from stdlib:
- stdlib Decoder can process NDJSON, but each Decode call fully deserializes into a Go value
- Each is zero-DOM and streaming — only fields actually requested are parsed
- EachLineString is completely zero-allocation for in-memory strings; EachLine uses bufio.Scanner to handle large lines automatically
// Encoder
func NewEncoder(w io.Writer) *Encoder
func (e *Encoder) SetEscapeHTML(on bool)
func (e *Encoder) SetIndent(prefix, indent string)
func (e *Encoder) Encode(v any) error
// Decoder
func NewDecoder(r io.Reader) *Decoder
func (d *Decoder) Decode(v any) error
func (d *Decoder) DecodeAny(pa *any) error // zero-alloc hot-path for `any` targets
func (d *Decoder) Token() (any, error)
func (d *Decoder) More() bool
func (d *Decoder) UseNumber()
func (d *Decoder) DisallowUnknownFields()
func (d *Decoder) Buffered() io.Reader
func (d *Decoder) InputOffset() int64
func (d *Decoder) Reset(r io.Reader)
func (d *Decoder) Detach()
// Decoder pool (high-concurrency)
func NewDecoderPool() *DecoderPool
func (p *DecoderPool) Get(r io.Reader) *Decoder
func (p *DecoderPool) Put(dec *Decoder)
func (p *DecoderPool) Detach() *DecoderPool

When to use: Drop-in replacement for encoding/json.NewEncoder/NewDecoder when processing streamed JSON (HTTP bodies, WebSocket frames, files).
// Encoder — stream to HTTP response
json.NewEncoder(w).Encode(myStruct)
// Encoder — pretty-print
enc := json.NewEncoder(os.Stdout)
enc.SetIndent("", " ")
enc.Encode(map[string]any{"key": "val"})
// Decoder — consume multiple JSON values from a stream
dec := json.NewDecoder(r)
for dec.More() {
var item Item
dec.Decode(&item)
}
// DecodeAny — hot-loop with any targets: avoids interface boxing on each call.
// Benchmarks show ~23 allocs/call with Decode(&v) vs single-digit with DecodeAny.
dec2 := json.NewDecoder(r)
for {
var v any
if err := dec2.DecodeAny(&v); err != nil {
break
}
// use v
}
// UseNumber — preserve number representation (avoids float64 precision loss)
dec.UseNumber()
var m map[string]any
dec.Decode(&m)
price := m["price"].(json.Number) // "1.99999999999999"
// DecoderPool — high-concurrency HTTP handler
pool := json.NewDecoderPool()
dec2 := pool.Get(req.Body)
var body MyRequest // avoid shadowing the req used for req.Body above
dec2.Decode(&body)
pool.Put(dec2)
// Detach — persist decoded strings beyond the request lifecycle
// (default zero-copy mode: strings reference the internal buffer; after Put they are invalid)
dec3 := pool.Get(r)
dec3.Detach() // subsequent Decode calls produce independently allocated strings

Differences from stdlib:
- Decoder.Detach() is unique to yakjson: the default zero-copy mode keeps strings in the internal buffer; Detach switches to safe independent allocation
- DecoderPool provides ready-to-use pooling; stdlib requires manual pool management
- Token() returns any (type-aliased; same behaviour as encoding/json.Token)
- Internally uses yakjson's fast parser — significantly faster than the stdlib Decoder
// Validation
func Valid(data []byte) bool
func Validate(data []byte) error // descriptive error with byte offset
func ValidateString(s string) error
// Formatting
func MarshalIndent(v any, prefix, indent string) ([]byte, error)
func AppendMarshalIndent(dst []byte, v any, prefix, indent string) ([]byte, error)
func MarshalWrite(w io.Writer, v any) error
func UnmarshalRead(r io.Reader, v any) error
// Compact and escape
func Compact(dst *bytes.Buffer, src []byte) error
func HTMLEscape(dst *bytes.Buffer, src []byte)

// Validation
json.Valid([]byte(`{"ok":true}`)) // true
json.Valid([]byte(`{bad}`)) // false
err := json.Validate([]byte(`{"a":}`))
// → "unexpected character '}' at offset 5"
// Pretty-print (equivalent to encoding/json.MarshalIndent)
data, _ := json.MarshalIndent(v, "", " ")
// Append pretty-print (reuse buffer)
buf, _ = json.AppendMarshalIndent(buf[:0], v, "", " ")
// Streaming (equivalent to json.NewEncoder(w).Encode(v))
json.MarshalWrite(os.Stdout, v)
json.UnmarshalRead(req.Body, &v)
// Remove whitespace
var out bytes.Buffer
json.Compact(&out, []byte(`{ "a" : 1 }`))
// → {"a":1}
// HTML-safe escaping (for JSON embedded in <script> tags)
var esc bytes.Buffer
json.HTMLEscape(&esc, []byte(`{"url":"http://x.com?a=1&b=2"}`))
// → {"url":"http://x.com?a=1\u0026b=2"}

Differences from stdlib:
- Validate provides richer errors than Valid (includes byte offset); stdlib only has Valid
- AppendMarshalIndent allows reusing an existing buffer; stdlib MarshalIndent always allocates a new slice
func SetLimits(l Limits) []string // returns warnings for values exceeding recommended thresholds
func SetOptions(o Options)
func ResetDefaults()
func GetConfig() Config

Internal mechanism: yakjson uses an atomic.Pointer[Config] snapshot — goroutine-safe and dynamically adjustable at runtime. Each Parse/Marshal/Unmarshal/AcquireWriter call does a single atomic.Load (~1 ns) at entry to obtain a *Config snapshot; the entire operation uses that one snapshot. Config can be updated at any time without affecting in-progress operations.

- SetLimits: zero-value fields are not modified (preserved as-is); returns warning strings for fields exceeding the DefaultMax* recommended thresholds, but does not force-clamp
- SetOptions: zero-value fields mean "keep the default optimal" — won't silently disable sorting etc.
- GetConfig: returns a read-only snapshot copy
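The atomic-snapshot mechanism described above can be sketched with stdlib sync/atomic; the Config fields here are illustrative, not yakjson's real struct:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Readers atomically load one immutable snapshot per operation;
// writers swap in a fresh copy and never mutate the old one.
type Config struct {
	MaxDepth int
}

var current atomic.Pointer[Config]

func init() {
	current.Store(&Config{MaxDepth: 512})
}

func parse() int {
	cfg := current.Load() // single atomic load at entry; used for the whole op
	return cfg.MaxDepth
}

func main() {
	fmt.Println(parse()) // 512
	current.Store(&Config{MaxDepth: 128}) // in-flight ops keep their old snapshot
	fmt.Println(parse()) // 128
}
```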
| Constant | Default | Guards Against |
|---|---|---|
| DefaultMaxDepth | 512 | Deeply nested JSON bombs / stack overflow |
| DefaultMaxKeyLength | 64 KB | Giant key DoS |
| DefaultMaxStringLength | 16 MB | Giant string OOM (checked in Parser + Get + Unmarshal) |
| DefaultMaxArrayLength | 1 M elements | Giant array memory exhaustion (Parse + Unmarshal) |
| DefaultMaxObjectKeys | 64 K keys | Too many keys causing O(n²) lookup degradation |
| DefaultMaxMarshalDepth | 1000 | Self-referential pointer chains causing Marshal stack overflow |
| DefaultMaxNumberLength | 1024 B | Oversized number literals causing CPU bomb |
| DefaultMaxInputSize | 128 MB | Decoder single-read max, guards against OOM |
| DefaultMaxDecodeSteps | 1 M calls | Max Decoder.Decode invocations |
| DefaultPoolBufMax | 1 MB | Max buffer retained in sync.Pool |
| Control characters | Reject 0x00–0x1F | RFC 8259 §7 compliance — not configurable |
| Option | Default | Description |
|---|---|---|
| EscapeHTML | false | true: escape </>/& as \uXXXX (XSS-safe) |
| NaNInf | NaNInfNull | NaNInfError: return an error when marshaling NaN/Inf |
| NumberMode | NumberInt64First | NumberFloat64Mode: encoding/json-compatible float64 semantics; NumberJSONMode: preserve raw precision |
| StrictIntParse | false | true: "1.9"→int errors instead of truncating |
| StrictNumbers | false | true: reject leading-zero integers (01) and float64 overflow (1e9999) |
| SortMapKeys | true | false: skip map key sorting (faster, non-deterministic output order) |
| Kind | Corresponding Limits field |
|---|---|
| LimitDepth | MaxDepth |
| LimitMarshalDepth | MaxMarshalDepth |
| LimitKeyLength | MaxKeyLength |
| LimitStringLength | MaxStringLength |
| LimitArrayLength | MaxArrayLength |
| LimitObjectKeys | MaxObjectKeys |
| LimitNumberLength | MaxNumberLength |
| LimitInputSize | MaxInputSize |
When to use:
- Tighten MaxDepth/MaxObjectKeys at service startup to prevent malicious-nesting DoS
- Enable EscapeHTML when embedding JSON in HTML
- Use NumberJSONMode for financial systems that require exact decimal precision
- Lower MaxInputSize/MaxDecodeSteps to cap upload request sizes
func init() {
// Fields exceeding DefaultMax* recommended thresholds return warning strings
if warnings := json.SetLimits(json.Limits{
MaxDepth: 128, // reduce parse nesting limit (default 512)
MaxArrayLength: 10000, // cap array elements (default 1 M)
MaxInputSize: 4 << 20, // 4 MB (default 128 MB)
// Zero-value fields are not modified — keep their current values
}); len(warnings) > 0 {
log.Printf("yakjson config warnings: %v", warnings)
}
json.SetOptions(json.Options{
EscapeHTML: true, // escape <, >, & (XSS-safe)
NaNInf: json.NaNInfError, // return error for NaN/Inf
NumberMode: json.NumberFloat64Mode, // encoding/json-compatible number semantics
StrictIntParse: true, // "1.9"→int errors instead of truncating
StrictNumbers: true, // reject leading-zero integers and float64 overflow
})
}
// Read current config
cfg := json.GetConfig()
fmt.Printf("MaxDepth: %d, EscapeHTML: %v\n", cfg.MaxDepth, cfg.EscapeHTML)
// Structured error inspection
var le *json.LimitError
if errors.As(err, &le) {
switch le.Kind {
case json.LimitDepth:
log.Printf("nesting too deep: %d levels > limit %d", le.Actual, le.Limit)
case json.LimitArrayLength:
log.Printf("array too long: %d elements > limit %d", le.Actual, le.Limit)
case json.LimitInputSize:
log.Printf("input too large: %d bytes > limit %d", le.Actual, le.Limit)
default:
log.Printf("safety limit exceeded: %s actual=%d limit=%d", le.Kind, le.Actual, le.Limit)
}
}
// Reset between tests — avoid affecting other test cases
func TestSomething(t *testing.T) {
defer json.ResetDefaults()
json.SetLimits(json.Limits{MaxDepth: 5})
// ...
}

Differences from stdlib:
- stdlib has no global safety limits; yakjson limits are adjusted at runtime via atomic.Pointer[Config] snapshots — zero lock contention
- LimitError exposes Kind/Actual/Limit fields, inspectable via errors.As; stdlib errors are plain strings
- SetLimits returns warnings when DefaultMax* recommended thresholds are exceeded, but does not force-clamp — policy stays with the application
- Control-character rejection (0x00–0x1F) is built into the parser and cannot be disabled (RFC 8259 §7 requirement)
Internal defense-in-depth (no configuration required):
| Protection | Description |
|---|---|
| String intern cache capacity | Sharded fixed-capacity cache (16 shards × 4096 entries = 65536 max); excess strings bypass the cache to prevent unbounded memory growth |
| Writer.String() safe copy | Returns an independent string copy; safe to use after ReleaseWriter (unlike Bytes(), which shares the pool buffer) |
| FieldRaw validation | First-byte whitelist check; empty or invalid input emits null, preventing JSON structure injection |
| RawMessage boundary check | First-byte whitelist + skipVal boundary validation — only a single complete JSON value passes through; trailing garbage or invalid content emits null |
| Exponent digit limit | parseFloat short-circuits after 25 exponent digits, preventing CPU waste on crafted inputs like 1e000...001 (millions of leading zeros) |
| Slice header sanity check | appendArray / executePlan verifies Len ≤ Cap before unsafe iteration; corrupted headers fall back to safe reflection or emit null |
type MarshalOptions struct {
EscapeHTML bool
NaNInf NaNInfMode
SortMapKeys bool
}
func (o MarshalOptions) Marshal(v any) ([]byte, error)
func (o MarshalOptions) MarshalWrite(w io.Writer, v any) error
type UnmarshalOptions struct {
NumberMode NumberMode
StrictIntParse bool
StrictNumbers bool
}
func (o UnmarshalOptions) Unmarshal(data []byte, v any) error
func (o UnmarshalOptions) UnmarshalRead(r io.Reader, v any) error

When to use: Different serialization policies in the same process (e.g. API responses with HTML escaping, internal logs without; one route using NumberFloat64Mode, another using NumberJSONMode) without touching global config.
// Coexisting strategies, no interference
apiOpts := json.MarshalOptions{EscapeHTML: true, SortMapKeys: true}
logOpts := json.MarshalOptions{EscapeHTML: false, SortMapKeys: false}
apiData, _ := apiOpts.Marshal(resp) // safe for HTML embedding
logData, _ := logOpts.Marshal(event) // performance-first
// Financial API — preserve number precision
finOpts := json.UnmarshalOptions{NumberMode: json.NumberJSONMode}
var order map[string]any
finOpts.Unmarshal(data, &order)
amount := order["amount"].(json.Number) // "9999999999999999.99"

Differences from stdlib:
- stdlib has no per-call options; configuration is only possible at the Encoder/Decoder object level
- yakjson option structs are value types — safe for concurrent use, zero shared state
Runs all benchmarks and saves results to bench_{os}_{cores}c{threads}t.txt, automatically recording timestamp, kernel, Go version, and CPU model.
./bench.sh # defaults: benchtime=1s, count=3
./bench.sh 5s 5 # custom: benchtime=5s, repeat 5 times

Uses a ratio-comparison strategy: computes yakjson_ns / stdlib_ns and compares it against a stored baseline, eliminating the impact of CI runner hardware variance (which can exceed 50%). The ratio fluctuates only ±3% on the same CPU.
./bench_guard.sh # CI mode: ratio comparison, exits 1 on regression
./bench_guard.sh --update-baseline # update baseline for current platform
BENCH_THRESHOLD=15 ./bench_guard.sh # custom threshold (default 10%)

Baseline files: bench_ratio_baseline_{linux|windows}.txt
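The ratio check can be sketched in a few lines of Go; the numbers and threshold semantics below are illustrative, not bench_guard.sh's actual code:

```go
package main

import "fmt"

// regressed reports whether the current yakjson/stdlib ns-per-op ratio has
// drifted more than thresholdPct percent above the stored baseline ratio.
// Comparing ratios (not raw ns/op) cancels out CI runner hardware variance.
func regressed(yakNs, stdNs, baseline, thresholdPct float64) bool {
	ratio := yakNs / stdNs
	return ratio > baseline*(1+thresholdPct/100)
}

func main() {
	// Illustrative: 160 ns vs 3831 ns against a baseline ratio of 0.042.
	fmt.Println(regressed(160, 3831, 0.042, 10)) // false — within 10% of baseline
	fmt.Println(regressed(200, 3831, 0.042, 10)) // true — ratio drifted past threshold
}
```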
Auto-discovers all Fuzz* functions in the package and runs them one by one. Ctrl+C interrupts the current target and moves to the next; a second Ctrl+C exits the whole script. Logs are saved to fuzz_logs/fuzz_<timestamp>.log.
./fuzz.sh # all targets, 5m each
./fuzz.sh 2m # all targets, 2m each
./fuzz.sh FuzzDeleteMany # single target, 5m
./fuzz.sh FuzzDeleteMany 2m # single target, custom duration
FUZZ_TIME=10m ./fuzz.sh # set duration via environment variable

Runs go vet + golangci-lint with optional auto-fix and formatting modes.
./lint.sh # full check (gofmt -s -w + vet + golangci-lint)
./lint.sh --vet # gofmt -s -w + go vet (skip golangci-lint)
./lint.sh --fix # gofmt -s -w + vet + golangci-lint --fix
./lint.sh --fmt # format only (gofmt -s -w), no vet/lint
./lint.sh --test # quick test (go test ./... -race -count=1 -timeout=120s)

Runs CPU and memory pprof benchmarks and archives results under _pprof/<timestamp>[_<label>]/.
./pprof_archive.sh # default benchtime=5s
./pprof_archive.sh 10s # custom benchtime
./pprof_archive.sh 5s v1-final # archive with label

Output directory structure:
_pprof/<timestamp>[_<label>]/
├── cpu_marshal_binding.prof
├── cpu_marshal_generic.prof
├── cpu_unmarshal_binding.prof
├── cpu_unmarshal_generic.prof
├── mem_marshal_binding.prof
├── mem_unmarshal_binding.prof
├── bench.txt # benchstat format
└── meta.txt # system / compiler / commit info