Workflow Reference
I use this document as my deterministic, runnable workflow reference for L0.
I use this when I want a minimal verify/build/run loop.
./bin/l0c verify docs/examples/01_arithmetic_add_wrap.l0
./bin/l0c build docs/examples/01_arithmetic_add_wrap.l0 /tmp/l0_wf_add.img
./bin/l0c imgcheck /tmp/l0_wf_add.img
./bin/l0c imgmeta /tmp/l0_wf_add.img
./bin/l0c run /tmp/l0_wf_add.img 5 8
Expected stable outputs:
- verify: ok
- imgcheck: ok
- imgmeta contains: kernel_kind 1, code_size 7
- run prints: 13
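These stable outputs can be checked mechanically instead of by eye. A minimal sketch of an output-assertion helper in POSIX shell; `expect_out` is a hypothetical name, and the `echo` stub below stands in for an actual `./bin/l0c run` invocation:

```sh
# Hypothetical helper: run a command and compare its stdout to an expected value.
expect_out() {
  want="$1"; shift
  got="$("$@")" || { echo "command failed: $*" >&2; return 1; }
  if [ "$got" = "$want" ]; then
    echo "ok: $* -> $got"
  else
    echo "mismatch: wanted '$want', got '$got'" >&2
    return 1
  fi
}

# Demo with a stub; against a built image this would be:
#   expect_out 13 ./bin/l0c run /tmp/l0_wf_add.img 5 8
expect_out 13 echo 13   # prints: ok: echo 13 -> 13
```

The nonzero return on mismatch makes the helper usable under `set -e` in a scripted workflow.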
I use this when I want to confirm conditional branch behavior and emitted branch kernel metadata.
./bin/l0c verify docs/examples/03_control_cbr_select.l0
./bin/l0c build docs/examples/03_control_cbr_select.l0 /tmp/l0_wf_cbr.img
./bin/l0c imgcheck /tmp/l0_wf_cbr.img
./bin/l0c imgmeta /tmp/l0_wf_cbr.img
./bin/l0c run /tmp/l0_wf_cbr.img 1
./bin/l0c run /tmp/l0_wf_cbr.img 0
Expected stable outputs:
- verify: ok
- imgcheck: ok
- imgmeta contains: kernel_kind 25, code_size 4
- run ... 1 prints: 1
- run ... 0 prints: 0
I use this when I need to debug instruction ids and correlate trace records to code ranges.
./bin/l0c verify docs/examples/10_intrinsic_trace.l0
./bin/l0c build docs/examples/10_intrinsic_trace.l0 /tmp/l0_wf_trace.img \
--debug-map /tmp/l0_wf_trace.map \
--trace-schema /tmp/l0_wf_trace.schema
./bin/l0c imgcheck /tmp/l0_wf_trace.img
./bin/l0c imgmeta /tmp/l0_wf_trace.img
./bin/l0c schemacat /tmp/l0_wf_trace.schema
./bin/l0c mapcat /tmp/l0_wf_trace.map
./bin/l0c run /tmp/l0_wf_trace.img 123 >/tmp/l0_wf_trace.out 2>/tmp/l0_wf_trace.bin
./bin/l0c tracecat /tmp/l0_wf_trace.bin
./bin/l0c tracejoin /tmp/l0_wf_trace.bin /tmp/l0_wf_trace.map
Expected stable outputs:
- verify: ok
- imgcheck: ok
- imgmeta contains: kernel_kind 24
- schemacat output: version 1, record_size 16, fields 2
- mapcat output: entries 2, code_size 51, inst_id 1 start 0 end 17, inst_id 2 start 17 end 51
- run stdout prints: 0
- tracecat prints: id 1 val 123
- tracejoin prints: id 1 val 123 start 0 end 17
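When correlating trace records to code ranges by hand, it helps to filter the joined output for one instruction. A hedged sketch, assuming `tracejoin` lines keep the documented `id <n> val <v> start <s> end <e>` shape; `pick_val` is an illustrative name, not an l0c feature:

```sh
# Illustrative filter: print the "val" field for a given instruction id,
# assuming the documented line shape "id <n> val <v> start <s> end <e>".
pick_val() {
  awk -v id="$1" '$1 == "id" && $2 == id { print $4 }'
}

# Demo with the documented sample line; in the real workflow this would be:
#   ./bin/l0c tracejoin /tmp/l0_wf_trace.bin /tmp/l0_wf_trace.map | pick_val 1
printf 'id 1 val 123 start 0 end 17\n' | pick_val 1   # prints: 123
```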
I keep the scripted equivalents of these workflows in tests/run.sh, so make test enforces them on every run.
I use this when I want to stress parser robustness against malformed and byte-mutated inputs.
bash tests/parser_fuzz_regress.sh ./bin/l0c tests/fuzz/parser_seeds
Expected stable output:
ok
When I need to reproduce one specific crashing or suspicious case, I run:
bash tests/parser_fuzz_regress.sh --repro /tmp/suspect_input.l0 ./bin/l0c
Expected stable output:
- ok when the parser handles that input without crashing
I use this when I need to prove that every documented verifier rule has both a passing fixture and a failing fixture.
bash tests/verifier_matrix.sh ./bin/l0c tests/verifier_matrix.tsv
Expected stable output:
ok
I use this when I want to validate generalized multi-block CFG lowering for branch-local constant returns.
./bin/l0c verify docs/examples/15_cfg_branch_const_select.l0
./bin/l0c build docs/examples/15_cfg_branch_const_select.l0 /tmp/l0_wf_cfg_branch_const.img
./bin/l0c imgcheck /tmp/l0_wf_cfg_branch_const.img
./bin/l0c imgmeta /tmp/l0_wf_cfg_branch_const.img
./bin/l0c run /tmp/l0_wf_cfg_branch_const.img 1
./bin/l0c run /tmp/l0_wf_cfg_branch_const.img 0
Expected stable outputs:
- verify: ok
- imgcheck: ok
- imgmeta contains: kernel_kind 26, code_size 27
- run ... 1 prints: 11
- run ... 0 prints: 22
I use this when I want to validate merge-point value selection lowered from a multi-block branch/store/join shape.
./bin/l0c verify docs/examples/16_cfg_merge_mem_select.l0
./bin/l0c build docs/examples/16_cfg_merge_mem_select.l0 /tmp/l0_wf_cfg_merge.img
./bin/l0c imgcheck /tmp/l0_wf_cfg_merge.img
./bin/l0c imgmeta /tmp/l0_wf_cfg_merge.img
./bin/l0c run /tmp/l0_wf_cfg_merge.img 1
./bin/l0c run /tmp/l0_wf_cfg_merge.img 0
Expected stable outputs:
- verify: ok
- imgcheck: ok
- imgmeta contains: kernel_kind 27, code_size 27
- run ... 1 prints: 11
- run ... 0 prints: 22
I use this when I want deterministic spill/reload-path coverage in generalized lowering.
./bin/l0c verify docs/examples/17_spill_stress_kernel.l0
./bin/l0c build docs/examples/17_spill_stress_kernel.l0 /tmp/l0_wf_spill.img
./bin/l0c imgcheck /tmp/l0_wf_spill.img
./bin/l0c imgmeta /tmp/l0_wf_spill.img
./bin/l0c run /tmp/l0_wf_spill.img 7 3
./bin/l0c run /tmp/l0_wf_spill.img 5 2
Expected stable outputs:
- verify: ok
- imgcheck: ok
- imgmeta contains: kernel_kind 28, code_size 35
- run ... 7 3 prints: 23
- run ... 5 2 prints: 9
I use this when I want to validate SysV integer-argument register mapping across all six entry arguments.
./bin/l0c verify docs/examples/18_sysv_abi_sum6_kernel.l0
./bin/l0c build docs/examples/18_sysv_abi_sum6_kernel.l0 /tmp/l0_wf_sysv6.img
./bin/l0c imgcheck /tmp/l0_wf_sysv6.img
./bin/l0c imgmeta /tmp/l0_wf_sysv6.img
./bin/l0c run /tmp/l0_wf_sysv6.img 1 2 3 4 5 6
Expected stable outputs:
- verify: ok
- imgcheck: ok
- imgmeta contains: kernel_kind 32, code_size 19
- run ... 1 2 3 4 5 6 prints: 21
I use this when I want to emit a relocatable ELF object and link it with a minimal native harness.
./bin/l0c verify docs/examples/18_sysv_abi_sum6_kernel.l0
./bin/l0c build-elf docs/examples/18_sysv_abi_sum6_kernel.l0 /tmp/l0_wf_sum6.o
cat >/tmp/l0_wf_sum6_harness.s <<'EOF'
.intel_syntax noprefix
.global _start
.extern f0
_start:
mov rdi, 1
mov rsi, 2
mov rdx, 3
mov rcx, 4
mov r8, 5
mov r9, 6
call f0
mov rdi, rax
mov rax, 60
syscall
EOF
as --64 -o /tmp/l0_wf_sum6_harness.o /tmp/l0_wf_sum6_harness.s
ld -o /tmp/l0_wf_sum6_exec /tmp/l0_wf_sum6_harness.o /tmp/l0_wf_sum6.o
/tmp/l0_wf_sum6_exec ; echo $?
Expected stable outputs:
- verify: ok
- build-elf: ok
- linked executable exit status: 21
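The harness returns the kernel result through the process exit status (`mov rdi, rax` before the exit syscall), which is why `echo $?` prints 21. A small sketch of that capture-and-assert idiom; the `(exit 21)` subshell stands in for /tmp/l0_wf_sum6_exec:

```sh
# The subshell stands in for /tmp/l0_wf_sum6_exec, which exits with the sum.
( exit 21 )
status=$?
if [ "$status" -eq 21 ]; then
  echo "sum6 exit status ok: $status"
else
  echo "unexpected exit status: $status"
fi
```

Note that an exit status only round-trips values in 0..255, so this idiom works for small results like 21 but not for arbitrary 64-bit kernel outputs.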
I use this when I want to enforce the frozen intrinsic contract surface (intrinsics.v1) in one command.
bash tests/intrinsic_contracts.sh ./bin/l0c .
Expected stable output:
ok
I use this when I want to enforce the frozen debug-map schema contract surface (debugmap.v1) in one command.
bash tests/debug_map_schema.sh ./bin/l0c
Expected stable output:
ok
I use this when I want to enforce the frozen trace schema contract surface (traceschema.v1) in one command.
bash tests/trace_schema_contracts.sh ./bin/l0c
Expected stable output:
ok
I use this when I want to verify the pinned native toolchain baseline.
bash tests/toolchain_policy.sh
Expected stable output:
ok
I use this when I want a quick performance regression signal suitable for CI.
make
bash tests/ci_smoke_bench.sh ./bin/l0c .
Expected stable output:
ok
I use this when I want an operational comparison between l0c and available host compilers.
make
bash tests/benchmark_compare.sh ./bin/l0c .
Expected stable output:
- ok
- refreshed docs/PERFORMANCE_COMPARISON.md
I use this when I want a stricter function-level comparison with the same runtime loop harness.
make
L0_A2A_PIN_CPU=0 \
L0_A2A_RUNTIME_SAMPLES=5 \
L0_A2A_BUILD_SAMPLES=3 \
L0_A2A_TRIM_COUNT=1 \
L0_A2A_RUNTIME_CI95_PCT_WARN=20 \
bash tests/benchmark_apples_to_apples.sh ./bin/l0c . \
docs/PERFORMANCE_COMPARISON_APPLES_TO_APPLES.md \
docs/PERFORMANCE_COMPARISON_APPLES_TO_APPLES.json
Expected stable output:
- ok
- refreshed docs/PERFORMANCE_COMPARISON_APPLES_TO_APPLES.md
- refreshed docs/PERFORMANCE_COMPARISON_APPLES_TO_APPLES.json
I use this when I want deterministic usability/token-efficiency metrics for L0 in LLM workflows.
make
bash tests/llm_usability_bench.sh ./bin/l0c . reference \
docs/LLM_BENCHMARK_RESULTS.json \
docs/LLM_BENCHMARK_RESULTS.md
Expected stable output:
- ok
- refreshed docs/LLM_BENCHMARK_RESULTS.json
- refreshed docs/LLM_BENCHMARK_RESULTS.md
I also use cmd-mode with verify-guided repair:
make
L0_LLM_ADAPTER_CMD='<your command>' \
L0_LLM_MAX_ATTEMPTS=3 \
L0_LLM_ENABLE_REPAIR_LOOP=1 \
L0_LLM_AUTO_CANON_FIX=1 \
bash tests/llm_usability_bench.sh ./bin/l0c . cmd \
docs/LLM_BENCHMARK_RESULTS.json \
docs/LLM_BENCHMARK_RESULTS.md
For deep failure analysis, I also enable:
L0_LLM_KEEP_WORK_DIR=1
This keeps per-attempt artifacts (raw output, sanitized output, canonical-fix output, and verifier stderr) and prints the temp work dir path.
I can run this directly with OpenAI:
OPENAI_API_KEY='<your key>' \
L0_OPENAI_MODEL='gpt-4.1-mini' \
L0_LLM_ADAPTER_CMD='tests/llm_bench/adapters/openai_l0.sh' \
L0_LLM_MAX_ATTEMPTS=3 \
bash tests/llm_usability_bench.sh ./bin/l0c . cmd \
docs/LLM_BENCHMARK_RESULTS.json \
docs/LLM_BENCHMARK_RESULTS.md
Expected stable output:
- ok
- report includes pass@k, avg_attempts_used, and error_class_counts
I use this when I want apples-to-apples LLM usability comparisons across multiple models.
make
L0_LLM_ADAPTER_CMD='<your command>' \
L0_LLM_MODELS='model-a,model-b,model-c' \
bash tests/llm_model_matrix.sh ./bin/l0c . \
docs/LLM_MODEL_LEADERBOARD.json \
docs/LLM_MODEL_LEADERBOARD.md \
docs/LLM_MODEL_LEADERBOARD_HISTORY.jsonl \
docs/LLM_MODEL_LEADERBOARD_TRENDS.md
I can run this as an OpenAI model matrix:
OPENAI_API_KEY='<your key>' \
L0_LLM_ADAPTER_CMD='tests/llm_bench/adapters/openai_l0.sh' \
L0_LLM_MODELS='gpt-4.1-mini,gpt-4.1' \
L0_LLM_MAX_ATTEMPTS=1 \
bash tests/llm_model_matrix.sh ./bin/l0c . \
docs/LLM_MODEL_LEADERBOARD.json \
docs/LLM_MODEL_LEADERBOARD.md \
docs/LLM_MODEL_LEADERBOARD_HISTORY.jsonl \
docs/LLM_MODEL_LEADERBOARD_TRENDS.md
Expected stable output:
- ok
- refreshed docs/LLM_MODEL_LEADERBOARD.json
- refreshed docs/LLM_MODEL_LEADERBOARD.md
- refreshed docs/LLM_MODEL_LEADERBOARD_HISTORY.jsonl
- refreshed docs/LLM_MODEL_LEADERBOARD_TRENDS.md
I use this when I want to enforce byte-for-byte reproducibility guarantees (detbuild.v1) in one command.
bash tests/deterministic_builds.sh ./bin/l0c .
Expected stable output:
ok
I use this when I want to enforce deterministic runtime-equivalence checks (diffsem.v1) across paired equivalent fixtures.
bash tests/differential_semantics.sh ./bin/l0c .
Expected stable output:
ok
I use this when I want to enforce deterministic crash-free malformed-input stress checks (fuzzstress.v1) across parser, verifier, image, and trace tooling surfaces.
bash tests/m65_fuzz_stress.sh ./bin/l0c .
Expected stable output:
ok
I use this when I want to enforce pinned throughput floor checks (perfbase.v1) for representative verify/build/run/build-elf and trace tooling operations.
bash tests/performance_gates.sh ./bin/l0c .
Expected stable output:
ok
I use this when I want to enforce deterministic CLI exit-code and stderr behavior (errmodel.v1) across representative failure classes.
bash tests/error_model.sh ./bin/l0c .
Expected stable output:
ok
I use this when I want to enforce scripted, reproducible release-candidate packaging with checksum verification (relpipe.v1).
bash tests/release_pipeline.sh ./bin/l0c .
Expected stable output:
ok
I use this when I want to enforce source/image/tool compatibility slices plus upgrade-policy coverage (compat.v1) across prior milestone fixtures.
bash tests/compatibility_matrix.sh ./bin/l0c .
Expected stable output:
ok
I use this when I want to enforce final production-readiness closure (prodready.v1) across milestone-gate health, frozen contract docs, and release-candidate artifact validation.
bash tests/production_readiness.sh ./bin/l0c .
Expected stable output:
ok
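All of these one-command gates share the same ok contract, so I can sweep them in sequence. A minimal sketch, assuming each script prints ok and exits 0 on success; the existence guard is illustrative, only there so the sweep degrades gracefully in a partial checkout:

```sh
# Sweep the contract gates documented above, stopping on the first failure.
for gate in deterministic_builds differential_semantics m65_fuzz_stress \
            performance_gates error_model release_pipeline \
            compatibility_matrix production_readiness; do
  script="tests/${gate}.sh"
  if [ -f "$script" ]; then
    bash "$script" ./bin/l0c . || { echo "gate failed: ${gate}"; exit 1; }
  else
    echo "skip (missing): ${gate}"
  fi
done
echo "gate sweep complete"
```

Because each gate exits nonzero on failure, the sweep's own exit status is a single CI-ready pass/fail signal.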
I use this when I want to enforce that my generated wiki/ mirror is fully synchronized with canonical docs/ content.
bash tests/wiki_sync.sh .
Expected stable output:
ok
I use this when I want to enforce that command/op/example documentation coverage stays complete and deterministic.
bash tests/docs_coverage.sh .
Expected stable output:
ok
I use this when I want to enforce that internal documentation links resolve to real files.
bash tests/docs_links.sh .
Expected stable output:
ok
I use this when I want to enforce duplicate-heading hygiene across top-level docs pages.
bash tests/docs_headings.sh .
Expected stable output:
ok
I use this when I want to enforce that every frozen contract doc is referenced by index/spec docs and mirrored into wiki mapping.
bash tests/docs_contract_refs.sh .
Expected stable output:
ok
I use this when I want to publish my generated wiki/ mirror to the remote GitHub Wiki repository.
bash scripts/publish_wiki_remote.sh
Expected stable output:
ok
Note:
- this requires GitHub Wiki to be enabled in repository settings, so that <repo>.wiki.git exists.
- How-To-Write-L0
- Language-Reference
- Instruction-Set
- CLI-and-Compiler-Spec
- Implementable-Spec
- Command-Reference
- Examples-Catalog
- LLM-Quick-Reference
- Opcode-Examples
- LLM-Doc-Index