Easy interprocess communication.
What would it take to make IPC easier and more robust and more fun?
- Reading and writing processes come and go... so message channels should outlast them
- Machines crash... so channels should persist on disk
- Disks are finite... so channels should be bounded in size
- Message brokers bring complexity and ceremony... so for local IPC, don't require a broker
- Observability is crucial... so messages must be inspectable
- Schemas are great... but schemas should be optional
- Latency matters... so IPC should be fast, zero-copy wherever possible
So, there's Plasmite.
| Alice's terminal | Bob's terminal |
|---|---|
| `# Alice creates a channel`<br>`pls pool create my-channel` | |
| | `# Bob starts reading`<br>`pls follow my-channel` |
| `# Alice writes a message`<br>`pls feed my-channel '{"from": "alice", "msg": "hello world"}'` | |
| | `# Bob sees it arrive`<br>`{ "data": {"from": "alice", "msg": "hello world"}, ... }` |
Plasmite is a CLI and library suite (Rust, Python, Go, Node, C) for sending and receiving JSON messages through "pools": persistent, disk-backed ring buffers. There's no daemon, no broker, and no fancy config required, and it's quick (~600k msg/sec on a laptop).
For IPC across machines, `pls serve` exposes your local pools securely and serves a minimal web UI too.
| | Drawbacks | Plasmite |
|---|---|---|
| Kafka or RabbitMQ | Lots of machinery: partitions, groups, exchanges, bindings, oh my. | pls feed / pls follow for local IPC. Add pls serve for remote access. No cluster required. |
| Redis / NATS | Still a server you run, monitor, and connect to — even for same-machine messaging. Messages live in server memory; if the server dies, messaging stops. | No server process for local IPC. Pools persist on disk independent of any process. Add pls serve when you need remote access. |
| Log files / `tail -f` | You parse with regex and it breaks when the format changes. Logs grow until you rotate, and rotation breaks `tail -f`. No way to replay from a specific point. No remote access without setting up syslog. | Structured JSON with sequence numbers. Bounded disk usage. Replay from any point with `--since` or `--from`. `pls serve` for remote access. |
| Ad-hoc files (temp files, locks, polled dirs) | Readers poll for new files. Locking is manual — a crash leaves a stale lock. Files accumulate and you write your own cleanup. No ordering unless you bake it into filenames. | Readers stream in real time. Writers append concurrently without explicit locks. Ring buffer keeps disk bounded, messages stay ordered. pls serve for remote access. |
| SQLite as a queue | No LISTEN/NOTIFY — readers poll. Writers contend on the write-ahead log. You design a schema, write migrations, vacuum. SQLite explicitly discourages network access to the DB file. | Follow/replay without polling. No `SQLITE_BUSY`. No schema, no migrations, no cleanup. `pls serve` for remote access. |
| OS primitives (pipes, sockets, shm) | Named pipes: if the reader dies, the writer blocks or gets SIGPIPE. One reader only, nothing survives a reboot. Unix sockets: you implement your own framing and reconnection. Shared memory: you coordinate with semaphores, and a crash while holding a lock is a mess. None work across machines. | Multiple readers and writers, crash-safe, persistent across reboots. pls serve to go cross-machine. |
| ZeroMQ | Messages vanish when processes restart. The pattern matrix (PUB/SUB, PUSH/PULL, ROUTER/DEALER) is powerful but complex to get right. Binary protocol — can't inspect messages with standard tools. | Messages persist across restarts. Human-readable JSON you can inspect with jq. pls serve for remote. |
Your build script writes progress to a pool. In another terminal, you follow it in real time.
```
pls feed build --create '{"step": "compile", "status": "done"}'
pls feed build '{"step": "test", "status": "running"}'
# elsewhere:
pls follow build
```

Your deploy script waits for the test runner to say "green" — no polling loops, no lock files, no shared database.
```
# deploy.sh
pls follow ci --where '.data.status == "green"' --one > /dev/null && ./deploy-to-staging.sh

# test-runner.sh
pls feed ci --create '{"status": "green", "commit": "abc123"}'
```

Pipe your system logs into a bounded pool. It won't fill your disk, and you can replay anything later.
```
journalctl -o json-seq -f | pls feed syslog --create   # Linux
pls follow syslog --since 30m --replay 1               # replay last 30 min
```

Tag events when you write them, then filter and replay on the read side.
```
pls feed incidents --create --tag sev1 '{"msg": "payment gateway timeout"}'
pls follow incidents --tag sev1 --where '.data.msg | test("timeout")'
pls follow incidents --since 1h --replay 10
```

Two processes share a pool and talk to each other in real time — no broker, no sockets, no protocol to design.
```
# Terminal 1 — Alice
pls duplex chat --create --me alice

# Terminal 2 — Bob joins and catches up on the last 20 messages
pls duplex chat --me bob --tail 20
```

Each line you type becomes a message. Bob sees Alice's messages as they arrive (and vice versa). Pipe JSON instead of typing for scripted use.
Start a server and your pools are available over HTTP. Clients use the same CLI — just pass a URL.
```
pls serve        # loopback-only by default
pls serve init   # bootstrap TLS + token for LAN access

pls feed http://server:9700/events '{"sensor": "temp", "value": 23.5}'
pls follow http://server:9700/events --tail 20
```

A built-in web UI lives at `/ui`.
For CORS, auth, and deployment details, see Serving & remote access and the remote protocol spec.
More examples — polyglot producer/consumer, multi-writer event bus, API stream ingest, CORS setup — in the Cookbook.
Plasmite is designed for single-host and host-adjacent messaging. If you need multi-host cluster replication, schema registries, or workflow orchestration, see When Plasmite Isn't the Right Fit.
```
brew install sandover/tap/plasmite
```

Installs the CLI (plasmite + pls) and the full SDK (libplasmite, C header, pkg-config). The Go bindings link against this SDK, so install it via Homebrew first if you're using Go.

```
cargo install plasmite   # CLI only (plasmite + pls)
cargo add plasmite       # use as a library in your Rust project
```

```
uv tool install plasmite   # standalone CLI + Python bindings
uv add plasmite            # add to an existing uv-managed project
```

The wheel includes pre-built native bindings.

```
npm i -g plasmite
```

The package includes pre-built native bindings.

```
go get github.com/sandover/plasmite/bindings/go/local
```

Bindings only (no CLI). Links against libplasmite via cgo, so you'll need the SDK on your system first — via Homebrew on macOS, or from a GitHub Releases tarball on Linux.
Tarballs for Linux and macOS are on GitHub Releases. Each archive contains bin/, lib/, include/, and lib/pkgconfig/.
Windows builds (x86_64-pc-windows-msvc) are available via npm and PyPI. See the distribution docs for the full install matrix.
| Command | What it does |
|---|---|
| `feed POOL DATA` | Send a message (`--create` to auto-create the pool) |
| `follow POOL` | Follow messages (`--create` auto-creates missing local pools) |
| `fetch POOL SEQ` | Fetch one message by sequence number |
| `pool create NAME` | Create a pool (`--size 8M` for larger) |
| `pool list` | List pools |
| `pool info NAME` | Show pool metadata and metrics |
| `pool delete NAME...` | Delete one or more pools |
| `duplex POOL` | Read and write from one command (`--me` for chat mode) |
| `doctor POOL \| --all` | Validate pool integrity |
| `serve` | HTTP server (loopback default; non-loopback opt-in) |
pls and plasmite are the same binary. Shell completion: plasmite completion bash|zsh|fish.
Remote pools support read and write; --create is local-only.
For scripting, use --json with pool create, pool list, pool delete, doctor, and serve check.
A pool is a single .plasmite file containing a persistent ring buffer:
- Multiple writers append concurrently (serialized via OS file locks)
- Multiple readers follow concurrently (lock-free, zero-copy)
- Bounded retention — old messages overwritten when full (default 1 MB, configurable)
- Crash-safe — processes crash and restart; torn writes never propagate
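As a mental model, bounded retention with monotonic sequence numbers can be sketched as a toy in-memory ring (a sketch only: it counts messages rather than bytes, and none of these names are Plasmite's API; real pools are bounded by on-disk size):

```python
from collections import deque

class ToyPool:
    """Toy bounded pool: keeps only the newest messages, assigns monotonic seqs."""
    def __init__(self, capacity: int):
        self.buf = deque(maxlen=capacity)  # oldest entries drop off when full
        self.next_seq = 0

    def append(self, data) -> int:
        seq = self.next_seq
        self.next_seq += 1
        self.buf.append({"seq": seq, "data": data})
        return seq

    def visible(self):
        return list(self.buf)

pool = ToyPool(capacity=3)
for i in range(5):
    pool.append({"n": i})

# Only the newest 3 messages remain; sequence numbers keep climbing.
print([m["seq"] for m in pool.visible()])  # → [2, 3, 4]
```

The key property this illustrates: old messages are silently overwritten, but a message's sequence number never changes and is never reused.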
Every message carries a seq (monotonic), a time (nanosecond precision), optional tags, and your JSON data. Tags and --where (jq predicates) compose for filtering. See the CLI spec § pattern matching.
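The tag-then-predicate composition can be sketched in a few lines of Python (a toy stand-in: real `--where` filters are jq predicates, not Python lambdas, and the message shape here is illustrative):

```python
messages = [
    {"seq": 1, "tags": ["sev1"], "data": {"msg": "payment gateway timeout"}},
    {"seq": 2, "tags": [],       "data": {"msg": "all good"}},
    {"seq": 3, "tags": ["sev1"], "data": {"msg": "retry succeeded"}},
]

def follow(msgs, tag=None, where=None):
    """Toy filter: the cheap tag check runs first, the predicate
    only runs on messages that survive it."""
    for m in msgs:
        if tag is not None and tag not in m["tags"]:
            continue
        if where is not None and not where(m):
            continue
        yield m

hits = list(follow(messages, tag="sev1",
                   where=lambda m: "timeout" in m["data"]["msg"]))
print([m["seq"] for m in hits])  # → [1]
```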
Default pool directory: ~/.plasmite/pools/.
| Metric | Value |
|---|---|
| Append throughput | ~600k msg/sec (single writer, M3 MacBook) |
| Read | Lock-free, zero-copy via mmap |
| On-disk format | Lite3 (zero-copy, JSON-compatible binary); field access without deserialization |
| Message overhead (framing) | 72-79 bytes per message (64B header + 8B commit marker + alignment) |
| Default pool size | 1 MB |
How reads work: The pool file is memory-mapped. Readers walk committed frames directly from the mapped region — no read syscalls, no buffer copies. Payloads are stored in Lite3, a zero-copy binary format that is byte-for-byte JSON-compatible — every valid JSON document has an equivalent Lite3 representation and vice versa. Lite3 supports field lookup by offset, so tag filtering and --where predicates run without deserializing the full message. JSON conversion happens only at the output boundary.
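A minimal sketch of the mmap read path, with an invented 4-byte length prefix standing in for Plasmite's 64-byte frame header (illustrative only, not the real on-disk layout):

```python
import mmap
import os
import struct
import tempfile

# Build a file containing one length-prefixed "frame" (layout invented here).
payload = b'{"sensor": "temp", "value": 23.5}'
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(struct.pack("<I", len(payload)))
    f.write(payload)
    path = f.name

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    view = memoryview(mm)                       # window into the mapping, no copy
    (length,) = struct.unpack_from("<I", view, 0)
    frame = view[4:4 + length]                  # still zero-copy: a slice of the map
    text = bytes(frame)                         # the only copy, at the output boundary
    frame.release()
    view.release()
    mm.close()
os.unlink(path)

print(text)  # b'{"sensor": "temp", "value": 23.5}'
```

The point of the sketch: the reader walks offsets inside the mapped region and materializes bytes only when something is actually emitted.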
How writes work: Writers acquire an OS file lock, plan frame placement (including ring wrap), write the frame as Writing, then flip it to Committed and update the header. The lock is held only for the memcpy + header update — no allocation or encoding happens under the lock.
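The lock-then-publish discipline can be sketched like this (a Unix-only toy: the frame layout, the `WRITING`/`COMMITTED` states, and the `append_frame` helper are all invented for illustration, and there is no ring wrap here):

```python
import fcntl
import os
import struct
import tempfile

WRITING, COMMITTED = 0, 1
HEADER = struct.Struct("<BI")   # 1-byte state + 4-byte length (invented layout)

def append_frame(path: str, payload: bytes) -> int:
    """Encode outside the lock; hold the lock only for the copy + publish."""
    frame = HEADER.pack(WRITING, len(payload)) + payload  # no encoding under lock
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)             # writers serialize here
        offset = os.lseek(fd, 0, os.SEEK_END)      # plan frame placement
        os.pwrite(fd, frame, offset)               # frame lands marked Writing
        os.pwrite(fd, bytes([COMMITTED]), offset)  # flip the state: now visible
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
    return offset

fd, path = tempfile.mkstemp()
os.close(fd)
o1 = append_frame(path, b"one")
o2 = append_frame(path, b"two")
raw = open(path, "rb").read()
os.unlink(path)
# Both state bytes now read Committed; a crash between the two pwrites
# would have left a frame still marked Writing, which readers skip.
```

Because readers ignore any frame whose state is not Committed, a torn write is simply invisible rather than corrupting.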
How lookups work: Each pool includes an inline index — a fixed-size hash table mapping sequence numbers to byte offsets. fetch POOL 42 usually jumps directly to the right frame. If the slot is stale or collided, the reader scans forward from the tail. You can tune this with --index-capacity at pool creation time.
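The index-with-fallback behavior can be sketched with a toy table (the slot count, hashing by `seq % slots`, and every name here are illustrative, not Plasmite's on-disk index format):

```python
class ToyIndex:
    """Fixed-size table of (seq, offset) pairs; newer entries overwrite older
    ones that hash to the same slot, so a hit must be verified."""
    def __init__(self, slots: int):
        self.slots = [None] * slots

    def put(self, seq: int, offset: int) -> None:
        self.slots[seq % len(self.slots)] = (seq, offset)

    def get(self, seq: int):
        entry = self.slots[seq % len(self.slots)]
        return entry[1] if entry and entry[0] == seq else None

def fetch(frames, index, seq):
    offset = index.get(seq)
    if offset is not None:
        return frames[offset]           # direct jump: the usual O(1) case
    for frame in frames:                # stale or collided slot: O(N) scan
        if frame["seq"] == seq:         # forward from the tail
            return frame
    return None

frames = [{"seq": s, "data": f"msg-{s}"} for s in range(10, 16)]
index = ToyIndex(slots=4)
for offset, frame in enumerate(frames):
    index.put(frame["seq"], offset)

# seq 15 overwrote seq 11's slot (both hash to 3), so fetching 11 falls
# back to the scan — same answer, just slower.
assert fetch(frames, index, 15)["data"] == "msg-15"
assert fetch(frames, index, 11)["data"] == "msg-11"
```

This mirrors the documented tradeoff: a bigger `--index-capacity` means fewer collisions and fewer fallback scans, at the cost of a larger inline table.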
Algorithmic complexity below uses N = visible messages in the pool (depends on message sizes and pool capacity), M = index slot count.
| Operation | Complexity | Notes |
|---|---|---|
| Append | O(1) + O(payload bytes) | Writes one frame, updates one index slot, publishes the header. durability=flush adds OS flush cost. |
| Get by seq (`fetch POOL SEQ`) | Usually O(1); O(N) worst case | If the index slot matches, it's a direct jump. If the slot is overwritten/stale/invalid (or M=0), it scans forward from the tail until it finds (or passes) the target seq. |
| Tail / follow (`follow --tail`) | O(k) to emit k; then O(1)/message | Steady-state work is per message. Tag filters are cheap; `--where` runs a jq predicate per message. |
| Replay window (`follow --since ... --replay`) | O(R) | Linear in R, the number of replayed messages. |
| Validate (`doctor`, `pool info` warnings) | O(N) | Full ring scan. Index checks are sampled/best-effort diagnostics. |
Native bindings:

```go
client, _ := plasmite.NewClient("./data")
pool, _ := client.CreatePool(plasmite.PoolRefName("events"), 1024*1024)
pool.Append(map[string]any{"sensor": "temp", "value": 23.5}, nil, plasmite.DurabilityFast)
```

```python
from plasmite import Client, Durability

client = Client("./data")
pool = client.create_pool("events", 1024*1024)
pool.append_json(b'{"sensor": "temp", "value": 23.5}', [], Durability.FAST)
```

```javascript
const { Client, Durability } = require("plasmite")
const client = new Client("./data")
const pool = client.createPool("events", 1024 * 1024)
pool.appendJson(Buffer.from('{"sensor": "temp", "value": 23.5}'), [], Durability.Fast)
```

See Go bindings, Python bindings, and Node bindings.
Specs: CLI | API | Remote protocol
Guides: Serving & remote access | Distribution
Contributing: See AGENTS.md for CI hygiene; docs/record/releasing.md for release process
Changelog | Inspired by Oblong Industries' Plasma.
MIT. See THIRD_PARTY_NOTICES.md for vendored code.
