The open protocol for real-time sync to client applications
HTTP-based durable streams for streaming data reliably to web browsers, mobile apps, and native clients with offset-based resumability.
Durable Streams provides a simple, production-proven protocol for creating and consuming ordered, replayable data streams with support for catch-up reads and live tailing.
Tip
Read the Announcing Durable Streams post on the Electric blog.
Modern applications frequently need ordered, durable sequences of data that can be replayed from arbitrary points and tailed in real time. Common patterns include:
- AI conversation streaming - Stream LLM token responses with resume capability across reconnections
- Agentic apps - Stream tool outputs and progress events with replay and clean reconnect semantics
- Database synchronization - Stream database changes to web, mobile, and native clients
- Collaborative editing - Sync CRDTs and operational transforms across devices
- Real-time updates - Push application state to clients with guaranteed delivery
- Event sourcing - Build event-sourced architectures with client-side replay
- Workflow execution - Stream workflow state changes with full history
While durable streams exist throughout backend infrastructure (database WALs, Kafka topics, event stores), they aren't available as a first-class primitive for client applications. There's no simple, HTTP-based durable stream that sits alongside databases and object storage as a standard cloud primitive.
WebSocket and SSE connections are easy to start, but they're fragile in practice: tabs get suspended, networks flap, devices switch, pages refresh. When that happens, you either lose in-flight data or build a bespoke backend storage and client resume protocol on top.
AI products make this painfully visible. Token streaming is the UI for chat and copilots, and agentic apps stream progress events, tool outputs, and partial results over long-running sessions. When the stream fails, the product fails—even if the model did the right thing.
Durable Streams addresses this gap. It's a minimal HTTP-based protocol for durable, offset-based streaming designed for client applications across all platforms: web browsers, mobile apps, native clients, IoT devices, and edge workers. It's based on 1.5 years of production use at Electric for real-time Postgres sync, where it reliably delivers millions of state changes every day.
What you get:
- Refresh-safe - Users refresh the page, switch tabs, or background the app—they pick up exactly where they left off
- Share links - A stream is a URL. Multiple viewers can watch the same stream together in real-time
- Never re-run - Don't repeat expensive work because a client disconnected mid-stream
- Multi-device - Start on your phone, continue on your laptop, watch from a shared link—all in sync
- Multi-tab - Works seamlessly across browser tabs without duplicating connections or missing data
- Massive fan-out - CDN-friendly design means one origin can serve millions of concurrent viewers
The protocol is:
- 🌐 Universal - Works anywhere HTTP works: web browsers, mobile apps, native clients, IoT devices, edge workers
- 📦 Simple - Built on standard HTTP with no custom protocols
- 🔄 Resumable - Offset-based reads let you resume from any point
- ⚡ Real-time - Long-poll and SSE modes for live tailing with catch-up from any offset
- 💰 Economical - HTTP-native design leverages CDN infrastructure for efficient scaling
- 🎯 Flexible - Content-type agnostic byte streams
- 🔌 Composable - Build higher-level abstractions on top (like Electric's real-time Postgres sync engine)
npm install @durable-streams/client # TypeScript client
npm install @durable-streams/server # Reference Node.js server
npm install @durable-streams/state # State Protocol primitives
npm install @durable-streams/cli # Development & testing CLI
Client libraries for Go, Python, Elixir, .NET, Swift, PHP, Java, Rust, and Ruby are available in the packages/ directory. See the Client Libraries section below for details.
For production use, download the Caddy-based server binary from GitHub releases. Available for macOS (Intel & ARM), Linux (AMD64 & ARM64), and Windows.
# Start the server
./durable-streams-server dev
# Server runs on http://localhost:4437
This monorepo contains:
| Package | Language | Description |
|---|---|---|
| @durable-streams/client | TypeScript | Reference client with full read/write support |
| client-py | Python | Python client library |
| client-go | Go | Go client library |
| client-elixir | Elixir | Elixir client library |
| client-dotnet | C#/.NET | .NET client library |
| client-swift | Swift | Swift client library |
| client-php | PHP | PHP client library |
| client-java | Java | Java client library |
| client-rust | Rust | Rust client library |
| client-rb | Ruby | Ruby client library |
- @durable-streams/server - Node.js reference server implementation
- caddy-plugin - Production Caddy server plugin
- @durable-streams/cli - Command-line tool
- Test UI - Visual web interface for testing and exploring streams
- @durable-streams/server-conformance-tests - Server protocol compliance tests (124 tests)
- @durable-streams/client-conformance-tests - Client protocol compliance tests (221 tests)
- @durable-streams/benchmarks - Performance benchmarking suite
Run the local server and use either the web-based Test UI or the command-line CLI:
# Clone and install
git clone https://github.com/durable-streams/durable-streams.git
cd durable-streams
pnpm install
pnpm build
# Terminal 1: Start the local server
pnpm start:dev
# Terminal 2: Launch the Test UI
cd examples/test-ui
pnpm dev
Open http://localhost:3000 to:
- Create and manage streams with different content types (text/plain, application/json, binary)
- Write messages with keyboard shortcuts
- Monitor real-time stream updates
- View the stream registry to see all active streams
- Inspect stream metadata and content-type rendering
See the Test UI README for details.
# Clone and install
git clone https://github.com/durable-streams/durable-streams.git
cd durable-streams
pnpm install
pnpm build
# Terminal 1: Start the local server
pnpm start:dev
# Terminal 2: Link the CLI globally (one-time setup)
pnpm link:dev
# Set the server URL
export STREAM_URL=http://localhost:4437
# Create a stream
durable-stream-dev create my-stream
# Terminal 3: Start reading (will show data as it arrives)
durable-stream-dev read my-stream
# Back in Terminal 2: Write data and watch it appear in Terminal 3
durable-stream-dev write my-stream "Hello, world!"
durable-stream-dev write my-stream "More data..."
echo "Piped content!" | durable-stream-dev write my-streamSee the CLI README for details.
The Test UI and CLI share the same __registry__ system stream, so streams created in one are visible in the other.
The @durable-streams/client package provides full read/write support. For high-throughput writes with exactly-once delivery, use IdempotentProducer:
import { DurableStream, IdempotentProducer } from "@durable-streams/client"
// Create a stream
const stream = await DurableStream.create({
url: "https://your-server.com/v1/stream/my-stream",
contentType: "application/json",
})
// Create an idempotent producer for reliable writes
const producer = new IdempotentProducer(stream, "my-service-1", {
autoClaim: true, // Auto-recover from epoch conflicts
onError: (err) => console.error("Batch failed:", err),
})
// High-throughput writes - fire-and-forget, automatically batched & pipelined
for (const event of events) {
producer.append(event) // Don't await - errors go to onError callback
}
// Ensure all messages are delivered before shutdown
await producer.flush()
await producer.close()
// Read with the streaming API
const res = await stream.stream<{ event: string; userId: string }>({
live: false,
})
const items = await res.json()
console.log(items) // [{ event: "user.created", userId: "123" }, ...]
// Read and save the offset
const res = await handle.stream({ live: false })
const text = await res.text()
const savedOffset = res.offset // Save this for later
// Resume from saved offset (catch-up mode returns immediately)
const resumed = await handle.stream({ offset: savedOffset, live: false })
// Subscribe to live updates
const res = await handle.stream({ live: "auto" })
res.subscribeJson(async (batch) => {
for (const item of batch.items) {
console.log("Received:", item)
}
saveCheckpoint(batch.offset) // Persist for resumption
})
Here's the protocol in action with raw HTTP:
Create a stream:
curl -X PUT https://your-server.com/v1/stream/my-stream \
-H "Content-Type: application/json"Append data:
curl -X POST https://your-server.com/v1/stream/my-stream \
-H "Content-Type: application/json" \
-d '{"event":"user.created","userId":"123"}'
# Server returns:
# Stream-Next-Offset: abc123xyz
Read from beginning:
curl "https://your-server.com/v1/stream/my-stream?offset=-1"
# Server returns:
# Stream-Next-Offset: abc123xyz
# Cache-Control: public, max-age=60
# [response body with data]
Resume from offset:
curl "https://your-server.com/v1/stream/my-stream?offset=abc123xyz"Live tail (long-poll):
curl "https://your-server.com/v1/stream/my-stream?offset=abc123xyz&live=long-poll"
# Waits for new data, returns when available or times out
The key headers:
- Stream-Next-Offset - Resume point for the next read (exactly-once delivery)
- Cache-Control - Enables CDN/browser caching for historical reads
- Content-Type - Set at stream creation, preserved for all reads
Durable Streams operates in two modes for handling message boundaries:
By default, Durable Streams is a raw byte stream with no message boundaries. When you append data, it's concatenated directly. Each read returns all bytes from your offset to the current end of the stream, but these boundaries don't align with application-level "messages."
// Append multiple messages
await handle.append("hello")
await handle.append("world")
// Read from beginning - returns all data concatenated
const res = await handle.stream({ live: false })
const text = await res.text()
// text = "helloworld" (complete stream from offset to end)
// If more data arrives and you read again from the returned offset
await handle.append("!")
const next = await handle.stream({ offset: res.offset, live: false })
const newText = await next.text()
// newText = "!" (complete new data from last offset to new end)You must implement your own framing. For example, newline-delimited JSON (NDJSON):
// Write with newlines
await handle.append(JSON.stringify({ event: "user.created" }) + "\n")
await handle.append(JSON.stringify({ event: "user.updated" }) + "\n")
// Parse line by line
const res = await handle.stream({ live: false })
const text = await res.text()
const messages = text.split("\n").filter(Boolean).map(JSON.parse)
When creating a stream with contentType: "application/json", the server guarantees message boundaries. Each read returns a complete JSON array of the messages appended since the last offset.
// Create a JSON-mode stream
const handle = await DurableStream.create({
url: "https://your-server.com/v1/stream/my-stream",
contentType: "application/json",
})
// Append individual JSON values
await handle.append({ event: "user.created", userId: "123" })
await handle.append({ event: "user.updated", userId: "123" })
// Stream individual JSON messages
const res = await handle.stream({ live: false })
for await (const message of res.jsonStream()) {
console.log(message)
// { event: "user.created", userId: "123" }
// { event: "user.updated", userId: "123" }
}
In JSON mode:
- Each append() stores one message
- Supports all JSON types: objects, arrays, strings, numbers, booleans, null
- Message boundaries are preserved across reads
- Reads return JSON arrays of all messages
- Ideal for structured event streams
SSE mode supports all content types, including binary. For binary content types (e.g., application/octet-stream), the server automatically base64-encodes data events and signals this via the Stream-SSE-Data-Encoding: base64 response header. Clients detect this header and decode automatically:
// TypeScript
const stream = await DurableStream.create({
url: "https://your-server.com/v1/stream/my-binary-stream",
contentType: "application/octet-stream",
})
const response = await stream.read({ live: "sse" })
response.subscribe((chunk) => {
console.log(chunk.data) // Uint8Array - automatically decoded from base64
})
# Python
for chunk in stream.read(live="sse"):
print(chunk.data) # bytes - automatically decoded from base64
// Go
iter := stream.Read(ctx, durable.WithLive(durable.LiveModeSSE))
for iter.Next() {
fmt.Println(iter.Data()) // []byte - automatically decoded
}
The server automatically base64-encodes data for any non-text content type in SSE mode. Clients decode transparently.
Offsets are opaque tokens that identify positions within a stream:
- Opaque strings - Treat as black boxes; don't parse or construct them
- Lexicographically sortable - You can compare offsets to determine ordering
"-1"means start - Useoffset: "-1"to read from the beginning"now"means tail - Useoffset: "now"to skip existing data and read only new data- Server-generated - Always use the
offsetvalue returned in responses - Unique - Each offset is unique within a stream
- Monotonically increasing - All offsets are monotonically increasing within a stream
// Start from beginning (catch-up mode)
const res = await stream({
url: "https://your-server.com/v1/stream/my-stream",
offset: "-1",
live: false,
})
// Resume from last position (always use returned offset)
const next = await stream({
url: "https://your-server.com/v1/stream/my-stream",
offset: res.offset,
live: false,
})
The special offset values are "-1" for stream start and "now" for the current tail. All other offsets are opaque strings returned by the server—never construct or parse them yourself.
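For a live-only consumer that doesn't care about history, here's a minimal sketch combining `offset: "now"` with a live subscription. It assumes the same `stream()` helper and `subscribeJson` callback shown earlier; `saveCheckpoint` is a hypothetical persistence hook.

```typescript
// Tail-only reader: skip existing data and receive only new messages (a sketch).
const tail = await stream({
  url: "https://your-server.com/v1/stream/my-stream",
  offset: "now", // start at the current tail; no historical replay
  live: "auto",  // let the client pick long-poll or SSE for live updates
})
tail.subscribeJson(async (batch) => {
  for (const item of batch.items) {
    console.log("New item:", item)
  }
  saveCheckpoint(batch.offset) // hypothetical helper: persist for a later catch-up resume
})
```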
Durable Streams is built on a simple HTTP-based protocol. See PROTOCOL.md for the complete specification.
Core operations:
- PUT /stream/{path} - Create a new stream
- POST /stream/{path} - Append bytes to a stream
- GET /stream/{path}?offset=X - Read from a stream (catch-up)
- GET /stream/{path}?offset=X&live=long-poll - Live tail (long-poll)
- GET /stream/{path}?offset=X&live=sse - Live tail (Server-Sent Events)
- DELETE /stream/{path} - Delete a stream
- HEAD /stream/{path} - Get stream metadata
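The curl examples earlier cover create, append, and reads; for completeness, here's a minimal sketch of the delete and metadata operations using fetch. Which headers a HEAD response carries beyond Content-Type is an assumption here; PROTOCOL.md is authoritative.

```typescript
// Inspect stream metadata without downloading the body (a sketch; see PROTOCOL.md
// for the authoritative set of response headers).
const head = await fetch("https://your-server.com/v1/stream/my-stream", {
  method: "HEAD",
})
console.log(head.headers.get("Content-Type"))       // content type fixed at creation
console.log(head.headers.get("Stream-Next-Offset")) // assumed: current tail offset

// Delete the stream and its stored data.
await fetch("https://your-server.com/v1/stream/my-stream", { method: "DELETE" })
```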
Key features:
- Exactly-once delivery guarantee with offset-based resumption
- Opaque, lexicographically sortable offsets for resumption
- Optional sequence numbers for writer coordination
- TTL and expiry time support
- Content-type preservation
- CDN-friendly caching and request collapsing
This is where the economics change. Because reads are offset-based URLs, CDNs can cache and collapse requests—meaning one origin can serve millions of concurrent viewers without breaking a sweat.
# Request
GET /v1/stream/my-stream?offset=abc123
# Response
HTTP/1.1 200 OK
Cache-Control: public, max-age=60, stale-while-revalidate=300
ETag: "stream-id:abc123:xyz789"
Stream-Next-Offset: xyz789
Content-Type: application/json
[response body]
How it works:
- Shared streams - Backends write to a stream, multiple clients subscribe to it in real-time
- Offset-based URLs - Same offset = same data, perfect for caching
- Request collapsing - 10,000 viewers at the same offset become one upstream request
- Edge delivery - CDN edges serve catch-up reads; origin only handles new writes
What this enables:
- Live dashboards - Thousands of viewers watch the same metrics stream
- Collaborative apps - Multiple users sync to the same document or workspace
- Shared debugging - Watch a user's session in real-time to troubleshoot together
Durable Streams is built for production scale:
- Low latency - Sub-15ms end-to-end delivery in production deployments
- High concurrency - Tested with millions of concurrent clients subscribed to a single stream without degradation
- Minimal overhead - The protocol itself adds minimal overhead; throughput scales with your infrastructure
- Horizontal scaling - Offset-based design enables aggressive caching at CDN edges, so read-heavy workloads (common in sync and AI scenarios) scale horizontally without overwhelming origin servers
Backend streaming systems like Kafka, RabbitMQ, and Kinesis excel at server-to-server messaging and backend event processing. Durable Streams complements these systems by solving a different problem: reliably streaming data to client applications.
The challenges of streaming to clients are distinct from server-to-server streaming:
- Client diversity - Supporting web browsers, mobile apps, native clients, each with different capabilities and constraints
- Network unreliability - Clients disconnect constantly (backgrounded tabs, network switches, page refreshes)
- Resumability requirements - Clients need to pick up exactly where they left off without data loss
- Economics - Per-connection costs make dedicated connections to millions of clients prohibitive
- Protocol compatibility - Kafka/AMQP protocols don't run in browsers or on most mobile platforms
- Data shaping and authorization - Backend streams typically contain raw, unfiltered events; client streams need per-user filtering, transformation, and authorization applied
Complementary architecture:
Kafka/RabbitMQ → Application Server → Durable Streams → Clients
(server-to-server) (shapes data, (server-to-client)
authorizes)
Your application server consumes from backend streaming systems, applies authorization logic, shapes data for specific clients, and fans out via Durable Streams. This separation allows:
- Backend systems to optimize for throughput, partitioning, and server-to-server reliability
- Application servers to enforce authorization boundaries and transform data
- Durable Streams to optimize for HTTP compatibility, CDN leverage, and client resumability
- Each layer to use protocols suited to its environment
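To make that separation concrete, here's a minimal sketch of the middle layer: an application server consuming a backend topic (kafkajs is used purely as an example), applying per-user authorization and shaping, then appending the result to a per-user durable stream. The topic name, URL scheme, and the `isVisibleToUser` / `pickClientFields` helpers are hypothetical.

```typescript
import { Kafka } from "kafkajs"
import { DurableStream } from "@durable-streams/client"

// Sketch of the fan-out layer: backend stream in, per-client durable streams out.
const kafka = new Kafka({ clientId: "fanout", brokers: ["localhost:9092"] })
const consumer = kafka.consumer({ groupId: "client-fanout" })

// One durable stream per user (hypothetical per-user URL scheme).
const streams = new Map<string, DurableStream>()
async function streamFor(userId: string): Promise<DurableStream> {
  let s = streams.get(userId)
  if (!s) {
    s = await DurableStream.create({
      url: `https://your-server.com/v1/stream/users/${userId}/events`,
      contentType: "application/json",
    })
    streams.set(userId, s)
  }
  return s
}

await consumer.connect()
await consumer.subscribe({ topic: "raw-events", fromBeginning: false })
await consumer.run({
  eachMessage: async ({ message }) => {
    const event = JSON.parse(message.value!.toString())
    // Authorization and shaping live here, in the application server.
    if (!isVisibleToUser(event, event.userId)) return // hypothetical auth check
    const shaped = pickClientFields(event) // hypothetical per-client projection
    await (await streamFor(event.userId)).append(shaped)
  },
})
```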
SSE and WebSockets are good transports for real-time delivery. Durable Streams uses SSE as one of its delivery modes (live=sse). The difference isn't the transport; it's what sits behind it.
SSE/WebSockets give you a connection. Durable Streams gives you a log.
SSE can reconnect (via Last-Event-ID), but it doesn't standardize what happens on the server side: there's no defined log format, no retention semantics, no catch-up protocol, no multi-reader coordination. Most real-world SSE endpoints don't persist history—if you disconnect, the data is gone.
Durable Streams standardizes the durable log underneath:
- Persistent storage - Data survives disconnections, refreshes, and server restarts
- Offset-based resumption - Resume from any position with well-defined semantics
- Unified catch-up and live - Same API for historical replay and real-time tailing
- Multi-reader support - Multiple clients can subscribe to the same stream
- CDN-friendly - Offset-based URLs enable aggressive caching and request collapsing
- Stateless servers - Clients track their own offsets; no connection state to manage
- Standard protocol - Reusable clients and framework integrations that work with any conforming server
The result: SSE (or long-poll) for the last mile, durable log semantics everywhere else. You get the simplicity of HTTP streaming with the reliability of an append-only log.
The protocol is designed to support implementations in any language or platform. A conforming server implementation requires:
- HTTP API - Implement the protocol operations (PUT, POST, GET, DELETE, HEAD) as defined in PROTOCOL.md
- Durable storage - Persist stream data with offset tracking (in-memory, file-based, database, object storage, etc.)
- Offset management - Generate opaque, lexicographically sortable offset tokens
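As an illustration of the offset requirement above, a fixed-width encoding keeps tokens opaque to clients while sorting lexicographically in append order. This is a sketch, not how any particular server implementation encodes offsets.

```typescript
// Sketch: encode a byte position as a fixed-width base-36 string. Zero-padding
// makes lexicographic (string) order match numeric order, so clients can compare
// offsets without being able to interpret them.
function encodeOffset(bytePosition: number): string {
  return bytePosition.toString(36).padStart(12, "0")
}

encodeOffset(0)  // "000000000000"
encodeOffset(10) // "00000000000a" (sorts after every smaller position)
```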
Client implementations need only support standard HTTP requests and offset tracking.
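To show how little a client actually needs, here's a minimal catch-up-then-long-poll reader built directly on the HTTP operations and the Stream-Next-Offset header described above. It's a sketch: it assumes timed-out long-polls return an empty body and omits backoff and error handling.

```typescript
// Minimal protocol-level reader: one catch-up read from the start, then a
// long-poll loop that resumes from the server-provided Stream-Next-Offset.
async function follow(url: string, onData: (chunk: string) => void): Promise<void> {
  let res = await fetch(`${url}?offset=-1`) // catch-up read from the beginning
  onData(await res.text())
  let offset = res.headers.get("Stream-Next-Offset") ?? "-1"

  while (true) {
    res = await fetch(`${url}?offset=${encodeURIComponent(offset)}&live=long-poll`)
    if (!res.ok) continue // a real client would back off and retry here
    const body = await res.text()
    if (body.length > 0) onData(body) // assumed: timed-out long-polls return an empty body
    offset = res.headers.get("Stream-Next-Offset") ?? offset
  }
}

// Usage (hypothetical stream URL):
// follow("https://your-server.com/v1/stream/my-stream", (chunk) => console.log(chunk))
```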
This monorepo includes client libraries for 10 languages: TypeScript, Python, Go, Elixir, .NET, Swift, PHP, Java, Rust, and Ruby. All pass the conformance test suite. Use the test suite to verify your own implementations:
import { runConformanceTests } from "@durable-streams/server-conformance-tests"
runConformanceTests({
baseUrl: "http://localhost:4437",
})
import { DurableStreamTestServer } from "@durable-streams/server"
const server = new DurableStreamTestServer({
port: 4437,
host: "127.0.0.1",
})
await server.start()
console.log(`Server running on ${server.baseUrl}`)
See @durable-streams/server for more details.
We welcome community implementations! If you've built a Durable Streams client or server, please open a PR to add it here.
Go (External)
- ahimsalabs/durable-streams-go: An alternative Go client and server implementation.
Java (External)
- Clickin/durable-streams-java: An alternative Java client with framework adapters.
# Set the server URL (defaults to http://localhost:4437)
export STREAM_URL=https://your-server.com
# Create a stream
durable-stream create my-stream
# Write to a stream
echo "hello world" | durable-stream write my-stream
# Read from a stream
durable-stream read my-stream
# Delete a stream
durable-stream delete my-stream
Stream database changes to web and mobile clients for real-time synchronization:
// Server: stream database changes
for (const change of db.changes()) {
await handle.append(change) // JSON objects batched automatically
}
// Client: receive and apply changes (works in browsers, React Native, native apps)
const res = await handle.stream<Change>({
offset: lastSeenOffset,
live: "auto",
})
res.subscribeJson(async (batch) => {
for (const change of batch.items) {
applyChange(change)
}
saveOffset(batch.offset)
})
Build event-sourced systems with durable event logs:
// Append events
await handle.append({ type: "OrderCreated", orderId: "123" })
await handle.append({ type: "OrderPaid", orderId: "123" })
// Replay from beginning (catch-up mode for full replay)
const res = await handle.stream<Event>({ offset: "-1", live: false })
const events = await res.json()
const state = events.reduce(applyEvent, initialState)
LLM inference is expensive. When a user's tab gets suspended, their network flaps, or they refresh the page, you don't want to re-run the generation—you want them to pick up exactly where they left off.
import {
DurableStream,
IdempotentProducer,
StaleEpochError,
} from "@durable-streams/client"
// Server: stream tokens to a durable stream (continues even if client disconnects)
const stream = await DurableStream.create({
url: `https://your-server.com/v1/stream/generation/${generationId}`,
contentType: "text/plain",
})
// Use IdempotentProducer for reliable, exactly-once token delivery
let fenced = false
const producer = new IdempotentProducer(stream, generationId, {
autoClaim: true, // Auto-recover if another worker took over
lingerMs: 10, // Send batches every 10ms for low latency
onError: (error) => {
if (error instanceof StaleEpochError) {
fenced = true
console.log("Another worker took over, stopping gracefully")
}
},
})
for await (const token of llm.stream(prompt)) {
if (fenced) break
producer.append(token) // Batched & deduplicated automatically
}
await producer.flush() // Ensure all tokens are delivered
await producer.close()
// Client: resume from last seen position (refresh-safe)
const res = await stream.stream({ offset: lastSeenOffset, live: "auto" })
res.subscribe((chunk) => {
renderTokens(chunk.data)
saveOffset(chunk.offset) // Persist for next resume
})What this gives you:
- Tab suspended? User comes back, catches up from saved offset—no re-generation
- Page refresh? Continues from last token, not from the beginning
- Share the generation? Multiple viewers watch the same stream in real-time
- Switch devices? Start on mobile, continue on desktop, same stream
- Worker failover? Another worker can take over with a new epoch—no duplicate tokens
Use the conformance test suite to verify your server implements the protocol correctly:
The easiest way to run conformance tests against your server:
# Run tests once (for CI)
npx @durable-streams/server-conformance-tests --run http://localhost:4437
# Watch mode - reruns tests when source files change (for development)
npx @durable-streams/server-conformance-tests --watch src http://localhost:4437
You can also run the tests programmatically in your own test suite:
import { runConformanceTests } from "@durable-streams/server-conformance-tests"
runConformanceTests({
baseUrl: "http://localhost:4437",
})
Measure your server's performance:
import { runBenchmarks } from "@durable-streams/benchmarks"
runBenchmarks({
baseUrl: "http://localhost:4437",
environment: "local",
})
We welcome contributions! This project follows the Contributor Covenant code of conduct.
# Clone the repository
git clone https://github.com/durable-streams/durable-streams.git
cd durable-streams
# Install dependencies
pnpm install
# Build all packages
pnpm build
# Run tests
pnpm test:run
# Lint and format
pnpm lint:fix
pnpm format
We use changesets for version management:
# Add a changeset
pnpm changeset
# Version packages (done by CI)
pnpm changeset:version
# Publish (done by CI)
pnpm changeset:publish
Apache 2.0 - see LICENSE
Status: Early development - API subject to change