Simple durability, made flexible.
Heads-up: Breaking change: konserve 0.9 now requires a UUID under :id in the store configuration. See below.
A simple document store protocol defined with synchronous and core.async
semantics to allow Clojuresque collection operations on associative key-value
stores, both from Clojure and ClojureScript, for different backends. Data is
generally serialized with edn semantics or, if supported, as native binary blobs,
and can be accessed similarly to the clojure.core functions get-in, assoc-in
and update-in. update-in in particular runs a function atomically and
returns both the old and the new value. Each operation is run atomically and must be
consistent (in fact ACID), but further consistency across keys is only optionally supported, depending on the backend.
- cross-platform between Clojure and ClojureScript
- lowest common denominator interface for an associative datastructure with edn semantics
- thread-safety with atomicity over key operations
- fast serialization options (fressian, transit, …), independent of the underlying kv-store
- very low overhead protocol, including direct binary access for high throughput
- no additional dependencies and setup required for IndexedDB in the browser and the file backend on the JVM and Node.js
- avoids blocking IO, e.g. the filestore will not block any thread on reading
(require '[konserve.core :as k])
;; All stores require a UUID :id for global identification
;; Generate once: (java.util.UUID/randomUUID) or (random-uuid)
;; Then use the literal in your config
(def config {:backend :memory
             :id #uuid "550e8400-e29b-41d4-a716-446655440000"})
;; Create new store, pass opts as separate argument
(def store (k/create-store config {:sync? true}))
;; Use the store
(k/assoc-in store [:user] {:name "Alice"} {:sync? true})
(k/get-in store [:user] nil {:sync? true})
;; => {:name "Alice"}
(k/update-in store [:user :age] (fnil inc 0) {:sync? true})
;; => [nil 1]
;; Clean up
(k/delete-store config {:sync? true})

All konserve stores require a globally unique :id field containing a UUID.
This ensures stores can be uniquely identified and matched across different backends,
machines, and synchronization contexts.
Why UUIDs are required:
- Global identifiability: Match stores regardless of backend type or file path
- Cross-machine sync: Identify the same logical store across different systems
- High entropy: 128-bit UUIDs prevent collisions
- Backend-agnostic: Same ID works for memory, file, S3, Redis, etc.
How to use UUIDs:
;; 1. Generate a UUID once (in your REPL or terminal)
(java.util.UUID/randomUUID) ;; Clojure
(random-uuid) ;; ClojureScript
;; => #uuid "550e8400-e29b-41d4-a716-446655440000"
;; 2. Copy the UUID and use it as a literal in your config
{:backend :memory
 :id #uuid "550e8400-e29b-41d4-a716-446655440000"}
;; 3. Pass opts as separate argument to store functions
(k/create-store config {:sync? true})
;; 4. Use the SAME UUID every time for the same logical store
;; 5. Use DIFFERENT UUIDs for different stores (dev, test, prod)

Important:
- Generate a UUID once and use it consistently for the same store
- Store the UUID in your application config (EDN files support #uuid literals)
- Different stores (dev, test, prod) should have different UUIDs
- Never generate UUIDs dynamically in your code; use fixed literals
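One way to follow these rules is to keep the UUID in an EDN config file and read it at startup; a sketch using plain clojure.edn (the path resources/store-config.edn is only an example):

;; resources/store-config.edn contains, e.g.:
;; {:backend :file
;;  :id #uuid "550e8400-e29b-41d4-a716-446655440000"
;;  :path "/tmp/my-store"}
(require '[clojure.edn :as edn])

;; clojure.edn reads #uuid tagged literals out of the box
(def config (edn/read-string (slurp "resources/store-config.edn")))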
Add to your deps.edn:
{:deps {org.replikativ/konserve {:mvn/version "LATEST"}}}

Or to your project.clj:

[org.replikativ/konserve "LATEST"]

Konserve supports both synchronous and asynchronous execution modes via core.async.
Synchronous mode (:sync? true):
(def config {:backend :memory
             :id #uuid "550e8400-e29b-41d4-a716-446655440000"})
(def store (k/create-store config {:sync? true}))
(k/assoc-in store [:key] "value" {:sync? true})
(k/get-in store [:key] nil {:sync? true})
;; => "value"Asynchronous mode (:sync? false):
(require '[clojure.core.async :refer [go <!]])
(def config {:backend :memory
             :id #uuid "550e8400-e29b-41d4-a716-446655440001"})
(go
  (def store (<! (k/create-store config {:sync? false})))
  (<! (k/assoc-in store [:key] "value"))
  (println (<! (k/get-in store [:key]))))
;; => "value"

Konserve provides five key lifecycle functions:
- create-store - Create a new store, errors if already exists
- connect-store - Connect to existing store, errors if doesn't exist
- store-exists? - Check if store exists at the given configuration
- release-store - Release connections and resources held by a store
- delete-store - Delete underlying storage
(def config {:backend :file
             :id #uuid "550e8400-e29b-41d4-a716-446655440002"
             :path "/tmp/my-store"})
;; Check if store exists
(k/store-exists? config {:sync? true}) ;; => false
;; Create new store (errors if already exists)
(def store (k/create-store config {:sync? true}))
;; Use the store...
(k/assoc-in store [:data] {:value 42} {:sync? true})
;; Later, connect to existing store (errors if doesn't exist)
;; (def store (k/connect-store config {:sync? true}))
;; Clean up resources
(k/release-store config store {:sync? true})
;; Delete underlying storage
(k/delete-store config {:sync? true})
;; Verify deletion
(k/store-exists? config {:sync? true}) ;; => false

All backends follow consistent strict semantics:
Strict semantics (All backends: File, S3, DynamoDB, Redis, LMDB, RocksDB, IndexedDB, Memory with :id):
- create-store - Creates new store, errors if already exists
- connect-store - Connects to existing store, errors if doesn't exist
- store-exists? - Checks for existence before create/connect
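Under these semantics a common startup pattern is to probe with store-exists? and then connect or create accordingly; a small sketch with a config map as above:

(def store
  (if (k/store-exists? config {:sync? true})
    (k/connect-store config {:sync? true})
    (k/create-store config {:sync? true})))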
An in-memory store wrapping an Atom, available for both Clojure and ClojureScript.
(require '[konserve.core :as k])
;; Persistent registry-based store with ID
(def config {:backend :memory
             :id #uuid "550e8400-e29b-41d4-a716-446655440003"})
(def my-db (k/create-store config {:sync? true}))
;; Later sessions can reconnect:
;; (def my-db (k/connect-store config {:sync? true}))

A file-system store using fressian serialization. No setup or additional dependencies needed.
(require '[konserve.core :as k])
(def config {:backend :file
             :id #uuid "550e8400-e29b-41d4-a716-446655440004"
             :path "/tmp/konserve-store"})
;; Create new store
(def my-db (k/create-store config {:sync? true}))
;; Or connect to existing
;; (def my-db (k/connect-store config {:sync? true}))

The file store supports:
- Optional fsync control via :sync-blob? false for better performance
- Custom java.nio.file.FileSystem instances via the :filesystem parameter
- Thoroughly tested using Jimfs (Google's in-memory NIO filesystem)
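The file store also supports the direct binary access mentioned in the feature list, via bassoc and bget. A minimal sketch reusing the store created above; the map passed to the bget callback is backend-specific, on the JVM it contains an :input-stream:

;; Write a binary blob directly, without edn serialization
(k/bassoc my-db :blob (byte-array (range 10)) {:sync? true})

;; Read it back; the callback runs while the key is locked
(k/bget my-db :blob
        (fn [{:keys [input-stream]}]
          (println "read" (count (.readAllBytes input-stream)) "bytes"))
        {:sync? true})
;; prints: read 10 bytes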
For Node.js environments, require the Node.js-specific file store:
(require '[konserve.core :as k]
         '[konserve.node-filestore] ;; Registers :file backend for Node.js
         '[clojure.core.async :refer [go <!]])
(go
  (def config {:backend :file
               :id #uuid "550e8400-e29b-41d4-a716-446655440005"
               :path "/tmp/konserve-store"})
  (def my-db (<! (k/create-store config {:sync? false}))))

IndexedDB backend for ClojureScript browser applications. Async-only, must be explicitly required.
(require '[konserve.core :as k]
         '[konserve.indexeddb] ;; Register :indexeddb backend
         '[clojure.core.async :refer [go <!]])
(go
  (def config {:backend :indexeddb
               :id #uuid "550e8400-e29b-41d4-a716-446655440006"
               :name "my-app-db"})
  ;; Create new store
  (def my-idb-store (<! (k/create-store config {:sync? false})))
  ;; Use the store
  (<! (k/assoc-in my-idb-store [:user] {:name "Alice" :age 30}))
  (<! (k/get-in my-idb-store [:user]))
  ;; Multi-key atomic operations
  (<! (k/multi-assoc my-idb-store {:user1 {:name "Alice"}
                                   :user2 {:name "Bob"}}))
  ;; Efficient bulk retrieval - returns sparse map of found keys
  (<! (k/multi-get my-idb-store [:user1 :user2 :nonexistent]))
  ;; => {:user1 {:name "Alice"} :user2 {:name "Bob"}}
  ;; Atomic bulk delete
  (<! (k/multi-dissoc my-idb-store [:user1 :user2]))
  ;; Clean up
  (<! (k/delete-store config {:sync? false})))

The IndexedDB implementation supports atomic multi-key operations through IndexedDB's native transaction model.
External backends integrate seamlessly through the unified store interface. After requiring a backend module, it automatically registers with the multimethod dispatch system.
(require '[konserve.core :as k])
(require '[konserve-s3.core]) ;; Registers :s3 backend
(def config {:backend :s3
             :id #uuid "550e8400-e29b-41d4-a716-446655440007"
             :bucket "my-bucket"
             :region "us-east-1"})
;; Create new store
(def s3-store (k/create-store config {:sync? true}))
;; Use the store
(k/assoc-in s3-store [:data] {:value 42} {:sync? true})
;; Later, connect to existing store
;; (def s3-store (k/connect-store config {:sync? true}))
;; Clean up
(k/delete-store config {:sync? true})

Available external backends:
- :s3 - AWS S3 (konserve-s3)
- :dynamodb - AWS DynamoDB (konserve-dynamodb)
- :redis - Redis (konserve-redis)
- :lmdb - LMDB (konserve-lmdb)
- :rocksdb - RocksDB (konserve-rocksdb)
- :jdbc - JDBC databases (konserve-jdbc)
- konserve-gcs - Google Cloud Storage
The following projects are incompatible with the latest konserve release, but describe the usage of the underlying store API:
- LevelDB: konserve-leveldb
- CouchDB: konserve-clutch
- Riak: konserve-welle
Konserve supports tiered storage with a frontend cache layer and backend persistence layer. Combines a fast frontend store (e.g., in-memory) with a durable backend store (e.g., filesystem).
(require '[konserve.core :as k])
(def config {:backend :tiered
             :id #uuid "550e8400-e29b-41d4-a716-446655440008"
             :frontend-config {:backend :memory
                               :id #uuid "550e8400-e29b-41d4-a716-446655440009"}
             :backend-config {:backend :file
                              :id #uuid "550e8400-e29b-41d4-a716-44665544000a"
                              :path "/tmp/store"}
             :write-policy :write-through
             :read-policy :frontend-first})
;; Create tiered store (creates both frontend and backend)
(def tiered-store (k/create-store config {:sync? true}))
;; Use the store
(k/assoc-in tiered-store [:data] {:value 42} {:sync? true})
;; Clean up
(k/delete-store config {:sync? true})

Write policies:
- :write-through - Write to backend, then frontend synchronously
- :write-around - Write only to backend, invalidate frontend
Read policies:
- :frontend-first - Check frontend first, fall back to backend (populates frontend)
- :frontend-only - Only read from frontend
The tiered store supports multi-key operations (multi-get, multi-assoc, multi-dissoc)
when both stores support them. During initialization, multi-get combined with multi-assoc
enables efficient bulk sync from backend to frontend.
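For illustration, a sketch of such a bulk sync done by hand; frontend-store, backend-store and the key vector are hypothetical names, and both stores are assumed to support multi-key operations:

;; Copy a known set of keys from the durable backend store into the
;; fast frontend store with two bulk operations
(let [ks      [:user1 :user2 :settings]
      entries (k/multi-get backend-store ks {:sync? true})]
  (k/multi-assoc frontend-store entries {:sync? true}))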
Multi-key operations provide atomic bulk operations for supported backends:
(require '[konserve.core :as k])
;; Check if backend supports multi-key operations
(k/multi-key-capable? store) ;; => true/false
;; Atomic bulk write
(k/multi-assoc store {:user1 {:name "Alice"}
                      :user2 {:name "Bob"}
                      :user3 {:name "Carol"}}
               {:sync? true})
;; Efficient bulk read - returns sparse map (only found keys)
(k/multi-get store [:user1 :user2 :nonexistent] {:sync? true})
;; => {:user1 {:name "Alice"} :user2 {:name "Bob"}}
;; Atomic bulk delete
(k/multi-dissoc store [:user1 :user2] {:sync? true})

Backends with multi-key support:
- Memory store
- IndexedDB
- Tiered store (when both layers support it)
Write hooks are invoked after every successful write operation, enabling reactive patterns like store synchronization, change logging, or triggering side effects.
(require '[konserve.core :as k])
(def config {:backend :memory
             :id #uuid "550e8400-e29b-41d4-a716-44665544000b"})
(def store (k/create-store config {:sync? true}))
;; Register a hook to log all writes
(k/add-write-hook! store ::my-logger
  (fn [{:keys [api-op key value]}]
    (println "Write:" api-op key "->" value)))
;; Writes now trigger the hook
(k/assoc-in store [:user] {:name "Alice"} {:sync? true})
;; Prints: Write: :assoc-in :user -> {:name "Alice"}
;; Remove hook when done
(k/remove-write-hook! store ::my-logger)

Hook function receives:
- :api-op - The operation (:assoc-in, :update-in, :dissoc, :bassoc, :multi-assoc, :multi-dissoc)
- :key - The top-level key being written
- :key-vec - Full key path (for assoc-in/update-in)
- :value - The value written
- :old-value - Previous value (for update operations)
- :kvs - Map of key->value (for multi-assoc)
- :keys - Collection of keys (for multi-dissoc)
Hooks are invoked at the API layer (in konserve.core), so they work consistently
across all store backends. Stores must implement the PWriteHookStore protocol.
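As a sketch of the store-synchronization pattern mentioned above, a hook can mirror writes into a second store; mirror-store is a hypothetical second store created beforehand, and only :assoc-in writes are handled here:

(k/add-write-hook! store ::mirror
  (fn [{:keys [api-op key value]}]
    ;; Only plain assoc-in writes are mirrored in this sketch;
    ;; other operations (:update-in, :dissoc, ...) are ignored
    (when (= api-op :assoc-in)
      (k/assoc-in mirror-store [key] value {:sync? true}))))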
Konserve has a garbage collector that can be called manually when the store gets too crowded.
(require '[konserve.gc :as gc])
;; Evict keys older than cutoff date, keep whitelisted keys
(gc/sweep! store cutoff-date whitelist {:sync? true})

The function konserve.gc/sweep! allows you to provide a cut-off date to evict old keys
and a whitelist for keys that should be kept.
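For example, the two arguments could be bound as follows (a sketch that keeps the argument order shown above and assumes a java.util.Date cutoff and a set of keys as whitelist):

;; Keys older than roughly 30 days become eligible for eviction
(def thirty-days-ms (* 30 24 60 60 1000))
(def cutoff-date (java.util.Date. (- (System/currentTimeMillis) thirty-days-ms)))
;; Whitelisted keys are kept regardless of age
(def whitelist #{:config :current-user})

(gc/sweep! store cutoff-date whitelist {:sync? true})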
Compression and encryption are supported by the default store implementation used by all current backends except lmdb and memory.
;; Store configuration with compression and encryption
(def config {:backend :file
             :id #uuid "550e8400-e29b-41d4-a716-44665544000c"
             :path "/tmp/secure-store"
             :config {:encryptor {:type :aes
                                  :key "s3cr3t"}
                      :compressor {:type :lz4}}})
(def store (k/create-store config {:sync? true}))

Compression:
- LZ4 compression (JVM only)
Encryption:
- AES/CBC/PKCS{5/7}Padding with a 256-bit key
- Different salt for each written value
- Same cold storage format for JVM and JS (cross-runtime compatible)
Different formats for edn serialization, like fressian, transit or a simple
pr-str version, are supported and can be combined with different stores. Stores
ship with a reasonable default setting. You can extend the serialization
protocol to other formats if needed, and incognito support lets you read and
write custom records.
For synchronous execution, normal exceptions are thrown. For asynchronous
error handling we follow the semantics of the go-try and <? macros provided by
the superv.async library. You just need these two: <? checks whether the
channel yielded an exception and rethrows it, while go-try catches exceptions
and passes them along as return values so errors don't get lost.
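A short sketch of this style, assuming superv.async's default supervisor S:

(require '[superv.async :refer [S go-try <?]])

(go-try S
  ;; <? rethrows an exception that arrives on the channel, so failures
  ;; inside the go block are not silently dropped
  (let [v (<? S (k/get-in store [:user]))]
    (println "got" v)))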
We provide a backend implementation guide.
New in 2025: External backends can register with the unified store dispatch system by defining multimethod implementations for:
- konserve.store/create-store - Create new store, error if exists
- konserve.store/connect-store - Connect to existing store, error if doesn't exist
- konserve.store/store-exists? - Check if store exists
- konserve.store/delete-store - Delete underlying storage
- konserve.store/release-store - Release resources
All backends must implement strict semantics where create-store errors if the store
already exists and connect-store errors if the store doesn’t exist. See existing
external backends (konserve-s3, konserve-lmdb, konserve-rocksdb, konserve-redis,
konserve-dynamodb) for reference implementations.
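A skeleton of such a registration might look like this; dispatching on the :backend key and the [config opts] argument vector are assumptions for illustration, consult the backend implementation guide for the exact signatures:

(require '[konserve.store :as store])

;; Hypothetical backend keyword :my-backend
(defmethod store/create-store :my-backend
  [config opts]
  ;; Must error if the store already exists (strict semantics)
  (throw (ex-info "not implemented yet" {:config config})))

(defmethod store/connect-store :my-backend
  [config opts]
  ;; Must error if the store does not exist
  (throw (ex-info "not implemented yet" {:config config})))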
- The protocol is used in production and originated as an elementary storage protocol for replikativ and datahike.
- kampbell maps collections of entities to konserve and enforces specs.
Konserve assumes it accesses its keyspace in the store exclusively. It uses hasch to support arbitrary edn keys and hence does not normally clash with outside usage even when the same keys are used. To support multiple konserve clients in the store, the backend must support locking and proper transactions on keys internally, which is the case for backends like CouchDB, Redis and Riak.
Copyright © 2014-2026 Christian Weilbach and contributors
Distributed under the Eclipse Public License either version 1.0 or (at your option) any later version.