A distributed key-value store built on the Raft consensus protocol, implementing strong consistency across multiple nodes.
Inspired by Diego Ongaro and John Ousterhout's Raft paper and MIT's 6.824 Distributed Systems course lectures.
Raft is a consensus algorithm designed to ensure strong consistency and fault tolerance in distributed systems. It achieves consensus by electing a leader that coordinates log replication and decision-making across nodes. Unlike Paxos, which—while theoretically robust—can be complex to understand and implement, Raft's leader-based approach simplifies the process of reaching agreement. This clarity makes Raft not only easier to implement and debug but also highly effective in building resilient, high-availability systems.
Most of the implementation details are based on the Raft paper:
- Leader election with randomized timeouts
- Log replication via AppendEntries
- Linearizable reads and writes through leader forwarding
- Safety guaranteed through consensus
- Persistent state (term, log entries)
- Automatic log compaction via snapshots
- Dynamic server joining
- Automatic leader election
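The log-replication features above center on the AppendEntries RPC. As a rough sketch, the request and reply below follow the field names from Figure 2 of the Raft paper; they are illustrative and not necessarily the types this repository defines.

```go
package main

import "fmt"

// LogEntry holds one replicated command together with the term in
// which the leader first received it.
type LogEntry struct {
	Term    int
	Command string
}

// AppendEntriesArgs mirrors the AppendEntries arguments from the
// Raft paper: the leader ships new entries plus enough context
// (PrevLogIndex/PrevLogTerm) for the follower to check that its log
// matches the leader's up to that point.
type AppendEntriesArgs struct {
	Term         int        // leader's current term
	LeaderID     int        // lets followers redirect clients to the leader
	PrevLogIndex int        // index of the entry immediately preceding Entries
	PrevLogTerm  int        // term of that entry, used for the consistency check
	Entries      []LogEntry // empty for heartbeats
	LeaderCommit int        // leader's commit index, so followers can advance theirs
}

// AppendEntriesReply reports whether the follower's log matched at
// PrevLogIndex/PrevLogTerm; on mismatch the leader retries with an
// earlier PrevLogIndex.
type AppendEntriesReply struct {
	Term    int
	Success bool
}

func main() {
	args := AppendEntriesArgs{
		Term: 3, LeaderID: 1, PrevLogIndex: 4, PrevLogTerm: 2,
		Entries:      []LogEntry{{Term: 3, Command: "set mykey myvalue"}},
		LeaderCommit: 4,
	}
	fmt.Printf("%+v\n", args)
}
```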
Start at least 3 servers to form a cluster:

```sh
go run cmd/server/main.go -datadir ./tmp/server_1 -port 8001
go run cmd/server/main.go -datadir ./tmp/server_2 -port 8002
go run cmd/server/main.go -datadir ./tmp/server_3 -port 8003
```

Set a value:
```sh
go run cmd/client/main.go \
  --servers http://localhost:8001,http://localhost:8002,http://localhost:8003 \
  -op set \
  -key mykey \
  -value myvalue
```

Get a value:
```sh
go run cmd/client/main.go \
  --servers http://localhost:8001,http://localhost:8002,http://localhost:8003 \
  -op get \
  -key mykey
```

- Go
```sh
go build ./cmd/server
go build ./cmd/client
```

Run the tests:

```sh
go test ./tests/... -v
```