High-Performance Redis-Compatible Server

A multi-threaded, persistent TCP key-value store engineered in Go from scratch.

This project implements a subset of the Redis Serialization Protocol (RESP) and handles 440,000+ requests per second on a standard laptop while remaining fully thread-safe (verified under Go's race detector).
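
For context, a client command such as `SET mykey hello` travels over the wire as a RESP Array of Bulk Strings. The sketch below is illustrative only (it is not code from this repository, and `encodeCommand` is a hypothetical name) and shows how that encoding is built:

```go
package main

import (
	"fmt"
	"strings"
)

// encodeCommand renders a command and its arguments as a RESP Array of Bulk
// Strings, the format a client sends on the wire (e.g. "SET mykey hello").
func encodeCommand(args ...string) string {
	var b strings.Builder
	fmt.Fprintf(&b, "*%d\r\n", len(args)) // array header: number of elements
	for _, a := range args {
		fmt.Fprintf(&b, "$%d\r\n%s\r\n", len(a), a) // bulk string: byte length, then payload
	}
	return b.String()
}

func main() {
	// Prints: "*3\r\n$3\r\nSET\r\n$5\r\nmykey\r\n$5\r\nhello\r\n"
	fmt.Printf("%q\n", encodeCommand("SET", "mykey", "hello"))
}
```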

🚀 Features

  • Core Commands: SET, GET, PING, ECHO.
  • Data Structures: Lists (RPUSH, LPUSH, LRANGE, LPOP, LLEN).
  • Concurrency: Thread-safe architecture using sync.RWMutex and atomic operations to handle thousands of concurrent clients.
  • Protocol: Custom-built strict RESP parser (supports Arrays, Bulk Strings, Integers, Errors).
  • Persistence: In-memory storage engine with support for TTL (Time-To-Live) expiration; a simplified sketch of the RWMutex/TTL approach follows this list.
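
A minimal sketch of an RWMutex-guarded map with TTL, assuming the approach described above. The `Store` type and its methods are illustrative names, not the repository's actual API:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// entry pairs a stored value with an optional expiration time
// (a zero time means the key never expires).
type entry struct {
	value     string
	expiresAt time.Time
}

// Store guards an in-memory map with a sync.RWMutex: GETs take the shared
// read lock so they can run in parallel, while SETs take the exclusive lock.
type Store struct {
	mu   sync.RWMutex
	data map[string]entry
}

func NewStore() *Store {
	return &Store{data: make(map[string]entry)}
}

// Set stores a value; ttl <= 0 means no expiration.
func (s *Store) Set(key, value string, ttl time.Duration) {
	e := entry{value: value}
	if ttl > 0 {
		e.expiresAt = time.Now().Add(ttl)
	}
	s.mu.Lock()
	s.data[key] = e
	s.mu.Unlock()
}

// Get returns the value for key, treating expired entries as absent.
func (s *Store) Get(key string) (string, bool) {
	s.mu.RLock()
	e, ok := s.data[key]
	s.mu.RUnlock()
	if !ok || (!e.expiresAt.IsZero() && time.Now().After(e.expiresAt)) {
		return "", false
	}
	return e.value, true
}

func main() {
	s := NewStore()
	s.Set("mykey", "hello", 5*time.Second)
	if v, ok := s.Get("mykey"); ok {
		fmt.Println(v) // hello
	}
}
```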

📊 Performance & Reliability Benchmarks

Environment: Intel Core i5-11300H (4 Cores) @ 3.10GHz | 50 Concurrent Clients

The following benchmarks measure throughput (Requests Per Second) for core operations.

  • Raw Throughput: Runs with compiler optimizations enabled.
  • Thread-Safe Verified: Runs with Go's Data Race Detector (-race) enabled, validating memory safety and lock correctness under load.

| Operation | Raw Throughput (Optimized) | Thread-Safe Verified (-race) |
| --- | --- | --- |
| PING (Connection) | 447,900 req/s | 310,542 req/s |
| SET (Write Lock) | 429,523 req/s | 274,971 req/s |
| GET (Read Lock) | 425,571 req/s | 300,008 req/s |
| LRANGE (List Read) | 167,662 req/s | 115,739 req/s |

Note: The -race benchmarks incur significant CPU overhead to track memory accesses, yet the engine still sustains 275k+ req/s, demonstrating that the sync.RWMutex locking strategy remains efficient under instrumentation.

🔍 Engineering Insight: The LPUSH Trade-off

You might notice LPUSH performance drops significantly at high volumes (from ~40k req/s down to ~4k req/s).

  • Observation: Throughput degrades as the list grows.
  • Root Cause: The current implementation uses Go slices. Prepending to a slice (LPUSH) is an O(N) operation because it requires allocating a new backing array and copying every existing element to shift it.
  • Future Optimization: Migrating to a Doubly Linked List or ZipList (as standard Redis does) to achieve O(1) write performance; the trade-off is sketched below.
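
For illustration, a minimal comparison of the two approaches. The function names are hypothetical and the code is not taken from this repository:

```go
package main

import (
	"container/list"
	"fmt"
)

// lpushSlice prepends a value to a slice-backed list. This is O(N): a new
// backing array is allocated and every existing element is copied over.
func lpushSlice(items []string, v string) []string {
	return append([]string{v}, items...)
}

// lpushLinked prepends to a doubly linked list. This is O(1): only the head
// pointers change, regardless of how long the list already is.
func lpushLinked(l *list.List, v string) {
	l.PushFront(v)
}

func main() {
	s := []string{"b", "c"}
	s = lpushSlice(s, "a")
	fmt.Println(s) // [a b c]

	l := list.New()
	l.PushBack("b")
	l.PushBack("c")
	lpushLinked(l, "a")
	fmt.Println(l.Front().Value) // a
}
```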

🛠️ Usage

1. Build and Run the Server

go build -o redis-server cmd/main.go
./redis-server


2. Connect with Redis Client

You can use the standard `redis-cli` to interact with this server.

```bash
redis-cli -p 6379

127.0.0.1:6379> SET mykey "Hello System Design"
OK
127.0.0.1:6379> GET mykey
"Hello System Design"
```

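List commands can be exercised the same way. The session below is illustrative and assumes the replies mirror standard Redis semantics:

```bash
127.0.0.1:6379> RPUSH mylist "a" "b" "c"
(integer) 3
127.0.0.1:6379> LRANGE mylist 0 -1
1) "a"
2) "b"
3) "c"
```
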
🧪 Running Tests

This project includes both unit tests for logic and integration benchmarks for performance.

Run Logic Tests:

go test -v ./db ./cmd/ -run TestTCPServerCommands

Run Benchmarks:

# Ensure server is running on port 6379 in another terminal first!
go test -v -bench=. ./cmd/ -run=^$
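
One plausible way to reproduce the "-race" column, assuming the race detector is enabled on the server binary itself (adjust if the original runs enabled it elsewhere):

```bash
# Hypothetical: rebuild the server with the race detector enabled and run it
# in another terminal, then rerun the benchmarks against it.
go build -race -o redis-server cmd/main.go
./redis-server

# In a second terminal:
go test -v -bench=. ./cmd/ -run=^$
```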
