A fault-tolerant, observable, and concurrent key-value store service built with Go, gRPC, Redis, Kubernetes, and Prometheus.
- gRPC API: High-performance SET/GET operations
- Concurrent Handling: Leverages Go goroutines for handling many simultaneous requests
- Persistent Storage: Uses Redis for distributed storage
- Containerized: Single Docker image deployment
- Kubernetes Orchestration: 3 replicas with load balancing and health probes
- Observability: Prometheus metrics for request count and latency
- Go (Golang): Core application language
- gRPC: API protocol
- Redis: Backend storage
- Docker: Containerization
- Kubernetes: Orchestration
- Prometheus: Metrics collection
- Grafana: Metrics visualization
- Go 1.21 or later
- Docker
- Kubernetes cluster (Minikube or Kind)
- protoc (Protocol Buffers compiler) with the protoc-gen-go and protoc-gen-go-grpc plugins
# Install Go dependencies
make deps
# Or manually:
go mod download
go mod tidy

Generate the protobuf code and build the binary:

make proto
make build

First, start Redis:
docker run -d -p 6379:6379 redis:7-alpine

Then run the service:
make run
# Or:
./bigtablelite -redis-addr localhost:6379

The service will start:
- gRPC server on port 50051
- Prometheus metrics on port 9090 at /metrics
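The startup order matters: the metrics endpoint is served from its own goroutine so it does not block the gRPC listener, and grpc-go then serves each incoming RPC on its own goroutine, which is where the concurrent request handling comes from. The following is a minimal sketch of that wiring, assuming the conventional promhttp and grpc-go setup; the actual entry point in this repo may be organized differently.

```go
package main

import (
	"log"
	"net"
	"net/http"

	"github.com/prometheus/client_golang/prometheus/promhttp"
	"google.golang.org/grpc"
)

func main() {
	// Expose Prometheus metrics on :9090/metrics from a separate goroutine
	// so it runs alongside the gRPC server below.
	go func() {
		http.Handle("/metrics", promhttp.Handler())
		log.Fatal(http.ListenAndServe(":9090", nil))
	}()

	// gRPC listener on :50051. grpc-go handles each incoming RPC in its
	// own goroutine, giving concurrent request handling for free.
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}
	srv := grpc.NewServer()
	// proto.RegisterBigTableLiteServer(srv, ...) would be called here with
	// the Redis-backed implementation (see the handler sketch in the API section).
	log.Fatal(srv.Serve(lis))
}
```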
Build the Docker image:
make docker-build
# Or:
docker build -t bigtablelite:latest .

Ensure you have a Kubernetes cluster running (Minikube or Kind):
# For Minikube
minikube start
# For Kind
kind create cluster

- Build and load the Docker image (for local clusters):
# Build the image
make docker-build
# Load into Minikube
minikube image load bigtablelite:latest
# Or for Kind
kind load docker-image bigtablelite:latest

- Deploy Redis:
kubectl apply -f k8s/redis-deployment.yaml

- Deploy BigTableLite:
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml

- Deploy Prometheus (optional, for monitoring):
kubectl apply -f k8s/prometheus-deployment.yaml

- Deploy Grafana (optional, for visualization):
kubectl apply -f k8s/grafana-deployment.yaml

# Check pods
kubectl get pods
# Check services
kubectl get services
# Check logs
kubectl logs -l app=bigtablelite --tail=50

# Get service URLs
minikube service bigtablelite-service --url
minikube service prometheus-service --url
minikube service grafana-service --url

Install grpcurl:
# macOS
brew install grpcurl
# Or download from: https://github.com/fullstorydev/grpcurl

Test the API:
# Set a value
grpcurl -plaintext -d '{"key": "test", "value": "hello world"}' \
localhost:50051 bigtablelite.BigTableLite/Set
# Get a value
grpcurl -plaintext -d '{"key": "test"}' \
localhost:50051 bigtablelite.BigTableLite/Get

Create a simple test client:
package main

import (
	"context"
	"log"

	"bigtablelite/proto"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Connect to the gRPC server without TLS (fine for local testing).
	conn, err := grpc.Dial("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to connect: %v", err)
	}
	defer conn.Close()

	client := proto.NewBigTableLiteClient(conn)

	// Set a key-value pair.
	if _, err := client.Set(context.Background(), &proto.SetRequest{
		Key: "test", Value: "hello",
	}); err != nil {
		log.Fatalf("Set failed: %v", err)
	}

	// Get the value back.
	resp, err := client.Get(context.Background(), &proto.GetRequest{Key: "test"})
	if err != nil {
		log.Fatalf("Get failed: %v", err)
	}
	log.Println(resp)
}

The service exposes the following metrics at http://localhost:9090/metrics:
- bigtablelite_requests_total: Total number of requests (labeled by method and status)
- bigtablelite_request_duration_seconds: Request latency histogram (labeled by method)
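A rough sketch of how these two metrics could be defined and updated with the Prometheus Go client (client_golang). The metric names come from the list above; the label sets and the Observe helper are illustrative assumptions, not necessarily how this repo structures its instrumentation.

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
)

var (
	// Counter labeled by method and status, matching bigtablelite_requests_total.
	RequestsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
		Name: "bigtablelite_requests_total",
		Help: "Total number of requests",
	}, []string{"method", "status"})

	// Latency histogram labeled by method, matching bigtablelite_request_duration_seconds.
	RequestDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
		Name: "bigtablelite_request_duration_seconds",
		Help: "Request latency in seconds",
	}, []string{"method"})
)

// Observe records one request outcome; a handler would typically call it via defer.
func Observe(method, status string, start time.Time) {
	RequestsTotal.WithLabelValues(method, status).Inc()
	RequestDuration.WithLabelValues(method).Observe(time.Since(start).Seconds())
}
```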
# Local
curl http://localhost:9090/metrics
# In Kubernetes
kubectl port-forward svc/bigtablelite-service 9090:9090
curl http://localhost:9090/metrics

- Access Grafana at the service URL (default: http://localhost:3000)
- Log in with admin / admin
- Add Prometheus as a data source:
  - URL: http://prometheus-service:9090
- Create dashboards to visualize:
  - Request rate
  - Request latency (p50, p95, p99)
  - Error rate
The service accepts the following command-line flags:
- -grpc-port: gRPC server port (default: 50051)
- -metrics-port: Prometheus metrics port (default: 9090)
- -redis-addr: Redis address (default: localhost:6379)
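These map directly onto Go's standard flag package. A minimal sketch, assuming the service defines them with flag (the variable names here are illustrative):

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Flag names and defaults mirror the list above.
	grpcPort := flag.Int("grpc-port", 50051, "gRPC server port")
	metricsPort := flag.Int("metrics-port", 9090, "Prometheus metrics port")
	redisAddr := flag.String("redis-addr", "localhost:6379", "Redis address")
	flag.Parse()

	fmt.Println(*grpcPort, *metricsPort, *redisAddr)
}
```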
Run the tests:

make test

Clean build artifacts:

make clean

Set: Stores a key-value pair.
Request:
message SetRequest {
string key = 1;
string value = 2;
}

Response:
message SetResponse {
bool success = 1;
string message = 2;
}

Get: Retrieves a value by key.
Request:
message GetRequest {
string key = 1;
}

Response:
message GetResponse {
bool found = 1;
string value = 2;
string message = 3;
}
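Putting the two RPCs together, here is a hedged sketch of what a Redis-backed implementation of this API could look like using go-redis. The embedded UnimplementedBigTableLiteServer type and the request/response field names are assumed from the proto definitions above; the real handlers may differ (for example around TTLs, error mapping, or metrics instrumentation).

```go
package main

import (
	"context"

	"github.com/redis/go-redis/v9"

	"bigtablelite/proto"
)

// server sketches a Redis-backed implementation of the Set and Get RPCs.
// The embedded Unimplemented type comes from the generated proto package
// and is assumed here.
type server struct {
	proto.UnimplementedBigTableLiteServer
	rdb *redis.Client
}

func (s *server) Set(ctx context.Context, req *proto.SetRequest) (*proto.SetResponse, error) {
	// Store the value with no expiration; the real service may apply a TTL.
	if err := s.rdb.Set(ctx, req.Key, req.Value, 0).Err(); err != nil {
		return &proto.SetResponse{Success: false, Message: err.Error()}, nil
	}
	return &proto.SetResponse{Success: true, Message: "OK"}, nil
}

func (s *server) Get(ctx context.Context, req *proto.GetRequest) (*proto.GetResponse, error) {
	val, err := s.rdb.Get(ctx, req.Key).Result()
	if err == redis.Nil {
		// A missing key is reported via found=false rather than a gRPC error.
		return &proto.GetResponse{Found: false, Message: "key not found"}, nil
	}
	if err != nil {
		return &proto.GetResponse{Found: false, Message: err.Error()}, nil
	}
	return &proto.GetResponse{Found: true, Value: val, Message: "OK"}, nil
}
```

Ensure Redis is running and accessible: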
# Check Redis connection
redis-cli ping
# In Kubernetes, check Redis service
kubectl get svc redis-service
kubectl logs -l app=redis

Verify the service is running:
# Check if port is listening
lsof -i :50051
# In Kubernetes
kubectl get pods -l app=bigtablelite
kubectl logs -l app=bigtablelite

Check the metrics endpoint:
curl http://localhost:9090/metrics