A simple, framework-agnostic rate limiter for Gleam with pluggable storage. 🚫
- ✨ Simple and easy to use.
- 🔑 Rate limits based on any key (e.g. IP address or user ID).
- 🪣 Uses a token bucket algorithm to rate limit requests.
- 🗄️ Works out of the box with in-memory storage; no back-end service needed.
- 🔌 Pluggable store backend for distributed rate limiting (e.g. Redis, Postgres).
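As background, the token bucket idea mentioned above can be sketched in plain Gleam. This is an illustrative model only, not glimit's actual implementation: each key owns a bucket that refills at a fixed rate up to a capacity, and a hit is allowed only if a whole token can be taken.

```gleam
import gleam/float
import gleam/int

// Illustrative token bucket state (not glimit's internal types).
pub type Bucket {
  Bucket(tokens: Float, capacity: Float, refill_per_ms: Float, updated_at: Int)
}

/// Refill the bucket based on elapsed time, then try to take one token.
/// Returns whether the hit is allowed, plus the updated bucket.
pub fn hit(bucket: Bucket, now_ms: Int) -> #(Bool, Bucket) {
  let elapsed_ms = int.to_float(now_ms - bucket.updated_at)
  let refilled =
    float.min(
      bucket.capacity,
      bucket.tokens +. elapsed_ms *. bucket.refill_per_ms,
    )
  case refilled >=. 1.0 {
    True ->
      #(True, Bucket(..bucket, tokens: refilled -. 1.0, updated_at: now_ms))
    False -> #(False, Bucket(..bucket, tokens: refilled, updated_at: now_ms))
  }
}
```

A bucket with `capacity: 2.0` and `refill_per_ms: 0.002` allows bursts of two requests and then roughly two more per second, which is the behaviour the `per_second(2)` example below exhibits.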
A very minimalistic example of how to use glimit would be the following snippet:

```gleam
import glimit

let limiter =
  glimit.new()
  |> glimit.per_second(2)
  |> glimit.identifier(fn(x) { x })
  |> glimit.on_limit_exceeded(fn(_req) { "Too many requests" })

let handler =
  fn(_req) { "Hello, world!" }
  |> glimit.apply(limiter)

handler("🚀") // "Hello, world!"
handler("💫") // "Hello, world!"
handler("💫") // "Hello, world!"
handler("💫") // "Too many requests"
handler("🚀") // "Hello, world!"
handler("🚀") // "Too many requests"
```

You can also use `glimit.build` and `glimit.hit` for direct rate limit checks without wrapping a function:
```gleam
import glimit

let assert Ok(limiter) =
  glimit.new()
  |> glimit.per_second(10)
  |> glimit.identifier(fn(x) { x })
  |> glimit.on_limit_exceeded(fn(_) { "Stop!" })
  |> glimit.build

case glimit.hit(limiter, "user_123") {
  Ok(Nil) -> "allowed"
  Error(glimit.RateLimited) -> "rejected"
  Error(_) -> "store unavailable, fails open"
}
```

More practical examples can be found in the `examples/` directory, such as Wisp or Mist servers, or a Redis backend.
By default, rate limit state is stored in memory using an OTP actor. For distributed rate limiting across multiple nodes, you can provide a custom `Store` that persists bucket state in an external service like Redis or Postgres.

All token bucket logic stays in glimit: adapters only implement the `lock_and_get` / `set_and_unlock` / `unlock` operations. The `glimit/bucket` module provides `to_pairs` / `from_pairs` helpers for serialization.
See examples/redis/ for a complete Redis adapter using valkyrie.
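As a rough illustration of the adapter contract described above, a custom store boils down to three callbacks. Note that the `Store` record shape, `StoreError` type, and callback signatures below are hypothetical assumptions for illustration only; only the operation names come from this document, so consult the hexdocs reference and `examples/redis/` for the real interface.

```gleam
// HYPOTHETICAL sketch of the adapter contract; the types and signatures
// here are assumptions, not the verified glimit API.
import gleam/option.{type Option}
import glimit/bucket.{type Bucket}

pub type StoreError {
  Unavailable
}

// An adapter supplies these three operations; glimit keeps all the
// token bucket logic itself.
pub type Store {
  Store(
    // Take a per-key lock and return the current bucket state, if any.
    lock_and_get: fn(String) -> Result(Option(Bucket), StoreError),
    // Persist the updated bucket state and release the lock.
    set_and_unlock: fn(String, Bucket) -> Result(Nil, StoreError),
    // Release the lock without writing.
    unlock: fn(String) -> Result(Nil, StoreError),
  )
}
```

A Redis-backed adapter would typically serialize the bucket with `bucket.to_pairs` before writing and rebuild it with `bucket.from_pairs` after reading.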
When no store is configured, the rate limiter uses the default in-memory backend. This is simple and fast, but scoped to the BEAM VM it runs in. If your application runs on multiple nodes, rate limits will not be shared between them.
Every hit goes through the pluggable `Store` interface (`lock_and_get` / `set_and_unlock`). In-memory mode backs this with an OTP actor (two messages per hit); external store mode calls the adapter directly.
- Memory (in-memory mode): One dict entry per unique identifier. Full and idle buckets are automatically swept every 10 seconds.
- Fail-open: If the store is unavailable or a lock cannot be acquired, the request is allowed through rather than rejected.
Further documentation can be found at https://hexdocs.pm/glimit/glimit.html.
Contributions such as PRs, bug reports, or suggestions are more than welcome!