Conversation
ETS provides lower-latency rate limiting by using atomic table operations directly, avoiding the overhead of OTP actor messages. Suitable for single-node deployments where low latency is important.

- ets_store.gleam: `EtsStore` type implementing the `bucket.Store` interface
- ets_store_ffi.erl: Erlang FFI for ETS operations (new/get/set/delete/sweep/size)
- `glimit.ets_store()` builder for easy integration
- Periodic sweep timer for cleaning up full and idle buckets
- 10 new tests covering all ETS store functionality
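The PR body mentions a periodic sweep of full and idle buckets. As a hedged illustration only (not the PR's actual FFI code), an idle-bucket sweep over an ETS table could be written in Erlang with `ets:select_delete/2`, assuming a hypothetical row layout of `{Key, Tokens, LastSeenMs}`:

```erlang
%% Hypothetical sweep: delete buckets whose LastSeenMs is older than TtlMs.
%% select_delete/2 returns the number of rows deleted and traverses the
%% table inside ETS, without copying surviving rows out.
sweep(Table, TtlMs) ->
    Cutoff = erlang:monotonic_time(millisecond) - TtlMs,
    ets:select_delete(Table, [{{'_', '_', '$1'}, [{'<', '$1', Cutoff}], [true]}]).
```

Running this from a periodic timer keeps the table bounded without ever locking it for readers.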
Document the new ETS-backed storage backend with usage example, performance characteristics, and comparison to in-memory mode.
Hi @rapind, thank you very much for these PRs. I'll code review them this weekend. Would this be an option to use as the default in-memory backend? What do you think?
Yes, 100%. It's an upgrade: same behaviour without the serialization bottleneck. I can rework this PR to replace the default if you like.
@rapind That would be great! I think there would be no use for the current in-memory backend (OTP actor), so it would be awesome if this could completely replace that. Let me know if I could help, thank you very much for contributing!
Replace the OTP actor-based in-memory store with ETS as the default. ETS provides lower-latency, lock-free rate limiting without actor message overhead. Remove the memory_store module entirely; custom stores can still be plugged in via glimit.store().
All set. |
    //// of OTP actor messages. Suitable for single-node deployments where
    //// low latency is important.
    ////
    //// Unlike the default in-memory store (which serializes through an actor),
Now that we've changed the "default in-memory store" (even removed it), this comment might be a bit confusing.
Also, could the explanation about atomicity be a bit misleading? It is true that ETS operations are atomic, but the get and set operations are done separately, so there is a race condition. Two separate BEAM processes could get the same value and try to set the same value, so the second set is wrong.
This problem is not new, it was already in the OTP actor setup, but it was a trade-off between low code complexity and low latency at the cost of a small chance of overshooting and letting a small amount of logs through that should have been throttled.
What do you think? Please let me know if I misunderstand something.
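To make the race the reviewer describes concrete, here is a minimal Erlang sketch (not code from this PR; the row layout and function names are hypothetical). The separate lookup and insert leave a window in which two BEAM processes can interleave, whereas for a plain counter `ets:update_counter/3` performs the read-modify-write as one atomic ETS operation:

```erlang
%% Hypothetical bucket row: {Key, Tokens}.

%% Racy: two processes can both read Tokens = 5 and both write 4,
%% so one consumed token goes unrecorded (slight overshoot).
take_token(Table, Key) ->
    [{Key, Tokens}] = ets:lookup(Table, Key),  % read
    ets:insert(Table, {Key, Tokens - 1}),      % write: race window in between
    Tokens - 1.

%% Atomic for a simple counter: the decrement of element 2 happens
%% inside ETS as a single operation, with no read/write window.
take_token_atomic(Table, Key) ->
    ets:update_counter(Table, Key, {2, -1}).
```

A real token bucket also stores a refill timestamp, which a single `update_counter` call cannot update together with the token count, so the get/set pair plus a small chance of overshoot is the trade-off the reviewer is pointing at.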
    * 📏 Rate limits based on any key (e.g. IP address, or user ID).
    * 🪣 Uses a Token Bucket algorithm to rate limit requests.
    * 🗄️ Works out of the box with in-memory storage; no back-end service needed.
    * ⚡ ETS-backed by default for low-latency, lock-free rate limiting.
I know that some people use this package mainly because it doesn't need a separate backend service like Redis. Could we keep it being mentioned? Also, I'm wondering if "lock-free" is something people look for in a package like this. Maybe something like this?
    - * ⚡ ETS-backed by default for low-latency, lock-free rate limiting.
    + * ⚡ ETS-backed by default for low-latency rate limiting; no separate back-end service needed.
    ## Pluggable Store Backend

    - By default, rate limit state is stored in-memory using an OTP actor. For distributed rate limiting across multiple nodes, you can provide a custom `Store` that persists bucket state in an external service like Redis or Postgres.
    + By default, rate limit state is stored in ETS (Erlang Term Storage) using lock-free atomic operations. For distributed rate limiting across multiple nodes, you can provide a custom `Store` that persists bucket state in an external service like Redis or Postgres.
The same as what I said earlier: "lock-free atomic operations" might be a bit misleading as the code does have a race condition.
    - By default, rate limit state is stored in ETS (Erlang Term Storage) using lock-free atomic operations. For distributed rate limiting across multiple nodes, you can provide a custom `Store` that persists bucket state in an external service like Redis or Postgres.
    + By default, rate limit state is stored in ETS (Erlang Term Storage). For distributed rate limiting across multiple nodes, you can provide a custom `Store` that persists bucket state in an external service like Redis or Postgres.
Summary

- `glimit/ets_store` module, an ETS-backed implementation of `bucket.Store` that avoids OTP actor messages. Suitable for single-node deployments where low latency matters.
- `glimit.ets_store()` builder for easy integration

Usage
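The Usage section's content is collapsed in this extract, so the following is only a hedged Gleam sketch of how the new builder might be wired in. Only `glimit.ets_store()` comes from this PR; the other builder functions (`new`, `per_second`, `identifier`, `on_limit_exceeded`) are assumed from glimit's existing builder-style API and may differ:

```gleam
import glimit

// Hypothetical wiring: limit to 10 requests/second per client IP,
// with bucket state held in the ETS store added by this PR.
// `request.ip` and `too_many_requests_response` are placeholders
// for whatever your web framework provides.
let limiter =
  glimit.new()
  |> glimit.per_second(10)
  |> glimit.ets_store()
  |> glimit.identifier(fn(request) { Ok(request.ip) })
  |> glimit.on_limit_exceeded(fn(_request) { too_many_requests_response })
```

The resulting limiter would then wrap a request handler, exactly as with the previous actor-backed default; only the storage behind it changes.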
Test plan