Add ETS-backed storage backend#29

Open
rapind wants to merge 3 commits into nootr:main from pairshaped:ets-store
Conversation

@rapind rapind commented Mar 13, 2026

Summary

  • Adds glimit/ets_store module, an ETS-backed implementation of bucket.Store
  • ETS operations are lock-free and concurrent, avoiding the overhead of OTP actor
    messages. Suitable for single-node deployments where low latency matters.
  • Adds glimit.ets_store() builder for easy integration
  • Includes periodic sweep timer for cleaning up full and idle buckets
  • 10 new tests covering all ETS store functionality
  • README updated with ETS mode documentation

Usage

```gleam
let limiter =
  glimit.new()
  |> glimit.per_second(10)
  |> glimit.ets_store()
  |> glimit.identifier(fn(request) { request.ip })
  |> glimit.on_limit_exceeded(fn(_) { "Rate limit reached" })
```

Test plan

  • All 10 new ETS store tests pass
  • All 67 existing tests still pass (77 total)

rapind added 2 commits March 13, 2026 08:52
ETS provides lower-latency rate limiting by using atomic table operations
directly, avoiding the overhead of OTP actor messages. Suitable for
single-node deployments where low latency is important.

- ets_store.gleam: EtsStore type implementing bucket.Store interface
- ets_store_ffi.erl: Erlang FFI for ETS operations (new/get/set/delete/sweep/size)
- glimit.ets_store() builder for easy integration
- Periodic sweep timer for cleaning up full and idle buckets
- 10 new tests covering all ETS store functionality
Document the new ETS-backed storage backend with usage example,
performance characteristics, and comparison to in-memory mode.
nootr (Owner) commented Mar 14, 2026

Hi @rapind, thank you very much for these PRs. I'll review them this weekend.

Would this be an option to use as the default in-memory backend? What do you think?

rapind (Author) commented Mar 14, 2026

> Would this be an option to use as the default in-memory backend? What do you think?

Yes, 100% it's an upgrade. Same behaviour without the serialization bottleneck. I can rework this PR to replace the default if you like.

nootr (Owner) commented Mar 14, 2026

@rapind That would be great! I think there would be no use for the current in-memory backend (OTP actor), so it would be awesome if this could completely replace that.

Let me know if I could help, thank you very much for contributing!

Replace the OTP actor-based in-memory store with ETS as the default.
ETS provides lower-latency, lock-free rate limiting without actor
message overhead. Remove memory_store module entirely — custom stores
can still be plugged in via glimit.store().
rapind (Author) commented Mar 15, 2026

All set.

nootr (Owner) left a comment

Hi @rapind,

thanks again for your time and effort. This looks great!

I have a couple of questions, but overall this is a great improvement. Also, the pipeline seems to fail due to a formatting error.

```gleam
//// of OTP actor messages. Suitable for single-node deployments where
//// low latency is important.
////
//// Unlike the default in-memory store (which serializes through an actor),
```
nootr (Owner):
Now that we've changed the "default in-memory store" (even removed it), this comment might be a bit confusing.

nootr (Owner):
Also, could the explanation about atomicity be a bit misleading? It is true that individual ETS operations are atomic, but the get and the set are done separately, so there is a race condition: two BEAM processes could read the same value, each compute an update, and both write it back, so one of the updates is lost.

This problem is not new; it was already present in the OTP actor setup. It was a trade-off: low code complexity and low latency, at the cost of a small chance of overshooting and letting a few requests through that should have been throttled.

What do you think? Please let me know if I misunderstand something.
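The lost-update interleaving nootr describes can be sketched deterministically. The snippet below is plain Python standing in for two BEAM processes; the dictionary and names are made up for illustration and are not glimit code:

```python
# Deterministic sketch of the lost-update race described above: two
# "processes" both read the bucket counter before either writes back,
# so one token consumption is silently lost.

store = {"bucket": 10}  # token bucket with 10 tokens remaining

# Both processes read the current value before either writes:
read_by_a = store["bucket"]
read_by_b = store["bucket"]

# Each consumes one token based on its (now stale) read and writes back:
store["bucket"] = read_by_a - 1  # writes 9
store["bucket"] = read_by_b - 1  # also writes 9, overwriting A's update

# Two tokens were consumed, but the counter only dropped by one:
print(store["bucket"])  # prints 9, not 8
```

One way to close this gap without reintroducing an actor would be an atomic read-modify-write such as Erlang's `ets:update_counter/4`, which decrements and clamps a counter field in a single operation; whether that fits glimit's bucket layout is an open question for this PR, not something the PR currently does.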

* 📏 Rate limits based on any key (e.g. IP address, or user ID).
* 🪣 Uses a Token Bucket algorithm to rate limit requests.
* 🗄️ Works out of the box with in-memory storage; no back-end service needed.
* ⚡ ETS-backed by default for low-latency, lock-free rate limiting.
nootr (Owner):
I know that some people use this package mainly because it doesn't need a separate backend service like Redis. Could we keep that mentioned? Also, I'm wondering whether "lock-free" is something people look for in a package like this. Maybe something like this?

Suggested change
* ⚡ ETS-backed by default for low-latency, lock-free rate limiting.
* ⚡ ETS-backed by default for low-latency rate limiting; no separate back-end service needed.

## Pluggable Store Backend

By default, rate limit state is stored in-memory using an OTP actor. For distributed rate limiting across multiple nodes, you can provide a custom `Store` that persists bucket state in an external service like Redis or Postgres.
By default, rate limit state is stored in ETS (Erlang Term Storage) using lock-free atomic operations. For distributed rate limiting across multiple nodes, you can provide a custom `Store` that persists bucket state in an external service like Redis or Postgres.
nootr (Owner):
The same as what I said earlier: "lock-free atomic operations" might be a bit misleading as the code does have a race condition.

Suggested change
By default, rate limit state is stored in ETS (Erlang Term Storage) using lock-free atomic operations. For distributed rate limiting across multiple nodes, you can provide a custom `Store` that persists bucket state in an external service like Redis or Postgres.
By default, rate limit state is stored in ETS (Erlang Term Storage). For distributed rate limiting across multiple nodes, you can provide a custom `Store` that persists bucket state in an external service like Redis or Postgres.
