Granular rate limits for table requests #7

@boid-com

Description

This feature primarily impacts get_table_rows and get_table_by_scope but should be implemented for all GET requests.

Node operators should have granular control over rate limiting GET table data requests. This would make it easier to offer application-specific public endpoints, because abuse from unrelated traffic could be mitigated.

Example: A node operator wants to provide table data for users accessing DAPP1, so they link the DAPP1 contracts/tables to a more relaxed rate-limiter profile. All other requests, such as those for DAPP2's contracts/tables, go through a much stricter rate-limiting profile or are blacklisted entirely.
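One way this routing could look is a mapping from (contract, table) pairs to named profiles. The sketch below is purely illustrative: the profile names, the config shape, and the wildcard convention are assumptions, not an existing nodeos feature.

```python
# Hypothetical routing of table requests to rate-limiter profiles.
# Profile names ("relaxed", "strict", "blacklisted") and the config
# shape are illustrative assumptions from the feature request above.

DEFAULT_PROFILE = "strict"

PROFILE_ROUTES = {
    # (contract, table) -> profile name; None matches any table on the contract
    ("dapp1.token", None): "relaxed",
    ("dapp1.stats", "accounts"): "relaxed",
    ("dapp2.token", None): "blacklisted",
}

def profile_for(contract: str, table: str) -> str:
    """Resolve which rate-limiter profile applies to a table request."""
    for (route_contract, route_table), profile in PROFILE_ROUTES.items():
        if route_contract == contract and route_table in (None, table):
            return profile
    return DEFAULT_PROFILE
```

Unmatched contracts falling through to a strict default keeps the public endpoint safe even when the operator forgets to list a contract.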

Rate limiting should not just be based on frequency but also on volume, meaning the firewall understands how many rows (and maybe even the size of the individual rows) it has shared. For example, if a client requests table data in 100-row chunks, the node should treat this as a more expensive request than a 10-row chunk; however, 10 requests for 10 rows each should probably still cost more than 1 request for 100 rows.
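The pricing described above falls out of a cost model with a fixed per-request overhead plus a marginal per-row (and optionally per-byte) charge, drained from a token bucket. This is a minimal sketch under assumed cost constants, not a proposed implementation:

```python
import time
from dataclasses import dataclass, field

@dataclass
class CostModel:
    """Request cost = fixed overhead + per-row charge (+ optional per-byte charge).
    The constants here are illustrative assumptions."""
    base_cost: float = 10.0     # fixed overhead per request
    per_row_cost: float = 1.0   # marginal cost per row returned
    per_byte_cost: float = 0.0  # optionally charge for payload size too

    def cost(self, rows: int, payload_bytes: int = 0) -> float:
        return (self.base_cost
                + rows * self.per_row_cost
                + payload_bytes * self.per_byte_cost)

@dataclass
class TokenBucket:
    """Classic token bucket: refills continuously, rejects when empty."""
    capacity: float
    refill_rate: float  # tokens per second
    tokens: float = field(init=False)
    last: float = field(init=False)

    def __post_init__(self):
        self.tokens = self.capacity
        self.last = time.monotonic()

    def allow(self, cost: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

Because the base cost is charged once per request, one 100-row request costs less than ten 10-row requests, while still costing more than a single 10-row request, matching both orderings in the paragraph above.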

(Some requests, like get_info, are necessary for constructing a signed transaction, so it would be important to keep those always available.)
