
Retry backoff with attempt distribution on buckets #243

@sunnyagg

Description


In the existing retry implementation, backoff is based on buckets (the buckets themselves are defined with backoff intervals). The logic for computing the next bucket is

next_bucket_interval = current_bucket_interval + min_backoff * (2^attempts)

So we compute an interval and then fit it into the defined bucket intervals. The issue with this approach is that it can skip the defined buckets.

For example, with

min_backoff = 30
max_backoff = 300

the defined buckets are [30, 60, 150, 300], but the buckets actually used will be [30, 150, ...], since next_bucket_interval comes out as 90 (30 + 30*2^1), which gets fitted into the 150 bucket and skips 60. This happens because current_bucket_interval ensures we always move to the next bucket, while min_backoff * (2^attempts) is a delta that can be large enough to skip a bucket entirely.
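The skipping behavior can be sketched like this (a minimal Python illustration; the function names and the snap-to-nearest-larger-bucket rule are assumptions for illustration, not the actual implementation):

```python
# Illustrative sketch of the current bucket computation (names assumed).
BUCKETS = [30, 60, 150, 300]
MIN_BACKOFF = 30

def next_interval(current_bucket_interval, attempts):
    # next_bucket_interval = current_bucket_interval + min_backoff * 2^attempts
    return current_bucket_interval + MIN_BACKOFF * (2 ** attempts)

def fit_to_bucket(interval):
    # Fit the computed interval into the smallest defined bucket >= interval.
    for b in BUCKETS:
        if b >= interval:
            return b
    return BUCKETS[-1]

computed = next_interval(30, 1)   # 30 + 30*2 = 90
print(fit_to_bucket(computed))    # 150 -- the 60 bucket is skipped
```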

We don't really need to compute next_bucket_interval at all, since the buckets are already defined.
We just need to distribute the retry attempts across the applicable buckets.

For the above example, the applicable buckets are [30, 60, 150, 300].
So if max_retry_attempts is defined to be 10, we just distribute the counts in round-robin fashion, which means the actual retry attempts will happen with backoffs [30, 30, 30, 60, 60, 60, 150, 150, 300, 300].
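The proposed distribution can be sketched as follows (a hypothetical helper, not the repo's code): assign attempts to buckets round-robin, then sort so earlier attempts use shorter backoffs.

```python
def distribute_attempts(buckets, max_retry_attempts):
    """Distribute retry attempts across the defined buckets round-robin,
    then sort so earlier attempts get shorter backoff intervals."""
    schedule = [buckets[i % len(buckets)] for i in range(max_retry_attempts)]
    return sorted(schedule)

print(distribute_attempts([30, 60, 150, 300], 10))
# [30, 30, 30, 60, 60, 60, 150, 150, 300, 300]
```

With 10 attempts over 4 buckets, round-robin gives the first two buckets one extra attempt each (3, 3, 2, 2), matching the example above.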
