
Conversation

@d4rp4t (Contributor) commented Jul 20, 2025

Fixes: #328

Description

Adds internal support for retrying cached requests according to NUT-19, using exponential backoff.

Changes

request.ts

  • extended request() with optional retry logic
  • added requestWithRetry() with exponential backoff

MintInfo.ts

  • added support for NUT-19 via checkNut19() method

Other

  • added Nut19Policy type

Note: Cached request usage is not yet implemented in the CashuMint class; waiting for approval.

PR Tasks

  • Open PR
  • run npm run test (no failing unit tests)
  • run npm run format

@gandlafbtc (Collaborator) commented

Thanks for the PR @d4rp4t! Do you think this could be tested with an integration test? That would not only verify it works as expected, it would also give implementors an up-to-date example.

@robwoodgate (Collaborator) left a comment

Great work @d4rp4t! Love the use of exponential backoff, and the implementation looks solid (eyeballed: have not had a chance to test). I've noted a couple of minor nits and some suggestions for your consideration.

export type Nut19Policy = {
	ttl: number | null;
	cached_endpoints: Array<{ method: 'GET' | 'POST'; path: string }>;
} | null;
@robwoodgate (Collaborator) commented Jul 22, 2025

Wondering if this type definition is a little too loose? I don't love allowing it to be null overall.
Also, although a null ttl is a valid mint response, it actually means no expiry in the spec (i.e. Infinity). So perhaps we can make it tighter to make that clear and avoid repeated null checking?
What do you think of this version?

/**
 * Represents the NUT-19 caching policy as advertised in the mint's info response.
 * This defines how long and for which endpoints responses can be cached.
 *
 * - `ttl` is always a finite number (seconds) or `Infinity` (indefinite caching).
 * - An empty `cached_endpoints` array means no endpoints are cached at the mint.
 */
export type Nut19Policy = {
    ttl: number;
    cached_endpoints: Array<{ method: 'GET' | 'POST'; path: string }>;
};

	}
	return { supported: false };
}

@robwoodgate (Collaborator) commented Jul 22, 2025

This looks solid: requiring at least one cached endpoint for the supported flag makes sense. If we adopt a stricter Nut19Policy type definition as per my suggestion, we can map null to Infinity for indefinite caching. We could also make it easier for implementors to use the policy in the request without converting it, e.g.:

private checkNut19() {
	const rawPolicy = this._mintInfo.nuts[19];
	if (rawPolicy && rawPolicy.cached_endpoints.length > 0) {
		return {
			supported: true,
			params: {
				ttl: rawPolicy.ttl ?? Infinity,
				cached_endpoints: rawPolicy.cached_endpoints
			} as Nut19Policy
		};
	}
	return { supported: false };
}

We can then import Nut19Policy and add a specific isSupported signature for this nut around line 28, e.g.:

isSupported(num: 19): { supported: boolean; params?: Nut19Policy };

src/request.ts Outdated
const isCachable = cached_endpoints?.some(
	(cached_endpoint) =>
		cached_endpoint.path === url.pathname &&
		cached_endpoint.method === (options.method || 'GET')
);
Collaborator

Should we make the default handling more explicit with ??, i.e. fall back only on null or undefined?
That would surface bugs sooner if a caller accidentally passes an invalid but falsy method (e.g. '' or false).

... && cached_endpoint.method === (options.method ?? 'GET')

src/request.ts Outdated
retries++;
const delay = Math.max(Math.pow(2, retries) * 1000, 1000);

if (ttl && totalElapsedTime + delay > ttl) {
@robwoodgate (Collaborator) commented Jul 22, 2025

Suggested addition:

requestLogger.error(`Network Error: request abandoned after ${retries} retries`, { e, retries });

if (ttl && totalElapsedTime + delay > ttl) {
throw e;
}

@robwoodgate (Collaborator) commented Jul 22, 2025

Suggested addition:

requestLogger.info(`Network Error: attempting retry #${retries} in ${delay}ms`, { e, retries, delay });

await new Promise((resolve) => setTimeout(resolve, delay));
return retry();
}
}
Collaborator

Suggested addition:

requestLogger.error('Request failed and could not be retried', { e });

Contributor

@robwoodgate you can add suggestions with Ctrl+G

@robwoodgate (Collaborator) left a comment

This would be a good spot to leverage our new logger as well.

} catch (e) {
	if (e instanceof NetworkError) {
		const totalElapsedTime = Date.now() - startTime;
		const shouldRetry = retries < MAX_CACHED_RETRIES && (!ttl || totalElapsedTime < ttl);
@robwoodgate (Collaborator) commented Jul 22, 2025

There is a unit mismatch here: NUT-19 specifies TTL in seconds, but we are calculating totalElapsedTime in ms. Same further down with delay. So perhaps we should convert TTL here and use ttlMs for clarity? It also gives us an opportunity to sanitize ttl (as it's user input).

const ttlMs = typeof ttl === 'number' && !isNaN(ttl) ? Math.max(ttl, 0) * 1000 : Infinity;

And update the ttl references in the rest of the function.

src/request.ts Outdated
Comment on lines 64 to 65
retries++;
const delay = Math.max(Math.pow(2, retries) * 1000, 1000);
Collaborator

We should increment retries after calculating delay, so the delay starts at 1000 ms; otherwise it starts at 2000 ms.

Suggested change:
- retries++;
- const delay = Math.max(Math.pow(2, retries) * 1000, 1000);
+ const delay = Math.max(Math.pow(2, retries) * 1000, 1000);
+ retries++;

Collaborator

On a side note: even after this tweak, our retry schedule (in ms) is:
1000, 2000, 4000, 8000, 16000, 32000, 64000, 128000, 256000, 512000

That's 1023 seconds of delay... about 17 minutes. Is an app likely to be waiting around that long?

Perhaps we should cap the delay, e.g. a 60-second cap gives about 5 minutes of retries:
1000, 2000, 4000, 8000, 16000, 32000, 60000, 60000, 60000, 60000

We should also consider adding a jitter factor to avoid the thundering herd problem.
This would also potentially cut down the overall wait time.

For example:

// Assuming:
const MAX_RETRY_DELAY = 60000; // Added around line 11
const ttlMs = typeof ttl === 'number' && !isNaN(ttl) ? Math.max(ttl, 0) * 1000 : Infinity; // in line 62

// Here's what a capped exponential backoff with jitter could look like:
if (shouldRetry) {
    // Calculate capped exponential backoff using full jitter to avoid
    // the thundering herd problem
    const cappedDelay = Math.min(Math.pow(2, retries) * 1000, MAX_RETRY_DELAY);
    const delay = Math.random() * cappedDelay;

    if (totalElapsedTime + delay > ttlMs) {
        requestLogger.error(`Network Error: request abandoned after ${retries} retries`, { e, retries });
        throw e;
    }

    retries++;
    await new Promise((resolve) => setTimeout(resolve, delay));
    return retry();
}

@d4rp4t requested a review from robwoodgate, August 1, 2025 10:05
@a1denvalu3 (Contributor) left a comment

In a follow-up to this PR, we should make it possible to pass in a callback that takes the request payload as a parameter, so the implementor can persist the payload data somewhere if they want to.

@robwoodgate (Collaborator) left a comment

Great work @d4rp4t!

@Egge21M linked an issue, Aug 14, 2025, that may be closed by this pull request
@a1denvalu3 (Contributor) commented

@d4rp4t is this ready?

@d4rp4t (Contributor, Author) commented Aug 18, 2025

@lollerfirst The logic itself, yes, but I haven't implemented it in any method yet.

@d4rp4t (Contributor, Author) commented Sep 10, 2025

#356 is related, since I'll be able to write integration tests against the CDK mint.

@Egge21M linked an issue, Sep 14, 2025, that may be closed by this pull request
@robwoodgate (Collaborator) commented

@d4rp4t, I've rebased this onto development (v3) as discussed. Please can you check it over?

@robwoodgate added the v3 label, Oct 30, 2025
@robwoodgate added the "needs rebase" label (this PR needs to be rebased to development because it contains merge conflicts), Dec 9, 2025

Development

Successfully merging this pull request may close these issues.

  • NUT-19 cached response - Handle app close mid request
  • Implement Caching layer for NUT-19
