Nut19 cache #330
base: development
Conversation
thanks for the PR @d4rp4t! do you think this could be tested with an integration test? This would not only verify it works as expected, it would also give implementors an up-to-date example.
Great work @d4rp4t! Love the use of exponential backoff, and the implementation looks solid (eyeballed: have not had a chance to test). I've noted a couple of minor nits and some suggestions for your consideration.
src/model/types/mint/responses.ts
Outdated
```ts
export type Nut19Policy = {
	ttl: number | null;
	cached_endpoints: Array<{ method: 'GET' | 'POST'; path: string }>;
} | null;
```
Wondering if this type definition is a little too loose...? I don't love allowing it to be null overall.
Also, although null ttl is a valid mint response, it actually means no expiry in the spec (ie: Infinity). So perhaps we can make it tighter to make that clear and avoid repeated null checking?
What do you think of this version?
```ts
/**
 * Represents the NUT-19 caching policy as advertised in the mint's info response.
 * This defines how long and for which endpoints responses can be cached.
 *
 * - `ttl` is always a finite number (seconds) or `Infinity` (indefinite caching).
 * - An empty `cached_endpoints` array means no endpoints are cached at the mint.
 */
export type Nut19Policy = {
	ttl: number;
	cached_endpoints: Array<{ method: 'GET' | 'POST'; path: string }>;
};
```

```ts
	}
	return { supported: false };
}
```
This looks solid - requiring at least one cached endpoint for the supported flag makes sense. If we adopt a stricter NUT19Policy type definition as per my suggestion, we can map null to Infinity for indefinite caching. We could also make it easier for implementors to use the policy in the request without converting it, eg:
```ts
private checkNut19() {
	const rawPolicy = this._mintInfo.nuts[19];
	if (rawPolicy && rawPolicy.cached_endpoints.length > 0) {
		return {
			supported: true,
			params: {
				ttl: rawPolicy.ttl ?? Infinity,
				cached_endpoints: rawPolicy.cached_endpoints
			} as NUT19Policy
		};
	}
	return { supported: false };
}
```

We can then import NUT19Policy and add a specific isSupported signature for this nut around line 28, eg:

```ts
isSupported(num: 19): { supported: boolean; params?: NUT19Policy };
```
src/request.ts
Outdated
```ts
const isCachable = cached_endpoints?.some(
	(cached_endpoint) =>
		cached_endpoint.path === url.pathname && cached_endpoint.method === (options.method || 'GET')
);
```
Should we make the default handling more explicit with `??` - ie fall back only on null or undefined?
It would surface bugs sooner if a caller accidentally passes an invalid but falsy method (eg `''`, `false`).

```ts
... && cached_endpoint.method === (options.method ?? 'GET')
```
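To make the difference concrete, here is a small standalone illustration (the `pickOr`/`pickNullish` helpers are hypothetical, not part of the PR):

```typescript
// `||` falls back on ANY falsy value, silently masking bad input,
// while `??` only falls back on null/undefined.
const pickOr = (method?: string | null) => method || 'GET';
const pickNullish = (method?: string | null) => method ?? 'GET';

console.log(pickOr(''));             // 'GET' -- invalid empty string silently replaced
console.log(pickNullish(''));        // ''    -- invalid value surfaces, so the bug is visible
console.log(pickOr(undefined));      // 'GET'
console.log(pickNullish(undefined)); // 'GET'
```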
src/request.ts
Outdated
```ts
retries++;
const delay = Math.max(Math.pow(2, retries) * 1000, 1000);

if (ttl && totalElapsedTime + delay > ttl) {
```
Suggested addition:

```ts
requestLogger.error('Network Error: request abandoned after #{retries} retries', { e, retries });
```

```ts
if (ttl && totalElapsedTime + delay > ttl) {
	throw e;
}
```
Suggested addition:

```ts
requestLogger.info('Network Error: attempting retry #{retries} in {delay}ms', { e, retries, delay });
```

```ts
await new Promise((resolve) => setTimeout(resolve, delay));
return retry();
```
Suggested addition:

```ts
requestLogger.error('Request failed and could not be retried', { e });
```
@robwoodgate you can add suggestions with CTRL+G
robwoodgate
left a comment
Would be a good spot to leverage our new logger as well
```ts
} catch (e) {
	if (e instanceof NetworkError) {
		const totalElapsedTime = Date.now() - startTime;
		const shouldRetry = retries < MAX_CACHED_RETRIES && (!ttl || totalElapsedTime < ttl);
```
There is a unit mismatch here... NUT-19 specifies TTL in seconds, but we are calculating totalElapsedTime in ms. Same further down with delay. So perhaps we should convert TTL here and use ttlMs for clarity? It also gives us an opportunity to sanitize ttl (as it's user input).

```ts
const ttlMs = typeof ttl === 'number' && !isNaN(ttl) ? Math.max(ttl, 0) * 1000 : Infinity;
```

And update the ttl references in the rest of the function.
src/request.ts
Outdated
```ts
retries++;
const delay = Math.max(Math.pow(2, retries) * 1000, 1000);
```
We should increment retries after calculating delay, so delay starts at 1000ms, otherwise it starts at 2000ms.
```diff
- retries++;
- const delay = Math.max(Math.pow(2, retries) * 1000, 1000);
+ const delay = Math.max(Math.pow(2, retries) * 1000, 1000);
+ retries++;
```
On a side-note... even after this tweak, our retry schedule (in ms) is:
1000, 2000, 4000, 8000, 16000, 32000, 64000, 128000, 256000, 512000
That's 1023 seconds of delay... about 17 minutes. Is an app likely to be waiting around that long?
Perhaps we should cap the delay... eg: using a 60 second cap is about 5 minutes of retries:
1000, 2000, 4000, 8000, 16000, 32000, 60000, 60000, 60000, 60000
We should also consider adding a jitter factor to avoid the thundering herd problem.
This would also potentially cut down the overall wait time.
eg:

```ts
// Assuming:
const MAX_RETRY_DELAY = 60000; // Added around line 11
const ttlMs = typeof ttl === 'number' && !isNaN(ttl) ? Math.max(ttl, 0) * 1000 : Infinity; // in line 62

// Here's what a capped exponential backoff with jitter could look like:
if (shouldRetry) {
	// Calculate capped exponential backoff using full jitter to avoid
	// the thundering herd problem
	const cappedDelay = Math.min(Math.pow(2, retries) * 1000, MAX_RETRY_DELAY);
	const delay = Math.random() * cappedDelay;
	if (totalElapsedTime + delay > ttlMs) {
		requestLogger.error('Network Error: request abandoned after #{retries} retries', { e, retries });
		throw e;
	}
	retries++;
	await new Promise((resolve) => setTimeout(resolve, delay));
	return retry();
}
```
a1denvalu3
left a comment
In a follow up to this PR we should make it possible to pass in a callback that takes in the request payload as a parameter, so the implementor can actually persist the payload data somewhere if they want to.
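A rough sketch of what that follow-up could look like (all names here, `sendCachedRequest`, `onPayload`, `PayloadCallback`, are hypothetical placeholders, not the library's actual API):

```typescript
// Hypothetical shape for a follow-up: let callers observe the request
// payload so they can persist it (e.g. to disk or IndexedDB) before sending.
type PayloadCallback = (payload: { url: string; body?: string }) => void;

async function sendCachedRequest(
	url: string,
	body: string | undefined,
	onPayload?: PayloadCallback
): Promise<string> {
	// Give the implementor a chance to persist the payload before the
	// (potentially retried) request goes out.
	onPayload?.({ url, body });
	// ... actual fetch + NUT-19 retry logic would go here ...
	return 'ok';
}
```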
Great work @d4rp4t!
@d4rp4t is this ready?

@lollerfirst The logic itself - yes, but I haven't implemented it in any method yet.

#356 related, since I'll be able to write integration tests with the CDK mint

Force-pushed from 673ecb8 to 209a6ea

@d4rp4t - I've rebased this to development (v3) as discussed. Please can you check it over.
Fixes: #328
Description
Adds internal support for retrying cached requests according to NUT-19, based on exponential backoff.
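As a standalone illustration of the general idea (simplified; `retryWithBackoff` and `doRequest` are placeholder names, not the PR's actual request.ts code):

```typescript
// Simplified sketch of retrying a request with exponential backoff:
// each failed attempt doubles the wait (1s, 2s, 4s, ...) until maxRetries.
async function retryWithBackoff<T>(doRequest: () => Promise<T>, maxRetries = 3): Promise<T> {
	for (let attempt = 0; ; attempt++) {
		try {
			return await doRequest();
		} catch (e) {
			// Give up once the retry budget is exhausted.
			if (attempt >= maxRetries) throw e;
			const delay = Math.pow(2, attempt) * 1000; // 1000, 2000, 4000, ...
			await new Promise((resolve) => setTimeout(resolve, delay));
		}
	}
}
```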
Changes
- request.ts
- MintInfo.ts
- Other
Note: Cached request usage is not yet implemented in the CashuMint class — waiting for approval.
PR Tasks
- `npm run test` --> no failing unit tests
- `npm run format`