Conversation
Co-authored-by: James Stevens <github@jrcs.net> Co-authored-by: Mark Tyneway <mark.tyneway@gmail.com>
turbomaze left a comment:
Nice thought to stream it to s3, some questions but nothing blocking.
const Address = require('../primitives/address');
const Network = require('../protocol/network');
const pkg = require('../pkg');
const AWS = require('aws-sdk');
What are some other options besides requiring this package for the whole full node? Not blocking but not ideal since it's a big dependency and would probably never make it into upstream.
Could this be an optional peer dependency or something?
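One pattern for the "optional dependency" idea (an assumption on my part, not something in this PR) is to list aws-sdk under optionalDependencies in package.json and require it lazily, so hsd still loads when it isn't installed:

```javascript
'use strict';

// Sketch only: with aws-sdk listed under "optionalDependencies",
// require it lazily and disable the S3 dump feature when it's absent.
let S3 = null;

try {
  // Pull in just the S3 client rather than the whole SDK.
  S3 = require('aws-sdk/clients/s3');
} catch (e) {
  // aws-sdk isn't installed; leave S3 as null and skip the feature.
}

const s3DumpAvailable = S3 !== null;
```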
I'm not sure how peer dependencies work, exactly. I'd considered adding it as a separate plugin.
I believe AWS SDK v2 lets you import only the services you plan to use, which would trim down the loaded lib size a bit:
var S3 = require('aws-sdk/clients/s3');
> I'm not sure how peer dependencies work, exactly. I'd considered adding it as a separate plugin.
Peer dependency is the wrong term, since it means something very specific in Node.js/npm. I trust aws-sdk, but it feels bad adding a 50 MB dependency for this one feature. Do you have a clear idea of how this dependency could live in a plugin so hsd stays clean?
A plugin would just need access to the Chain, which is itself a plugin that it could take a dependency on, I believe. So a dump-zone-to-s3 plugin would be pretty much this code, plus some boilerplate to create the plugin itself; we'd then need some way other than the HTTP endpoint on the Node plugin interface to trigger it.
Since it assumes AWS anyway, we could use a queue. If that's too much, we could just put up another HTTP interface on a third port. If we wanted to move more of our custom functionality into plugins, it'd be easy to extend that HTTP interface to cover all of them with a namebase meta-plugin that aggregates the calls.
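A rough sketch of what that separate plugin's shell could look like, following hsd's convention of a plugin object exposing `id` and `init(node)`; the class name and the trigger wiring are assumptions, not code from this PR:

```javascript
'use strict';

// Hypothetical shell for a dump-zone-to-s3 plugin. hsd loads plugins
// by calling init(node); the trigger in open() is left as a stub.

class DumpZoneToS3 {
  constructor(node) {
    this.node = node;
    // The Chain is the dependency we need; hsd exposes it on the node.
    this.chain = node.chain;
  }

  async open() {
    // Wire up the trigger here: a queue consumer, an extra HTTP
    // listener on its own port, or a timer.
  }

  async close() {
    // Tear the trigger down.
  }
}

const plugin = {
  id: 'dump-zone-to-s3',
  init(node) {
    return new DumpZoneToS3(node);
  }
};
```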
Key: this.options.s3DumpConfig.key,
Body: dumpzone.readableStream(this.chain)
}, (err, data) => {
  // TODO - capture status, do a rename?
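For reference, the quoted callback-style upload could be factored into a small function that takes the S3 client and the zone stream as arguments; `uploadZone` and the config shape are my assumptions based on the snippet, not names from the PR:

```javascript
'use strict';

// Sketch of the streaming upload in the quoted diff, with the client
// injected so it is easy to exercise against a stub.

function uploadZone(s3, config, zoneStream, callback) {
  // aws-sdk v2's upload() accepts a readable stream as Body and
  // handles multipart uploads under the hood.
  return s3.upload({
    Bucket: config.bucket,
    Key: config.key,
    Body: zoneStream
  }, callback);
}
```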
Not blocking/not requesting this, but it might be nice for the key to include a timestamp and then to rename one to "current", etc.
👍 I'll verify how rename works in S3
There's actually no way to rename something in S3; you'd have to do a copy then a delete. An alternative could be to push files with the timestamp and have some other process reap ones older than a certain age, but that would require the consumer to list the objects in the bucket and choose the latest one.
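The copy-then-delete dance could look roughly like this (aws-sdk v2 promise style; the helper name and the injected client are assumptions):

```javascript
'use strict';

// Sketch: S3 has no native rename, so emulate it with copyObject
// followed by deleteObject. `s3` is any aws-sdk v2 S3 client
// (or a stub in tests).

async function renameObject(s3, bucket, fromKey, toKey) {
  // CopySource is "<bucket>/<source-key>"; keys with special
  // characters should be URL-encoded.
  await s3.copyObject({
    Bucket: bucket,
    CopySource: `${bucket}/${fromKey}`,
    Key: toKey
  }).promise();

  // Only delete the original once the copy has succeeded.
  await s3.deleteObject({
    Bucket: bucket,
    Key: fromKey
  }).promise();
}
```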
One other alternative might be enabling versioning on the S3 bucket; by default you'd get the timestamp and management of old versions provided by AWS.
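Enabling versioning is a single putBucketVersioning call in aws-sdk v2; a hedged sketch with the client injected (the helper name is mine):

```javascript
'use strict';

// Sketch: turn on bucket versioning so every upload to the same key
// keeps prior versions, timestamped and managed by S3 itself.
// `s3` is any aws-sdk v2 S3 client (or a stub in tests).

async function enableVersioning(s3, bucket) {
  await s3.putBucketVersioning({
    Bucket: bucket,
    VersioningConfiguration: { Status: 'Enabled' }
  }).promise();
}
```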
I've refactored zippy's branch to use streams (because it will make uploading to S3 easier) and filtered out TXT records.
In progress: