To best explain this issue, I've set up this MRE:
```ts
import DHT from 'hyperdht';
import {
  createConnection,
  createServer,
} from 'node:net';

const nodeServer = new DHT();
const nodeClient = new DHT();
const keyPairServer = DHT.keyPair();
const keyPairClient = DHT.keyPair();

const local = false;
const fullyRead = true;
const destroyAfterRead = false;
const serverEndEarly = false;
const packetSize = 65536;

const svr = local ? createServer() : nodeServer.createServer();

function delay() {
  return new Promise((resolve) => {
    setTimeout(resolve, 200);
  });
}

function createPacket(value: number) {
  const packet = new Uint8Array(packetSize);
  packet.fill(value);
  return packet;
}

const prom = new Promise((resolve) => {
  svr.on('connection', (socket) => {
    socket.on('readable', () => console.log('server', 'readable'));
    socket.on('end', () => console.log('server', 'end'));
    socket.on('drain', () => console.log('server', 'drain'));
    socket.on('finish', () => console.log('server', 'finish'));
    socket.on('error', (e) => console.log('server', 'error', e));
    socket.on('close', () => console.log('server', 'close'));
    resolve(socket);
  });
});

if (local) {
  svr.listen(12345, '::1');
} else {
  await svr.listen(keyPairServer);
}

const client = local
  ? createConnection(12345, '::1')
  : nodeClient.connect(keyPairServer.publicKey, keyPairClient);
const server = await prom;
svr.close();

client.on('readable', () => console.log('client', 'readable'));
client.on('end', () => console.log('client', 'end'));
client.on('drain', () => console.log('client', 'drain'));
client.on('finish', () => console.log('client', 'finish'));
client.on('error', (e) => console.log('client', 'error', e));
client.on('close', () => console.log('client', 'close'));

console.log('> client write', client.write(createPacket(1)));
await delay();
console.log('> server read', server.read());
await delay();
console.log('> client write', client.write(createPacket(2)));
await delay();
console.log('> client write', client.write(createPacket(3)));
await delay();
console.log('> client write', client.write(createPacket(4)));
await delay();
console.log('> client end');
client.end();
await delay();
console.log('> server write', server.write(createPacket(3)));
await delay();

if (serverEndEarly) {
  console.log('> server end');
  server.end();
  await delay();
}

if (!destroyAfterRead) {
  console.log('> client destroy');
  client.destroy();
  await delay();
}

console.log('> server read', server.read());
await delay();
console.log('> server read', server.read());
await delay();

if (fullyRead) {
  console.log('> server read', server.read());
  await delay();
  console.log('> server read', server.read());
  await delay();
}

if (destroyAfterRead) {
  console.log('> client destroy');
  client.destroy();
  await delay();
}

if (!serverEndEarly) {
  console.log('> server end');
  server.end();
  await delay();
}
```
This gives the result:
```
> client write false
client drain
server readable
> server read <Buffer 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 ... 65486 more bytes>
> client write false
client drain
server readable
> client write false
client drain
> client write false
client drain
> client end
client finish
> server write false
server drain
client readable
> client destroy
client close
server error [Error: connection reset by peer] { code: 'ECONNRESET' }
server close
> server read null
> server read null
> server read null
> server read null
> server end
```
Even though the client received `drain` and `finish`, the server never got a chance to read the packets that were supposedly "drained".

If we change `destroyAfterRead` to `true`, the error still occurs, just later:
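For context, core Node streams treat `finish` as a purely local signal: it fires once `end()` has flushed all writes into the stream's buffer, before anything has been read. A minimal sketch with a plain `PassThrough` (no networking involved):

```ts
import { PassThrough } from 'node:stream';
import { once } from 'node:events';

// 'finish' fires once end() has flushed all writes into the stream's
// internal buffer -- nothing has been read at this point.
const s = new PassThrough();
s.write('data');
s.end();
await once(s, 'finish');

const chunk = s.read(); // the data is still sitting unread in the buffer
console.log(chunk?.toString()); // 'data'
```

If hyperdht follows the same convention, `finish` here guarantees only a local flush, not remote receipt.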
```
> client write false
client drain
server readable
> server read <Buffer 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 ... 65486 more bytes>
> client write false
client drain
server readable
> client write false
client drain
> client write false
client drain
> client end
client finish
> server write false
server drain
client readable
> server read <Buffer 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 ... 65486 more bytes>
server readable
> server read <Buffer 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 ... 65486 more bytes>
server readable
> server read <Buffer 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 ... 65486 more bytes>
server end
> server read null
> client destroy
client close
server error [Error: connection reset by peer] { code: 'ECONNRESET' }
server close
> server end
```
If we instead change `serverEndEarly` to `true`, the error goes away, even though the client hasn't actually read the packets and hasn't received any event indicating it's now "okay" to close the connection:
```
> client write false
client drain
server readable
> server read <Buffer 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 ... 65486 more bytes>
> client write false
client drain
server readable
> client write false
client drain
> client write false
client drain
> client end
client finish
> server write false
server drain
client readable
> server end
server finish
> client destroy
client close
> server read <Buffer 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 ... 65486 more bytes>
server readable
> server read <Buffer 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 ... 65486 more bytes>
server readable
> server read <Buffer 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 ... 65486 more bytes>
server end
server close
> server read null
```
(Note: this only works if the server's `end` happens before the client's `destroy`. Even if `end` comes one line later, before the reads, the result is the same as if it happened after everything.)

In other words, either the client "knows" it received `end` without actually reading any packets, or the server sending `end` makes the connection immune to the client's `destroy` while still allowing all packets to be read. Either way, this creates a race condition between the server's `end` and the client's `destroy`, with no way to signal that the former has occurred short of reading all the packets in between.

Please note that destroying a half-closed socket after `finish` is a legitimate use case. Sometimes there's no point reading anything more from a socket once the response is already a done deal.
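A contrived `node:net` sketch of that pattern (the one-line "protocol" is made up for illustration): the client half-closes, reads only the acknowledgement it cares about, and destroys without draining whatever else the server might send:

```ts
import { createServer, createConnection } from 'node:net';
import { once } from 'node:events';

// Toy server: acknowledges whatever it receives with 'OK'.
const server = createServer((sock) => {
  sock.on('error', () => {}); // ignore resets caused by the early destroy
  sock.on('data', () => sock.write('OK'));
});
server.listen(0);
await once(server, 'listening');
const port = (server.address() as { port: number }).port;

const client = createConnection(port);
client.end('request'); // half-close: we're done writing
const [ack] = await once(client, 'data');
console.log(ack.toString()); // 'OK' -- the response is a done deal
client.destroy(); // so stop reading and tear the socket down
server.close();
```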
Just to further illustrate that this is an error, here's the output with `local` set to `true` instead:
```
> client write false
client drain
server readable
> server read <Buffer 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 01 ... 65486 more bytes>
> client write true
server readable
> client write true
> client write true
> client end
client finish
> server write true
client readable
> client destroy
client close
> server read <Buffer 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 02 ... 65486 more bytes>
server readable
> server read <Buffer 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 03 ... 65486 more bytes>
server readable
> server read <Buffer 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 04 ... 65486 more bytes>
server readable
server end
server finish
server close
> server read null
> server end
```
A regular Node.js TCP socket has no issue reading the packets after the client calls `destroy`, even without the server sending `end` first. There is the mild question of how packets of this size avoided hitting some maximum buffer limit (in both versions), but given that the DHT version was also able to buffer as many packets when `end` was generated, that question is tangential to the issue.
Basically:
- Whether reading the packets succeeds or fails should not depend on a race condition between the server calling `end` and the client calling `destroy`, both before any of the packets were read.
- `finish` should mean "finished writing and guaranteed to be received by the other side".
- If it's not possible for either side to buffer the remaining packets and guarantee delivery even when the connection is destroyed, `finish` should simply be delayed, not emitted. That way, the client can delay calling `destroy` until packet delivery is actually guaranteed.
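Until then, the only reliable shutdown I can see is a workaround: finish the local side, then drain and discard the peer's data until its `end` arrives (with a timeout fallback) before destroying. A hedged sketch against a generic Node `Duplex` (the helper name and timeout are mine, not part of hyperdht's API):

```ts
import { Duplex, PassThrough } from 'node:stream';

// Workaround sketch: end our side, then keep draining the peer until
// its 'end' arrives (or a timeout expires) before calling destroy, so
// buffered packets are not lost to a connection reset.
function endThenDestroy(socket: Duplex, timeoutMs = 5000) {
  socket.end();
  const timer = setTimeout(() => socket.destroy(), timeoutMs);
  socket.once('end', () => {
    clearTimeout(timer);
    socket.destroy();
  });
  socket.resume(); // discard incoming data so 'end' can be reached
}

// Demo with an in-process stream standing in for the socket:
const s = new PassThrough();
s.once('close', () => console.log('destroyed cleanly:', s.destroyed));
endThenDestroy(s);
```

Note the cost: this forces the client to read and throw away every buffered packet, which is precisely what the points above argue it shouldn't have to do.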