When you stream a subprocess’s stdout/stderr into an async generator, treat process exit, stream end/close, abort, and timeout escalation as concurrent events that can race.
Coding standard
Do not finalize on exit alone: the "last output" might still be buffered when exit fires. Listen for each stream's close/end event and only then yield the final result/exit chunk. unref() escalation timers so they don't keep the process alive, and handle the case where exit arrives before stdout/stderr end, so the final chunk is yielded only after buffers are flushed.
Example pattern (simplified):
// shared queue
const { push, next } = makeEventQueue();
let processExited = false;
let stdoutClosed = false;
let stderrClosed = false;
let sigkillTimer: NodeJS.Timeout | undefined;
const killProcess = (reason: string) => {
if (processExited) return;
child.kill('SIGTERM'); // polite first; `reason` is available for logging
// escalation timer: track + clear + unref
if (sigkillTimer) clearTimeout(sigkillTimer);
sigkillTimer = setTimeout(() => {
if (!processExited) child.kill('SIGKILL');
}, 2000);
sigkillTimer.unref();
};
pipeLinesToQueue(child.stdout!, 'stdout', push);
pipeLinesToQueue(child.stderr!, 'stderr', push);
child.once('exit', (code, signal) => {
processExited = true;
if (sigkillTimer) clearTimeout(sigkillTimer); // don't leak the escalation timer
push({ kind: 'exit', code, signal });
});
// Finalize only when streams are closed/flushed
child.stdout?.once('close', () => { stdoutClosed = true; push({ kind: 'stdout_close' } as any); });
child.stderr?.once('close', () => { stderrClosed = true; push({ kind: 'stderr_close' } as any); });
// consumer: drain events, then yield result when both close + exit known
// (implementation depends on your result-yield strategy)
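The pattern above calls a pipeLinesToQueue helper that is not shown. A minimal sketch of what such a helper could look like, assuming it splits a readable stream into lines and pushes each one onto the shared queue (the name and signature here just mirror the call sites above; they are not a standard API):

```typescript
import * as readline from 'node:readline';
import type { Readable } from 'node:stream';

// Hypothetical helper matching the call sites above: split a stream into
// lines and push each line onto the shared event queue.
function pipeLinesToQueue(
  stream: Readable,
  kind: 'stdout' | 'stderr',
  push: (ev: { kind: string; line: string }) => void,
) {
  // crlfDelay: Infinity treats \r\n as a single line break
  const rl = readline.createInterface({ input: stream, crlfDelay: Infinity });
  rl.on('line', (line) => push({ kind, line }));
}
```

Using readline here delegates line splitting and partial-line buffering to the standard library instead of hand-rolling a chunk splitter.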
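One possible shape for the elided pieces: a sketch of makeEventQueue and a consumer that yields line events as they arrive, but emits the final exit chunk only once exit and both stream closes have been observed. The type names and the drain helper are illustrative assumptions, not part of the original:

```typescript
// Discriminated union of the events pushed in the pattern above (assumed shape).
type Ev =
  | { kind: 'stdout' | 'stderr'; line: string }
  | { kind: 'exit'; code: number | null; signal: string | null }
  | { kind: 'stdout_close' | 'stderr_close' };

// Minimal async event queue: producers push, a single consumer awaits next().
function makeEventQueue() {
  const buffer: Ev[] = [];
  const waiters: Array<(ev: Ev) => void> = [];
  return {
    push(ev: Ev) {
      const w = waiters.shift();
      if (w) w(ev); else buffer.push(ev);
    },
    next(): Promise<Ev> {
      const ev = buffer.shift();
      if (ev) return Promise.resolve(ev);
      return new Promise((res) => waiters.push(res));
    },
  };
}

// Consumer: yield output lines immediately; yield the exit chunk last,
// only after exit AND both stream closes have been seen.
async function* drain(next: () => Promise<Ev>) {
  let exit: { code: number | null; signal: string | null } | null = null;
  let closes = 0;
  while (!(exit && closes === 2)) {
    const ev = await next();
    if (ev.kind === 'exit') exit = { code: ev.code, signal: ev.signal };
    else if (ev.kind === 'stdout_close' || ev.kind === 'stderr_close') closes++;
    else yield ev; // stdout/stderr line event
  }
  yield { kind: 'exit', ...exit! } as Ev;
}
```

Note that even if the exit event is pushed before the last output lines, the consumer keeps draining until both closes arrive, so tail output is never dropped and the exit chunk is yielded exactly once.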
Applying this pattern prevents lost tail output, double yields, and the subtle leaks or hangs caused by concurrent lifecycle events and uncleared timers.