Benchmarks demonstrated that the idle timer handle approach didn't balance the
load fairly enough: the majority of new connections still ended up in one or
two processes.
The new approach voluntarily gives up a scheduler timeslice by calling
nanosleep() with a one nanosecond timeout.
Why not sched_yield()? Because on Linux (and this is probably true for other
Unices as well), sched_yield() only yields if there are other processes running
on the same CPU.
nanosleep() on the other hand always forces the process to sleep, which gives
other processes a chance to accept our pending connections.
Don't make the event loop spin when connect() returns EINPROGRESS.
Test case:

#include "uv.h"

static void connect_cb(uv_connect_t* req, int status) {
  /* Never reached: the connection never completes before the timeout. */
}

int main(void) {
  uv_tcp_t handle;
  uv_connect_t req;
  struct sockaddr_in addr;

  addr = uv_ip4_addr("8.8.8.8", 1234);  /* unreachable */
  uv_tcp_init(uv_default_loop(), &handle);
  uv_tcp_connect(&req, &handle, addr, connect_cb);
  uv_run(uv_default_loop());  /* busy loops until the connection times out */

  return 0;
}
After EINPROGRESS, there are zero active handles and one active request. That
in turn makes uv__poll_timeout() believe that it should merely poll, not block,
in epoll_wait() / kqueue() / port_getn().
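
A simplified sketch of that decision, assuming internal helpers like
uv__has_active_handles() and uv__next_timeout() (the real function also
considers the stop flag and idle handles):

static int uv__poll_timeout(uv_loop_t* loop) {
  /* Zero active handles: poll with a zero timeout so uv_run() can bail
   * out quickly. With an active request keeping the loop alive, this
   * turns into a busy loop. */
  if (!uv__has_active_handles(loop))
    return 0;

  /* Otherwise block until the next timer is due (-1 == block forever). */
  return uv__next_timeout(loop);
}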
Sidestep that by artificially starting the handle on connect() and stopping it
again once the TCP handshake completes / is rejected / times out.
It's a slightly hacky approach because I don't want to change the ABI of the
stable branch. I'll address it properly in the master branch.
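
In rough strokes, with hypothetical function names around libuv's internal
uv__handle_start() / uv__handle_stop() ref-counting (a sketch, not the actual
diff):

#include <errno.h>
#include "uv.h"

/* Called from uv_tcp_connect() after connect(2) returns. */
static void connect_pending(uv_tcp_t* handle, int connect_rc) {
  if (connect_rc == -1 && errno == EINPROGRESS)
    uv__handle_start(handle);  /* handle now counts as active */
}

/* Called when the handshake completes, is rejected, or times out. */
static void connect_settled(uv_tcp_t* handle) {
  uv__handle_stop(handle);  /* drop the artificial reference again */
}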
Incidentally fixes a rather obscure bug where uv_tcp_connect() reconnected
and leaked a file descriptor when the handle was already busy connecting,
handle->fd was zero (unlikely) and uv_tcp_connect() got called again.
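
For illustration, the kind of up-front guard that closes that hole (a sketch
in the old 0.x error style; handle->connect_req tracks the in-flight request):

/* In uv_tcp_connect(): refuse a second connect while one is in flight,
 * instead of mistaking fd == 0 for "no socket yet" and opening (and then
 * leaking) a second file descriptor. */
if (handle->connect_req != NULL) {
  uv__set_sys_error(handle->loop, EALREADY);
  return -1;
}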