Benchmarks demonstrated that the idle timer handle approach didn't balance the
load fairly: the majority of new connections still ended up in one or two
processes.
The new approach voluntarily gives up a scheduler timeslice by calling
nanosleep() with a one nanosecond timeout.
Why not sched_yield()? Because on Linux (and this is probably true for other
Unices as well), sched_yield() only yields if there are other processes running
on the same CPU.
nanosleep(), on the other hand, always forces the process to sleep, which gives
other processes a chance to accept our pending connections.
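A minimal sketch of that yield (the helper name is illustrative, not libuv's):

    #include <time.h>

    /* Sleep for a single nanosecond. The kernel rounds this up to at
     * least one scheduler timeslice, letting sibling processes run and
     * accept pending connections. */
    static void yield_timeslice(void) {
      struct timespec ts;
      ts.tv_sec = 0;
      ts.tv_nsec = 1;
      nanosleep(&ts, NULL);
    }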
Implement a best-effort approach to mitigating accept() EMFILE errors.
We have a spare file descriptor stashed away that we close to get below
the EMFILE limit. Next, we accept all pending connections and close them
immediately to signal the clients that we're overloaded - and we are, but
we still keep on trucking.
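Roughly, the trick looks like this (helper names like reserve_spare_fd() are
illustrative, not libuv's actual API):

    #include <fcntl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static int spare_fd = -1;

    /* Called once at startup: stash a descriptor we can sacrifice later. */
    static void reserve_spare_fd(void) {
      spare_fd = open("/dev/null", O_RDONLY);
    }

    /* Called when accept() fails with EMFILE. */
    static void shed_pending_connections(int server_fd) {
      int fd;
      close(spare_fd);  /* drop below the descriptor limit */
      while ((fd = accept(server_fd, NULL, NULL)) != -1)
        close(fd);      /* tell clients we're overloaded */
      reserve_spare_fd();  /* restore the spare for next time */
    }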
There is one caveat: it's not reliable in a multi-threaded environment. The
file descriptor limit is per process, so our party trick fails if another
thread opens a file or creates a socket in the window between our calls to
close() and accept().
Fixes #315.
kqueue(2) on OS X doesn't work with certain fds (e.g. /dev/tty, /dev/null);
it fails with EINVAL. When given such a descriptor, start a select(2) watcher
thread that emits the I/O events instead.
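A hedged sketch of how the failure can be detected, assuming a plain kevent(2)
registration (the fallback thread itself is omitted):

    #include <sys/types.h>
    #include <sys/event.h>
    #include <errno.h>

    static int needs_select_fallback(int kq, int fd) {
      struct kevent ev;
      EV_SET(&ev, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
      /* On OS X, registering a tty or /dev/null fd fails with EINVAL. */
      if (kevent(kq, &ev, 1, NULL, 0, NULL) == -1 && errno == EINVAL)
        return 1;  /* hand this fd to a select(2) watcher thread */
      return 0;
    }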
Readable tty handles need to be able to update the virtual window, so if a
uv_tty_t is initialized with a console input fd, additionally open the console
output.
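On Windows this amounts to opening the CONOUT$ pseudo-device; a hedged sketch,
assuming a plain CreateFileA() call (error handling omitted):

    #include <windows.h>

    /* Open the active console screen buffer so a readable tty handle
     * can query and update the virtual window. */
    static HANDLE open_console_output(void) {
      return CreateFileA("CONOUT$",
                         GENERIC_READ | GENERIC_WRITE,
                         FILE_SHARE_READ | FILE_SHARE_WRITE,
                         NULL,
                         OPEN_EXISTING,
                         0,
                         NULL);
    }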
Formerly, spawn errors were reported as a message printed to the process's
stderr, to match Unix behaviour. Unix has now been fixed to be more sensible,
so this hack can be removed.
This also fixes a race condition that could occur when the user closes
a process handle before the exit callback has been made.
OS X has no public API for fdatasync(). As `man fsync(2)` points out:

    For applications that require tighter guarantees about the integrity of
    their data, Mac OS X provides the F_FULLFSYNC fcntl. The F_FULLFSYNC
    fcntl asks the drive to flush all buffered data to permanent storage.
    Applications, such as databases, that require a strict ordering of writes
    should use F_FULLFSYNC to ensure that their data is written in the order
    they expect. Please see fcntl(2) for more detail.
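In code, the resulting pattern is roughly this (the helper name full_fsync()
is illustrative):

    #include <fcntl.h>
    #include <unistd.h>

    static int full_fsync(int fd) {
    #if defined(__APPLE__)
      /* Ask the drive to flush its caches, not just the kernel's. */
      if (fcntl(fd, F_FULLFSYNC) != -1)
        return 0;
      /* F_FULLFSYNC isn't supported on all file systems; fall through. */
    #endif
      return fsync(fd);
    }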
Problem: registering two uv_fs_event_t watchers for the same path, then closing
them, caused a segmentation fault. While active, the watchers didn't work right
either: only one would receive events.
Cause: each watcher has a wd (watch descriptor) that's used as its key in a
binary tree. When you call inotify_add_watch() twice with the same path, the
second call doesn't return a new wd - it returns the existing one. That in turn
resulted in the first handle getting ousted from the binary tree, leaving
dangling pointers.
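A standalone demo of the duplicate-wd behavior (the watched path is arbitrary):

    #include <assert.h>
    #include <sys/inotify.h>

    int main(void) {
      int fd = inotify_init();
      int wd1 = inotify_add_watch(fd, "/tmp", IN_MODIFY);
      int wd2 = inotify_add_watch(fd, "/tmp", IN_MODIFY);
      assert(wd1 == wd2);  /* same path => same watch descriptor */
      return 0;
    }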
This commit addresses that by storing the watchers in a queue and storing the
queue in the binary tree instead of storing the watchers directly in the tree.
Fixes joyent/node#3789.
* the callback gets called only once on error, not repeatedly...
* ...unless the error reason changes, e.g. from UV_ENOENT to UV_EACCES
* the callback receives pointers to uv_statbuf_t objects so it can inspect
  what changed (see the sketch below)
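A hedged sketch of such a callback; the signature is assumed from the
description above and may not match the actual API exactly:

    #include <stdio.h>
    #include <uv.h>

    /* Assumed callback shape; see uv.h for the authoritative signature. */
    static void on_fs_change(uv_fs_poll_t* handle,
                             int status,
                             const uv_statbuf_t* prev,
                             const uv_statbuf_t* curr) {
      if (status != 0)
        return;  /* reported once per error, again only if the reason changes */
      if (prev->st_mtime != curr->st_mtime)
        printf("contents changed\n");  /* the statbufs show what changed */
    }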