Allows running the event loop in three modes:
* default: loop runs until the refcount drops to zero
* once: poll for events only once and block until one is handled
* nowait: poll for events only once but don't block if there are
no pending events
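In the present-day API this corresponds to uv_run() taking a mode argument; a minimal sketch (the exact entry points at the time of this commit may have differed):

#include <uv.h>

int main(void) {
  uv_loop_t* loop = uv_default_loop();

  /* default: run until there are no more active handles or requests */
  uv_run(loop, UV_RUN_DEFAULT);

  /* once: wait for I/O and process at most one round of callbacks */
  uv_run(loop, UV_RUN_ONCE);

  /* nowait: process pending events but return at once if there are none */
  uv_run(loop, UV_RUN_NOWAIT);

  return 0;
}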
Bert Belder informs me that the current approach, where a request is immediately
cancelled, is impossible to implement on Windows.
Rework the API to always invoke the "done" callback with a UV_ECANCELED error
code.
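A hedged sketch of the resulting contract, written against the modern uv_cancel()/uv_fs_t API (the signatures at the time of this commit differed):

#include <uv.h>

static void stat_cb(uv_fs_t* req) {
  if (req->result == UV_ECANCELED) {
    /* The request was cancelled before it could run; the "done" callback
     * still fires so resources can be released here. */
  }
  uv_fs_req_cleanup(req);
}

static void start_and_cancel(uv_loop_t* loop, uv_fs_t* req) {
  uv_fs_stat(loop, req, "/tmp/some-file", stat_cb);
  uv_cancel((uv_req_t*) req);  /* stat_cb is still invoked, with UV_ECANCELED */
}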
kqueue(2) on OS X doesn't work (it fails with EINVAL) for certain fds
(e.g. /dev/tty, /dev/null, etc.). When given such a descriptor, start a
select(2) watcher thread that emits the I/O events instead.
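A rough sketch of that fallback mechanism (names and plumbing are illustrative, not libuv's actual implementation):

#include <sys/select.h>
#include <pthread.h>
#include <uv.h>

static uv_async_t async_handle;  /* initialised elsewhere with uv_async_init() */

/* Runs in a separate thread (started elsewhere with pthread_create()):
 * wait for readability with select(2) and wake the event loop, which then
 * dispatches the I/O callback. */
static void* select_watcher(void* arg) {
  int fd = *(int*) arg;
  fd_set rfds;

  for (;;) {
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);
    if (select(fd + 1, &rfds, NULL, NULL, NULL) > 0)
      uv_async_send(&async_handle);
  }

  return NULL;
}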
This can be used in conjunction with uv_run_once() to poll in one thread and run
the event loop's event callbacks in another.
Useful for embedding libuv's event loop in another event loop.
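A sketch of that embedding pattern using today's names (uv_backend_fd(), uv_backend_timeout(), and uv_run() with UV_RUN_NOWAIT); the commit itself predates those names:

#include <poll.h>
#include <uv.h>

static void drive(uv_loop_t* loop) {
  struct pollfd pfd;

  while (uv_loop_alive(loop)) {
    pfd.fd = uv_backend_fd(loop);      /* the loop's epoll/kqueue descriptor */
    pfd.events = POLLIN;
    poll(&pfd, 1, uv_backend_timeout(loop));
    uv_run(loop, UV_RUN_NOWAIT);       /* run callbacks without blocking */
  }
}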
You can now choose to build a shared object at configure time:
$ ./gyp_uv -Dcomponent=shared_library -Dlibrary=shared_library
And build it with:
$ make -C out BUILDTYPE=Debug # or BUILDTYPE=Release
Or, if you use ninja:
$ ninja -C out/Debug
This patch creates a new header - ev-proto.h - which contains all of the
prototypes for libev functions. This allows us to create a shared object of
libuv without exposing libev's internal functions.
This reverts commit 5da380a5ca.
Contains a bug that effectively makes the select() thread busy-loop. The file
descriptor is polled for both reading and writing, regardless of what events
the main thread wants to receive. Fixing that requires proper synchronization
between the two threads.
See #614.
Relocate the include of TargetConditionals.h and fix the use of
TARGET_OS_IPHONE. Furthermore, uv__fsevents_init() and uv__fsevents_close() are
now empty functions for iOS, since the FSEvents API is not available there.
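An illustration of the shape of that change (signatures are taken from a later tree and may not match this commit exactly):

#include <TargetConditionals.h>
#include "uv.h"

#if TARGET_OS_IPHONE
/* FSEvents is unavailable on iOS, so the hooks become no-ops. */
int uv__fsevents_init(uv_fs_event_t* handle) {
  (void) handle;
  return 0;
}

void uv__fsevents_close(uv_fs_event_t* handle) {
  (void) handle;
}
#endif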
Guard against the possibility that the queue is emptied while we're iterating
over it. Simple test case:
#include "ngx-queue.h"
#include <assert.h>
int main(void) {
  ngx_queue_t h;
  ngx_queue_t v[2];
  ngx_queue_t* q;
  unsigned n = 0;

  ngx_queue_init(&h);
  ngx_queue_insert_tail(&h, v + 0);
  ngx_queue_insert_tail(&h, v + 1);

  ngx_queue_foreach(q, &h) {
    ngx_queue_remove(v + 0);
    ngx_queue_remove(v + 1);
    n++;
  }

  assert(n == 1);  /* *not* 2 */

  return 0;
}
Fixes #605.
This reverts commit 209abbab27.
Fixes the following SIGSEGV:
(gdb) f 1
#1 0x00007fc084683aec in uv__async_io (loop=0x7fc0848e0b40,
handle=0x7fc0848e0c78, events=1) at src/unix/async.c:175
175 ASYNC_CB(h)
(gdb) list
170
171 /* If we need to sweep all handles anyway - skip this loop */
172 if (!loop->async_sweep_needed) {
173 for (i = 0; i < end; i += sizeof(h)) {
174 h = *((uv_async_t**) (buf + i));
175 ASYNC_CB(h)
176 }
177 }
178
179 bytes -= end;
(gdb) print *h
$1 = {close_cb = 0x184e1b0, data = 0x18d9520, loop = 0x7fc0848e0b40,
type = 49, handle_queue = {prev = 0x18dae10, next = 0x7860c0}, flags = 32,
next_closing = 0x1863b40, pending = 0, async_cb = 0x31,
queue = {prev = 0x18dae50, next = 0x7860c0}}
(gdb)
It looks like the async handle gets closed or otherwise becomes invalid before
the sweep is executed.
Fixes #603.
Benchmarks demonstrated that the idle timer handle approach didn't balance the
load evenly enough; the majority of new connections still ended up in one
or two processes.
The new approach voluntarily gives up a scheduler timeslice by calling
nanosleep() with a one nanosecond timeout.
Why not sched_yield()? Because on Linux (and this is probably true for other
Unices as well), sched_yield() only yields if there are other processes running
on the same CPU.
nanosleep() on the other hand always forces the process to sleep, which gives
other processes a chance to accept our pending connections.
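The yield itself is tiny; a sketch of the idea (not the literal libuv code):

#include <time.h>

/* Voluntarily give up the rest of the timeslice. Unlike sched_yield(),
 * nanosleep() always puts the process to sleep, even when no other process
 * is runnable on the same CPU. */
static void yield_timeslice(void) {
  struct timespec ts;
  ts.tv_sec = 0;
  ts.tv_nsec = 1;
  nanosleep(&ts, NULL);
}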
Adds initial libuv build/platform support for AIX. Builds work with gcc or with
the IBM XL C compiler via its gxlc wrapper. Platform support is added for
uv_hrtime, uv_exepath, uv_get_free_memory, uv_get_total_memory, uv_loadavg,
uv_uptime, uv_cpu_info, uv_interface_addresses.
Implement a best-effort approach to mitigating accept() EMFILE errors.
We have a spare file descriptor stashed away that we close to get below
the EMFILE limit. Next, we accept all pending connections and close them
immediately to signal the clients that we're overloaded - and we are, but
we still keep on trucking.
There is one caveat: it's not reliable in a multi-threaded environment.
The file descriptor limit is per process. Our party trick fails if another
thread opens a file or creates a socket in the time window between us
calling close() and accept().
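A rough sketch of the trick (names are illustrative, not the actual libuv internals):

#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>

static int spare_fd = -1;

/* Reserve one descriptor up front so it can be sacrificed later. */
static void reserve_spare_fd(void) {
  spare_fd = open("/dev/null", O_RDONLY);
}

/* Called when accept() fails with EMFILE: free the spare descriptor, accept
 * and immediately close every pending connection, then take the spare
 * descriptor back. */
static void shed_pending_connections(int server_fd) {
  int fd;

  close(spare_fd);

  while ((fd = accept(server_fd, NULL, NULL)) != -1)
    close(fd);

  spare_fd = open("/dev/null", O_RDONLY);
}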
Fixes#315.
kqueue(2) on OS X doesn't work (it fails with EINVAL) for certain fds
(e.g. /dev/tty, /dev/null, etc.). When given such a descriptor, start a
select(2) watcher thread that emits the I/O events instead.
uv_fs_poll_t has an embedded uv_timer_t handle that got closed at a time when
the memory of the encapsulating handle might already have been deallocated.
Solve that by moving the poller's state into a structure that is allocated on
the heap and can be freed independently.
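Illustratively (type and field names are made up, not libuv's actual layout), the poller's state now lives in its own allocation:

#include <uv.h>

/* Heap-allocated so it can outlive the uv_fs_poll_t while the embedded timer
 * finishes closing; freed from the timer's close callback. */
struct poll_ctx {
  uv_timer_t timer;
  uv_fs_poll_t* parent_handle;  /* may already be gone when the timer closes */
  char path[1];                 /* path being polled, allocated inline */
};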
Readable tty handles need to be able to update the virtual window,
so if uv_tty_t is initialized with a console input fd, additionally
open the console output.
Formerly, spawn errors were reported as a message printed to the
process's stderr, to match Unix behaviour. Unix has since been fixed to
be more sensible, so this hack can now be removed.
This also fixes a race condition that could occur when the user closes
a process handle before the exit callback has been made.
OS X has no public API for fdatasync. As pointed out in fsync(2):
For applications that require tighter guarantees about the integrity of
their data, Mac OS X provides the F_FULLFSYNC fcntl. The F_FULLFSYNC
fcntl asks the drive to flush all buffered data to permanent storage.
Applications, such as databases, that require a strict ordering of writes
should use F_FULLFSYNC to ensure that their data is written in the order
they expect. Please see fcntl(2) for more detail.
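A hedged sketch of an fdatasync substitute along those lines (not necessarily how libuv wires it up):

#include <fcntl.h>
#include <unistd.h>

static int flush_to_disk(int fd) {
#if defined(__APPLE__)
  /* No fdatasync() on OS X; ask the drive to flush its buffers instead. */
  return fcntl(fd, F_FULLFSYNC);
#else
  return fdatasync(fd);
#endif
}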
The Sun Studio compiler did not define any of the symbols used to determine
whether the system is Unix-like, causing it to include the Windows header.
Problem: registering two uv_fs_event_t watchers for the same path, then closing
them, caused a segmentation fault. While active, the watchers didn't work right
either; only one would receive events.
Cause: each watcher has a wd (watch descriptor) that's used as its key in a
binary tree. When you call inotify_add_watch() twice with the same path, the
second call doesn't return a new wd - it returns the existing one. That in turn
resulted in the first handle getting ousted from the binary tree, leaving
dangling pointers.
This commit addresses that by storing the watchers in a queue and keeping the
queue, rather than the individual watchers, in the binary tree.
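An illustrative layout of that arrangement (struct and field names are made up):

#include <sys/tree.h>
#include "ngx-queue.h"

/* One tree node per inotify watch descriptor; every handle watching the same
 * path hangs off the node's queue instead of being a tree entry itself. */
struct watcher_list {
  RB_ENTRY(watcher_list) entry;  /* keyed by wd in the red-black tree */
  int wd;                        /* inotify watch descriptor */
  ngx_queue_t watchers;          /* all uv_fs_event_t handles for this wd */
};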
Fixes joyent/node#3789.
Produce a better error message from test-dgram-multicast-multi-process
when it is run without a network.
before:
=== release test-dgram-multicast-multi-process ===
Path: simple/test-dgram-multicast-multi-process
dgram.js:287
throw new errnoException(errno, 'addMembership');
^
Error: addMembership Unknown system errno 19
at new errnoException (dgram.js:356:11)
at Socket.addMembership (dgram.js:287:11)
at Object.<anonymous> (/home/roman/wc/node/test/simple/test-dgram-multicast-multi-process.js:224:16)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.runMain (module.js:487:10)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
[PARENT] Worker 9223 died. 1 dead of 3
dgram.js:287
throw new errnoException(errno, 'addMembership');
^
Error: addMembership Unknown system errno 19
at new errnoException (dgram.js:356:11)
at Socket.addMembership (dgram.js:287:11)
at Object.<anonymous> (/home/roman/wc/node/test/simple/test-dgram-multicast-multi-process.js:224:16)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.runMain (module.js:487:10)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
[PARENT] sent 'First message to send' to 224.0.0.114:12346
dgram.js:287
[PARENT] sent 'Second message to send' to 224.0.0.114:12346
throw new errnoException(errno, 'addMembership');
[PARENT] sent 'Third message to send' to 224.0.0.114:12346
^
[PARENT] sendSocket closed
[PARENT] Worker 9224 died. 2 dead of 3
Error: addMembership Unknown system errno 19
at new errnoException (dgram.js:356:11)
at Socket.addMembership (dgram.js:287:11)
at Object.<anonymous> (/home/roman/wc/node/test/simple/test-dgram-multicast-multi-process.js:224:16)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.runMain (module.js:487:10)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
[PARENT] Worker 9225 died. 3 dead of 3
[PARENT] All workers have died.
[PARENT] Fail
Command: out/Release/node /home/roman/wc/node/test/simple/test-dgram-multicast-multi-process.js
after:
=== release test-dgram-multicast-multi-process ===
Path: simple/test-dgram-multicast-multi-process
dgram.js:287
throw new errnoException(errno, 'addMembership');
^
Error: addMembership ENODEV
at new errnoException (dgram.js:356:11)
at Socket.addMembership (dgram.js:287:11)
at Object.<anonymous> (/home/roman/wc/node/test/simple/test-dgram-multicast-multi-process.js:224:16)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.runMain (module.js:487:10)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
[PARENT] Worker 13141 died. 1 dead of 3
dgram.js:287
throw new errnoException(errno, 'addMembership');
^
[PARENT] sent 'First message to send' to 224.0.0.114:12346
[PARENT] sent 'Second message to send' to 224.0.0.114:12346
[PARENT] sent 'Third message to send' to 224.0.0.114:12346
[PARENT] sent 'Fourth message to send' to 224.0.0.114:12346
[PARENT] sendSocket closed
dgram.js:287
throw new errnoException(errno, 'addMembership');
^
Error: addMembership ENODEV
at new errnoException (dgram.js:356:11)
at Socket.addMembership (dgram.js:287:11)
at Object.<anonymous> (/home/roman/wc/node/test/simple/test-dgram-multicast-multi-process.js:224:16)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.runMain (module.js:487:10)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
[PARENT] Worker 13142 died. 2 dead of 3
Error: addMembership ENODEV
at new errnoException (dgram.js:356:11)
at Socket.addMembership (dgram.js:287:11)
at Object.<anonymous> (/home/roman/wc/node/test/simple/test-dgram-multicast-multi-process.js:224:16)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.runMain (module.js:487:10)
at process.startup.processNextTick.process._tickCallback (node.js:244:9)
[PARENT] Worker 13143 died. 3 dead of 3
[PARENT] All workers have died.
[PARENT] Fail
Command: out/Release/node /home/roman/wc/node/test/simple/test-dgram-multicast-multi-process.js