Allows running the event loop in three modes:
* default: loop runs until the refcount drops to zero
* once: poll for events only once and block until one is handled
* nowait: poll for events only once but don't block if there are
no pending events
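A minimal sketch of how a caller selects these modes, assuming the
UV_RUN_DEFAULT / UV_RUN_ONCE / UV_RUN_NOWAIT mode names:

    #include <uv.h>

    int main(void) {
      uv_loop_t *loop = uv_default_loop();

      /* default: blocks and processes events until the refcount drops
       * to zero (no more active handles or requests). */
      uv_run(loop, UV_RUN_DEFAULT);

      /* once: polls once and blocks until at least one event is handled. */
      uv_run(loop, UV_RUN_ONCE);

      /* nowait: polls once but returns immediately when nothing is pending. */
      uv_run(loop, UV_RUN_NOWAIT);

      return 0;
    }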
Don't use the relaxed accept() algorithm introduced in be2a217 unless
explicitly requested. It causes a 50+% performance drop on some node.js
benchmarks:
$ alias bench='out/Release/node benchmark/http_simple_auto.js \
-c 10 -n 50000 bytes/1 2>&1 | grep Req'
$ UV_TCP_SINGLE_ACCEPT=0 bench
Requests per second: 12331.84 [#/sec] (mean)
$ UV_TCP_SINGLE_ACCEPT=1 bench
Requests per second: 3944.63 [#/sec] (mean)
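A rough sketch of the opt-in check, assuming a plain getenv() lookup of the
UV_TCP_SINGLE_ACCEPT variable shown above (the actual parsing in libuv may
differ):

    #include <stdlib.h>

    /* Use the relaxed single-accept path only when explicitly requested
     * via UV_TCP_SINGLE_ACCEPT (a non-zero value enables it in this
     * sketch). */
    static int single_accept_requested(void) {
      const char *val = getenv("UV_TCP_SINGLE_ACCEPT");
      return val != NULL && atoi(val) != 0;
    }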
The file descriptor might be closed during the callback; all events that were
reported before the callback are no longer valid, and trying to remove them
will result in ENOENT. This error can be safely ignored.
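For example, on an epoll-backed loop (an assumption; other backends behave
analogously) the removal can simply tolerate ENOENT:

    #include <sys/epoll.h>
    #include <errno.h>
    #include <stdlib.h>

    /* Sketch: drop a possibly already-closed fd from the poll set.
     * ENOENT just means the kernel no longer knows the descriptor,
     * so it is not treated as an error. */
    static void stop_watching(int epollfd, int fd) {
      if (epoll_ctl(epollfd, EPOLL_CTL_DEL, fd, NULL) == -1 && errno != ENOENT)
        abort();
    }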
Fix a rather obscure bug where the event loop stalls when an I/O watcher is
stopped while an artificial event, generated with uv__io_feed(), is pending.
Bert Belder informs me that the current approach, where a request is cancelled
immediately, is impossible to implement on Windows.
Rework the API to always invoke the "done" callback with a UV_ECANCELED error
code.
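A sketch of what that contract looks like from the caller's side, assuming the
uv_queue_work()/uv_cancel() threadpool API and present-day signatures:

    #include <uv.h>
    #include <stdio.h>

    static void work_cb(uv_work_t *req) {
      /* Threadpool side; skipped entirely if the request is cancelled
       * before a worker picks it up. */
    }

    static void after_work_cb(uv_work_t *req, int status) {
      /* The "done" callback always runs; a cancelled request reports
       * UV_ECANCELED here instead of silently disappearing. */
      if (status == UV_ECANCELED)
        printf("request was cancelled\n");
    }

    int main(void) {
      uv_loop_t *loop = uv_default_loop();
      uv_work_t req;

      uv_queue_work(loop, &req, work_cb, after_work_cb);
      uv_cancel((uv_req_t *) &req);  /* fails if the work already started */

      return uv_run(loop, UV_RUN_DEFAULT);
    }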
kqueue(2) on OS X doesn't work (it fails with EINVAL) with certain file
descriptors, e.g. /dev/tty and /dev/null. When given such a descriptor, start
a select(2) watcher thread that emits the I/O events instead.
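A rough sketch of the detection step, assuming the failure shows up as EINVAL
from kevent() when the descriptor is first registered:

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <errno.h>

    /* Sketch: probe whether kqueue accepts the fd; if it is rejected with
     * EINVAL (ttys, /dev/null, ...), the caller falls back to a thread
     * that select()s on the fd and feeds the events to the loop. */
    static int needs_select_fallback(int kq, int fd) {
      struct kevent ev;
      EV_SET(&ev, fd, EVFILT_READ, EV_ADD, 0, 0, NULL);
      return kevent(kq, &ev, 1, NULL, 0, NULL) == -1 && errno == EINVAL;
    }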
Avoid the extra syscall; it's a no-op for non-listening sockets.
At least, it should be - it remains to be investigated whether a FreeBSD kernel
bug affects ephemeral port allocation inside connect(). See [1] for details.
[1] http://www.freebsd.org/cgi/query-pr.cgi?pr=174087
Running a make target that builds the shared object while overriding the CFLAGS
variable from the command line would fail with a relocation error:
relocation R_X86_64_32 against `.text' can not be used when making a shared
object; recompile with -fPIC
Fix that by adding -fPIC unconditionally.
* If GetAdaptersAddresses() failed, it would return UV_OK nonetheless,
  but the `addresses` and `count` out parameters would not be set.
* When adapters were enabled or added between the two
  GetAdaptersAddresses() calls, it would fail.
* In an out-of-memory situation, libuv would crash with a fatal
  error.
* All interface information is now stored in a single heap-allocated
  area.
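The affected call is presumably uv_interface_addresses(), which
GetAdaptersAddresses() backs on Windows; a minimal usage sketch (the int
return convention is an assumption and may differ in this era of libuv):

    #include <uv.h>
    #include <stdio.h>

    int main(void) {
      uv_interface_address_t *addresses;
      int count;
      int i;

      /* A failed GetAdaptersAddresses() call must surface as an error
       * here instead of a success with untouched out parameters. */
      if (uv_interface_addresses(&addresses, &count) != 0)
        return 1;

      for (i = 0; i < count; i++)
        printf("%s%s\n", addresses[i].name,
               addresses[i].is_internal ? " (internal)" : "");

      /* All interface information lives in one heap-allocated block. */
      uv_free_interface_addresses(addresses, count);
      return 0;
    }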
This can be used in conjunction with uv_run_once() to poll in one thread and
run the event loop's event callbacks in another.
Useful for embedding libuv's event loop in another event loop.
Send the wakeup signal to the main thread *before* releasing the lock. Doing it
the other way around introduces a race condition where the watcher may already
have been pulled off the work queue.
Obtain the CPU frequency from /proc/cpuinfo because there may not be any
cpufreq info available in /sys. This also means that the reported CPU speed
from now on is the *maximum* speed, not the *actual* speed the CPU runs at.
This change only applies to x86 because ARM and MIPS don't report that
information in /proc/cpuinfo.
Fixes #588.
This is a back-port of commit 775064a from the master branch.
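A rough sketch of reading a clock speed out of /proc/cpuinfo, assuming the
"cpu MHz" field; the field the real code parses, and which value counts as the
maximum, may differ:

    #include <stdio.h>

    /* Sketch: return the first "cpu MHz" value found in /proc/cpuinfo,
     * or -1 when the field is missing (e.g. on ARM and MIPS). */
    static int cpu_mhz(void) {
      char line[256];
      double mhz;
      FILE *fp;

      fp = fopen("/proc/cpuinfo", "r");
      if (fp == NULL)
        return -1;

      while (fgets(line, sizeof(line), fp) != NULL) {
        if (sscanf(line, "cpu MHz : %lf", &mhz) == 1) {
          fclose(fp);
          return (int) mhz;
        }
      }

      fclose(fp);
      return -1;
    }

    int main(void) {
      printf("%d MHz\n", cpu_mhz());
      return 0;
    }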
Disable the fs_event_close_in_callback test on DragonFlyBSD, like we do on the
other BSDs.
The test doesn't work with kqueue-based file notifications: the event is
generated before the file is watched. Maybe we should remove it altogether.
Harmonize with stream.c and tcp.c: when a handle with pending writes queued up
is closed, run the callbacks with loop->err.code set to UV_ECANCELED, not
UV_EINTR.
Pipe accept benchmarks have never been implemented; remove the code path.
Said code path also contained a bug: it tried to bind to the same pipe that is
bound a few lines further down.
Don't keep writing until the write queue fills up. On fast systems (mine), that
never happens - the data is sent out as fast as the benchmark generates it.