This internal-use-only storage type can be bumped to a curl_off_t once we
need to use bit 32, since the current 'unsigned int' then can no longer
hold them all.
The websocket protocols take bits 30 and 31, so they are the last ones
that fit within 32 bits - but they cannot properly be exported through
APIs, since those use *signed* 32 bit types (long) in places.
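A small standalone illustration of why bit 31 is the problematic one for a
signed 32-bit long; the bit positions match what the text says about the
websocket protocols, everything else here is illustrative only:

    #include <stdio.h>

    int main(void)
    {
      unsigned int ws  = 1u << 30;  /* bit 30: still positive as a 32-bit long */
      unsigned int wss = 1u << 31;  /* bit 31: the sign bit of a 32-bit long */

      /* Where 'long' is 32 bits, converting the combined mask to long gives
         a negative value, which is why these bits cannot cleanly pass
         through APIs that use signed 32-bit types. */
      printf("mask=%u as long=%ld\n", ws | wss, (long)(ws | wss));
      return 0;
    }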
Closes #9481
So that an address from the DNS cache that was previously used for QUIC
can be reused for TCP, and vice versa.
To make this possible, set conn->transport to "unix" for unix domain
connections ... and store the transport struct field in an unsigned char
to use less space.
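A hedged sketch of the storage idea described above (the names and values
are invented for illustration and do not match curl's internal defines):

    /* Illustrative transport tags, kept in an unsigned char to save space
       in the connection struct. */
    #define XPORT_TCP  1
    #define XPORT_UDP  2
    #define XPORT_QUIC 3
    #define XPORT_UNIX 4

    struct conn_sketch {
      unsigned char transport; /* one of the XPORT_* values above */
    };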
Reported-by: ウさん
Fixes #9274
Closes #9276
The options were added in #6341 and d13179d, but cause problems: lots of
POLLIN events occur but recvfrom reads nothing.
Reported-by: Tatsuhiro Tsujikawa
Fixes #9209
Closes #9215
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
Closes #8869
Allow curl to send larger UDP datagrams if Path MTU Discovery finds that a
larger path MTU is available. To make this work without sending fragmented
packets, we need to set the DF bit. That makes send(2) fail with EMSGSIZE
if the UDP datagram is too large. In that case, just let it be lost. This
patch enables the DF bit on Linux only.
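A minimal sketch of the Linux-only mechanism described above (helper name
invented, error handling omitted):

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Turn on path MTU discovery so outgoing UDP datagrams carry the DF
       bit; a datagram larger than the path MTU then fails with EMSGSIZE
       instead of being fragmented (Linux-specific). */
    static int enable_df(int sockfd)
    {
      int val = IP_PMTUDISC_DO;
      return setsockopt(sockfd, IPPROTO_IP, IP_MTU_DISCOVER, &val, sizeof(val));
    }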
Closes #8883
Earlier, if CURLOPT_LOCALPORT + CURLOPT_LOCALPORTRANGE went past port
65535, the code would fall back to a random port rather than giving up.
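A brief usage sketch of the two options involved (URL and values are
arbitrary, error checking omitted):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *h = curl_easy_init();
      curl_easy_setopt(h, CURLOPT_URL, "http://example.com/");
      /* Try local ports starting at 65000; the range runs past 65535,
         which now makes the transfer fail instead of silently using a
         random port. */
      curl_easy_setopt(h, CURLOPT_LOCALPORT, 65000L);
      curl_easy_setopt(h, CURLOPT_LOCALPORTRANGE, 1000L);
      CURLcode rc = curl_easy_perform(h);
      curl_easy_cleanup(h);
      return (int)rc;
    }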
Closes #8862
Make ngtcp2+quictls correctly acknowledge `CURLOPT_SSL_VERIFYPEER` and
`CURLOPT_SSL_VERIFYHOST`.
The name check now uses a function from lib/vtls/openssl.c which will need
attention when TLS is not done by OpenSSL or is disabled while QUIC is
enabled.
Possibly the servercert() function in openssl.c should be adjusted to be
usable for both regular TLS and QUIC.
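A short usage sketch of the two options in an HTTP/3 transfer (assumes a
libcurl built with the ngtcp2+quictls backend; not taken from the commit
itself, error checking omitted):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *h = curl_easy_init();
      curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
      /* Request HTTP/3 (QUIC) and keep certificate verification on; with
         this fix the ngtcp2+quictls backend honors both settings. */
      curl_easy_setopt(h, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_3);
      curl_easy_setopt(h, CURLOPT_SSL_VERIFYPEER, 1L);
      curl_easy_setopt(h, CURLOPT_SSL_VERIFYHOST, 2L);
      CURLcode rc = curl_easy_perform(h);
      curl_easy_cleanup(h);
      return (int)rc;
    }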
Ref: #8173
Closes #8178
... and make connect_init() refuse to tunnel protocols marked as not
working. Avoids a double-free.
Reported-by: Even Rouault
Fixes #8018
Closes #8020
Regression. In d6a37c23a3 (7.75.0) we removed the duplicated storage
(connection + easy handle), so this info needs to be extracted again even
for re-used connections.
Add test 435 to verify
Reported-by: Max Dymond
Fixes #7660
Closes #7662
Commit dbd16c3e2 cleaned up the logic for traversing the addrinfos, but
the move left a conditional on ai that is no longer needed, as the while
loop's reevaluation covers it.
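An illustrative sketch of the pattern (not the actual curl code): the
while condition re-evaluates ai on every iteration, so no extra check is
needed after advancing it.

    #include <netdb.h>

    static void walk(struct addrinfo *list)
    {
      struct addrinfo *ai = list;
      while(ai) {
        /* ... attempt a connection with this address ... */
        ai = ai->ai_next; /* a NULL here simply ends the loop above */
      }
    }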
Closes #7511
Reviewed-by: Carlo Marcelo Arenas Belón
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
0842175 (not in any release) used the wrong format specifier (long int)
for timediff_t. On an OS such as Windows, libcurl's timediff_t (usually
64-bit) is bigger than long int (32-bit). In 32-bit Windows builds, the
upper 32 bits of the timediff_t were then erroneously consumed by the next
format specifier. Since the timeout usually isn't larger than 32 bits,
this would typically result in NULL being used as the pointer to the
string with the reason for the connection failing. On other OSes or with
other compilers it could instead produce garbage values (i.e. crash on
deref).
Before:
Failed to connect to localhost port 12345 after 1201 ms: (nil)
After:
Failed to connect to localhost port 12345 after 1203 ms: Connection refused
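A standalone illustration of this class of bug (a generic 64-bit type and
PRId64 are used here; the exact format macro used in curl is not shown):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
      int64_t timeout_ms = 1203;            /* stands in for a timediff_t */
      const char *reason = "Connection refused";

      /* Broken on 32-bit builds: "%ld" consumes only half of the 64-bit
         value, so the following "%s" reads the wrong argument (often NULL
         or garbage):
         printf("after %ld ms: %s\n", timeout_ms, reason); */

      /* Correct: a format specifier that matches the 64-bit type. */
      printf("after %" PRId64 " ms: %s\n", timeout_ms, reason);
      return 0;
    }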
Closes https://github.com/curl/curl/pull/7449
- the data needs to be "line-based" anyway since it's also passed to the
debug callback/application
- it makes infof() work like failf() and consistency is good
- there's an assert that triggers on newlines in the format string
- Also removes a few instances of "..."
- Removes the code that would append "..." to the end of the data *iff*
it was truncated in infof()
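A self-contained sketch of the line-based convention the bullets above
describe; the helper name is invented and is not curl's actual infof():

    #include <assert.h>
    #include <stdarg.h>
    #include <stdio.h>
    #include <string.h>

    /* Callers pass no trailing newline; the logger appends one and asserts
       that the format string contains none. */
    static void loginfo(const char *fmt, ...)
    {
      va_list ap;
      assert(strchr(fmt, '\n') == NULL);
      va_start(ap, fmt);
      vprintf(fmt, ap);
      va_end(ap);
      putchar('\n');
    }

    int main(void)
    {
      loginfo("Connected to %s port %d", "example.com", 80);
      return 0;
    }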
Closes #7357
- Check whether a connection has succeeded before checking whether it's
timed out.
This means if we've connected quickly, but subsequently been
descheduled, we allow the connection to succeed. Note, if we timeout,
but between checking the timeout, and connecting to the server the
connection succeeds, we will allow it to go ahead. This is viewed as
an acceptable trade off.
- Add additional failf logging around failed connection attempts to
propagate the cause up to the caller.
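A hedged sketch of the check ordering described in the first bullet
(hypothetical helper, not the actual state-machine code):

    #include <stdbool.h>

    typedef enum { CONN_WAIT, CONN_OK, CONN_TIMEOUT } conn_verdict;

    /* Report success first, so a connection that completed while the
       process was descheduled is not failed by an already-expired timer. */
    static conn_verdict check_connect(bool connected, long spent_ms,
                                      long timeout_ms)
    {
      if(connected)
        return CONN_OK;
      if(spent_ms >= timeout_ms)
        return CONN_TIMEOUT;
      return CONN_WAIT;
    }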
Co-Authored-by: Martin Howarth
Closes #7178
In some situations, it was possible that a transfer was set up to use a
specific IP version, but due to DNS caching or connection reuse, it ended
up using a different IP version than requested.
This commit changes the effect of CURLOPT_IPRESOLVE from simply
restricting address resolution to also preventing the wrong connection
type from being used when choosing a connection from the pool, and to
restricting which addresses may be used when establishing a new
connection.
It is important that all address versions are resolved, even if they are
not used in that particular transfer, because the result is cached and
could be useful for a different transfer with a different
CURLOPT_IPRESOLVE setting.
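A minimal usage sketch of the option whose behavior changes here (URL
arbitrary, error checking omitted):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *h = curl_easy_init();
      curl_easy_setopt(h, CURLOPT_URL, "http://example.com/");
      /* With this change, an IPv6-only transfer can no longer be served by
         reusing a pooled IPv4 connection (or vice versa). */
      curl_easy_setopt(h, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V6);
      CURLcode rc = curl_easy_perform(h);
      curl_easy_cleanup(h);
      return (int)rc;
    }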
Closes #6853
The duration of a connect and the total transfer are calculated from two
different time-stamps. It can happen that the total timeout triggers
before the connect timeout expires, and we should make sure to acknowledge
whichever timeout is reached first.
This is especially notable when a transfer first sits in PENDING, as
that time is counted in the total time but the connect timeout is based
on the time since the handle changed to the CONNECT state.
The CONNECTTIMEOUT is per connect attempt. The TIMEOUT is for the entire
operation.
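A brief usage sketch of the two timeouts being distinguished (values
arbitrary, error checking omitted):

    #include <curl/curl.h>

    int main(void)
    {
      CURL *h = curl_easy_init();
      curl_easy_setopt(h, CURLOPT_URL, "http://example.com/");
      /* Per connect attempt ... */
      curl_easy_setopt(h, CURLOPT_CONNECTTIMEOUT_MS, 5000L);
      /* ... versus the entire operation, including time spent in PENDING. */
      curl_easy_setopt(h, CURLOPT_TIMEOUT_MS, 30000L);
      CURLcode rc = curl_easy_perform(h);
      curl_easy_cleanup(h);
      return (int)rc;
    }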
Fixes #6744
Closes #6745
Reported-by: Andrei Bica
Assisted-by: Jay Satiro
The Curl_easy pointer struct entry in connectdata is now gone. Just
before commit 215db086e0 landed on January 8, 2021 there were 919
references to conn->data.
Closes #6608
As the info is already stored in the transfer handle anyway, there's no
need to carry around a duplicate buffer for the life-time of the handle.
Closes #6534
... and use 'int' for ports. We don't use 'unsigned short' since -1 is
still often used internally to signify "unknown value" and 0 - 65535 are
all valid port numbers.
Closes #6534
... in most cases instead of 'struct connectdata *', but in some cases in
addition to it.
- We mostly operate on transfers and not connections.
- We need the transfer handle to log, store data and more. Everything in
libcurl is driven by a transfer (the CURL * in the public API).
- This work clarifies and separates the transfers from the connections
better.
- We should avoid "conn->data". Since individual connections can be used
by many transfers when multiplexing, making sure that conn->data
points to the current and correct transfer at all times is difficult
and has been notoriously error-prone over the years. The goal is to
ultimately remove the conn->data pointer for this reason.
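A hedged sketch of the shape of the change, with stand-in struct
definitions that are only for illustration (curl's real types and call
sites differ):

    #include <stdio.h>

    struct Curl_easy { int id; };                   /* stand-in only */
    struct connectdata { struct Curl_easy *data; }; /* stand-in only */

    /* Old shape: the transfer is reached through conn->data, which is
       fragile when one connection is shared by many transfers. */
    static void old_do_thing(struct connectdata *conn)
    {
      printf("transfer %d\n", conn->data->id);
    }

    /* New shape: the transfer handle is passed explicitly, alongside the
       connection where it is still needed. */
    static void new_do_thing(struct Curl_easy *data, struct connectdata *conn)
    {
      (void)conn;
      printf("transfer %d\n", data->id);
    }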
Closes #6425
The Linux kernel does not report all ICMP errors back to userspace, for
historical reasons.
The IP*_RECVERR sockopts must be turned on to get the correct behaviour,
which is to pass all ICMP errors to userspace.
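A minimal Linux sketch of turning the IPv4 variant on (IPv6 uses
IPV6_RECVERR at level IPPROTO_IPV6); the helper name is invented:

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Have the kernel queue ICMP errors for this socket so they reach
       userspace; extended errors can then be read with
       recvmsg(MSG_ERRQUEUE). */
    static int enable_recverr(int sockfd)
    {
      int on = 1;
      return setsockopt(sockfd, IPPROTO_IP, IP_RECVERR, &on, sizeof(on));
    }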
See https://bugzilla.kernel.org/show_bug.cgi?id=202355
Closes #6341
If supported, defer port selection until connect() time
if --interface is given and source port is 0.
Reproducer:
* start fast webserver on port 80
* starve system of ephemeral ports
$ sysctl net.ipv4.ip_local_port_range="60990 60999"
* start a curl/libcurl "crawler"
$ curl --keepalive --parallel --parallel-immediate --head --interface
127.0.0.2 "http://127.0.0.[1-254]/file[001-002].txt"
current result:
(possibly some successful data)
curl: (45) bind failed with errno 98: Address already in use
result after patch:
(complete success or a few connections failing, highly depending on load)
Fail only when all the possible 4-tuple combinations are exhausted, which
is impossible to do when the port is selected at bind() time, because the
kernel cannot yet know whether the socket will be listen()ed on or
connect()ed.
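On Linux the natural mechanism for deferring the port choice is the
IP_BIND_ADDRESS_NO_PORT socket option; whether the actual patch uses
exactly this is an assumption, so the sketch below is illustrative only:

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* Bind to the requested source address with port 0 and let the kernel
       pick the source port at connect() time, so the whole 4-tuple is
       considered when checking for conflicts (Linux-specific option). */
    static int bind_no_port(int sockfd, const struct sockaddr_in *local)
    {
    #ifdef IP_BIND_ADDRESS_NO_PORT
      int on = 1;
      setsockopt(sockfd, IPPROTO_IP, IP_BIND_ADDRESS_NO_PORT, &on, sizeof(on));
    #endif
      return bind(sockfd, (const struct sockaddr *)local, sizeof(*local));
    }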
Closes #6295
Valgrind will complain about the ssrem buffer being used without explicit
initialization, hence initialize it to zero.
This completes the change initially started in commit 2c0d721215 ('ftp:
retry getpeername for FTP with TCP_FASTOPEN') where the ssloc buffer got a
similar memset to zero.
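A small sketch of the pattern with generic names (curl's ssrem/ssloc
buffers serve the same role):

    #include <string.h>
    #include <sys/socket.h>

    /* Zero the whole buffer before getpeername() so any bytes the kernel
       does not fill in are defined, keeping valgrind quiet. */
    static int peer_name(int sockfd, struct sockaddr_storage *ss)
    {
      socklen_t len = sizeof(*ss);
      memset(ss, 0, sizeof(*ss));
      return getpeername(sockfd, (struct sockaddr *)ss, &len);
    }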
Signed-off-by: Hans-Christian Noren Egtvedt <hegtvedt@cisco.com>
Closes #6289
In the case of TFO, the remote host name is not resolved at connection
time.
For FTP that has led to a missing hostname for the secondary connection.
Therefore the name resolution is now done at the time when FTP requires it.
Fixes #6252
Closes #6265
Closes #6282