- general construct/destroy in connectdata
- default implementations of callback functions
- connect: cfilters for connect and accept
- socks: cfilter for socks proxying
- http_proxy: cfilter for http proxy tunneling
- vtls: cfilters for primary and proxy ssl
- change in general handling of data/conn
- Curl_cfilter_setup() sets up the filter chain based on data settings,
if none are installed by the protocol handler setup
- Curl_cfilter_connect() bootstraps filters into `connected` status,
used by handlers and multi to reach further stages
- Curl_cfilter_is_connected() to check if a conn is connected,
i.e. all filters have done their work
- Curl_cfilter_get_select_socks() gets the sockets and READ/WRITE
indicators for multi select to work
- Curl_cfilter_data_pending() asks filters if they have incoming
data pending for recv
- Curl_cfilter_recv()/Curl_cfilter_send() are the general callbacks
installed in conn->recv/conn->send for io handling
- Curl_cfilter_attach_data()/Curl_cfilter_detach_data() inform filters
about the addition/removal of a `data` from their connection
- adding vtls functions to prevent use of Curl_ssl globals directly
in other parts of the code.
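A rough sketch of the filter idea (field names here are illustrative
assumptions, not the actual lib/cfilters.h declarations):

    /* each filter wraps the next one, closer to the socket */
    struct Curl_cfilter {
      const struct Curl_cftype *cft; /* type: connect, recv, send, ... */
      struct Curl_cfilter *next;     /* next filter towards the socket */
      void *ctx;                     /* per-instance state, e.g. a TLS session */
    };

    /* an HTTPS request through an HTTPS proxy could end up with the chain
       ssl -> http_proxy -> ssl(proxy) -> socket, each filter transforming
       its data and delegating the rest of the work to cf->next */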
Reviewed-by: Daniel Stenberg
Closes#9855
- Use brackets for the IPv6 address shown in the verbose message when the
format is address:port, so that it is less confusing.
Before: Trying 2606:4700:4700::1111:443...
After: Trying [2606:4700:4700::1111]:443...
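A minimal sketch of the change (not the exact lib/connect.c code):

    /* bracket IPv6 addresses so the port separator is unambiguous */
    if(af == AF_INET6)
      infof(data, "Trying [%s]:%d...", ipaddress, port);
    else
      infof(data, "Trying %s:%d...", ipaddress, port);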
Bug: https://curl.se/mail/archive-2022-02/0041.html
Reported-by: David Hu
Closes#9635
The "Failed to connect to" message after a connection failure would
include the strerror message based on the presumed previous socket
error, but in times it seems that error number is not set when reaching
this code and therefore it would include the wrong error message.
The strerror message is now removed from here and the curl_easy_strerror
error is used instead.
Reported-by: Edoardo Lolletti
Fixes#9549
Closes#9554
This internal-use-only storage type can be bumped to a curl_off_t once
we need to use bit 32, since the previous 'unsigned int' can then no
longer hold them all.
The websocket protocols take bits 30 and 31, so they are the last ones
that fit within 32 bits - but they cannot properly be exported through
APIs since those use *signed* 32-bit types (long) in places.
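A worked illustration, with hypothetical names, of why bit 31 is the one
that breaks:

    /* with a 32-bit 'long', LONG_MAX is 2147483647 */
    #define PROTO_BIT30 (1u << 30) /* 1073741824: still fits in a signed long */
    #define PROTO_BIT31 (1u << 31) /* 2147483648: exceeds LONG_MAX, so it cannot
                                      pass through a 'long'-based API unharmed */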
Closes#9481
So that an address used from the DNS cache that was previously used for
QUIC can be reused for TCP and vice versa.
To make this possible, set conn->transport to "unix" for unix domain
connections ... and store the transport struct field in an unsigned char
to use less space.
Reported-by: ウさん
Fixes#9274
Closes#9276
The options were added in #6341 and d13179d, but cause problems: lots of
POLLIN events occur but recvfrom reads nothing.
Reported-by: Tatsuhiro Tsujikawa
Fixes#9209
Closes#9215
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
Closes#8869
Allow curl to send larger UDP datagrams if Path MTU Discovery finds that a
larger path MTU is available. To make this work without sending
fragmented packets, we need to set the DF bit. That makes send(2) fail
with EMSGSIZE if the UDP datagram is too large. In that case, just let it
be lost. This patch enables the DF bit for Linux only.
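A sketch of what the Linux-only part amounts to for IPv4 (the helper name
is made up; IPv6 uses IPV6_MTU_DISCOVER/IPV6_PMTUDISC_DO):

    #include <netinet/in.h>
    #include <sys/socket.h>

    /* set the DF bit so an oversized datagram makes send(2) fail with
       EMSGSIZE instead of being fragmented on the way out */
    static int enable_df(int sockfd)
    {
      int val = IP_PMTUDISC_DO;
      return setsockopt(sockfd, IPPROTO_IP, IP_MTU_DISCOVER,
                        &val, sizeof(val));
    }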
Closes#8883
Earlier, if CURLOPT_LOCALPORT + CURLOPT_LOCALPORTRANGE went past port
65535, the code would fall back to a random port rather than giving up.
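For example, a range that runs past the last valid port now makes the
transfer fail once the valid ports are exhausted:

    curl_easy_setopt(curl, CURLOPT_LOCALPORT, 65530L);
    curl_easy_setopt(curl, CURLOPT_LOCALPORTRANGE, 16L); /* 65530-65545, but
                                           ports above 65535 do not exist */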
Closes#8862
Make ngtcp2+quictls correctly acknowledge `CURLOPT_SSL_VERIFYPEER` and
`CURLOPT_SSL_VERIFYHOST`.
The name check now uses a function from lib/vtls/openssl.c which will
need attention for when TLS is not done by OpenSSL or is disabled while
QUIC is enabled.
Possibly the servercert() function in openssl.c should be adjusted to
make it usable for both regular TLS and QUIC.
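These are the standard options that now also take effect for HTTP/3 with
ngtcp2+quictls:

    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L); /* verify the cert chain */
    curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L); /* verify the host name */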
Ref: #8173
Closes#8178
... and make connect_init() refuse to tunnel protocols marked as not
working. Avoids a double-free.
Reported-by: Even Rouault
Fixes#8018
Closes#8020
Regression. In d6a37c23a3 (7.75.0) we removed the duplicated storage
(connection + easy handle), so this info needs to be extracted again even
for re-used connections.
Add test 435 to verify.
Reported-by: Max Dymond
Fixes#7660
Closes#7662
Commit dbd16c3e2 cleaned up the logic for traversing the addrinfos,
but the move left a conditional on ai that is no longer needed, as the
while loop reevaluation covers it.
Closes#7511
Reviewed-by: Carlo Marcelo Arenas Belón
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
0842175 (not in any release) used the wrong format specifier (long int)
for timediff_t. On an OS such as Windows, libcurl's timediff_t (usually
64-bit) is bigger than long int (32-bit). In 32-bit Windows builds the
upper 32 bits of the timediff_t were then erroneously consumed by the
next format specifier. Since the timeout usually is not larger than 32
bits, this would result in null as the pointer to the string with the
reason for the connection failing. On other OSes or with other compilers
it could probably result in garbage values (i.e. a crash on dereference).
Before:
Failed to connect to localhost port 12345 after 1201 ms: (nil)
After:
Failed to connect to localhost port 12345 after 1203 ms: Connection refused
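The varargs mechanism behind it, illustrated with plain C types (not the
actual curl code):

    long long ms = 1201; /* stands in for the 64-bit timediff_t */
    /* wrong: %ld consumes only 32 bits of 'ms' on such a platform, so %s
       then reads the leftover upper 32 bits (zero here) as the char pointer */
    printf("after %ld ms: %s\n", ms, "Connection refused");
    /* right: a matching 64-bit specifier keeps the argument list aligned */
    printf("after %lld ms: %s\n", ms, "Connection refused");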
Closes https://github.com/curl/curl/pull/7449
- the data needs to be "line-based" anyway since it's also passed to the
debug callback/application
- it makes infof() work like failf() and consistency is good
- there's an assert that triggers on newlines in the format string
- Also removes a few instances of "..."
- Removes the code that would append "..." to the end of the data *iff*
it was truncated in infof()
Closes#7357
- Check whether a connection has succeeded before checking whether it has
timed out (a sketch of the resulting check order follows this list).
This means that if we have connected quickly but subsequently been
descheduled, we allow the connection to succeed. Note: if we time out,
but the connection succeeds between checking the timeout and connecting
to the server, we still allow it to go ahead. This is viewed as an
acceptable trade-off.
- Add additional failf logging around failed connection attempts to
propagate the cause up to the caller.
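A sketch of the resulting check order, with hypothetical helper names
(the real logic lives in lib/connect.c):

    /* connected? then the elapsed time no longer matters */
    if(attempt_is_connected(sockfd))   /* hypothetical helper */
      return CURLE_OK;
    /* only a still-pending attempt can be declared timed out */
    if(deadline_expired(data)) {       /* hypothetical helper */
      failf(data, "Connection timeout after %ld ms", elapsed_ms);
      return CURLE_OPERATION_TIMEDOUT;
    }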
Co-Authored-by: Martin Howarth
Closes#7178
In some situations, it was possible that a transfer was set up to
use a specific IP version but, due to DNS caching or connection
reuse, ended up using a different IP version than requested.
This commit changes the effect of CURLOPT_IPRESOLVE from simply
restricting address resolution to also preventing the wrong connection
type from being used when choosing a connection from the pool, and
restricting which addresses may be used when establishing
a new connection.
It is important that all address versions are resolved, even if
not used by that particular transfer, because the result is
cached and could be useful for a different transfer with a
different CURLOPT_IPRESOLVE setting.
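For example, pinning a transfer to IPv4 now also rules out reusing a
cached IPv6 connection to the same host:

    curl_easy_setopt(curl, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);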
Closes#6853
The duration of a connect and of the total transfer are calculated from
two different time-stamps. The total timeout can therefore trigger before
the connect timeout expires, and we should make sure to honor whichever
timeout is reached first.
This is especially notable when a transfer first sits in PENDING, as
that time is counted in the total time but the connect timeout is based
on the time since the handle changed to the CONNECT state.
The CONNECTTIMEOUT is per connect attempt. The TIMEOUT is for the entire
operation.
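In public API terms:

    /* limits each connect attempt */
    curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 5000L);
    /* limits the entire operation, including time spent queued in PENDING */
    curl_easy_setopt(curl, CURLOPT_TIMEOUT_MS, 30000L);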
Fixes#6744
Closes#6745
Reported-by: Andrei Bica
Assisted-by: Jay Satiro
The Curl_easy pointer struct entry in connectdata is now gone. Just
before commit 215db086e0 landed on January 8, 2021 there were 919
references to conn->data.
Closes#6608
As the info is already stored in the transfer handle anyway, there's no
need to carry around a duplicate buffer for the life-time of the handle.
Closes#6534
... and use 'int' for ports. We don't use 'unsigned short' since -1 is
still often used internally to signify "unknown value" and 0 - 65535 are
all valid port numbers.
Closes#6534
... in most cases instead of 'struct connectdata *', but in some cases in
addition to it.
- We mostly operate on transfers and not connections.
- We need the transfer handle to log, store data and more. Everything in
libcurl is driven by a transfer (the CURL * in the public API).
- This work clarifies and separates the transfers from the connections
better.
- We should avoid "conn->data". Since individual connections can be used
by many transfers when multiplexing, making sure that conn->data
points to the current and correct transfer at all times is difficult
and has been notoriously error-prone over the years. The goal is to
ultimately remove the conn->data pointer for this reason.
Closes#6425