`data->id` is unique in *most* situations, but not in all. If a libcurl
application uses more than one connection cache, they will overlap. This
is a rare situation, but libcurl apps do crazy things. However, for
informative things, like tracing, `data->id` is superior, since it
assigns new ids in curl's serial curl_easy_perform() use.
Introduce `data->mid` which is a unique identifier inside one multi
instance, assigned on multi_add_handle() and cleared on
multi_remove_handle().
Use the `mid` in DoH operations and also in h2/h3 stream hashes.
Reported-by: 罗朝辉
Fixes#14414
Closes#14499
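A minimal sketch of the per-multi id idea described above; struct and
function names here are illustrative, not the actual libcurl internals:
```
/* illustrative only: assign an id unique within one multi instance */
struct easy {          /* stand-in for the easy handle */
  long mid;            /* unique inside one multi, -1 when not attached */
};

struct multi {         /* stand-in for the multi handle */
  long next_mid;       /* counter handing out the next id */
};

static void multi_add(struct multi *m, struct easy *e)
{
  e->mid = m->next_mid++;   /* assigned on multi_add_handle() */
}

static void multi_remove(struct multi *m, struct easy *e)
{
  (void)m;
  e->mid = -1;              /* cleared on multi_remove_handle() */
}
```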
Members of the filter context, like stream hash and buffers, need to be
initialized early and protected by a flag to also avoid double cleanup.
This allows the context to be used safely before a connect() is started
and the other parts of the context are set up.
Closes#14505
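A rough sketch of the early-init pattern with a guard flag; the struct
and helpers are assumed for illustration:
```
#include <stdbool.h>

struct filter_ctx {
  bool initialized;   /* guards against double init and double cleanup */
  /* stream hash, buffers, ... */
};

static void ctx_init(struct filter_ctx *ctx)
{
  if(ctx->initialized)
    return;
  /* set up stream hash and buffers here */
  ctx->initialized = true;
}

static void ctx_free(struct filter_ctx *ctx)
{
  if(!ctx->initialized)
    return;             /* avoid double cleanup */
  /* free stream hash and buffers here */
  ctx->initialized = false;
}
```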
- rely on the new flush to handle blocked sends. No longer
  simulate EAGAIN on (partially) blocked sends, which required
  callers to handle repeats.
- fix some debug handling of the CURL_SMALLREQSEND env var
- add some assertions in request.c to affirm we do it right
- enhance assertion output in test_16 for easier analysis
Closes#14435
- replace the counting of upload lengths with the new eos send flag
- drain streams less frequently, skipping events where it
  is not needed
- this PR is based on #14220
http2, cf-h2-proxy: fix EAGAINed out buffer
- in adjust pollset and shutdown handling, a non-empty `ctx->outbufq`
must trigger send polling, regardless of http/2 flow control
- in http2, fix retry handling of blocked GOAWAY frame
test case improvement:
- let client 'upload-pausing' handle http versions
Closes#14253
Since data can be held in connection filter buffers when sending gives
EAGAIN, add methods to query this and perform flushing of those buffers.
The transfer loop will continue sending until all upload data is
processed and the connection is flushed.
- add `CF_QUERY_SEND_PENDING` to query filters
- add `CF_CTRL_DATA_SEND_FLUSH` to flush filters
- change `Curl_req_want_send()` to query the connection
if it needs flushing
- use `Curl_req_want_send()` to determine the POLLOUT
in the PERFORMING multi state
- implement flush handling in the HTTP/2 connection filter
Closes#14271
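A toy model of the query-then-flush loop described above; the helper
names only mirror the spirit of `CF_QUERY_SEND_PENDING` and
`CF_CTRL_DATA_SEND_FLUSH` and are not the actual libcurl calls:
```
#include <stdbool.h>
#include <stdio.h>

static int upload_left = 3;   /* pretend chunks of upload data */
static int buffered = 0;      /* pretend bytes parked in filter buffers */

static bool send_pending(void) { return buffered > 0; } /* cf. CF_QUERY_SEND_PENDING */
static void send_flush(void)   { buffered = 0; }        /* cf. CF_CTRL_DATA_SEND_FLUSH */
static void send_chunk(void)   { --upload_left; buffered = 1; }

int main(void)
{
  /* keep sending until all upload data is processed and filters are flushed */
  while(upload_left > 0 || send_pending()) {
    if(upload_left > 0)
      send_chunk();
    else
      send_flush();
  }
  printf("all upload data sent and connection flushed\n");
  return 0;
}
```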
Adds a `bool eos` flag to send methods to indicate that the data
is the last chunk the involved transfer wants to send to the server.
This will help protocol filters like HTTP/2 and 3 to forward the
stream's EOF flag and also allow such calls to EAGAIN when buffers
are not yet fully flushed.
Closes#14220
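A sketch of what an `eos`-aware send might look like in a protocol
filter; the signature and struct are assumptions for illustration, not
the real API:
```
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>
#include <errno.h>

struct stream {
  bool send_eof;     /* remember that the upload is complete */
  size_t buffered;   /* bytes still parked in the filter's own buffer */
};

static ssize_t stream_send(struct stream *s, const void *buf, size_t len,
                           bool eos, int *err)
{
  (void)buf;
  if(eos && s->buffered) {
    *err = EAGAIN;     /* earlier data not flushed yet, caller must retry */
    return -1;
  }
  s->send_eof = eos;   /* e.g. set the HTTP/2 or HTTP/3 stream's EOF flag */
  *err = 0;
  return (ssize_t)len; /* pretend everything was accepted */
}
```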
Set the initial stream window size to 64KB and increase that to the 10MB
we used to start with on the first server reply, unless a rate limit is
in effect.
Continuously monitor changes to the transfer's rate limit and adjust the
stream window size accordingly. `max_recv_speed` is a transfer property
that can be changed during processing by a callback.
Closes#14326
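The window sizing could roughly look like this; the constants match the
description above, the formula and names are assumptions:
```
#include <stdint.h>

#define WINDOW_INITIAL  (64 * 1024)        /* 64KB at stream start */
#define WINDOW_MAX      (10 * 1024 * 1024) /* 10MB after the first reply */

static int32_t stream_window_size(int64_t max_recv_speed, int got_first_reply)
{
  if(max_recv_speed > 0) {  /* rate limit set: keep roughly a second of data */
    if(max_recv_speed < WINDOW_INITIAL)
      return WINDOW_INITIAL;
    return (max_recv_speed > WINDOW_MAX) ? WINDOW_MAX : (int32_t)max_recv_speed;
  }
  return got_first_reply ? WINDOW_MAX : WINDOW_INITIAL;
}
```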
Based on the standards and guidelines we use for our documentation.
- expand contractions (they're => they are etc)
- host name => hostname
- file name => filename
- user name => username
- man page => manpage
- run-time => runtime
- set-up => setup
- back-end => backend
- a HTTP => an HTTP
- Two spaces after a period => one space after period
Closes#14073
When libcurl discards a connection there are two phases this may go
through: "shutdown" and "closing". If a connection is aborted, the
shutdown phase is skipped and it is closed right away.
The connection filters attached to the connection implement the phases
in their `do_shutdown()` and `do_close()` callbacks. Filters now carry a
`shutdown` flag next to `connected` to keep track of the shutdown
operation.
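A simplified picture of a filter type with both callbacks and the new
flag, modeled on the description above and not copied from the libcurl
headers:
```
#include <stdbool.h>

struct cfilter;   /* one filter instance in a connection's chain */

struct cfilter_type {
  /* non-blocking; sets *done once e.g. GOAWAY or close notify went through */
  int  (*do_shutdown)(struct cfilter *cf, bool *done);
  void (*do_close)(struct cfilter *cf);
};

struct cfilter {
  const struct cfilter_type *cft;
  struct cfilter *next;   /* shutdown runs from top to bottom */
  bool connected;         /* skip shutdown when not connected */
  bool shutdown;          /* tracks that the shutdown operation finished */
};
```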
Filters are shut down from top to bottom. If a filter is not connected,
its shutdown is skipped. Notable filters that *do* something during
shutdown are HTTP/2 and TLS. HTTP/2 sends the GOAWAY frame. TLS sends
its close notify and expects to receive a close notify from the server.
As sends and receives may EAGAIN on the network, a shutdown is often not
successful right away and needs to poll the connection's socket(s). To
facilitate this, such connections are placed on a new shutdown list
inside the connection cache.
Since managing this list requires the cooperation of a multi handle,
only the connection cache belonging to a multi handle is used. If a
connection was in another cache when being discarded, it is removed
there and added to the multi's cache. If no multi handle is available at
that time, the connection is shut down and closed in a one-time,
best-effort attempt.
When a multi handle is destroyed, all connections still on the shutdown
list are discarded with a final shutdown attempt and close. In curl
debug builds, the environment variable `CURL_GRACEFUL_SHUTDOWN` can be
set to make this graceful with a timeout in milliseconds given by the
variable.
The shutdown list is limited to the max number of connections configured
for a multi cache, set via CURLMOPT_MAX_TOTAL_CONNECTIONS. When the
limit is reached, the oldest connection on the shutdown list is
discarded.
- In multi_wait() and multi_waitfds(), collect all connection caches
involved (each transfer might carry its own) into a temporary list.
Let each connection cache on the list contribute sockets and
POLLIN/OUT events its connections are waiting for.
- in multi_perform() collect the connection caches the same way and let
them perform their maintenance. This will make another non-blocking
attempt to shutdown all connections on its shutdown list.
- for event based multis (multi->socket_cb set), add the sockets and
their poll events via the callback. When `multi_socket()` is invoked
for a socket not known by an active transfer, forward this to the
multi's cache for processing. On closing a connection, remove its
socket(s) via the callback.
TLS connection filters MUST NOT send close notify messages in their
`do_close()` implementation. The reason is that a TLS close notify
signals a success. When a connection is aborted and skips its shutdown
phase, the server needs to see a missing close notify to detect
something has gone wrong.
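For a TLS filter this roughly means the following, here sketched with
OpenSSL; the surrounding callback shape is an assumption:
```
#include <stdbool.h>
#include <openssl/ssl.h>

static int tls_do_shutdown(SSL *ssl, bool *done)
{
  int rc = SSL_shutdown(ssl);  /* sends our close notify */
  *done = (rc == 1);           /* 1: the peer's close notify arrived as well */
  return 0;                    /* 0 or a retryable error: call again later */
}

static void tls_do_close(SSL *ssl)
{
  /* no SSL_shutdown() here: an aborted connection must not look like a
     clean, successful end of the TLS session */
  SSL_free(ssl);
}
```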
A graceful shutdown of FTP's data connection is performed implicitly
before regarding the upload/download as complete and continuing on the
control connection. For FTP without TLS, there is just the socket close
happening. But with TLS, the sent/received close notify signals that the
transfer is complete and healthy. Servers like `vsftpd` verify that and
reject uploads without a TLS close notify.
- added test_19_* for shutdown related tests
- test_19_01 and test_19_02 test for TCP RST packets
which happen without a graceful shutdown and should
no longer appear otherwise.
- add test_19_03 for handling shutdowns by the server
- add test_19_04 for handling shutdowns by curl
- add test_19_05 for event based shutdown by server
- add test_30_06/07 and test_31_06/07 for shutdown checks
on FTP up- and downloads.
Closes#13976
This adds connection shutdown infrastructure and first use for FTP. FTP
data connections, when not encountering an error, are now shut down in a
blocking way with a 2sec timeout.
- add cfilter `Curl_cft_shutdown` callback
- keep a shutdown start timestamp and timeout at connectdata
- provide shutdown timeout default and member in
`data->set.shutdowntimeout`.
- provide methods for starting, interrogating and clearing
shutdown timers
- provide `Curl_conn_shutdown_blocking()` to shutdown the
`sockindex` filter chain in a blocking way. Use that in FTP.
- add `Curl_conn_cf_poll()` to wait for socket events during
shutdown of a connection filter chain.
This gets the monitoring sockets and events via the filters'
"adjust_pollset()" methods. This gives correct behaviour when
shutting down a TLS connection through an HTTP/2 proxy.
- Implement shutdown for all socket filters
- for HTTP/2 and h2 proxying to send GOAWAY
- for TLS backends to the best of their capabilities
- for tcp socket filter to make a final, nonblocking
receive to avoid unwanted RST states
- add shutdown forwarding to happy eyeballers and
https connect ballers when applicable.
Closes#13904
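A sketch of a blocking shutdown with a time budget, in the spirit of
`Curl_conn_shutdown_blocking()` described above; the helpers and timing
are simplified assumptions:
```
#include <stdbool.h>
#include <time.h>

static bool shutdown_step(void)      /* one non-blocking shutdown attempt */
{
  static int attempts;
  return ++attempts >= 3;            /* pretend it finishes eventually */
}

static long ms_since(const struct timespec *start)
{
  struct timespec now;
  clock_gettime(CLOCK_MONOTONIC, &now);
  return (long)((now.tv_sec - start->tv_sec) * 1000 +
                (now.tv_nsec - start->tv_nsec) / 1000000);
}

static int shutdown_blocking(long timeout_ms)   /* e.g. 2000 for FTP data */
{
  struct timespec start;
  clock_gettime(CLOCK_MONOTONIC, &start);
  while(!shutdown_step()) {
    if(ms_since(&start) >= timeout_ms)
      return -1;   /* budget used up, connection gets closed hard */
    /* real code polls the socket(s) for the events the filters want */
  }
  return 0;        /* graceful shutdown completed */
}
```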
- as reported in #13725, some servers wrongly send body bytes in
responses to a HEAD request. This used to be tolerated in curl
8.4 and before and leads to failed transfers in newer versions.
- restore previous behaviour for HTTP/1.1 and HTTP/2:
* 1.1: do not add 'Transfer-Encoding' writers from HEAD
responses. RFC 9112 says they do not apply.
* 2: when the transfer expects 'no_body', do not report stream
  resets as an error when all response headers have been received.
Reported-by: Jeroen Ooms
Fixes#13725
Closes#13732
- errors returned by Curl_xfer_write_resp() and the header variant are
not errors in the protocol. The result needs to be returned on the
next recv() from the protocol filter.
- make xfer write errors for response data cause the stream to be
cancelled
- added pytest test_02_14 and test_02_15 to verify this also for
  parallel processing
Reported-by: Laramie Leavitt
Fixes#13411
Closes#13424
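The deferred-error idea from the entry above, sketched in code; names
and the recv shape are assumptions, not the actual filter implementation:
```
#include <stddef.h>
#include <sys/types.h>

struct stream {
  int xfer_result;   /* a failed client write of response data parks here */
};

static ssize_t stream_recv(struct stream *s, void *buf, size_t len, int *err)
{
  (void)buf; (void)len;
  if(s->xfer_result) {
    *err = s->xfer_result;   /* surface the earlier write failure now */
    s->xfer_result = 0;
    return -1;
  }
  *err = 0;
  return 0;                  /* nothing pending in this sketch */
}
```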
- add `Curl_hash_offt` as hashmap between a `curl_off_t` and
an object. Use this in h2+h3 connection filters to associate
`data->id` with the internal stream state.
- changed implementations of all affected connection filters
- removed `h2_ctx*` and `h3_ctx*` from `struct HTTP` and thus
the easy handle
- solves the problem of attaching "foreign protocol" easy handles
during connection shutdown
Test 1616 verifies the new hash functions.
Closes#13204
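A toy id-to-object map showing the idea of keying stream state by the
transfer's id; this is not the actual `Curl_hash_offt` API:
```
#include <stdint.h>
#include <stddef.h>

#define SLOTS 64

struct id_map {
  int64_t keys[SLOTS];
  void   *vals[SLOTS];
};

static int id_map_set(struct id_map *m, int64_t id, void *obj)
{
  size_t i = (size_t)((uint64_t)id % SLOTS), n;
  for(n = 0; n < SLOTS; n++, i = (i + 1) % SLOTS) {  /* linear probing */
    if(!m->vals[i] || m->keys[i] == id) {
      m->keys[i] = id;
      m->vals[i] = obj;
      return 0;
    }
  }
  return -1;  /* table full */
}

static void *id_map_get(struct id_map *m, int64_t id)
{
  size_t i = (size_t)((uint64_t)id % SLOTS), n;
  for(n = 0; n < SLOTS; n++, i = (i + 1) % SLOTS)
    if(m->vals[i] && m->keys[i] == id)
      return m->vals[i];
  return NULL;
}
```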
- When the writing of response data fails, reset the stream
and do not return a callback error to nghttp2. That would
be a fatal error for the connection and harm other requests.
- add test cases for various abort scenarios
Reported-by: Konstantin Kuzov
Fixes#13292
Closes#13298
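The pattern in an nghttp2 data callback, roughly: reset only the
affected stream instead of failing the session. The `write_resp_data()`
helper is a stand-in, not real libcurl code:
```
#include <nghttp2/nghttp2.h>

static int write_resp_data(void *user_data, int32_t stream_id,
                           const uint8_t *buf, size_t len)
{
  (void)user_data; (void)stream_id; (void)buf; (void)len;
  return 0;   /* pretend the client write succeeded */
}

static int on_data_chunk_recv(nghttp2_session *session, uint8_t flags,
                              int32_t stream_id, const uint8_t *data,
                              size_t len, void *user_data)
{
  (void)flags;
  if(write_resp_data(user_data, stream_id, data, len)) {
    /* writing the response failed: cancel this one stream */
    nghttp2_submit_rst_stream(session, NGHTTP2_FLAG_NONE, stream_id,
                              NGHTTP2_INTERNAL_ERROR);
  }
  /* never NGHTTP2_ERR_CALLBACK_FAILURE here, that would be fatal for the
     connection and harm other requests */
  return 0;
}
```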
Saving some cpu cycles in http response header processing:
- pass the length of the header line along
- use string constant sizeof() instead of strlen()
- check line length if prefix is possible
- switch on first header char to limit checks
Closes#13143
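For illustration, a prefix check with a known line length, sizeof() and
a first-character switch might look like this; the macro and function
are made up:
```
#include <stddef.h>
#include <strings.h>
#include <ctype.h>

/* true if the header line of known length starts with the prefix */
#define HD_IS(hd, hdlen, prefix) \
  (((hdlen) >= sizeof(prefix) - 1) && \
   !strncasecmp((hd), (prefix), sizeof(prefix) - 1))

static int classify_header(const char *hd, size_t hdlen)
{
  switch(tolower((unsigned char)hd[0])) {  /* limit checks per line */
  case 'c':
    if(HD_IS(hd, hdlen, "Content-Length:"))
      return 1;
    if(HD_IS(hd, hdlen, "Connection:"))
      return 2;
    break;
  case 't':
    if(HD_IS(hd, hdlen, "Transfer-Encoding:"))
      return 3;
    break;
  }
  return 0;
}
```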
A transfer may need several `SingleRequest`s to succeed. This happens
regularly for authentication, follows and retries on failed connections.
The "readwrite()" calls and functions connected to those carried a `bool
*done` parameter to indicate that the current `SingleRequest` is over.
This may happen before `upload_done` or `download_done` bits of
`SingleRequest` are set.
The problem with that is that `write_resp()` protocol handlers are now
invoked in places where the `bool *done` cannot be passed up to the
caller. Instead of being a bool in the call chain, it needs to become a
member of `SingleRequest`, reflecting its state.
This removes the `bool *done` parameter and adds the `done` bit to
`SingleRequest` instead. It adds `Curl_req_soft_reset()` for using a
`SingleRequest` in a follow up, clearing `done` and other
flags/counters.
Closes#13096
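A sketch of the `done` bit living in the request state plus a soft reset
for follow-ups; field and function names are illustrative:
```
#include <stdbool.h>

struct single_request {
  bool done;            /* this request/response cycle is over */
  bool upload_done;
  bool download_done;
  long byte_counter;    /* stand-in for the other flags/counters */
};

static void req_soft_reset(struct single_request *req)
{
  /* reuse the same struct for a follow-up (auth, redirect, retry) */
  req->done = false;
  req->upload_done = false;
  req->download_done = false;
  req->byte_counter = 0;
}
```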
- use the new `Curl_xfer_write_resp()` to write incoming responses
directly to the client
- eliminates `stream->recvbuf`
- memory consumption on parallel transfers minimized
Closes#12828
- let `multi_getsock()` initialize the pollset with what the
  transfer state requires in regard to SEND/RECV
- change connection filters `adjust_pollset()` implementation
to react on the presence of POLLIN/-OUT in the pollset and
no longer check CURL_WANT_SEND/CURL_WANT_RECV
- cf-socket will no longer add POLLIN on its own
- http2 and http/3 filters will only do adjustments if the
passed pollset wants to POLLIN/OUT for the transfer on
the socket. This is similar to the HTTP/2 proxy filter
and works in stacked filters.
Closes#12640
do not add a socket for POLLIN when the transfer does not want to send
(for example is paused).
Follow-up to 47f5b1a
Reported-by: bubbleguuum on github
Fixes#12632
Closes#12633
- there seems to be a code path that cleans up easy handles without
triggering DONE or DETACH events to the connection filters. This
would explain why nghttp2 still holds stream user data
- add GOOD check to easy handle used in on_close_callback to
prevent crashes, ASSERTs in debug builds.
- NULL the stream user data early before submitting RST
- add checks in on_stream_close() to identify UNGOOD easy handles
Reported-by: Hans-Christian Egtvedt
Fixes#10936
Closes#12562
- use `data->state.dselect_bits` everywhere instead
- remove `bool *comeback` parameter as non-zero
`data->state.dselect_bits` will indicate that IO is
incomplete.
Closes#12512
- refs #12356 where a UAF is reported when closing a connection
with a stream whose easy handle was cleaned up already
- handle DETACH events same as DONE events in h2/h3 filters
Fixes#12356
Reported-by: Paweł Wegner
Closes#12364
GCC 14 introduces a new -Walloc-size included in -Wextra which gives:
```
src/tool_operate.c: In function ‘add_per_transfer’:
src/tool_operate.c:213:5: warning: allocation of insufficient size ‘1’ for type ‘struct per_transfer’ with size ‘480’ [-Walloc-size]
213 | p = calloc(sizeof(struct per_transfer), 1);
| ^
src/var.c: In function ‘addvariable’:
src/var.c:361:5: warning: allocation of insufficient size ‘1’ for type ‘struct var’ with size ‘32’ [-Walloc-size]
361 | p = calloc(sizeof(struct var), 1);
| ^
```
The calloc prototype is:
```
void *calloc(size_t nmemb, size_t size);
```
So, just swap the number of members and size arguments to match the
prototype, as we're initialising 1 struct of size `sizeof(struct
...)`. GCC then sees we're not doing anything wrong.
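For illustration, the corrected pattern with the arguments in prototype
order:
```
#include <stdlib.h>

struct per_transfer { int placeholder; };  /* stand-in for the real struct */

static struct per_transfer *new_per_transfer(void)
{
  /* 1 member of sizeof(struct per_transfer) bytes, zero-initialized */
  return calloc(1, sizeof(struct per_transfer));
}
```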
Closes#12292
Connection filter had a `get_select_socks()` method, inspired by the
various `getsocks` functions involved during the lifetime of a
transfer. These, depending on transfer state (CONNECT/DO/DONE etc.),
return sockets to monitor and flag if this shall be done for POLLIN
and/or POLLOUT.
Due to this design, sockets and flags could only be added, not
removed. This led to problems in filters like HTTP/2 where flow control
prohibits the sending of data until the peer increases the flow
window. The general transfer loop wants to write, adds POLLOUT, the
socket is writeable but no data can be written.
This leads to cpu busy loops. To prevent that, HTTP/2 did set the
`SEND_HOLD` flag of such a blocked transfer, so the transfer loop cedes
further attempts. This works if only one such filter is involved. If an
HTTP/2 transfer goes through an HTTP/2 proxy, two filters are
setting/clearing this flag and may step on each other's toes.
Connection filters `get_select_socks()` is replaced by
`adjust_pollset()`. They get passed a `struct easy_pollset` that keeps
up to `MAX_SOCKSPEREASYHANDLE` sockets and their `POLLIN|POLLOUT`
flags. This struct is initialized in `multi_getsock()` by calling the
various `getsocks()` implementations based on transfer state, as before.
After protocol handlers/transfer loop have set the sockets and flags
they want, the `easy_pollset` is *always* passed to the filters. Filters
"higher" in the chain are called first, starting at the first
not-yet-connected one. Each filter may add sockets and/or change
flags. When all flags are removed, the socket itself is removed from the
pollset.
Example:
* transfer wants to send, adds POLLOUT
* http/2 filter has a flow control block, removes POLLOUT and adds
POLLIN (it is waiting on a WINDOW_UPDATE from the server)
* TLS filter is connected and changes nothing
* h2-proxy filter also has a flow control block on its tunnel stream,
removes POLLOUT and adds POLLIN also.
* socket filter is connected and changes nothing
* The resulting pollset is then mixed together with all other transfers
and their pollsets, just as before.
Use of `SEND_HOLD` is no longer necessary in the filters.
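A sketch of an `adjust_pollset()` in the spirit of the HTTP/2 example
above: under a flow control block, swap the transfer's POLLOUT for
POLLIN so the WINDOW_UPDATE wakes us up. Types and names are
illustrative, not the actual libcurl definitions:
```
#include <stdbool.h>
#include <poll.h>

#define MAX_SOCKSPEREASYHANDLE 4

struct easy_pollset {
  int sockets[MAX_SOCKSPEREASYHANDLE];
  short actions[MAX_SOCKSPEREASYHANDLE];  /* POLLIN and/or POLLOUT per socket */
  unsigned int num;
};

static void h2_adjust_pollset(struct easy_pollset *ps, int sock,
                              bool flow_blocked)
{
  unsigned int i;
  for(i = 0; i < ps->num; i++) {
    if(ps->sockets[i] == sock && (ps->actions[i] & POLLOUT) && flow_blocked) {
      /* cannot send until the window opens: wait for the WINDOW_UPDATE */
      ps->actions[i] = (short)((ps->actions[i] & ~POLLOUT) | POLLIN);
    }
  }
}
```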
All filters are adapted for the changed method. The handling in
`multi.c` has been adjusted, but its state handling and the protocol
handlers' `getsocks` methods are untouched.
The most affected filters are http/2, ngtcp2, quiche and h2-proxy. TLS
filters needed to be adjusted for the connecting handshake read/write
handling.
No noticeable difference in performance was detected in local scorecard
runs.
Closes#11833
Getting nghttp2's error message helps users understand what's going
on. For example when the connection is brought down due to a forbidden
header being used - as that header is then not displayed by curl itself.
Example:
curl: (92) Invalid HTTP header field was received: frame type: 1,
stream: 1, name: [upgrade], value: [h2,h2c]
Ref: #12172
Closes#12179
- fold the code to convert dynhds to the nghttp2 structs
into a dynhds internal method
- saves code duplication
- pacifies compiler analyzers
Closes#12097
populate_binsettings now returns a negative value on error, instead of a
huge positive value. Both places which call this function have been
updated to handle this change in its contract.
The way populate_binsettings had been used prior to this change, the
huge positive values -- due to signed->unsigned conversion of the
potentially negative result of nghttp2_pack_settings_payload, which
returns negative values on error -- were not possible. But only because
http2.c currently
always provides a large enough output buffer and provides H2 SETTINGS
IVs which pass the verification logic inside nghttp2. If the
verification logic were to change or if http2.c started passing in more
IVs without increasing the output buffer size, the overflow could become
reachable, and libcurl/curl might start leaking memory contents to
servers/proxies...
Closes#12101
- respond to HTTP/2 streams refused via a GOAWAY from the server
  with CURLE_RECV_ERROR in order to trigger a retry
  on another connection
Reported-by: black-desk on github
Ref #11859
Closes#12054
- If SSL shutdown is not finished then make an additional call to
SSL_read to gather additional tracing.
- Fix http2 and h2-proxy filters to forward do_close() calls to the next
filter.
For example h2 and SSL shutdown before and after this change:
Before:
Curl_conn_close -> cf_hc_close -> Curl_conn_cf_discard_chain ->
ssl_cf_destroy
After:
Curl_conn_close -> cf_hc_close -> cf_h2_close -> cf_setup_close ->
ssl_cf_close
Note that currently the tracing does not show output on the connection
closure handle. Refer to discussion in #11878.
Ref: https://github.com/curl/curl/discussions/11878
Closes https://github.com/curl/curl/pull/11858
- refs #11342 where errors with git https interactions
were observed
- problem was caused by 1st sends of size larger than 64KB
which resulted in later retries of 64KB only
- limit sending of 1st block to 64KB
- adjust h2/h3 filters to cope with parsing the HTTP/1.1
formatted request in chunks
- introducing Curl_nwrite() as companion to Curl_write()
for the many cases where the sockindex is already known
Fixes#11342 (again)
Closes#11803
- added test cases for various code paths
- fixed handling of blocked write when stream had
been closed in between attempts
- re-enabled DEBUGASSERT on send with smaller data size
- in debug builds, environment variables can be set to simulate a slow
network when sending data. cf-socket.c and vquic.c support
* CURL_DBG_SOCK_WBLOCK: percentage of send() calls that should be
answered with an EAGAIN. TCP/UNIX sockets.
This is chosen randomly.
* CURL_DBG_SOCK_WPARTIAL: percentage of data that shall be written
to the network. TCP/UNIX sockets.
Example: 80 means a send with 1000 bytes would only send 800
This is applied to every send.
* CURL_DBG_QUIC_WBLOCK: percentage of send() calls that should be
answered with EAGAIN. QUIC only.
This is chosen randomly.
Closes#11756