- h2-download now always opens the output file on the first write callback
invocation, regardless of whether it will pause the transfer or not.
- Checks on output files then do not depend on the amount of data curl
has collected for the first write.
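To illustrate the behavior described above, here is a hedged sketch of an application-level write callback (not the actual h2-download client code) that opens its output file on the first invocation even when that same invocation pauses the transfer; the file name, context struct and pause condition are made up:

```c
#include <stdio.h>
#include <curl/curl.h>

/* illustration only: the output file is opened on the very first write
 * callback invocation, even if that same invocation pauses the transfer */
struct out_ctx {
  FILE *fp;
  int paused_once;
};

static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  struct out_ctx *ctx = userdata;
  if(!ctx->fp) {
    ctx->fp = fopen("download.bin", "wb"); /* opened on first write ... */
    if(!ctx->fp)
      return CURL_WRITEFUNC_ERROR;
  }
  if(!ctx->paused_once) {                  /* ... even when we pause now */
    ctx->paused_once = 1;
    return CURL_WRITEFUNC_PAUSE;           /* data is redelivered on unpause */
  }
  return fwrite(ptr, size, nmemb, ctx->fp) * size;
}
```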
Closes #13323
- When the writing of response data fails, reset the stream
and do not return a callback error to nghttp2. That would
be a fatal error for the connection and harm other requests.
- add test cases for various abort scenarios
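A minimal sketch of the stream-reset idea from the first bullet, written against the public nghttp2 API; the `deliver_data()` helper standing in for curl's response-data write path is hypothetical:

```c
#include <nghttp2/nghttp2.h>

/* deliver_data() is a hypothetical stand-in for handing response bytes to
 * the client-side writers; it returns non-zero when that write fails */
extern int deliver_data(int32_t stream_id, const uint8_t *data, size_t len);

static int on_data_chunk_recv(nghttp2_session *session, uint8_t flags,
                              int32_t stream_id, const uint8_t *data,
                              size_t len, void *user_data)
{
  (void)flags;
  (void)user_data;
  if(deliver_data(stream_id, data, len)) {
    /* reset only this stream ... */
    nghttp2_submit_rst_stream(session, NGHTTP2_FLAG_NONE, stream_id,
                              NGHTTP2_INTERNAL_ERROR);
    /* ... and do NOT return NGHTTP2_ERR_CALLBACK_FAILURE, which would be
       fatal for the whole connection and harm all other streams on it */
  }
  return 0;
}
```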
Reported-by: Konstantin Kuzov
Fixes #13292
Closes #13298
... no need to use an absolute path, which makes the build unnecessarily
fail if invoked using a different mount point. managen now takes options
to find the input files.
Update test1478 to provide the dir arguments to managen
Closes #13281
A transfer with a completed download that is still uploading needs to
check the connection state when it is PAUSEd, since connection
close/errors would otherwise go unnoticed.
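For illustration of the scenario only (not the fix), using the public pause API; `easy` is assumed to be a running transfer, e.g. a PUT, whose response download has already completed:

```c
#include <curl/curl.h>

/* sketch of the scenario only; `easy` is assumed to be a running transfer
 * (e.g. a PUT) whose response download has already completed */
static void pause_and_resume_later(CURL *easy)
{
  /* pause both directions; with the fix, a connection close or error is
     still noticed while the transfer sits in this paused state */
  curl_easy_pause(easy, CURLPAUSE_ALL);

  /* ... later, when the application wants the upload to continue ... */
  curl_easy_pause(easy, CURLPAUSE_CONT);
}
```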
Reported-by: Sergey Bronnikov
Fixes #13260
Closes #13271
The two options CURLOPT_PROXYUSERNAME and CURLOPT_PROXYPASSWORD set the
actual names as-is, not URL encoded.
Modified test 503 to use percent-encoded strings in the credential
strings that should be passed on as-is.
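A hedged example of the documented behavior; the proxy address and credentials are made up, and the percent sequences are passed to the proxy exactly as written:

```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example:3128");
    /* used as-is, no URL decoding: "%20" stays "%20" on the wire */
    curl_easy_setopt(curl, CURLOPT_PROXYUSERNAME, "user%20name");
    curl_easy_setopt(curl, CURLOPT_PROXYPASSWORD, "secret%2Fpass");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```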
Reported-by: Sergey Ogryzkov
Fixes #13265
Closes #13270
- curl's transfer handling may write 0-length chunks at the end of the
download with an EOS flag. (HTTP/2 does this commonly)
- content encoders need to pass through such a write and not count it
as an error in case they have finished decoding
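A generic sketch of the pass-through rule, with made-up types rather than curl's internal writer API:

```c
#include <stddef.h>

/* generic illustration: a decoding writer must pass a zero-length
 * end-of-stream write on to the next writer instead of flagging it as
 * excess data once decoding has finished */
typedef int downstream_cb(void *ctx, const char *buf, size_t nbytes, int eos);

struct decoder {
  downstream_cb *next;   /* next writer in the chain */
  void *next_ctx;
  int finished;          /* the encoded stream ended cleanly */
};

static int decoder_write(struct decoder *d, const char *buf, size_t nbytes,
                         int eos)
{
  if(!nbytes)                                  /* e.g. HTTP/2 EOS-only write */
    return d->next(d->next_ctx, buf, 0, eos);  /* pass through, not an error */
  if(d->finished)
    return -1;                                 /* real data after the end */
  /* ... decode `nbytes` bytes from `buf` and forward the decoded output ... */
  return d->next(d->next_ctx, buf, nbytes, eos);
}
```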
Fixes #13209
Fixes #13212
Closes #13219
Move all handling of HTTP's `Expect: 100-continue` feature into a client
reader. Add sending flag `KEEP_SEND_TIMED` that triggers transfer
sending on general events like a timer.
HTTP installs a `CURL_CR_PROTOCOL` reader when announcing `Expect:
100-continue`. That reader works as follows:
- on first invocation, records time, starts the `EXPIRE_100_TIMEOUT`
timer, disables `KEEP_SEND`, enables `KEEP_SEND_TIMED` and returns 0,
eos=FALSE like a paused upload.
- on subsequent invocation it checks if the timer has expired. If so, it
enables `KEEP_SEND` and switches to passing through reads to the
underlying readers.
Transfer handling's `readwrite()` will be invoked when a timer expires
(like `EXPIRE_100_TIMEOUT`) or when data from the server arrives. Seeing
`KEEP_SEND_TIMED`, it will try to upload more data, which triggers
reading from the client readers again, which then may lead to a new
pausing or cause the upload to start.
Flags and timestamps connected to this have been moved from
`SingleRequest` into the reader's context.
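A rough sketch of the reader's state machine with made-up types, a plain time()-based timer and a hard-coded 1 second wait; curl's real client reader API, flag names and `EXPIRE_100_TIMEOUT` handling differ in detail:

```c
#include <stdbool.h>
#include <stddef.h>
#include <time.h>

/* made-up reader context: wait for "100 Continue" (or a timeout) before
 * letting any request body bytes through to the sender */
struct cr_expect100 {
  bool started;      /* first invocation seen, timer running */
  bool forward;      /* 100-continue received or timer expired */
  time_t start;
  size_t (*underlying)(char *buf, size_t blen, bool *eos); /* next reader */
};

static size_t cr_expect100_read(struct cr_expect100 *r, char *buf,
                                size_t blen, bool *eos)
{
  if(!r->started) {          /* first call: start waiting, send nothing */
    r->started = true;
    r->start = time(NULL);
    *eos = false;
    return 0;                /* behaves like a paused upload */
  }
  if(!r->forward && (time(NULL) - r->start) >= 1)
    r->forward = true;       /* illustrative 1 second timeout expired */
  if(!r->forward) {
    *eos = false;
    return 0;
  }
  return r->underlying(buf, blen, eos);  /* pass through to real readers */
}
```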
Closes #13110
The option is really two enums ORed together, so it needs special
attention to make the code output nice.
Added test 1481 to verify both the server and the proxy versions.
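The commit text does not name the option; assuming it refers to CURLOPT_SSLVERSION (with CURLOPT_PROXY_SSLVERSION as the proxy counterpart), a value of that kind is built like this:

```c
#include <curl/curl.h>

/* assumed to be about CURLOPT_SSLVERSION: the value is two enums ORed
 * together, a minimum TLS version and a CURL_SSLVERSION_MAX_* bound, so
 * generated --libcurl code has to reproduce the OR, not a single name */
static void set_tls_version_range(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                   (long)(CURL_SSLVERSION_TLSv1_2 |
                          CURL_SSLVERSION_MAX_TLSv1_3));
}
```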
Reported-by: Boris Verkhovskiy
Fixes #13127
Closes #13129
Use imperative mood consistently for the first sentence describing an
option.
"Set this" instead "tell curl to set" or "this sets..."
Plus some extra cleanups and rephrasing.
Closes #13106
... correctly, even when they follow an existing one without a space in
between.
Verify with test 467
Follow-up to 07dd60c05b
Reported-by: Geeknik Labs
Fixes #13101
Closes #13102
... in its new build path.
Also update the test scripts to be more precise in error messages to
help us understand CI errors better.
Follow-up to f03c85635f
Ref: #13029
Closes #13083
Create ASCII version of manpage without nroff
- build src/tool_hugehelp.c from the ASCII manpage
- move the manpage and the ASCII version build to docs/cmdline-opts
- remove all use of nroff from the build process
- should make the build entirely reproducible (by avoiding nroff)
- partly reverts 2620aa9 to build libcurl option man pages one by one
in cmake because the appveyor builds got all crazy until I did
The ASCII version of the manpage
- is built with gen.pl, just like the manpage is
- has a right-justified column making the appearance similar to the previous
version
- uses a 4-space indent per level (instead of the old version's 7)
- does not do hyphenation of words (which nroff does)
History
We first made the curl build use nroff for building the hugehelp file in
December 1998, for curl 5.2.
Closes #13047
If a response without a status line is received, and the connection is
known to use HTTP/1.x (not HTTP/0.9), report the error "Invalid status
line" instead of "Received HTTP/0.9 when not allowed".
Closes #13045
- pytest has changed the signature of the hook pytest_report_header()
for some obscure reason and that change landed in our CI now
- remove the changed param that we never used anyway
Closes #13037
This fixes miscellaneous typos and duplicated words in the docs, lib
and test comments, and a few user-facing error strings.
Author: RainRat on Github
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Dan Fandrich <dan@coneharvesters.com>
Closes: #13019
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk (to be made
into a separate PR, also)
Added as documented in [CLIENT-READERS.md](5b1f31dfba/docs/CLIENT-READERS.md).
- old `Curl_buffer_send()` completely replaced by new `Curl_req_send()`
- old `Curl_fillreadbuffer()` replaced with `Curl_client_read()`
- HTTP chunked uploads are now formatted in a client reader added when
needed.
- FTP line-end conversions are done in a client reader added when
needed.
- when sending request headers, remaining buffer space is filled with
body data for sending in "one go". This is independent of the request
body size. Resolves #12938 as now small and large requests have the
same code path.
Changes done to test cases:
- test513: now fails before sending request headers as this initial
"client read" triggers the setup fault. Behaves now the same as in
hyper build
- test547, test555, test1620: fix the length check in the lib code to
only fail for reads *smaller* than expected. This was a bug in the
test code that never triggered in the old implementation.
Closes #12969
When disabling all protocols without enabling any, the resulting
set of allowed protocols remained the default set. Clearing the
allowed set before inspecting the value passed from --proto makes the
set empty even in the error path of no protocols being enabled.
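An illustrative sketch of the idea only, not the actual tool code; protocol names, bits and the token syntax handled here are simplified:

```c
#include <string.h>

/* not the actual tool code: the allowed set starts out empty, so a --proto
 * value that only disables protocols, like "-all", really ends up empty and
 * can be rejected, instead of silently keeping the default set */
enum { P_HTTP = 1, P_HTTPS = 2, P_FTP = 4, P_ALL = 7 };

static long proto_parse(char *spec)   /* handles only "+name"/"-name" tokens */
{
  long allowed = 0;   /* cleared before looking at the spec */
  char *tok;
  for(tok = strtok(spec, ","); tok; tok = strtok(NULL, ",")) {
    long bit = 0;
    if(!strcmp(tok + 1, "all"))   bit = P_ALL;
    if(!strcmp(tok + 1, "http"))  bit = P_HTTP;
    if(!strcmp(tok + 1, "https")) bit = P_HTTPS;
    if(!strcmp(tok + 1, "ftp"))   bit = P_FTP;
    if(*tok == '-')
      allowed &= ~bit;
    else if(*tok == '+')
      allowed |= bit;
  }
  return allowed;     /* zero means no protocol enabled, now an error */
}
```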
Co-authored-by: Dan Fandrich <dan@telarity.com>
Reported-by: Dan Fandrich <dan@telarity.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Closes: #13004
Curl_read/Curl_write clarifications
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to clarify
when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer setup of
`conn->sockfd` and `conn->writesockfd` on which connection filter
chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index as
parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for naming
consistency
- clarify that the special CURLE_AGAIN handling to return `CURLE_OK`
with length 0 only applies to `Curl_xfer_send()` and CURLE_AGAIN is
returned by all other send() variants.
SingleRequest reshuffling
- move functions into request.[ch]
- differentiate between reset and free
- add Curl_req_done() to perform last actions
- add a send `bufq` to SingleRequest for future use in keeping upload data
Closes #12963
Returns 1 if the previous transfer used a proxy, otherwise 0. Useful to,
for example, determine if a `NOPROXY` pattern matched the hostname or
not.
Extended tests 970 and 972
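The new info is not named in the text above; assuming it is CURLINFO_USED_PROXY, a minimal getinfo sketch would look like this:

```c
#include <curl/curl.h>

/* presumably CURLINFO_USED_PROXY, queried after the transfer finished */
static long transfer_used_proxy(CURL *curl)
{
  long used = 0;
  curl_easy_getinfo(curl, CURLINFO_USED_PROXY, &used);
  return used;  /* 1 if a proxy was used, 0 if not (e.g. NOPROXY matched) */
}
```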
- when data arrived in several chunks, the collection into
the passed buffer always started at offset 0, overwriting
the data already there (see the sketch below).
- adding test_20_07 to verify the fix
- debug environment var CURL_WS_CHUNK_SIZE can be used to
influence the buffer chunk size used for en-/decoding.
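A receive loop against the public websocket API showing why the offset handling matters: each curl_ws_recv() call returns another piece of the frame and the caller appends it, so the internal collection must not restart at offset 0 either. The fixed buffer and the skeleton error handling are simplified:

```c
#include <curl/curl.h>

/* collect one websocket message that may arrive in several chunks; assumes
 * the whole message fits into `bufsize` and skips real error handling */
static size_t recv_whole_message(CURL *curl, char *buf, size_t bufsize)
{
  size_t total = 0;
  const struct curl_ws_frame *meta;
  for(;;) {
    size_t nread = 0;
    CURLcode res = curl_ws_recv(curl, buf + total, bufsize - total,
                                &nread, &meta);
    if(res == CURLE_AGAIN)
      continue;              /* a real program would wait on the socket */
    if(res)
      return 0;              /* error */
    total += nread;          /* append; never overwrite from offset 0 */
    if(!meta->bytesleft)
      return total;          /* no bytes of this frame left to read */
  }
}
```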
Closes #12945
Also fix the tests. New implementation tested with GNU libmicrohttpd.
The new numbers in tests are real SHA-512/256 numbers (not just some
random ;) numbers).