`SHA512_256_BLOCK_SIZE`, `SHA512_256_DIGEST_SIZE` macros were both
defined within curl and also in the nettle library required by GnuTLS.
Fix it by namespacing the curl macros.
Cherry-picked from #14495
Closes#14514
Already used in `vtls.h`. Prefer this curl-namespaced name over the
unprefixed `SHA256_DIGEST_LENGTH`. The latter is also defined by TLS
backends with a potential to cause issues.
Also stop relying on external headers setting this constant. It's
already defined in `vtls.h` on curl's behalf; do this also for `lib`.
Cherry-picked from #14495
Closes#14513
Silence bogus MSVC warning C4232. Use the method already used
for similar cases earlier.
Also fixup existing suppressions to use pragma push/pop.
```
lib\vquic\curl_ngtcp2.c(709,40): error C2220: the following warning is treated as an error
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'client_initial': address of dllimport 'ngtcp2_crypto_client_initial_cb' is not static, identity not guaranteed
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'recv_crypto_data': address of dllimport 'ngtcp2_crypto_recv_crypto_data_cb' is not static, identity not guaran
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'encrypt': address of dllimport 'ngtcp2_crypto_encrypt_cb' is not static, identity not guaranteed
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'decrypt': address of dllimport 'ngtcp2_crypto_decrypt_cb' is not static, identity not guaranteed
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'hp_mask': address of dllimport 'ngtcp2_crypto_hp_mask_cb' is not static, identity not guaranteed
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'recv_retry': address of dllimport 'ngtcp2_crypto_recv_retry_cb' is not static, identity not guaranteed
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'update_key': address of dllimport 'ngtcp2_crypto_update_key_cb' is not static, identity not guaranteed
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'delete_crypto_aead_ctx': address of dllimport 'ngtcp2_crypto_delete_crypto_aead_ctx_cb' is not static, identit
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'delete_crypto_cipher_ctx': address of dllimport 'ngtcp2_crypto_delete_crypto_cipher_ctx_cb' is not static, ide
lib\vquic\curl_ngtcp2.c(709,40): warning C4232: nonstandard extension used: 'get_path_challenge_data': address of dllimport 'ngtcp2_crypto_get_path_challenge_data_cb' is not static, ident
```
Ref: https://github.com/curl/curl/actions/runs/10343459009/job/28627621355#step:10:30
Cherry-picked from #14495
Co-authored-by: Tal Regev
Ref: #14383
Closes#14510
- make sure to exclude failing tests when libidn2 is detected by
default.
- ignore test 1560 results. Seen to fail with libidn2.
I'm not sure why this test was not executed earlier:
https://github.com/curl/curl/actions/runs/10354610889/job/28660309355#step:13:3647
- runtests: recognize `libidn2` as a feature.
- move IDN test exclusions from GHA/windows to `tests/data/DISABLED`.
- GHA/windows: drop default `-DUSE_LIBIDN2=ON` cmake config.
Cherry-picked from #14495
Closes#14519
Before, setting CURLOPT_SSLVERSION with wolfSSL restricted the TLS
protocol to just the specified version. Now it properly supports a range,
so it can set the min and max TLS protocol versions (max requires wolfSSL
4.2.0).
Bump the absolute minimum required version of wolfSSL to 3.4.6 (released
in 2015) because it is needed for the wolfSSL_CTX_SetMinVersion() function.
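For illustration, a minimal sketch of requesting a range from an
application (not part of the change itself; the URL is a placeholder):
```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* accept TLS 1.2 as minimum and TLS 1.3 as maximum */
    curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                     (long)(CURL_SSLVERSION_TLSv1_2 |
                            CURL_SSLVERSION_MAX_TLSv1_3));
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```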
Closes#14480
Rename internal macros to match their `libcurl.pc` metadata counterpart.
Also apply these to the `curl-config.in` template.
- `CPPFLAG_CURL_STATICLIB` -> `LIBCURL_PC_CFLAGS`
- `LIBCURL_LIBS` -> `LIBCURL_PC_LIBS_PRIVATE`
- `LIBCURL_NO_SHARED` -> `LIBCURL_PC_LIBS`
Closes#14476
- Turned them all into functions to also do asserts etc.
- The llist related structs got all their fields renamed in order to make
sure no existing code remains using direct access.
- Each list node struct now points back to the list it "lives in", so
Curl_node_remove() no longer needs the list pointer.
- Rename the node struct and some of the access functions.
- Added lots of ASSERTs to verify API being used correctly
- Fix some cases of API misuse
Add docs/LLIST.md documenting the internal linked list API.
Closes#14485
- prefix local variables with underscore and convert to lowercase.
- list variables accepted by `libcurl.pc` and `curl-config` templates.
- quote more string literals.
Follow-up to 919394ee64 #14450
Closes#14462
Log progress only at start and end of transfer to give normalized
output when upload data is only partially sent or temporarily blocked.
Fixes test with CURL_DBG_SOCK_WBLOCK=90 set.
Closes#14454
- tidy up two `MATCHES` expressions by avoiding macro expansion and
adding quotes. Then convert them to `STREQUAL` to match other places
in the code doing the same checks.
- fix setting `_ALL_SOURCE` for AIX to match what autotools does.
- delete stray `_ALL_SOURCE` reference from `lib/config_riscos.h`
- simplify/fix two `STREQUAL ""` checks.
The one in the `openssl_check_symbol_exists()` macro succeeded
regardless of the value. The other could return TRUE when
`CMAKE_OSX_SYSROOT` was undefined.
- delete code for CMake versions (<3.7) we no longer support.
- prefer `LIST(APPEND ...)` to extend `CURL_LIBS`.
- use `CURL_LIBS` to add the `network` lib for Haiku.
Before this patch it was done via raw C flags. I could not test this.
- move `_WIN32_WINNT`-related code next to each other.
It also moves detection to the top, allowing more code to use
the result.
- merge two `WIN32` blocks.
- rename internal variables to underscore + lowercase.
- unwrap a line, indent, whitespace.
Closes#14450
- quote string literals.
In the hope it improves syntax-highlighting and readability.
- use lowercase, underscore-prefixed local var names.
As a hint for scope, to help readability.
- prefer `pkg_search_module` (over `pkg_check_modules`).
They are the same, but `pkg_search_module` stops searching
at the first hit.
- more `IN LISTS` in `foreach()`.
- OtherTests.cmake: clear `CMAKE_EXTRA_INCLUDE_FILES` after use.
- add `PROJECT_LABEL` for http/client and unit test targets.
- sync `Find*` module comments and formatting.
- drop a few local variables.
- drop bogus `CARES_LIBRARIES` from comment.
- unquote numeric literal.
Follow-up to acbc6b703f #14197
Closes#14388
- rely on the new flush to handle blocked sends. No longer
do simulated EAGAIN on (partially) blocked sends with their
need to handle repeats.
- fix some debug handling of the CURL_SMALLREQSEND env var
- add some assertions in request.c for affirming we do it right
- enhance assertion output in test_16 for easier analysis
Closes#14435
This disambiguates the source code being tested. The output format is
the same as when testing out of a git repo, but with no description and
a long hash.
Ref: #14363
Closes#14429
(in debug-builds)
Fix implementation in curl using libuv to process parallel transfers.
Add pytest capabilities to run test cases with --test-event.
- fix uv_timer handling to carry correct 'data' pointing to uv context.
- fix uv_loop handling to reap and add transfers when possible
- fix return code when a transfer errored
Closes#14413
- QUIT is not an important FTP command
- curl only sends it "best effort", meaning it might not be sent
- it is a known "flaky" thing in test output because of this
Closes#14404
- sync build-dir/source-dir header path order with autotools, by
including build-dir first, then source-dir.
This prevents out-of-tree builds breaking due to leftover generated
headers in the source tree.
- tests/unit: move `src` ahead of `libtest` in header path, syncing with
autotools.
- stop adding non-existing generated `include` dir to header path.
There are no generated `include` headers and this directory is either
missing in out-of-tree builds or the same as the one already added
globally via the root `CMakeLists.txt`.
- lib: stop adding a duplicate source include directory to the header
path.
It's already added globally via the root `CMakeLists.txt`.
- lib: stop adding the project root to the header path.
- docs/examples: drop internal header paths.
Examples do not and should not use internal headers.
- replace `curl_setup_once.h` in comments with `curl_setup.h`,
the header actually used, and also referred to in autotools comments.
- add comment why we need `src` in include path for `tests/server`.
- add quotes around header directories.
Closes#14416
The documented and mandated step has been to not use buildconf but to
invoke 'autoreconf -fi' for four years already.
This change only drops buildconf from the release tarball, it remains
present in git for now.
Follow-up to 85868537d6
Closes#14412
- extend existing Linux workflow with CMake support.
Including running pytest the first time with CMake.
- cmake: generate `tests/config` and `tests/http/config.ini`.
Required for pytest tests.
Uses basic detection logic. Feel free to take it from here.
Also dump config files in a CI step for debugging purposes.
- cmake: build `tests/http/clients` programs.
- fix portability issues with `tests/http/clients` programs.
Some of them use `getopt()`, which is not supported by MSVC.
Fix the rest to compile in CI (old-mingw-w64, MSVC, Windows).
- GHA/linux: add CMake job matching an existing autotools one.
- GHA/linux: test `-DCURL_LIBCURL_VERSIONED_SYMBOLS=ON`
in the new CMake job.
- reorder testdeps to build server, client tests first and then
libtests and units, to catch errors in the more complex/unique
sources earlier.
- sort list in `tests/http/clients/Makefile.inc`.
Closes#14382
If a request containing two headers that have equivalent prefixes (ex.
"x-amz-meta-test:test" and "x-amz-meta-test-two:test2") AWS expects the
header with the shorter name to come first. The previous implementation
used `strcmp` on the full header. Using the example, this would result
in a comparison between the ':' and '-' chars and sort
"x-amz-meta-test-two" before "x-amz-meta-test", which produces a
different "StringToSign" than the one calculated by AWS.
Test 1976 verifies
Closes#14370
Bring setting ciphers with WolfSSL in line with other SSL backends,
to make the curl interface more consistent across the backends.
Now the tls1.3 ciphers are set with the --tls13-ciphers option, when
not set the default tls1.3 ciphers are used. The tls1.2 (1.1, 1.0)
ciphers are set with the --ciphers option, when not set the default
tls1.2 ciphers are used. The ciphers available for the connection
are now a union of the tls1.3 and tls1.2 ciphers.
This changes the behaviour for WolfSSL when --ciphers is set, but
--tls13-ciphers is not set. Now the ciphers set with --ciphers
are combined with the default tls1.3 ciphers, whereas before solely
the ciphers of --ciphers were used.
Thus before when no tls1.3 ciphers were specified in --ciphers,
tls1.3 was completely disabled. This might not be what the user
expected, especially as this does not happen with OpenSSL.
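In libcurl terms the two lists map to separate options; a rough sketch
(the cipher names below are only examples and availability depends on
the TLS backend):
```c
#include <curl/curl.h>

static void set_ciphers(CURL *curl)
{
  /* TLS 1.2 (and earlier) cipher list, as with --ciphers */
  curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST,
                   "ECDHE-RSA-AES128-GCM-SHA256");
  /* TLS 1.3 cipher suites, as with --tls13-ciphers; if left unset,
     the backend's default TLS 1.3 suites remain enabled */
  curl_easy_setopt(curl, CURLOPT_TLS13_CIPHERS, "TLS_AES_128_GCM_SHA256");
}
```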
Closes#14385
Bring setting ciphers with mbedTLS in line with other SSL backends,
to make the curl interface more consistent across the backends.
Now the tls1.3 ciphers are set with the --tls13-ciphers option, when
not set the default tls1.3 ciphers are used. The tls1.2 (1.1, 1.0)
ciphers are set with the --ciphers option, when not set the default
tls1.2 ciphers are used. The ciphers available for the connection
are now a union of the tls1.3 and tls1.2 ciphers.
This changes the behaviour for mbedTLS when --ciphers is set, but
--tls13-ciphers is not set. Now the ciphers set with --ciphers
are combined with the default tls1.3 ciphers, whereas before solely
the ciphers of --ciphers were used.
Thus before when no tls1.3 ciphers were specified in --ciphers,
tls1.3 was completely disabled. This might not be what the user
expected, especially as this does not happen with OpenSSL.
Closes#14384
add --with-libuv to configure to (optionally) use it in debug-builds to
drive the event-based API
Use curl_multi_socket_action() and friends to drive parallel transfers.
tests/README has brief documentation for this
Closes#14298
- replace the counting of upload lengths with the new eos send flag
- reduce the frequency of stream draining so it happens less often on
events where it is not needed
- this PR is based on #14220
http2, cf-h2-proxy: fix EAGAINed out buffer
- in adjust pollset and shutdown handling, a non-empty `ctx->outbufq`
must trigger send polling, regardless of http/2 flow control
- in http2, fix retry handling of blocked GOAWAY frame
test case improvement:
- let client 'upload-pausing' handle http versions
Closes#14253
With this option, the entire download is skipped if the selected target
filename already exists when the operation is about to begin.
Test 994, 995 and 996 verify.
Ref: #11012
Closes#13993
revert f6cb3c63 #14338
Setting SSLHonorCipherOrder to on means it honors the server cipher
order. From the documentation: "When choosing a cipher during an SSLv3
or TLSv1 handshake, normally the client's preference is used. If this
directive is enabled, the server's preference will be used instead."
Also, the commit inhibits test_17_07_ssl_ciphers. The test tries to
test if all the ciphers specified, and only those, are properly set
in curl. For that to work we need to have cases where some or all ciphers
do not intersect with the cipher set of the server. We need to be able
to assert a failed connection based on a cipher set mismatch.
That is why a restricted set of ciphers is used on the server. This
set is chosen so that it contains the well-known most secure ciphers,
except that the slower AES-256 variant is intentionally left out, to be
able to test the scenario described above.
As test_17_07_ssl_ciphers is currently the only test that tests the
functioning of the --ciphers and --tls13-ciphers options, it is
important that its coverage is as good as possible.
Closes#14381
Use these words and casing more consistently across text, comments and
one curl tool output:
AIX, ALPN, ANSI, BSD, Cygwin, Darwin, FreeBSD, GitHub, HP-UX, Linux,
macOS, MS-DOS, MSYS, MinGW, NTLM, POSIX, Solaris, UNIX, Unix, Unicode,
WINE, WebDAV, Win32, winbind, WinIDN, Windows, Windows CE, Winsock.
Mostly OS names and a few more.
Also a couple of other minor text fixups.
Closes#14360
Since the documentation text blob might be gzipped, it needs to search
for what to output in a streaming manner. It then first searches for
"\nALL OPTIONS".
Then, it looks for the start to display at "\n -[option]" and stops
again at "\n -". Except for the last option in the man page, which
ends at "\nFILES" - the subtitle for the section following all options
in the manpage.
Tests 1707 to 1710 verify
Closes#13997
... or pick the last directory part from the path if available,
instead of returning an error.
Add tests 690 and 691 to verify. Tests 76 and 2036 no longer apply.
Closes#13988
- tidy-up comments.
- use lowercase, underscore prefixed names for internal variables.
- use `IN LISTS` and `IN ITEMS` in `foreach()` loops.
- rename variable name `OUTPUT` to a more distinctive one.
- tidy-up `STREQUAL` syntax.
- delete commented code.
- indent/whitespace.
Closes#14197
Replace Curl_resolv_unlock() with Curl_resolv_unlink():
- replace the inuse member with refcount in Curl_dns_entry
- pass Curl_dns_entry ** to unlink, so it gets always cleared
- solve potential (but unlikely) UAF in FTP's handling of looked up
Curl_dns_entry. Esp. do not use addr information after unlinking an entry.
In reality, the unlink will not free memory, as the dns entry is still
referenced by the hostcache. But this is not safe, as it relies on no
other code pruning the cache in the meantime.
- pass permanent flag when adding a dns entry instead of fixing timestamp
afterwards.
url.c: fold several static *resolve_* functions into one.
Closes#14195
The test-ci target now uses 2 processes by default, but the amount of
parallelism is tuned for each CI service and build environment based on
results of a number of test runs. Some CI services use
super-oversubscribed build machines that can barely run the curl tests
already with no parallelism without frequently failing with
timing-induced failures. These continue to be run without parallelism.
Other services provide two fast, unloaded cores and these run with 14
processes, which is a good default for this kind of environment.
Here's a summary of the number of test processes by CI service:
Appveyor - 2 (Windows MSVC), 1 (others)
Azure - 2
Circle CI - 14
Cirrus - 28 (macOS), 14 (Linux), 7 (FreeBSD), 5 (macOS torture), 2 (Windows)
GitHub Actions - 3 (macOS), 2 (Linux)
Some of these are a bit conservative to keep timing-induced flakiness down.
The net result is that the first test results should arrive only
3 minutes after a commit submission.
Changes merged via separate commits:
- 2a7c8b27fd #14171
- 72341068a2
- efce544418 #14244
- c6cf411bac
Ref: #10818
Closes#11510
It's a no-op since
d162fca69a #9333 (2022-08-18).
Also revert 476499c75c that is
no longer necessary: move `Makefile.inc` back into `Makefile.am`.
Closes#14357
Raise the limit for certificate information from 10 thousand to 100
thousand bytes. Certificates can be larger than 10k.
Change the infof() debug output to add '...' at the end when the max
limit it can handle is exceeded.
Reported-by: Sergio Durigan Junior
Fixes#14352
Closes#14354
Set the initial stream window size to 64KB and increase that to the 10MB
we used to start with on the first server reply, unless a rate limit is
in effect.
Continuously monitor changes to the transfer's rate limit and adjust the
stream window size accordingly. `max_recv_speed` is a transfer property
that can be changed during processing by a callback.
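A rough sketch of that "changed during processing" case: an xferinfo
callback that lowers the limit mid-transfer (the threshold and the
64 KB/s value are arbitrary examples):
```c
#include <curl/curl.h>

static int xferinfo_cb(void *clientp, curl_off_t dltotal, curl_off_t dlnow,
                       curl_off_t ultotal, curl_off_t ulnow)
{
  CURL *curl = clientp;
  (void)dltotal; (void)ultotal; (void)ulnow;
  if(dlnow > (curl_off_t)1024 * 1024)
    /* after the first megabyte, throttle the download to 64 KB/s */
    curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE,
                     (curl_off_t)64 * 1024);
  return 0;
}

static void setup_throttle(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_NOPROGRESS, 0L);
  curl_easy_setopt(curl, CURLOPT_XFERINFOFUNCTION, xferinfo_cb);
  curl_easy_setopt(curl, CURLOPT_XFERINFODATA, curl);
}
```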
Closes#14326
Let the client, e.g. curl, influence the cipher selected in a TLS
handshake. TLS backends have different preferences; honor that in httpd
the same way Caddy does.
Also makes for a fairer comparison of different TLS backends.
Closes#14338
As runtests.md and testcurl.md. Very few people actually need these as
manpages anyway.
With this, we have no more nroff formatted documents in git.
Closes#14324
- run tests via `make test-ci` instead of `make check` with autotools.
- add `x86_64` job for FreeBSD, with tests.
It matches the existing Cirrus CI job, with these differences:
- finishes 3x faster (thanks to parallel tests enabled).
- librtmp is not enabled because it's slated for removal by FreeBSD.
(already past the removal deadline, though the package still
installs.)
- DICT and TELNET servers fail to start. Couldn't figure out why.
It means skipping test 1450 and 1452.
- it runs more tests, e.g. websockets and ip6-localhost.
- no `pkg update -f`.
- it misses the `CRYPTOGRAPHY_DONT_BUILD_RUST=1`, `pkg delete curl`,
`chmod 777`, `sudo -u nobody` and `sysctl net.inet.tcp.blackhole`
tricks. The latter is the default in these runners, the others did
not affect results.
- set `-j0` for tests in the NetBSD job. Flaky otherwise.
Closes#14244
Instead of providing a fixed single synthetic response in the test
server itself. To allow us to better use *different* directory listings
in different test cases. In this change, most listings remain the same
as before.
The wildcard match tests still use synthetic responses but we should fix
that as well.
Updated numerous test cases to use this.
Closes#14295
- generate AVAILABILITY manpage sections automatically - for consistent
wording
- allows us to double-check against other documentation (symbols-in-versions
etc)
- enables proper automation/scripting based on this data
- lots of them were wrong or missing in the manpages
- several of them repeated (sometimes mismatching) backend support info
Add test 1488 to verify "added-in" version numbers against
symbols-in-versions.
Closes#14217
- add upload tests to scorecard, invoke with
> python3 tests/http/scorecard.py -u h1|h2|h3
- add a reverse proxy setup from Caddy to httpd for
upload tests since Caddy does not have other PUT/POST handling
- add caddy tests in test_08 for POST/PUT
- increase read buffer in mod_curltest for larger reads
Closes#14208
Useful to see what the numbers listed in the `TESTFAIL:` and `IGNORED:`
lines mean. Also list test keywords to help catching failure patterns.
Example:
```
FAIL 1034: 'HTTP over proxy with malformatted IDN host name' HTTP, HTTP GET, HTTP proxy, IDN, FAILURE, config file
FAIL 1035: 'HTTP over proxy with too long IDN host name' HTTP, HTTP GET, HTTP proxy, IDN, FAILURE
TESTFAIL: These test cases failed: 1034 1035
```
Closes#14174
To make sure that `managen` called by test 1706 uses the same date as
the test expects in the `%DATE` macro.
Before this patch when tests started running before UTC midnight and
reached test 1706 after, these dates were different and the test failed.
Follow-up to 0e73b69b3d
Fixes#14173
Closes#14187
Some feature names used in tests had minor differences compared to
the well-known ones from `curl -V`. This patch syncs them to make test
results easier to grok.
Closes#14183
When CRLF line end conversion was enabled (--crlf), input after the last
newline in the upload buffer was not sent, if the buffer contained a
newline.
Reported-by: vuonganh1993 on github
Fixes#14165
Closes#14169
- disable this test on WIN32 platforms. It uses the file descriptor '1'
as a valid socket without events. Not portable.
- reduce trace output somewhat on other runs
Fixes#14177
Reported-by: Viktor Szakats
Closes#14191
The man pages for curl_easy_getinfo, curl_easy_setopt and
curl_multi_setopt now feature the lists of options alphabetically
sorted. Test 1139 verifies that they are.
The curl_multi_setopt page also got brief explanations of the listed
options.
Closes#14156
This PR fixes a leak and a crash that can happen when curl encounters
bad HTTPS RR values in DNS. We're starting to do better testing of that
kind of thing and e.g. have published bad HTTPS RR values at
dodgy.test.defo.ie.
Closes#14151
Necessary to find generated files in the out-of-tree build directory.
E.g. `tests/configurehelp.pm`, for tests 1119 and 1167.
Before this patch macOS autotools builds were failing these two tests
due to falling back to the default preprocessor (`cpp`) instead of
the actual one configured. Then `cpp` failing to compile Apple SDK
headers referenced by curl headers.
Cherry-picked from #14097
Closes#14124
This was missed during some refactoring more than a year ago and is
causing a warning "Use of uninitialized value $path in pattern match".
Follow-up to 70d2fca2
Ref: #10818
Closes#14113
Also:
- remove stray `ECH` and `HTTPSRR` from cmake protocol list.
- stop excluding `Debug` and `TrackMemory` in `test1013.pl`.
- configure: delete `CURL_CHECK_CURLDEBUG` check.
Ref: 065047dc62
This check was effectively doing nothing, except disabling
`--enable-curldebug` in `curl-config` for
Cygwin/MSYS/cegcc/OS2/AIX targets with c-ares enabled.
Closes#14096
Commit 6483813b was missing changes necessitated by 2abfc75, which
causes a crash. Also, use ARRAYSIZE() for cleaner code.
Follow-up to 6483813b
Ref #14055
This eliminates the need to run an extra help subcommand to get the
possible categories, reducing the friction in getting relevant help. The
help wording was also slightly tweaked for grammatical accuracy.
Closes#14055
Option cleanups:
--get is not upload
--form* are post
- added several options into ldap, smtp, imap and pop3
- shortened the category descriptions in the list
category curl fixes:
--create-dirs removed from 'curl'
--ftp-create-dirs removed from 'curl'
--netrc moved to 'auth' from 'curl'
--netrc-file moved to 'auth' from 'curl'
--netrc-optional moved to 'auth' from 'curl'
--no-buffer moved to 'output' from 'curl'
--no-clobber removed from 'curl'
--output removed from 'curl'
--output-dir removed from 'curl'
--remove-on-error removed from 'curl'
Add a "global" category:
- Made all "global" options set this category
Add a "deprecated" category:
- Moved the deprecated options to it (maybe they should not be in any
category long term)
Add a 'timeout' category
- Put a number of appropriate options in it
Add an 'ldap' category
- Put the LDAP related option in there
Remove categories "ECH" and "ipfs"
- They should not be categories. Had only one single option each.
Remove category "misc"
- It should not be a category as it is impossible to know when to browse
it.
--use-ascii moved to ftp and output
--xattr moved to output
--service-name moved to auth
Managen fixes:
- errors if an option is given a category name that is not already set
up in the code
- verifies that options set `scope: global` are also put in the
`global` category
Closes#14101
- add a DEBUGASSERT for when a transfer's pollset should not be empty.
- move write unpausing from transfer loop into curl_easy_pause. This
makes sure that the url_updatesocket() finds the correct state when
updating socket events.
- fix HTTP/2 proxy during connect phase to set sockets correctly
- fix test2600 to simulate a socket set
- waiting for the resolver to deliver might not involve any sockets to
wait for. Do not generate a warning.
Fixes#14047
Closes#14074
Based on the standards and guidelines we use for our documentation.
- expand contractions (they're => they are etc)
- host name => hostname
- file name => filename
- user name => username
- man page => manpage
- run-time => runtime
- set-up => setup
- back-end => backend
- a HTTP => an HTTP
- Two spaces after a period => one space after period
Closes#14073
When an individual file ended with a quote (typically an example), the
render function would return without ending the quote correctly with a
".fi" (fill in) in the manpage output.
This made the additional text provided below render wrongly.
Closes#14048
Fix issues detected.
Also:
- One of the `.vc` files used LF EOLs, while the other didn't.
Make that one also use LF EOLs, as this is apparently supported by
`nmake`.
- Drop `.dsw` and `.btn` types from `.gitattributes`.
The repository doesn't use them.
- Sync section order with the rest of files in
`tests/certs/EdelCurlRoot-ca.prm`.
- Indent/align `.prm` and `.pem` files.
- Delete dummy `[something]` section from `.prm` and `.pem` files.
Mental note:
MSVC `.sln` files seem to accept spaces for indentation and also support
LF line-endings. I cannot test this and I don't know what's more
convenient when updating them, so left them as-is, with specific
exclusions.
Closes#14031
It needs to be set to the leading digits and dots only, so that the
`-[date]` suffix strings are not included, such as those used in the
daily snapshots.
Fixes#14035
Reported-by: Marcel Raad
Closes#14036
When libcurl discards a connection there are two phases this may go
through: "shutdown" and "closing". If a connection is aborted, the
shutdown phase is skipped and it is closed right away.
The connection filters attached to the connection implement the phases
in their `do_shutdown()` and `do_close()` callbacks. Filters carry now a
`shutdown` flags next to `connected` to keep track of the shutdown
operation.
Filters are shut down from top to bottom. If a filter is not connected,
its shutdown is skipped. Notable filters that *do* something during
shutdown are HTTP/2 and TLS. HTTP/2 sends the GOAWAY frame. TLS sends
its close notify and expects to receive a close notify from the server.
As sends and receives may EAGAIN on the network, a shutdown is often not
successful right away and needs to poll the connection's socket(s). To
facilitate this, such connections are placed on a new shutdown list
inside the connection cache.
Since managing this list requires the cooperation of a multi handle,
only the connection cache belonging to a multi handle is used. If a
connection was in another cache when being discarded, it is removed
there and added to the multi's cache. If no multi handle is available at
that time, the connection is shutdown and closed in a one-time,
best-effort attempt.
When a multi handle is destroyed, all connections still on the shutdown
list are discarded with a final shutdown attempt and close. In curl
debug builds, the environment variable `CURL_GRACEFUL_SHUTDOWN` can be
set to make this graceful with a timeout in milliseconds given by the
variable.
The shutdown list is limited to the max number of connections configured
for a multi cache. Set via CURLMOPT_MAX_TOTAL_CONNECTIONS. When the
limit is reached, the oldest connection on the shutdown list is
discarded.
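So an application that wants a tighter bound on cached (and shutting
down) connections can set the limit on the multi handle; a minimal
sketch with an arbitrary value:
```c
#include <curl/curl.h>

int main(void)
{
  CURLM *multi = curl_multi_init();
  /* cap the connection cache; the shutdown list honors the same limit */
  curl_multi_setopt(multi, CURLMOPT_MAX_TOTAL_CONNECTIONS, 8L);
  /* ... add easy handles and drive transfers as usual ... */
  curl_multi_cleanup(multi);
  return 0;
}
```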
- In multi_wait() and multi_waitfds(), collect all connection caches
involved (each transfer might carry its own) into a temporary list.
Let each connection cache on the list contribute sockets and
POLLIN/OUT events its connections are waiting for.
- in multi_perform() collect the connection caches the same way and let
them perform their maintenance. This will make another non-blocking
attempt to shutdown all connections on its shutdown list.
- for event based multis (multi->socket_cb set), add the sockets and
their poll events via the callback. When `multi_socket()` is invoked
for a socket not known by an active transfer, forward this to the
multi's cache for processing. On closing a connection, remove its
socket(s) via the callback.
TLS connection filters MUST NOT send close notify messages in their
`do_close()` implementation. The reason is that a TLS close notify
signals a success. When a connection is aborted and skips its shutdown
phase, the server needs to see a missing close notify to detect
something has gone wrong.
A graceful shutdown of FTP's data connection is performed implicitly
before regarding the upload/download as complete and continuing on the
control connection. For FTP without TLS, there is just the socket close
happening. But with TLS, the sent/received close notify signals that the
transfer is complete and healthy. Servers like `vsftpd` verify that and
reject uploads without a TLS close notify.
- added test_19_* for shutdown related tests
- test_19_01 and test_19_02 test for TCP RST packets
which happen without a graceful shutdown and should
no longer appear otherwise.
- add test_19_03 for handling shutdowns by the server
- add test_19_04 for handling shutdowns by curl
- add test_19_05 for event based shutdown by the server
- add test_30_06/07 and test_31_06/07 for shutdown checks
on FTP up- and downloads.
Closes#13976
Since the framework is already returning that variable by default.
Avoids a warning for unreachable code.
Reported-by: Tal Regev
Fixes#13967Closes#13973
The function we use is called 'gnutls_x509_crt_check_hostname()' but if
we pass in the hostname with a trailing dot, the check fails. If we pass
in the SNI name, which cannot have a trailing dot, it succeeds for
https://pyropus.ca./
I consider this as a flaw in GnuTLS and have submitted this issue
upstream:
https://gitlab.com/gnutls/gnutls/-/issues/1548
In order to work with old and existing GnuTLS versions, we still need
this change no matter how they view the issue or might change it in the
future.
Fixes#13428
Reported-by: Ryan Carsten Schmidt
Closes#13949
- Parse etag and content-disposition headers for 3xx replies.
For example, a server may send a content-disposition filename header
with a redirect reply (3xx) but not with the final response (2xx).
Without this change curl would ignore the server's specified filename
and continue to use the filename extracted from the user-specified URL.
Prior to this change, 75d79a4 had limited etag and content-disposition
to 2xx replies only.
Tests-by: Daniel Stenberg
Reported-by: Morgan Willcock
Fixes https://github.com/curl/curl/issues/13302
Closes#13484
As the list of variable names grows, doing a simple loop to find the
name gets increasingly worse. This switches to a bsearch.
Also: do a case sensitive check for the variable name. The names have
not been documented to be case insensitive and there is no point in
having them so.
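A generic sketch of the technique (not the tool's actual table or
comparator): keep the names sorted and let bsearch() locate them.
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* hypothetical variable table - must stay sorted by name */
static const char *const names[] = {
  "alpha", "beta", "delta", "gamma"
};

static int cmp(const void *key, const void *elem)
{
  return strcmp((const char *)key, *(const char *const *)elem);
}

int main(void)
{
  const char *const *hit =
    bsearch("delta", names, sizeof(names)/sizeof(names[0]),
            sizeof(names[0]), cmp);
  printf("%s\n", hit ? *hit : "not found");
  return 0;
}
```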
Closes#13914
This adds connection shutdown infrastructure and first use for FTP. FTP
data connections, when not encountering an error, are now shut down in a
blocking way with a 2sec timeout.
- add cfilter `Curl_cft_shutdown` callback
- keep a shutdown start timestamp and timeout at connectdata
- provide shutdown timeout default and member in
`data->set.shutdowntimeout`.
- provide methods for starting, interrogating and clearing
shutdown timers
- provide `Curl_conn_shutdown_blocking()` to shutdown the
`sockindex` filter chain in a blocking way. Use that in FTP.
- add `Curl_conn_cf_poll()` to wait for socket events during
shutdown of a connection filter chain.
This gets the monitoring sockets and events via the filters
"adjust_pollset()" methods. This gives correct behaviour when
shutting down a TLS connection through an HTTP/2 proxy.
- Implement shutdown for all socket filters
- for HTTP/2 and h2 proxying to send GOAWAY
- for TLS backends to the best of their capabilities
- for tcp socket filter to make a final, nonblocking
receive to avoid unwanted RST states
- add shutdown forwarding to happy eyeballers and
https connect ballers when applicable.
Closes#13904
Add new job to test building for UWP (aka `CURL_WINDOWS_APP`).
Fix fallouts when building for UWP:
- rand: do not use `BCryptGenRandom()`.
- cmake: disable using win32 LDAP.
- cmake: disable telnet.
- version_win32: fix code before declaration.
- schannel: disable `HAS_MANUAL_VERIFY_API`.
- schannel: disable `SSLSUPP_PINNEDPUBKEY`
and make `schannel_checksum()` a stub.
Ref: e178fbd40a #1429
- schannel: make `cert_get_name_string()` a failing stub.
- system_win32: make `Curl_win32_impersonating()` a failing stub.
- system_win32: try to fix `Curl_win32_init()` (untested).
- threads: fix to use `CreateThread()`.
- src: disable searching `PATH` for the CA bundle.
- src: disable bold text support and capability detection.
- src: disable `getfiletime()`/`setfiletime()`.
- tests: make `win32_load_system_library()` a failing stub.
- tests/server/util: make it compile.
- tests/server/sockfilt: make it compile.
- tests/lib3026: fix to use `CreateThread()`.
See individual commits for build error details.
Some of these fixes may have better solutions, and some may not work
as expected. The goal of this patch is to make curl build for UWP.
Closes#13870
Introduce new notation for CURLOPT_INTERFACE / --interface:
ifhost!<interface>!<host>
Binding to an interface doesn't set the address, and an interface can
have multiple addresses.
When binding to an address (without interface), the kernel is free to
choose the route, and it can route through any device that can access
the target address, not necessarily the one with the chosen address.
Moreover, it is possible for different interfaces to have the same IP
address, in which case we need to provide a way to be more specific.
Factor out the parsing part of the interface option, and add unit test
1663.
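A minimal sketch of the new notation from an application (the interface
name and address are just examples):
```c
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    /* bind to eth0 *and* use its address 192.0.2.55 as the source */
    curl_easy_setopt(curl, CURLOPT_INTERFACE, "ifhost!eth0!192.0.2.55");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}
```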
Closes#13719
- add special sauce to disable unwanted peer verification by mbedtls
when negotiating TLS v1.3
- add special sauce for MBEDTLS_ERR_SSL_RECEIVED_NEW_SESSION_TICKET
return code on *writing* TLS data. We assume the data has not been
written and treat it as EAGAIN.
- return correct Curl error code when peer verification failed.
- disable test_08_05 with 50 HTTP/1.1 connections, as mbedtls reports a
memory allocation failed during handshake.
- bump CI mbedtls version to 3.6.0
Fixes#13653
Closes#13838
Used for extracting:
- when used asking for a scheme, it will return CURLUE_NO_SCHEME if the
stored information was a guess
- when used asking for a URL, the URL is returned without a scheme, like
when previously given to the URL parser when it was asked to guess
- as soon as the scheme is set explicitly, it is no longer internally
marked as guessed
The idea being:
1. allow a user to figure out if a URL's scheme was set as a result of
guessing
2. extract the URL without a guessed scheme
3. this makes it work similar to how we already deal with port numbers
Extend test 1560 to verify.
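A rough sketch of the extraction side (error handling omitted; the
no-guess extraction flag is assumed here to be CURLU_NO_GUESS_SCHEME):
```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *u = curl_url();
  char *scheme = NULL;
  /* no scheme in the input, so the parser guesses one */
  curl_url_set(u, CURLUPART_URL, "example.com/file", CURLU_GUESS_SCHEME);
  /* with the no-guess flag (assumed name: CURLU_NO_GUESS_SCHEME), a
     guessed scheme is reported as not present */
  if(curl_url_get(u, CURLUPART_SCHEME, &scheme, CURLU_NO_GUESS_SCHEME) ==
     CURLUE_NO_SCHEME)
    printf("scheme was only guessed, not given in the URL\n");
  curl_free(scheme);
  curl_url_cleanup(u);
  return 0;
}
```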
Closes#13616
- determine the actual poll timeout *after* all sockets
have been collected. Protocols and connection filters may
install new timeouts during collection.
- add debug logging to test1533 where the mistake was noticed
Reported-by: Matt Jolly
Fixes#13782
Closes#13825
Refactor canon_query so that the encoding part of the function can also
be used for the path.
As the path doesn't encode '/' but does encode '=', I had to add some
conditions to know if I was doing the query or path encoding.
Also, instead of adding a `bool in_path` variable, I use `bool
*found_equals` to know if the function was called for the query or path,
as found_equals is used only in query_encoding.
Test 472 verifies.
Reported-by: Alexander Shtuchkin
Fixes#13754
Closes#13814
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
`CURLDEBUG` is meant to enable memory tracking, but in a bunch of cases,
it was protecting debug features that were supposed to be guarded with
`DEBUGBUILD`.
Replace these uses with `DEBUGBUILD`.
This leaves `CURLDEBUG` uses solely for its intended purpose: to enable
the memory tracking debug feature.
Also:
- autotools: rely on `DEBUGBUILD` to enable `checksrc`.
Instead of `CURLDEBUG`, which worked in most cases because debug
builds enable `CURLDEBUG` by default, but it's not accurate.
- include `lib/easyif.h` instead of keeping a copy of a declaration.
- add CI test jobs for the build issues discovered.
Ref: https://github.com/curl/curl/pull/13694#issuecomment-2120311894
Closes#13718
Before this patch, the `testdeps` build target required `-DCURLDEBUG`
be set either via `ENABLE_DEBUG=ON` or `ENABLE_CURLDEBUG=ON` to build
the curl unit tests.
After fixing build issues in #13694, we can drop this requirement and
build unit tests unconditionally.
Depends-on: #13694
Depends-on: #13697 (fix unit test issue revealed by Old Linux CI job)
Follow-up to 39e7c22bb4 #11446
Closes#13698
- add `Curl_hash_add2()` that passes a destructor function for
the element added. Call element destructor instead of hash
destructor if present.
- multi: add `proto_hash` for protocol related information,
remove `struct multi_ssl_backend_data`.
- openssl: use multi->proto_hash to keep x509 shared store
- schannel: use multi->proto_hash to keep x509 shared store
- vtls: remove Curl_free_multi_ssl_backend_data() and its
equivalents in the TLS backends
Closes#13345
- HEADERFUNCTIONS might inspect response properties like
CURLINFO_CONTENT_LENGTH_DOWNLOAD_T on seeing the last header line. If
the line is being written before this is initialized, values are not
available.
- write the last header line late when analyzing an HTTP response so that
all information is available at the time of the writing.
- add test1485 to verify that CURLINFO_CONTENT_LENGTH_DOWNLOAD_T works
on seeing the last header.
Fixes#13752
Reported-by: Harry Sintonen
Closes#13757
This stops keeping perl and shell processes around that are no longer
needed, plus it eliminates an unneeded shell message when the server is
later terminated.
Closes#13772
- add 2 variations on test_07_42 which PAUSEs uploads
and response connections terminating either right away
or after the 100-continue response
- when detecting the connection being closed in transfer.c
readwrite_data(), clear ALL send bits in data->req.keepon.
It no longer makes sense to wait for a KEEP_SEND_PAUSE or HOLD.
- in the protocol client writer add the check for incomplete
response bodies. When an EOS is seen and the length is known,
check that and fail if bytes are missing.
Reported-by: Sergey Bronnikov
Fixes#13740
Closes#13750
- refs #13556
- allow anon uploads on vsftpd test server
- add test_30_05 for plain upload of 1k, 100k, 1m
- add test_31_05 for SSL upload of 1k, 100k, 1m
- verify file size and contents
Closes#13734
With more than one transfer-encoding, 'chunked' must be the last added
to the writer stack (and therefore the first to decode). RFC 9112, ch.
6.1.
Closes#13736
Fixes:
- in uds tests, abort also silently on os errors
- be conservative on the h3 goaway duration
- detect curl debug build and use in checks
- fix caddy version check for slight difference under linux
- set caddy default path fitting for linux
- fix deprecation warnings in valid time checks
FTP tests:
- add '--with-test-vsftpd=path' to configure
- use vsftpd default path suitable for linux
- add test_30 with plain FTP tests
- add test_31 with --ssl-reqd FTP tests
- add vsftpd to linux GHA for pytest workflows
Closes#13661
Verifies that the issue in #13669 actually is fixed. This return code is
what the CURLOPT_WRITEFUNCTION manpage documents should be returned.
This code is mostly from the
Source-written-by: Trumeet on github
Closes#13671
This avoids mistaking symbols with their numeric value when using
certain C preprocessors which output these numeric values at the
beginning of the line as part of an expression.
Seen on OpenBSD 7.5 + clang.
Example `test1167.pl -v` output, before this patch:
```
Source: cpp /home/runner/work/curl/curl/tests/../include/curl/curl.h
Symbol: 20000
Line #3835: 20000 + 142,
[...]
Bad symbols in public header files:
20000
[...]
```
Ref: https://github.com/curl/curl/actions/runs/9069136530/job/24918015357#step:3:7513
Ref: #13583
Closes#13634
Before this patch, the result code was a mixture of `int` and
`CURLcode`.
Also adjust casts and fix a couple of minor issues found along the way.
Cherry-picked from #13489
Closes#13600
Also make the user and password arguments mandatory, since all code
paths in libcurl used them anyway.
Adapted unit test case 1620 to the new rules.
Closes#13584
- identify ngtcp2 and nghttp3 error codes that are fatal
- close quic connection on fatal errors
- refuse further filter operations once connection is closed
- confusion about the nghttp3 API. We should close the QUIC stream on
cancel and not use the nghttp3 calls intended to be invoked when the
QUIC stream was closed by the peer.
Closes#13562
CURLOPT_EGDSOCKET and CURLOPT_RANDOM_FILE are both completely dead
so remove their example sections since the code there is useless.
There is still a way to inject a random file for OpenSSL older than
1.1.0 but it's not what the example showed (and it's not even done
with this option) so we refrain from documenting it here.
Closes: #13540
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Manpages which document deprecated CURLOPT_ or CURLINFO_ are not
required to have an EXAMPLE section since they might effectively
be dead no-ops which we don't want to trick users into believing
they can use by copying example code.
Closes: #13540
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
This avoids the below compiler warning:
tftpd.c:280:1: warning: function 'timer' could be declared with
attribute 'noreturn' [-Wmissing-noreturn]
Closes: #13534
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Take advantage of the Curl_cipher_suite_walk_str() and
Curl_cipher_suite_get_str() functions introduced in commit fba9afeb.
This also fixes CURLOPT_SSL_CIPHER_LIST not working at all for bearssl
due to commit ff74cef5.
Closes#13464
- connect to DNS names with trailing dot
- connect to DNS names with double trailing dot
- rustls, always give `peer->hostname` and let it
figure out SNI itself
- add SNI tests for ip address and localhost
- document in code and TODO that QUIC with ngtcp2+wolfssl
does not do proper peer verification of the certificate
- mbedtls, skip tests with ip address verification as not
supported by the library
Closes#13486
- quiche: error transfers that try to receive on a closed
or draining connection
- ngtcp2: use callback for extending max bidi streams. This
allows more precise calculation of MAX_CONCURRENT as we
only can start a new stream when the server acknowledges
the close - not when we locally have closed it.
- remove a fprintf() from h2-download client to avoid excess
log files on tests timing out.
Closes#13475
- add session with destructor callback
- remove vtls `session_free` method
- let `Curl_ssl_addsessionid()` take ownership
of session object, freeing it also on failures
- change tls backend use
- test_17, add tests for SSL session resumption
Closes#13386
- ignore duplicate "chunked" transfer-encodings from
a server to accommodate broken implementations
- add test1482 and test1483
Reported-by: Mel Zuser
Fixes#13451
Closes#13461
Use a lookup list to set the cipher suites, allowing the
ciphers to be set by either openssl or IANA names.
To keep the binary size of the lookup list down we compress
each entry in the cipher list down to 2 + 6 bytes using the
C preprocessor.
Closes#13442
This fixes a regression of 75d79a4486. The
code in tool-operate truncated the etag save file, under the assumption
that the file would be written with a new etag value. However since
75d79a4486 that might not be the case
anymore and could result in the file being truncated when --etag-compare
and --etag-save were used and the etag value matched with what the
server responded. Instead the truncation should not be done unless a new
etag value is to be written.
Test 3204 was added to verify that the file with the etag value doesn't
change its contents when used by --etag-compare and --etag-save and
that value matches with what the server returns on a non-2xx response.
Closes#13432
- errors returned by Curl_xfer_write_resp() and the header variant are
not errors in the protocol. The result needs to be returned on the
next recv() from the protocol filter.
- make xfer write errors for response data cause the stream to be
cancelled
- added pytest test_02_14 and test_02_15 to verify that also for
parallel processing
Reported-by: Laramie Leavitt
Fixes#13411
Closes#13424
A connection that has seen an HTTP major version now refuses any other
major HTTP version in future responses. Previously, an HTTP/1.x
connection would just silently accept HTTP/2 or HTTP/3 in the status
lines as long as it had support for those built-in. It would then just
lead to confusion and badness.
Indirectly Spotted by CodeSonar which identified a duplicate assignment
in this function.
Add test 471 to verify
Closes#13421
By default the API inhibits extracting empty queries and fragments,
unless this new flag is set.
This also makes the behavior more consistent: without it set, zero
length queries and fragments are considered not present in the URL. With
the flag set, they are returned as a zero length strings if they were in
fact present in the URL.
This applies when extracting the individual query and fragment
components and for the full URL.
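A small sketch of the difference (the flag name CURLU_GET_EMPTY is an
assumption here; error handling omitted):
```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *u = curl_url();
  char *query = NULL;
  curl_url_set(u, CURLUPART_URL, "https://example.com/path?#frag", 0);
  /* without the flag this returns CURLUE_NO_QUERY; with it (assumed
     name: CURLU_GET_EMPTY) the zero-length query present in the URL is
     returned as an empty string */
  if(curl_url_get(u, CURLUPART_QUERY, &query, CURLU_GET_EMPTY) == CURLUE_OK)
    printf("query: \"%s\"\n", query);
  curl_free(query);
  curl_url_cleanup(u);
  return 0;
}
```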
Closes#13396
Using the URL API for a redirect URL when the redirected-to string
starts with a hash, ie is only a fragment, the API would produce the
wrong final URL.
Adjusted test 1560 to test for several new redirect cases.
Closes#13394
- add `Curl_hash_offt` as hashmap between a `curl_off_t` and
an object. Use this in h2+h3 connection filters to associate
`data->id` with the internal stream state.
- changed implementations of all affected connection filters
- removed `h2_ctx*` and `h3_ctx*` from `struct HTTP` and thus
the easy handle
- solves the problem of attaching "foreign protocol" easy handles
during connection shutdown
Test 1616 verifies the new hash functions.
Closes#13204
I implemented the IDN functions for macOS and iOS using Unicode
libraries coming with macOS and iOS.
Builds and runs here on macOS 14.2.1. Also verified to load and
run on older macOS version 10.13.
Build requires macOS SDK 13 or equivalent.
Set `-DUSE_APPLE_IDN=ON` CMake option to enable it.
With autotools and other build tools, set these manual options:
```
CPPFLAGS=-DUSE_APPLE_IDN
LIBS=-licucore
```
Completes TODO 1.6.
TODO: add autotools option and feature-detection.
Refs: #5330 #5371
Co-authored-by: Viktor Szakats
Closes#13246
- fix flow handling in ngtcp2 to ACK data on streams
we abort ourself.
- extend test_02_23* cases to also run for h3
- skip test_02_23* for OpenSSL QUIC as it gets stalled
on progressing the connection
Closes#13374
To reduce the risk that the user running the tests has a .curlrc present
that messes things up.
Support 'option="no-q"' for the <command> tag to switch it off on demand.
Use this new feature in test 433 and 436.
Ref: #13284
Closes#13387
- remember error encountered in invoking write callback and always fail
afterwards without further invokes
- check behaviour in test_02_17 with h2-pausing client
Reported-by: Pavel Kropachev
Fixes#13337
Closes#13340
Before this patch, two macros were used to guard IPv6 features in curl
sources: `ENABLE_IPV6` and `USE_IPV6`. This patch makes the source use
the latter for consistency with other similar switches.
`-DENABLE_IPV6` remains accepted for compatibility as a synonym for
`-DUSE_IPV6`, when passed to the compiler.
`ENABLE_IPV6` also remains the name of the CMake and `Makefile.vc`
options to control this feature.
Closes#13349
- h2-download now always opens the output file on first write callback
invocation, whether it will pause the transfer or not.
- Checks on output files then does not depend on the amount of data curl
has collected for the first write.
Closes#13323
- When the writing of response data fails, reset the stream
and do not return a callback error to nghttp2. That would
be a fatal error for the connection and harm other requests.
- add test cases for various abort scenarios
Reported-by: Konstantin Kuzov
Fixes#13292Closes#13298
... no need to use an absolute path, that makes the build unnecessarily
fail if invoked using a different mount point. managen now takes options
to find the input files.
Update test1478 to provide the dir arguments to managen
Closes#13281
A transfer with a completed download that is still uploading needs to
check the connection state when it is PAUSEd, since connection
close/errors would otherwise go unnoticed.
Reported-by: Sergey Bronnikov
Fixes#13260
Closes#13271
The two options CURLOPT_PROXYUSERNAME and CURLOPT_PROXYPASSWORD set the
actual names as-is, not URL encoded.
Modified test 503 to use percent-encoded strings in the credential
strings that should be passed on as-is.
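A small sketch of the as-is behavior (the proxy host and credentials
below are dummies):
```c
#include <curl/curl.h>

static void set_proxy_creds(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:3128");
  /* passed on verbatim - a literal '%' or '@' stays as-is, no decoding */
  curl_easy_setopt(curl, CURLOPT_PROXYUSERNAME, "user%40name");
  curl_easy_setopt(curl, CURLOPT_PROXYPASSWORD, "p@ss%word");
}
```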
Reported-by: Sergey Ogryzkov
Fixes#13265
Closes#13270
- curl's transfer handling may write 0-length chunks at the end of the
download with an EOS flag. (HTTP/2 does this commonly)
- content encoders need to pass through such a write and not count this
as an error in case they are finished decoding
Fixes#13209
Fixes#13212
Closes#13219
Move all handling of HTTP's `Expect: 100-continue` feature into a client
reader. Add sending flag `KEEP_SEND_TIMED` that triggers transfer
sending on general events like a timer.
HTTP installs a `CURL_CR_PROTOCOL` reader when announcing `Expect:
100-continue`. That reader works as follows:
- on first invocation, records time, starts the `EXPIRE_100_TIMEOUT`
timer, disables `KEEP_SEND`, enables `KEEP_SEND_TIMER` and returns 0,
eos=FALSE like a paused upload.
- on subsequent invocation it checks if the timer has expired. If so, it
enables `KEEP_SEND` and switches to passing through reads to the
underlying readers.
Transfer handling's `readwrite()` will be invoked when a timer expires
(like `EXPIRE_100_TIMEOUT`) or when data from the server arrives. Seeing
`KEEP_SEND_TIMER`, it will try to upload more data, which triggers
reading from the client readers again. Which then may lead to a new
pausing or cause the upload to start.
Flags and timestamps connected to this have been moved from
`SingleRequest` into the reader's context.
Closes#13110
The option is really two enums ORed together, so it needs special
attention to make the code output nice.
Added test 1481 to verify. Both the server and the proxy versions.
Reported-by: Boris Verkhovskiy
Fixes#13127
Closes#13129
Use imperative mood consistently for the first sentence describing an
option.
"Set this" instead "tell curl to set" or "this sets..."
Plus some extra cleanups and rephrasing.
Closes#13106
... correctly, even when they follow an existing one without a space in
between.
Verify with test 467
Follow-up to 07dd60c05b
Reported-by: Geeknik Labs
Fixes#13101
Closes#13102
... in its new build path.
Also update the test scripts to be more precise in error messages to
help us understand CI errors better.
Follow-up to f03c85635f
Ref: #13029
Closes#13083
Create ASCII version of manpage without nroff
- build src/tool_hugehelp.c from the ASCII manpage
- move the manpage and the ASCII version build to docs/cmdline-opts
- remove all use of nroff from the build process
- should make the build entirely reproducible (by avoiding nroff)
- partly reverts 2620aa9 to build libcurl option man pages one by one
in cmake because the appveyor builds got all crazy until I did
The ASCII version of the manpage
- is built with gen.pl, just like the manpage is
- has a right-justified column making the appearance similar to the previous
version
- uses a 4-space indent per level (instead of the old version's 7)
- does not do hyphenation of words (which nroff does)
History
We first made the curl build use nroff for building the hugehelp file in
December 1998, for curl 5.2.
Closes#13047
If a response without a status line is received, and the connection is
known to use HTTP/1.x (not HTTP/0.9), report the error "Invalid status
line" instead of "Received HTTP/0.9 when not allowed".
Closes#13045
- pytest has changed the signature of the hook pytest_report_header()
for some obscure reason and that change landed in our CI now
- remove the changed param that we never used anyway
Closes#13037
This fixes miscellaneous typos and duplicated words in the docs, lib
and test comments and a few user facing errorstrings.
Author: RainRat on Github
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Dan Fandrich <dan@coneharvesters.com>
Closes: #13019
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to
clarify when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer
setup of `conn->sockfd` and `conn->writesockfd` on which
connection filter chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index
as parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for
naming consistency
- clarify that the special CURLE_AGAIN handling to return
`CURLE_OK` with length 0 only applies to `Curl_xfer_send()`
and CURLE_AGAIN is returned by all other send() variants.
- fix a bug in websocket `curl_ws_recv()` that mixed up data
when it arrived in more than a single chunk (to be made
into a separate PR, also)
Added as documented [in
CLIENT-READER.md](5b1f31dfba/docs/CLIENT-READERS.md).
- old `Curl_buffer_send()` completely replaced by new `Curl_req_send()`
- old `Curl_fillreadbuffer()` replaced with `Curl_client_read()`
- HTTP chunked uploads are now formatted in a client reader added when
needed.
- FTP line-end conversions are done in a client reader added when
needed.
- when sending requests headers, remaining buffer space is filled with
body data for sending in "one go". This is independent of the request
body size. Resolves#12938 as now small and large requests have the
same code path.
Changes done to test cases:
- test513: now fails before sending request headers as this initial
"client read" triggers the setup fault. Behaves now the same as in
hyper build
- test547, test555, test1620: fix the length check in the lib code to
only fail for reads *smaller* than expected. This was a bug in the
test code that never triggered in the old implementation.
Closes#12969
When disabling all protocols without enabling any, the resulting
set of allowed protocols remained the default set. Clearing the
allowed set before inspecting the passed value from --proto makes
the set empty even in the error path of no protocols enabled.
Co-authored-by: Dan Fandrich <dan@telarity.com>
Reported-by: Dan Fandrich <dan@telarity.com>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Closes: #13004
Curl_read/Curl_write clarifications
- replace `Curl_read()`, `Curl_write()` and `Curl_nwrite()` to clarify
when and at what level they operate
- send/recv of transfer related data is now done via
`Curl_xfer_send()/Curl_xfer_recv()` which no longer has
socket/socketindex as parameter. It decides on the transfer setup of
`conn->sockfd` and `conn->writesockfd` on which connection filter
chain to operate.
- send/recv on a specific connection filter chain is done via
`Curl_conn_send()/Curl_conn_recv()` which get the socket index as
parameter.
- rename `Curl_setup_transfer()` to `Curl_xfer_setup()` for naming
consistency
- clarify that the special CURLE_AGAIN handling to return `CURLE_OK`
with length 0 only applies to `Curl_xfer_send()` and CURLE_AGAIN is
returned by all other send() variants.
SingleRequest reshuffling
- move functions into request.[ch]
- differentiate between reset and free
- add Curl_req_done() to perform last actions
- add a send `bufq` to SingleRequest for future use in keeping upload data
Closes#12963
Returns 1 if the previous transfer used a proxy, otherwise 0. Useful to,
for example, determine if a `NOPROXY` pattern matched the hostname or
not.
Extended tests 970 and 972
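A minimal sketch of querying it after a transfer (assuming the info is
exposed as CURLINFO_USED_PROXY and returns a long):
```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    long used = 0;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_setopt(curl, CURLOPT_NOPROXY, "example.com");
    if(curl_easy_perform(curl) == CURLE_OK) {
      /* assumed name: CURLINFO_USED_PROXY */
      curl_easy_getinfo(curl, CURLINFO_USED_PROXY, &used);
      printf("proxy used: %s\n", used ? "yes" : "no");
    }
    curl_easy_cleanup(curl);
  }
  return 0;
}
```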
- when data arrived in several chunks, the collection into
the passed buffer always started at offset 0, overwriting
the data already there.
adding test_20_07 to verify fix
- debug environment var CURL_WS_CHUNK_SIZE can be used to
influence the buffer chunk size used for en-/decoding.
Closes#12945
Also fix the tests. New implementation tested with GNU libmicrohttpd.
The new numbers in tests are real SHA-512/256 numbers (not just some
random ;) numbers).
The previous realloc code could trigger a compiler warning, but since
that code path cannot happen in normal circumstances it now instead
exits with an error message there.
Ref: #12887Closes#12890
- add test_05_04 for requests using http/1.0, http/1.1 and h2 against an
Apache resource that does an unclean TLS shutdown.
- revert special workaround in openssl.c for suppressing shutdown errors
on multiplexed connections
- vtls.c: restore to its state before 9a90c9dd64
Fixes#12885
Fixes#12844
Closes#12848
- test450: remove --config from the keywords
- test2080: change return code
- test428: add --config as a keyword
- test428: disable on Windows due to CI problems
Like when trying to import an environment variable that does not exist.
Also fix a bug for reading env variables when there is a default value
set.
Bug: https://curl.se/mail/archive-2024-02/0008.html
Reported-by: Brett Buddin
Add test 462 to verify.
Closes#12862
Create the line in a dynbuf. Aborts the reading of the file on
errors. Avoids having to always allocate maximum amount from the
start. Avoids direct malloc.
Closes#12846
- improve info logging when peer verification fails to indicate
if DNS name or ip address has been tried to match
- add test case for contacting https proxy with ip address
- add pytest env check on loaded credentials and re-issue
when they are no longer valid
- disable proxy ip address test for bearssl, since not supported there
Ref: #12831Closes#12838
and mark the stream for close, but return OK since the response this far
was ok - if headers were received. Partly because this is what curl has
done traditionally.
Test 499 verifies. Updates test 689.
Reported-by: Sergey Bronnikov
Bug: https://curl.se/mail/lib-2024-02/0000.html
Closes#12842
- delete redundant warning suppressions for `-Wformat-nonliteral`.
This now relies on `CURL_PRINTF()` and it's theoretically possible
that this macro isn't active but the warning is. We're ignoring this
as a corner-case here.
- replace two pragmas with code changes to avoid the warnings.
Follow-up to aee4ebe591 #12803
Follow-up to 0923012758 #12540
Follow-up to 3829759bd0 #12489
Reviewed-by: Daniel Stenberg
Closes#12812
- Use http authentication mechanisms as a default, not a preset.
Consider http authentication options which are mapped to SASL options as
a default (overriding the hardcoded default mask for the protocol) that
is ignored if a login option string is given.
Prior to this change, if some HTTP auth options were given, sasl mapped
http authentication options to sasl ones but merged them with the login
options.
That caused problems with the cli tool that sets the http login option
CURLAUTH_BEARER as a side-effect of --oauth2-bearer, because this flag
maps to more than one sasl mechanisms and the latter cannot be cleared
individually by the login options string.
New test 992 checks this.
Fixes https://github.com/curl/curl/issues/10259
Closes https://github.com/curl/curl/pull/12790