To reduce the damage an application can cause by using -1 or other
unreasonable timeout values that let cache entries live for a very long
time, the maximum number of entries in the DNS cache is now (somewhat
arbitrarily) hard-coded to 29999.
Closes#11084
I was reading curl_unescape(3) and I noticed that there was an extra
space after the open parenthesis in the SYNOPSIS; I removed the extra
space.
I also ran a few grep -r commands to find and remove extra spaces
after '(' in other files, and to find and replace uses of `T*' instead
of `T *'. Some of the instances of `T*` were unnecessary casts that I
removed.
I also fixed a comment that was misaligned in CURLMOPT_SOCKETFUNCTION.3.
And I fixed some formatting inconsistencies: in curl_unescape(3), all
function parameters were mentioned in bold text except length, which was
written as 'length'; and, in curl_easy_unescape(3), all parameters were
mentioned in bold text except url, which was italicised. Now they are
all mentioned in bold.
Documentation is not very consistent in how function parameters are
formatted: many pages italicise them, and others display them in bold
text; but I think it makes sense to at least be consistent with
formatting within the same page.
Closes#11027
- remove the version numbers
- simplify the texts
The date and version number will be put there for releases when maketgz
runs the updatemanpages.pl script.
Closes#11029
- remove h3 issues believed to be fixed
- make the flaky CI issue be generic and not Windows specific
- "TLS session cache does not work with TFO" now documented
This is now a documented restriction and not a bug. TFO in general is
rarely used and has other problems, making it a low-priority thing to
work on.
- remove "Renegotiate from server may cause hang for OpenSSL backend"
This is an OpenSSL issue, not a curl one. Even if it taints curl.
- rm "make distclean loops forever"
- rm "configure finding libs in wrong directory"
Added a section to docs/INSTALL.md about it.
- "A shared connection cache is not thread-safe"
Moved over to TODO and expanded for other sharing improvements we
could do
- rm "CURLOPT_OPENSOCKETPAIRFUNCTION is missing"
- rm "Blocking socket operations in non-blocking API"
Already listed as a TODO
- rm "curl compiled on OSX 10.13 failed to run on OSX 10.10"
Water under the bridge. No one cares about this anymore.
- rm "build on Linux links libcurl to libdl"
Verified to not be true (anymore).
- rm "libpsl is not supported"
The cmake build supports it since cafb356e19
Closes#10963
* Configure changes to detect AWS-LC
* CMakeLists.txt changes to detect AWS-LC
* Compile-time branches needed to support AWS-LC
* Correctly set OSSL_VERSION and report AWS-LC release number
* GitHub Actions script to build with autoconf and cmake against AWS-LC
AWS-LC is a BoringSSL/OpenSSL derivative
For more information see https://github.com/awslabs/aws-lc/
Closes#10320
The test does a slightly ugly busy-loop for this case but should be
manageable due to it likely being a very short moment.
Mention CURLE_AGAIN in curl_ws_recv.3
Fixes#10760
Reported-by: Jay Satiro
Closes#10781
All S3 requests now default to UNSIGNED-PAYLOAD and add the required
x-amz-content-sha256 header. This allows CURLAUTH_AWS_SIGV4 to correctly
sign S3 requests to Amazon with no additional configuration.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Closes#9995
It results in error "NSS error -5985 (PR_ADDRESS_NOT_SUPPORTED_ERROR)"
Disabled test 1470 for NSS builds and documented the restriction.
Reported-by: Dan Fandrich
Fixes#10723
Closes#10734
The current test for curl_free() may produce warnings with strict
compiler flags, or even with default compiler flags in upcoming compiler
versions. These warnings could be turned into errors by -Werror or
similar flags. Such warnings/errors are avoided by this patch.
Closes#10710
The variable had a few different names. Now try to use 'clientp'
consistently for all man pages using a custom pointer set by the
application.
Reported-by: Gerrit Renker
Fixes#10434
Closes#10435
Bump the limit from 512K. There might be reasons for applications using
h3 to set larger buffers and there is no strong reason for curl to have
a very small maximum.
Ref: https://curl.se/mail/lib-2023-01/0026.html
Closes#10256
- Warn that in Windows if libcurl is running from a DLL and if
CURLOPT_HEADERDATA is set then CURLOPT_WRITEFUNCTION or
CURLOPT_HEADERFUNCTION must be set as well, otherwise the user may
experience crashes.
We already have a similar warning in CURLOPT_WRITEDATA. Basically, in
Windows libcurl could crash writing a FILE pointer that was created by
a different C runtime. In Windows each DLL that is part of a program may
or may not have its own C runtime.
Ref: https://github.com/curl/curl/issues/10231
Closes https://github.com/curl/curl/pull/10233
- they are mostly pointless in all major jurisdictions
- many big corporations and projects already don't use them
- saves us from pointless churn
- git keeps history for us
- the year range is kept in COPYING
checksrc is updated to allow non-year using copyright statements
Closes#10205
- curl_ws_send returns CURLE_SEND_ERROR if data->conn is gone
- curl_ws_recv returns CURLE_GOT_NOTHING on connection close
- curl_ws_recv.3: mention new return code for connection close + example
embryo
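A rough sketch of how an application could react to the adjusted return
codes, assuming a WebSocket easy handle set up in connect-only mode (the
helper name ws_read_chunk is made up for illustration):

#include <stdio.h>
#include <curl/curl.h>

static int ws_read_chunk(CURL *curl)
{
  char buf[256];
  size_t rlen = 0;
  const struct curl_ws_frame *meta = NULL;
  CURLcode res = curl_ws_recv(curl, buf, sizeof(buf), &rlen, &meta);

  if(res == CURLE_GOT_NOTHING) {
    fprintf(stderr, "the peer closed the connection\n");
    return -1;
  }
  if(res == CURLE_OK) {
    /* rlen bytes of payload are in buf; meta describes the fragment */
  }
  return (int)res; /* CURLE_AGAIN simply means: try again later */
}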
Closes#10084
- CURL_GLOBAL_SSL
This option was changed in libcurl 7.57.0 and clearly it has not caused
too many issues and a lot of time has passed.
- Store TLS context per transfer instead of per connection
This is a possible future optimization. One that is much less important
and interesting since the added support for CA caching.
- Microsoft telnet server
This bug was filed in May 2007 against curl 7.16.1 and we have not
received further reports.
- active FTP over a SOCKS
Actually, proxies in general do not work with active FTP mode. This
is now added in proxy documentation.
- DICT responses show the underlying protocol
curl still does this, but since this has been the established behavior
forever we cannot change it easily, and adding an option for it seems
overkill as this protocol is so little used that it is not worth it.
Let's just live with it.
- Secure Transport disabling hostname validation also disables SNI
This is an already documented restriction in Secure Transport.
- CURLOPT_SEEKFUNCTION not called with CURLFORM_STREAM
The curl_formadd() function is marked and documented as deprecated. No
point in collecting bugs for it. It should not be used further.
- STARTTRANSFER time is wrong for HTTP POSTs
After close source code inspection I cannot see how this is true or that
there is any special treatment for different HTTP methods. We also have
not received many further reports on this, making me strongly suspect
that this is no (longer an) issue.
- multipart formposts file name encoding
The once proposed RFC 5987-encoding is, since RFC 7578, documented as
MUST NOT be used. The MIME API, implemented since then, allows users to
set the name on their own and thus provide it encoded however they want.
- DoH is not used for all name resolves when enabled
It is questionable if users actually want to use DoH for interface and
FTP port name resolving. This restriction is now documented and we
advise users against using name resolving at all for these functions.
Closes#10043
Deprecation and removal of codeset conversion support from the library
have released the strict need for an early binding of mime structures to
an easy handle (https://github.com/curl/curl/commit/2610142).
This constraint currently forces the application to create the handle
before the mime structure, and the latter cannot be attached to another
handle once created (see https://curl.se/mail/lib-2022-08/0027.html).
This commit removes the handle pointers from the mime structures
allowing more flexibility on their use.
When an easy handle is duplicated, bound mime structures must however
still be duplicated too as their components hold send-time dynamic
information.
Closes#9927
- "FTP with CONNECT and slow server"
I believe this is not a problem these days.
- "FTP with NULs in URL parts"
The FTP protocol does not support them properly anyway.
- remove "FTP and empty path parts in the URL"
I don't think this has ever been reported as a real problem but was only
a hypothetical one.
- "Premature transfer end but healthy control channel"
This is not a bug, this is an optimization that *could* be performed but is
not an actual problem.
- "FTP without or slow 220 response"
Instead add to the documentation of the connect timeout that the
connection is considered complete at TCP/TLS/QUIC layer.
Closes#9979
`Curl_output_aws_sigv4()` doesn't always have the whole payload in
memory to generate a real payload hash. this commit allows the user to
pass in a header like `x-amz-content-sha256` to provide their desired
payload hash
some services like s3 require this header, and may support other values
like s3's `UNSIGNED-PAYLOAD` and `STREAMING-AWS4-HMAC-SHA256-PAYLOAD`
with special semantics. servers use this header's value as the payload
hash during signature validation, so it must match what the client uses
to generate the signature
CURLOPT_AWS_SIGV4.3 now describes the content-sha256 interaction
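A sketch of what this could look like from an application, assuming an
S3-style endpoint; the provider string, URL and credentials below are
placeholders:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    struct curl_slist *hdrs = NULL;

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://examplebucket.s3.amazonaws.com/object");
    curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:s3");
    curl_easy_setopt(curl, CURLOPT_USERPWD, "ACCESS_KEY:SECRET_KEY");

    /* provide the payload hash value that the signature should use */
    hdrs = curl_slist_append(hdrs,
                             "x-amz-content-sha256: UNSIGNED-PAYLOAD");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);

    curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
  }
  return 0;
}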
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Closes#9804
Add a deprecated attribute to functions and enum values that should not
be used anymore.
This uses a gcc 4.3 dialect, thus is only available for this version of
gcc and newer. Note that the _Pragma() keyword is introduced by C99, but
is available as part of the gcc dialect even when compiling in C89 mode.
It is still possible to disable deprecation at a calling module compile
time by defining CURL_DISABLE_DEPRECATION.
Gcc type checking macros are made aware of possible deprecations.
Some Perl test support programs are adapted to the extended
declaration syntax.
Several test and unit test C programs intentionally use deprecated
functions/options and are annotated to not generate a warning.
New test 1222 checks the deprecation status in doc and header files.
Closes#9667
Field feature_names contains a null-terminated sorted array of feature
names. Bitmask field features is deprecated.
Documentation is updated. Test 1177 and tests/version-scan.pl updated to
match new documentation format and extended to check feature names too.
Closes#9583
Prior to this change if the user wanted to signal an error from their
write callbacks they would have to use logic to return a value different
from the number of bytes (nmemb) passed to the callback. Also, the
inclination of some users has been to just return 0 to signal error,
which is incorrect as that may be the number of bytes passed to the
callback.
To remedy this the user can now return CURL_WRITEFUNC_ERROR instead.
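A minimal sketch of a write callback using the new constant (the
fwrite-based body is only an example):

#include <stdio.h>
#include <curl/curl.h>

static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userp)
{
  FILE *out = (FILE *)userp;
  size_t written = fwrite(ptr, size, nmemb, out);
  if(written != nmemb)
    return CURL_WRITEFUNC_ERROR; /* abort; libcurl returns CURLE_WRITE_ERROR */
  return size * nmemb;           /* all bytes taken care of */
}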
Ref: https://github.com/curl/curl/issues/9873
Closes https://github.com/curl/curl/pull/9874
Adds a new option to control the maximum time that a cached
certificate store may be retained for.
Currently only the OpenSSL backend implements support for
caching certificate stores.
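A minimal sketch, assuming the new option is named
CURLOPT_CA_CACHE_TIMEOUT and takes a number of seconds (per the text
above, only honored by the OpenSSL backend for now):

#include <curl/curl.h>

static void limit_ca_store_cache(CURL *curl)
{
  /* assumed option name: keep a cached certificate store for at most
     ten minutes before rebuilding it */
  curl_easy_setopt(curl, CURLOPT_CA_CACHE_TIMEOUT, 600L);
}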
Closes#9620
A regression in 7.86.0 (via 1e9a538e05) made the tailmatch work
differently than before. This restores the logic to how it used to work:
All names listed in NO_PROXY are tailmatched against the used domain
name; if the lengths are identical it needs a full match.
Update the docs, update test 1614.
Reported-by: Stuart Henderson
Fixes#9842
Closes#9858
For both IPv4 and IPv6 addresses. Now also checks IPv6 addresses "correctly"
and not with string comparisons.
Split out the noproxy checks and functionality into noproxy.c
Added unit test 1614 to verify checking functions.
Reported-by: Mathieu Carbonneaux
Fixes#9773
Fixes#5745
Closes#9775
"You never needed a pass phrase" reads like it's about to be followed by
something like "until version so-and-so", but that is not what is
intended. Change to "You never need a pass phrase". There are two
instances of this text, so make sure to update both.
curl_ws_recv() now receives data to fill up the provided buffer, but can
return a partial fragment. The function now also gets a pointer to a
curl_ws_frame struct with metadata that also mentions the offset and
total size of the fragment (of which you might be receiving a smaller
piece). This way, large incoming fragments will be "streamed" to the
application. When the curl_ws_frame struct field 'bytesleft' is 0, the
final fragment piece has been delivered.
curl_ws_recv() was also adjusted to work with a buffer size smaller than
the fragment size. (Possibly needless to say, as the fragment size can
now be a 63-bit value.)
curl_ws_send() now supports sending a piece of a fragment, in a
streaming manner, in addition to sending the entire fragment in a single
call if it is small enough. To send a huge fragment, curl_ws_send() can
be used to send it in many small calls by first telling libcurl about
the total expected fragment size, and then send the payload in N number
of separate invokes and libcurl will stream those over the wire.
The struct curl_ws_meta() returns is now called 'curl_ws_frame' and it
has been extended with two new fields: *offset* and *bytesleft*, to help
describe the passed-on data chunk when a fragment is delivered in many
smaller pieces.
The documentation has been updated accordingly.
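A sketch of draining a single large fragment with a small buffer,
assuming a connected WebSocket easy handle (error handling kept
minimal):

#include <stdio.h>
#include <curl/curl.h>

static CURLcode recv_whole_fragment(CURL *curl, FILE *out)
{
  char buf[1024];
  const struct curl_ws_frame *frame;
  CURLcode res;

  do {
    size_t rlen;
    res = curl_ws_recv(curl, buf, sizeof(buf), &rlen, &frame);
    if(res)
      return res; /* CURLE_AGAIN means no data right now - retry later */
    fwrite(buf, 1, rlen, out);
    /* frame->offset is where this piece starts within the fragment,
       frame->bytesleft is how much of the fragment remains after it */
  } while(frame->bytesleft);

  return CURLE_OK;
}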
Closes#9636
The former wording, which also suggested using a non-existing file to
just enable the cookie engine, could lead developers to somewhat
carelessly guess a file name that will not exist. At some point in the
future such a file could then be made to exist, and libcurl would
accidentally read cookies that were never meant for it.
Reported-by: Trail of Bits
Closes#9654
The introduction of CURL_DISABLE_MIME came with some additional bugs:
- Disabled MIME is compiled-in anyway if SMTP and/or IMAP is enabled.
- CURLOPT_MIMEPOST, CURLOPT_MIME_OPTIONS and CURLOPT_HTTPHEADER are
conditioned on HTTP, although also needed for SMTP and IMAP MIME mail
uploads.
In addition, the CURLOPT_HTTPHEADER and --header documentation does not
mention their use for MIME mail.
This commit fixes the problems above.
Closes#9610
This is what the RFC calls the protocol. Also rename the file in docs/ to
WEBSOCKET.md in uppercase to match how we have done it for many other
protocol docs in similar fashion.
Add the WebSocket docs to the tarball.
Closes#9496
Next Protocol Negotiation is a TLS extension that was created and used
for agreeing to use the SPDY protocol (the precursor to HTTP/2) for
HTTPS. In the early days of HTTP/2, before the spec was finalized and
shipped, the protocol could be enabled using this extension with some
servers.
curl supports the NPN extension with some TLS backends since then, with
a command line option `--npn` and in libcurl with
`CURLOPT_SSL_ENABLE_NPN`.
HTTP/2 proper is made to use the ALPN (Application-Layer Protocol
Negotiation) extension and the NPN extension no longer serves any
purpose. The HTTP/2 spec was published in May 2015.
Today, use of NPN in the wild should be extremely rare and most likely
totally extinct. Chrome removed NPN support in Chrome 51, shipped in
June 2016. Removed in Firefox 53, April 2017.
Closes#9307
Lintian (on Debian) has been complaining about this for a while but
I didn't bother initially as the groff parser that we use is not
affected by this.
But I have now noticed that the online manpage is affected by it:
https://curl.se/libcurl/c/CURLOPT_WILDCARDMATCH.html
(I'm using double quotes for quoting-only down below)
The section that should be parsed as "'\'" ends up being parsed as
"'´".
This is due to roffit not parsing "'\\'" correctly, which is fine
as the "correct" way of writing "'\'" is "'\e'" instead.
Note that this fix is not enough to fix the online manpage at
curl's website, as roffit seems to parse it wrongly either way.
My intent is to at least fix the manpage so that roffit can
be changed to parse "'\e'" correctly (although I suggest making
roffit parse both ways correctly, since that's what groff does).
More details at:
https://bugs.debian.org/966803930b18e4b2/tags/a/acute-accent-in-manual-page.tag
Closes#9418
- Support TLS 1.3 as the default max TLS version for Windows Server 2022
and Windows 11.
- Support specifying TLS 1.3 ciphers via existing option
CURLOPT_TLS13_CIPHERS (tool: --tls13-ciphers).
Closes https://github.com/curl/curl/pull/8419
Starting now, CURLOPT_FTP_RESPONSE_TIMEOUT is the alias instead of the
other way around.
Since 7.20.0, CURLOPT_SERVER_RESPONSE_TIMEOUT has existed as an alias
but since the option is for more protocols than FTP the more "correct"
version of the option is the "server" one so now we switch.
Closes#9104
... as replacements for deprecated CURLOPT_PROTOCOLS and
CURLOPT_REDIR_PROTOCOLS as these new ones do not risk running into the
32 bit limit the old ones are facing.
CURLINFO_PROTOCOL is now deprecated.
The curl tool is updated to use the new options.
Added test 1597 to verify the libcurl protocol parser.
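A small sketch of the new string options, assuming they are named
CURLOPT_PROTOCOLS_STR and CURLOPT_REDIR_PROTOCOLS_STR and take
comma-separated scheme names:

#include <curl/curl.h>

static void restrict_protocols(CURL *curl)
{
  /* only allow HTTP(S) for the transfer itself ... */
  curl_easy_setopt(curl, CURLOPT_PROTOCOLS_STR, "http,https");
  /* ... and only HTTPS when following redirects */
  curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS_STR, "https");
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
}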
Closes#8992
During the packaging of the latest curl release for Debian, Lintian
warned me about a typo which causes the section name "Secrets in memory"
to not be rendered in the manpage due to "SH_" not being recognized as a
header.
Closes#9057
.. and update some docs to explain curl_global_* is now thread-safe.
Follow-up to 23af112 which made curl_global_init/cleanup thread-safe.
Closes https://github.com/curl/curl/pull/9016
- Remove misleading text that says progress function "gets called at
least once per second, even if the connection is paused."
The progress function behavior is more nuanced and the user is better
served reading the progress function doc rather than attempt to explain
it in the curl_easy_pause doc.
The progress function can only be called at least once per second if an
appropriate multi transfer function is called (eg curl_multi_perform) in
that time. For a paused transfer there may not be such a call. Rather
than explain this in detail in the curl_easy_pause doc, rely on the user
reading the CURLOPT_PROGRESSFUNCTION doc.
Ref: https://github.com/curl/curl/issues/8983
Closes https://github.com/curl/curl/pull/9015
Referring to Daniel's article [1], making the init function thread-safe
was the last bit to make libcurl thread-safe as a whole. So the name of
the feature may as well be the more concise 'threadsafe', also telling
the story that libcurl is now fully thread-safe, not just its init
function. Chances are high that libcurl wants to remain so in the
future, so there is little likelihood of ever needing any other distinct
`threadsafe-<name>` feature flags.
For consistency we also shorten `CURL_VERSION_THREADSAFE_INIT` to
`CURL_VERSION_THREADSAFE`, update its description and reference libcurl's
thread safety documentation.
[1]: https://daniel.haxx.se/blog/2022/06/08/making-libcurl-init-more-thread-safe/
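Checking the renamed feature bit at run-time could look like this:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);
  if(info->features & CURL_VERSION_THREADSAFE)
    printf("curl_global_init() is thread-safe in this libcurl\n");
  return 0;
}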
Reviewed-by: Daniel Stenberg
Reviewed-by: Jay Satiro
Closes#8989
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a Github workflow to check pull requests and adapts
copyright.pl to the changes.
Closes#8869
Adds two new error codes: CURLE_UNRECOVERABLE_POLL and
CURLM_UNRECOVERABLE_POLL, one each for the easy and the multi interfaces.
Reported-by: Harry Sintonen
Fixes#8921
Closes#8961
The e-mail link in the advice contains instructions that are prone to
error. We need an example that works and can demonstrate how to properly
perform a ranged upload, and then we can refer to that example instead.
Bug: https://github.com/curl/curl/issues/8969
Reported-by: Simon Berger
Closes https://github.com/curl/curl/pull/8970
This flag can be used to make sure that curl_global_init() is
thread-safe.
This can be useful for libraries that can't control what other
dependencies are doing with Curl.
Closes#8680
The callback set by CURLOPT_SSH_HOSTKEYFUNCTION is called to check
whether or not the connection should continue.
The host key is passed in argument with a custom handle for the
application.
It overrides CURLOPT_SSH_KNOWNHOSTS
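A sketch of such a callback, assuming the prototype (client pointer, key
type, key blob, key length) and the CURLKHMATCH_* return values from the
accompanying man page; the known-key struct is made up for illustration:

#include <string.h>
#include <curl/curl.h>

struct my_known_key {     /* hypothetical application-side storage */
  const char *blob;
  size_t len;
};

static int hostkey_cb(void *clientp, int keytype,
                      const char *key, size_t keylen)
{
  const struct my_known_key *known = clientp; /* via CURLOPT_SSH_HOSTKEYDATA */
  (void)keytype;
  if(known && keylen == known->len && !memcmp(key, known->blob, keylen))
    return CURLKHMATCH_OK;     /* continue the connection */
  return CURLKHMATCH_MISMATCH; /* refuse the connection */
}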
Closes#7959
Folded header lines will now get passed through like before. The headers
API is adapted and will provide the content unfolded.
Added test 1274 and extended test 1940 to verify.
Reported-by: Petr Pisar
Fixes#8844
Closes#8899
Usage:
curl -x "socks5h://localhost/run/tor/socks" "https://example.com"
Updated runtests.pl to run a socksd server listening on unix socket
Added tests test1467 test1468
Added documentation for proxy command line option and socks proxy
options
Closes#8668
These two options were only ever used for the OpenSSL backend for
versions before 1.1.0. They were never used for other backends and they
are not used with recent OpenSSL versions. They were never used much by
applications.
The defines RANDOM_FILE and EGD_SOCKET can still be set at build-time
for ancient EOL OpenSSL versions.
Closes#8670
The API documentation for the MIME functions specifies that the parts
can be set twice, with the last call winning. While true, the user
can set the parts n times for n > 2, reword to specify multiple API
calls instead.
Closes: #8860
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Multiple share examples were missing a semicolon on the line defining
the CURLSHcode variable.
Closes: #8697
Reported-by: Michael Kaufmann <mail@michael-kaufmann.ch>
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Mostly based on recent language decisions from "everything curl":
- remove contractions (isn't => is not)
- *an* HTTP (consistency)
- runtime (no hyphen)
- backend (no hyphen)
- URL is uppercase
Closes#8646
Several years ago a change was made to block user callbacks from calling
back into the API when not supported (recursive calls). One of the calls
blocked was curl_multi_assign. Recently the blocking was extended to the
multi interface API, however curl_multi_assign may need to be called
from within those user callbacks (eg CURLMOPT_SOCKETFUNCTION).
I can't think of any callback where it would be unsafe to call
curl_multi_assign so I removed the restriction entirely.
Reported-by: Michael Wallner
Ref: https://github.com/curl/curl/commit/b46cfbc
Ref: https://github.com/curl/curl/commit/340bb19
Fixes https://github.com/curl/curl/issues/8480
Closes https://github.com/curl/curl/pull/8483
1. The callback is better described in the option for setting it. Having
it in a single place reduces the risk that one of them is wrong.
2. The "typical usage" is wrong since the functions described in this
man page are both deprecated so they cannot be used in any "typical" way
anymore.
Closes#8262
As credentials can be quite different depending on the mechanism used,
there are no default mechanisms for LDAP and simple bind with a DN is
then used.
The caller has to provide mechanism(s) using CURLOPT_LOGIN_OPTIONS to
enable SASL authentication and disable simple bind.
Closes#8152
83cc966 changed documentation from using http to https. However,
CURLOPT_RESOLVE being set to port 80 in the documentation means that it
isn't valid for the new URL. Update to 443.
Closes https://github.com/curl/curl/pull/8258
Mesalink has ceased development. We can no longer encourage use of it.
It seems to be continued under the name TabbySSL, but no attempts have
(yet) been made to make curl support it.
Fixes#8188
Closes#8191
For consistency, use the same return code for URL malformats,
independently of which scheme is used. Previously this would return
CURLE_LDAP_INVALID_URL, but starting now that error cannot be returned.
Closes#8170
Add support for `CURLOPT_CAINFO_BLOB` and `CURLOPT_PROXY_CAINFO_BLOB` to the
rustls TLS backend. Multiple certificates in a single PEM string are
supported just like OpenSSL does with this option.
This is compatible at least with rustls-ffi 0.8+ which is our new
minimum version anyway.
I was able to build and run this on Windows, pulling trusted certs from
the system and then add them to rustls by setting
`CURLOPT_CAINFO_BLOB`. Handy!
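A sketch of feeding an in-memory PEM bundle (possibly holding several
certificates) to a handle:

#include <string.h>
#include <curl/curl.h>

static void set_ca_blob(CURL *curl, const char *pem_bundle)
{
  struct curl_blob blob;
  blob.data = (void *)pem_bundle;
  blob.len = strlen(pem_bundle);
  blob.flags = CURL_BLOB_COPY; /* let libcurl keep its own copy */
  curl_easy_setopt(curl, CURLOPT_CAINFO_BLOB, &blob);
}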
Closes#8255
The new CURLOPT_PREREQFUNCTION callback is another way to sanitize
addresses.
Using the curl_url API is a way to mitigate against attacks relying on
URL parsing differences.
- Check for proper LDAP URL syntax early. Reject URLs with a userinfo part.
- Use dynamic memory for ldap_init_fd() URL rather than a
stack-allocated buffer.
- Never chase referrals: supporting it would require additional parallel
connections and alternate authentication credentials.
- Do not wait 1 microsecond while polling/reading query response data.
- Store last received server code for retrieval with CURLINFO_RESPONSE_CODE.
Closes#8140
Minor rephrasing for some explanations.
Put the format strings in stand-alone lines with .nf/.fi to be easier to spot.
Move "added in" to AVAILABILITY
Closes#8110
This is the exact same limitation already documented for
CURLOPT_WRITEDATA but should be clarified here. It also has a different
work-around.
Reported-by: Stephane Pellegrino
Bug: https://github.com/curl/curl/issues/8102
Closes#8103
The callbacks were partially documented to support this. Now the
behavior is documented and returning error from either of these
callbacks will effectively kill all currently ongoing transfers.
Added test 530 to verify
Reported-by: Marcelo Juchem
Fixes#8083
Closes#8089
Make all libcurl related options use .nf (no fill) for the SYNOPSIS
section - for consistent look. roffit then renders that section using
<pre> (monospace font) in html for the website.
Extended manpage-syntax (test 1173) with a basic check for it.
Closes#8062
Previously, the return code CURLUE_MALFORMED_INPUT was used for almost
30 different URL format violations. This made it hard for users to
understand why a particular URL was not acceptable. Since the API cannot
point out a specific position within the URL for the problem, this now
instead introduces a number of additional and more fine-grained error
codes to allow the API to return more exactly in what "part" or section
of the URL a problem was detected.
Also bug-fixes curl_url_get() with CURLUPART_ZONEID, which previously
returned CURLUE_OK even if no zoneid existed.
Test cases in 1560 have been adjusted and extended. Tests 1538 and 1559
have been updated.
Updated libcurl-errors.3 and curl_url_strerror() accordingly.
Closes#8049
Instead of having all callers pass in the maximum length, always use
it. The passed-in length is instead used only as the length of the
target buffer for storing the scheme name in, if used.
Added the scheme max length restriction to the curl_url_set.3 man page.
Follow-up to 45bcb2eaa7
Closes#8047
Until now, form field and file names were escaped using the
backslash-escaping algorithm defined for multipart mails. This commit
replaces this with the percent-escaping method for URLs.
As this may introduce incompatibilities with server-side applications, a
new libcurl option CURLOPT_MIME_OPTIONS with bitmask
CURLMIMEOPT_FORMESCAPE is introduced to revert to legacy use of
backslash-escaping. This is controlled by new cli tool option
--form-escape.
New tests and documentation are provided for this feature.
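Reverting to the legacy backslash-escaping from an application is a
single setopt call:

#include <curl/curl.h>

static void use_legacy_form_escaping(CURL *curl)
{
  /* restore pre-change backslash-escaping of form field and file names */
  curl_easy_setopt(curl, CURLOPT_MIME_OPTIONS, (long)CURLMIMEOPT_FORMESCAPE);
}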
Reported-by: Ryan Sleevi
Fixes#7789
Closes#7805
Easy handles that are used by the multi interface should be removed from
the multi handle before they are cleaned up.
Reported-by: Stephen M. Coakley
Ref: #7982
Closes#7983
Bold the example ciphers instead of using single quotes, which then also
avoids the problem of how to use single quotes when first in a line.
Also rephrased the pages a little.
Reported-by: Sergio Durigan Junior
Ref: #7928
Closes#7934
This is the same order we already enforce among the options' man pages:
consistency is good. Add lots of previously missing examples.
Adjust the manpage-syntax script for this purpose, used in test 1173.
Closes#7904
- simplify the example by using curl_multi_poll
- mention curl_multi_add_handle in the text
- cut out the description of pre-7.20.0 return code behavior - that version
is now more than eleven years old and is basically no longer out there
- adjust the "typical usage" to mention curl_multi_poll
Closes#7883
The host name is stored decoded and can be encoded when used to extract
the full URL. By default when extracting the URL, the host name will not
be URL encoded, to work as similarly as possible to before. When not URL
encoding the host name, the '%' character will however still be encoded.
Getting the URL with the CURLU_URLENCODE flag set will percent encode
the host name part.
As a bonus, setting the host name part with curl_url_set() no longer
accepts a name that contains space, CR or LF.
Test 1560 has been extended to verify percent encodings.
Reported-by: Noam Moshe
Reported-by: Sharon Brizinov
Reported-by: Raul Onitza-Klugman
Reported-by: Kirill Efimov
Fixes#7830
Closes#7834
Triggered before a request is made but after a connection is set up
Changes:
- callback: Update docs and callback for pre-request callback
- Add documentation for CURLOPT_PREREQDATA and CURLOPT_PREREQFUNCTION,
- Add redirect test and callback failure test
- Note that the function may be called multiple times on a redirection
- Disable new 2086 test due to Windows weirdness
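A sketch of a pre-request callback, assuming the documented prototype
(client pointer, primary/local IP strings and ports) and the
CURL_PREREQFUNC_* return values:

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

static int prereq_cb(void *clientp, char *conn_primary_ip,
                     char *conn_local_ip, int conn_primary_port,
                     int conn_local_port)
{
  (void)clientp;
  (void)conn_local_ip;
  (void)conn_local_port;
  printf("about to send a request to %s:%d\n",
         conn_primary_ip, conn_primary_port);
  if(!strcmp(conn_primary_ip, "127.0.0.1"))
    return CURL_PREREQFUNC_ABORT; /* veto the transfer, just as an example */
  return CURL_PREREQFUNC_OK;
}

/* ... curl_easy_setopt(curl, CURLOPT_PREREQFUNCTION, prereq_cb); */
/* ... curl_easy_setopt(curl, CURLOPT_PREREQDATA, NULL); */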
Closes#7477
Add curl_url_strerror() to convert CURLUcode into readable string and
facilitate easier troubleshooting in programs using URL API.
Extend CURLUcode with CURLU_LAST for iteration in unit tests.
Update man pages with a mention of new function.
Update example code and tests with new functionality where it fits.
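A short usage sketch of the new function:

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *h = curl_url();
  CURLUcode uc = curl_url_set(h, CURLUPART_URL, "https://[broken", 0);
  if(uc)
    fprintf(stderr, "URL error: %s\n", curl_url_strerror(uc));
  curl_url_cleanup(h);
  return 0;
}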
Closes#7605
- avoid writing "set ..." or "enable/disable ..." or "specify ..."
*All* options for curl_easy_setopt() are about setting or enabling
things and most of the existing options didn't use that way of
description.
- start with lowercase letter, unless abbreviation. For consistency.
- Some additional touch-ups
Closes#7688
In every libcurl option man page there are now 8 mandatory sections that
must use the right name in the correct order and test 1173 verifies
this. Only 14 man pages needed adjustments.
The sections and the order is as follows:
- NAME
- SYNOPSIS
- DESCRIPTION
- PROTOCOLS
- EXAMPLE
- AVAILABILITY
- RETURN VALUE
- SEE ALSO
Reviewed-by: Daniel Gustafsson
Closes#7656
Extended manpage-syntax.pl (run by test 1173) to check that every man
page for a libcurl option has an EXAMPLE section that is more than two
lines. Then fixed all errors it found and added examples.
Reviewed-by: Daniel Gustafsson
Closes#7656
Since this option is also used for FTP, it needs to be possible to set
for applications even if hyper doesn't support it for HTTP. Verified by test
1137.
Updated docs to specify that the option doesn't work for HTTP when using
the hyper backend.
Closes#7614
CURLUE_BAD_HANDLE and CURLUE_BAD_PARTPOINTER should be for "bad" or
wrong pointers in a generic sense, not just for NULL pointers.
Reviewed-by: Jay Satiro
Ref: #7605Closes#7611
... and also change the 'Removed' column name to 'Last' since that
column is for the last version to contain the symbol.
Closes https://github.com/curl/curl/pull/7609
- Add protocols field to max-filesize.d.
- Revert wording on unknown file size caveat and do not discuss specific
protocols in that section.
Partial revert of ecf0225. All max-filesize options now have the list of
protocols and it's clearer just to have that list without discussing
specific protocols in the caveat.
Reported-by: Josh Soref
Ref: https://github.com/curl/curl/issues/7453#issuecomment-884128762
Also make it clearer that the caveat 'if the file size is unknown the
option will have no effect' may apply to protocols other than FTP
and HTTP.
Reported-by: Josh Soref
Fixes https://github.com/curl/curl/issues/7453
Only the OpenSSL backend actually uses the EGDSOCKET. Also use
TLS consistently rather than mixing SSL and TLS. While there, also
fix a minor spelling nit.
Closes: #7391
Reviewed-by: Jay Satiro <raysatiro@yahoo.com>
The documentation for the read callback was erroneously referencing
the nitems argument by nmemb. The error was introduced in commit
ce0881edee.
Closes#7383
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Unicode Windows builds use UTF-8 strings internally in libcurl,
so make sure to call the UTF-8 flavour of the libidn2 API. Also
document that Windows builds with libidn2 and UNICODE do expect
CURLOPT_URL as a UTF-8 string.
Reported-by: dEajL3kA on github
Assisted-by: Jay Satiro
Reviewed-by: Marcel Raad
Closes#7246
Fixes#7228
They were never officially allowed and slipped in only due to sloppy
parsing. Spaces (ASCII 32) should be correctly encoded (to %20) before
being part of a URL.
The new flag bit CURLU_ALLOW_SPACE when a full URL is set, makes libcurl
allow spaces.
Updated test 1560 to verify.
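A sketch of opting in to the new flag with the URL API:

#include <curl/curl.h>

static CURLU *parse_sloppy_url(const char *url)
{
  CURLU *h = curl_url();
  /* accept spaces in the input; they get encoded when the URL is
     extracted again */
  if(h && curl_url_set(h, CURLUPART_URL, url, CURLU_ALLOW_SPACE)) {
    curl_url_cleanup(h);
    return NULL;
  }
  return h;
}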
Closes#7073
For options that pass in lists or strings that are subsequently parsed
and must be correct. This broadens the scope for the option previously
known as CURLE_TELNET_OPTION_SYNTAX but the old name is of course still
provided as a #define for existing applications.
Closes#7175
In some situations, it was possible that a transfer was set up to
use a specific IP version, but due to DNS caching or connection
reuse, it ended up using a different IP version than requested.
This commit changes the effect of CURLOPT_IPRESOLVE from simply
restricting address resolution to preventing the wrong connection
type being used, when choosing a connection from the pool, and
to restricting what addresses could be used when establishing
a new connection.
It is important that all address versions are resolved, even if
not used in that transfer in particular, because the result is
cached, and could be useful for a different transfer with a
different CURLOPT_IPRESOLVE setting.
Closes#6853
... or the cookies won't get sent. Push users to using the "Netscape"
format instead, which curl uses when saving a cookie "jar".
Reported-by: Martin Dorey
Reviewed-by: Daniel Gustafsson
Fixes#6723Closes#7077
Previously this logic would cap the send to CURL_MAX_WRITE_SIZE bytes,
but for the situations where a larger upload buffer has been set, this
function can benefit from sending more bytes. With default size used,
this does the same as before.
Also changed the storage of the size to an 'unsigned int' as it is not
allowed to be set larger than 2M.
Also added cautions to the man pages about changing buffer sizes at
run-time.
Closes#7022
These functions have existed in the API since the dawn of time. It is
about time we describe how they work, even if we discourage users from
using them.
Closes#7010
When a TLS server requests a client certificate during handshake and
none can be provided, libcurl now returns this new error code
CURLE_SSL_CLIENTCERT
Only supported by Secure Transport and OpenSSL for TLS 1.3 so far.
Closes#6721
wording taken from man page for CURLOPT_URL.3
As far as I can see, the URL part is either malloc'ed before due to
encoding or it is strdup'ed.
Closes#6953
- Disable auto credentials by default. This is a breaking change
for clients that are using it, wittingly or not.
- New libcurl ssl option value CURLSSLOPT_AUTO_CLIENT_CERT tells libcurl
to automatically locate and use a client certificate for
authentication, when requested by the server.
- New curl tool options --ssl-auto-client-cert and
--proxy-ssl-auto-client-cert map to CURLSSLOPT_AUTO_CLIENT_CERT.
This option is only supported for Schannel (the native Windows SSL
library). Prior to this change Schannel would, with no notification to
the client, attempt to locate a client certificate and send it to the
server, when requested by the server. Since the server can request any
certificate that supports client authentication in the OS certificate
store it could be a privacy violation and unexpected.
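Opting back in to the old automatic behavior is a matter of one ssl
option bit:

#include <curl/curl.h>

static void enable_auto_client_cert(CURL *curl)
{
  /* let Schannel pick a client certificate automatically again when the
     server asks for one */
  curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                   (long)CURLSSLOPT_AUTO_CLIENT_CERT);
  /* the proxy counterpart is CURLOPT_PROXY_SSL_OPTIONS */
}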
Fixes https://github.com/curl/curl/issues/2262
Reported-by: Jeroen Ooms
Assisted-by: Wes Hinsley
Assisted-by: Rich FitzJohn
Ref: https://curl.se/mail/lib-2021-02/0066.html
Reported-by: Morten Minde Neergaard
Closes https://github.com/curl/curl/pull/6673
... previously they were supported if a TLS library would (unexpectedly)
still support them, but from this change they will be refused already in
curl_easy_setopt(). SSLv2 and SSLv3 have been known to be insecure for
many years now.
Closes#6773
Add test 676 to verify that setting CURLOPT_COOKIEFILE to NULL again clears
the cookiejar from memory.
Reported-by: Stefan Karpinski
Fixes#6889Closes#6891
- Document in DOH that some SSL settings are inherited but DOH hostname
and peer verification are not and are controlled separately.
- Document that CURLOPT_SSL_CTX_FUNCTION is inherited by DOH handles but
we're considering changing behavior to no longer inherit it. Request
feedback.
Closes https://github.com/curl/curl/pull/6688
- add CURLINFO_REFERER libcurl option
- add --write-out '%{referer}' command-line option
- extend --xattr command-line option to fill user.xdg.referrer.url extended
attribute with the referrer (if there was any)
Closes#6591
- New libcurl options CURLOPT_DOH_SSL_VERIFYHOST,
CURLOPT_DOH_SSL_VERIFYPEER and CURLOPT_DOH_SSL_VERIFYSTATUS do the
same as their respective counterparts.
- New curl tool options --doh-insecure and --doh-cert-status do the same
as their respective counterparts.
Prior to this change DOH SSL certificate verification settings for
verifyhost and verifypeer were supposed to be inherited respectively
from CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER, but due to a bug
were not. As a result DOH verification remained at the default, ie
enabled, and it was not possible to disable. This commit changes
behavior so that the DOH verification settings are independent and not
inherited.
Ref: https://github.com/curl/curl/pull/4579#issuecomment-554723676
Fixes https://github.com/curl/curl/issues/4578
Closes https://github.com/curl/curl/pull/6597
- Add support for services without region and service prefixes in
the URL endpoint (ex. Min.IO, GCP, Yandex Cloud, Mail.Ru Cloud Solutions, etc)
by providing region and service parameters via aws-sigv4 option.
- Add [:region[:service]] suffix to aws-sigv4 option;
- Fix memory allocation errors.
- Refactor memory management.
- Use Curl_http_method() instead of STRING_CUSTOMREQUEST.
- Refactor canonical header generation.
- Remove repeated sha256_to_hex() usage.
- Add some docs fixes.
- Add some codestyle fixes.
- Add overloaded strndup() for debug - curl_dbg_strndup().
- Update tests.
Closes#6524
We currently use both spellings: the British "behaviour" and the American
"behavior". However, "behavior" is more commonly used in the project so I
think it's worth dropping the British spelling.
Closes#6395
Extend the syntax of CURLOPT_RESOLVE strings: allow using a '+' prefix
(similar to the existing '-' prefix for removing entries) to add
DNS cache entries that will time out just like entries that are added
by libcurl itself.
Append " (non-permanent)" to info log message in case a non-permanent
entry is added.
Adjust relevant comments to reflect the new behavior.
Adjust documentation.
Extend unit1607 to test the new functionality.
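A sketch of adding a non-permanent entry with the new '+' prefix:

#include <curl/curl.h>

static struct curl_slist *preload_resolve(CURL *curl)
{
  /* '+' makes the entry time out like a normally resolved address
     instead of living for the lifetime of the handle */
  struct curl_slist *host =
    curl_slist_append(NULL, "+example.com:443:127.0.0.1");
  curl_easy_setopt(curl, CURLOPT_RESOLVE, host);
  return host; /* keep the list alive until the transfer is done */
}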
Closes#6294
Paused transfers should not be stopped due to slow speed even when
CURLOPT_LOW_SPEED_LIMIT is set. Additionally, the slow speed timer is
now reset when the transfer is unpaused - as otherwise it would easily
just trigger immediately after unpausing.
Reported-by: Harry Sintonen
Fixes#6358Closes#6359
It is a security process for HTTP.
It doesn't seem to be a standard, but it is used by some cloud providers.
Aws:
https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
Outscale:
https://wiki.outscale.net/display/EN/Creating+a+Canonical+Request
GCP (I didn't test that this code works with GCP though):
https://cloud.google.com/storage/docs/access-control/signing-urls-manually
most of the code is in lib/http_v4_signature.c
Information required by the algorithm:
- The URL
- Current time
- some prefixes that are appended to some of the signature parameters.
The data extracted from the URL are: the URI, the region,
the host and the API type
example: in https://api.eu-west-2.outscale.com/api/latest/ReadNets,
"api" is the API type, "eu-west-2" is the region and
"/api/latest/ReadNets" is the URI
Small description of the algorithm (a code sketch of the key derivation
follows the list):
- make canonical header using content type, the host, and the date
- hash the post data
- make canonical_request using custom request, the URI,
the get data, the canonical header, the signed header
and post data hash
- hash canonical_request
- make str_to_sign using one of the prefixes passed in as parameters,
the date, the credential scope and the canonical_request hash
- compute hmac from date, using secret key as key.
- compute hmac from region, using above hmac as key
- compute hmac from api_type, using above hmac as key
- compute hmac from request_type, using above hmac as key
- compute hmac from str_to_sign using above hmac as key
- create Authorization header using above hmac, the prefix passed in as a
parameter, the date, and above hash
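A rough sketch of the key-derivation chain described above, written
against OpenSSL's HMAC() for illustration only (this is not the actual
lib/http_v4_signature.c code and the function name is made up):

#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/sha.h>

static void sigv4_signing_key(const char *secret, const char *date,
                              const char *region, const char *service,
                              unsigned char out[SHA256_DIGEST_LENGTH])
{
  unsigned char k1[SHA256_DIGEST_LENGTH], k2[SHA256_DIGEST_LENGTH],
                k3[SHA256_DIGEST_LENGTH];
  unsigned int len = 0;

  /* each step is keyed with the output of the previous step */
  HMAC(EVP_sha256(), secret, (int)strlen(secret),
       (const unsigned char *)date, strlen(date), k1, &len);
  HMAC(EVP_sha256(), k1, (int)len,
       (const unsigned char *)region, strlen(region), k2, &len);
  HMAC(EVP_sha256(), k2, (int)len,
       (const unsigned char *)service, strlen(service), k3, &len);
  HMAC(EVP_sha256(), k3, (int)len,
       (const unsigned char *)"aws4_request", strlen("aws4_request"),
       out, &len);
  /* the final signature is then HMAC(out, str_to_sign), hex-encoded into
     the Authorization header */
}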
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Closes#5703
The command line tool also independently sets --ftp-skip-pasv-ip by
default.
Ten test cases updated to adapt to the modified --libcurl output.
Bug: https://curl.se/docs/CVE-2020-8284.html
CVE-2020-8284
Reported-by: Varnavas Papaioannou
for curl_easy_escape and curl_easy_setopt()
The limit is there to catch mistakes and abuse. It is meant to be large
enough to allow virtually all "fine" use cases.
Reported-by: Marc Schlatter
Fixes#6190Closes#6191
- enable in the build (configure)
- header parsing
- host name lookup
- unit tests for the above
- CI build
- CURL_VERSION_HSTS bit
- curl_version_info support
- curl -V output
- curl-config --features
- CURLOPT_HSTS_CTRL
- man page for CURLOPT_HSTS_CTRL
- curl --hsts (sets CURLOPT_HSTS_CTRL and works with --libcurl)
- man page for --hsts
- save cache to disk
- load cache from disk
- CURLOPT_HSTS
- man page for CURLOPT_HSTS
- added docs/HSTS.md
- fixed --version docs
- adjusted curl_easy_duphandle
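Enabling the HSTS engine from an application then boils down to two
setopt calls (the cache file path is just an example):

#include <curl/curl.h>

static void enable_hsts(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_HSTS_CTRL, (long)CURLHSTS_ENABLE);
  curl_easy_setopt(curl, CURLOPT_HSTS, "/tmp/hsts-cache.txt");
}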
Closes#5896
Fixed two return code mixups. CURLE_UNKNOWN_OPTION is saved for when the
option is, yeah, not known. Clarified this in the setopt man page too.
Closes#5993
Since HTTPS is "the new normal", this update changes a lot of man page
examples to use https://example.com instead of the previous "http://..."
Closes#5969
USE_TLS_SRP will be true if *any* selected TLS backend can use SRP
HAVE_OPENSSL_SRP is defined when OpenSSL can use it
HAVE_GNUTLS_SRP is defined when GnuTLS can use it
Clarify in the curl_version_info docs that CURL_VERSION_TLSAUTH_SRP is
set if at least one of the supported backends offers SRP.
Reported-by: Stefan Strogin
Fixes#5865Closes#5870