The cookiefile entries are set into the handle and should remain set for
the lifetime of the handle so that duplicating it also duplicates the
list. Therefore, the struct field is moved from 'state' to 'set'.
Fixes#10133
Closes#10134
This makes a big difference in cases where the rewind is not actually
necessary to perform (for example, an HTTP 301 response converts the
method to GET) and can therefore be avoided. It matters in particular
when that rewind would fail, for example when reading from a pipe or
similar.
Reported-by: Ali Utku Selen
Fixes#9735
Closes#9958
- almost all backend calls pass the Curl_cfilter instance instead of
connectdata+sockindex
- ssl_connect_data is removed from struct connectdata and made internal
to vtls
- ssl_connect_data is allocated in the added filter, kept at cf->ctx
- added functions to let an SSL filter access its ssl_primary_config and
ssl_config_data; these select the proper subfields in conn and data,
for filters added as plain or proxy
- adjusted all backends to use the changed API
- adjusted all backends to access config data via the exposed
functions, no longer using conn or data directly
cfilter renames for clear purpose:
- methods `Curl_conn_*(data, conn, sockindex)` work on the complete
filter chain at `sockindex` and connection `conn`.
- methods `Curl_cf_*(cf, ...)` work on a specific Curl_cfilter
instance.
- methods `Curl_conn_cf()` work on/with filter instances at a
connection.
- rebased and resolved some naming conflicts
- hostname validation (and session lookup) on SECONDARY uses the same
name as on FIRST (again).
new debug macros and removing connectdata from function signatures where not
needed.
adapting schannel for the new Curl_read_plain parameter.
Closes#9919
This struct field MUST remain what the application set it to, so that
handle reuse and handle duplication work.
Instead, the request state bit 'no_body' is introduced for code flows
that need to change this in run-time.
Closes#9888
- general construct/destroy in connectdata
- default implementations of callback functions
- connect: cfilters for connect and accept
- socks: cfilter for socks proxying
- http_proxy: cfilter for http proxy tunneling
- vtls: cfilters for primary and proxy ssl
- change in general handling of data/conn
- Curl_cfilter_setup() sets up filter chain based on data settings,
if none are installed by the protocol handler setup
- Curl_cfilter_connect() bootstraps filters into `connected` status,
used by handlers and multi to reach further stages
- Curl_cfilter_is_connected() to check if a conn is connected,
e.g. all filters have done their work
- Curl_cfilter_get_select_socks() gets the sockets and READ/WRITE
indicators for multi select to work
- Curl_cfilter_data_pending() asks filters if they have incoming
data pending for recv
- Curl_cfilter_recv()/Curl_cfilter_send() are the general callbacks
installed in conn->recv/conn->send for I/O handling
- Curl_cfilter_attach_data()/Curl_cfilter_detach_data() inform filters
about the addition/removal of a `data` to/from their connection
- adding vtls functions to prevent use of Curl_ssl globals directly
in other parts of the code.
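As a rough illustration of how the pieces listed above fit together, here
is a minimal sketch of a caller driving the filter chain; the exact
parameter lists and return types are assumptions, not the real internal
signatures.

    /* sketch only: parameter lists and return types are assumed */
    static CURLcode ensure_connected(struct Curl_easy *data,
                                     struct connectdata *conn,
                                     int sockindex)
    {
      bool done = FALSE;
      /* install a default chain if the handler has not set one up */
      CURLcode result = Curl_cfilter_setup(data, conn, sockindex);
      if(!result)
        /* bootstrap all filters towards the `connected` state;
           Curl_cfilter_is_connected() reports when they are all done */
        result = Curl_cfilter_connect(data, conn, sockindex, &done);
      /* once connected, conn->recv/conn->send go through
         Curl_cfilter_recv()/Curl_cfilter_send() */
      return result ? result : (done ? CURLE_OK : CURLE_AGAIN);
    }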
Reviewed-by: Daniel Stenberg
Closes#9855
Unlike `CONNECT`, currently we don't keep track of whether `PROXY` is
already sent or not. This causes the `PROXY` header to be sent twice during
`MSTATE_TUNNELING` and `MSTATE_PROTOCONNECT`.
Closes#9878
Fixes#9442
Adds a new option to control the maximum time that a cached
certificate store may be retained for.
Currently only the OpenSSL backend implements support for
caching certificate stores.
Closes#9620
- Replace `Github` with `GitHub`.
- Replace `windows` with `Windows`
- Replace `advice` with `advise` where a verb is used.
- A few fixes on removing repeated words.
- Replace `a HTTP` with `an HTTP`
Closes#9802
Detecting headers and lib separately makes sense when headers come in
variations or with extra ones, but this wasn't the case here. These were
duplicate/parallel macros that we had to keep in sync with each other
for a working build. This patch leaves a single macro for each of these
dependencies:
- Rely on `HAVE_LIBZ`, delete parallel `HAVE_ZLIB_H`.
Also delete CMake logic making sure these two were in sync, along with
a toggle to turn off that logic, called `CURL_SPECIAL_LIBZ`.
Also delete stray `HAVE_ZLIB` defines.
There is also a `USE_ZLIB` variant in `lib/config-dos.h`. This patch
retains it for compatibility and deprecates it.
- Rely on `USE_LIBSSH2`, delete parallel `HAVE_LIBSSH2_H`.
Also delete `LIBSSH2_WIN32`, `LIBSSH2_LIBRARY` from
`winbuild/MakefileBuild.vc`, these have a role when building libssh2
itself. And `CURL_USE_LIBSSH`, which had no use at all.
Also delete stray `HAVE_LIBSSH2` defines.
- Rely on `USE_LIBSSH`, delete parallel `HAVE_LIBSSH_LIBSSH_H`.
Also delete `LIBSSH_WIN32`, `LIBSSH_LIBRARY` and `HAVE_LIBSSH` from
`winbuild/MakefileBuild.vc`; these were the result of copy-pasting the
libssh2 line and had no use.
- Delete unused `HAVE_LIBPSL_H` and `HAVE_LIBPSL`.
Reviewed-by: Daniel Stenberg
Closes#9652
Move the curl_prot_t to its own conditional block. Introduce symbol
PROTO_TYPE_SMALL to control it.
Fix a cast in a curl_prot_t assignment.
Remove an outdated comment.
Follow-up to cd5ca80.
Closes#9534
This internal-use-only storage type can be bumped to a curl_off_t once
we need to use bit 32, since the previous 'unsigned int' can then no
longer hold them all.
The websocket protocols take bits 30 and 31, so they are the last ones
that fit within 32 bits - but they cannot properly be exported through
APIs since those use *signed* 32-bit types (long) in places.
Closes#9481
Next Protocol Negotiation is a TLS extension that was created and used
for agreeing to use the SPDY protocol (the precursor to HTTP/2) for
HTTPS. In the early days of HTTP/2, before the spec was finalized and
shipped, the protocol could be enabled using this extension with some
servers.
curl has supported the NPN extension with some TLS backends since then,
with the command line option `--npn` and in libcurl with
`CURLOPT_SSL_ENABLE_NPN`.
HTTP/2 proper is made to use the ALPN (Application-Layer Protocol
Negotiation) extension and the NPN extension no longer serves any
purpose. The HTTP/2 spec was published in May 2015.
Today, use of NPN in the wild should be extremely rare and most likely
totally extinct. Chrome removed NPN support in Chrome 51, shipped in
June 2016. Firefox removed it in Firefox 53, April 2017.
Closes#9307
By (almost) sorting the struct fields in connectdata in descending size
order, having the single char ones last, we reduce the number of holes
in the struct and thus the amount of storage needed.
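For illustration, a standalone toy example (not an actual curl struct)
of how ordering fields by size removes padding holes on a typical
64-bit ABI:

    #include <stdio.h>

    struct unsorted {
      char a;      /* 1 byte + 7 bytes padding before the pointer */
      void *p;     /* 8 bytes */
      char b;      /* 1 byte + 3 bytes padding before the int */
      int n;       /* 4 bytes */
    };             /* typically 24 bytes */

    struct sorted {
      void *p;     /* 8 bytes */
      int n;       /* 4 bytes */
      char a;      /* 1 byte */
      char b;      /* 1 byte + 2 bytes tail padding */
    };             /* typically 16 bytes */

    int main(void)
    {
      printf("unsorted: %zu, sorted: %zu\n",
             sizeof(struct unsorted), sizeof(struct sorted));
      return 0;
    }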
Closes#9280
So that an address used from the DNS cache that was previously used for
QUIC can be reused for TCP and vice versa.
To make this possible, set conn->transport to "unix" for unix domain
connections ... and store the transport struct field in an unsigned char
to use less space.
Reported-by: ウさん
Fixes#9274
Closes#9276
and make 'dnstype' in 'struct dnsprobe' use the DNStype to fix the icc compiler warning:
doh.c(924): error #188: enumerated type mixed with another type
Reported-by: Matthew Thompson
Ref: #9156
Closes#9174
ftp_filemethod, ftpsslauth and ftp_ccc are now uchars
accepttimeout is now unsigned int - almost 50 days ought to be enough
for this value.
Closes#9106
... as replacements for deprecated CURLOPT_PROTOCOLS and
CURLOPT_REDIR_PROTOCOLS as these new ones do not risk running into the
32 bit limit the old ones are facing.
CURLINFO_PROTOCOL is now deprecated.
The curl tool is updated to use the new options.
Added test 1597 to verify the libcurl protocol parser.
Closes#8992
- Send no more than 150 cookies per request
- Cap the max length used for a cookie: header to 8K
- Cap the max number of received Set-Cookie: headers to 50
Bug: https://curl.se/docs/CVE-2022-32205.html
CVE-2022-32205
Reported-by: Harry Sintonen
Closes#9048
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
Closes#8869
The callback set by CURLOPT_SSH_HOSTKEYFUNCTION is called to check
whether or not the connection should continue.
The host key is passed as an argument, along with a custom handle for
the application.
It overrides CURLOPT_SSH_KNOWNHOSTS.
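A minimal sketch of wiring up the callback from an application; the
prototype and the CURLKHMATCH_* return values shown here are written from
memory, so verify them against the CURLOPT_SSH_HOSTKEYFUNCTION man page.

    #include <curl/curl.h>

    static int hostkey_cb(void *clientp, int keytype,
                          const char *key, size_t keylen)
    {
      (void)clientp; (void)keytype; (void)key; (void)keylen;
      /* inspect the raw key here; return CURLKHMATCH_MISMATCH to refuse */
      return CURLKHMATCH_OK;
    }

    static void install_hostkey_check(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_SSH_HOSTKEYFUNCTION, hostkey_cb);
      curl_easy_setopt(curl, CURLOPT_SSH_HOSTKEYDATA, NULL);
    }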
Closes#7959
Folded header lines will now get passed through like before. The headers
API is adapted and will provide the content unfolded.
Added test 1274 and extended test 1940 to verify.
Reported-by: Petr Pisar
Fixes#8844
Closes#8899
These two options were only ever used for the OpenSSL backend for
versions before 1.1.0. They were never used for other backends and they
are not used with recent OpenSSL versions. They were never used much by
applications.
The defines RANDOM_FILE and EGD_SOCKET can still be set at build-time
for ancient EOL OpenSSL versions.
Closes#8670
Also move the static function safecmp() to a non-static Curl_safecmp()
since it is needed in several places.
Bug: https://curl.se/docs/CVE-2022-22576.html
CVE-2022-22576
Closes#8746
... and make connect_init() refuse to tunnel protocols marked as not
working. Avoids a double-free.
Reported-by: Even Rouault
Fixes#8018
Closes#8020
Until now, form field and file names were escaped using the
backslash-escaping algorithm defined for multipart mails. This commit
replaces this with the percent-escaping method for URLs.
As this may introduce incompatibilities with server-side applications, a
new libcurl option CURLOPT_MIME_OPTIONS with bitmask
CURLMIMEOPT_FORMESCAPE is introduced to revert to legacy use of
backslash-escaping. This is controlled by new cli tool option
--form-escape.
New tests and documentation are provided for this feature.
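A minimal sketch of opting back in to the legacy backslash-escaping from
an application (error handling omitted):

    #include <curl/curl.h>

    static void use_legacy_mime_escaping(CURL *curl)
    {
      /* revert to backslash-escaping of part and file names */
      curl_easy_setopt(curl, CURLOPT_MIME_OPTIONS,
                       (long)CURLMIMEOPT_FORMESCAPE);
    }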
Reported-by: Ryan Sleevi
Fixes#7789
Closes#7805
The code for sending DoH requests with GET was never enabled in a way
such that it could be used or tested. As there haven't been requests
for this feature, and since it is at this point effectively dead, remove
it in favor of reimplementing the feature if anyone is interested.
Closes#7870
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Triggered before a request is made but after a connection is set up
Changes:
- callback: Update docs and callback for pre-request callback
- Add documentation for CURLOPT_PREREQDATA and CURLOPT_PREREQFUNCTION,
- Add redirect test and callback failure test
- Note that the function may be called multiple times on a redirection
- Disable new 2086 test due to Windows weirdness
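A minimal sketch of installing the callback described above; the prototype
shown follows the CURLOPT_PREREQFUNCTION documentation but is reproduced
from memory, so treat it as an assumption.

    #include <curl/curl.h>

    static int prereq_cb(void *clientp, char *conn_primary_ip,
                         char *conn_local_ip, int conn_primary_port,
                         int conn_local_port)
    {
      (void)clientp; (void)conn_primary_ip; (void)conn_local_ip;
      (void)conn_primary_port; (void)conn_local_port;
      /* return CURL_PREREQFUNC_ABORT to stop the transfer here */
      return CURL_PREREQFUNC_OK;
    }

    static void install_prereq(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_PREREQFUNCTION, prereq_cb);
      curl_easy_setopt(curl, CURLOPT_PREREQDATA, NULL);
    }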
Closes#7477
warning C4189: 'netrc_user_changed': local variable is initialized but
not referenced
warning C4189: 'netrc_passwd_changed': local variable is initialized but
not referenced
Closes#7423
The libssh2 backend has SSH session associated with the connection but
the callback context is the easy handle, so when a connection gets
attached to a transfer, the protocol handler now allows for a custom
function to get used to set things up correctly.
Reported-by: Michael O'Farrell
Fixes#6898
Closes#7078
Also added 'CURL_SMALLSENDS' to make Curl_write() send short packets,
which helped verify this even more.
Add test 363 to verify.
Reported-by: ustcqidi on github
Fixes#6950
Closes#7024
Previously this logic would cap the send to CURL_MAX_WRITE_SIZE bytes,
but in situations where a larger upload buffer has been set, this
function can benefit from sending more bytes. With the default size
used, this does the same as before.
Also changed the storage of the size to an 'unsigned int' as it is not
allowed to be set larger than 2M.
Also added cautions to the man pages about changing buffer sizes at
run-time.
Closes#7022
- Disable auto credentials by default. This is a breaking change
for clients that are using it, wittingly or not.
- New libcurl ssl option value CURLSSLOPT_AUTO_CLIENT_CERT tells libcurl
to automatically locate and use a client certificate for
authentication, when requested by the server.
- New curl tool options --ssl-auto-client-cert and
--proxy-ssl-auto-client-cert map to CURLSSLOPT_AUTO_CLIENT_CERT.
This option is only supported for Schannel (the native Windows SSL
library). Prior to this change Schannel would, with no notification to
the client, attempt to locate a client certificate and send it to the
server, when requested by the server. Since the server can request any
certificate that supports client authentication in the OS certificate
store it could be a privacy violation and unexpected.
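A minimal sketch of an application opting back in to the old automatic
behavior (error handling omitted):

    #include <curl/curl.h>

    static void enable_auto_client_cert(CURL *curl)
    {
      /* Schannel only: let the OS pick a client certificate again */
      curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                       (long)CURLSSLOPT_AUTO_CLIENT_CERT);
    }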
Fixes https://github.com/curl/curl/issues/2262
Reported-by: Jeroen Ooms
Assisted-by: Wes Hinsley
Assisted-by: Rich FitzJohn
Ref: https://curl.se/mail/lib-2021-02/0066.html
Reported-by: Morten Minde Neergaard
Closes https://github.com/curl/curl/pull/6673
Both were used for the same purposes and there was no logical separation
between them. Combined, this also saves 16 bytes thanks to fewer holes in
my test build.
Closes#6798
The Curl_easy pointer struct entry in connectdata is now gone. Just
before commit 215db086e0 landed on January 8, 2021 there were 919
references to conn->data.
Closes#6608
- New libcurl options CURLOPT_DOH_SSL_VERIFYHOST,
CURLOPT_DOH_SSL_VERIFYPEER and CURLOPT_DOH_SSL_VERIFYSTATUS do the
same as their respective counterparts.
- New curl tool options --doh-insecure and --doh-cert-status do the same
as their respective counterparts.
Prior to this change DOH SSL certificate verification settings for
verifyhost and verifypeer were supposed to be inherited respectively
from CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER, but due to a bug
were not. As a result DOH verification remained at the default, i.e.
enabled, and it was not possible to disable it. This commit changes
behavior so that the DOH verification settings are independent and not
inherited.
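A minimal sketch of configuring the DoH connection explicitly now that the
settings are independent (the DoH URL is a placeholder; error handling
omitted):

    #include <curl/curl.h>

    static void configure_doh(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_DOH_URL,
                       "https://doh.example/dns-query");
      /* these no longer follow CURLOPT_SSL_VERIFYPEER/VERIFYHOST */
      curl_easy_setopt(curl, CURLOPT_DOH_SSL_VERIFYPEER, 1L);
      curl_easy_setopt(curl, CURLOPT_DOH_SSL_VERIFYHOST, 2L);
    }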
Ref: https://github.com/curl/curl/pull/4579#issuecomment-554723676
Fixes https://github.com/curl/curl/issues/4578
Closes https://github.com/curl/curl/pull/6597
HTTP auth "accidentally" worked before this cleanup since the code would
always overwrite the connection credentials with the credentials from
the most recent transfer and since HTTP auth is typically done first
thing, this has not been an issue. It was still wrong and subject to
possible race conditions or future breakage if the sequence of functions
would change.
The data.set.str[] strings MUST remain unmodified exactly as set by the
user, and the credentials to use internally are instead set/updated in
state.aptr.*
Added test 675 to verify different credentials used in two requests done
over a reused HTTP connection, which previously behaved wrongly.
Fixes#6542
Closes#6545
Rename it to 'httpwant' and make a cloned field in the state struct as
well for run-time updates.
Also: refuse non-supported HTTP versions. Verified with test 129.
Closes#6585
and rename it from 'ftp_list_only' since it is also used for SSH and
POP3. The state is updated internally for 'type=D' FTP URLs.
Added test case 1570 to verify.
Closes#6578
... and make sure the code never updates 'set.prefer_ascii' as it breaks
handle reuse which should use the setting as the user specified it.
Added test 1569 to verify: it first makes an FTP transfer with ';type=A'
and then another without type on the same handle and the second should
then use binary. Previously, curl failed this.
Closes#6578
Since the set value then risks getting used like that when the easy
handle is reused by the application.
Also: renamed the struct field from 'ftp_append' to 'remote_append'
since it is also used for SSH protocols.
Closes#6579
- Add support for services without region and service prefixes in
the URL endpoint (e.g. Min.IO, GCP, Yandex Cloud, Mail.Ru Cloud
Solutions, etc.) by providing region and service parameters via the
aws-sigv4 option.
- Add [:region[:service]] suffix to the aws-sigv4 option.
- Fix memory allocation errors.
- Refactor memory management.
- Use Curl_http_method() instead of STRING_CUSTOMREQUEST.
- Refactor canonical header generation.
- Remove repeated sha256_to_hex() usage.
- Add some docs fixes.
- Add some codestyle fixes.
- Add overloaded strndup() for debug - curl_dbg_strndup().
- Update tests.
Closes#6524
As the info is already stored in the transfer handle anyway, there's no
need to carry around a duplicate buffer for the life-time of the handle.
Closes#6534
... and use 'int' for ports. We don't use 'unsigned short' since -1 is
still often used internally to signify "unknown value" and 0 - 65535 are
all valid port numbers.
Closes#6534
- Reorder some internal struct members so that less padding is used.
This is an attempt at saving a bit of space by packing some structs
(using pahole to find the holes) where it might make sense to do
so without losing readability.
I.e., I tried to avoid separating fields that seem grouped
together (like the cwd... fields in struct ftp_conn for instance).
Also abstained from touching fields behind conditional macros as
that quickly can get complicated.
Closes https://github.com/curl/curl/pull/6483
... instead of having it static within the Curl_easy struct. This takes
away 1176 bytes (18%) from the Curl_easy struct that aren't used very
often and instead makes the code allocate it when needed.
Closes#6492
The SOCKS code now uses the generic download buffer for temporary
storage during the connection procedure, instead of having its own
private 600 byte buffer that adds to the connectdata struct size. This
works fine because at this point the buffer is allocated but not yet used
for download since the connection hasn't completed.
This reduces the connection struct size by 22% on a 64bit arch!
The SOCKS buffer needs to be at least 600 bytes, and the download buffer
is guaranteed to never be smaller than 1000 bytes.
Closes#6491
By making the `magic` identifier the same size and at the same place
within the structs (easy, multi, share), libcurl will be able to more
reliably detect and safely error out if an application passes in the
wrong handle to APIs. Easier to detect and less likely to cause crashes
if done.
Such mixups can't be detected at compile-time due to them being
typedefed void pointers - unless `CURL_STRICTER` is defined.
Closes#6484
... in most cases instead of 'struct connectdata *' but in some cases in
addition to.
- We mostly operate on transfers and not connections.
- We need the transfer handle to log, store data and more. Everything in
libcurl is driven by a transfer (the CURL * in the public API).
- This work clarifies and separates the transfers from the connections
better.
- We should avoid "conn->data". Since individual connections can be used
by many transfers when multiplexing, making sure that conn->data
points to the current and correct transfer at all times is difficult
and has been notoriously error-prone over the years. The goal is to
ultimately remove the conn->data pointer for this reason.
Closes#6425
It is a security process for HTTP.
It doesn't seem to be a standard, but it is used by some cloud providers.
Aws:
https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html
Outscale:
https://wiki.outscale.net/display/EN/Creating+a+Canonical+Request
GCP (I didn't test that this code works with GCP though):
https://cloud.google.com/storage/docs/access-control/signing-urls-manually
Most of the code is in lib/http_v4_signature.c
Information required by the algorithm:
- The URL
- Current time
- some prefixes that are appended to some of the signature parameters.
The data extracted from the URL are: the URI, the region,
the host and the API type
Example: in https://api.eu-west-2.outscale.com/api/latest/ReadNets,
'api' is the API type, 'eu-west-2' is the region and
'/api/latest/ReadNets' is the URI.
Small description of the algorithm:
- make canonical header using content type, the host, and the date
- hash the post data
- make canonical_request using custom request, the URI,
the get data, the canonical header, the signed header
and post data hash
- hash canonical_request
- make str_to_sign using one of the prefixes passed as parameter,
the date, the credential scope and the canonical_request hash
- compute hmac from date, using secret key as key.
- compute hmac from region, using above hmac as key
- compute hmac from api_type, using above hmac as key
- compute hmac from request_type, using above hmac as key
- compute hmac from str_to_sign using above hmac as key
- create the Authorization header using the above hmac, the prefix passed
as parameter, the date, and the above hash
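A sketch of the HMAC chain described above, using a hypothetical
hmac_sha256() helper (the real code in lib/http_v4_signature.c goes
through the crypto backends instead):

    #include <string.h>

    #define SHA256_LEN 32

    /* hypothetical helper assumed to write SHA256_LEN bytes into 'out' */
    extern void hmac_sha256(const void *key, size_t keylen,
                            const void *data, size_t datalen,
                            unsigned char out[SHA256_LEN]);

    static void derive_signature(const char *secret, const char *date,
                                 const char *region, const char *api_type,
                                 const char *request_type,
                                 const char *str_to_sign,
                                 unsigned char out[SHA256_LEN])
    {
      unsigned char k1[SHA256_LEN], k2[SHA256_LEN];
      hmac_sha256(secret, strlen(secret), date, strlen(date), k1);
      hmac_sha256(k1, sizeof(k1), region, strlen(region), k2);
      hmac_sha256(k2, sizeof(k2), api_type, strlen(api_type), k1);
      hmac_sha256(k1, sizeof(k1), request_type, strlen(request_type), k2);
      hmac_sha256(k2, sizeof(k2), str_to_sign, strlen(str_to_sign), out);
    }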
Signed-off-by: Matthias Gatto <matthias.gatto@outscale.com>
Closes#5703
When the initial request isn't possible to send in its entirety, the
remainder of the request would be delivered to the debug callback as data
and would wrongly be counted internally as body-bytes sent.
Extended test 1295 to verify.
Closes#6328
To reduce use of types that can't be checked at compile time. Also
removes several typecasts.
... and rename the struct field from 'os_specific' to 'tdata'.
Closes#6239
Reviewed-by: Jay Satiro
- enable in the build (configure)
- header parsing
- host name lookup
- unit tests for the above
- CI build
- CURL_VERSION_HSTS bit
- curl_version_info support
- curl -V output
- curl-config --features
- CURLOPT_HSTS_CTRL
- man page for CURLOPT_HSTS_CTRL
- curl --hsts (sets CURLOPT_HSTS_CTRL and works with --libcurl)
- man page for --hsts
- save cache to disk
- load cache from disk
- CURLOPT_HSTS
- man page for CURLOPT_HSTS
- added docs/HSTS.md
- fixed --version docs
- adjusted curl_easy_duphandle
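A minimal sketch of turning the feature on from an application with the
options listed above ("hsts.txt" is a placeholder path; error handling
omitted):

    #include <curl/curl.h>

    static void enable_hsts(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_HSTS_CTRL, (long)CURLHSTS_ENABLE);
      /* load the cache from this file and save it back on cleanup */
      curl_easy_setopt(curl, CURLOPT_HSTS, "hsts.txt");
    }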
Closes#5896
When using an HTTPS proxy, SSL is used but not in the view of the FTP
protocol handler itself, so separate the connection's use of SSL from the
FTP control connection's use.
Reported-by: Mingtao Yang
Fixes#5523
Closes#6006
Failures clearly returned from a (SOCKS) proxy now cause this return
code. Previously the situation was not very clear as to what would be
returned and when.
In addition: when this error code is returned, an application can use
CURLINFO_PROXY_ERROR to query libcurl for the detailed error, which then
returns a value from the new 'CURLproxycode' enum.
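A minimal sketch of how an application might use this; the return code in
question is CURLE_PROXY in current libcurl.

    #include <curl/curl.h>
    #include <stdio.h>

    static void report_proxy_error(CURL *curl, CURLcode result)
    {
      if(result == CURLE_PROXY) {
        long pxcode = 0;
        curl_easy_getinfo(curl, CURLINFO_PROXY_ERROR, &pxcode);
        fprintf(stderr, "proxy failure, CURLproxycode %ld\n", pxcode);
      }
    }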
Closes#5770
This flag was applied to the connection struct that is released on
retry. These changes move the retry counter into the Curl_easy struct,
which lives across retries and retains the new connection.
Reported-by: Cherish98 on github
Fixes#5794
Closes#5800
Provide the HTTP method that was used on the latest request, which might
be relevant for users when there was one or more redirects involved.
Closes#5511
Updated terminology in docs, comments and phrases to refer to C strings
as "null-terminated". Done to unify with how most other C oriented docs
refer to them and what users in general seem to prefer (based on a
single highly unscientific poll on twitter).
Reported-by: coinhubs on github
Fixes#5598
Closes#5608
For QUIC but also for regular TCP when the second family runs out of IPs
with a failure while the first family is still trying to connect.
Separated the timeout handling for IPv4 and IPv6 connections when they
both have a number of addresses to iterate over.
Since the connection can be used by many independent requests (using
HTTP/2 or HTTP/3), things like user-agent and other transfer-specific
data MUST NOT be kept connection oriented as it could lead to requests
getting the wrong string for their requests. This struct data was
lingering like this due to old HTTP/1 legacy thinking, where it didn't
matter.
Fixes#5566
Closes#5567
When the method is updated inside libcurl we must still not change the
method as set by the user as then repeated transfers with that same
handle might not execute the same operation anymore!
This fixes the libcurl part of #5462
Test 1633 added to verify.
Closes#5499
... and free it as soon as the transfer is done. It removes the extra
alloc when a new size is set with setopt() and reduces memory for unused
easy handles.
In addition: the closure_handle now doesn't use an allocated buffer at
all, but instead a stack-based one of the smallest supported size.
Closes#5472
When USE_RESOLVE_ON_IPS is set (defined on macOS), it means that
numerical IP addresses still need to get "resolved" - but not with DoH.
Reported-by: Viktor Szakats
Fixes#5454
Closes#5459
They're only limited to the maximum string input restrictions, not to
256 bytes.
Added test 1178 to verify
Reported-by: Will Roberts
Fixes#5448
Closes#5449
This change introduces a generic way to provide binary data in setopt
options, called BLOBs.
This change introduces these new setopts:
CURLOPT_ISSUERCERT_BLOB, CURLOPT_PROXY_SSLCERT_BLOB,
CURLOPT_PROXY_SSLKEY_BLOB, CURLOPT_SSLCERT_BLOB and CURLOPT_SSLKEY_BLOB.
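A minimal sketch of passing an in-memory certificate with one of the new
options (the PEM bytes are supplied by the caller; error handling
omitted):

    #include <curl/curl.h>

    static void set_cert_blob(CURL *curl, const char *pem, size_t pemlen)
    {
      struct curl_blob blob;
      blob.data = (void *)pem;
      blob.len = pemlen;
      blob.flags = CURL_BLOB_COPY; /* libcurl keeps its own copy */
      curl_easy_setopt(curl, CURLOPT_SSLCERT_BLOB, &blob);
    }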
Reviewed-by: Daniel Stenberg
Closes#5357
- Stick to a single unified way to use structs
- Make checksrc complain on 'typedef struct {'
- Allow them in tests, public headers and examples
- Let MD4_CTX, MD5_CTX, and SHA256_CTX typedefs remain as they actually
typedef different types/structs depending on build conditions.
Closes#5338
A common set of functions instead of many separate implementations for
creating buffers that can grow when appending data to them. Existing
functionality has been ported over.
In my early basic testing, the total number of allocations seems to be at
roughly the same amount as before, possibly a few fewer.
See docs/DYNBUF.md for a description of the API.
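A small sketch of the intended usage pattern inside lib/; the prototypes
are paraphrased from docs/DYNBUF.md and dynbuf.h, so treat them as
assumptions.

    #include "dynbuf.h"   /* internal, lib/ only */

    static CURLcode build_header(struct dynbuf *buf)
    {
      Curl_dyn_init(buf, 2048);          /* 2048 = maximum allowed size */
      if(Curl_dyn_add(buf, "Header: ") ||
         Curl_dyn_add(buf, "value\r\n")) {
        Curl_dyn_free(buf);              /* free and reset on failure */
        return CURLE_OUT_OF_MEMORY;
      }
      /* Curl_dyn_ptr(buf)/Curl_dyn_len(buf) expose the result */
      return CURLE_OK;
    }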
Closes#5300
- Implement new option CURLSSLOPT_REVOKE_BEST_EFFORT and
--ssl-revoke-best-effort to allow a "best effort" revocation check.
A best effort revocation check ignores errors indicating that the
revocation check was unable to take place. The reasoning is described in
detail below and
discussed further in the PR.
---
When running e.g. with Fiddler, the schannel backend fails with an
unhelpful error message:
Unknown error (0x80092012) - The revocation function was unable
to check revocation for the certificate.
Sadly, many enterprise users who are stuck behind MITM proxies suffer
the very same problem.
This has been discussed in plenty of issues:
https://github.com/curl/curl/issues/3727,
https://github.com/curl/curl/issues/264, for example.
In the latter, a Microsoft Edge developer even made the case that the
common behavior is to ignore issues when a certificate has no recorded
distribution point for revocation lists, or when the server is offline.
This is also known as "best effort" strategy and addresses the Fiddler
issue.
Unfortunately, this strategy was not chosen as the default for schannel
(and is therefore a backend-specific behavior: OpenSSL seems to happily
ignore the offline servers and missing distribution points).
To maintain backward-compatibility, we therefore add a new flag
(`CURLSSLOPT_REVOKE_BEST_EFFORT`) and a new option
(`--ssl-revoke-best-effort`) to select the new behavior.
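A minimal sketch of selecting the new behavior from an application (error
handling omitted):

    #include <curl/curl.h>

    static void relax_revocation_checks(CURL *curl)
    {
      /* only affects backends with strict revocation checks (schannel) */
      curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS,
                       (long)CURLSSLOPT_REVOKE_BEST_EFFORT);
    }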
Due to the many related issues in Git for Windows and GitHub Desktop, the
plan is to make this behavior the default in these software packages.
The test 2070 was added to verify this behavior, adapted from 310.
Based-on-work-by: georgeok <giorgos.n.oikonomou@gmail.com>
Co-authored-by: Markus Olsson <j.markus.olsson@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes https://github.com/curl/curl/pull/4981
When libcurl retries a connection due to it being "seemingly dead" or
due to REFUSED_STREAM, it will now only do so up to five times before
giving up, to avoid never-ending loops.
Reported-by: Dima Tisnek
Bug: https://curl.haxx.se/mail/lib-2020-03/0044.html
Closes#5074
Make sure each separate index in conn->tempaddr[] is used for a fixed
family (and only that family) during the connection process.
If family one takes a long time and family two fails immediately, the
previous logic could misbehave and retry the same family two address
repeatedly.
Reported-by: Paul Vixie
Reported-by: Jay Satiro
Fixes#5083
Fixes#4954
Closes#5089
With c-ares the DNS parameters live in ares_channel. Store them in the
curl handle and set them again in easy_duphandle.
Regression introduced in #3228 (6765e6d), shipped in curl 7.63.0.
Fixes#4893
Closes#5020
Signed-off-by: Ernst Sjöstrand <ernst.sjostrand@verisure.com>
When doing a request with a body + Expect: 100-continue and the server
responds with a 417, the same request will be retried immediately
without the Expect: header.
Added test 357 to verify.
Also added a control instruction to tell the sws test server to not read
the request body if Expect: is present, which the new test 357 uses.
Reported-by: bramus on github
Fixes#4949
Closes#4964
The 'share object' only sets the storage area for cookies. The "cookie
engine" still needs to be enabled or activated using the normal cookie
options.
This caused the curl command line tool to accidentally use cookies
without having been told to, since curl switched to using shared cookies
in 7.66.0.
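A minimal sketch of the distinction: the share provides storage, while the
cookie engine still has to be switched on per handle (error handling
omitted).

    #include <curl/curl.h>

    static void setup_shared_cookies(CURL *curl, CURLSH *share)
    {
      curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
      curl_easy_setopt(curl, CURLOPT_SHARE, share);
      /* the share alone is only storage; this enables the engine */
      curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");
    }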
Test 1166 verifies
Updated test 506
Fixes#4429
Closes#4434
Prior to this change non-ssl/non-ssh connections that were reused set
TIMER_APPCONNECT [1]. Arguably that was incorrect since no SSL/SSH
handshake took place.
[1]: TIMER_APPCONNECT is publicly known as CURLINFO_APPCONNECT_TIME in
libcurl and %{time_appconnect} in the curl tool. It is documented as
"the time until the SSL/SSH handshake is completed".
Reported-by: Marcel Hernandez
Ref: https://github.com/curl/curl/issues/3760
Closes https://github.com/curl/curl/pull/3773
When a username and password are provided in the URL, they were wrongly
removed from the stored URL so that subsequent uses of the same URL
wouldn't find the credentials. This made doing HTTP auth with multiple
connections (like Digest) misbehave.
Regression from 46e164069d (7.62.0)
Test case 335 added to verify.
Reported-by: Mike Crowe
Fixes#4228
Closes#4229
RFC 7838 section 5:
When using an alternative service, clients SHOULD include an Alt-Used
header field in all requests.
Removed CURLALTSVC_ALTUSED again (feature is still EXPERIMENTAL thus
this is deemed ok).
You can disable sending this header just like you disable any other HTTP
header in libcurl.
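A minimal sketch of suppressing the header the usual way, with a custom
header that has nothing after the colon (the caller frees the list after
the transfer):

    #include <curl/curl.h>

    static struct curl_slist *disable_alt_used(CURL *curl)
    {
      struct curl_slist *hdrs = curl_slist_append(NULL, "Alt-Used:");
      curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
      return hdrs; /* free with curl_slist_free_all() when done */
    }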
Closes#4199
Even though it cannot fall back to a lower HTTP version automatically, the
safer way to upgrade remains via CURLOPT_ALTSVC.
CURLOPT_H3 no longer has any bits that do anything and might be removed
before we remove the experimental label.
Updated the curl tool accordingly to use "--http3".
Closes#4197
This is only the libcurl part that provides the information. There's no
user of the parsed value. This change includes three new tests for the
parser.
Ref: #3794
Added the ability for the calling program to specify the authorisation
identity (authzid), the identity to act as, in addition to the
authentication identity (authcid) and password when using SASL PLAIN
authentication.
Fixes#3653
Closes#3790
NOTE: This commit was cherry-picked and is part of a series of commits
that added the authzid feature for upcoming 7.66.0. The series was
temporarily reverted in db8ec1f so that it would not ship in a 7.65.x
patch release.
Closes https://github.com/curl/curl/pull/4186
It was used (intended) to pass in the size of the 'socks' array that is
also passed to these functions, but was rarely actually checked/used and
the array is defined to a fixed size of MAX_SOCKSPEREASYHANDLE entries
that should be used instead.
Closes#4169
Use configure --with-ngtcp2 or --with-quiche.
Using either option will enable an HTTP/3 build.
Co-authored-by: Alessandro Ghedini <alessandro@ghedini.me>
Closes#3500
Since more than one socket can be used by each transfer at a given time,
each sockhash entry now has its own hash table with transfers using that
socket.
In addition, the sockhash entry can now be marked 'blocked = TRUE',
which then makes the delete function just set 'removed = TRUE' instead
of removing it "for real", as a way to not rip out the carpet under the
feet of a parent function that iterates over the transfers of that same
sockhash entry.
Reported-by: Tom van der Woerdt
Fixes#3961
Fixes#3986
Fixes#3995
Fixes#4004
Closes#3997
Responses with status codes 1xx, 204 or 304 don't have a response body. For
these, don't parse these headers:
- Content-Encoding
- Content-Length
- Content-Range
- Last-Modified
- Transfer-Encoding
This change ensures that HTTP/2 upgrades work even if a
"Content-Length: 0" or a "Transfer-Encoding: chunked" header is present.
Co-authored-by: Daniel Stenberg
Closes#3702
Fixes#3968
Closes#3977
They need to be removed from the socket hash linked list with more care.
When sh_delentry() is called to remove a sockethash entry, remove all
individual transfers from the list first. To enable this, each Curl_easy struct
now stores a pointer to the sockethash entry to know how to remove itself.
Reported-by: Tom van der Woerdt and Kunal Ekawde
Fixes#3952
Fixes#3904
Closes#3953
- Revert all commits related to the SASL authzid feature since the next
release will be a patch release, 7.65.1.
Prior to this change CURLOPT_SASL_AUTHZID / --sasl-authzid was destined
for the next release, assuming it would be a feature release 7.66.0.
However instead the next release will be a patch release, 7.65.1 and
will not contain any new features.
After the patch release, the reverted commits can be restored by
using cherry-pick:
git cherry-pick a14d72ca9499ff8c1cc36c2a8d520edf690
Details for all reverted commits:
Revert "os400: take care of CURLOPT_SASL_AUTHZID in curl_easy_setopt_ccsid()."
This reverts commit 0edf6907ae.
Revert "tests: Fix the line endings for the SASL alt-auth tests"
This reverts commit c2a8d52a13.
Revert "examples: Added SASL PLAIN authorisation identity (authzid) examples"
This reverts commit 8c1cc369d0.
Revert "curl: --sasl-authzid added to support CURLOPT_SASL_AUTHZID from the tool"
This reverts commit a9499ff136.
Revert "sasl: Implement SASL authorisation identity via CURLOPT_SASL_AUTHZID"
This reverts commit a14d72ca2f.
Added the ability for the calling program to specify the authorisation
identity (authzid), the identity to act as, in addition to the
authentication identity (authcid) and password when using SASL PLAIN
authentication.
Fixes#3653
Closes#3790