Out of 415 labels throughout the code base, 86 were not at the start of
the line, which means that placing labels at the start of the line is
the favoured style overall, with 329 instances.
Out of the 86 labels not at the start of the line:
* 75 were indented with the same indentation level as the following line
* 8 were indented with exactly one space
* 2 were indented with one indentation level less than the following
line
* 1 was indented with the indentation level of the following line minus
three spaces (probably unintentional)
Co-Authored-By: Viktor Szakats
Closes#11134
- expression 'hostptr' is always true
- a part of conditional expression is always true: proxypasswd
- expression 'proxyuser' is always true
- avoid multiple Curl_now() calls in allocate_conn
Ref: #10929
Closes#10959
- move host checks together
- simplify the scheme parser loop and the end of host name parser
- avoid intermediate buffer storing in multiple places
- reduce scope for several variables
- skip the Curl_dyn_tail() call for speed
- detect IPv6 earlier and skip extra checks for such hosts
- normalize directly in dynbuf instead of an intermediate buffer
- split out the IPv6 parser into its own function
- call the IPv6 parser directly for IPv6 addresses
- remove (unused) special treatment of % in host names
- junkscan() once in the beginning instead of scattered
- make junkscan return error code
- remove unused query management from dedotdotify()
- make Curl_parse_login_details use memchr
- more use of memchr() instead of strchr() and fewer strlen() calls (see
the sketch after this list)
- make junkscan check and return the URL length
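As an aside on the memchr() items above, here is a minimal, generic
illustration (not the actual curl code) of why a bounded memchr() scan
can replace strchr() plus an extra strlen() pass when the caller already
tracks the length:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
  /* the caller already knows the length, e.g. from a dynbuf */
  const char login[] = "user:secret;options";
  size_t len = sizeof(login) - 1;

  /* bounded scan: no strlen() pass, no reliance on NUL termination */
  const char *colon = memchr(login, ':', len);
  if(colon)
    printf("user part is %d bytes long\n", (int)(colon - login));
  return 0;
}
```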
An optimized build runs one of my benchmark URL parsing programs ~41%
faster using this branch (compared against the shipped 7.88.1 library
in Debian).
Closes#10935
As they are not driving transfers or any socket activity, the main loop
does not need to iterate over these handles. A performance improvement.
They are instead only held in their own separate lists.
'data->multi' is kept as a pointer to the multi handle for as long as
the easy handle is actually part of it, even when the handle is moved to
the pending/msgsent lists. The handle needs to know which multi handle
it belongs to, if for example curl_easy_cleanup() is called before the
handle is removed from the multi handle.
All 'data->multi' pointers of handles still part of the multi handle get
cleared by curl_multi_cleanup(), which "orphans" all previously
attached easy handles.
This is take 2. The first version was reverted for the 8.0.1 release.
Assisted-by: Stefan Eissing
Closes#10801
Linked lists themselves do not carry any allocations, so for the lists
that do not have a set destructor we can just skip the
Curl_llist_destroy() call and save CPU time.
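A minimal, generic sketch (illustrative only, not the libcurl
definitions) of why destroying such a list frees nothing: the list nodes
live inside the structs they link, so per-element work is only needed
when a destructor is set.

```c
#include <stddef.h>

struct node {
  struct node *prev;
  struct node *next;
};

struct element {
  struct node link;   /* embedded list node: no separate allocation */
  int value;          /* nothing here that needs freeing either */
};

/* unlinking touches pointers only; with no destructor set, tearing
   down the whole list is just forgetting the head pointer */
static void unlink_node(struct node *n)
{
  if(n->prev)
    n->prev->next = n->next;
  if(n->next)
    n->next->prev = n->prev;
  n->prev = n->next = NULL;
}
```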
Closes#10764
- add parameter to `conn_is_alive()` cfilter method that returns
if there is input data waiting on the connection
- refrain from re-using connections from the cache that have input
pending (a socket-level sketch of this check follows this list)
- adapt http/2 and http/3 alive checks to digest pending input
to check the connection state
- remove check_cxn method from openssl as that was just doing
what the socket filter now does.
- add tests for connection reuse with special server configs
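A generic, socket-level illustration of such a check (approach and names
are illustrative, not the cfilter internals): report liveness without
blocking and also report whether unread input is already waiting, so the
caller can skip reusing that connection.

```c
#include <poll.h>
#include <stdbool.h>

static bool socket_is_alive(int fd, bool *input_pending)
{
  struct pollfd pfd = { .fd = fd, .events = POLLIN };
  int rc = poll(&pfd, 1, 0);            /* do not block */

  *input_pending = (rc > 0) && (pfd.revents & POLLIN);
  if(rc > 0 && (pfd.revents & (POLLERR | POLLHUP)))
    return false;                       /* peer closed or socket error */
  return true;                          /* nothing wrong detected */
}
```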
Closes#10690
As tested in test_02_07, when firing off 200 URLs with --parallel, 199
wait for the first connection to be established. If that is multiuse,
URLs are added up to its capacity.
The first URL over capacity opens another connection. But subsequent
URLs find the same situation and open connections too. They should
have waited for the second connection to actually connect and make its
capacity known.
This change fixes that by
- setting `connkeep()` early in the HTTP setup handler, as otherwise
a new connection is marked as closeit by default and not considered
for multiuse at all
- checking the "connected" status for a candidate always and continuing
to PIPEWAIT if no alternative is found.
pytest:
- removed "skip" from test_02_07
- added test_02_07b to check that http/1.1 continues to work as before
Closes#10456
- Curl_write_plain/Curl_read_plain have been eliminated. The last code
use now uses Curl_conn_send/recv so that requests use the conn->send/recv
callbacks, which default to using cfilters.
- Curl_recv_plain/Curl_send_plain have been internalized in cf-socket.c.
- USE_RECV_BEFORE_SEND_WORKAROUND (active on Windows) has been moved
into cf-socket.c. The pre_recv buffer is held at the socket filter
context. `postponed_data` structures have been removed from
`connectdata`.
- the hanger in HTTP/2 request handling was a result of read buffering
on all sends, while the multi handling is not prepared for this. The
following happens:
- multi performs on an HTTP/2 easy handle
- h2 reads and processes data
- this leads to a send of h2 data
- which receives and buffers before the send
- h2 returns
- multi selects on the socket, but no data arrives (it's in the buffer already)
The workaround now receives data in a loop as long as there is something in
the buffer. The real fix would be for multi to change, so that `data_pending`
is evaluated before deciding to wait on the socket.
An optional io_buffer in cf-socket.c; HTTP/2 sets state.drain if lower
filters have pending data.
This io_buffer is only available/used when the
-DUSE_RECV_BEFORE_SEND_WORKAROUND is active, e.g. on Windows
configurations. It also maintains the original checks on protocol
handler being HTTP and conn->send/recv not being replaced.
The HTTP/2 (nghttp2) cfilter now sets data->state.drain when it finds
out that the "lower" filter chain still has pending data at the end of
its IO operation. This prevents the processing from becoming stalled.
Closes#10280
- copy `struct Curl_addrinfo` on filter setup into context
- replace `struct Curl_addrinfo *` with `struct Curl_sockaddr_ex *` in
connectdata that is set and NULLed by the socket filter
- this means we have no reference to the resolver info in connectdata or
its filters
- trigger the CF_CTRL_CONN_INFO_UPDATE event when the complete filter
chain reaches connected status
- update easy handle connection information on CF_CTRL_DATA_SETUP event.
Closes#10213
- they are mostly pointless in all major jurisdictions
- many big corporations and projects already don't use them
- saves us from pointless churn
- git keeps history for us
- the year range is kept in COPYING
checksrc is updated to allow copyright statements without years
Closes#10205
The only TLS auth type libcurl ever supported is SRP and that is the
default type. Since nobody ever sets any other type, there is no point
in wasting space to store the set type and code to check the type.
If TLS auth is used, SRP is now implied.
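Usage illustration with the public TLS-SRP options (the host name and
credentials are placeholders): setting the credentials is enough, and
CURLOPT_TLSAUTH_TYPE no longer needs to be set since SRP is implied.

```c
#include <curl/curl.h>

int main(void)
{
  CURL *h = curl_easy_init();
  if(!h)
    return 1;
  curl_easy_setopt(h, CURLOPT_URL, "https://srp.example/");
  /* SRP is implied; no CURLOPT_TLSAUTH_TYPE needed */
  curl_easy_setopt(h, CURLOPT_TLSAUTH_USERNAME, "alice");
  curl_easy_setopt(h, CURLOPT_TLSAUTH_PASSWORD, "secret");
  curl_easy_perform(h);
  curl_easy_cleanup(h);
  return 0;
}
```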
Closes#10181
Refactoring of connection setup and happy eyeballing. Move
nghttp2, ngtcp2, quiche and msh3 into connection filters.
- eyeballing cfilter that uses sub-filters for performing parallel connects
- socket cfilter for all transport types, including QUIC
- QUIC implementations in cfilter, can now participate in eyeballing
- connection setup is more dynamic in order to adapt to which filter
actually connected. Relevant to see whether an SSL filter needs to be
added or whether SSL has already been provided
- HTTP/3 test cases similar to HTTP/2
- multiuse of parallel transfers for HTTP/3, tested for ngtcp2 and quiche
- Fix for data attach/detach in VTLS filters that could lead to crashes
during parallel transfers.
- Eliminating setup() methods in cfilters, no longer needed.
- Improving Curl_conn_is_alive() to replace Curl_connalive() and
integrating the SSL alive checks into cfilters.
- Adding CF_CNTRL_CONN_INFO_UPDATE to tell filters to update
connection info and persist it at the easy handle.
- Several more cfilter related cleanups and moves:
- stream_weight and dependency info is now wrapped in struct
Curl_data_priority
- Curl_data_priority is available in HTTP2|HTTP3
- Curl_data_priority members depend on NGHTTP2 support
- handling init/reset/cleanup of priority part of url.c
- data->state.priority same struct, but shallow copy for compares only
- PROTOPT_STREAM has been removed
- Curl_conn_is_multiplex() now available to check on capability
- Adding query method to connection filters.
- ngtcp2+quiche: implementing query for max concurrent transfers.
- Adding is_alive and keep_alive cfilter methods. Adding DATA_SETUP event.
- setting keepalive timestamp on connect
- DATA_SETUP is called after the connection has been completely
set up (but may not be connected yet) to allow filters to initialize
data members they use.
- there is no socket to be had with msh3, it is unclear how select
shall work
- a manual test via "curl --http3 https://curl.se" fails with "empty
reply from server".
- Various socket/conn related cleanups:
- Curl_socket is now Curl_socket_open and in cf-socket.c
- Curl_closesocket is now Curl_socket_close and in cf-socket.c
- Curl_ssl_use has been replaced with Curl_conn_is_ssl
- Curl_conn_tcp_accepted_set has been split into
Curl_conn_tcp_listen_set and Curl_conn_tcp_accepted_set
with a clearer purpose
Closes#10141
- source_quote, source_prequote and source_postquote have not been used since
5e0d9aea3; September 2006
- make several fields conditional on proxy support
- make three quote struct fields conditional on FTP || SSH
- make 'mime_options' depend on MIME
- make trailer_* fields depend on HTTP
- change 'gssapi_delegation' from long to unsigned char
- make 'localportrange' unsigned short instead of int
- conn->trailer now depends on HTTP
Closes#10147
The cookiefile entries are set into the handle and should remain set for
the lifetime of the handle so that duplicating it also duplicates the
list. Therefore, the struct field is moved from 'state' to 'set'.
Fixes#10133
Closes#10134
Deprecation and removal of codeset conversion support from the library
have lifted the strict need for an early binding of mime structures to
an easy handle (https://github.com/curl/curl/commit/2610142).
This constraint currently forces the handle to be created before the mime
structure, and the latter cannot be attached to another handle once
created (see https://curl.se/mail/lib-2022-08/0027.html).
This commit removes the handle pointers from the mime structures
allowing more flexibility on their use.
When an easy handle is duplicated, bound mime structures must however
still be duplicated too as their components hold send-time dynamic
information.
Closes#9927
- `Curl_ssl_get_config()` now returns the first config if no SSL proxy
filter is active
- socket filter starts connection only on first invocation of its
connect method
Fixes#9982
Closes#9983
- almost all backend calls pass the Curl_cfilter instance instead of
connectdata+sockindex
- ssl_connect_data is removed from struct connectdata and made internal
to vtls
- ssl_connect_data is allocated in the added filter, kept at cf->ctx
- added a function to let an SSL filter access its ssl_primary_config and
ssl_config_data; this selects the proper subfields in conn and data,
for filters added as plain or proxy
- adjusted all backends to use the changed api
- adjusted all backends to access config data via the exposed
functions, no longer using conn or data directly
cfilter renames for clear purpose:
- methods `Curl_conn_*(data, conn, sockindex)` work on the complete
filter chain at `sockindex` and connection `conn`.
- methods `Curl_cf_*(cf, ...)` work on a specific Curl_cfilter
instance.
- methods `Curl_conn_cf()` work on/with filter instances at a
connection.
- rebased and resolved some naming conflicts
- hostname validation (and session lookup) on SECONDARY uses the same
name as on FIRST (again).
New debug macros, and removal of connectdata from function signatures
where it is not needed.
Adapted schannel for the new Curl_read_plain parameter.
Closes#9919
Regression: in commit 53bcf55 we moved the IDN conversion calls to
happen before the HSTS checks. But the HSTS checks are only done on the
server host name, not the proxy names. By moving the proxy name IDN
conversions, we accidentally broke the verbose output showing the proxy
name.
This change moves the IDN conversions for the proxy names back to the
place in the code path where they were before 53bcf55.
Reported-by: Andy Stamp
Fixes#9937
Closes#9939
This struct field MUST remain what the application set it to, so that
handle reuse and handle duplication work.
Instead, the request state bit 'no_body' is introduced for code flows
that need to change this at run-time.
Closes#9888
- general construct/destroy in connectdata
- default implementations of callback functions
- connect: cfilters for connect and accept
- socks: cfilter for socks proxying
- http_proxy: cfilter for http proxy tunneling
- vtls: cfilters for primary and proxy ssl
- change in general handling of data/conn
- Curl_cfilter_setup() sets up filter chain based on data settings,
if none are installed by the protocol handler setup
- Curl_cfilter_connect() bootstraps filters into `connected` status,
used by handlers and multi to reach further stages
- Curl_cfilter_is_connected() to check if a conn is connected,
e.g. all filters have done their work
- Curl_cfilter_get_select_socks() gets the sockets and READ/WRITE
indicators for multi select to work
- Curl_cfilter_data_pending() asks filters if they have incoming
data pending for recv
- Curl_cfilter_recv()/Curl_cfilter_send() are the general callbacks
installed in conn->recv/conn->send for IO handling (see the sketch
after this list)
- Curl_cfilter_attach_data()/Curl_cfilter_detach_data() inform filters
of the addition/removal of a `data` from their connection
- adding vtls functions to prevent use of Curl_ssl globals directly
in other parts of the code.
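A minimal, generic sketch of the filter-chain idea described in the list
above (the names and signatures are illustrative, not the libcurl
internals): each filter wraps the one below it and forwards send/recv
calls down the chain.

```c
#include <stddef.h>
#include <sys/types.h>

struct cfilter;

struct cfilter_ops {
  ssize_t (*send)(struct cfilter *cf, const void *buf, size_t len);
  ssize_t (*recv)(struct cfilter *cf, void *buf, size_t len);
};

struct cfilter {
  const struct cfilter_ops *ops;
  struct cfilter *next;   /* the filter "below", e.g. TLS or the socket */
  void *ctx;              /* filter-private state */
};

/* a transparent filter simply passes the call down the chain */
static ssize_t passthrough_send(struct cfilter *cf, const void *buf,
                                size_t len)
{
  return cf->next->ops->send(cf->next, buf, len);
}
```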
Reviewed-by: Daniel Stenberg
Closes#9855
Adds a new option to control the maximum time that a cached
certificate store may be retained for.
Currently only the OpenSSL backend implements support for
caching certificate stores.
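Usage illustration with the public API; the option name
CURLOPT_CA_CACHE_TIMEOUT is my assumption of which option this refers
to, and the URL is a placeholder.

```c
#include <curl/curl.h>

int main(void)
{
  CURL *h = curl_easy_init();
  if(!h)
    return 1;
  curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
  /* keep a cached certificate store for at most 300 seconds
     (assumed option name, takes a long) */
  curl_easy_setopt(h, CURLOPT_CA_CACHE_TIMEOUT, 300L);
  curl_easy_perform(h);
  curl_easy_cleanup(h);
  return 0;
}
```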
Closes#9620
No point in having two entry points for the same functions.
Also merged the *safe* function treatment into these so that they can
also be used when one or both pointers are NULL.
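A minimal sketch of just the NULL handling (illustrative, not the merged
libcurl code; the actual functions compare case-insensitively): two NULL
pointers compare equal, NULL versus non-NULL compare different, and
otherwise the normal comparison applies.

```c
#include <stdbool.h>
#include <string.h>

static bool safe_strequal(const char *a, const char *b)
{
  if(a && b)
    return strcmp(a, b) == 0;   /* both non-NULL: normal comparison */
  return !a && !b;              /* equal only if both are NULL */
}
```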
Closes#9837
- Replace `Github` with `GitHub`.
- Replace `windows` with `Windows`
- Replace `advice` with `advise` where a verb is used.
- A few fixes on removing repeated words.
- Replace `a HTTP` with `an HTTP`
Closes#9802
Even though `PICKY_COMPILER=ON` is the default, warnings were not
enabled when using llvm/clang, because `CMAKE_COMPILER_IS_CLANG` was
always false (in my tests at least).
This is the single use of this variable in curl, and in a different
place we already use `CMAKE_C_COMPILER_ID MATCHES "Clang"`, which works
as expected, so change the condition to use that instead.
Also fix the warnings uncovered by the above:
- lib: add casts to silence clang warnings
- schannel: add casts to silence clang warnings in ALPN code
Assuming the code is correct, solve the warnings with a cast.
This particular build case isn't CI tested.
There is a chance the warning is relevant for some platforms, perhaps
Windows 32-bit ARM7.
Closes#9783
For both IPv4 and IPv6 addresses. Now also checks IPv6 addresses "correctly"
and not with string comparisons.
Split out the noproxy checks and functionality into noproxy.c
Added unit test 1614 to verify checking functions.
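A generic illustration of address-based (rather than string-based)
noproxy matching, not the actual curl code: parse both the host and the
pattern as binary addresses and compare the masked network bits. IPv4 is
shown for brevity; the same idea applies to IPv6 with a 128-bit mask.

```c
#include <arpa/inet.h>
#include <stdbool.h>
#include <stdint.h>

/* returns true if 'host' falls within 'net'/'bits', e.g.
   ipv4_in_cidr("192.168.7.1", "192.168.0.0", 16) */
static bool ipv4_in_cidr(const char *host, const char *net, int bits)
{
  struct in_addr h, n;
  uint32_t mask;

  if(bits < 0 || bits > 32)
    return false;
  if(inet_pton(AF_INET, host, &h) != 1 || inet_pton(AF_INET, net, &n) != 1)
    return false;
  mask = bits ? htonl(0xffffffffu << (32 - bits)) : 0;
  return (h.s_addr & mask) == (n.s_addr & mask);
}
```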
Reported-by: Mathieu Carbonneaux
Fixes#9773
Fixes#5745
Closes#9775
When the user name is provided in the URL it is URL encoded there, but
when used for authentication the encoded version should be used.
Regression introduced after 7.83.0
Reported-by: Jonas Haag
Fixes#9709
Closes#9715
This is a strcmp() alternative function for comparing "secrets",
designed to take the same time no matter the content to not leak
match/non-match info to observers based on how fast it is.
The time this function takes is only a function of the shortest input
string.
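A minimal sketch of the idea (illustrative, not the actual function):
accumulate the differences instead of returning at the first mismatch,
so the loop length depends only on the shorter input.

```c
#include <stdbool.h>

static bool secret_equal(const char *a, const char *b)
{
  unsigned char diff = 0;

  /* walk both strings until the shorter one ends; never exit early */
  while(*a && *b) {
    diff |= (unsigned char)(*a ^ *b);
    a++;
    b++;
  }
  /* also require that both strings ended here (equal lengths) */
  return !diff && (*a == *b);
}
```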
Reported-by: Trail of Bits
Closes#9658
According to `docs/INTERNALS.md`, internal function names spanning source
files start with uppercase `Curl_`. Bring these two functions in
alignment with this.
This also stops exporting them from `libcurl.dll` in autotools builds.
Reviewed-by: Daniel Stenberg
Closes#9598
Move the curl_prot_t to its own conditional block. Introduce symbol
PROTO_TYPE_SMALL to control it.
Fix a cast in a curl_prot_t assignment.
Remove an outdated comment.
Follow-up to cd5ca80.
Closes#9534
This also returns error CURLE_UNSUPPORTED_PROTOCOL rather than
CURLE_BAD_FUNCTION_ARGUMENT when a listed protocol name is not found.
A new schemelen parameter is added to Curl_builtin_scheme() to support
this extended use.
Note that disabled protocols are not recognized anymore.
Tests adapted accordingly.
Closes#9472
When the parser is not allowed to guess scheme, it should consider the
word ending at the first colon to be the scheme, independently of the
number of slashes.
The parser now checks that the scheme is known before it counts slashes,
to improve the error message for URLs with unknown schemes and maybe no
slashes.
When following redirects, no scheme guessing is allowed and therefore
this change effectively prevents redirects to unknown schemes such as
"data".
Fixes#9503
This internal-use-only storage type can be bumped to a curl_off_t once
we need to use bit 32, as the previous 'unsigned int' can then no longer
hold them all.
The websocket protocols take bits 30 and 31 so they are the last ones
that fit within 32 bits - but cannot properly be exported through APIs
since those use *signed* 32 bit types (long) in places.
Closes#9481
Slightly faster with more robust code. Uses fewer and smaller mallocs.
- remove two fields from the URL handle struct
- reduce copies and allocs
- use dynbuf buffers more instead of custom malloc + copies
- uses dynbuf to build the host name, which reduces serial alloc+free
within the same function.
- move dedotdotify into urlapi.c and make it static, no longer strdup
the input, and optimize it by checking for . and / before using strncmp
- remove a few strlen() calls
- add Curl_dyn_setlen() that can "trim" an existing dynbuf
Closes#9408
Next Protocol Negotiation is a TLS extension that was created and used
for agreeing to use the SPDY protocol (the precursor to HTTP/2) for
HTTPS. In the early days of HTTP/2, before the spec was finalized and
shipped, the protocol could be enabled using this extension with some
servers.
curl has supported the NPN extension with some TLS backends since then, with
a command line option `--npn` and in libcurl with
`CURLOPT_SSL_ENABLE_NPN`.
HTTP/2 proper is made to use the ALPN (Application-Layer Protocol
Negotiation) extension and the NPN extension has no purpose
anymore. The HTTP/2 spec was published in May 2015.
Today, use of NPN in the wild should be extremely rare and most likely
totally extinct. Chrome removed NPN support in Chrome 51, shipped in
June 2016. It was removed in Firefox 53, April 2017.
Closes#9307
If the user is specified as part of the URL, and the same user exists
in .netrc, the Authorization header was not sent at all.
The user and password fields were assigned in conn->user and password
but the user was not assigned to data->state.aptr, which is the field
that is used in output_auth_headers and friends.
Fix by assigning the user also to aptr.
Amends commit d1237ac906.
Fixes#9243
- If, after parsing netrc, there is a password with no username then
set a blank username.
This used to be the case prior to 7d600ad (precedes 7.82). Note
parseurlandfillconn already does the same thing for URLs.
Reported-by: Raivis <standsed@users.noreply.github.com>
Testing-by: Domen Kožar
Fixes https://github.com/curl/curl/issues/8653
Closes#9334
Closes#9066
This commit splits the branch-heavy resolve_server() function into
various sub-functions, in order to reduce the amount of nested
if/else-statements.
Besides this, it also removes many else-sequences by returning in the
previous if-statement.
Closes#9283
So that an address used from the DNS cache that was previously used for
QUIC can be reused for TCP and vice versa.
To make this possible, set conn->transport to "unix" for unix domain
connections ... and store the transport struct field in an unsigned char
to use less space.
Reported-by: ウさん
Fixes#9274
Closes#9276
and make 'dnstype' in 'struct dnsprobe' use the DNStype to fix the icc compiler warning:
doh.c(924): error #188: enumerated type mixed with another type
Reported-by: Matthew Thompson
Ref #9156
Closes#9174
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
Closes#8869
The .netrc parser now accepts strings within double-quotes in order to
deal with, for example, passwords containing white space - which
previously was not possible.
A password that starts with a double-quote also ends with one, and
double-quotes themselves are escaped with backslashes, like \". It also
supports \n, \r and \t for newline, carriage return and tabs
respectively.
If the password does not start with a double-quote, it will end at the
first white space and no escaping is performed.
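A minimal sketch (not the actual parser) of the quoting rules described
above: a leading double-quote starts a quoted value in which \" \n \r
and \t are recognized escapes; treating \\ as an escaped backslash is an
assumption made here.

```c
/* copy the unquoted/unescaped value of 'in' into 'out' (assumed to be
   large enough); returns 0 on success, -1 on a parse error */
static int unquote(const char *in, char *out)
{
  if(*in != '"')
    return -1;                        /* not a quoted string */
  in++;
  while(*in && *in != '"') {
    char c = *in++;
    if(c == '\\') {
      switch(*in++) {
      case 'n':  c = '\n'; break;
      case 'r':  c = '\r'; break;
      case 't':  c = '\t'; break;
      case '"':  c = '"';  break;
      case '\\': c = '\\'; break;     /* assumed escape */
      default:   return -1;           /* unknown escape */
      }
    }
    *out++ = c;
  }
  *out = '\0';
  return (*in == '"') ? 0 : -1;       /* require the closing quote */
}
```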
WARNING: this change is not entirely backwards compatible. If anyone
previously used a double-quote as the first letter of their password,
the parser will now treat it differently than before. This is
highly unfortunate but hard to avoid.
Reported-by: ImpatientHippo on GitHub
Fixes#8908
Closes#8937
Usage:
curl -x "socks5h://localhost/run/tor/socks" "https://example.com"
Updated runtests.pl to run a socksd server listening on a unix socket.
Added tests test1467 and test1468.
Added documentation for the proxy command line option and the socks
proxy options.
Closes#8668
These two options were only ever used for the OpenSSL backend for
versions before 1.1.0. They were never used for other backends and they
are not used with recent OpenSSL versions. They were never used much by
applications.
The defines RANDOM_FILE and EGD_SOCKET can still be set at build-time
for ancient EOL OpenSSL versions.
Closes#8670
To simplify, and also since the returned name is not the full actual
name used for the check. The port number and zone id are also involved,
so just showing the name is misleading.
Closes#8750
Also move the static function safecmp() to be the non-static
Curl_safecmp() since its purpose is needed in several places.
Bug: https://curl.se/docs/CVE-2022-22576.html
CVE-2022-22576
Closes#8746
Reverts 5de8d84098 (May 2014, shipped in 7.37.0) and the
follow-up changes done afterward.
Keep the dot in names for everything except the SNI to make curl behave
more like current browsers. This means 'name' and 'name.' send the
same SNI for different 'Host:' headers.
Updated test 1322 accordingly
Fixes#8290
Reported-by: Charles Cazabon
Closes#8320
The tools.ietf.org domain has been deprecated a while now, with the
links being redirected to datatracker.ietf.org.
Rather than make people eat that redirect time, this change switches the
URL to a more canonical source.
Closes#8317
1. The function would only ever return CURLE_OK anyway
2. Only one caller actually used the return code
3. Most callers did (void)Curl_disconnect()
Closes#8303
Instead of having all callers pass in the maximum length, always use
it. The passed-in length is instead used only as the length of the
target buffer for storing the scheme name in, if used.
Added the scheme max length restriction to the curl_url_set.3 man page.
Follow-up to 45bcb2eaa7
Closes#8047
Add curl_url_strerror() to convert CURLUcode into readable string and
facilitate easier troubleshooting in programs using URL API.
Extend CURLUcode with CURLU_LAST for iteration in unit tests.
Update man pages with a mention of new function.
Update example code and tests with new functionality where it fits.
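Usage illustration with the public URL API (the malformed URL is just an
example input):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *u = curl_url();
  CURLUcode uc;

  if(!u)
    return 1;
  uc = curl_url_set(u, CURLUPART_URL, "not a valid url", 0);
  if(uc)
    fprintf(stderr, "URL error: %s\n", curl_url_strerror(uc));
  curl_url_cleanup(u);
  return 0;
}
```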
Closes#7605
We have and provide Curl_strerror() internally for a reason: strerror()
is not necessarily thread-safe so we should always try to avoid it.
Extended checksrc to warn for this, but keep the check disabled by
default and only enable it in lib/
Closes#7685
warning C4189: 'netrc_user_changed': local variable is initialized but
not referenced
warning C4189: 'netrc_passwd_changed': local variable is initialized but
not referenced
Closes#7423
- the data needs to be "line-based" anyway since it's also passed to the
debug callback/application
- it makes infof() work like failf() and consistency is good
- there's an assert that triggers on newlines in the format string
- Also removes a few instances of "..."
- Removes the code that would append "..." to the end of the data *iff*
it was truncated in infof()
Closes#7357
Coverity (CID 1486645) pointed out a use of curl_url_get() in the
parse_proxy function where the return code wasn't checked. A
(void)-prefix makes the intention obvious.
Closes#7320
Fix typos in code comments which repeat various words. In trivial
cases, just delete the repeated word. Reword the affected sentence in
"lib/url.c" for it to make sense.
Closes#7303
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
- Check whether a connection has succeeded before checking whether it's
timed out (see the sketch after this list).
This means that if we've connected quickly, but subsequently been
descheduled, we allow the connection to succeed. Note that if we time
out, but the connection succeeds between checking the timeout and
connecting to the server, we will allow it to go ahead. This is viewed
as an acceptable trade-off.
- Add additional failf logging around failed connection attempts to
propagate the cause up to the caller.
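A generic, self-contained illustration of the ordering described in the
first item above (not the actual libcurl code): determine whether the
non-blocking connect already succeeded before the caller evaluates its
timeout, so a process that was descheduled after connecting does not
wrongly report a timeout.

```c
#include <poll.h>
#include <sys/socket.h>

/* returns 1 = connected, 0 = still pending (caller may now check its
   timeout), -1 = the connect attempt failed */
static int check_connect(int fd)
{
  struct pollfd pfd = { .fd = fd, .events = POLLOUT };
  int err = 0;
  socklen_t len = sizeof(err);

  if(poll(&pfd, 1, 0) <= 0)
    return 0;                   /* not writable yet: still connecting */
  if(getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len) != 0 || err)
    return -1;                  /* the connect attempt failed */
  return 1;                     /* connected: check this before timeout */
}
```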
Co-Authored-by: Martin Howarth
Closes#7178
Unicode Windows builds use UTF-8 strings internally in libcurl,
so make sure to call the UTF-8 flavour of the libidn2 API. Also
document that Windows builds with libidn2 and UNICODE do expect
CURLOPT_URL as a UTF-8 string.
Reported-by: dEajL3kA on GitHub
Assisted-by: Jay Satiro
Reviewed-by: Marcel Raad
Closes#7246
Fixes#7228
Also, use a single function library-wide for detecting if a given hostname is
a numerical IP address.
Reported-by: Harry Sintonen
Fixes#7146
Closes#7149
In some situations, it was possible that a transfer was set up to
use a specific IP version, but due to DNS caching or connection
reuse, it ended up using a different IP version than requested.
This commit changes the effect of CURLOPT_IPRESOLVE from simply
restricting address resolution to preventing the wrong connection
type from being used when choosing a connection from the pool, and
to restricting which addresses can be used when establishing
a new connection.
It is important that all address versions are resolved, even if
not used in that particular transfer, because the result is
cached, and could be useful for a different transfer with a
different CURLOPT_IPRESOLVE setting.
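Usage illustration with the public API (the URL is a placeholder): the
same handle can switch CURLOPT_IPRESOLVE between transfers, and the
pool/DNS cache no longer hands back an address or connection of the
wrong family.

```c
#include <curl/curl.h>

int main(void)
{
  CURL *h = curl_easy_init();
  if(!h)
    return 1;
  curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");

  curl_easy_setopt(h, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
  curl_easy_perform(h);   /* uses IPv4 only */

  /* switch preference: cached IPv6 addresses from the earlier resolve
     may now be used, but an existing IPv4 connection will not be */
  curl_easy_setopt(h, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V6);
  curl_easy_perform(h);
  curl_easy_cleanup(h);
  return 0;
}
```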
Closes#6853
The libssh2 backend has SSH session associated with the connection but
the callback context is the easy handle, so when a connection gets
attached to a transfer, the protocol handler now allows a custom
function to be used to set things up correctly.
Reported-by: Michael O'Farrell
Fixes#6898
Closes#7078
Both were used for the same purposes and there was no logical separation
between them. Combined, this also saves 16 bytes due to fewer holes in
my test build.
Closes#6798
The Curl_easy pointer struct entry in connectdata is now gone. Just
before commit 215db086e0 landed on January 8, 2021 there were 919
references to conn->data.
Closes#6608
- New libcurl options CURLOPT_DOH_SSL_VERIFYHOST,
CURLOPT_DOH_SSL_VERIFYPEER and CURLOPT_DOH_SSL_VERIFYSTATUS do the
same as their respective counterparts.
- New curl tool options --doh-insecure and --doh-cert-status do the same
as their respective counterparts.
Prior to this change DOH SSL certificate verification settings for
verifyhost and verifypeer were supposed to be inherited respectively
from CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER, but due to a bug
were not. As a result DOH verification remained at the default, i.e.
enabled, and it was not possible to disable it. This commit changes
behavior so that the DOH verification settings are independent and not
inherited.
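Usage illustration with the public options (the DoH URL is a
placeholder): the DoH verification knobs are now set explicitly and
independently of the regular CURLOPT_SSL_VERIFY* options.

```c
#include <curl/curl.h>

int main(void)
{
  CURL *h = curl_easy_init();
  if(!h)
    return 1;
  curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(h, CURLOPT_DOH_URL, "https://doh.example/dns-query");
  /* DoH verification is independent of CURLOPT_SSL_VERIFYPEER/HOST */
  curl_easy_setopt(h, CURLOPT_DOH_SSL_VERIFYPEER, 1L);
  curl_easy_setopt(h, CURLOPT_DOH_SSL_VERIFYHOST, 2L);
  curl_easy_perform(h);
  curl_easy_cleanup(h);
  return 0;
}
```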
Ref: https://github.com/curl/curl/pull/4579#issuecomment-554723676
Fixes https://github.com/curl/curl/issues/4578
Closes https://github.com/curl/curl/pull/6597
HTTP auth "accidentally" worked before this cleanup since the code would
always overwrite the connection credentials with the credentials from
the most recent transfer and since HTTP auth is typically done first
thing, this has not been an issue. It was still wrong and subject to
possible race conditions or future breakage if the sequence of functions
would change.
The data.set.str[] strings MUST remain unmodified exactly as set by the
user, and the credentials to use internally are instead set/updated in
state.aptr.*
Added test 675 to verify different credentials used in two requests done
over a reused HTTP connection, which previously behaved wrongly.
Fixes#6542
Closes#6545
Rename it to 'httpwant' and make a cloned field in the state struct as
well for run-time updates.
Also: refuse non-supported HTTP versions. Verified with test 129.
Closes#6585
As the info is already stored in the transfer handle anyway, there's no
need to carry around a duplicate buffer for the life-time of the handle.
Closes#6534
... and use 'int' for ports. We don't use 'unsigned short' since -1 is
still often used internally to signify "unknown value" and 0 - 65535 are
all valid port numbers.
Closes#6534