1. The function would only ever return CURLE_OK anyway
2. Only one caller actually used the return code
3. Most callers did (void)Curl_disconnect()
Closes#8303
Instead of having all callers pass in the maximum length, always use
it. The passed-in length is instead used only as the length of the
target buffer for storing the scheme name in, if used.
Added the scheme max length restriction to the curl_url_set.3 man page.
Follow-up to 45bcb2eaa7
Closes#8047
Add curl_url_strerror() to convert CURLUcode into readable string and
facilitate easier troubleshooting in programs using URL API.
Extend CURLUcode with CURLU_LAST for iteration in unit tests.
Update man pages with a mention of new function.
Update example code and tests with new functionality where it fits.
Closes#7605
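A minimal sketch of the new call (the URL and error path are made up for the example):

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
    CURLU *h = curl_url();
    if(!h)
      return 1;
    /* an unsupported scheme makes curl_url_set() return an error code */
    CURLUcode uc = curl_url_set(h, CURLUPART_URL, "foo://example.com/", 0);
    if(uc)
      fprintf(stderr, "curl_url_set failed: %s\n", curl_url_strerror(uc));
    curl_url_cleanup(h);
    return 0;
  }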
We have and provide Curl_strerror() internally for a reason: strerror()
is not necessarily thread-safe so we should always try to avoid it.
Extended checksrc to warn for this, but keep the check disabled by
default and only enable it in lib/
Closes#7685
warning C4189: 'netrc_user_changed': local variable is initialized but
not referenced
warning C4189: 'netrc_passwd_changed': local variable is initialized but
not referenced
Closes#7423
- the data needs to be "line-based" anyway since it's also passed to the
debug callback/application
- it makes infof() work like failf() and consistency is good
- there's an assert that triggers on newlines in the format string
- Also removes a few instances of "..."
- Removes the code that would append "..." to the end of the data *iff*
it was truncated in infof()
Closes#7357
Coverity (CID 1486645) pointed out a use of curl_url_get() in the
parse_proxy function where the return code wasn't checked. A
(void)-prefix makes the intention obvious.
Closes#7320
Fix typos in code comments which repeat various words. In trivial
cases, just delete the repeated word. Reword the affected sentence in
"lib/url.c" for it to make sense.
Closes#7303
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
- Check whether a connection has succeeded before checking whether it's
timed out.
This means if we've connected quickly, but subsequently been
descheduled, we allow the connection to succeed. Note, if we time out,
but the connection succeeds between checking the timeout and connecting
to the server, we will allow it to go ahead. This is viewed as an
acceptable trade-off.
- Add additional failf logging around failed connection attempts to
propagate the cause up to the caller.
Co-Authored-by: Martin Howarth
Closes#7178
Unicode Windows builds use UTF-8 strings internally in libcurl,
so make sure to call the UTF-8 flavour of the libidn2 API. Also
document that Windows builds with libidn2 and UNICODE do expect
CURLOPT_URL as a UTF-8 string.
Reported-by: dEajL3kA on github
Assisted-by: Jay Satiro
Reviewed-by: Marcel Raad
Closes#7246
Fixes#7228
Also, use a single function library-wide for detecting if a given hostname is
a numerical IP address.
Reported-by: Harry Sintonen
Fixes#7146
Closes#7149
In some situations, it was possible that a transfer was set up to
use a specific IP version, but due to DNS caching or connection
reuse, it ended up using a different IP version than requested.
This commit changes the effect of CURLOPT_IPRESOLVE from simply
restricting address resolution to preventing the wrong connection
type being used, when choosing a connection from the pool, and
to restricting what addresses could be used when establishing
a new connection.
It is important that all address versions are resolved, even if
not used in that transfer in particular, because the result is
cached, and could be useful for a different transfer with a
different CURLOPT_IPRESOLVE setting.
Closes#6853
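A small sketch of the scenario this addresses (the URL is a placeholder); after this change the second transfer no longer reuses the IPv4 connection from the first:

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");

      /* first transfer: restrict to IPv4 */
      curl_easy_setopt(curl, CURLOPT_IPRESOLVE, (long)CURL_IPRESOLVE_V4);
      curl_easy_perform(curl);

      /* second transfer: ask for IPv6; a cached connection or DNS entry of
         the wrong family must no longer be picked */
      curl_easy_setopt(curl, CURLOPT_IPRESOLVE, (long)CURL_IPRESOLVE_V6);
      curl_easy_perform(curl);

      curl_easy_cleanup(curl);
    }
    return 0;
  }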
The libssh2 backend has SSH session associated with the connection but
the callback context is the easy handle, so when a connection gets
attached to a transfer, the protocol handler now allows for a custom
function to get used to set things up correctly.
Reported-by: Michael O'Farrell
Fixes#6898
Closes#7078
Both were used for the same purposes and there was no logical separation
between them. Combined, this also saves 16 bytes due to fewer padding
holes in my test build.
Closes#6798
The Curl_easy pointer struct entry in connectdata is now gone. Just
before commit 215db086e0 landed on January 8, 2021 there were 919
references to conn->data.
Closes#6608
- New libcurl options CURLOPT_DOH_SSL_VERIFYHOST,
CURLOPT_DOH_SSL_VERIFYPEER and CURLOPT_DOH_SSL_VERIFYSTATUS do the
same as their respective counterparts.
- New curl tool options --doh-insecure and --doh-cert-status do the same
as their respective counterparts.
Prior to this change DOH SSL certificate verification settings for
verifyhost and verifypeer were supposed to be inherited respectively
from CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER, but due to a bug
were not. As a result DOH verification remained at the default, ie
enabled, and it was not possible to disable. This commit changes
behavior so that the DOH verification settings are independent and not
inherited.
Ref: https://github.com/curl/curl/pull/4579#issuecomment-554723676
Fixes https://github.com/curl/curl/issues/4578
Closes https://github.com/curl/curl/pull/6597
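A hedged sketch of using the now-independent options (the DoH URL is a placeholder):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_DOH_URL, "https://doh.example/dns-query");
      /* these no longer inherit from CURLOPT_SSL_VERIFYPEER/VERIFYHOST and
         must be set explicitly if non-default verification is wanted */
      curl_easy_setopt(curl, CURLOPT_DOH_SSL_VERIFYPEER, 1L);
      curl_easy_setopt(curl, CURLOPT_DOH_SSL_VERIFYHOST, 2L);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }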
HTTP auth "accidentally" worked before this cleanup since the code would
always overwrite the connection credentials with the credentials from
the most recent transfer and since HTTP auth is typically done first
thing, this has not been an issue. It was still wrong and subject to
possible race conditions or future breakage if the sequence of functions
would change.
The data.set.str[] strings MUST remain unmodified exactly as set by the
user, and the credentials to use internally are instead set/updated in
state.aptr.*
Added test 675 to verify different credentials used in two requests done
over a reused HTTP connection, which previously behaved wrongly.
Fixes#6542
Closes#6545
Rename it to 'httpwant' and make a cloned field in the state struct as
well for run-time updates.
Also: refuse non-supported HTTP versions. Verified with test 129.
Closes#6585
As the info is already stored in the transfer handle anyway, there's no
need to carry around a duplicate buffer for the life-time of the handle.
Closes#6534
... and use 'int' for ports. We don't use 'unsigned short' since -1 is
still often used internally to signify "unknown value" and 0 - 65535 are
all valid port numbers.
Closes#6534
... instead of having it static within the Curl_easy struct. This takes
away 1176 bytes (18%) from the Curl_easy struct that aren't used very
often and instead makes the code allocate it when needed.
Closes#6492
... in most cases instead of 'struct connectdata *' but in some cases in
addition to.
- We mostly operate on transfers and not connections.
- We need the transfer handle to log, store data and more. Everything in
libcurl is driven by a transfer (the CURL * in the public API).
- This work clarifies and separates the transfers from the connections
better.
- We should avoid "conn->data". Since individual connections can be used
by many transfers when multiplexing, making sure that conn->data
points to the current and correct transfer at all times is difficult
and has been notoriously error-prone over the years. The goal is to
ultimately remove the conn->data pointer for this reason.
Closes#6425
... and not in the connection setup, as for multiplexed transfers the
connection setup might be skipped and then the transfer would end up
without the set user-agent!
Reported-by: Flameborn on github
Assisted-by: Andrey Gursky
Assisted-by: Jay Satiro
Assisted-by: Mike Gelfand
Fixes#6312
Closes#6417
When doing HTTP authentication and a port number set with CURLOPT_PORT,
the code would previously have the URL's port number override as if it
had been a redirect to an absolute URL.
Added test 1568 to verify.
Reported-by: UrsusArctos on github
Fixes#6397
Closes#6400
We currently use both spellings: the British "behaviour" and the American
"behavior". However, "behavior" is more used in the project, so I think
it's worth dropping the British spelling.
Closes#6395
This commit introduces a "gophers" handler inside the gopher protocol if
USE_SSL is defined. This protocol is no different from the usual gopher
protocol, with the added TLS encapsulation upon connecting. The protocol
has been adopted in the gopher community, and many people have enabled
TLS in their gopher daemons like geomyidae(8), and clients, like clic(1)
and hurl(1).
I have not implemented test units for this protocol because my knowledge
of Perl is sub-par. However, for someone more knowledgeable it might be
fairly trivial, because the same test that tests the plain gopher
protocol can be used for "gophers" just by adding a TLS listener.
Signed-off-by: parazyd <parazyd@dyne.org>
Closes#6208
The command line tool also independently sets --ftp-skip-pasv-ip by
default.
Ten test cases updated to adapt to the modified --libcurl output.
Bug: https://curl.se/docs/CVE-2020-8284.html
CVE-2020-8284
Reported-by: Varnavas Papaioannou
- enable in the build (configure)
- header parsing
- host name lookup
- unit tests for the above
- CI build
- CURL_VERSION_HSTS bit
- curl_version_info support
- curl -V output
- curl-config --features
- CURLOPT_HSTS_CTRL
- man page for CURLOPT_HSTS_CTRL
- curl --hsts (sets CURLOPT_HSTS_CTRL and works with --libcurl)
- man page for --hsts
- save cache to disk
- load cache from disk
- CURLOPT_HSTS
- man page for CURLOPT_HSTS
- added docs/HSTS.md
- fixed --version docs
- adjusted curl_easy_duphandle
Closes#5896
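A minimal sketch of the libcurl side of the feature (the cache file name is a placeholder):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      /* enable the in-memory HSTS cache and back it with a file that is
         loaded at start and saved when the handle is cleaned up */
      curl_easy_setopt(curl, CURLOPT_HSTS_CTRL, (long)CURLHSTS_ENABLE);
      curl_easy_setopt(curl, CURLOPT_HSTS, "/tmp/hsts-cache.txt");
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
      curl_easy_perform(curl); /* upgraded to HTTPS if a matching entry exists */
      curl_easy_cleanup(curl);
    }
    return 0;
  }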
The cache content is not duplicated, like other caches, but the setting
and specified file name are.
Test 1908 is extended to verify this somewhat. Since the duplicated
handle gets the same file name, the test unfortunately overwrites the
same file twice (with different contents) which makes it hard to check
automatically.
Closes#5923
`USE_WINDOWS_SSPI` without `USE_WIN32_CRYPTO` but with any other DES
backend is fine, but was excluded before.
This also fixes test 1013 as the condition for SMB support in
configure.ac didn't match the condition in the source code. Now it
does.
Fixes https://github.com/curl/curl/issues/1262
Closes https://github.com/curl/curl/pull/5771
Since commit f3d501dc67, if proxy support is disabled, MSVC warns:
url.c : warning C4701: potentially uninitialized local variable
'hostaddr' used
url.c : error C4703: potentially uninitialized local pointer variable
'hostaddr' used
That could actually only happen if both `conn->bits.proxy` and
`CURL_DISABLE_PROXY` were enabled.
Initialize it to NULL to silence the warning.
Closes https://github.com/curl/curl/pull/5638
Follow-up to c4e6968127
When a new transfer is created, as a result of an acknowledged push,
that transfer needs a download buffer allocated.
Closes#5590
Since the connection can be used by many independent requests (using
HTTP/2 or HTTP/3), things like user-agent and other transfer-specific
data MUST NOT be kept connection oriented as it could lead to requests
getting the wrong string for their requests. This struct data was
lingering like this due to old HTTP/1 legacy thinking, where it didn't
matter.
Fixes#5566
Closes#5567
When the method is updated inside libcurl we must still not change the
method as set by the user as then repeated transfers with that same
handle might not execute the same operation anymore!
This fixes the libcurl part of #5462
Test 1633 added to verify.
Closes#5499
... and free it as soon as the transfer is done. It removes the extra
alloc when a new size is set with setopt() and reduces memory for unused
easy handles.
In addition: the closure_handle now doesn't use an allocated buffer at
all, but uses the smallest supported size as a stack-based one.
Closes#5472
They're only limited to the maximum string input restrictions, not to
256 bytes.
Added test 1178 to verify
Reported-by: Will Roberts
Fixes#5448
Closes#5449
This change introduces a generic way to provide binary data in setopt
options, called BLOBs.
This change introduces these new setopts:
CURLOPT_ISSUERCERT_BLOB, CURLOPT_PROXY_SSLCERT_BLOB,
CURLOPT_PROXY_SSLKEY_BLOB, CURLOPT_SSLCERT_BLOB and CURLOPT_SSLKEY_BLOB.
Reviewed-by: Daniel Stenberg
Closes#5357
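A sketch of passing an in-memory certificate with one of the new BLOB options (cert_data/cert_len stand in for PEM data the application loaded itself):

  #include <curl/curl.h>

  /* cert_data/cert_len are hypothetical: certificate data held in memory */
  static void set_client_cert(CURL *curl, void *cert_data, size_t cert_len)
  {
    struct curl_blob blob;
    blob.data = cert_data;
    blob.len = cert_len;
    blob.flags = CURL_BLOB_COPY; /* libcurl keeps its own copy of the data */
    curl_easy_setopt(curl, CURLOPT_SSLCERT_BLOB, &blob);
    curl_easy_setopt(curl, CURLOPT_SSLCERTTYPE, "PEM");
  }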
Since input passed to libcurl with CURLOPT_USERPWD and
CURLOPT_PROXYUSERPWD circumvents the regular string length check we have
in Curl_setstropt(), the input length limit is enforced in
Curl_parse_login_details too, separately.
Reported-by: Thomas Bouzerar
Closes#5383
When looking for a protocol match among supported schemes, check the
most "popular" schemes first. It has zero functionality difference and
for all practical purposes a speed difference will not be measurable,
but I still think it makes sense to put the least likely matches last.
"Popularity" based on the 2019 user survey.
Closes#5377
A common set of functions instead of many separate implementations for
creating buffers that can grow when appending data to them. Existing
functionality has been ported over.
In my early basic testing, the total number of allocations seem at
roughly the same amount as before, possibly a few less.
See docs/DYNBUF.md for a description of the API.
Closes#5300
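A rough sketch of the internal API described in docs/DYNBUF.md (internal-only code; exact details live in lib/dynbuf.h):

  #include "dynbuf.h"

  /* append a header line into a growing buffer; 'host' is a placeholder */
  static CURLcode build_host_header(struct dynbuf *buf, const char *host)
  {
    Curl_dyn_init(buf, 8192); /* second argument caps the maximum size */
    if(Curl_dyn_add(buf, "Host: ") ||
       Curl_dyn_add(buf, host) ||
       Curl_dyn_add(buf, "\r\n"))
      return CURLE_OUT_OF_MEMORY; /* the buffer is freed on failure */
    /* consume via Curl_dyn_ptr()/Curl_dyn_len(), then Curl_dyn_free() */
    return CURLE_OK;
  }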
More connection cache accesses are protected by locks.
CONNCACHE_* is a better prefix for the connection cache lock macros.
Curl_attach_connnection: now called as soon as there's a connection
struct available and before the connection is added to the connection
cache.
Curl_disconnect: now assumes that the connection is already removed from
the connection cache.
Ref: #4915
Closes#5009
... since the current transfer is being killed. Setting to NULL is
wrong, leaving it pointing to 'data' is wrong since that handle might be
about to get freed.
Fixes#4845
Closes#4858
Reported-by: dmitrmax on github
A regression made the code use 'multiplexed' as a boolean instead of the
counter it is intended to be. This made curl try to "over-populate"
connections with new streams.
This regression came with 41fcdf71a1, shipped in curl 7.65.0.
Also, respect the CURLMOPT_MAX_CONCURRENT_STREAMS value in the same
check.
Reported-by: Kunal Ekawde
Fixes#4779
Closes#4784
... as it would previously prefer new connections rather than
multiplexing in most conditions! The (now removed) code was a leftover
from the Pipelining code that was translated wrongly into a
multiplex-only world.
Reported-by: Kunal Ekawde
Bug: https://curl.haxx.se/mail/lib-2019-12/0060.html
Closes#4732
It could accidentally let the connection get used by more than one
thread, leading to double-free and more.
Reported-by: Christopher Reid
Fixes#4544
Closes#4557
- Disable warning C4127 "conditional expression is constant" globally
in curl_setup.h for when building with Microsoft's compiler.
This mainly affects building with the Visual Studio project files found
in the projects dir.
Prior to this change the cmake and winbuild build systems already
disabled 4127 globally for when building with Microsoft's compiler.
Also, 4127 was already disabled for all build systems in the limited
circumstance of the WHILE_FALSE macro which disabled the warning
specifically for while(0). This commit removes the WHILE_FALSE macro and
all other cruft in favor of disabling globally in curl_setup.
Background:
We have various macros that cause 0 or 1 to be evaluated, which would
cause warning C4127 in Visual Studio. For example this causes it:
#define Curl_resolver_asynch() 1
Full behavior is not clearly defined and inconsistent across versions.
However it is documented that since VS 2015 Update 3 Microsoft has
addressed this somewhat but not entirely, not warning on while(true) for
example.
Prior to this change some C4127 warnings occurred when I built with
Visual Studio using the generated projects in the projects dir.
Closes https://github.com/curl/curl/pull/4658
The URL extracted with CURLINFO_EFFECTIVE_URL was returned as given as
input in most cases, which made it not get a scheme prefixed like before
if the URL was given without one, and it didn't remove dotdot sequences
etc.
Added test case 1907 to verify that this now works as intended and as
before 7.62.0.
Regression introduced in 7.62.0
Reported-by: Christophe Dervieux
Fixes#4491
Closes#4493
To make sure that the HTTP/2 state is initialized correctly for
duplicated handles. It would otherwise easily generate "spurious"
PRIORITY frames to get sent over HTTP/2 connections when duplicated easy
handles were used.
Reported-by: Daniel Silverstone
Fixes#4303
Closes#4442
Prior to this change non-ssl/non-ssh connections that were reused set
TIMER_APPCONNECT [1]. Arguably that was incorrect since no SSL/SSH
handshake took place.
[1]: TIMER_APPCONNECT is publicly known as CURLINFO_APPCONNECT_TIME in
libcurl and %{time_appconnect} in the curl tool. It is documented as
"the time until the SSL/SSH handshake is completed".
Reported-by: Marcel Hernandez
Ref: https://github.com/curl/curl/issues/3760
Closes https://github.com/curl/curl/pull/3773
If you set the same URL for target as for DoH (and it isn't a DoH
server), like "https://example.com" in both, the easy handles used for
the DoH requests could be left "dangling" and end up not getting freed.
Reported-by: Paul Dreik
Closes#4366
So that users can mask in/out specific HTTP versions when Alt-Svc is
used.
- Removed "h2c" and updated test case accordingly
- Changed how the altsvc struct is laid out
- Added ifdefs to make the unittest run even in a quiche-tree
Closes#4201
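A small sketch of masking in/out HTTP versions for Alt-Svc (the cache file name is a placeholder):

  #include <curl/curl.h>

  static void enable_altsvc(CURL *curl)
  {
    /* keep an on-disk alt-svc cache and only follow alternatives that use
       HTTP/1.1 or HTTP/2 (leave out CURLALTSVC_H3 to mask out HTTP/3) */
    curl_easy_setopt(curl, CURLOPT_ALTSVC, "altsvc-cache.txt");
    curl_easy_setopt(curl, CURLOPT_ALTSVC_CTRL,
                     (long)(CURLALTSVC_H1 | CURLALTSVC_H2));
  }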
RFC 7838 section 5:
When using an alternative service, clients SHOULD include an Alt-Used
header field in all requests.
Removed CURLALTSVC_ALTUSED again (feature is still EXPERIMENTAL thus
this is deemed ok).
You can disable sending this header just like you disable any other HTTP
header in libcurl.
Closes#4199
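Suppressing the header uses the same mechanism as for any other libcurl-generated header; roughly:

  #include <curl/curl.h>

  static struct curl_slist *drop_alt_used(CURL *curl)
  {
    /* a header name with a colon and no value removes an internal header */
    struct curl_slist *hdrs = curl_slist_append(NULL, "Alt-Used:");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    return hdrs; /* caller frees with curl_slist_free_all() after the transfer */
  }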
Added the ability for the calling program to specify the authorisation
identity (authzid), the identity to act as, in addition to the
authentication identity (authcid) and password when using SASL PLAIN
authentication.
Fixes#3653
Closes#3790
NOTE: This commit was cherry-picked and is part of a series of commits
that added the authzid feature for upcoming 7.66.0. The series was
temporarily reverted in db8ec1f so that it would not ship in a 7.65.x
patch release.
Closes https://github.com/curl/curl/pull/4186
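A brief sketch of the option with SASL PLAIN over IMAP (server name and identities are placeholders):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com/INBOX");
      curl_easy_setopt(curl, CURLOPT_USERNAME, "alice");      /* authcid */
      curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret");
      curl_easy_setopt(curl, CURLOPT_SASL_AUTHZID, "shared"); /* authzid */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }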
As the plan has been laid out in DEPRECATED. Update docs accordingly and
verify in test 1174. Now requires the option to be set to allow HTTP/0.9
responses.
Closes#4191
It was used (intended) to pass in the size of the 'socks' array that is
also passed to these functions, but was rarely actually checked/used and
the array is defined to a fixed size of MAX_SOCKSPEREASYHANDLE entries
that should be used instead.
Closes#4169
Use configure --with-ngtcp2 or --with-quiche.
Using either option will enable an HTTP/3 build.
Co-authored-by: Alessandro Ghedini <alessandro@ghedini.me>
Closes#3500
All protocols except for CURLPROTO_FILE/CURLPROTO_SMB and their TLS
counterpart were allowed for redirect. This vastly broadens the
exploitation surface in case of a vulnerability such as SSRF [1], where
libcurl-based clients are forced to make requests to arbitrary hosts.
For instance, CURLPROTO_GOPHER can be used to smuggle any TCP-based
protocol by URL-encoding a payload in the URI. Gopher will open a TCP
connection and send the payload.
By default, only HTTP/HTTPS and FTP are now allowed. All other protocols
have to be explicitly enabled for redirects through CURLOPT_REDIR_PROTOCOLS.
[1]: https://www.acunetix.com/blog/articles/server-side-request-forgery-vulnerability/
Signed-off-by: Linos Giannopoulos <lgian@skroutz.gr>
Closes#4094
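An application that really needs redirects to other protocols can opt back in explicitly; a sketch:

  #include <curl/curl.h>

  static void allow_ftp_redirects(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    /* widen the post-change default (HTTP, HTTPS and FTP only) on purpose */
    curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                     (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS |
                            CURLPROTO_FTP | CURLPROTO_FTPS));
  }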
Old connections are meant to expire from the connection cache after
CURLOPT_MAXAGE_CONN seconds. However, they actually expire after 1000x
that value. This occurs because a time value measured in milliseconds is
accidentally divided by 1M instead of by 1,000.
Closes https://github.com/curl/curl/pull/4013
Since more than one socket can be used by each transfer at a given time,
each sockhash entry now has its own hash table with transfers using that
socket.
In addition, the sockhash entry can now be marked 'blocked = TRUE',
which then makes the delete function just set 'removed = TRUE' instead
of removing it "for real", as a way to not rip out the carpet from under
the feet of a parent function that iterates over the transfers of that
same sockhash entry.
Reported-by: Tom van der Woerdt
Fixes#3961
Fixes#3986
Fixes#3995
Fixes#4004
Closes#3997
This fixes the static dependency on iphlpapi.lib and allows curl to
build for targets prior to Windows Vista.
This partially reverts 170bd047.
Fixes#3960
Closes#3958
... so that it has a sensible value when ConnectionExists() is called which
needs it set to differentiate host "bundles" correctly on port number!
Also, make conncache:hashkey() use correct port for bundles that are proxy vs
host connections.
Probably a regression from 7.62.0
Reported-by: Tom van der Woerdt
Fixes#3956
Closes#3957
Only HTTP proxy use where multiple host names can be used over the same
connection should use the proxy host name for bundles.
Reported-by: Tom van der Woerdt
Fixes#3951
Closes#3955
- Revert all commits related to the SASL authzid feature since the next
release will be a patch release, 7.65.1.
Prior to this change CURLOPT_SASL_AUTHZID / --sasl-authzid was destined
for the next release, assuming it would be a feature release 7.66.0.
However instead the next release will be a patch release, 7.65.1 and
will not contain any new features.
After the patch release, the reverted commits can be restored by
using cherry-pick:
git cherry-pick a14d72ca9499ff8c1cc36c2a8d520edf690
Details for all reverted commits:
Revert "os400: take care of CURLOPT_SASL_AUTHZID in curl_easy_setopt_ccsid()."
This reverts commit 0edf6907ae.
Revert "tests: Fix the line endings for the SASL alt-auth tests"
This reverts commit c2a8d52a13.
Revert "examples: Added SASL PLAIN authorisation identity (authzid) examples"
This reverts commit 8c1cc369d0.
Revert "curl: --sasl-authzid added to support CURLOPT_SASL_AUTHZID from the tool"
This reverts commit a9499ff136.
Revert "sasl: Implement SASL authorisation identity via CURLOPT_SASL_AUTHZID"
This reverts commit a14d72ca2f.
Added the ability for the calling program to specify the authorisation
identity (authzid), the identity to act as, in addition to the
authentication identity (authcid) and password when using SASL PLAIN
authentication.
Fixes#3653
Closes#3790
If the proxy string is given as an IPv6 numerical address with a zone
id, make sure to use that for the connect to the proxy.
Reported-by: Edmond Yu
Fixes#3482
Closes#3918
Given that Curl_disconnect() calls Curl_http_auth_cleanup_ntlm() prior
to calling conn_shutdown() and it in turn performs this, there is no
need to perform the same action in conn_shutdown().
Closes#3881
- make sure an already "owned" connection isn't returned unless
multiplexed.
- clear ->data when returning the connection to the cache again
Regression since 7.62.0 (probably in commit 1b76c38904)
Bug: https://curl.haxx.se/mail/lib-2019-03/0064.html
Closes#3686
* Adjusted unit tests 2056, 2057
* do not generally close connections with CURLAUTH_NEGOTIATE after every request
* moved negotiatedata from UrlState to connectdata
* Added stream rewind logic for CURLAUTH_NEGOTIATE
* introduced negotiatedata::GSS_AUTHDONE and negotiatedata::GSS_AUTHSUCC
* Consider authproblem state for CURLAUTH_NEGOTIATE
* Consider reuse_forbid for CURLAUTH_NEGOTIATE
* moved and adjusted negotiate authentication state handling from
output_auth_headers into Curl_output_negotiate
* Curl_output_negotiate: ensure auth done is always set
* Curl_output_negotiate: Set auth done also if result code is
GSS_S_CONTINUE_NEEDED/SEC_I_CONTINUE_NEEDED as this result code may
also indicate the last challenge request (only works with disabled
Expect: 100-continue and CURLOPT_KEEP_SENDING_ON_ERROR -> 1)
* Consider "Persistent-Auth" header, detect if not present;
Reset/Cleanup negotiate after authentication if no persistent
authentication
* apply changes introduced with #2546 for negotiate rewind logic
Fixes#1261
Closes#1975
- no need to have them protocol specific
- no need to set pointers to them with the Curl_setup_transfer() call
- make Curl_setup_transfer() operate on a transfer pointer, not
connection
- switch some counters from long to the more proper curl_off_t type
Closes#3627
- Split off connection shutdown procedure from Curl_disconnect into new
function conn_shutdown.
- Change the shutdown procedure to close the sockets before
disassociating the transfer.
Prior to this change the sockets were closed after disassociating the
transfer so SOCKETFUNCTION wasn't called since the transfer was already
disassociated. That likely came about from recent work started in
Jan 2019 (#3442) to separate transfers from connections.
Bug: https://curl.haxx.se/mail/lib-2019-02/0101.html
Reported-by: Pavel Löbl
Closes https://github.com/curl/curl/issues/3597
Closes https://github.com/curl/curl/pull/3598
- Save the original conn->data before it's changed to the specified
data transfer for the connection check and then restore it afterwards.
This is a follow-up to 38d8e1b 2019-02-11.
History:
It was discovered a month ago that, before checking whether to extract a
dead connection, that connection should be associated with a "live"
transfer for the check (ie original conn->data ignored and set to the
passed in data). A fix was landed in 54b201b which did that and also
cleared conn->data after the check. The original conn->data was not
restored, so presumably it was thought that a valid conn->data was no
longer needed.
Several days later it was discovered that a valid conn->data was needed
after the check, and a follow-up fix was landed in bbae24c which partially
reverted the original fix and attempted to limit the scope of when
conn->data was changed to only when pruning dead connections. In that
case conn->data was not cleared and the original conn->data not
restored.
A month later it was discovered that the original fix was somewhat
correct; a "live" transfer is needed for the check in all cases
because original conn->data could be null which could cause a bad deref
at arbitrary points in the check. A fix was landed in 38d8e1b which
expanded the scope to all cases. conn->data was not cleared and the
original conn->data not restored.
A day later it was discovered that not restoring the original conn->data
may lead to busy loops in applications that use the event interface, and
given this observation it's a pretty safe assumption that there is some
code path that still needs the original conn->data. This commit is the
follow-up fix for that, it restores the original conn->data after the
connection check.
Assisted-by: tholin@users.noreply.github.com
Reported-by: tholin@users.noreply.github.com
Fixes https://github.com/curl/curl/issues/3542
Closes#3559
The http2 code for connection checking needs a transfer to use. Make
sure a working one is set before handler->connection_check() is called.
Reported-by: jnbr on github
Fixes#3541
Closes#3547
urlapi: turn three local-only functions into statics
conncache: make conncache_find_first_connection static
multi: make detach_connnection static
connect: make getaddressinfo static
curl_ntlm_core: make hmac_md5 static
http2: make two functions static
http: make http_setup_conn static
connect: make tcpnodelay static
tests: make UNITTEST a thing to mark functions with, so they can be static for
normal builds and non-static for unit test builds
... and mark Curl_shuffle_addr accordingly.
url: make up_free static
setopt: make vsetopt static
curl_endian: make write32_le static
rtsp: make rtsp_connisdead static
warnless: remove unused functions
memdebug: remove one unused function, made another static
Stick to "Schannel" everywhere. The configure option --with-winssl is
kept to allow existing builds to work but --with-schannel is added as an
alias.
Closes#3504
extract_if_dead() is called from two functions, and only one of
them should get conn->data updated and now neither call path clears it.
scan-build found a case where conn->data would be NULL dereferenced in
ConnectionExists() otherwise.
Closes#3473
Make sure that this function sets a proper "live" transfer for the
connection before calling the protocol-specific connection check
function, and then clear it again afterward as a non-used connection has
no current transfer.
Reported-by: Jeroen Ooms
Reviewed-by: Marcel Raad
Reviewed-by: Daniel Gustafsson
Fixes#3463
Closes#3464
We use "conn" everywhere to be a pointer to the connection.
Introduces two functions that "attach" and "detach" the connection
to and from the transfer.
Going forward, we should favour using "data->conn" (since a transfer
always only has a single connection or none at all) to "conn->data"
(since a connection can have none, one or many transfers associated with
it and updating conn->data to be correct is error prone and a frequent
reason for internal issues).
Closes#3442
Do not assume/store an association between a given easy handle and the
connection if it can be avoided.
Long-term, the 'conn->data' pointer should probably be removed as it is a
little too error-prone. Still used very widely though.
Reported-by: masbug on github
Fixes#3391
Closes#3400
Added CURLOPT_HTTP09_ALLOWED and --http0.9 for this purpose.
For now, both the tool and library allow HTTP/0.9 by default.
docs/DEPRECATE.md lays out the plan for when to reverse that default: 6
months after the 7.64.0 release. The options are added already now so
that applications/scripts can start using them already now.
Fixes#2873
Closes#3383
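A minimal sketch of the new library option (the command-line equivalent is --http0.9):

  #include <curl/curl.h>

  static void allow_http09(CURL *curl)
  {
    /* explicitly accept HTTP/0.9 responses once the default flips */
    curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L);
  }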
The function does not return the same value as snprintf() normally does,
so readers may be misled into thinking the code works differently than
it actually does. A different function name makes this easier to detect.
Reported-by: Tomas Hoger
Assisted-by: Daniel Gustafsson
Fixes#3296
Closes#3297
When using c-ares for asynchronous DNS, the DNS socket fd was silently
closed by c-ares without curl being aware. curl would then 'realize' the
fd had been removed at the next call of Curl_resolver_getsock, and only
then notify the CURLMOPT_SOCKETFUNCTION to remove the fd from its poll
set with CURL_POLL_REMOVE. At this point the fd is already closed.
By using the ares socket state callback (ARES_OPT_SOCK_STATE_CB), this
patch allows curl to be notified that the fd is no longer needed for
either write or read. At this point, by calling Curl_multi_closed, we
are able to notify multi with CURL_POLL_REMOVE before the fd is
actually closed by ares.
In asyn-ares.c Curl_resolver_duphandle we can't use ares_dup anymore
since it does not allow passing a different sock_state_cb_data.
Closes#3238
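A rough sketch of the c-ares mechanism this relies on (the callback body is a stand-in for curl's internal handling, not the actual implementation):

  #include <string.h>
  #include <ares.h>

  static void sock_state_cb(void *data, ares_socket_t fd,
                            int readable, int writable)
  {
    if(!readable && !writable) {
      /* c-ares is done with this fd: libcurl calls Curl_multi_closed()
         here so CURL_POLL_REMOVE happens before the fd is closed */
    }
  }

  static int resolver_init(ares_channel *channel, void *userdata)
  {
    struct ares_options options;
    int optmask = ARES_OPT_SOCK_STATE_CB;
    memset(&options, 0, sizeof(options));
    options.sock_state_cb = sock_state_cb;
    options.sock_state_cb_data = userdata;
    return ares_init_options(channel, &options, optmask);
  }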
The function identifying a leading "scheme" part of the URL considered a
few letters ending with a colon to be a scheme, making something like
"short:80" become an unknown scheme instead of a short host name and
a port number.
Extended test 1560 to verify.
Also fixed test 203 to use file_pwd to make it get the correct path on
Windows. Removed test 2070 since it was a duplicate of 203.
Assisted-by: Marcel Raad
Reported-by: Hagai Auro
Fixes#3220
Fixes#3233
Closes#3223
Closes#3235
- for "--netrc", don't ignore the login/password specified with "--user",
only ignore the login/password in the URL.
This restores the netrc behaviour of curl 7.61.1 and earlier.
- fix the documentation of CURL_NETRC_REQUIRED
- improve the detection of login/password changes when reading .netrc
- don't read .netrc if both login and password are already set
Fixes#3213
Closes#3224
Now FILE transfers send headers to the header callback like HTTP and
other protocols. Also made curl_easy_getinfo(...CURLINFO_PROTOCOL...)
work for FILE in the callbacks.
Makes "curl -i file://.." and "curl -I file://.." work like before
again. Applied the bold header logic to them too.
Regression from c1c2762 (7.61.0)
Reported-by: Shaun Jackman
Fixes#3083
Closes#3101
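A small sketch showing the headers that file:// transfers now deliver to the header callback (the path is a placeholder):

  #include <stdio.h>
  #include <curl/curl.h>

  static size_t on_header(char *buf, size_t size, size_t nitems, void *userdata)
  {
    (void)userdata;
    fwrite(buf, size, nitems, stderr); /* Content-Length:, Last-Modified:, ... */
    return size * nitems;
  }

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "file:///etc/hosts");
      curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, on_header);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }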
Add functionality so that protocols can do custom keepalive on their
connections, when an external API function is called.
Add docs for the new options in 7.62.0
Closes#1641
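The public pieces that shipped for this in 7.62.0 are curl_easy_upkeep() and CURLOPT_UPKEEP_INTERVAL_MS; a rough sketch of using them:

  #include <curl/curl.h>

  static void keep_connection_alive(CURL *curl)
  {
    /* send protocol-level keepalive at most every 30 seconds */
    curl_easy_setopt(curl, CURLOPT_UPKEEP_INTERVAL_MS, 30000L);

    /* call periodically from the application while the handle is idle */
    curl_easy_upkeep(curl);
  }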
This fixes a memory leak when CURLOPT_LOGIN_OPTIONS is used, together with
connection reuse.
I found this with oss-fuzz on GDAL and curl master:
https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=9582
I couldn't reproduce with the oss-fuzz original test case, but looking
at the curl source code pointed to this well-reproducible leak.
Closes#2790
Follow-up to 1b76c38904. The VTLS backends that close down the TLS
layer for a connection still need a Curl_easy handle for the session_id
cache etc.
Fixes#2764
Closes#2771
By making sure to use the *current* easy handle with extracted
connections from the cache, and make sure to NULLify the ->data pointer
when the connection is put into the cache to make this mistake easier to
detect in the future.
Reported-by: Will Dietz
Fixes#2669
Closes#2672
- Get rid of variable that was generating false positive warning
(uninitialized)
- Fix issues in tests
- Reduce scope of several variables all over
etc
Closes#2631
... instead of previous separate struct fields, to make it easier to
extend and change individual backends without having to modify them all.
closes#2547
- Move verify_certificate functionality in schannel.c into a new
file called schannel_verify.c. Additionally, some structure definitions
from schannel.c have been moved to schannel.h to allow them to be
used in schannel_verify.c.
- Make verify_certificate functionality for Schannel available on
all versions of Windows instead of just Windows CE. verify_certificate
will be invoked on Windows CE or when the user specifies
CURLOPT_CAINFO and CURLOPT_SSL_VERIFYPEER.
- In verify_certificate, create a custom certificate chain engine that
exclusively trusts the certificate store backed by the CURLOPT_CAINFO
file.
- doc updates of --cacert/CAINFO support for schannel
- Use CERT_NAME_SEARCH_ALL_NAMES_FLAG when invoking CertGetNameString
when available. This implements a TODO in schannel.c to improve
handling of multiple SANs in a certificate. In particular, all SANs
will now be searched instead of just the first name.
- Update tool_operate.c to not search for the curl-ca-bundle.crt file
when using Schannel to maintain backward compatibility. Previously,
any curl-ca-bundle.crt file found in that search would have been
ignored by Schannel. But, with CAINFO support, the file found by
that search would have been used as the certificate store and
could cause issues for any users that have curl-ca-bundle.crt in
the search path.
- Update url.c to not set the build time CURL_CA_BUNDLE if the selected
SSL backend is Schannel. We allow setting CA location for schannel
only when explicitly specified by the user via CURLOPT_CAINFO /
--cacert.
- Add new test cases 3000 and 3001. These test cases check that the first
and last SAN, respectively, matches the connection hostname. New test
certificates have been added for these cases. For 3000, the certificate
prefix is Server-localhost-firstSAN and for 3001, the certificate
prefix is Server-localhost-secondSAN.
- Remove TODO 15.2 (Add support for custom server certificate
validation), this commit addresses it.
Closes https://github.com/curl/curl/pull/1325
curl 7.57.0 and up interpret this according to Appendix E.3.2 of RFC
8089 but then return an error saying this is unimplemented. This is
actually a regression in behavior on both Windows and Unix.
Before curl 7.57.0 this URL was treated as a path of "//foo/bar" and
then passed to the relevant OS API. This means that the behavior of this
case is actually OS dependent.
The Unix path resolution rules say that the OS must handle swallowing
the extra "/" and so this path is the same as "/foo/bar"
The Windows path resolution rules say that this is a UNC path and
automatically handles the SMB access for the program. So curl on Windows
was already doing Appendix E.3.2 without any special code in curl.
Regression
Closes#2438
- Add new option CURLOPT_HAPPY_EYEBALLS_TIMEOUT to set libcurl's happy
eyeball timeout value.
- Add new optval macro CURL_HET_DEFAULT to represent the default happy
eyeballs timeout value (currently 200 ms).
- Add new tool option --happy-eyeballs-timeout-ms to expose
CURLOPT_HAPPY_EYEBALLS_TIMEOUT. The -ms suffix is used because the
other -timeout options in the tool expect seconds not milliseconds.
Closes https://github.com/curl/curl/pull/2260
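A short sketch; note that the constant in released curl headers is spelled CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, which is what this assumes:

  #include <curl/curl.h>

  static void tune_happy_eyeballs(CURL *curl)
  {
    /* wait only 100 ms (instead of the CURL_HET_DEFAULT 200 ms) before
       also trying the other IP family */
    curl_easy_setopt(curl, CURLOPT_HAPPY_EYEBALLS_TIMEOUT_MS, 100L);
  }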
Move curl_mime_initpart() and curl_mime_cleanpart() calls to lower-level
functions dealing with UserDefined structure contents.
This avoids memory leaks on curl-generated part mime headers.
New test 2073 checks this using the cli tool --next option: it
triggers a valgrind error if bug is present.
Bug: https://curl.haxx.se/mail/lib-2017-12/0060.html
Reported-by: Martin Galvan
These are OS/2-specific things added to the code in the year 2000. They
were always ugly. If there's any user left, they still don't need it
done this way.
Closes#2166