warning C4189: 'netrc_user_changed': local variable is initialized but
not referenced
warning C4189: 'netrc_passwd_changed': local variable is initialized but
not referenced
Closes #7423
- the data needs to be "line-based" anyway since it's also passed to the
debug callback/application
- it makes infof() work like failf() and consistency is good
- there's an assert that triggers on newlines in the format string
- Also removes a few instances of "..."
- Removes the code that would append "..." to the end of the data *iff*
it was truncated in infof()
Closes #7357
- Don't set the size of the piece of data to send to the rate limit if
that limit is larger than the buffer size that will hold the piece.
Prior to this change if CURLOPT_MAX_SEND_SPEED_LARGE
(curl tool: --limit-rate) was set then it was possible that a temporary
buffer used for uploading could be written to out of bounds. A likely
scenario for this would be a non-trivial amount of post data combined
with a rate limit larger than CURLOPT_UPLOAD_BUFFERSIZE (default 64k).
The bug was introduced in 24e469f which is in releases since 7.76.0.
perl -e "print '0' x 200000" > tmp
curl --limit-rate 128k -d @tmp httpbin.org/post
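For reference, the same trigger expressed against the libcurl API (a
minimal sketch; httpbin.org and the exact sizes are illustrative):

  #include <string.h>
  #include <curl/curl.h>

  static char body[200000];

  int main(void)
  {
    CURL *curl = curl_easy_init();
    memset(body, '0', sizeof(body)); /* like the perl-generated file */
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://httpbin.org/post");
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
      curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE,
                       (curl_off_t)sizeof(body));
      /* 128kB/s: larger than the default 64kB upload buffer */
      curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE,
                       (curl_off_t)131072);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }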
Reported-by: Richard Marion
Fixes https://github.com/curl/curl/issues/7308
Closes https://github.com/curl/curl/pull/7315
Introducing an 'isproxy' argument to the connect function so that it
knows whether to store the time stamp or not.
Reported-by: Yongkang Huang
Fixes #7274
Closes #7274
The libssh2 backend has the SSH session associated with the connection but
the callback context is the easy handle, so when a connection gets
attached to a transfer, the protocol handler now allows for a custom
function to get used to set things up correctly.
Reported-by: Michael O'Farrell
Fixes #6898
Closes #7078
Assumed to be a minor coding style improvement with no behavior change.
A modern compiler is expected to have the calculation optimized during
compilation. It may be deemed okay even if that's not the case, since
the added overhead is considered very low.
Closes #7032
Previously this logic would cap the send to CURL_MAX_WRITE_SIZE bytes,
but for the situations where a larger upload buffer has been set, this
function can benefit from sending more bytes. With the default size used,
this does the same as before.
Also changed the storage of the size to an 'unsigned int' as it is not
allowed to be set larger than 2M.
Also added cautions to the man pages about changing buffer sizes at
run-time.
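A sketch of an application opting in to the larger sends (values and
URL illustrative; note the caution about setting the size before the
transfer starts):

  CURL *curl = curl_easy_init();
  if(curl) {
    /* grow the upload buffer from the 64kB default; capped at 2MB */
    curl_easy_setopt(curl, CURLOPT_UPLOAD_BUFFERSIZE, 512 * 1024L);
    curl_easy_setopt(curl, CURLOPT_URL, "sftp://example.com/upload/file");
    curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
    /* a real program also sets CURLOPT_READFUNCTION/CURLOPT_READDATA */
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }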
Closes #7022
A reused transfer handle could otherwise reuse the previous leftover
buffer and havoc would ensue.
Reported-by: sergio-nsk on github
Fixes #7018
Closes #7021
By making sure never to send off more than the allowed number of bytes
per second the speed limit logic is given more room to actually work.
Reported-by: Fabian Keil
Bug: https://curl.se/mail/lib-2021-03/0042.html
Closes #6797
Both were used for the same purposes and there was no logical separation
between them. Combined, this also saves 16 bytes thanks to fewer holes
in my test build.
Closes #6798
To make sure the Host: header and the URL provide the same authority
portion when sent to the proxy, strip the default port number from the
URL if one was provided.
Reported-by: Michael Brown
Fixes #6769
Closes #6778
When asked to resume a download, libcurl will convert that to HTTP logic
and if then the entire file is already transferred it will result in a
416 response from the HTTP server. With CURLOPT_FAILONERROR set in that
scenario, it should *not* lead to an error return.
Updated test 1156, added test 1273
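The scenario in API terms, roughly (an existing handle and a known file
size are assumed):

  /* resume at the full file size; the server answers 416 */
  curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE,
                   (curl_off_t)entire_file_size);
  curl_easy_setopt(curl, CURLOPT_FAILONERROR, 1L);
  /* with this fix, curl_easy_perform() no longer fails here */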
Reported-by: Jonathan Watt
Fixes #6740
Closes #6753
... but instead use a private alternative that points to the "driving
transfer" from the connection. We set the "user data" associated with
the connection to be the connectdata struct, but when we drive transfers
the code still needs to know the pointer to the transfer. We can change
the user data to become the Curl_easy handle, but with older nghttp2
versions we cannot dynamically update that pointer properly when
different transfers are used over the same connection.
Closes #6520
HTTP auth "accidentally" worked before this cleanup since the code would
always overwrite the connection credentials with the credentials from
the most recent transfer and since HTTP auth is typically done first
thing, this has not been an issue. It was still wrong and subject to
possible race conditions or future breakage if the sequence of functions
would change.
The data.set.str[] strings MUST remain unmodified exactly as set by the
user, and the credentials to use internally are instead set/updated in
state.aptr.*
Added test 675 to verify different credentials used in two requests done
over a reused HTTP connection, which previously behaved wrongly.
Fixes #6542
Closes #6545
Rename it to 'httpwant' and make a cloned field in the state struct as
well for run-time updates.
Also: refuse non-supported HTTP versions. Verified with test 129.
Closes #6585
... and make sure the code never updates 'set.prefer_ascii' as it breaks
handle reuse which should use the setting as the user specified it.
Added test 1569 to verify: it first makes an FTP transfer with ';type=A'
and then another without type on the same handle and the second should
then use binary. Previously, curl failed this.
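In API terms, the verified sequence is roughly this (host name
illustrative):

  CURL *curl = curl_easy_init();
  if(curl) {
    /* first transfer: ASCII, as requested in the URL */
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file;type=A");
    curl_easy_perform(curl);
    /* second transfer: no type given, must default back to binary */
    curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }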
Closes #6578
... in most cases instead of 'struct connectdata *', but in some cases
in addition to it.
- We mostly operate on transfers and not connections.
- We need the transfer handle to log, store data and more. Everything in
libcurl is driven by a transfer (the CURL * in the public API).
- This work clarifies and separates the transfers from the connections
better.
- We should avoid "conn->data". Since individual connections can be used
by many transfers when multiplexing, making sure that conn->data
points to the current and correct transfer at all times is difficult
and has been notoriously error-prone over the years. The goal is to
ultimately remove the conn->data pointer for this reason.
Closes #6425
When doing a request with a request body expecting a 401/407 back, that
initial request is sent with a zero content-length. Test 177 and more.
Closes #6424
... so that Retry-After and other meta-content can still be used.
Added 1634 to verify. Adjusted test 194 and 281 since --fail now also
includes the header-terminating CRLF in the output before it exits.
Fixes #6408
Closes #6409
When the initial request isn't possible to send in its entirety, the
remainder of request would be delivered to the debug callback as data
and would wrongly be counted internally as body-bytes sent.
Extended test 1295 to verify.
Closes #6328
- enable in the build (configure)
- header parsing
- host name lookup
- unit tests for the above
- CI build
- CURL_VERSION_HSTS bit
- curl_version_info support
- curl -V output
- curl-config --features
- CURLOPT_HSTS_CTRL
- man page for CURLOPT_HSTS_CTRL
- curl --hsts (sets CURLOPT_HSTS_CTRL and works with --libcurl)
- man page for --hsts
- save cache to disk
- load cache from disk
- CURLOPT_HSTS
- man page for CURLOPT_HSTS
- added docs/HSTS.md
- fixed --version docs
- adjusted curl_easy_duphandle
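A minimal sketch of enabling the cache from an application (file name
illustrative):

  CURL *curl = curl_easy_init();
  if(curl) {
    /* turn the in-memory HSTS cache on and persist it to disk */
    curl_easy_setopt(curl, CURLOPT_HSTS_CTRL, (long)CURLHSTS_ENABLE);
    curl_easy_setopt(curl, CURLOPT_HSTS, "hsts.txt");
    /* plain http:// gets upgraded to https:// for known HSTS hosts */
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }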
Closes #5896
... when the chunked framing was added, the size of the "body part" of
the data was calculated wrongly so the debug callback would get told a
header chunk a few bytes too big that would also contain the first few
bytes of the request body.
Reported-by: Dirk Wetter
Ref: #6144
Closes #6147
Whitespace is spelled without a space between white and space, so
make sure to consistently spell it that way across the codebase.
Closes #6023
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
Reviewed-by: Emil Engler <me@emilengler.com>
Provide the HTTP method that was used on the latest request, which might
be relevant for users when there was one or more redirects involved.
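Assuming this refers to CURLINFO_EFFECTIVE_METHOD, usage would look
like this sketch (existing handle assumed, after the transfer):

  char *method = NULL;
  /* after curl_easy_perform() with CURLOPT_FOLLOWLOCATION enabled */
  curl_easy_getinfo(curl, CURLINFO_EFFECTIVE_METHOD, &method);
  if(method)
    printf("last request used: %s\n", method);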
Closes #5511
Updated terminology in docs, comments and phrases to refer to C strings
as "null-terminated". Done to unify with how most other C oriented docs
refer to them and what users in general seem to prefer (based on a
single highly unscientific poll on Twitter).
Reported-by: coinhubs on github
Fixes #5598
Closes #5608
Since the connection can be used by many independent requests (using
HTTP/2 or HTTP/3), things like user-agent and other transfer-specific
data MUST NOT be kept connection oriented as it could lead to requests
using the wrong strings. This struct data was lingering like this due
to old HTTP/1 legacy thinking, where it didn't matter.
Fixes #5566
Closes #5567
Instead of discussing if there's value or meaning (implied or not) in
the colors, let's use words without the same possibly negative
associations.
Closes #5546
When the method is updated inside libcurl we must still not change the
method as set by the user as then repeated transfers with that same
handle might not execute the same operation anymore!
This fixes the libcurl part of #5462
Test 1633 added to verify.
Closes #5499
A common set of functions instead of many separate implementations for
creating buffers that can grow when appending data to them. Existing
functionality has been ported over.
In my early basic testing, the total number of allocations seem at
roughly the same amount as before, possibly a few less.
See docs/DYNBUF.md for a description of the API.
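docs/DYNBUF.md is authoritative; the general shape of the internal API
is roughly this sketch:

  struct dynbuf buf;
  Curl_dyn_init(&buf, 1024); /* the maximum size it may grow to */
  if(!Curl_dyn_add(&buf, "hello ") &&
     !Curl_dyn_add(&buf, "world")) {
    /* Curl_dyn_ptr() returns the null-terminated contents */
    fprintf(stderr, "%zu: %s\n", Curl_dyn_len(&buf), Curl_dyn_ptr(&buf));
  }
  Curl_dyn_free(&buf);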
Closes #5300
In a debug build, setting the environment variable "CURL_SMALLREQSEND"
will make the first HTTP request send no more bytes than the set
amount, thus verifying that the logic for handling a split HTTP request
send works correctly.
We have logic that checks if we get a >= 400 response code back before
the upload is done, and that logic got confused since the upload wasn't
"done" and yet there was no data left to send!
Reported-by: IvanoG on github
Fixes #4996
Closes #5002
When doing a request with a body + Expect: 100-continue and the server
responds with a 417, the same request will be retried immediately
without the Expect: header.
Added test 357 to verify.
Also added a control instruction to tell the sws test server to not read
the request body if Expect: is present, which the new test 357 uses.
Reported-by: bramus on github
Fixes #4949
Closes #4964
It could accidentally let the connection get used by more than one
thread, leading to double-free and more.
Reported-by: Christopher Reid
Fixes #4544
Closes #4557
... and use internally. This function will return TIME_T_MAX instead of
failure if the parsed data is found to be larger than what can be
represented. TIME_T_MAX being the largest value curl can represent.
Reviewed-by: Daniel Gustafsson
Reported-by: JanB on github
Fixes #4152
Closes #4651
The 'share object' only sets the storage area for cookies. The "cookie
engine" still needs to be enabled or activated using the normal cookie
options.
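In other words, an application that wants the shared cookies actually
used still needs both parts; roughly:

  CURLSH *share = curl_share_init();
  curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);

  CURL *curl = curl_easy_init();
  curl_easy_setopt(curl, CURLOPT_SHARE, share);   /* storage only */
  curl_easy_setopt(curl, CURLOPT_COOKIEFILE, ""); /* enable the engine */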
This caused the curl command line tool to accidentally use cookies
without having been told to, since curl switched to using shared cookies
in 7.66.0.
Test 1166 verifies
Updated test 506
Fixes #4429
Closes #4434
When a username and password are provided in the URL, they were wrongly
removed from the stored URL so that subsequent uses of the same URL
wouldn't find the credentials. This made doing HTTP auth with multiple
connections (like Digest) misbehave.
Regression from 46e164069d (7.62.0)
Test case 335 added to verify.
Reported-by: Mike Crowe
Fixes #4228
Closes #4229
RFC 7838 section 5:
When using an alternative service, clients SHOULD include an Alt-Used
header field in all requests.
Removed CURLALTSVC_ALTUSED again (feature is still EXPERIMENTAL thus
this is deemed ok).
You can disable sending this header just like you disable any other HTTP
header in libcurl.
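That is, the usual header-removal trick applies; a sketch assuming an
existing easy handle:

  /* a header given with no value removes it from the request */
  struct curl_slist *hdrs = curl_slist_append(NULL, "Alt-Used:");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);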
Closes #4199
Even though it cannot fall back to a lower HTTP version automatically. The
safer way to upgrade remains via CURLOPT_ALTSVC.
CURLOPT_H3 no longer has any bits that do anything and might be removed
before we remove the experimental label.
Updated the curl tool accordingly to use "--http3".
Closes #4197
This is only the libcurl part that provides the information. There's no
user of the parsed value. This change includes three new tests for the
parser.
Ref: #3794
It was used (intended) to pass in the size of the 'socks' array that is
also passed to these functions, but was rarely actually checked/used and
the array is defined to a fixed size of MAX_SOCKSPEREASYHANDLE entries
that should be used instead.
Closes #4169
If using the read callback for HTTP_POST, and POSTFIELDSIZE is not set,
automatically add a Transfer-Encoding: chunked header, same as it is
already done for HTTP_PUT, HTTP_POST_FORM and HTTP_POST_MIME. Update
test 1514 according to the new behaviour.
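The newly covered case, as a sketch ('fp' is an assumed open FILE *):

  static size_t read_cb(char *buf, size_t size, size_t nitems, void *arg)
  {
    return fread(buf, size, nitems, (FILE *)arg); /* 0 signals EOF */
  }

  curl_easy_setopt(curl, CURLOPT_POST, 1L);
  curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
  curl_easy_setopt(curl, CURLOPT_READDATA, fp);
  /* no CURLOPT_POSTFIELDSIZE set: libcurl now adds
     Transfer-Encoding: chunked on its own */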
Closes #4138
Use configure --with-ngtcp2 or --with-quiche.
Using either option will enable an HTTP/3 build.
Co-authored-by: Alessandro Ghedini <alessandro@ghedini.me>
Closes #3500
With CURLOPT_TIMECONDITION set, a header is automatically added (e.g.
If-Modified-Since). Allow this to be replaced or suppressed with
CURLOPT_HTTPHEADER.
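A sketch of both variants (timestamp and header value illustrative):

  curl_easy_setopt(curl, CURLOPT_TIMECONDITION,
                   (long)CURL_TIMECOND_IFMODSINCE);
  curl_easy_setopt(curl, CURLOPT_TIMEVALUE, 1590000000L);
  /* replace the automatically added header... */
  struct curl_slist *h = curl_slist_append(NULL,
    "If-Modified-Since: Wed, 20 May 2020 00:00:00 GMT");
  /* ...or suppress it entirely by passing just "If-Modified-Since:" */
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, h);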
Fixes #4103
Closes #4109
There were a leftover few prototypes of Curl_ functions that we used to
export but no longer do, this removes those prototypes and cleans up any
comments still referring to them.
Curl_write32_le(), Curl_strcpy_url(), Curl_strlen_url(), Curl_up_free()
Curl_concat_url(), Curl_detach_connnection(), Curl_http_setup_conn()
were made static in 05b100aee2.
Curl_http_perhapsrewind() made static in 574aecee20.
For the remainder, I didn't trawl the Git logs hard enough to capture
their exact time of deletion, but they were all gone: Curl_splayprint(),
Curl_http2_send_request(), Curl_global_host_cache_dtor(),
Curl_scan_cache_used(), Curl_hostcache_destroy(), Curl_second_connect(),
Curl_http_auth_stage() and Curl_close_connections().
Closes #4096
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
The header buffer size calculation can, from static analysis, seem to
overflow as it performs an addition between two size_t variables and
stores the result in a size_t variable. Overflow is however guarded
against elsewhere since the input to the addition is regulated by
the maximum read buffer size. Clarify this with a comment since the
question was asked.
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
To make sure an HTTP/2 stream registers the end of stream.
Bug #4043 made me find this problem but this fix doesn't correct the
reported issue.
Closes #4068
Responses with status codes 1xx, 204 or 304 don't have a response body. For
these, don't parse these headers:
- Content-Encoding
- Content-Length
- Content-Range
- Last-Modified
- Transfer-Encoding
This change ensures that HTTP/2 upgrades work even if a
"Content-Length: 0" or a "Transfer-Encoding: chunked" header is present.
Co-authored-by: Daniel Stenberg
Closes #3702
Fixes #3968
Closes #3977
They serve very little purpose and mostly just add noise. Most of them
have been around for a very long time. I read them all before removing
or rephrasing them.
Ref: #3876
Closes #3883
* Adjusted unit tests 2056, 2057
* do not generally close connections with CURLAUTH_NEGOTIATE after every request
* moved negotiatedata from UrlState to connectdata
* Added stream rewind logic for CURLAUTH_NEGOTIATE
* introduced negotiatedata::GSS_AUTHDONE and negotiatedata::GSS_AUTHSUCC
* Consider authproblem state for CURLAUTH_NEGOTIATE
* Consider reuse_forbid for CURLAUTH_NEGOTIATE
* moved and adjusted negotiate authentication state handling from
output_auth_headers into Curl_output_negotiate
* Curl_output_negotiate: ensure auth done is always set
* Curl_output_negotiate: Set auth done also if result code is
GSS_S_CONTINUE_NEEDED/SEC_I_CONTINUE_NEEDED as this result code may
also indicate the last challenge request (only works with disabled
Expect: 100-continue and CURLOPT_KEEP_SENDING_ON_ERROR -> 1)
* Consider "Persistent-Auth" header, detect if not present;
Reset/Cleanup negotiate after authentication if no persistent
authentication
* apply changes introduced with #2546 for negotiate rewind logic
Fixes #1261
Closes #1975
The check that prevents the payload from being sent while
authenticating doesn't properly check whether the authentication is
done or not. There are cases where the proxy responds "200 OK" before
sending the authentication challenge. This change takes care of that.
Fixes #2431
Closes #3669
- no need to have them protocol specific
- no need to set pointers to them with the Curl_setup_transfer() call
- make Curl_setup_transfer() operate on a transfer pointer, not
connection
- switch some counters from long to the more proper curl_off_t type
Closes #3627
Without it set, we would unwillingly trigger the "HTTP error before end
of send, stop sending" condition even if the entire POST body had been
sent (since it wouldn't know the expected size) which would
unnecessarily log that message and close the connection when it didn't
have to.
Reported-by: Matt McClure
Bug: https://curl.haxx.se/mail/archive-2019-02/0023.html
Closes #3624
This allows the compiler to pack and align the structs better in
memory. For a rather feature-complete build on x86_64 Linux, gcc 8.1.2
makes the Curl_easy struct 4.9% smaller. From 6312 bytes to 6000.
Removed an unused struct field.
No functionality changes.
Closes #3610
Previously the function would edit the provided header in-place when a
semicolon is used to signify an empty header. This made it impossible to
use the same set of custom headers in multiple threads simultaneously.
This approach now makes a local copy when it needs to edit the string.
Reported-by: d912e3 on github
Fixes #3578
Closes #3579
urlapi: turn three local-only functions into statics
conncache: make conncache_find_first_connection static
multi: make detach_connnection static
connect: make getaddressinfo static
curl_ntlm_core: make hmac_md5 static
http2: make two functions static
http: make http_setup_conn static
connect: make tcpnodelay static
tests: make UNITTEST a thing to mark functions with, so they can be static for
normal builds and non-static for unit test builds
... and mark Curl_shuffle_addr accordingly.
url: make up_free static
setopt: make vsetopt static
curl_endian: make write32_le static
rtsp: make rtsp_connisdead static
warnless: remove unused functions
memdebug: remove one unused function, made another static
Added CURLOPT_HTTP09_ALLOWED and --http0.9 for this purpose.
For now, both the tool and library allow HTTP/0.9 by default.
docs/DEPRECATE.md lays out the plan for when to reverse that default: 6
months after the 7.64.0 release. The options are added already now so
that applications/scripts can start using them already now.
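Library-side usage is a single option (the tool flag is --http0.9):

  /* allow header-less HTTP/0.9 responses for this handle;
     set 0L to refuse them once the default flips */
  curl_easy_setopt(curl, CURLOPT_HTTP09_ALLOWED, 1L);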
Fixes #2873
Closes #3383
This adds the CURLOPT_TRAILERDATA and CURLOPT_TRAILERFUNCTION
options that allow a callback based approach to sending trailing headers
with chunked transfers.
The test server (sws) was updated to take into account the detection of
the end of transfer in the presence of trailing headers.
Test 1591 checks that trailing headers can be sent using libcurl.
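The callback shape, sketched (the header name is illustrative; the
upload must use chunked encoding for trailers to be sent at all):

  static int trailer_cb(struct curl_slist **tr, void *userdata)
  {
    /* invoked when the final chunk is about to go out */
    *tr = curl_slist_append(*tr, "Checksum: 1a2b3c4d");
    return CURL_TRAILERFUNC_OK; /* CURL_TRAILERFUNC_ABORT to give up */
  }

  curl_easy_setopt(curl, CURLOPT_TRAILERFUNCTION, trailer_cb);
  curl_easy_setopt(curl, CURLOPT_TRAILERDATA, NULL);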
Closes #3350
Only allow secure origins to be able to write cookies with the
'secure' flag set. This reduces the risk of non-secure origins
to influence the state of secure origins. This implements IETF
Internet-Draft draft-ietf-httpbis-cookie-alone-01 which updates
RFC6265.
Closes #2956
Reviewed-by: Daniel Stenberg <daniel@haxx.se>
- Include query in the path passed to generate HTTP auth.
Recent changes to use the URL API internally (46e1640, 7.62.0)
inadvertently broke authentication URIs by omitting the query.
Fixes https://github.com/curl/curl/issues/3353
Closes #3356
The http status code 204 (No Content) should not change the "condition
unmet" flag. Only the http status code 304 (Not Modified) should do
this.
Closes #359
The function does not return the same value as snprintf() normally does,
so readers may be misled into thinking the code works differently than
it actually does. A different function name makes this easier to detect.
Reported-by: Tomas Hoger
Assisted-by: Daniel Gustafsson
Fixes #3296
Closes #3297
Deal with tiny "HTTP/0.9" (header-less) responses by checking the
status-line early, even before a full "HTTP/" is received to allow
detecting 0.9 properly.
Test 1266 and 1267 added to verify.
Fixes #2420
Closes #2872
So far, the code tries to pick an authentication method only if
user/password credentials are available, which is not the case for
Bearer authentication...
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes #2754
The Bearer authentication was added to cURL 7.61.0, but there is a
problem: if CURLAUTH_ANY is selected, and the server supports multiple
authentication methods including the Bearer method, we strongly prefer
that latter method (only CURLAUTH_NEGOTIATE beats it), and if the Bearer
authentication fails, we will never even try to attempt any other
method.
This is particularly unfortunate when we already know that we do not
have any Bearer token to work with.
Such a scenario happens e.g. when using Git to push to Visual Studio
Team Services (which supports Basic and Bearer authentication among
other methods) and specifying the Personal Access Token directly in the
URL (this approach is frequently taken by automated builds).
Let's make sure that we have a Bearer token to work with before we
select the Bearer authentication among the available authentication
methods.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Closes #2754
- Get rid of a variable that was generating a false positive warning
(uninitialized)
- Fix issues in tests
- Reduce scope of several variables all over
etc
Closes #2631
This avoids appending error data to already existing good data.
Test 92 is updated to match this change.
New test 1156 checks all combinations of --range/--resume, --fail,
Content-Range header and http status code 200/416.
Fixes #1163
Reported-By: Ithubg on github
Closes #2578
In debug mode, MinGW-w64's GCC 7.3 issues null-dereference warnings
when dereferencing pointers after DEBUGASSERT-ing that they are not
NULL.
Fix this by removing the DEBUGASSERTs.
Suggested-by: Daniel Stenberg
Ref: https://github.com/curl/curl/pull/2463
Previously, it would only check for max length if the existing alloc
buffer was too small to fit it, which often would make the header still
get used.
Reported-by: Guido Berhoerster
Bug: https://curl.haxx.se/mail/lib-2018-02/0056.html
Closes #2315
... unless CURLOPT_UNRESTRICTED_AUTH is set to allow them. This matches how
curl already handles Authorization headers created internally.
Note: this changes behavior slightly, for the sake of reducing mistakes.
Added test 317 and 318 to verify.
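Put differently, as a sketch (token illustrative, handle assumed):

  struct curl_slist *h =
    curl_slist_append(NULL, "Authorization: Bearer s3cret");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, h);
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
  /* without this, the custom Authorization header is now dropped
     when a redirect leaves the original host */
  curl_easy_setopt(curl, CURLOPT_UNRESTRICTED_AUTH, 1L);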
Reported-by: Craig de Stigter
Bug: https://curl.haxx.se/docs/adv_2018-b3bf.html
This is implemented as an output streaming stack of unencoders, the last
calling the client write procedure.
New test 230 checks this feature.
Bug: https://github.com/curl/curl/pull/2002
Reported-By: Daniel Bankhead
Since curl 7.55.0, NetworkManager almost always failed its connectivity
check by timeout. I bisected this to 5113ad04 (http-proxy: do the HTTP
CONNECT process entirely non-blocking).
This patch replaces !Curl_connect_complete with Curl_connect_ongoing,
which returns false if the CONNECT state was left uninitialized and lets
the connection continue.
Closes #1803
Fixes #1804
Also-fixed-by: Gergely Nagy
Add a new type of callback to Curl_handler which performs checks on
the connection. Alter RTSP so that it uses this callback to do its
own check on connection health.
... to enable sending "OPTIONS *" which wasn't possible previously.
This option currently only works for HTTP.
Added test cases 1298 + 1299 to verify
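Assuming the option referred to is CURLOPT_REQUEST_TARGET (with
--request-target as the tool counterpart), sending "OPTIONS *" becomes:

  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "OPTIONS");
  /* use "*" instead of a path derived from the URL */
  curl_easy_setopt(curl, CURLOPT_REQUEST_TARGET, "*");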
Fixes #1280
Closes #1462
... with a strlen() if no size was set, and do this in the pretransfer
function so that the info is set early. Otherwise, the default strlen()
done on the POSTFIELDS data never sets state.infilesize.
Reported-by: Vincas Razma
Bug: #1294
... since the total amount is low this is faster, easier and reduces
memory overhead.
Also, Curl_expire_done() can now mark an expire timeout as done so that
it never times out.
Closes #1472
If we use FTPS over CONNECT, the TLS handshake for the FTPS control
connection needs to be initiated in the SENDPROTOCONNECT state, not
the WAITPROXYCONNECT state. Otherwise, if the TLS handshake completed
without blocking, the information about the completed TLS handshake
would be saved to a wrong flag. Consequently, the TLS handshake would
be initiated in the SENDPROTOCONNECT state once again on the same
connection, resulting in a failure of the TLS handshake. I was able to
observe the failure with the NSS backend if curl ran through valgrind.
Note that this commit partially reverts curl-7_21_6-52-ge34131d.
When using basic-auth, connections and proxy connections
can be re-used with different Authorization headers since
it does not authenticate the connection (like NTLM does).
For instance, the below command should re-use the proxy
connection, but it currently doesn't:
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -U bob:b -x http://localhost:8181 http://localhost/
This is a regression since refactoring of ConnectionExists()
as part of: cb4e2be7c6
Fix the above by removing the username and password compare
when re-using proxy connection at proxy_info_matches().
However, this fix brings back another bug that would make curl
re-print the old proxy-authorization header of a previous
proxy basic-auth connection because it wasn't cleared.
For instance, in the below command the second request should
fail if the proxy requires authentication, but would succeed
after the above fix (and before aforementioned commit):
curl -v -U alice:a -x http://localhost:8181 http://localhost/
--next -x http://localhost:8181 http://localhost/
Fix this by clearing conn->allocptr.proxyuserpwd after use
unconditionally, same as we do for conn->allocptr.userpwd.
Also fix test 540 to not expect digest auth header to be
resent when connection is reused.
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Closes https://github.com/curl/curl/pull/1350
This flag is meant for the current request based on authentication
state, once the request is done we can clear the flag.
Also change auth.multi to auth.multipass for better readability.
Fixes https://github.com/curl/curl/issues/1095
Closes https://github.com/curl/curl/pull/1326
Signed-off-by: Isaac Boukris <iboukris@gmail.com>
Reported-by: Michael Kaufmann
This fixes an assertion error which occurs when a redirect is done with
a 0-length body via HTTP/2, and the easy handle is reused, but a new
connection is established due to hostname change:
curl: http2.c:1572: ssize_t http2_recv(struct connectdata *,
int, char *, size_t, CURLcode *):
Assertion `httpc->drain_total >= data->state.drain' failed.
To fix this bug, ensure that http2_handle_stream is called.
Fixes #1286
Closes #1302
- While negotiating auth during PUT/POST, if a user-specified
Content-Length header is set, send 'Content-Length: 0'.
This is what we do already in HTTPREQ_POST_FORM and what we did in the
HTTPREQ_POST case (regression since afd288b).
Prior to this change no Content-Length header would be sent in such a
case.
Bug: https://curl.haxx.se/mail/lib-2017-02/0006.html
Reported-by: Dominik Hölzl
Closes https://github.com/curl/curl/pull/1242
Replace use of fixed macro BUFSIZE to define the size of the receive
buffer. Reappropriate CURLOPT_BUFFERSIZE to include enlarging receive
buffer size. Upon setting, resize buffer if larger than the current
default size up to a MAX_BUFSIZE (512KB). This can benefit protocols
like SFTP.
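Application-side, the enlargement is one option; a sketch:

  /* grow the receive buffer, up to the 512kB maximum */
  curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 512 * 1024L);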
Closes #1222
* HTTPS proxies:
An HTTPS proxy receives all transactions over an SSL/TLS connection.
Once a secure connection with the proxy is established, the user agent
uses the proxy as usual, including sending CONNECT requests to instruct
the proxy to establish a [usually secure] TCP tunnel with an origin
server. HTTPS proxies protect nearly all aspects of user-proxy
communications as opposed to HTTP proxies that receive all requests
(including CONNECT requests) in vulnerable clear text.
With HTTPS proxies, it is possible to have two concurrent _nested_
SSL/TLS sessions: the "outer" one between the user agent and the proxy
and the "inner" one between the user agent and the origin server
(through the proxy). This change adds support for such nested sessions
as well.
A secure connection with a proxy requires its own set of the usual SSL
options (their actual descriptions differ and need polishing, see TODO):
--proxy-cacert FILE CA certificate to verify peer against
--proxy-capath DIR CA directory to verify peer against
--proxy-cert CERT[:PASSWD] Client certificate file and password
--proxy-cert-type TYPE Certificate file type (DER/PEM/ENG)
--proxy-ciphers LIST SSL ciphers to use
--proxy-crlfile FILE Get a CRL list in PEM format from the file
--proxy-insecure Allow connections to proxies with bad certs
--proxy-key KEY Private key file name
--proxy-key-type TYPE Private key file type (DER/PEM/ENG)
--proxy-pass PASS Pass phrase for the private key
--proxy-ssl-allow-beast Allow security flaw to improve interop
--proxy-sslv2 Use SSLv2
--proxy-sslv3 Use SSLv3
--proxy-tlsv1 Use TLSv1
--proxy-tlsuser USER TLS username
--proxy-tlspassword STRING TLS password
--proxy-tlsauthtype STRING TLS authentication type (default SRP)
All --proxy-foo options are independent from their --foo counterparts,
except --proxy-crlfile which defaults to --crlfile and --proxy-capath
which defaults to --capath.
Curl now also supports the %{proxy_ssl_verify_result} --write-out variable,
similar to the existing %{ssl_verify_result} variable.
Supported backends: OpenSSL, GnuTLS, and NSS.
* A SOCKS proxy + HTTP/HTTPS proxy combination:
If both --socks* and --proxy options are given, Curl first connects to
the SOCKS proxy and then connects (through SOCKS) to the HTTP or HTTPS
proxy.
TODO: Update documentation for the new APIs and --proxy-* options.
Look for "Added in 7.XXX" marks.
... to make it less likely that we forget that the function actually
does case-insensitive compares. Also replaced several invocations of the
function with a plain strcmp when case sensitivity is not an issue (like
comparing with "-").
Previously it only held references to them, which was reckless as the
thread lock was released so the cookies could get modified by other
handles that share the same cookie jar over the share interface.
CVE-2016-8623
Bug: https://curl.haxx.se/docs/adv_20161102I.html
Reported-by: Cure53
Add the new option CURLOPT_KEEP_SENDING_ON_ERROR to control whether
sending the request body shall be completed when the server responds
early with an error status code.
This is suitable for manual NTLM authentication.
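Usage sketch (existing handle assumed):

  /* finish sending the request body even when the server has already
     responded with an error status code */
  curl_easy_setopt(curl, CURLOPT_KEEP_SENDING_ON_ERROR, 1L);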
Reviewed-by: Jay Satiro
Closes https://github.com/curl/curl/pull/904
With HTTP/2, each transfer is made in an individual logical stream over
the connection, so most errors that previously caused the connection to
get force-closed now instead just kill the stream and not the
connection.
Fixes #941
Hooked up the HTTP authentication layer to query the new 'is mechanism
supported' functions when deciding what mechanism to use.
As per commit 00417fd66c existing functionality is maintained for now.
- the expression of an 'if' was always true
- a 'while' contained a condition that was always true
- use 'if(k->exp100 > EXP100_SEND_DATA)' instead of 'if(k->exp100)'
- fixed a typo
Closes #889
- Change the parser to not require a minor version for HTTP/2.
HTTP/2 connection reuse broke when we changed from HTTP/2.0 to HTTP/2
in 8243a95 because the parser still expected a minor version.
Bug: https://github.com/curl/curl/issues/855
Reported-by: Andrew Robbins, Frank Gevaerts
Only protocols that actually have a protocol registered for ALPN and NPN
should try to get that negotiated in the TLS handshake. That is only
HTTPS (well, http/1.1 and http/2) right now. Previously ALPN and NPN
would wrongly be used in all handshakes if libcurl was built with it
enabled.
Reported-by: Jay Satiro
Fixes #789
curl_printf.h defines printf to curl_mprintf, etc. This can cause
problems with external headers which may use
__attribute__((format(printf, ...))) markers etc.
To avoid that they cause problems with system includes, we include
curl_printf.h after any system headers. That makes these always the
three last headers, kept in this order:
curl_printf.h
curl_memory.h
memdebug.h
None of them include system headers, they all do funny #defines.
Reported-by: David Benjamin
Fixes #743
Previously, connections were closed immediately before the user had a
chance to extract the socket when the proxy required Negotiate
authentication.
This regression was brought in with the security fix in commit
79b9d5f1a4.
Closes #655
Supports HTTP/2 over clear TCP
- Optimize switching to HTTP/2 by removing calls to init and setup
before switching. Switching will eventually call setup and setup calls
init.
- Supports a new version setting to "force" the use of HTTP/2 over
clear TCP
- Adds the command line parameter "--http2-prior-knowledge" to the curl
command line tool.
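From the library, the same is requested like this sketch:

  /* skip Upgrade: and speak HTTP/2 from the first byte */
  curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                   (long)CURL_HTTP_VERSION_2_PRIOR_KNOWLEDGE);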
Renamed the header and source files for this module as they are HTTP
specific and as such, they should use the naming convention as other
HTTP authentication source files do - this reverts commit 260ee6b7bf.
Note: We could also rename curl_ntlm_wb.[c|h], however, the Winbind
code needs separating from the HTTP protocol and migrating into the
vauth directory, thus adding support for Winbind to the SASL based
protocols such as IMAP, POP3 and SMTP.
At one point during the development of HTTP/2, the commit 133cdd29ea
introduced automatic decompression of Content-Encoding as that was what
the spec said then. Now however, HTTP/2 should work the same way as
HTTP/1 in this regard.
Reported-by: Kazuho Oku
Closes #661
The nghttp2 callback deals with the TLS layer and therefore the header
does not need to be broken into chunks.
Bug: https://github.com/curl/curl/issues/659
Reported-by: Kazuho Oku
This commit adds trailer support in HTTP/2. In HTTP/1.1, chunked
encoding must be used to send trailer fields. HTTP/2 deprecated any
transfer-encoding, including chunked. But trailer fields are now
always available.
Since trailer fields are relatively rare these days (gRPC uses them
extensively though), allocating a buffer for trailer fields is done when
we detect that a HEADERS frame containing trailer fields has started. We
use Curl_add_buffer_* functions to buffer all trailers, just like we
do for regular header fields, and then deliver them when the stream is
closed. We have to be careful here so that all data is delivered to the
upper layer before the trailers are sent to the application.
We could deliver trailer fields one by one using the NGHTTP2_ERR_PAUSE
mechanism, but the current method is far simpler.
Another possibility is to use chunked encoding internally for HTTP/2
traffic. I have not tested it, but it could add extra overhead.
Closes #564
The push headers are freed after the push callback has been invoked,
meaning this code should only free the headers if the callback was never
invoked and thus the headers weren't freed at that time.
Reported-by: Davey Shafik
They tend to never get updated anyway so they're frequently inaccurate
and we never go back to revisit them anyway. We document issues to work
on properly in KNOWN_BUGS and TODO instead.
... and assign it from the set.fread_func_set pointer in the
Curl_init_CONNECT function. This A) avoids that we have code that
assigns fields in the 'set' struct (which we always knew was bad) and
more importantly B) it makes it impossible to accidentally leave the
wrong value for when the handle is re-used etc.
Introducing a state-init functionality in multi.c, so that we can set a
specific function to get called when we enter a state. The
Curl_init_CONNECT is thus called when switching to the CONNECT state.
Bug: https://github.com/bagder/curl/issues/346
Closes #346
Return 0 instead of NGHTTP2_ERR_CALLBACK_FAILURE if we can't locate the
SessionHandle. Apparently mod_h2 will sometimes send a frame for a
stream_id we're finished with.
Use nghttp2_session_get_stream_user_data and
nghttp2_session_set_stream_user_data to identify SessionHandles instead
of a hash.
Closes #372
Otherwise it would never be called for an HTTP/2 connection, which has
its own disconnect handler.
I spotted this while debugging <https://bugzilla.redhat.com/1248389>
where the http_disconnect() handler was called on an FTP session handle
causing 'dnf' to crash. conn->data->req.protop of type (struct FTP *)
was reinterpreted as type (struct HTTP *) which resulted in SIGSEGV in
Curl_add_buffer_free() after printing the "Connection cache is full,
closing the oldest one." message.
A previously working version of libcurl started to crash after it was
recompiled with the HTTP/2 support despite the HTTP/2 protocol was not
actually used. This commit makes it work again although I suspect the
root cause (reinterpreting session handle data of incompatible protocol)
still has to be fixed. Otherwise the same will happen when mixing FTP
and HTTP/2 connections and exceeding the connection cache limit.
Reported-by: Tomas Tomecek
Bug: https://bugzilla.redhat.com/1248389
Currently, libcurl rejects responses with "Content-Encoding: compress"
when CURLOPT_ACCEPT_ENCODING is set to "". I think that libcurl should
treat the Content-Encoding "compress" the same as other
Content-Encodings that it does not support, e.g. "bzip2". That means
just ignoring it.