A regression in 7.86.0 (via 1e9a538e05) made the tailmatch work
differently than before. This restores the logic to how it used to work:
all names listed in NO_PROXY are tailmatched against the used domain
name; if the lengths are identical, it needs a full match.
Update the docs, update test 1614.
Reported-by: Stuart Henderson
Fixes #9842
Closes #9858
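A minimal sketch of the restored rule, using a hypothetical helper name
(the real logic lives in noproxy.c; the dot-boundary detail for longer
host names is an assumption made for illustration):

#include <stdbool.h>
#include <string.h>
#include <strings.h>

/* Hypothetical helper, illustration only: a NO_PROXY pattern tailmatches
   the host name; identical lengths require a full match, and a longer
   host name is assumed to need the match to begin on a '.' boundary so
   that "ample.com" does not match host "example.com". */
static bool noproxy_tailmatches(const char *host, const char *pattern)
{
  size_t hlen = strlen(host);
  size_t plen = strlen(pattern);

  if(plen > hlen)
    return false;
  if(plen == hlen)                  /* identical lengths: full match */
    return strcasecmp(host, pattern) == 0;
  if(strcasecmp(host + (hlen - plen), pattern))
    return false;                   /* tails differ */
  return pattern[0] == '.' || host[hlen - plen - 1] == '.';
}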
Also ignore trailing dots in both host name and comparison pattern.
Regression in 7.86.0 (from 1e9a538e05)
Extended test 1614 to verify better.
Reported-by: Henning Schild
Fixes #9821
Closes #9822
If the host name is an IP address and the noproxy string contains that
IP address followed by a comma, it would erroneously not match.
Extended test 1614 to verify this combo as well.
Reported-by: Henning Schild
Fixes #9813
Closes #9814
- Replace `Github` with `GitHub`.
- Replace `windows` with `Windows`.
- Replace `advice` with `advise` where a verb is used.
- Remove a few repeated words.
- Replace `a HTTP` with `an HTTP`.
Closes #9802
This applies to both IPv4 and IPv6 addresses, and IPv6 addresses are now
checked "correctly" rather than with string comparisons.
Split out the noproxy checks and functionality into noproxy.c
Added unit test 1614 to verify checking functions.
Reported-by: Mathieu Carbonneaux
Fixes #9773
Fixes #5745
Closes #9775
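A minimal sketch of what checking an IPv6 address "correctly" (as an
address, not a string) can look like, assuming POSIX inet_pton(); this is
illustrative only and not the noproxy.c code:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <string.h>

/* Illustrative only: parse both strings as IPv6 addresses and compare
   the binary form, so that "::1" and "0:0:0:0:0:0:0:1" are considered
   equal even though the strings differ. */
static bool same_ipv6(const char *a, const char *b)
{
  struct in6_addr ia, ib;
  if(inet_pton(AF_INET6, a, &ia) != 1 || inet_pton(AF_INET6, b, &ib) != 1)
    return false;
  return memcmp(&ia, &ib, sizeof(ia)) == 0;
}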
When CURLU_URLENCODE is set, the parser would mistreat the path
component if the URL was specified without a slash like in
http://local.test:80?-123
Extended test 1560 to reproduce and verify the fix.
Reported-by: Trail of Bits
Closes #9763
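A small example of the case in question using the curl_url API; with the
fix the path should come out as "/" and the query as "-123" (a sketch,
error checking omitted):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *u = curl_url();
  char *path = NULL;
  char *query = NULL;

  /* no slash before the '?' - previously CURLU_URLENCODE made the parser
     mistreat the path component for such URLs */
  curl_url_set(u, CURLUPART_URL, "http://local.test:80?-123",
               CURLU_URLENCODE);
  curl_url_get(u, CURLUPART_PATH, &path, 0);
  curl_url_get(u, CURLUPART_QUERY, &query, 0);
  printf("path: %s query: %s\n", path ? path : "(none)",
         query ? query : "(none)");

  curl_free(path);
  curl_free(query);
  curl_url_cleanup(u);
  return 0;
}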
An input like "%.*1$.9999d" would first use the precision taken as an
argument *and* then the precision specified in the string, which is
confusing and wrong. pass1 will now instead return an error on this
double use.
Adjusted unit test 1398 to verify
Reported-by: Peter Goodman
Closes #9754
In some circumstances when doing parallel transfers, the
single_transfer_cleanup() would not be called and then 'inglob' could
leak.
Test 496 verifies
Reported-by: Trail of Bits
Closes #9749
Handle canonical headers and signed headers creation as explained here:
https://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
The algorithm says that the signed and canonical headers must contain at
least host and x-amz-date, so we check whether those are present in the
curl HTTP headers list. If they are, we use the ones entered by the curl
user, otherwise we generate them. Then we lowercase and remove spaces
from each HTTP header, plus host and x-amz-date, and sort them all in
alphabetical order.
This patch also fixes a bug with the host header, which was ignoring the
port.
Closes #7966
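A rough sketch of the canonicalization steps described above
(self-contained illustration, not the actual http_aws_sigv4.c code; the
input list is hypothetical):

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int cmp(const void *a, const void *b)
{
  return strcmp(*(const char * const *)a, *(const char * const *)b);
}

int main(void)
{
  /* hypothetical input; host and x-amz-date would be generated when the
     user did not supply them */
  char headers[][64] = {
    "X-Amz-Date: 20221101T120000Z",
    "Host: example.s3.amazonaws.com",
    "Content-Type: application/json",
  };
  char *list[3];
  size_t n = sizeof(headers) / sizeof(headers[0]);

  for(size_t i = 0; i < n; i++) {
    char *colon = strchr(headers[i], ':');
    for(char *p = headers[i]; p < colon; p++)
      *p = (char)tolower((unsigned char)*p);   /* lowercase the name */
    if(colon[1] == ' ')                        /* drop space after ':' */
      memmove(colon + 1, colon + 2, strlen(colon + 2) + 1);
    list[i] = headers[i];
  }
  qsort(list, n, sizeof(char *), cmp);         /* alphabetical order */
  for(size_t i = 0; i < n; i++)
    printf("%s\n", list[i]);                   /* canonical header lines */
  return 0;
}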
To avoid URL tricks, use the URL parser for this.
This update changes curl's behavior slightly: it now ignores the
possible query part of the URL and only uses the file name from the
actual path. I consider it a bugfix.
"curl -O localhost/name?giveme-giveme" will now save the output in the
local file named 'name'.
Updated test 1210 to verify
Assisted-by: Jay Satiro
Closes #9684
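A hedged sketch of picking the output name via the URL parser rather
than from the raw URL string (illustrative, not the actual tool code):

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

int main(void)
{
  CURLU *u = curl_url();
  char *path = NULL;

  curl_url_set(u, CURLUPART_URL, "http://localhost/name?giveme-giveme", 0);
  if(curl_url_get(u, CURLUPART_PATH, &path, 0) == CURLUE_OK) {
    /* path is "/name" - the query part is not included; the file name is
       whatever follows the last slash */
    const char *slash = strrchr(path, '/');
    printf("output file: %s\n", slash ? slash + 1 : path);
    curl_free(path);
  }
  curl_url_cleanup(u);
  return 0;
}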
curl_ws_recv() now receives data to fill up the provided buffer, but can
return a partial fragment. The function now also gets a pointer to a
curl_ws_frame struct with metadata that also mentions the offset and
total size of the fragment (of which you might be receiving a smaller
piece). This way, large incoming fragments will be "streamed" to the
application. When the curl_ws_frame struct field 'bytesleft' is 0, the
final fragment piece has been delivered.
curl_ws_recv() was also adjusted to work with a buffer size smaller than
the fragment size. (Possibly needless to say as the fragment size can
now be 63 bits large).
curl_ws_send() now supports sending a piece of a fragment, in a
streaming manner, in addition to sending the entire fragment in a single
call if it is small enough. To send a huge fragment, curl_ws_send() can
be used to send it in many small calls by first telling libcurl about
the total expected fragment size, and then send the payload in N number
of separate invokes and libcurl will stream those over the wire.
The struct that curl_ws_meta() returns is now called 'curl_ws_frame'
and it has been extended with two new fields, *offset* and *bytesleft*,
to help describe the passed-on data chunk when a fragment is delivered
in many smaller pieces.
The documentation has been updated accordingly.
Closes #9636
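A minimal receive loop against the updated API, assuming an easy handle
already connected in WebSocket mode (CURLOPT_CONNECT_ONLY set to 2L);
CURLE_AGAIN and close-frame handling are left out for brevity:

#include <stdio.h>
#include <curl/curl.h>

/* Drain one incoming fragment that may be delivered in several pieces.
   The curl_ws_frame meta says how much of the fragment remains after
   each call; when 'bytesleft' is zero the final piece has arrived. */
static CURLcode recv_fragment(CURL *curl)
{
  char buf[256];   /* deliberately smaller than a fragment might be */
  size_t nread;
  const struct curl_ws_frame *meta;
  CURLcode res;

  do {
    res = curl_ws_recv(curl, buf, sizeof(buf), &nread, &meta);
    if(res)
      return res;
    fprintf(stderr, "got %zu bytes at offset %ld, %ld bytes left\n",
            nread, (long)meta->offset, (long)meta->bytesleft);
    /* process buf[0..nread-1] here */
  } while(meta->bytesleft);

  return CURLE_OK;
}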
The ci-test is the normal makefile target invoked in CI jobs. It has
been using the -r option to runtests.pl for a long time, but I find that
it mostly just adds many lines to the test output report without anyone
caring much about those stats.
Remove it.
Closes #9656
This is a bit shorter and a lot safer.
Substrings of unescaped characters are added by a single call to reduce
overhead.
Extend test 1465 to handle more kinds of escapes.
Closes #9653
C string hexadecimal-escaped characters may have more than 2 digits.
This results in a wrong C compiler interpretation of a 2-digit escaped
character when followed by a hex digit character.
The solution retained here is to represent such characters as 3-digit
octal escapes.
Adjust and extend test 1465 for this case.
Closes #9643
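A short plain-C illustration of the ambiguity (unrelated to the
generator code itself):

/* A hex escape consumes every following hex digit, so emitting byte 0x12
   followed by the character '3' as "\x123" is read as one oversized
   escape. An octal escape is at most three digits, so "\0223" is
   unambiguous: byte 0x12 (octal 022) followed by '3'. */
const char ambiguous[] = "\x12" "3";  /* needs string splitting with hex */
const char safe[] = "\0223";          /* 3-digit octal escape, then '3' */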
- Don't show the TESTFAIL message (i.e. tests failed which aren't
ignored) if only ignored tests failed.
Before:
IGNORED: failed tests: 571 612 1056
TESTDONE: 1214 tests out of 1217 reported OK: 99%
Use of uninitialized value $failed in concatenation (.) or string at
./runtests.pl line 6290.
TESTFAIL: These test cases failed:
After:
IGNORED: failed tests: 571 612 1056
TESTDONE: 1214 tests out of 1217 reported OK: 99%
Closes https://github.com/curl/curl/pull/9648
The introduction of CURL_DISABLE_MIME came with some additional bugs:
- Disabled MIME is compiled-in anyway if SMTP and/or IMAP is enabled.
- CURLOPT_MIMEPOST, CURLOPT_MIME_OPTIONS and CURLOPT_HTTPHEADER are
conditioned on HTTP, although also needed for SMTP and IMAP MIME mail
uploads.
In addition, the CURLOPT_HTTPHEADER and --header documentation does not
mention their use for MIME mail.
This commit fixes the problems above.
Closes #9610
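For reference, this is the kind of SMTP + MIME combination those options
must keep serving (a trimmed example with the usual error checking left
out; host and addresses are placeholders):

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  struct curl_slist *hdrs = NULL;
  struct curl_slist *rcpt = NULL;
  curl_mime *mime;
  curl_mimepart *part;

  /* custom mail headers travel via CURLOPT_HTTPHEADER here too */
  hdrs = curl_slist_append(hdrs, "Subject: MIME mail without HTTP");
  rcpt = curl_slist_append(rcpt, "recipient@example.com");

  mime = curl_mime_init(curl);
  part = curl_mime_addpart(mime);
  curl_mime_data(part, "hello from an SMTP MIME upload",
                 CURL_ZERO_TERMINATED);

  curl_easy_setopt(curl, CURLOPT_URL, "smtp://mail.example.com");
  curl_easy_setopt(curl, CURLOPT_MAIL_FROM, "sender@example.com");
  curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, rcpt);
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
  curl_easy_setopt(curl, CURLOPT_MIMEPOST, mime);
  curl_easy_perform(curl);

  curl_mime_free(mime);
  curl_slist_free_all(hdrs);
  curl_slist_free_all(rcpt);
  curl_easy_cleanup(curl);
  return 0;
}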
The existing code tried but did not properly reject alternative services
using negative or too large port numbers.
With this fix, the logic now also flushes the old entries immediately
before adding a new one, so that a subsequent header with an illegal
entry does not flush the already stored entry.
Report from the ongoing source code audit by Trail of Bits.
Adjusted test 356 to verify.
Closes #9607
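An illustrative port check in the same spirit (not the exact altsvc.c
code):

#include <stdbool.h>
#include <stdlib.h>

/* Illustrative only: accept the port part of an Alt-Svc entry only when
   it parses in full as a number within 1-65535. */
static bool valid_altsvc_port(const char *p, long *portp)
{
  char *end = NULL;
  long port = strtol(p, &end, 10);
  if(end == p || *end != '\0')
    return false;
  if(port <= 0 || port > 65535)
    return false;
  *portp = port;
  return true;
}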