This integrates the "main" test suite (test/test.cc) into Meson.
This makes it possible to run the tests in CI with the Meson-built version of
the library to ensure that nothing breaks unexpectedly.
It also simplifies life for downstream packagers, who do not have to
write a custom build script to split the library and run tests, but can
instead just let Meson do that for them.
In order to test the split version (.h + .cc via split.py):
- Added a test_split program in the test directory whose main purpose is
to verify that the test case code compiles and links against the split
httplib.h version.
- Moved types needed for test cases to the “header part” of httplib.h.
Also added forward declarations of functions needed by test cases.
- Added an include_httplib.cc file which is linked together with test.cc
to verify that inline keywords have not been forgotten.
The changes to httplib.h just move code around (or add forward
declarations), with one exception: detail::split and
detail::process_client_socket have been converted to non-template
functions (taking an std::function instead of using a type parameter for
the function) and forward-declared instead. This avoids having to move
the templates to the “header part”.
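A minimal sketch of that conversion (the parameter names here are illustrative, not the exact declarations in httplib.h):

    #include <functional>

    namespace detail {

    // Before: the callback type was a template parameter, so the definition
    // had to be visible at every call site, i.e. it had to stay in the
    // header part.
    //
    //   template <typename Fn>
    //   void split(const char *b, const char *e, char d, Fn fn);

    // After: the callback is a std::function, so this forward declaration is
    // all the header part needs; the definition can stay in the .cc part.
    void split(const char *b, const char *e, char d,
               std::function<void(const char *, const char *)> fn);

    } // namespace detail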
The RedirectToDifferentPort.Redirect test assumes that port 8080 and
8081 are available on localhost. They aren't on my system, so the test
fails. Improve this by binding to available ports instead of hardcoded
ones.
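A rough sketch of the approach, relying on httplib's bind_to_any_port()/listen_after_bind() pair (the handler and client code below are illustrative only):

    #include <httplib.h>
    #include <thread>

    int main() {
      httplib::Server svr;
      svr.Get("/", [](const httplib::Request &, httplib::Response &res) {
        res.set_content("redirect target", "text/plain");
      });

      // Let the OS pick a free port and remember which one it chose,
      // instead of assuming 8080/8081 are available.
      int port = svr.bind_to_any_port("localhost");
      std::thread listener([&] { svr.listen_after_bind(); });

      httplib::Client cli("localhost", port); // use the actual port
      auto res = cli.Get("/");

      svr.stop();
      listener.join();
      return (res && res->status == 200) ? 0 : 1;
    }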
* Fix client.cc code, since res.error() without operator overloading causes an error in Xcode
* Add unit test to check new error to string with operator overloading
* Add inline as requested in code review comment
* Special function to encode query params
* Fix #include <iomanip>
* Added unescaped charsets to encode_query_param
* Unit tests for encode_query_param
Add unit test for issue #682, fixed in PR #728, which does not contain
a test of its own.
The test creates a fake SSL server, inherited from SSLServer, which
does not create an SSL context. When an SSL client attempts to send it
a request, it gets a timeout error. Prior to PR #728, the client would
wait indefinitely.
Co-authored-by: Michael Tseitlin <michael.tseitlin@concertio.com>
* ssl-verify-host: fix verifying IP addresses containing zeros
If the subject alternative name contained an IP address with a zero octet
(like 10.42.0.1), it could not be verified successfully.
This is because C strings are null-terminated,
so strlen(name) would return a wrong result on the binary address data.
As I cannot see why we cannot trust the length returned by OpenSSL,
let's drop this check.
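An illustration of the failure mode (this is not the verification code itself, just why strlen() cannot be trusted here):

    #include <cstdio>
    #include <cstring>

    int main() {
      // 10.42.0.1 as the four raw octets stored in an IP-address (GEN_IPADD)
      // subject alternative name entry.
      const unsigned char ip[] = {10, 42, 0, 1};
      // strlen() treats the third octet (0) as a terminator and reports 2,
      // so a strlen()-based length check rejects perfectly valid addresses.
      std::printf("strlen = %zu, real length = %zu\n",
                  std::strlen(reinterpret_cast<const char *>(ip)), sizeof(ip));
      return 0;
    }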
* ssl-verify-host: add test case
Let's try to validate against 127.0.0.1.
Co-authored-by: Daniel Ottiger <daniel.ottiger@ch.schindler.com>
* Add server fuzzer target and seed corpus
* Add fuzz_test option to Makefile
* Fix #685
* Try to fix Github actions on Ubuntu
* Added ReadTimeoutSSL test
* Comment out `-fsanitize=address`
* Rebase upstream changes
* remove address sanitizer temporarily
* Add separate Makefile for fuzzing
* 1. Remove special char from dictionary
2. Clean fuzzing/Makefile
* Use specific path to avoid accidentally linking openssl version brought in by oss-fuzz
* remove addition of flags
* Refactor Makefile
* Add missing newline
* Add fuzztest to github workflow
* Fix
Co-authored-by: yhirose <yuji.hirose.bug@gmail.com>
* Fix memory leak caused by X509_STORE
* Add test for repro and address sanitizer to compiler flags
* Add comment
* Sync
* Associate ca_store with ssl context within set_ca_cert_store()
* Split SlowPost test
* Fix #674
Co-authored-by: yhirose <yuji.hirose.bug@gmail.com>
* Fix parsing to parse query string with single space char.
When passed ' ' as a query string, the server crashes because of an illegal memory access in httplib::detail::split. Added checks to make sure the split function is given a valid string with length > 0.
* Fix parsing to parse query string with single space char.
* Fix server crash caused due to regex complexity while matching headers.
While parsing the Content-Type header of a multipart form request, the server crashes due to exhaustion of the maximum number of iterations performed while matching the input string against the content-type regex.
Removed the regex, which might backtrack while matching, and replaced it with manual string processing. Added tests as well.
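A rough sketch of the string-processing approach (illustrative only, not the library's exact implementation):

    #include <string>

    bool parse_multipart_boundary(const std::string &content_type,
                                  std::string &boundary) {
      auto pos = content_type.find("boundary=");
      if (pos == std::string::npos) { return false; }
      boundary = content_type.substr(pos + 9);
      // Strip optional surrounding double quotes.
      if (boundary.size() >= 2 && boundary.front() == '"' &&
          boundary.back() == '"') {
        boundary = boundary.substr(1, boundary.size() - 2);
      }
      return !boundary.empty();
    }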
* Remove magic number
Co-authored-by: Ivan Fefer <fefer.ivan@gmail.com>
Co-authored-by: yhirose <yhirose@users.noreply.github.com>
Co-authored-by: Ivan Fefer <fefer.ivan@gmail.com>
Consider the case where we want to send a lot of data
and the receiver is slower than the sender.
This will first fill up the receiver's queues and, after that,
eventually also the sender's queues,
until the socket is temporarily unable to accept more data to send.
select_write is called with a timeout of zero,
which makes the underlying select call always return immediately
(see http://man7.org/linux/man-pages/man2/select.2.html).
This means that every momentary unavailability will make it return false
for is_writable, and therefore httplib will immediately abort the transfer.
Therefore, make these values configurable in the same way
as the read timeout already is.
Set the default write timeout to 5 seconds,
the same default value used for the read timeout.
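For example, a client sending a large body to a slow receiver can now raise the write timeout explicitly (the endpoint and sizes below are made up for illustration):

    #include <httplib.h>
    #include <string>

    int main() {
      httplib::Client cli("localhost", 8080);
      cli.set_read_timeout(5, 0);   // seconds, microseconds (existing option)
      cli.set_write_timeout(30, 0); // give a slow receiver up to 30 seconds

      std::string large_body(50 * 1024 * 1024, 'x'); // 50 MB of dummy data
      auto res = cli.Post("/upload", large_body, "application/octet-stream");
      return (res && res->status == 200) ? 0 : 1;
    }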
According to RFC 3493 the socket option IPV6_V6ONLY
should be off by default, see
https://tools.ietf.org/html/rfc3493#page-22 (chapter 5.3).
However this does not seem to be the case on all systems.
For instance on any Windows OS, the option is on by default.
Therefore, clear this option in order to allow
a server socket that can support IPv6 and IPv4 at the same time.
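A minimal POSIX-flavored sketch of clearing the option (error handling elided; on Windows the value pointer must be cast to const char*):

    #include <netinet/in.h>
    #include <sys/socket.h>

    int make_dual_stack_socket() {
      int sock = socket(AF_INET6, SOCK_STREAM, 0);
      int no = 0;
      // Turn IPV6_V6ONLY off so the AF_INET6 socket also accepts IPv4
      // (v4-mapped) connections, regardless of the platform default.
      setsockopt(sock, IPPROTO_IPV6, IPV6_V6ONLY, &no, sizeof(no));
      return sock;
    }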
We cannot trivially support such large chunks, and the maximum value
std::strtoul can parse accurately is ULONG_MAX-1. Error out early if the
length is longer than that.
detail::read_content_chunked was using std::stoul to parse the
hexadecimal chunk lengths for "Transfer-Encoding: chunked" requests.
This throws an exception if the string does not begin with any valid
digits. read_content_chunked is not called in the context of a try block,
so this caused the process to terminate.
Rather than use exceptions, I opted for std::strtoul, which is similar to
std::stoul but does not throw exceptions. Since malformed user input is
not particularly exceptional, and some projects are compiled without
exception support, this approach seems both more portable and more
correct.
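A sketch of the exception-free parsing pattern (the function name and error handling are illustrative, not the library's exact code):

    #include <cerrno>
    #include <climits>
    #include <cstdlib>
    #include <string>

    bool parse_chunk_length(const std::string &line, unsigned long &len) {
      errno = 0;
      char *end = nullptr;
      unsigned long value = std::strtoul(line.c_str(), &end, 16);
      // No digits consumed: the line did not start with a valid hex number.
      if (end == line.c_str()) { return false; }
      // ULONG_MAX doubles as strtoul's overflow indicator, so treat it (and
      // anything that overflowed) as "chunk too large" and fail early.
      if (value == ULONG_MAX || errno == ERANGE) { return false; }
      len = value;
      return true;
    }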
* SSLServer: add constructor to pass ssl-certificates and key from memory
* SSLClient: add constructor to pass ssl-certificates and key from memory
* add TestCase for passing certificates from memory to SSLClient/SSLServer
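A hedged usage sketch, assuming constructors that take already-parsed X509/EVP_PKEY objects; the PEM strings are placeholders, and the BIO/PEM calls are standard OpenSSL APIs:

    #include <httplib.h>
    #include <openssl/bio.h>
    #include <openssl/pem.h>

    int main() {
      const char *cert_pem = "-----BEGIN CERTIFICATE-----\n..."; // placeholder
      const char *key_pem  = "-----BEGIN PRIVATE KEY-----\n...";  // placeholder

      BIO *cert_bio = BIO_new_mem_buf(cert_pem, -1);
      BIO *key_bio  = BIO_new_mem_buf(key_pem, -1);
      X509 *cert = PEM_read_bio_X509(cert_bio, nullptr, nullptr, nullptr);
      EVP_PKEY *key = PEM_read_bio_PrivateKey(key_bio, nullptr, nullptr, nullptr);
      BIO_free(cert_bio);
      BIO_free(key_bio);

      // Certificate and key come from memory rather than from file paths.
      httplib::SSLServer svr(cert, key);
      // ... register handlers and call svr.listen(...) as usual ...
      X509_free(cert);
      EVP_PKEY_free(key);
      return 0;
    }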
The regex that parses header lines potentially causes an unlimited
amount of backtracking, which can cause an exception in the libc++ regex
engine.
The exception that occurs looks like this and is identical to the
message of the exception fixed in
https://github.com/yhirose/cpp-httplib/pull/280:
libc++abi.dylib: terminating with uncaught exception of type
std::__1::regex_error: The complexity of an attempted match
against a regular expression exceeded a pre-set level.
This commit eliminates the problematic backtracking.
libc++ (the implementation of the C++ standard library usually used by
Clang) throws an exception for the regex used by parse_headers before
this patch for certain strings. Work around this by simplifying the
regex and partially parsing the header lines "by hand". I have reproduced
this problem with Xcode 11.1, which I believe uses libc++ version 8.
This may be a bug in libc++, as I can't see why the regex should exhibit
pathological run-time complexity for any string. However, it may take a
while for libc++ to be fixed and for everyone to migrate to it, so it
makes sense to work around it in this codebase for now.
HTTP Whitespace and regex whitespace are not the same, so we can't use
\s in regexes when parsing HTTP headers. Instead, explicitly specify
what is considered whitespace in the regex.
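An illustrative contrast (not the library's literal regex): HTTP optional whitespace is only SP and HTAB, whereas \s also matches CR, LF, FF, and VT:

    #include <regex>
    #include <string>

    int main() {
      const std::string line = "Content-Length: 42";

      // Too permissive: \s would also swallow CR/LF and other control chars.
      static const std::regex loose(R"((.+?):\s*(.*))");

      // HTTP-correct: whitespace around the value is only space or tab.
      static const std::regex strict(R"((.+?):[ \t]*(.*))");

      std::smatch m;
      return std::regex_match(line, m, strict) ? 0 : 1;
    }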
While trying to implement streaming of internet radio, where a ContentReceiver is needed to handle the audio data, I ran into the problem that important information about the stream data (e.g. the size of audio chunks between metadata) is part of the HTTP header. So I added a ResponseHandler and a new Get variant to gain access to the header before handling the first chunk of data.
The ResponseHandler can abort the request by returning false, in the same way as the ContentReceiver.
A test case was also added.
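A hedged usage sketch of the new Get variant (the stream URL and the icy-metaint header are made up; the two-callback overload is the addition described above):

    #include <httplib.h>
    #include <cstddef>
    #include <cstdlib>

    int main() {
      httplib::Client cli("radio.example.com", 80);
      std::size_t chunk_size = 0;

      auto res = cli.Get(
          "/stream",
          // ResponseHandler: runs once the headers arrive, before any body.
          [&](const httplib::Response &response) {
            chunk_size = std::strtoul(
                response.get_header_value("icy-metaint").c_str(), nullptr, 10);
            return chunk_size > 0; // returning false aborts the request
          },
          // ContentReceiver: called repeatedly with chunks of body data.
          [&](const char *data, size_t data_length) {
            (void)data;
            (void)data_length; // hand the audio data to the decoder here
            return true;
          });
      return res ? 0 : 1;
    }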
This commit modifies the signature of the `Progress` callback
so that its return value will indicate whether the request shall
continue to be processed by returning `true`, or if it shall
be aborted by returning `false`. Such modification will allow
one to cancel an ongoing request before it has completed.
When migrating, developers should modify their `Progress`
callbacks to always return `true` by default in case they
do not want to benefit from the cancellation feature.
A few unit test cases were provided, but anyone should feel
free to provide additional use cases that they find relevant.
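A minimal migration sketch (the URL and cancellation condition are made up; returning `true` unconditionally restores the previous behaviour):

    #include <httplib.h>
    #include <cstdint>

    int main() {
      httplib::Client cli("localhost", 8080);
      auto res = cli.Get("/large-file", [](uint64_t current, uint64_t total) {
        (void)total;
        // Cancel the request once more than 1 MB has been received.
        return current <= 1024 * 1024;
      });
      return res ? 0 : 1;
    }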