- add `SingleRequest->download_done` as indicator that
all download bytes have been received
- remove `stop_reading` bool from readwrite functions
- move excess body handling into client download writer
Closes#12371
- changed header/chunk/handler->readwrite prototypes to accept `buf`,
`blen` and a `pconsumed` pointer. They now get the buffer to work on
and report back how many bytes they consumed (see the sketch after this list)
- eliminated `k->str` in SingleRequest
- improved excess data handling to properly account for any body data
left in the headerb buffer
- eliminated `k->badheader` enum to only be a bool
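A minimal sketch of the consumed-reporting pattern described above; the function name and exact parameters are illustrative assumptions, not curl's internal prototypes:
```c
#include <stddef.h>
#include <string.h>

/* Illustrative only: a parser that receives (buf, blen) and reports via
 * *pconsumed how many bytes it used, so the caller can hand any
 * remainder (e.g. body bytes following the headers) to the next stage. */
static int parse_lines(const char *buf, size_t blen, size_t *pconsumed)
{
  size_t used = 0;
  while(used < blen) {
    const char *eol = memchr(buf + used, '\n', blen - used);
    if(!eol)
      break;                         /* incomplete line: leave it unconsumed */
    used = (size_t)(eol - buf) + 1;  /* consume the complete line */
  }
  *pconsumed = used;
  return 0;
}
```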
Closes#12283
Fix compiler warnings in builds with disabled auths, NTLM and SPNEGO.
E.g. with `CURL_DISABLE_BASIC_AUTH` + `CURL_DISABLE_BEARER_AUTH` +
`CURL_DISABLE_DIGEST_AUTH` + `CURL_DISABLE_NEGOTIATE_AUTH` +
`CURL_DISABLE_NTLM` on non-Windows.
```
./curl/lib/http.c:737:12: warning: unused variable 'result' [-Wunused-variable]
CURLcode result = CURLE_OK;
^
./curl/lib/http.c:995:18: warning: variable 'availp' set but not used [-Wunused-but-set-variable]
unsigned long *availp;
^
./curl/lib/http.c:996:16: warning: variable 'authp' set but not used [-Wunused-but-set-variable]
struct auth *authp;
^
```
Regression from e92edfbef6 #11490
Fixes#12228
Closes#12335
GCC 14 introduces a new -Walloc-size included in -Wextra which gives:
```
src/tool_operate.c: In function ‘add_per_transfer’:
src/tool_operate.c:213:5: warning: allocation of insufficient size ‘1’ for type ‘struct per_transfer’ with size ‘480’ [-Walloc-size]
213 | p = calloc(sizeof(struct per_transfer), 1);
| ^
src/var.c: In function ‘addvariable’:
src/var.c:361:5: warning: allocation of insufficient size ‘1’ for type ‘struct var’ with size ‘32’ [-Walloc-size]
361 | p = calloc(sizeof(struct var), 1);
| ^
```
The calloc prototype is:
```
void *calloc(size_t nmemb, size_t size);
```
So, just swap the number of members and size arguments to match the
prototype, as we're initialising 1 struct of size `sizeof(struct
...)`. GCC then sees we're not doing anything wrong.
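A minimal before/after illustration (the struct here is a stand-in, not curl's actual `struct per_transfer`):
```c
#include <stdlib.h>

struct example { char buf[480]; };   /* stand-in for struct per_transfer */

int main(void)
{
  /* warned about: calloc(sizeof(struct example), 1) */
  /* matches the prototype: one member of sizeof(struct example) bytes */
  struct example *p = calloc(1, sizeof(struct example));
  free(p);
  return 0;
}
```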
Closes#12292
Some servers don't support the ALPN protocol "http/1.0" (e.g. IIS 10);
avoid it and use "http/1.1" instead.
This reverts commit df856cb5c9 (#10183).
Fixes#12259
Closes#12285
This change fixes a compiler warning with gcc-12.2.0 when
`-DCURL_DISABLE_BEARER_AUTH=ON` is used.
```
/home/tox/src/curl/lib/http.c: In function 'Curl_http_input_auth':
/home/tox/src/curl/lib/http.c:1147:12: warning: suggest braces around empty body in an 'else' statement [-Wempty-body]
1147 | ;
| ^
```
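The warning and its usual fix, reduced to a stand-alone illustration (not the actual http.c code):
```c
#include <stdio.h>

int main(void)
{
  int bearer_enabled = 0;   /* stand-in for the disabled-auth condition */
  if(bearer_enabled)
    puts("picked bearer auth");
  else {
    /* intentionally empty: braces instead of a bare `;` keep
       gcc's -Wempty-body quiet and make the intent explicit */
  }
  return 0;
}
```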
Closes#12262
Finding a 'Content-Range:' header in the response changed the handling.
Add test case 1475 to verify -C - with 416 and Content-Range: header,
which is almost exactly like test 194 which instead uses a fixed -C
offset. Adjusted test 194 to also be considered fine.
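For reference, a rough libcurl-level sketch of what the tool's `-C -` amounts to: resume from the current size of the local file, so a 416 with a matching Content-Range then means the file is already complete. The handle is assumed to be otherwise configured (URL, output target); names are placeholders.
```c
#include <curl/curl.h>
#include <sys/stat.h>

static CURLcode resume_download(CURL *curl, const char *filename)
{
  struct stat st;
  curl_off_t offset = 0;
  if(stat(filename, &st) == 0)
    offset = (curl_off_t)st.st_size;   /* continue after what we already have */
  curl_easy_setopt(curl, CURLOPT_RESUME_FROM_LARGE, offset);
  return curl_easy_perform(curl);
}
```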
Fixes#10521
Reported-by: Smackd0wn
Fixes#12174
Reported-by: Anubhav Rai
Closes#12176
- fix HTTP header parsing to report incomplete
lines it buffers as consumed!
- re-implement the RTP parser for interleaved RTP
messages for robustness. It now keeps its
state at the connection
- RTSP protocol handler "readwrite" implementation
now tracks if the response is before/in/after
header parsing or "in" a body by calling
"Curl_http_readwrite_headers()" itself. This
allows it to know when non-RTP bytes are "junk"
or HEADER or BODY.
- tested with #12035 and various small receive
sizes where current master fails
Closes#12052
- move definitions from content_encoding.h to sendf.h
- move create/cleanup/add code into sendf.c
- installed content_encoding writers will always be called
on Curl_client_write(CLIENTWRITE_BODY)
- Curl_client_cleanup() frees writers and tempbuffers from
paused transfers, regardless of protocol
Closes#11908
- use CLIENTWRITE_BODY *only* when data is actually body data
- add CLIENTWRITE_INFO for meta data that is *not* a HEADER
- debug assertions that BODY/INFO/HEADER is not used mixed
- move `data->set.include_header` check into Curl_client_write
so protocol handlers no longer have to care (sketched after this list)
- add special handling in FTP for `data->set.include_header` for historic,
backward-compatible reasons
- move unpausing of client writes from easy.c to sendf.c, so that
code is in one place and can forward flags correctly
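A conceptual sketch of the rule these bullets describe; the flag names, values and sink function are invented for illustration and are not curl's internal API:
```c
#include <assert.h>
#include <stdio.h>

#define CW_BODY   (1 << 0)
#define CW_HEADER (1 << 1)
#define CW_INFO   (1 << 2)   /* meta data that is not a header */

static size_t deliver(const char *buf, size_t len)
{
  return fwrite(buf, 1, len, stdout);   /* hypothetical client sink */
}

static int client_write(int flags, const char *buf, size_t len,
                        int include_header)
{
  /* exactly one of BODY / HEADER / INFO may be set per call */
  assert(!!(flags & CW_BODY) + !!(flags & CW_HEADER) +
         !!(flags & CW_INFO) == 1);
  /* header/meta data is only passed on when asked for, so protocol
     handlers do not need to check include_header themselves */
  if((flags & (CW_HEADER | CW_INFO)) && !include_header)
    return 0;
  return deliver(buf, len) == len ? 0 : 1;
}
```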
Closes#11885
When bearer auth was disabled, the if/else logic went wrong and caused
problems.
Follow-up to e92edfbef6
Fixes#11892
Reported-by: Aleksander Mazur
Closes#11895
Not the counter that accumulates all headers over all redirects.
Follow-up to 3ee79c1674
Do a second check at 20 times the limit for the accumulated size of
all headers.
Fixes#11871
Reported-by: Joshix-1 on github
Closes#11872
- set CURL_CI for pytest runs in CI environments
- exclude timing sensitive tests from CI runs
- for failed results, list only the log and stat of
the failed transfer
- fix typo in http.c comment
Closes#11812
- refs #11342 where errors with git https interactions
were observed
- problem was caused by 1st sends of size larger than 64KB
which resulted in later retries of 64KB only
- limit sending of 1st block to 64KB
- adjust h2/h3 filters to cope with parsing the HTTP/1.1
formatted request in chunks
- introducing Curl_nwrite() as companion to Curl_write()
for the many cases where the sockindex is already known
Fixes#11342 (again)
Closes#11803
In this situation, only part of the data has been sent before aborting
so the connection is no longer usable.
Assisted-by: Jay Satiro
Fixes#11678
Closes#11679
When the legacy CURLOPT_HTTPPOST option is used, it gets converted into
the modern mimepost struct at first use. This data is (now) kept for the
entire transfer and not only per single HTTP request. This re-enables
rewind in the beginning of the second request instead of in end of the
first, as brought by 1b39731.
The request struct is per-request data only.
Extend test 650 to verify.
Fixes#11680
Reported-by: yushicheng7788 on github
Closes#11682
In order to get Negotiate (SPNEGO) authentication to work in HTTP you
used to be required to provide a (fake) user name (this concerned both
curl and the lib) because the code wrongly only considered
authentication if there was a user name provided, as in:
curl -u : --negotiate https://example.com/
This commit leverages the `struct auth` want member to figure out if the
user enabled CURLAUTH_NEGOTIATE, effectively removing the requirement of
setting a user name both in curl and the lib.
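In libcurl terms, requesting Negotiate is now sufficient on its own; a minimal sketch (URL is a placeholder):
```c
#include <curl/curl.h>

static CURLcode fetch_with_negotiate(const char *url)
{
  CURLcode res = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    /* ask for SPNEGO; no CURLOPT_USERPWD ":" workaround needed anymore */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_NEGOTIATE);
    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return res;
}
```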
Signed-off-by: Marin Hannache <git@mareo.fr>
Reported-by: Enrico Scholz
Fixes https://sourceforge.net/p/curl/bugs/440/
Fixes#1161
Closes#9047
To avoid abuse. The limit is set to 300 KB for the accumulated size of
all received HTTP headers for a single response. Incomplete research
suggests that Chrome uses a 256-300 KB limit, while Firefox allows up to
1MB.
Closes#11582
HTTP/1 connections that are upgraded to HTTP/2 should not be picked up
for reuse and multiplexing by other handles until the 101 switching
process is completed.
Lots-of-debugging-by: Stefan Eissing
Reported-by: Richard W.M. Jones
Bug: https://curl.se/mail/lib-2023-07/0045.html
Closes#11557
Introduce a --enable-form-api configure option to control its inclusion
in builds. The condition name defined for it is CURL_DISABLE_FORM_API.
Form api code is dependent on MIME: configure and CMake handle this
dependency automatically: CMake by making it a dependent option
explicitly, configure by inheriting the MIME value by default and
rejecting explicit incompatible values.
"form-api" is now a new hidden test feature.
Update libcurl modules to respect this option and adjust tests
accordingly.
Closes#9621
- adding tests using very large passwords in auth
- fixes general http sending to treat h3 like h2, and
not like http1.1
- eliminate H2_HEADER max definitions and use the common
DYN_HTTP_REQUEST everywhere, different limits do not help
- fix http2 handling of requests denied by nghttp2 on send
to immediately report the refused stream
Closes#11509
- a regression introduced by c9ec851211
where optimization of small POST bodies leads to a new code path
for such uploads that did not trigger the "done sending" event
- add triggering this event for early "upload_done" situations
Fixes#11485
Closes#11487
Reported-by: Aleksander Mazur
Previously it would count the size of the entire outgoing request and
not just the size of only the Cookie: header field - which was the
intention.
This could make the check be off by several hundred bytes in some cases.
Closes#11331
Out of 415 labels throughout the code base, 86 of those labels were
not at the start of the line. Which means labels always at the start of
the line is the favoured style overall with 329 instances.
Out of the 86 labels not at the start of the line:
* 75 were indented with the same indentation level of the following line
* 8 were indented with exactly one space
* 2 were indented with one fewer indentation level than the following
line
* 1 was indented with the indentation level of the following line minus
three spaces (probably unintentional)
Co-Authored-By: Viktor Szakats
Closes#11134
By making sure we set state.upload based on the set.method value and not
independently as set.upload, we reduce confusion and mixup risks, both
internally and externally.
Closes#11017
- with `--proxy-http2` allow h2 ALPN negotiation to
forward proxies (see the sketch after this list)
- applies to http: requests against an https: proxy only,
as https: requests will auto-tunnel
- adding a HTTP/1 request parser in http1.c
- removed h2h3.c
- using new request parser in nghttp2 and all h3 backends
- adding test 2603 for request parser
- adding h2 proxy test cases to test_10_*
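On the library side this maps, to my understanding, to the CURLPROXY_HTTPS2 proxy type (treat that pairing as an assumption of this sketch); proxy address and URL are placeholders:
```c
#include <curl/curl.h>

static void setup_h2_capable_proxy(CURL *curl)
{
  /* an http: request through an https: proxy that may speak HTTP/2 */
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");
  curl_easy_setopt(curl, CURLOPT_PROXY, "https://proxy.example:443");
  curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTPS2);
}
```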
scorecard.py: request scoring accidentally always ran curl
with '-v'. Removed that, expect double numbers.
labeller: added http1.* and h2-proxy sources to detection
Closes#10967
- use bufq for send/receive of network data
- use bufq for send/receive of stream data
- use HTTP/2 flow control with no-auto updates to control the
amount of data we are buffering for a stream
HTTP/2 stream window set to 128K after local tests, defined
code constant for now
- eliminating PAUSEing nghttp2 processing when receiving data
since a stream can now take in all DATA nghttp2 forwards
Improved scorecard and adjusted http2 stream window sizes
- scorecard improved output formatting and options default
- scorecard now also benchmarks small requests / second
Closes#10771
Adding `bufq` (see the sketch after this list):
- at init() time configured to hold up to `n` chunks of `m` bytes each.
- various methods for reading from and writing to it.
- `peek` support to get access to buffered data without copy
- `pass` support to allow buffer flushing on write if it becomes full
- use case: IO buffers for dynamic reads and writes that do not blow up
- distinct from `dynbuf` in that:
- it maintains a read position
- writes on a full bufq return CURLE_AGAIN instead of nuking itself
- Init options:
- SOFT_LIMIT: allow writes into a full bufq
- NO_SPARES: free empty chunks right away
- a `bufc_pool` that can keep a number of spare chunks to
be shared between different `bufq` instances
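A rough, invented illustration of the idea (not the real `bufq` API or layout):
```c
#include <stddef.h>

/* a queue of fixed-size chunks with independent read/write positions:
 * reads consume from the head, writes append at the tail, and a write
 * into a full queue reports an "again"-style error instead of growing */
struct example_chunk {
  unsigned char data[1024];
  size_t r, w;                  /* read/write offsets within the chunk */
  struct example_chunk *next;
};

struct example_bufq {
  struct example_chunk *head;   /* read side */
  struct example_chunk *tail;   /* write side */
  size_t nchunks, max_chunks;   /* configured at init time */
};
```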
Adding `dynhds` (see the sketch after this list):
- a straightforward list of name+value pairs as used for HTTP headers
- headers can be appended dynamically
- headers can be removed again
- headers can be replaced
- headers can be looked up
- http/1.1 formatting into a `dynbuf`
- configured at init() with limits on header counts and total string
sizes
- use case: pass an HTTP request or response around without being version
specific
- express an HTTP request without a curl easy handle (used in h2 proxy
tunnels)
- future extension possibilities:
- conversions of `dynhds` to nghttp2/nghttp3 name+value arrays
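Similarly, an invented illustration of the shape of such a header list (not the real `dynhds` definitions):
```c
#include <stddef.h>

/* version-agnostic name+value pairs: append/remove/replace/lookup
 * operate on this list, and separate helpers render it either as
 * HTTP/1.1 header lines or as an nghttp2/nghttp3 name+value array */
struct example_header {
  char *name;
  char *value;
};

struct example_dynhds {
  struct example_header *list;
  size_t count, max_count;         /* limit on header count */
  size_t strs_len, max_strs_len;   /* limit on total string size */
};
```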
Closes#10720
This is already how curl is documented to behave in Everything curl, but
in actuality only short POSTs skip this. This should knock 30 seconds
off a full run of the test suite since the 100-continue timeout will no
longer be hit.
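Independent of this change, the classic application-side way to opt out of the 100-continue round trip explicitly is an empty Expect: header; a short sketch with placeholder URL and body:
```c
#include <curl/curl.h>

static CURLcode post_without_expect(CURL *curl)
{
  CURLcode res;
  struct curl_slist *hdrs = curl_slist_append(NULL, "Expect:");
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "field=value");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);   /* suppress Expect */
  res = curl_easy_perform(curl);
  curl_slist_free_all(hdrs);
  return res;
}
```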
Closes#10740
- when h2/h3 eyeballing was involved, unix domain socket
configurations were not honoured
- configuring --unix-socket will disable HTTP/3 as candidate for eyeballing
- combination of --unix-socket and --http3-only will fail during initialisation
- adding pytest test_11 to reproduce
Reported-by: Jelle van der Waa
Fixes#10633
Closes#10641
As tested in test_02_07, when firing off 200 urls with --parallel, 199
wait for the first connection to be established. If that is multiuse,
urls are added up to its capacity.
The first url over capacity opens another connection. But subsequent
urls find the same situation and open a connection too. They should
have waited for the second connection to actually connect and make its
capacity known.
This change fixes that by
- setting `connkeep()` early in the HTTP setup handler, as otherwise
a new connection is marked as closeit by default and not considered
for multiuse at all
- checking the "connected" status for a candidate always and continuing
to PIPEWAIT if no alternative is found.
pytest:
- removed "skip" from test_02_07
- added test_02_07b to check that http/1.1 continues to work as before
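A related knob on the application side: CURLOPT_PIPEWAIT tells libcurl to prefer waiting for an existing connection to become available for multiplexing over opening another one. A minimal sketch (URL is a placeholder):
```c
#include <curl/curl.h>

static CURL *make_multiplex_friendly_handle(const char *url)
{
  CURL *h = curl_easy_init();
  if(h) {
    curl_easy_setopt(h, CURLOPT_URL, url);
    /* prefer waiting for multiplexing over opening a new connection */
    curl_easy_setopt(h, CURLOPT_PIPEWAIT, 1L);
  }
  return h;
}
```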
Closes#10456
[CWE-570] V560: A part of conditional expression is always false: conn->bits.authneg.
[CWE-570] V560: A part of conditional expression is always false: conn->handler->protocol & (0 | 0).
https://pvs-studio.com/en/docs/warnings/v560/
Closes#10399
New cfilter HTTP-CONNECT for h3/h2/http1.1 eyeballing.
- filter is installed when `--http3` in the tool is used (or
the equivalent CURLOPT_ done in the library); see the sketch after this list
- starts a QUIC/HTTP/3 connect right away. Should that not
succeed after 100ms (subject to change), a parallel attempt
is started for HTTP/2 and HTTP/1.1 via TCP
- both attempts are subject to IPv6/IPv4 eyeballing, same
as happens for other connections
- tie timeout to the ip-version HAPPY_EYEBALLS_TIMEOUT
- use a `soft` timeout at half the value. When the soft timeout
expires, the HTTPS-CONNECT filter checks if the QUIC filter
has received any data from the server. If not, it will start
the HTTP/2 attempt.
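In libcurl terms, `--http3` corresponds to CURL_HTTP_VERSION_3, which attempts HTTP/3 but may fall back via the eyeballing described above (CURL_HTTP_VERSION_3ONLY would not); URL is a placeholder:
```c
#include <curl/curl.h>

static CURLcode fetch_h3_with_fallback(const char *url)
{
  CURLcode res = CURLE_FAILED_INIT;
  CURL *h = curl_easy_init();
  if(h) {
    curl_easy_setopt(h, CURLOPT_URL, url);
    /* try HTTP/3 first; HTTP/2 and HTTP/1.1 over TCP remain candidates */
    curl_easy_setopt(h, CURLOPT_HTTP_VERSION, (long)CURL_HTTP_VERSION_3);
    res = curl_easy_perform(h);
    curl_easy_cleanup(h);
  }
  return res;
}
```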
HTTP/3(ngtcp2) improvements.
- setting call_data in all cfilter calls similar to http/2 and vtls filters
for use in callback where no stream data is available.
- returning CURLE_PARTIAL_FILE for prematurely terminated transfers
- enabling pytest test_05 for h3
- shifting functionality to "connect" UDP sockets from ngtcp2
implementation into the udp socket cfilter. Because unconnected
UDP sockets are weird. For example they error when adding to a
pollset.
HTTP/3(quiche) improvements.
- fixed upload bug in quiche implementation, now passes 251 and pytest
- error codes on stream RESET
- improved debug logs
- handling of DRAIN during connect
- limiting pending event queue
HTTP/2 cfilter improvements.
- use LOG_CF macros for dynamic logging in debug build
- fix CURLcode on RST streams to be CURLE_PARTIAL_FILE
- enable pytest test_05 for h2
- fix upload pytests and improve parallel transfer performance.
GOAWAY handling for ngtcp2/quiche
- during connect, when the remote server refuses to accept new connections
and closes immediately (so the local conn goes into DRAIN phase), the
connection is torn down and another attempt is made after a short grace
period.
This is the behaviour observed with nghttpx when we tell it to shut
down gracefully. Tested in pytest test_03_02.
TLS improvements
- ALPN selection for SSL/SSL-PROXY filters in one vtls set of functions, replaces
copy of logic in all tls backends.
- standardized the infof logging of offered ALPNs
- ALPN negotiated: have common function for all backends that sets alpn property
and connection related things based on the negotiated protocol (or lack thereof).
- new tests/tests-httpd/scorecard.py for testing h3/h2 protocol implementation.
Invoke:
python3 tests/tests-httpd/scorecard.py --help
for usage.
Improvements on gathering connect statistics and socket access.
- new CF_CTRL_CONN_REPORT_STATS cfilter control for having cfilters
report connection statistics. This is triggered when the connection
has completely connected.
- new void Curl_pgrsTimeWas(..) method to report a timer update with
a timestamp of when it happened. This allows for updating timers
"later", e.g. a connect statistic after full connectivity has been
reached.
- in case of HTTP eyeballing, the previous changes will update
statistics only from the filter chain that "won" the eyeballing.
- new cfilter query CF_QUERY_SOCKET for retrieving the socket used
by a filter chain.
Added methods Curl_conn_cf_get_socket() and Curl_conn_get_socket()
for convenient use of this query.
- Change VTLS backend to query their sub-filters for the socket when
checks during the handshake are made.
HTTP/3 documentation on how https eyeballing works.
Scorecard with Caddy.
- configure can be run with `--with-test-caddy=path` to specify which caddy to use for testing
- tests/tests-httpd/scorecard.py now measures download speeds with caddy
pytest improvements
- adding Makefile to clean gen dir
- adding nghttpx rundir creation on start
- checking httpd version 2.4.55 for test_05 cases where it is needed. Skipping with message if too old.
- catch exception when checking for caddy existence on system.
Closes#10349
- they are mostly pointless in all major jurisdictions
- many big corporations and projects already don't use them
- saves us from pointless churn
- git keeps history for us
- the year range is kept in COPYING
checksrc is updated to allow non-year using copyright statements
Closes#10205
Refactoring of connection setup and happy eyeballing. Move
nghttp2, ngtcp2, quiche and msh3 into connection filters.
- eyeballing cfilter that uses sub-filters for performing parallel connects
- socket cfilter for all transport types, including QUIC
- QUIC implementations in cfilter, can now participate in eyeballing
- connection setup is more dynamic in order to adapt to which filter
really did connect. Relevant to see if an SSL filter needs to be added or
if SSL has already been provided
- HTTP/3 test cases similar to HTTP/2
- multiuse of parallel transfers for HTTP/3, tested for ngtcp2 and quiche
- Fix for data attach/detach in VTLS filters that could lead to crashes
during parallel transfers.
- Eliminating setup() methods in cfilters, no longer needed.
- Improving Curl_conn_is_alive() to replace Curl_connalive() and
integrated ssl alive checks into cfilter.
- Adding CF_CNTRL_CONN_INFO_UPDATE to tell filters to update
connection info and persist it at the easy handle.
- Several more cfilter related cleanups and moves:
- stream_weight and dependency info is now wrapped in struct
Curl_data_priority
- Curl_data_priority is available in HTTP2|HTTP3 builds
- Curl_data_priority members depend on NGHTTP2 support
- handling init/reset/cleanup of priority part of url.c
- data->state.priority same struct, but shallow copy for compares only
- PROTOPT_STREAM has been removed
- Curl_conn_is_multiplex() now available to check on capability
- Adding query method to connection filters.
- ngtcp2+quiche: implementing query for max concurrent transfers.
- Adding is_alive and keep_alive cfilter methods. Adding DATA_SETUP event.
- setting keepalive timestamp on connect
- DATA_SETUP is called after the connection has been completely
setup (but may not be connected yet) to allow filters to initialize
data members they use.
- there is no socket to be had with msh3, it is unclear how select
shall work
- manual test via "curl --http3 https://curl.se" fails with "empty
reply from server".
- Various socket/conn related cleanups:
- Curl_socket is now Curl_socket_open and in cf-socket.c
- Curl_closesocket is now Curl_socket_close and in cf-socket.c
- Curl_ssl_use has been replaced with Curl_conn_is_ssl
- Curl_conn_tcp_accepted_set has been split into
Curl_conn_tcp_listen_set and Curl_conn_tcp_accepted_set
with a clearer purpose
Closes#10141
The message "Mark bundle as not supporting multiuse" was added at commit
29364d93 when an http/2-related bug was fixed, and it appears to be a
leftover trace message.
This message should be removed because:
* it conveys no information to the user
* it is enabled in the default build (--enable-verbose)
* it reads like a warning/unexpected condition
* it is equivalent to "Detected http proto < 2", which is
not a useful message.
* it is a time-wasting red-herring for anyone who encounters
it for the first time while investigating some other, real
problem.
This commit removes the trace message "Mark bundle as not
supporting multiuse"
Closes#10159
When checking if there is a "secure context", which it is if the
connection is to localhost even if the protocol is HTTP, the comparison
for ::1 was done incorrectly and included brackets.
Reported-by: BratSinot on github
Fixes#10120
Closes#10121
Otherwise it stores the HSTS info into the persistent cache for the IDN
name, which will not match when the HSTS status is later checked for
using the decoded name.
Reported-by: Hiroki Kurosawa
Closes#10111
Deprecation and removal of codeset conversion support from the library
have released the strict need for an early binding of mime structures to
an easy handle (https://github.com/curl/curl/commit/2610142).
This constraint currently forces the handle to be created before the mime
structure and the latter cannot be attached to another handle once
created (see https://curl.se/mail/lib-2022-08/0027.html).
This commit removes the handle pointers from the mime structures
allowing more flexibility on their use.
When an easy handle is duplicated, bound mime structures must however
still be duplicated too as their components hold send-time dynamic
information.
Closes#9927
- `Curl_ssl_get_config()` now returns the first config if no SSL proxy
filter is active
- socket filter starts connection only on first invocation of its
connect method
Fixes#9982
Closes#9983
This makes a big difference for cases when the rewind is not actually
necessary to perform (for example HTTP response code 301 converts to GET)
and therefore the rewind can be avoided. In particular for situations
when that rewind fails, for example when reading from a pipe or similar.
Reported-by: Ali Utku Selen
Fixes#9735
Closes#9958
- almost all backend calls pass the Curl_cfilter instance instead of
connectdata+sockindex
- ssl_connect_data is removed from struct connectdata and made internal
to vtls
- ssl_connect_data is allocated in the added filter, kept at cf->ctx
- added function to let an ssl filter access its ssl_primary_config and
ssl_config_data; this selects the proper subfields in conn and data,
for filters added as plain or proxy
- adjusted all backends to use the changed api
- adjusted all backends to access config data via the exposed
functions, no longer using conn or data directly
cfilter renames for clear purpose:
- methods `Curl_conn_*(data, conn, sockindex)` work on the complete
filter chain at `sockindex` and connection `conn`.
- methods `Curl_cf_*(cf, ...)` work on a specific Curl_cfilter
instance.
- methods `Curl_conn_cf()` work on/with filter instances at a
connection.
- rebased and resolved some naming conflicts
- hostname validation (and session lookup) on SECONDARY uses the same
name as on FIRST (again).
new debug macros and removing connectdata from function signatures where not
needed.
adapting schannel for new Curl_read_plain parameter.
Closes#9919
Follow-up to dafdb20a26
HTTP/3 needs a special filter chain, since it does the TLS handling
itself. This PR adds special setup handling in the HTTP protocol handler
that takes care of it.
When a handler, in its setup method, installs filters, the default
behaviour for managing the filter chain is overridden.
Reported-by: Karthikdasari0423 on github
Fixes#9931
Closes#9945
This struct field MUST remain what the application set it to, so that
handle reuse and handle duplication work.
Instead, the request state bit 'no_body' is introduced for code flows
that need to change this in run-time.
Closes#9888
- general construct/destroy in connectdata
- default implementations of callback functions
- connect: cfilters for connect and accept
- socks: cfilter for socks proxying
- http_proxy: cfilter for http proxy tunneling
- vtls: cfilters for primary and proxy ssl
- change in general handling of data/conn
- Curl_cfilter_setup() sets up filter chain based on data settings,
if none are installed by the protocol handler setup
- Curl_cfilter_connect() bootstraps filters into `connected` status,
used by handlers and multi to reach further stages
- Curl_cfilter_is_connected() to check if a conn is connected,
e.g. all filters have done their work
- Curl_cfilter_get_select_socks() gets the sockets and READ/WRITE
indicators for multi select to work
- Curl_cfilter_data_pending() asks filters if they have incoming
data pending for recv
- Curl_cfilter_recv()/Curl_cfilter_send are the general callbacks
installed in conn->recv/conn->send for io handling
- Curl_cfilter_attach_data()/Curl_cfilter_detach_data() inform filters
about addition/removal of a `data` from their connection
- adding vtl functions to prevent use of Curl_ssl globals directly
in other parts of the code.
Reviewed-by: Daniel Stenberg
Closes#9855
Unlike `CONNECT`, currently we don't keep track of whether `PROXY` has
already been sent or not. This causes the `PROXY` header to be sent twice during
`MSTATE_TUNNELING` and `MSTATE_PROTOCONNECT`.
Closes#9878
Fixes#9442
- Replace `Github` with `GitHub`.
- Replace `windows` with `Windows`
- Replace `advice` with `advise` where a verb is used.
- A few fixes on removing repeated words.
- Replace `a HTTP` with `an HTTP`
Closes#9802
Since the date parser allows YYYYMMDD as a date format (due to it being
a bit too generic for parsing this particular header), a large integer
number could wrongly match that pattern and cause the parser to generate
a wrong value.
No date format accepted for this header starts with a decimal number, so
by reversing the check and trying a number first we can deduce that if
that works, it was not a date.
Reported-by: Trail of Bits
Closes#9718
This function is currently located in the lib/http.c module and is
therefore disabled by the CURL_DISABLE_HTTP conditional token.
As it may be called by TLS backends, disabling HTTP results in an
undefined reference error at link time.
Move this function to vauth/vauth.c to always provide it and rename it
as Curl_auth_allowed_to_host() to respect the vauth module naming
convention.
Closes#9600
Next Protocol Negotiation is a TLS extension that was created and used
for agreeing to use the SPDY protocol (the precursor to HTTP/2) for
HTTPS. In the early days of HTTP/2, before the spec was finalized and
shipped, the protocol could be enabled using this extension with some
servers.
curl supports the NPN extension with some TLS backends since then, with
a command line option `--npn` and in libcurl with
`CURLOPT_SSL_ENABLE_NPN`.
HTTP/2 proper is made to use the ALPN (Application-Layer Protocol
Negotiation) extension and the NPN extension has no purposes
anymore. The HTTP/2 spec was published in May 2015.
Today, use of NPN in the wild should be extremely rare and most likely
totally extinct. Chrome removed NPN support in Chrome 51, shipped in
June 2016. Removed in Firefox 53, April 2017.
Closes#9307
- Send no more than 150 cookies per request
- Cap the max length used for a cookie: header to 8K
- Cap the max number of received Set-Cookie: headers to 50
Bug: https://curl.se/docs/CVE-2022-32205.html
CVE-2022-32205
Reported-by: Harry Sintonen
Closes#9048
Add licensing and copyright information for all files in this repository. This
either happens in the file itself as a comment header or in the file
`.reuse/dep5`.
This commit also adds a GitHub workflow to check pull requests and adapts
copyright.pl to the changes.
Closes#8869
Folded header lines will now get passed through like before. The headers
API is adapted and will provide the content unfolded.
Added test 1274 and extended test 1940 to verify.
Reported-by: Petr Pisar
Fixes#8844
Closes#8899
Instead of connclose()ing, since when HTTP/2 is used it doesn't need to
close the connection as stopping the current transfer is enough.
Reported-by: Evangelos Foutras
Closes#8665
They are not allowed by the protocol and allowing them risks that curl
misbehaves somewhere where C functions are used but won't work on the
full contents. Further, they are not supported by hyper and they cause
problems for the new coming headers API work.
Updated test 262 to verify and enabled it for hyper as well
Closes#8601
Also add STRCONST, a macro that returns a string literal and its length
for functions that take "string,len"
Removes unnecessary calls to strlen().
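An illustrative definition of the idea (the real macro lives in the curl sources; this stand-alone version just shows the "literal plus compile-time length" trick):
```c
#include <stdio.h>

/* expands a string literal into the pointer,length pair expected by
 * functions taking (string, len), without a runtime strlen() call */
#define STRCONST(x) x, sizeof(x) - 1

static void takes_str_and_len(const char *s, size_t len)
{
  printf("%.*s (%zu bytes)\n", (int)len, s, len);
}

int main(void)
{
  takes_str_and_len(STRCONST("Content-Type:"));
  return 0;
}
```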
Closes#8391
The only h2 pseudo header that wasn't previously possible to change by a
user. This change also makes it impossible to send a HTTP/1 header that
starts with a colon, which I don't think anyone does anyway.
The other pseudo headers are possible to change indirectly by crafting
the request accordingly.
Reported-by: siddharthchhabrap on github
Fixes#8381
Closes#8393