docs: use lowercase curl and libcurl

Adjusted badwords to find them.

Plus: make badwords run on all markdown files in the repo and update
markdowns previously unchecked

Closes #15898
Daniel Stenberg 2025-01-02 14:43:23 +01:00
parent e694c8284a
commit 3eb57d6ba7
GPG Key ID: 5CC908FDB71E12C2
60 changed files with 273 additions and 241 deletions

View File

@ -8,13 +8,21 @@
# If separator is '=', the string will be compared case sensitively.
# If separator is ':', the check is done case insensitively.
#
# To add whitelisted uses of bad words that are removed before checking for
# the bad ones:
#
# ---(accepted word)
#
my $w;
while(<STDIN>) {
chomp;
if($_ =~ /^#/) {
next;
}
if($_ =~ /^([^:=]*)([:=])(.*)/) {
if($_ =~ /^---(.*)/) {
push @whitelist, $1;
}
elsif($_ =~ /^([^:=]*)([:=])(.*)/) {
my ($bad, $sep, $better)=($1, $2, $3);
push @w, $bad;
$alt{$bad} = $better;
@ -41,6 +49,10 @@ sub file {
$in =~ s/(\[.*\])\(.*\)/$1/g;
# remove backticked texts
$in =~ s/\`.*\`//g;
# remove whitelisted patterns
for my $p (@whitelist) {
$in =~ s/$p//g;
}
foreach my $w (@w) {
my $case = $exactcase{$w};
if(($in =~ /^(.*)$w/i && !$case) ||

View File

@ -66,3 +66,8 @@ couldn't:could not
64-bits:64 bits or 64-bit
32-bits:32 bits or 32-bit
\bvery\b:rephrase using an alternative word
\bCurl\b=curl
\bLibcurl\b=libcurl
---WWW::Curl
---NET::Curl
---Curl Corporation

View File

@ -137,7 +137,7 @@ jobs:
name: checkout
- name: badwords
run: .github/scripts/badwords.pl < .github/scripts/badwords.txt docs/*.md docs/libcurl/*.md docs/libcurl/opts/*.md docs/cmdline-opts/*.md docs/TODO docs/KNOWN_BUGS tests/*.md
run: .github/scripts/badwords.pl < .github/scripts/badwords.txt `git ls-files '**.md'` docs/TODO docs/KNOWN_BUGS packages/OS400/README.OS400
- name: verify-synopsis
run: .github/scripts/verify-synopsis.pl docs/libcurl/curl*.md

View File

@ -21,7 +21,7 @@ Daniel uses a configure line similar to this for easier development:
./configure --disable-shared --enable-debug --enable-maintainer-mode
In environments that don't support configure (i.e. Windows), do this:
In environments that do not support configure (i.e. Windows), do this:
buildconf.bat

View File

@ -6,7 +6,7 @@ SPDX-License-Identifier: curl
# [![curl logo](https://curl.se/logo/curl-logo.svg)](https://curl.se/)
Curl is a command-line tool for transferring data specified with URL syntax.
curl is a command-line tool for transferring data specified with URL syntax.
Learn how to use curl by reading [the
manpage](https://curl.se/docs/manpage.html) or [everything
curl](https://everything.curl.dev/).
@ -55,7 +55,7 @@ page](https://hackerone.com/curl) and not in public.
## Notice
Curl contains pieces of source code that is Copyright (c) 1998, 1999 Kungliga
curl contains pieces of source code that is Copyright (c) 1998, 1999 Kungliga
Tekniska Högskolan. This notice is included here to comply with the
distribution terms.

View File

@ -8,7 +8,7 @@ SPDX-License-Identifier: curl
## There are still bugs
Curl and libcurl keep being developed. Adding features and changing code
curl and libcurl keep being developed. Adding features and changing code
means that bugs sneak in, no matter how hard we try to keep them out.
Of course there are lots of bugs left. Not to mention misfeatures.

View File

@ -59,7 +59,7 @@ not be the best solution.
## Using ECH and DoH
Curl supports using DoH for A/AAAA lookups so it was relatively easy to add
curl supports using DoH for A/AAAA lookups so it was relatively easy to add
retrieval of HTTPS RRs in that situation. To use ECH and DoH together:
```bash
@ -153,7 +153,7 @@ For now, this only works for the OpenSSL and BoringSSL/AWS-LC builds.
## Default settings
Curl has various ways to configure default settings, e.g. in ``$HOME/.curlrc``,
curl has various ways to configure default settings, e.g. in ``$HOME/.curlrc``,
so one can set the DoH URL and enable ECH that way:
```bash

View File

@ -74,7 +74,7 @@ November: configure script and reported successful compiles on several
major operating systems. The never-quite-understood -F option was added and
curl could now simulate quite a lot of a browser. TELNET support was added.
Curl 5 was released in December 1998 and introduced the first ever curl man
curl 5 was released in December 1998 and introduced the first ever curl man
page. People started making Linux RPM packages out of it.
1999
@ -187,7 +187,7 @@ June: curl 7.12.0 introduced IDN support. 10 official web mirrors.
This release bumped the major SONAME to 3 due to the removal of the
`curl_formparse()` function
August: Curl and libcurl 7.12.1
August: curl and libcurl 7.12.1
Public curl release number: 82
Releases counted from the beginning: 109
@ -377,7 +377,7 @@ April: added the cyassl backend (later renamed to wolfSSL)
curl and libcurl are installed in an estimated 5 *billion* instances
world-wide.
October 31: Curl and libcurl 7.62.0
October 31: curl and libcurl 7.62.0
Public curl releases: 177
Command line options: 219

View File

@ -115,7 +115,7 @@ matching public key file must be specified using the `--pubkey` option.
### HTTP
Curl also supports user and password in HTTP URLs, thus you can pick a file
curl also supports user and password in HTTP URLs, thus you can pick a file
like:
curl http://name:passwd@http.server.example/full/path/to/file
@ -170,7 +170,7 @@ curl uses HTTP/1.0 instead of HTTP/1.1 for any `CONNECT` attempts.
curl also supports SOCKS4 and SOCKS5 proxies with `--socks4` and `--socks5`.
See also the environment variables Curl supports that offer further proxy
See also the environment variables curl supports that offer further proxy
control.
Most FTP proxy servers are set up to appear as a normal FTP server from the
@ -199,7 +199,7 @@ should be read from STDIN.
## Ranges
HTTP 1.1 introduced byte-ranges. Using this, a client can request to get only
one or more sub-parts of a specified document. Curl supports this with the
one or more sub-parts of a specified document. curl supports this with the
`-r` flag.
Get the first 100 bytes of a document:
@ -210,7 +210,7 @@ Get the last 500 bytes of a document:
curl -r -500 http://www.example.com/
Curl also supports simple ranges for FTP files as well. Then you can only
curl also supports simple ranges for FTP files as well. Then you can only
specify start and stop position.
Get the first 100 bytes of a document using FTP:
@ -238,7 +238,7 @@ Upload a local file to get appended to the remote file:
curl -T localfile -a ftp://ftp.example.com/remotefile
Curl also supports ftp upload through a proxy, but only if the proxy is
curl also supports ftp upload through a proxy, but only if the proxy is
configured to allow that kind of tunneling. If it does, you can run curl in a
fashion similar to:
@ -264,7 +264,7 @@ For other ways to do HTTP data upload, see the POST section below.
If curl fails where it is not supposed to, if the servers do not let you in,
if you cannot understand the responses: use the `-v` flag to get verbose
fetching. Curl outputs lots of info and what it sends and receives in order to
fetching. curl outputs lots of info and what it sends and receives in order to
let the user see all client-server interaction (but it does not show you the
actual data).
@ -286,7 +286,7 @@ info on a single file for HTTP and FTP. The HTTP information is a lot more
extensive.
For HTTP, you can get the header information (the same as `-I` would show)
shown before the data by using `-i`/`--include`. Curl understands the
shown before the data by using `-i`/`--include`. curl understands the
`-D`/`--dump-header` option when getting files from both FTP and HTTP, and it
then stores the headers in the specified file.
@ -407,7 +407,7 @@ contain certain data.
## User Agent
An HTTP request has the option to include information about the browser that
generated the request. Curl allows it to be specified on the command line. It
generated the request. curl allows it to be specified on the command line. It
is especially useful to fool or trick stupid servers or CGI scripts that only
accept certain browsers.
@ -456,7 +456,7 @@ Example, get a page that wants my name passed in a cookie:
curl -b "name=Daniel" www.example.com
Curl also has the ability to use previously received cookies in following
curl also has the ability to use previously received cookies in following
sessions. If you get cookies from a server and store them in a file in a
manner similar to:
@ -482,7 +482,7 @@ non-existing file to trigger the cookie awareness like:
curl -L -b empty.txt www.example.com
The file to read cookies from must be formatted using plain HTTP headers OR as
Netscape's cookie file. Curl determines what kind it is based on the file
Netscape's cookie file. curl determines what kind it is based on the file
contents. In the above command, curl parses the header and stores the cookies
received from www.example.com. curl sends the stored cookies which match the
request to the server as it follows the location. The file `empty.txt` may be
@ -523,7 +523,7 @@ much explanation!
## Speed Limit
Curl allows the user to set the transfer speed conditions that must be met to
curl allows the user to set the transfer speed conditions that must be met to
let the transfer keep going. By using the switch `-y` and `-Y` you can make
curl abort transfers if the transfer speed is below the specified lowest limit
for a specified time.
@ -562,7 +562,7 @@ stalls during periods.
## Config File
Curl automatically tries to read the `.curlrc` file (or `_curlrc` file on
curl automatically tries to read the `.curlrc` file (or `_curlrc` file on
Microsoft Windows systems) from the user's home directory on startup.
The config file could be made up with normal command line switches, but you
@ -822,7 +822,7 @@ with current logon credentials (SSPI/SPNEGO).
## Environment Variables
Curl reads and understands the following environment variables:
curl reads and understands the following proxy related environment variables:
http_proxy, HTTPS_PROXY, FTP_PROXY
@ -855,7 +855,7 @@ this is a big security risk if someone else gets hold of your passwords,
therefore most Unix programs do not read this file unless it is only readable
by yourself (curl does not care though).
Curl supports `.netrc` files if told to (using the `-n`/`--netrc` and
curl supports `.netrc` files if told to (using the `-n`/`--netrc` and
`--netrc-optional` options). This is not restricted to just FTP, so curl can
use it for all protocols where authentication is used.
@ -876,7 +876,7 @@ ending newline:
## Kerberos FTP Transfer
Curl supports kerberos4 and kerberos5/GSSAPI for FTP transfers. You need the
curl supports kerberos4 and kerberos5/GSSAPI for FTP transfers. You need the
kerberos package installed and used at curl build time for it to be available.
First, get the krb-ticket the normal way, like with the `kinit`/`kauth` tool.
@ -889,7 +889,7 @@ ask for one and you already entered the real password to `kinit`/`kauth`.
## TELNET
The curl telnet support is basic and easy to use. Curl passes all data passed
The curl telnet support is basic and easy to use. curl passes all data passed
to it on stdin to the remote server. Connect to a remote telnet server using a
command line similar to:

View File

@ -6,7 +6,7 @@ SPDX-License-Identifier: curl
# Rustls
[Rustls is a TLS backend written in Rust](https://docs.rs/rustls/). Curl can
[Rustls is a TLS backend written in Rust](https://docs.rs/rustls/). curl can
be built to use it as an alternative to OpenSSL or other TLS backends. We use
the [rustls-ffi C bindings](https://github.com/rustls/rustls-ffi/). This
version of curl depends on version v0.14.0 of rustls-ffi.

View File

@ -4,7 +4,7 @@ Copyright (C) Daniel Stenberg, <daniel@haxx.se>, et al.
SPDX-License-Identifier: curl
-->
# The Art Of Scripting HTTP Requests Using Curl
# The Art Of Scripting HTTP Requests Using curl
## Background
@ -15,12 +15,12 @@ SPDX-License-Identifier: curl
extract information from the web, to fake users, to post or upload data to
web servers are all important tasks today.
Curl is a command line tool for doing all sorts of URL manipulations and
curl is a command line tool for doing all sorts of URL manipulations and
transfers, but this particular document focuses on how to use it when doing
HTTP requests for fun and profit. This document assumes that you know how to
invoke `curl --help` or `curl --manual` to get basic information about it.
Curl is not written to do everything for you. It makes the requests, it gets
curl is not written to do everything for you. It makes the requests, it gets
the data, it sends data and it retrieves the information. You probably need
to glue everything together using some kind of script language or repeated
manual invokes.
@ -475,7 +475,7 @@ SPDX-License-Identifier: curl
new page keeping newly generated output. The header that tells the browser to
redirect is `Location:`.
Curl does not follow `Location:` headers by default, but simply displays such
curl does not follow `Location:` headers by default, but simply displays such
pages in the same manner it displays all HTTP replies. It does however
feature an option that makes it attempt to follow the `Location:` pointers.
@ -485,7 +485,7 @@ SPDX-License-Identifier: curl
If you use curl to POST to a site that immediately redirects you to another
page, you can safely use [`--location`](https://curl.se/docs/manpage.html#-L)
(`-L`) and `--data`/`--form` together. Curl only uses POST in the first
(`-L`) and `--data`/`--form` together. curl only uses POST in the first
request, and then reverts to GET in the following operations.
## Other redirects
@ -532,7 +532,7 @@ SPDX-License-Identifier: curl
[`--cookie-jar`](https://curl.se/docs/manpage.html#-c) option described
below is a better way to store cookies.)
Curl has a full blown cookie parsing engine built-in that comes in use if you
curl has a full blown cookie parsing engine built-in that comes in use if you
want to reconnect to a server and use cookies that were stored from a
previous connection (or hand-crafted manually to fool the server into
believing you had a previous connection). To use previously stored cookies,
@ -540,7 +540,7 @@ SPDX-License-Identifier: curl
curl --cookie stored_cookies_in_file http://www.example.com
Curl's "cookie engine" gets enabled when you use the
curl's "cookie engine" gets enabled when you use the
[`--cookie`](https://curl.se/docs/manpage.html#-b) option. If you only
want curl to understand received cookies, use `--cookie` with a file that
does not exist. Example, if you want to let curl understand cookies from a
@ -549,7 +549,7 @@ SPDX-License-Identifier: curl
curl --cookie nada --location http://www.example.com
Curl has the ability to read and write cookie files that use the same file
curl has the ability to read and write cookie files that use the same file
format that Netscape and Mozilla once used. It is a convenient way to share
cookies between scripts or invokes. The `--cookie` (`-b`) switch
automatically detects if a given file is such a cookie file and parses it,
@ -571,7 +571,7 @@ SPDX-License-Identifier: curl
SSL (or TLS as the current version of the standard is called) offers a set of
advanced features to do secure transfers over HTTP.
Curl supports encrypted fetches when built to use a TLS library and it can be
curl supports encrypted fetches when built to use a TLS library and it can be
built to use one out of a fairly large set of libraries - `curl -V` shows
which one your curl was built to use (if any). To get a page from an HTTPS
server, simply run curl like:
@ -581,7 +581,7 @@ SPDX-License-Identifier: curl
## Certificates
In the HTTPS world, you use certificates to validate that you are the one you
claim to be, as an addition to normal passwords. Curl supports client- side
claim to be, as an addition to normal passwords. curl supports client-side
certificates. All certificates are locked with a passphrase, which you need
to enter before the certificate can be used by curl. The passphrase can be
specified on the command line or if not, entered interactively when curl

View File

@ -7,7 +7,7 @@ SPDX-License-Identifier: curl
Version Numbers and Releases
============================
Curl is not only curl. Curl is also libcurl. They are actually individually
The command line tool curl and the library libcurl are individually
versioned, but they usually follow each other closely.
The version numbering is always built up using the same system:

View File

@ -33,14 +33,14 @@ FTP accept failed. While waiting for the server to connect back when an active
FTP session is used, an error code was sent over the control connection or
similar.
## 11
FTP weird PASS reply. Curl could not parse the reply sent to the PASS request.
FTP weird PASS reply. curl could not parse the reply sent to the PASS request.
## 12
During an active FTP session while waiting for the server to connect back to
curl, the timeout expired.
## 13
FTP weird PASV reply, Curl could not parse the reply sent to the PASV request.
FTP weird PASV reply, curl could not parse the reply sent to the PASV request.
## 14
FTP weird 227 format. Curl could not parse the 227-line the server sent.
FTP weird 227 format. curl could not parse the 227-line the server sent.
## 15
FTP cannot use host. Could not resolve the host IP we got in the 227-line.
## 16
@ -61,7 +61,7 @@ HTTP page not retrieved. The requested URL was not found or returned another
error with the HTTP error code being 400 or above. This return code only
appears if --fail is used.
## 23
Write error. Curl could not write data to a local filesystem or similar.
Write error. curl could not write data to a local filesystem or similar.
## 25
Failed starting the upload. For FTP, the server typically denied the STOR
command.

View File

@ -20,7 +20,7 @@ Example:
# `--cookie-jar`
Specify to which file you want curl to write all cookies after a completed
operation. Curl writes all cookies from its in-memory cookie storage to the
operation. curl writes all cookies from its in-memory cookie storage to the
given file at the end of operations. Even if no cookies are known, a file is
created so that it removes any formerly existing cookies from the file. The
file uses the Netscape cookie file format. If you set the filename to a single

View File

@ -17,7 +17,7 @@ Example:
# `--disable-eprt`
Disable the use of the EPRT and LPRT commands when doing active FTP transfers.
Curl normally first attempts to use EPRT before using PORT, but with this
curl normally first attempts to use EPRT before using PORT, but with this
option, it uses PORT right away. EPRT is an extension to the original FTP
protocol, and does not work on all servers, but enables more functionality in
a better way than the traditional PORT command.

View File

@ -16,7 +16,7 @@ Example:
# `--disable-epsv`
Disable the use of the EPSV command when doing passive FTP transfers. Curl
Disable the use of the EPSV command when doing passive FTP transfers. curl
normally first attempts to use EPSV before PASV, but with this option, it does
not try EPSV.

View File

@ -17,7 +17,7 @@ Example:
# `--hostpubsha256`
Pass a string containing a Base64-encoded SHA256 hash of the remote host's
public key. Curl refuses the connection with the host unless the hashes match.
public key. curl refuses the connection with the host unless the hashes match.
This feature requires libcurl to be built with libssh2 and does not work with
other SSH backends.

View File

@ -21,7 +21,7 @@ Example:
Make curl scan the *.netrc* file in the user's home directory for login name
and password. This is typically used for FTP on Unix. If used with HTTP, curl
enables user authentication. See *netrc(5)* and *ftp(1)* for details on the
file format. Curl does not complain if that file does not have the right
file format. curl does not complain if that file does not have the right
permissions (it should be neither world- nor group-readable). The environment
variable "HOME" is used to find the home directory.

View File

@ -17,7 +17,7 @@ Example:
# `--silent`
Silent or quiet mode. Do not show progress meter or error messages. Makes Curl
Silent or quiet mode. Do not show progress meter or error messages. Makes curl
mute. It still outputs the data you ask for, potentially even to the
terminal/stdout unless you redirect it.

View File

@ -66,7 +66,7 @@ link your application with libcurl.
## --prefix
This is the prefix used when libcurl was installed. Libcurl is then installed
This is the prefix used when libcurl was installed. libcurl is then installed
in $prefix/lib and its header files are installed in $prefix/include and so
on. The prefix is set with "configure --prefix".

View File

@ -52,7 +52,7 @@ int main(void)
CURLMsg *msg; /* for picking up messages with the transfer status */
int msgs_left; /* how many messages are left */
/* Allocate one CURL handle per transfer */
/* Allocate one curl handle per transfer */
for(i = 0; i < HANDLECOUNT; i++)
handles[i] = curl_easy_init();

View File

@ -58,7 +58,7 @@ int main(void)
CURLMsg *msg; /* for picking up messages with the transfer status */
int msgs_left; /* how many messages are left */
/* Allocate one CURL handle per transfer */
/* Allocate one curl handle per transfer */
for(i = 0; i < HANDLECOUNT; i++)
handles[i] = curl_easy_init();
@ -183,7 +183,7 @@ int main(void)
curl_multi_cleanup(multi_handle);
/* Free the CURL handles */
/* Free the curl handles */
for(i = 0; i < HANDLECOUNT; i++)
curl_easy_cleanup(handles[i]);

View File

@ -11,8 +11,8 @@ to and read from. It manages read and write positions and has a maximum size.
## read/write
Its basic read/write functions have a similar signature and return code handling
as many internal Curl read and write ones.
Its basic read/write functions have a similar signature and return code
handling to many internal curl read and write ones.
```
@ -84,9 +84,8 @@ It is possible to undo writes by calling:
CURLcode Curl_bufq_unwrite(struct bufq *q, size_t len);
```
This will remove `len` bytes from the end of the bufq again. When removing
more bytes than are present, CURLE_AGAIN is returned and the bufq will be
empty.
This removes `len` bytes from the end of the bufq again. When removing more
bytes than are present, CURLE_AGAIN is returned and the bufq is cleared.
## lifetime

View File

@ -65,12 +65,11 @@ See also `Curl_llist_insert_next`.
## Remove a node
Remove a node again from a list by calling `Curl_llist_remove()`. This
will destroy the node's `elem` (e.g. calling a registered free function).
destroys the node's `elem` (e.g. calling a registered free function).
To remove a node without destroying it's `elem`, use
`Curl_node_take_elem()` which returns the `elem` pointer and
removes the node from the list. The caller then owns this pointer
and has to take care of it.
To remove a node without destroying its `elem`, use `Curl_node_take_elem()`
which returns the `elem` pointer and removes the node from the list. The
caller then owns this pointer and has to take care of it.
## Iterate

View File

@ -23,7 +23,8 @@ Example subscribe:
curl mqtt://host.home/bedroom/temp
This will send an MQTT SUBSCRIBE packet for the topic `bedroom/temp` and listen in for incoming PUBLISH packets.
This sends an MQTT SUBSCRIBE packet for the topic `bedroom/temp` and listens in
for incoming PUBLISH packets.
### Publishing
@ -35,7 +36,8 @@ Example publish:
curl -d 75 mqtt://host.home/bedroom/dimmer
This will send an MQTT PUBLISH packet to the topic `bedroom/dimmer` with the payload `75`.
This sends an MQTT PUBLISH packet to the topic `bedroom/dimmer` with the
payload `75`.
## What does curl deliver as a response to a subscribe

View File

@ -27,7 +27,7 @@ for an insight into this topic.
These differences between TLS protocol versions are reflected in curl's
handling of session tickets. More below.
## Curl's `ssl_peer_key`
## curl's `ssl_peer_key`
In order to find a ticket from a previous TLS session, curl
needs a name for TLS sessions that uniquely identifies the peer
@ -55,18 +55,18 @@ Examples:
Different configurations produce different keys which is just what
curl needs when handling SSL session tickets.
One important thing: peer keys do not contain confidential
information. If you configure a client certificate or SRP authentication
with username/password, these will not be part of the peer key.
One important thing: peer keys do not contain confidential information. If you
configure a client certificate or SRP authentication with username/password,
these are not part of the peer key.
However, peer keys carry the hostnames you use curl for. They *do*
leak the privacy of your communication. We recommend *not* persisting
peer keys for this reason.
**Caveat**: The key may contain file names or paths. It does not
reflect the *contents* in the filesystem. If you change `/etc/ssl/cert.pem`
and reuse a previous ticket, curl might trust a server which no
longer has a root certificate in the file.
**Caveat**: The key may contain filenames or paths. It does not reflect the
*contents* in the filesystem. If you change `/etc/ssl/cert.pem` and reuse a
previous ticket, curl might trust a server which no longer has a root
certificate in the file.
## Session Cache Access
@ -76,22 +76,20 @@ longer has a root certificate in the file.
When a new connection is being established, each SSL connection filter creates
its own peer_key and calls into the cache. The cache then looks for a ticket
with exactly this peer_key. Peer keys between proxy SSL filters and SSL
filters talking through a tunnel will differ, as they talk to different
peers.
filters talking through a tunnel differ, as they talk to different peers.
If the connection filter wants to use a client certificate or SRP
authentication, the cache will check those as well. If the cache peer
carries client cert or SRP auth, the connection filter must have
those with the same values (and vice versa).
authentication, the cache checks those as well. If the cache peer carries
client cert or SRP auth, the connection filter must have those with the same
values (and vice versa).
On a match, the connection filter gets the session ticket and feeds that
to the TLS implementation which, on accepting it, will try to resume it
for a shorter handshake. In addition, the filter gets the ALPN used
before and the amount of 0-RTT data that the server announced to be
willing to accept. The filter can then decide if it wants to attempt
0-RTT or not. (The ALPN is needed to know if the server speaks the
protocol you want to send in 0-RTT. It makes no sense to send HTTP/2
requests to a server that only knows HTTP/1.1.)
On a match, the connection filter gets the session ticket and feeds that to
the TLS implementation which, on accepting it, tries to resume it for a
shorter handshake. In addition, the filter gets the ALPN used before and the
amount of 0-RTT data that the server announced to be willing to accept. The
filter can then decide if it wants to attempt 0-RTT or not. (The ALPN is
needed to know if the server speaks the protocol you want to send in 0-RTT. It
makes no sense to send HTTP/2 requests to a server that only knows HTTP/1.1.)
#### Updates
@ -106,10 +104,10 @@ when a filter accesses the session cache, it *takes*
a ticket from the cache, meaning a returned ticket is removed. The filter
then configures its TLS backend and *returns* the ticket to the cache.
The cache needs to treat tickets from TLSv1.2 and 1.3 differently.
1.2 tickets should be reused, but 1.3 tickets SHOULD NOT (RFC 8446).
The session cache will simply drop 1.3 tickets when they are returned
after use, but keep a 1.2 ticket.
The cache needs to treat tickets from TLSv1.2 and 1.3 differently. 1.2 tickets
should be reused, but 1.3 tickets SHOULD NOT (RFC 8446). The session cache
simply drops 1.3 tickets when they are returned after use, but keeps a 1.2
ticket.
When a ticket is *put* into the cache, there is also a difference. There
can be several 1.3 tickets at the same time, but only a single 1.2 ticket.
@ -117,16 +115,16 @@ TLSv1.2 tickets replace any other. 1.3 tickets accumulate up to a max
amount.
By having a "put/take/return" we reflect the 1.3 use case nicely. Two
concurrent connections will not reuse the same ticket.
concurrent connections do not reuse the same ticket.
## Session Ticket Persistence
#### Privacy and Security
As mentioned above, ssl peer keys are not intended for storage in a
file system. They'll clearly show which hosts the user talked to. This
maybe "just" privacy relevant, but has security implications as an
attacker might find worthy targets among your peer keys.
As mentioned above, ssl peer keys are not intended for storage in a file
system. They clearly show which hosts the user talked to. This may be "just"
privacy relevant, but has security implications as an attacker might find
worthy targets among your peer keys.
Also, we do not recommend persisting TLSv1.2 tickets.
@ -137,32 +135,29 @@ it provides a salted SHA256 hash of the peer key for import and export.
#### Export
The salt is generated randomly for each peer key on export. The
SHA256 makes sure that the peer key cannot be reversed and that
a slightly different key still produces a very different result.
The salt is generated randomly for each peer key on export. The SHA256 makes
sure that the peer key cannot be reversed and that a slightly different key
still produces a different result.
This means an attacker cannot just "grep" a session file for a
particular entry, e.g. if they want to know if you accessed a
specific host. They *can* however compute the SHA256 hashes for
all salts in the file and find a specific entry. But they *cannot*
find a hostname they do not know. They'd have to brute force by
guessing.
This means an attacker cannot just "grep" a session file for a particular
entry, e.g. if they want to know if you accessed a specific host. They *can*
however compute the SHA256 hashes for all salts in the file and find a
specific entry. They *cannot* find a hostname they do not know. They would
have to brute force by guessing.
#### Import
When session tickets are imported from a file, curl only gets the
salted hashes. The tickets imported will belong to an *unknown*
peer key.
When session tickets are imported from a file, curl only gets the salted
hashes. The imported tickets belong to an *unknown* peer key.
When a connection filter tries to *take* a session ticket, it will
pass its peer key. This peer key will initially not match any
tickets in the cache. The cache then checks all entries with
unknown peer keys if the passed key matches their salted hash. If
it does, the peer key is recovered and remembered at the cache
entry.
When a connection filter tries to *take* a session ticket, it passes its peer
key. This peer key initially does not match any tickets in the cache. The
cache then checks all entries with unknown peer keys if the passed key matches
their salted hash. If it does, the peer key is recovered and remembered at the
cache entry.
This is a performance penalty in the order of "unknown" peer keys
which will diminish over time when keys are rediscovered. Note that
this also works for putting a new ticket into the cache: when no
present entry matches, a new one with peer key is created. This
peer key will then no longer bear the cost of hash computes.
This is a performance penalty in the order of "unknown" peer keys which
diminishes over time when keys are rediscovered. Note that this also works for
putting a new ticket into the cache: when no present entry matches, a new one
with peer key is created. This peer key then no longer bears the cost of hash
computes.
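To make the salted-hash idea above concrete, here is a minimal illustrative sketch in C using OpenSSL's SHA-256. It is not curl's actual code or on-disk format; the peer-key string and the salt length are invented for the example.

```c
/* Illustrative sketch only: store a salted SHA-256 hash of a peer key and
 * recover the match later. Not curl's real format or implementation. */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>
#include <openssl/rand.h>

#define SALT_LEN 16

/* hash = SHA256(salt || peer_key) */
static void salted_hash(const unsigned char salt[SALT_LEN],
                        const char *peer_key,
                        unsigned char out[SHA256_DIGEST_LENGTH])
{
  SHA256_CTX ctx;
  SHA256_Init(&ctx);
  SHA256_Update(&ctx, salt, SALT_LEN);
  SHA256_Update(&ctx, peer_key, strlen(peer_key));
  SHA256_Final(out, &ctx);
}

int main(void)
{
  const char *peer_key = "example.com:443:TLSv1.3"; /* hypothetical key */
  unsigned char salt[SALT_LEN];
  unsigned char stored[SHA256_DIGEST_LENGTH];
  unsigned char probe[SHA256_DIGEST_LENGTH];

  /* export: persist (salt, hash) instead of the peer key itself */
  RAND_bytes(salt, SALT_LEN);
  salted_hash(salt, peer_key, stored);

  /* import: a connection presenting the same peer key recomputes the hash
     with the stored salt and compares, recovering the cache entry */
  salted_hash(salt, peer_key, probe);
  printf("match: %s\n",
         memcmp(stored, probe, sizeof(stored)) ? "no" : "yes");
  return 0;
}
```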

View File

@ -30,8 +30,8 @@ CURL *curl_easy_init();
# DESCRIPTION
This function allocates and returns a CURL easy handle. Such a handle is used
as input to other functions in the easy interface. This call must have a
This function allocates and returns an easy handle. Such a handle is used as
input to other functions in the easy interface. This call must have a
corresponding call to curl_easy_cleanup(3) when the operation is complete.
The easy handle is used to hold and control a single network transfer. It is

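As a quick illustration of the easy-handle lifecycle described in this page, a minimal sketch using only public libcurl calls; the URL is a placeholder:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  curl_global_init(CURL_GLOBAL_DEFAULT);

  curl = curl_easy_init();            /* allocate the easy handle */
  if(curl) {
    CURLcode res;
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    res = curl_easy_perform(curl);    /* one handle, one transfer */
    if(res != CURLE_OK)
      fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);          /* the corresponding cleanup call */
  }

  curl_global_cleanup();
  return 0;
}
```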
View File

@ -28,7 +28,7 @@ void curl_easy_reset(CURL *handle);
# DESCRIPTION
Re-initializes all options previously set on a specified CURL handle to the
Re-initializes all options previously set on a specified curl handle to the
default values. This puts back the handle to the same state as it was in when
it was just created with curl_easy_init(3).
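A small sketch of the reuse pattern this describes; the URLs are placeholders and error handling is omitted:

```c
#include <curl/curl.h>

/* Reuse one easy handle for two unrelated transfers by wiping all options
 * in between instead of creating a new handle. */
static void two_transfers(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/first");
  curl_easy_perform(curl);

  curl_easy_reset(curl);   /* options are back to their defaults */

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/second");
  curl_easy_perform(curl);
}
```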

View File

@ -35,7 +35,7 @@ protocol used does not support this.
The **ct** pointer is set to NULL or pointing to private memory. You MUST
NOT free it - it gets freed when you call curl_easy_cleanup(3) on the
corresponding CURL handle.
corresponding curl handle.
The modern way to get this header from a response is to instead use the
curl_easy_header(3) function.
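A short sketch of the ownership rule above: the returned pointer is read but never freed. It assumes `curl` is an easy handle whose transfer just completed.

```c
#include <stdio.h>
#include <curl/curl.h>

static void print_content_type(CURL *curl)
{
  char *ct = NULL;
  if(curl_easy_getinfo(curl, CURLINFO_CONTENT_TYPE, &ct) == CURLE_OK && ct)
    printf("Content-Type: %s\n", ct);
  /* ct is deliberately NOT freed: it points into memory owned by the
     handle and is released by curl_easy_cleanup() */
}
```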

View File

@ -37,7 +37,7 @@ the same method the first request would use.
The **methodp** pointer is NULL or points to private memory. You MUST NOT
free - it gets freed when you call curl_easy_cleanup(3) on the
corresponding CURL handle.
corresponding curl handle.
# %PROTOCOLS%

View File

@ -33,8 +33,8 @@ In cases when you have asked libcurl to follow redirects, it may not be the same
value you set with CURLOPT_URL(3).
The **urlp** pointer is NULL or points to private memory. You MUST NOT free
- it gets freed when you call curl_easy_cleanup(3) on the corresponding
CURL handle.
- it gets freed when you call curl_easy_cleanup(3) on the corresponding curl
handle.
# %PROTOCOLS%

View File

@ -32,8 +32,8 @@ logging on to the remote FTP server. This stores a NULL as pointer if
something is wrong.
The **path** pointer is NULL or points to private memory. You MUST NOT free
- it gets freed when you call curl_easy_cleanup(3) on the corresponding
CURL handle.
- it gets freed when you call curl_easy_cleanup(3) on the corresponding curl
handle.
# %PROTOCOLS%

View File

@ -35,9 +35,9 @@ string holding the IP address of the most recent connection done with this
get a pointer to a memory area that is reused at next request so you need to
copy the string if you want to keep the information.
The **ip** pointer is NULL or points to private memory. You MUST NOT free -
it gets freed when you call curl_easy_cleanup(3) on the corresponding
CURL handle.
The **ip** pointer is NULL or points to private memory. You MUST NOT free - it
gets freed when you call curl_easy_cleanup(3) on the corresponding curl
handle.
# %PROTOCOLS%

View File

@ -32,8 +32,8 @@ Pass in a pointer to a char pointer and get the referrer header used in the
most recent request.
The **hdrp** pointer is NULL or points to private memory you MUST NOT free -
it gets freed when you call curl_easy_cleanup(3) on the corresponding
CURL handle.
it gets freed when you call curl_easy_cleanup(3) on the corresponding curl
handle.
# %PROTOCOLS%

View File

@ -33,9 +33,9 @@ most recent RTSP Session ID.
Applications wishing to resume an RTSP session on another connection should
retrieve this info before closing the active connection.
The **id** pointer is NULL or points to private memory. You MUST NOT free -
it gets freed when you call curl_easy_cleanup(3) on the corresponding
CURL handle.
The **id** pointer is NULL or points to private memory. You MUST NOT free - it
gets freed when you call curl_easy_cleanup(3) on the corresponding curl
handle.
# %PROTOCOLS%

View File

@ -35,7 +35,7 @@ this CURL **handle**.
The **scheme** pointer is NULL or points to private memory. You MUST NOT
free - it gets freed when you call curl_easy_cleanup(3) on the corresponding
CURL handle.
curl handle.
The returned scheme might be upper or lowercase. Do comparisons case
insensitively.

View File

@ -77,7 +77,7 @@ introduced in later libcurl versions.
## CURL_PUSH_OK (0)
The application has accepted the stream and it can now start receiving data,
the ownership of the CURL handle has been taken over by the application.
the ownership of the curl handle has been taken over by the application.
## CURL_PUSH_DENY (1)

View File

@ -42,9 +42,9 @@ When CURLOPT_DOH_SSL_VERIFYHOST(3) is 2, the SSL certificate provided by
the DoH server must indicate that the server name is the same as the server
name to which you meant to connect, or the connection fails.
Curl considers the DoH server the intended one when the Common Name field or a
curl considers the DoH server the intended one when the Common Name field or a
Subject Alternate Name field in the certificate matches the hostname in the
DoH URL to which you told Curl to connect.
DoH URL to which you told curl to connect.
When the *verify* value is set to 1L it is treated the same as 2L. However
for consistency with the other *VERIFYHOST* options we suggest using 2 and

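A minimal sketch of setting this option together with a DoH URL; the resolver URL is only an example and `curl` is assumed to be an initialized easy handle:

```c
#include <curl/curl.h>

static void use_doh_strict(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_DOH_URL, "https://doh.example/dns-query");
  /* 2L: the DoH server's certificate must match the hostname in the URL */
  curl_easy_setopt(curl, CURLOPT_DOH_SSL_VERIFYHOST, 2L);
}
```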
View File

@ -44,7 +44,7 @@ This option is the DoH equivalent of CURLOPT_SSL_VERIFYPEER(3) and
only affects requests to the DoH server.
When negotiating a TLS or SSL connection, the server sends a certificate
indicating its identity. Curl verifies whether the certificate is authentic,
indicating its identity. curl verifies whether the certificate is authentic,
i.e. that you can trust that the server is who the certificate says it is.
This trust is based on a chain of digital signatures, rooted in certification
authority (CA) certificates you supply. curl uses a default bundle of CA

View File

@ -29,7 +29,7 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_HTTP_CONTENT_DECODING,
# DESCRIPTION
Pass a long to tell libcurl how to act on content decoding. If set to zero,
content decoding is disabled. If set to 1 it is enabled. Libcurl has no
content decoding is disabled. If set to 1 it is enabled. libcurl has no
default content decoding but requires you to use
CURLOPT_ACCEPT_ENCODING(3) for that.
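A hedged example of one typical combination: request compressed responses but keep them compressed, so the raw encoded data reaches the write callback. `curl` is assumed to be an initialized easy handle.

```c
#include <curl/curl.h>

static void keep_body_compressed(CURL *curl)
{
  curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, ""); /* "" = all built-in */
  curl_easy_setopt(curl, CURLOPT_HTTP_CONTENT_DECODING, 0L); /* no decoding */
}
```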

View File

@ -35,7 +35,7 @@ shown above.
This callback function gets called by libcurl as soon as it has received
interleaved RTP data. This function gets called for each $ block and therefore
contains exactly one upper-layer protocol unit (e.g. one RTP packet). Curl
contains exactly one upper-layer protocol unit (e.g. one RTP packet). curl
writes the interleaved header as well as the included data for each call. The
first byte is always an ASCII dollar sign. The dollar sign is followed by a
one byte channel identifier and then a 2 byte integer length in network byte

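A sketch of such a callback, assuming the write-callback style prototype documented for this option. It only peeks at the interleaved header described above; error handling and partial-data concerns are ignored.

```c
#include <stdio.h>
#include <curl/curl.h>

static size_t interleave_cb(void *ptr, size_t size, size_t nmemb,
                            void *userdata)
{
  const unsigned char *data = (const unsigned char *)ptr;
  size_t len = size * nmemb;
  (void)userdata;

  if(len >= 4 && data[0] == '$') {
    int channel = data[1];
    /* two-byte length in network byte order follows the channel id */
    unsigned int rtplen = ((unsigned int)data[2] << 8) | data[3];
    printf("interleaved block: channel %d, %u payload bytes\n",
           channel, rtplen);
  }
  return len; /* tell libcurl everything was consumed */
}

/* usage, with `curl` being an easy handle set up for RTSP:
 *   curl_easy_setopt(curl, CURLOPT_INTERLEAVEFUNCTION, interleave_cb);
 */
```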
View File

@ -41,7 +41,7 @@ When CURLOPT_PROXY_SSL_VERIFYHOST(3) is 2, the proxy certificate must
indicate that the server is the proxy to which you meant to connect, or the
connection fails.
Curl considers the proxy the intended one when the Common Name field or a
curl considers the proxy the intended one when the Common Name field or a
Subject Alternate Name field in the certificate matches the hostname in the
proxy string which you told curl to use.

View File

@ -39,7 +39,7 @@ This is the proxy version of CURLOPT_SSL_VERIFYPEER(3) that is used for
ordinary HTTPS servers.
When negotiating a TLS or SSL connection, the server sends a certificate
indicating its identity. Curl verifies whether the certificate is authentic,
indicating its identity. curl verifies whether the certificate is authentic,
i.e. that you can trust that the server is who the certificate says it is.
This trust is based on a chain of digital signatures, rooted in certification
authority (CA) certificates you supply. curl uses a default bundle of CA

View File

@ -65,11 +65,11 @@ It gets called when the known_host matching has been done, to allow the
application to act and decide for libcurl how to proceed. The callback is only
called if CURLOPT_SSH_KNOWNHOSTS(3) is also set.
This callback function gets passed the CURL handle, the key from the
known_hosts file *knownkey*, the key from the remote site *foundkey*,
info from libcurl on the matching status and a custom pointer (set with
CURLOPT_SSH_KEYDATA(3)). It MUST return one of the following return
codes to tell libcurl how to act:
This callback function gets passed the curl handle, the key from the
known_hosts file *knownkey*, the key from the remote site *foundkey*, info
from libcurl on the matching status and a custom pointer (set with
CURLOPT_SSH_KEYDATA(3)). It MUST return one of the following return codes to
tell libcurl how to act:
## CURLKHSTAT_FINE_REPLACE

View File

@ -38,7 +38,7 @@ This option determines whether curl verifies the authenticity of the peer's
certificate. A value of 1 means curl verifies; 0 (zero) means it does not.
When negotiating a TLS or SSL connection, the server sends a certificate
indicating its identity. Curl verifies whether the certificate is authentic,
indicating its identity. curl verifies whether the certificate is authentic,
i.e. that you can trust that the server is who the certificate says it is.
This trust is based on a chain of digital signatures, rooted in certification
authority (CA) certificates you supply. curl uses a default bundle of CA

View File

@ -29,7 +29,7 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_STREAM_DEPENDS_E,
# DESCRIPTION
Pass a CURL pointer in *dephandle* to identify the stream within the same
Pass a `CURL` pointer in *dephandle* to identify the stream within the same
connection that this stream is depending upon exclusively. That means it
depends on it and sets the Exclusive bit.
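A small sketch of the call described above; `parent` and `child` are assumed to be easy handles added to the same multi handle and multiplexed over one HTTP/2 connection:

```c
#include <curl/curl.h>

static void make_exclusive_dependency(CURL *parent, CURL *child)
{
  /* child now depends on parent exclusively (the Exclusive bit is set) */
  curl_easy_setopt(child, CURLOPT_STREAM_DEPENDS_E, parent);
}
```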

View File

@ -1959,10 +1959,10 @@ typedef enum {
/* Set stream weight, 1 - 256 (default is 16) */
CURLOPT(CURLOPT_STREAM_WEIGHT, CURLOPTTYPE_LONG, 239),
/* Set stream dependency on another CURL handle */
/* Set stream dependency on another curl handle */
CURLOPT(CURLOPT_STREAM_DEPENDS, CURLOPTTYPE_OBJECTPOINT, 240),
/* Set E-xclusive stream dependency on another CURL handle */
/* Set E-xclusive stream dependency on another curl handle */
CURLOPT(CURLOPT_STREAM_DEPENDS_E, CURLOPTTYPE_OBJECTPOINT, 241),
/* Do not send any tftp option requests to the server */

View File

@ -78,7 +78,7 @@ CURL_EXTERN CURL *curl_easy_duphandle(CURL *curl);
*
* DESCRIPTION
*
* Re-initializes a CURL handle to the default values. This puts back the
* Re-initializes a curl handle to the default values. This puts back the
* handle to the same state as it was in when it was just created.
*
* It does keep: live connections, the Session ID cache, the DNS cache and the

View File

@ -37,7 +37,7 @@ CURLcode Curl_macos_init(void)
/*
* The automagic conversion from IPv4 literals to IPv6 literals only
* works if the SCDynamicStoreCopyProxies system function gets called
* first. As Curl currently does not support system-wide HTTP proxies, we
* first. As curl currently does not support system-wide HTTP proxies, we
* therefore do not use any value this function might return.
*
* This function is only available on macOS and is not needed for

View File

@ -59,7 +59,7 @@
/*
CURL_SOCKET_HASH_TABLE_SIZE should be a prime number. Increasing it from 97
to 911 takes on a 32-bit machine 4 x 804 = 3211 more bytes. Still, every
CURL handle takes 45-50 K memory, therefore this 3K are not significant.
curl handle takes 6K memory, therefore this 3K are not significant.
*/
#ifndef CURL_SOCKET_HASH_TABLE_SIZE
#define CURL_SOCKET_HASH_TABLE_SIZE 911

View File

@ -370,7 +370,7 @@ static struct passwd *vms_getpwuid(uid_t uid)
#define USE_UPPERCASE_KRBAPI 1
/* AI_NUMERICHOST needed for IP V6 support in Curl */
/* AI_NUMERICHOST needed for IP V6 support in curl */
#ifdef HAVE_NETDB_H
#include <netdb.h>
#ifndef AI_NUMERICHOST

View File

@ -38,7 +38,7 @@
struct ssl_peer;
/* Struct to hold a Curl OpenSSL instance */
/* Struct to hold a curl OpenSSL instance */
struct ossl_ctx {
/* these ones requires specific SSL-types */
SSL_CTX* ssl_ctx;

View File

@ -4,10 +4,10 @@ Implementation notes:
This is a true OS/400 ILE implementation, not a PASE implementation (for
PASE, use AIX implementation).
The biggest problem with OS/400 is EBCDIC. Libcurl implements an internal
The biggest problem with OS/400 is EBCDIC. libcurl implements an internal
conversion mechanism, but it has been designed for computers that have a
single native character set. OS/400 default native character set varies
depending on the country for which it has been localized. And more, a job
depending on the country for which it has been localized. Further, a job
may dynamically alter its "native" character set.
Several characters that do not have fixed code in EBCDIC variants are
used in libcurl strings. As a consequence, using the existing conversion
@ -33,7 +33,7 @@ NOT converted, so text gathered this way is (probably !) ASCII.
Another OS/400 problem comes from the fact that the last fixed argument of a
vararg procedure may not be of type char, unsigned char, short or unsigned
short. Enums that are internally implemented by the C compiler as one of these
types are also forbidden. Libcurl uses enums as vararg procedure tagfields...
types are also forbidden. libcurl uses enums as vararg procedure tagfields...
Happily, there is a pragma forcing enums to type "int". The original libcurl
header files are thus altered during build process to use this pragma, in
order to force libcurl enums of being type int (the pragma disposition in use
@ -201,7 +201,7 @@ _ curl_pushheader_bynum_cssid() and curl_pushheader_byname_ccsid()
should be released with curl_free() after use, as opposed to the non-ccsid
versions of these procedures.
Please note that HTTP2 is not (yet) implemented on OS/400, thus these
functions will always return NULL.
functions always return NULL.
_ curl_easy_option_by_name_ccsid() returns a pointer to an untranslated option
metadata structure. As each curl_easyoption structure holds the option name in
@ -216,15 +216,15 @@ hout parameter is kept in libcurl's encoding and should not be altered.
_ curl_from_ccsid() and curl_to_ccsid() are string encoding conversion
functions between ASCII (latin1) and the given CCSID. The first parameter is
the source string, the second is the CCSID and the returned value is a pointer
to the dynamically allocated string. These functions do not impact on Curl's
to the dynamically allocated string. These functions do not impact curl's
behavior and are only provided for user convenience. After use, returned values
must be released with curl_free().
Standard compilation environment does support neither autotools nor make;
in fact, very few common utilities are available. As a consequence, the
config-os400.h has been coded manually and the compilation scripts are
a set of shell scripts stored in subdirectory packages/OS400.
Standard compilation environment supports neither autotools nor make; in
fact, few common utilities are available. As a consequence, the config-os400.h
has been coded manually and the compilation scripts are a set of shell scripts
stored in subdirectory packages/OS400.
The test environment is currently not supported on OS/400.
@ -259,7 +259,7 @@ _ TFTP
Compiling on OS/400:
These instructions target people who know about OS/400, compiling, IFS and
archive extraction. Do not ask questions about these subjects if you're not
archive extraction. Do not ask questions about these subjects if you are not
familiar with them.
_ As a prerequisite, QADRT development environment must be installed.
@ -286,20 +286,20 @@ _ Enter the command "sh makefile.sh > makelog 2>&1"
_ Examine the makelog file to check for compilation errors. CZM0383 warnings on
C or system standard API come from QADRT inlining and can safely be ignored.
Without configuration parameters override, this will produce the following
Without configuration parameters override, this produces the following
OS/400 objects:
_ Library CURL. All other objects will be stored in this library.
_ libcurl. All other objects are stored in this library.
_ Modules for all libcurl units.
_ Binding directory CURL_A, to be used at calling program link time for
statically binding the modules (specify BNDSRVPGM(QADRTTS QGLDCLNT QGLDBRDR)
when creating a program using CURL_A).
_ Service program CURL.<soname>, where <soname> is extracted from the
lib/Makefile.am VERSION variable. To be used at calling program run-time
lib/Makefile.am VERSION variable. To be used at calling program runtime
when this program has dynamically bound curl at link time.
_ Binding directory CURL. To be used to dynamically bind libcurl when linking a
calling program.
- CLI tool bound program CURL.
- CLI command CURL.
- CLI tool bound program curl.
- CLI command curl.
_ Source file H. It contains all the include members needed to compile a C/C++
module using libcurl, and an ILE/RPG /copy member for support in this
language.

View File

@ -10,8 +10,8 @@ Building via IDE Project Files
This document describes how to compile, build and install curl and libcurl
from sources using legacy versions of Visual Studio 2010 - 2013.
You will need to generate the project files before using them. Please run
"generate -help" for usage details.
You need to generate the project files before using them. Please run "generate
-help" for usage details.
To generate project files for recent versions of Visual Studio instead, use
cmake. Refer to INSTALL-CMAKE in the docs directory.
@ -43,7 +43,7 @@ a library is being compiled against dynamic runtime libraries.
The project files also support build configurations that require third party
dependencies such as OpenSSL and libssh2. If you wish to support these, you
will also need to download and compile those libraries as well.
also need to download and compile those libraries as well.
To support compilation of these libraries using different versions of
compilers, the following directory structure has been used for both the output
@ -70,19 +70,19 @@ of curl and libcurl as well as these dependencies.
|_VC <version>
|_<configuration>
As OpenSSL doesn't support side-by-side compilation when using different
versions of Visual Studio, a helper batch file has been provided to assist with
this. Please run `build-openssl -help` for usage details.
As OpenSSL does not support side-by-side compilation when using different
versions of Visual Studio, a helper batch file has been provided to assist
with this. Please run `build-openssl -help` for usage details.
## Building with Visual C++
To build with VC++, you will of course have to first install VC++ which is
part of Visual Studio.
To build with VC++, you have to first install VC++ which is part of Visual
Studio.
Once you have VC++ installed you should launch the application and open one of
the solution or workspace files. The VC directory names are based on the
version of Visual C++ that you will be using. Each version of Visual Studio
has a default version of Visual C++. We offer these versions:
version of Visual C++ that you use. Each version of Visual Studio has a
default version of Visual C++. We offer these versions:
- VC10 (Visual Studio 2010 Version 10.0)
- VC11 (Visual Studio 2012 Version 11.0)
@ -99,9 +99,9 @@ use `VC10\curl-all.sln` to build curl and libcurl.
## Running DLL based configurations
If you are a developer and plan to run the curl tool from Visual Studio with
any third-party libraries (such as OpenSSL or libssh2) then you will
need to add the search path of these DLLs to the configuration's PATH
environment. To do that:
any third-party libraries (such as OpenSSL or libssh2) then you need to add
the search path of these DLLs to the configuration's PATH environment. To do
that:
1. Open the 'curl-all.sln' or 'curl.sln' solutions
2. Right-click on the 'curl' project and select Properties
@ -122,8 +122,8 @@ DLL Debug - DLL OpenSSL (x64):
C:\Windows;C:\Windows\System32\Wbem
If you are using a configuration that uses multiple third-party library DLLs
(such as DLL Debug - DLL OpenSSL - DLL libssh2) then 'Path to DLL' will need
to contain the path to both of these.
(such as DLL Debug - DLL OpenSSL - DLL libssh2) then 'Path to DLL' needs to
contain the path to both of these.
## Notes
@ -139,14 +139,14 @@ Should you wish to help out with some of the items on the TODO list, or find
bugs in the project files that need correcting, and would like to submit
updated files back then please note that, whilst the solution files can be
edited directly, the templates for the project files (which are stored in the
git repository) will need to be modified rather than the generated project
files that Visual Studio uses.
git repository) need to be modified rather than the generated project files
that Visual Studio uses.
## Legacy Windows and SSL
Some of the project configurations use Schannel (Windows SSPI), the native SSL
library that comes with the Windows OS. Schannel in Windows 8 and earlier is
not able to connect to servers that no longer support the legacy handshakes
and algorithms used by those versions. If you will be using curl in one of
those earlier versions of Windows you should choose another SSL backend like
and algorithms used by those versions. If you are using curl in one of those
earlier versions of Windows you should choose another SSL backend like
OpenSSL.

View File

@ -84,7 +84,7 @@ int is_vms_shell(void)
* feature macro settings, and one of the exit routines is hidden at compile
* time.
*
* Since we want Curl to work properly under the VMS DCL shell and Unix
* Since we want curl to work properly under the VMS DCL shell and Unix
* shells under VMS, this routine should compile correctly regardless of
* the settings.
*/

View File

@ -6,7 +6,7 @@ SPDX-License-Identifier: curl
# Continuous Integration for curl
Curl runs in many different environments, so every change is run against a
curl runs in many different environments, so every change is run against a
large number of test suites.
Every pull request is verified for each of the following:
@ -58,7 +58,7 @@ GitHub Actions runs the following tests:
- macOS tests with a variety of different compilation options
- Fuzz tests ([see the curl-fuzzer repo for more
info](https://github.com/curl/curl-fuzzer)).
- Curl compiled using the Rust TLS backend with Hyper
- curl compiled using the Rust TLS backend with Hyper
These are each configured in different files in `.github/workflows`.

View File

@ -10,7 +10,9 @@ This is an additional test suite using a combination of Apache httpd and nghttpx
# Usage
The test cases and necessary files are in `tests/http`. You can invoke `pytest` from there or from the top level curl checkout and it will find all tests.
The test cases and necessary files are in `tests/http`. You can invoke
`pytest` from there or from the top level curl checkout and it finds all
tests.
```
curl> pytest test/http
@ -29,16 +31,18 @@ curl/tests/http> pytest -vv -k test_01_02
runs all test cases that have `test_01_02` in their name. This does not have to be the start of the name.
Depending on your setup, some test cases may be skipped and appear as `s` in the output. If you run pytest verbose, it will also give you the reason for skipping.
Depending on your setup, some test cases may be skipped and appear as `s` in
the output. If you run pytest verbose, it also gives you the reason for
skipping.
# Prerequisites
You will need:
You need:
1. a recent Python, the `cryptography` module and, of course, `pytest`
2. an apache httpd development version. On Debian/Ubuntu, the package `apache2-dev` has this.
2. an apache httpd development version. On Debian/Ubuntu, the package `apache2-dev` has this
3. a local `curl` project build
3. optionally, a `nghttpx` with HTTP/3 enabled or h3 test cases will be skipped.
3. optionally, a `nghttpx` with HTTP/3 enabled, otherwise h3 test cases are skipped
### Configuration
@ -85,12 +89,23 @@ There is a lot of [`pytest` documentation](https://docs.pytest.org/) with exampl
In `conftest.py` 3 "fixtures" are defined that are used by all test cases:
1. `env`: the test environment. It is an instance of class `testenv/env.py:Env`. It holds all information about paths, availability of features (HTTP/3), port numbers to use, domains and SSL certificates for those.
2. `httpd`: the Apache httpd instance, configured and started, then stopped at the end of the test suite. It has sites configured for the domains from `env`. It also loads a local module `mod_curltest?` and makes it available in certain locations. (more on mod_curltest below).
3. `nghttpx`: an instance of nghttpx that provides HTTP/3 support. `nghttpx` proxies those requests to the `httpd` server. In a direct mapping, so you may access all the resources under the same path as with HTTP/2. Only the port number used for HTTP/3 requests will be different.
1. `env`: the test environment. It is an instance of class
`testenv/env.py:Env`. It holds all information about paths, availability of
features (HTTP/3), port numbers to use, domains and SSL certificates for
those.
2. `httpd`: the Apache httpd instance, configured and started, then stopped at
the end of the test suite. It has sites configured for the domains from
`env`. It also loads a local module `mod_curltest?` and makes it available
in certain locations. (more on mod_curltest below).
3. `nghttpx`: an instance of nghttpx that provides HTTP/3 support. `nghttpx`
proxies those requests to the `httpd` server in a direct mapping, so you
may access all the resources under the same path as with HTTP/2. Only the
port number used for HTTP/3 requests is different.
`pytest` manages these fixture so that they are created once and terminated before exit. This means you can `Ctrl-C` a running pytest and the server will shutdown. Only when you brutally chop its head off, might there be servers left
behind.
`pytest` manages these fixtures so that they are created once and terminated
before exit. This means you can `Ctrl-C` a running pytest and the server then
shuts down. Only when you brutally chop its head off, might there be servers
left behind.
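As a rough sketch of how a test case consumes these three fixtures (the `CurlClient` helper, the `env` attributes and the `/data.json` resource used here are assumptions, not a verbatim copy of an existing test):
```
from testenv import CurlClient


class TestFixtureSketch:
    def test_simple_get(self, env, httpd, nghttpx):
        # env knows the domains and port numbers the httpd fixture serves
        url = f'https://{env.domain1}:{env.https_port}/data.json'
        # run the locally built curl against the test server
        curl = CurlClient(env=env)
        r = curl.http_get(url=url)
        assert r.exit_code == 0
```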
### Test Cases
@ -126,4 +141,10 @@ The module adds 2 "handlers" to the Apache server (right now). Handler are piece
* `s`: seconds (the default)
* `ms`: milliseconds
As you can see, `mod_curltest`'s tweak handler allow to simulate many kinds of responses. An example of its use is `test_03_01` where responses are delayed using `chunk_delay`. This gives the response a defined duration and the test uses that to reload `httpd` in the middle of the first request. A graceful reload in httpd lets ongoing requests finish, but will close the connection afterwards and tear down the serving process. The following request need then to open a new connection. This is verified by the test case.
As you can see, `mod_curltest`'s tweak handler allows simulating many kinds of
responses. An example of its use is `test_03_01` where responses are delayed
using `chunk_delay`. This gives the response a defined duration and the test
uses that to reload `httpd` in the middle of the first request. A graceful
reload in httpd lets ongoing requests finish, but closes the connection
afterwards and tears down the serving process. The following request then
needs to open a new connection. This is verified by the test case.
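A hedged sketch of driving the tweak handler's delay options from a test case; the URL path, the `chunks` parameter and the `CurlClient` helper are assumptions based on the description above, and `test_03_01` additionally reloads httpd from a separate thread while such a transfer is in flight:
```
from testenv import CurlClient


class TestDelayedResponse:
    def test_chunked_with_delays(self, env, httpd, nghttpx):
        # ask mod_curltest for 10 chunks, each delayed by 100 milliseconds,
        # stretching the response over roughly one second
        url = (f'https://{env.domain1}:{env.https_port}'
               f'/curltest/tweak?chunks=10&chunk_delay=100ms')
        curl = CurlClient(env=env)
        r = curl.http_get(url=url)
        assert r.exit_code == 0
```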

View File

@ -46,7 +46,7 @@ CURLcode test(char *URL)
global_init(CURL_GLOBAL_ALL);
/* Allocate one CURL handle per transfer */
/* Allocate one curl handle per transfer */
easy = curl_easy_init();
/* init a multi stack */
@ -152,7 +152,7 @@ CURLcode test(char *URL)
test_cleanup:
curl_multi_cleanup(multi_handle);
/* Free the CURL handles */
/* Free the curl handles */
curl_easy_cleanup(easy);
curl_global_cleanup();

View File

@ -12,9 +12,9 @@ big and complicated, we should split them into smaller and testable ones.
## Build Unit Tests
`./configure --enable-debug` is required for the unit tests to build. To
enable unit tests, there will be a separate static libcurl built that will be
used exclusively for linking unit test programs. Just build everything as
normal, and then you can run the unit test cases as well.
enable unit tests, there is a separate static libcurl built that is used
exclusively for linking unit test programs. Just build everything as normal,
and then you can run the unit test cases as well.
## Run Unit Tests
@ -25,8 +25,8 @@ can `cd tests` and `make` and then invoke individual unit tests with
## Debug Unit Tests
If a specific test fails you will get told. The test case then has output left
in the %LOGDIR subdirectory, but most importantly you can re-run the test again
If a specific test fails you are told. The test case then has output left in
the %LOGDIR subdirectory, but most importantly you can re-run the test
using gdb by doing `./runtests.pl -g NNNN`. That is, add a `-g` to make it
start up gdb and run the same case using that.

View File

@ -7,10 +7,9 @@ SPDX-License-Identifier: curl
# Building curl with Visual C++
This document describes how to compile, build and install curl and libcurl
from sources using the Visual C++ build tool. To build with VC++, you will of
course have to first install VC++. The minimum required version of VC is 6
(part of Visual Studio 6). However using a more recent version is strongly
recommended.
from sources using the Visual C++ build tool. To build with VC++, you have to
first install VC++. The minimum required version of VC is 6 (part of Visual
Studio 6). However, using a more recent version is strongly recommended.
VC++ is also part of the Windows Platform SDK. You do not have to install the
full Visual Studio or Visual C++ if all you want is to build curl.
@ -21,8 +20,8 @@ SPDX-License-Identifier: curl
## Prerequisites
If you wish to support zlib, OpenSSL, c-ares, ssh2, you will have to download
them separately and copy them to the `deps` directory as shown below:
If you wish to support zlib, OpenSSL, c-ares, ssh2, you have to download them
separately and copy them to the `deps` directory as shown below:
somedirectory\
|_curl-src
@ -62,14 +61,14 @@ Open a Visual Studio Command prompt:
## Build in the console
Once you are in the console, go to the winbuild directory in the Curl
Once you are in the console, go to the winbuild directory in the curl
sources:
cd curl-src\winbuild
Then you can call `nmake /f Makefile.vc` with the desired options (see
below). The builds will be in the top src directory, `builds\` directory, in
a directory named using the options given to the nmake call.
below). The builds end up in the `builds\` directory in the top source
directory, in a subdirectory named using the options given to the nmake call.
nmake /f Makefile.vc mode=<static or dll> <options>
@ -124,12 +123,12 @@ where `<options>` is one or many of:
## Static linking of Microsoft's C runtime (CRT):
If you are using mode=static nmake will create and link to the static build
of libcurl but *not* the static CRT. If you must you can force nmake to link
in the static CRT by passing `RTLIBCFG=static`. Typically you shouldn't use
that option, and nmake will default to the DLL CRT. `RTLIBCFG` is rarely used
and therefore rarely tested. When passing `RTLIBCFG` for a configuration that
was already built but not with that option, or if the option was specified
If you are using mode=static, nmake creates and links to the static build of
libcurl but *not* the static CRT. If you must, you can force nmake to link in
the static CRT by passing `RTLIBCFG=static`. Typically you should not use
that option, and nmake defaults to the DLL CRT. `RTLIBCFG` is rarely used and
therefore rarely tested. When passing `RTLIBCFG` for a configuration that was
already built but not with that option, or if the option was specified
differently, you must destroy the build directory containing the
configuration so that nmake can build it from scratch.
@ -139,17 +138,17 @@ where `<options>` is one or many of:
## Building your own application with libcurl (Visual Studio example)
When you build curl and libcurl, nmake will show the relative path where the
output directory is. The output directory is named from the options nmake used
when building. You may also see temp directories of the same name but with
suffixes -obj-curl and -obj-lib.
When you build curl and libcurl, nmake shows the relative path where the
output directory is. The output directory is named from the options nmake
used when building. You may also see temp directories of the same name but
with suffixes -obj-curl and -obj-lib.
For example let's say you've built curl.exe and libcurl.dll from the Visual
For example, let's say you have built curl.exe and libcurl.dll from the Visual
Studio 2010 x64 Win64 Command Prompt:
nmake /f Makefile.vc mode=dll VC=10
The output directory will have a name similar to
The output directory has a name similar to
`..\builds\libcurl-vc10-x64-release-dll-ipv6-sspi-schannel`.
The output directory contains subdirectories bin, lib and include. Those are
@ -177,14 +176,14 @@ where `<options>` is one or many of:
need to make a separate x86 build of libcurl.
If you build libcurl static (`mode=static`) or debug (`DEBUG=yes`) then the
library name will vary and separate builds may be necessary for separate
library name varies and separate builds may be necessary for separate
configurations of your project within the same platform. This is discussed in
the next section.
## Building your own application with a static libcurl
When building an application that uses the static libcurl library on Windows,
you must define `CURL_STATICLIB`. Otherwise the linker will look for dynamic
you must define `CURL_STATICLIB`. Otherwise the linker looks for dynamic
import symbols.
The static library name has an `_a` suffix in the basename and the debug
@ -201,8 +200,8 @@ where `<options>` is one or many of:
## Legacy Windows and SSL
When you build curl using the build files in this directory the default SSL
backend will be Schannel (Windows SSPI), the native SSL library that comes
with the Windows OS. Schannel in Windows 8 and earlier is not able to connect
to servers that no longer support the legacy handshakes and algorithms used by
those versions. If you will be using curl in one of those earlier versions of
backend is Schannel (Windows SSPI), the native SSL library that comes with
the Windows OS. Schannel in Windows 8 and earlier is not able to connect to
servers that no longer support the legacy handshakes and algorithms used by
those versions. If you are using curl in one of those earlier versions of
Windows you should choose another SSL backend like OpenSSL.