New cfilter HTTP-CONNECT for h3/h2/http1.1 eyeballing.
- the filter is installed when `--http3` is used in the tool (or
  the equivalent CURLOPT_ is set in the library)
- starts a QUIC/HTTP/3 connect right away. Should that not
succeed after 100ms (subject to change), a parallel attempt
is started for HTTP/2 and HTTP/1.1 via TCP
- both attempts are subject to IPv6/IPv4 eyeballing, same
as happens for other connections
- tie timeout to the ip-version HAPPY_EYEBALLS_TIMEOUT
- use a `soft` timeout at half the value. When the soft timeout
expires, the HTTPS-CONNECT filter checks if the QUIC filter
has received any data from the server. If not, it will start
the HTTP/2 attempt.
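The soft-timeout decision above can be sketched as follows (a minimal Python model for illustration only; curl implements this in C inside the connect filter, and the function and parameter names here are hypothetical):

```python
def should_start_fallback(elapsed_ms: int, quic_received_data: bool,
                          soft_timeout_ms: int = 100) -> bool:
    """Decide whether to launch the parallel TCP (HTTP/2, HTTP/1.1)
    attempt: the soft timeout has expired and the QUIC filter has not
    received any data from the server yet."""
    return elapsed_ms >= soft_timeout_ms and not quic_received_data

# QUIC still quiet after the soft timeout: start the TCP attempt
assert should_start_fallback(150, quic_received_data=False)
# QUIC already delivered data: stay with HTTP/3
assert not should_start_fallback(150, quic_received_data=True)
```

The hard timeout (the full HAPPY_EYEBALLS_TIMEOUT) still bounds the overall attempt; the soft timeout only decides when the parallel TCP attempt starts.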
HTTP/3(ngtcp2) improvements.
- setting call_data in all cfilter calls, similar to the HTTP/2 and vtls
  filters, for use in callbacks where no stream data is available.
- returning CURLE_PARTIAL_FILE for prematurely terminated transfers
- enabling pytest test_05 for h3
- shifting the functionality to "connect" UDP sockets from the ngtcp2
  implementation into the UDP socket cfilter, since unconnected
  UDP sockets behave oddly (for example, they error when added to a
  pollset).
HTTP/3(quiche) improvements.
- fixed an upload bug in the quiche implementation; now passes test 251 and pytest
- error codes on stream RESET
- improved debug logs
- handling of DRAIN during connect
- limiting pending event queue
HTTP/2 cfilter improvements.
- use LOG_CF macros for dynamic logging in debug build
- fix CURLcode on RST streams to be CURLE_PARTIAL_FILE
- enable pytest test_05 for h2
- fix upload pytests and improve parallel transfer performance.
GOAWAY handling for ngtcp2/quiche
- during connect, when the remote server refuses to accept new connections
  and closes immediately (so the local conn goes into DRAIN phase), the
  connection is torn down and another attempt is made after a short grace
  period.
This is the behaviour observed with nghttpx when we tell it to shut
down gracefully. Tested in pytest test_03_02.
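A minimal sketch of that retry behaviour (illustrative Python, not curl's C implementation; the grace period and attempt count shown are assumptions):

```python
import time

def connect_with_retry(attempt_connect, grace_period_s=0.1, max_attempts=2):
    """If the server refuses the new connection during connect (the
    local conn goes into DRAIN), tear it down, wait a short grace
    period, and try once more."""
    for _ in range(max_attempts):
        conn = attempt_connect()  # returns a connection, or None on DRAIN
        if conn is not None:
            return conn
        time.sleep(grace_period_s)
    return None
```

With a server that drains the first attempt and accepts the second, this returns the second connection; if every attempt is drained, the connect fails.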
TLS improvements
- ALPN selection for SSL/SSL-PROXY filters consolidated into one set of vtls
  functions, replacing copies of the logic in all TLS backends.
- standardized the infof logging of offered ALPNs
- ALPN negotiated: a common function for all backends that sets the ALPN property
  and connection-related things based on the negotiated protocol (or lack thereof).
- new tests/tests-httpd/scorecard.py for testing h3/h2 protocol implementation.
Invoke:
python3 tests/tests-httpd/scorecard.py --help
for usage.
Improvements to gathering connect statistics and socket access.
- new CF_CTRL_CONN_REPORT_STATS cfilter control for having cfilters
report connection statistics. This is triggered when the connection
has completely connected.
- new void Curl_pgrsTimeWas(..) method to report a timer update with
  a timestamp of when it happened. This allows for updating timers
  "later", e.g. a connect statistic after full connectivity has been
  reached.
- in case of HTTP eyeballing, the previous changes will update
statistics only from the filter chain that "won" the eyeballing.
- new cfilter query CF_QUERY_SOCKET for retrieving the socket used
by a filter chain.
Added methods Curl_conn_cf_get_socket() and Curl_conn_get_socket()
for convenient use of this query.
- changed the VTLS backends to query their sub-filters for the socket when
  checks during the handshake are made.
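The CF_QUERY_SOCKET lookup can be pictured as a walk down the filter chain (an illustrative Python model; curl's cfilters are C structures and the names here are made up):

```python
class CFilter:
    """A filter either knows the socket itself or forwards the query
    to the next filter in the chain, which is roughly what
    Curl_conn_get_socket() does via the CF_QUERY_SOCKET query."""
    def __init__(self, next_filter=None, socket=None):
        self.next = next_filter
        self.socket = socket

    def query_socket(self):
        if self.socket is not None:
            return self.socket
        return self.next.query_socket() if self.next else None
```

A TLS filter stacked on a TCP filter has no socket of its own, so the query passes through and returns the TCP filter's socket.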
HTTP/3 documentation on how HTTPS eyeballing works.
Scorecard with Caddy.
- configure can be run with `--with-test-caddy=path` to specify which caddy to use for testing
- tests/tests-httpd/scorecard.py now measures download speeds with caddy
pytest improvements
- adding a Makefile to clean the gen dir
- adding nghttpx rundir creation on start
- checking for httpd version 2.4.55 in the test_05 cases that need it, skipping with a message if it is too old.
- catching the exception when checking for caddy existence on the system.
Closes #10349
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
#***************************************************************************
#                                  _   _ ____  _
#  Project                     ___| | | |  _ \| |
#                             / __| | | | |_) | |
#                            | (__| |_| |  _ <| |___
#                             \___|\___/|_| \_\_____|
#
# Copyright (C) 2008 - 2022, Daniel Stenberg, <daniel@haxx.se>, et al.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at https://curl.se/docs/copyright.html.
#
# You may opt to use, copy, modify, merge, publish, distribute and/or sell
# copies of the Software, and permit persons to whom the Software is
# furnished to do so, under the terms of the COPYING file.
#
# This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
# KIND, either express or implied.
#
# SPDX-License-Identifier: curl
#
###########################################################################
#
import logging
import os
import pytest

from testenv import Env, CurlClient


log = logging.getLogger(__name__)


@pytest.mark.skipif(condition=Env.setup_incomplete(),
                    reason=f"missing: {Env.incomplete_reason()}")
class TestDownload:

    @pytest.fixture(autouse=True, scope='class')
    def _class_scope(self, env, httpd, nghttpx):
        if env.have_h3():
            nghttpx.start_if_needed()
        fpath = os.path.join(httpd.docs_dir, 'data-1mb.data')
        data1k = 1024*'x'
        with open(fpath, 'w') as fd:
            fsize = 0
            while fsize < 1024*1024:
                fd.write(data1k)
                fsize += len(data1k)

    # download 1 file
    @pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
    def test_02_01_download_1(self, env: Env, httpd, nghttpx, repeat, proto):
        if proto == 'h3' and not env.have_h3():
            pytest.skip("h3 not supported")
        curl = CurlClient(env=env)
        url = f'https://{env.authority_for(env.domain1, proto)}/data.json'
        r = curl.http_download(urls=[url], alpn_proto=proto)
        assert r.exit_code == 0, f'{r}'
        r.check_stats(count=1, exp_status=200)

    # download 2 files
    @pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
    def test_02_02_download_2(self, env: Env, httpd, nghttpx, repeat, proto):
        if proto == 'h3' and not env.have_h3():
            pytest.skip("h3 not supported")
        curl = CurlClient(env=env)
        url = f'https://{env.authority_for(env.domain1, proto)}/data.json?[0-1]'
        r = curl.http_download(urls=[url], alpn_proto=proto)
        assert r.exit_code == 0
        r.check_stats(count=2, exp_status=200)

    # download 100 files sequentially
    @pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
    def test_02_03_download_100_sequential(self, env: Env,
                                           httpd, nghttpx, repeat, proto):
        if proto == 'h3' and not env.have_h3():
            pytest.skip("h3 not supported")
        curl = CurlClient(env=env)
        urln = f'https://{env.authority_for(env.domain1, proto)}/data.json?[0-99]'
        r = curl.http_download(urls=[urln], alpn_proto=proto)
        assert r.exit_code == 0
        r.check_stats(count=100, exp_status=200)
        # http/1.1 sequential transfers will open 1 connection
        assert r.total_connects == 1

    # download 100 files parallel
    @pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
    def test_02_04_download_100_parallel(self, env: Env,
                                         httpd, nghttpx, repeat, proto):
        if proto == 'h3' and not env.have_h3():
            pytest.skip("h3 not supported")
        curl = CurlClient(env=env)
        urln = f'https://{env.authority_for(env.domain1, proto)}/data.json?[0-99]'
        r = curl.http_download(urls=[urln], alpn_proto=proto,
                               extra_args=['--parallel'])
        assert r.exit_code == 0
        r.check_stats(count=100, exp_status=200)
        if proto == 'http/1.1':
            # http/1.1 parallel transfers will open multiple connections
            assert r.total_connects > 1
        else:
            # http2 parallel transfers will use one connection (common limit is 100)
            assert r.total_connects == 1

    # download 500 files sequential
    @pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
    def test_02_05_download_500_sequential(self, env: Env,
                                           httpd, nghttpx, repeat, proto):
        if proto == 'h3' and not env.have_h3():
            pytest.skip("h3 not supported")
        curl = CurlClient(env=env)
        urln = f'https://{env.authority_for(env.domain1, proto)}/data.json?[0-499]'
        r = curl.http_download(urls=[urln], alpn_proto=proto)
        assert r.exit_code == 0
        r.check_stats(count=500, exp_status=200)
        if proto == 'http/1.1':
            # http/1.1 parallel transfers will open multiple connections
            assert r.total_connects > 1
        else:
            # http2 parallel transfers will use one connection (common limit is 100)
            assert r.total_connects == 1

    # download 500 files parallel (default max of 100)
    @pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
    def test_02_06_download_500_parallel(self, env: Env,
                                         httpd, nghttpx, repeat, proto):
        if proto == 'h3' and not env.have_h3():
            pytest.skip("h3 not supported")
        curl = CurlClient(env=env)
        urln = f'https://{env.authority_for(env.domain1, proto)}/data.json?[000-499]'
        r = curl.http_download(urls=[urln], alpn_proto=proto,
                               extra_args=['--parallel'])
        assert r.exit_code == 0
        r.check_stats(count=500, exp_status=200)
        if proto == 'http/1.1':
            # http/1.1 parallel transfers will open multiple connections
            assert r.total_connects > 1
        else:
            # http2 parallel transfers will use one connection (common limit is 100)
            assert r.total_connects == 1

    # download 500 files parallel (max of 200), only h2
    @pytest.mark.skip(reason="TODO: we get 101 connections created. PIPEWAIT needs a fix")
    @pytest.mark.parametrize("proto", ['h2', 'h3'])
    def test_02_07_download_500_parallel(self, env: Env,
                                         httpd, nghttpx, repeat, proto):
        if proto == 'h3' and not env.have_h3():
            pytest.skip("h3 not supported")
        curl = CurlClient(env=env)
        urln = f'https://{env.authority_for(env.domain1, proto)}/data.json?[0-499]'
        r = curl.http_download(urls=[urln], alpn_proto=proto,
                               with_stats=False, extra_args=[
                                   '--parallel', '--parallel-max', '200'
                               ])
        assert r.exit_code == 0, f'{r}'
        r.check_stats(count=500, exp_status=200)
        # http2 should now use 2 connections, at most 5
        assert r.total_connects <= 5, "h2 should use fewer connections here"

    @pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
    def test_02_08_1MB_serial(self, env: Env,
                              httpd, nghttpx, repeat, proto):
        count = 2
        urln = f'https://{env.authority_for(env.domain1, proto)}/data-1mb.data?[0-{count-1}]'
        curl = CurlClient(env=env)
        r = curl.http_download(urls=[urln], alpn_proto=proto)
        assert r.exit_code == 0
        r.check_stats(count=count, exp_status=200)

    @pytest.mark.parametrize("proto", ['http/1.1', 'h2', 'h3'])
    def test_02_09_1MB_parallel(self, env: Env,
                                httpd, nghttpx, repeat, proto):
        count = 2
        urln = f'https://{env.authority_for(env.domain1, proto)}/data-1mb.data?[0-{count-1}]'
        curl = CurlClient(env=env)
        r = curl.http_download(urls=[urln], alpn_proto=proto, extra_args=[
            '--parallel'
        ])
        assert r.exit_code == 0
        r.check_stats(count=count, exp_status=200)