This is a better match for what they do, and the general "cpool" var/function prefix works well. The pool now handles very long hostnames correctly. The following changes have been made:

* `struct connectdata`, i.e. connections, gets new members named `destination` and `destination_len` that fully specify interface+port+hostname of where the connection is going. This is used in the pool for "bundling" of connections with the same destination. There is no limit on the length anymore.
* Locking: all locks are taken inside conncache.c when calling into the pool and released on return. This eliminates the hazards of callers keeping track.
* `struct connectbundle` is now internal to the pool. It is no longer referenced by a connection.
* `bundle->multiuse` no longer exists. The HTTP/2, HTTP/3 and TLS filters no longer need to set it. Instead, the multi checks on leaving MSTATE_CONNECT or MSTATE_CONNECTING whether the connection is now multiplexed and new, i.e. not `conn->bits.reuse`. In that case, the processing of pending handles is triggered.
* The pool's init is provided with a callback to invoke on all connections being discarded. This allows the cleanups in `Curl_disconnect` to run wherever it is decided to retire a connection.
* Several pool operations can now be done fully with one call. Pruning dead connections, upkeep and checks on pool limits can now discard connections directly and no longer need to return them to the caller for that (as we now have the callback described above).
* Finding a connection for reuse is now done via `Curl_cpool_find()`; the caller provides callbacks to evaluate the connection candidates.
* `Curl_cpool_check_limits()` now directly uses the max values that may be set in the transfer's multi. No need to pass them around. `Curl_multi_max_host_connections()` and `Curl_multi_max_total_connections()` are gone.
* Added method `Curl_node_llist()` to get the llist a node is in. Used in cpool to verify that connections are indeed in the list (or in no list) they need to be in.

I left conncache.[ch] as is for now and also did not touch the documentation. If we update that outside the feature window, we can do so in a separate PR.

Multi-thread safety is not achieved by this PR, but since more details of how pools operate are now "internal", it is a better starting point to go for this in the future.

Closes #14662
<testcase>
<info>
<keywords>
HTTP
HTTP GET
shared connections
</keywords>
</info>

# Server-side
<reply>
<data>
HTTP/1.1 200 OK
Date: Tue, 09 Nov 2010 14:49:00 GMT
Server: test-server/fake
Content-Type: text/html
Content-Length: 29

run 1: foobar and so on fun!
</data>
<datacheck>
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
run 1: foobar and so on fun!
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
run 1: foobar and so on fun!
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
run 1: foobar and so on fun!
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
-> Mutex lock SHARE
<- Mutex unlock SHARE
-> Mutex lock SHARE
-> Mutex lock CONNECT
<- Mutex unlock CONNECT
<- Mutex unlock SHARE
</datacheck>
</reply>

# Client-side
<client>
<server>
http
</server>
<name>
HTTP with shared connection cache
</name>
<tool>
lib%TESTNUMBER
</tool>
<command>
http://%HOSTIP:%HTTPPORT/%TESTNUMBER
</command>
</client>

# Verify data after the test has been "shot"
<verify>
</verify>
</testcase>