Adds a "cw-pause" client writer in the PROTOCOL phase that buffers
output when the client paused the transfer. This prevents content
decoding from blowing the buffer in the "cw-out" writer.
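
Roughly, the buffering pattern looks like this; a minimal sketch, where
the struct and function names are illustrative assumptions, not curl's
actual internal writer API:

    /* Sketch of the pause-buffering pattern. All names here are
     * illustrative, not curl's real client writer interface. */
    #include <stdlib.h>
    #include <string.h>

    struct pause_writer {
      unsigned char *buf;   /* bytes held back while paused */
      size_t len;           /* bytes currently buffered */
      size_t cap;           /* allocated size of buf */
      int paused;           /* has the client paused the transfer? */
      /* next writer in the chain, e.g. the content decoder */
      int (*next_write)(const unsigned char *data, size_t n);
    };

    /* append data to the writer's own buffer (no size limit) */
    static int pw_buffer(struct pause_writer *pw,
                         const unsigned char *data, size_t n)
    {
      if(pw->len + n > pw->cap) {
        size_t ncap = pw->cap ? pw->cap * 2 : 16384;
        unsigned char *nbuf;
        while(ncap < pw->len + n)
          ncap *= 2;
        nbuf = realloc(pw->buf, ncap);
        if(!nbuf)
          return 1; /* out of memory */
        pw->buf = nbuf;
        pw->cap = ncap;
      }
      memcpy(pw->buf + pw->len, data, n);
      pw->len += n;
      return 0;
    }

    /* write entry point: buffer while paused, forward otherwise */
    static int pw_write(struct pause_writer *pw,
                        const unsigned char *data, size_t n)
    {
      if(pw->paused) /* data is still content-encoded at this point */
        return pw_buffer(pw, data, n);
      return pw->next_write(data, n);
    }

    /* on unpause, flush held-back bytes down the chain first */
    static int pw_unpause(struct pause_writer *pw)
    {
      pw->paused = 0;
      if(pw->len) {
        int rc = pw->next_write(pw->buf, pw->len);
        pw->len = 0;
        return rc;
      }
      return 0;
    }

On unpause, the held-back bytes get flushed down the chain before any
new data is written.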
Adds test_02_35, which downloads two 100MB gzip bombs in parallel and
pauses each transfer after 1MB of decoded zeros.
This is a solution to issue #16280, with some limitations:
- cw-out still needs buffering of its own, since it can be paused
"in the middle" of a write that started with some KB of gzipped
zeros and exploded into several MB of calls to cw-out.
- cw-pause will then start buffering on its own *after* the write
that caused the pause. cw-pause has no buffer limits, but the
data it buffers is still content-encoded.
Protocols like HTTP/1.1 simply stop receiving, and h2/h3 have
flow-control window sizes, so the cw-pause buffer should not grow
out of control, at least for these protocols.
- the current limit on cw-out's buffer is ~75MB (for whatever
historical reason). A potential content encoding that blows 16KB
(the common h2 chunk size) up into more than 75MB would still blow
the buffer, making the transfer fail. A gzip of zeros turns 16KB into
~16MB (see the demo after this list), so that case still works.
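
To get a feel for these numbers, a small stand-alone zlib program
(hypothetical, not part of the curl tree or its tests) can deflate
16MB of zeros and print the compressed size:

    /* Hypothetical stand-alone demo, not part of the curl tree:
     * deflate 16MB of zeros and print the compressed size.
     * Build with: cc demo.c -lz */
    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>

    int main(void)
    {
      const size_t raw_len = 16 * 1024 * 1024;  /* 16MB of zeros */
      unsigned char *raw = calloc(1, raw_len);
      uLongf comp_len = compressBound(raw_len);
      unsigned char *comp = malloc(comp_len);

      if(!raw || !comp)
        return 1;
      /* zlib-format deflate; gzip adds a small header but compresses
       * runs of zeros about equally well */
      if(compress2(comp, &comp_len, raw, raw_len,
                   Z_BEST_COMPRESSION) != Z_OK)
        return 1;
      printf("%zu bytes of zeros -> %lu compressed (~%lu:1)\n",
             raw_len, (unsigned long)comp_len,
             (unsigned long)(raw_len / comp_len));
      free(raw);
      free(comp);
      return 0;
    }

Deflate stores long runs of zeros at roughly 1000:1, which is where
the 16KB -> ~16MB figure above comes from.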
A better solution would be to allow CURLE_AGAIN handling in the client
writer chain and make all content encoders handle that. This would stop
the explosion of decoded output on a pause right away. But that is a
large change to the decoder operations.
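
As a sketch of that direction; CURLE_AGAIN itself is a real curl return
code, but the decoder interface and the partial-consumption bookkeeping
below are assumptions, not curl's actual writer API:

    /* Illustrative sketch only: a decoder write that stops as soon as
     * the transfer pauses and reports how much input it consumed, so
     * the caller can retry the rest later. CURLE_AGAIN is curl's real
     * return code; everything else here is an assumption. */
    #include <stddef.h>

    #define RET_OK    0
    #define RET_AGAIN 1   /* stand-in for CURLE_AGAIN */

    struct decoder {
      const int *paused;  /* set when the client pauses the transfer */
      /* decode a little input, return how many input bytes were used */
      size_t (*decode_some)(struct decoder *d,
                            const unsigned char *in, size_t n);
    };

    static int dec_write(struct decoder *d,
                         const unsigned char *in, size_t n,
                         size_t *consumed)
    {
      *consumed = 0;
      while(*consumed < n) {
        if(*d->paused)  /* stop before producing more output */
          return RET_AGAIN;
        *consumed += d->decode_some(d, in + *consumed, n - *consumed);
      }
      return RET_OK;
    }

With that, a pause would halt decoding before the encoded data can
expand, instead of buffering the expansion afterwards.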
Reported-by: lf- on github
Fixes #16280
Closes #16296