How do I directly control a large HTTP body when using hyper?
Currently, we are porting an API gateway engine written in C to Rust, using tokio, hyper, and rustls.
While analyzing the echo server example provided by hyper-rustls (tokio-rustls), I ran into something I don't understand and would like help with. (I'm struggling because there are not many examples to refer to.)
https://github.com/rustls/hyper-rustls/blob/main/examples/server.rs
Here's the flow I was expecting: when a POST request with a large body arrives, the entire HTTP body is read first; only after content-length bytes have been read does the future passed to make_service_fn run (the echo operation in the example).
However, the code passed to make_service_fn runs as soon as the request is received, a response is sent to the client, and only afterwards is the poll_read function of the tokio::io::AsyncRead trait executed many times.
Q: When exactly does the make_service_fn code run, and is this something I can control?
Q: hyper seems to accumulate the body in memory as it arrives. If the body is very large, I would like to do something like stream it to a separate file instead. Is there a way to directly handle each piece of the body as it comes in?
- Can I use the hyper::body::HttpBody trait?
Solution 1:[1]
About Q1:
let service = make_service_fn(|_| async { Ok::<_, io::Error>(service_fn(echo)) });
You call make_service_fn to convert an async function into something that can be passed to serve(). It receives an argument of type &AddrStream and can do all kinds of fancy stuff such as filtering or throttling, but if you don't need any of that, just call service_fn with your async function.
Then your function, echo in the example, will be called once per client request.
About Q2:
The body is not accumulated in memory:
*response.body_mut() = req.into_body();
But these are of type Body, which implements Stream<Item = Result<Bytes>>, where each Bytes is one chunk of the request/response body.
And by assigning one Stream to another, a very big payload should be painlessly streamed through the echo function, one chunk at a time.
If you want to manage the data yourself you can poll the stream (StreamExt::next()) and handle each piece of the body individually. Just do not call Body::to_bytes() or Body::aggregate(), as those buffer the whole body in memory.
About using the HttpBody trait:
Sure, you can use it directly, but it is non-trivial. I think it is usually implemented so that you can get, for example, a JSON object directly from the request, or XML, or a urlencoded map, or whatever your Content-Type dictates, without going through an intermediate byte array and a parse.
But as you can probably guess, processing huge XML/JSON payloads asynchronously is not easy. If you really need that, it is probably easier to just drive the byte chunks of a plain Body.
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Stack Overflow |
