Conversation
Summary of Changes

Hello @0x676e67, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the project's testing capabilities by introducing benchmarks for HTTP/1.1 and HTTP/2. It establishes a benchmarking framework that measures the performance of both HTTP versions under different concurrency models. The changes also include routine dependency updates and a minor refactoring of internal channel mechanisms.
Code Review
This pull request introduces benchmarks for HTTP/1.1 and HTTP/2, along with several dependency updates and a switch from tokio::sync::oneshot to futures_channel::oneshot. The new benchmark code is well-structured but has some areas for improvement regarding code duplication, error handling, and test reliability. I've provided specific comments on these points.
```rust
        };
        let _ = rt.block_on(server_with_shutdown(addr, shutdown_rx, mode));
    });
    std::thread::sleep(Duration::from_millis(100));
```
Using std::thread::sleep to wait for the server to start up can lead to flaky tests due to a race condition. The server might not be ready to accept connections after the fixed delay, especially on a loaded system. A more reliable approach is to use a synchronization mechanism. For example, the server thread could send a signal on a oneshot channel once it has successfully bound to the port and is ready to accept connections. The main thread would wait on this signal before proceeding.
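As an illustration of that ready-signal pattern, here is a minimal sketch; the `spawn_ready_server` name, the channel wiring, and the elided accept loop are placeholder assumptions, not the existing support helpers:

```rust
use std::sync::mpsc;
use tokio::net::TcpListener;
use tokio::sync::oneshot;

// Hypothetical sketch: bind inside the server thread, signal readiness over a
// channel, and only then let the benchmark thread proceed.
fn spawn_ready_server(addr: &'static str) -> oneshot::Sender<()> {
    let (shutdown_tx, shutdown_rx) = oneshot::channel::<()>();
    let (ready_tx, ready_rx) = mpsc::channel::<()>();

    std::thread::spawn(move || {
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("failed to build runtime");
        rt.block_on(async move {
            // Binding completes before the readiness signal is sent.
            let listener = TcpListener::bind(addr).await.expect("failed to bind");
            ready_tx.send(()).expect("benchmark thread gone");
            // ... accept loop elided; run until the shutdown signal arrives ...
            let _ = shutdown_rx.await;
            drop(listener);
        });
    });

    // Replaces the fixed sleep: block until the listener is actually bound.
    ready_rx.recv().expect("server thread exited before binding");
    shutdown_tx
}
```

The benchmark would then hold on to the returned shutdown sender and signal it once the group finishes, instead of relying on a fixed delay.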
```rust
//! This example runs a server that responds to any request with "Hello, world!"

mod support;

use std::time::Duration;
use support::client::{bench_reqwest, bench_wreq};
use support::server::{spawn_multi_thread_server, spawn_single_thread_server, with_server};
use support::{HttpMode, build_current_thread_runtime, build_multi_thread_runtime};

use criterion::{
    BenchmarkGroup, Criterion, criterion_group, criterion_main, measurement::WallTime,
};

const NUM_REQUESTS_TO_SEND: usize = 1000;
const CONCURRENT_LIMIT: usize = 100;
const HTTP_MODE: HttpMode = HttpMode::Http1;
const CURRENT_THREAD_LABEL: &str = "current_thread";
const MULTI_THREAD_LABEL: &str = "multi_thread";

fn run_benches(
    group: &mut BenchmarkGroup<'_, WallTime>,
    rt: fn() -> tokio::runtime::Runtime,
    addr: &'static str,
    label_prefix: &str,
) {
    let runtime = rt();
    bench_wreq(
        group,
        &runtime,
        addr,
        HTTP_MODE,
        label_prefix,
        false,
        NUM_REQUESTS_TO_SEND,
        CONCURRENT_LIMIT,
    );

    bench_reqwest(
        group,
        &runtime,
        addr,
        HTTP_MODE,
        label_prefix,
        false,
        NUM_REQUESTS_TO_SEND,
        CONCURRENT_LIMIT,
    );

    bench_wreq(
        group,
        &runtime,
        addr,
        HTTP_MODE,
        label_prefix,
        true,
        NUM_REQUESTS_TO_SEND,
        CONCURRENT_LIMIT,
    );

    bench_reqwest(
        group,
        &runtime,
        addr,
        HTTP_MODE,
        label_prefix,
        true,
        NUM_REQUESTS_TO_SEND,
        CONCURRENT_LIMIT,
    );
}

// Criterion benchmark functions
fn bench_server_single_thread(c: &mut Criterion) {
    let mut group = c.benchmark_group("server_single_thread");
    group.sample_size(10);

    // single-threaded client
    with_server(
        "127.0.0.1:5928",
        HTTP_MODE,
        spawn_single_thread_server,
        || {
            run_benches(
                &mut group,
                build_current_thread_runtime,
                "127.0.0.1:5928",
                CURRENT_THREAD_LABEL,
            );
        },
    );

    // multi-threaded client
    with_server(
        "127.0.0.1:5930",
        HTTP_MODE,
        spawn_single_thread_server,
        || {
            run_benches(
                &mut group,
                build_multi_thread_runtime,
                "127.0.0.1:5930",
                MULTI_THREAD_LABEL,
            );
        },
    );
    group.finish();
}

fn bench_server_multi_thread(c: &mut Criterion) {
    let mut group = c.benchmark_group("server_multi_thread");
    group.sample_size(10);

    // single-threaded client
    with_server(
        "127.0.0.1:5929",
        HTTP_MODE,
        spawn_multi_thread_server,
        || {
            run_benches(
                &mut group,
                build_current_thread_runtime,
                "127.0.0.1:5929",
                CURRENT_THREAD_LABEL,
            );
        },
    );

    // multi-threaded client
    with_server(
        "127.0.0.1:5931",
        HTTP_MODE,
        spawn_multi_thread_server,
        || {
            run_benches(
                &mut group,
                build_multi_thread_runtime,
                "127.0.0.1:5931",
                MULTI_THREAD_LABEL,
            );
        },
    );
    group.finish();
}

criterion_group!(
    name = benches;
    config = Criterion::default()
        .measurement_time(Duration::from_secs(10))
        .warm_up_time(Duration::from_secs(3));
    targets =
        bench_server_single_thread,
        bench_server_multi_thread
);
criterion_main!(benches);
```
The contents of this file are nearly identical to bench/http2.rs. This duplication makes maintenance harder and increases the chance of introducing inconsistencies. Consider refactoring by creating a single, parameterized benchmark function that can be configured for both HTTP/1.1 and HTTP/2. You could pass the HttpMode, server addresses, and benchmark group names as arguments to a shared setup function.
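For example, a hedged sketch of such a shared helper is shown below; the helper name, the port-list argument, and the extra `mode` parameter on `run_benches` are illustrative assumptions rather than existing code:

```rust
// Hypothetical shared helper (e.g. in bench/support): bench/http1.rs and
// bench/http2.rs would each call it with their own mode, group prefix, and ports
// instead of duplicating the whole file. Assumes `run_benches` gains a `mode` parameter.
pub fn run_http_benches(
    c: &mut Criterion,
    mode: HttpMode,
    group_prefix: &str,
    ports: [&'static str; 4],
) {
    let mut group = c.benchmark_group(format!("{group_prefix}_single_thread"));
    group.sample_size(10);
    with_server(ports[0], mode, spawn_single_thread_server, || {
        run_benches(&mut group, build_current_thread_runtime, ports[0], CURRENT_THREAD_LABEL, mode);
    });
    with_server(ports[1], mode, spawn_single_thread_server, || {
        run_benches(&mut group, build_multi_thread_runtime, ports[1], MULTI_THREAD_LABEL, mode);
    });
    group.finish();

    let mut group = c.benchmark_group(format!("{group_prefix}_multi_thread"));
    group.sample_size(10);
    with_server(ports[2], mode, spawn_multi_thread_server, || {
        run_benches(&mut group, build_current_thread_runtime, ports[2], CURRENT_THREAD_LABEL, mode);
    });
    with_server(ports[3], mode, spawn_multi_thread_server, || {
        run_benches(&mut group, build_multi_thread_runtime, ports[3], MULTI_THREAD_LABEL, mode);
    });
    group.finish();
}
```

Each protocol-specific benchmark file would then shrink to a thin wrapper that supplies its `HttpMode`, a group prefix, and its four addresses.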
```rust
async fn wreq_send_requests_concurrent(
    client: &wreq::Client,
    url: &str,
    num_requests: usize,
    concurrent_limit: usize,
) {
    let semaphore = Arc::new(Semaphore::new(concurrent_limit));
    let mut handles = Vec::with_capacity(num_requests);

    for _i in 0..num_requests {
        let url = url.to_owned();
        let client = client.clone();
        let semaphore = semaphore.clone();
        let task = tokio::spawn(async move {
            let _permit = semaphore.acquire().await.unwrap();
            let mut response = client.get(url).send().await.unwrap();
            while let Ok(Some(_chunk)) = response.chunk().await {}
        });
        handles.push(task);
    }

    futures_util::future::join_all(handles).await;
}
```
This function's signature async fn wreq_send_requests_concurrent(...) returns (), while the similar reqwest_send_requests_concurrent returns a Result. For consistency and better error propagation, this function should also return a Result.
Additionally, the use of .unwrap() on line 72 inside tokio::spawn will cause the task to panic if sending the request or reading the response fails. This can abruptly terminate the benchmark for a single failed request. It would be more robust to handle the Result returned by send() and chunk(), for example by logging the error or counting failures, instead of panicking.
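One possible shape for that non-panicking variant is sketched below; the failure counter and error messages are illustrative, while the `wreq` calls mirror the ones already shown in the diff:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicUsize, Ordering};
use tokio::sync::Semaphore;

// Illustrative failure counter; not part of the existing benchmark support code.
static FAILED_REQUESTS: AtomicUsize = AtomicUsize::new(0);

async fn wreq_send_requests_concurrent(
    client: &wreq::Client,
    url: &str,
    num_requests: usize,
    concurrent_limit: usize,
) -> Result<(), Box<dyn std::error::Error>> {
    let semaphore = Arc::new(Semaphore::new(concurrent_limit));
    let mut handles = Vec::with_capacity(num_requests);

    for _ in 0..num_requests {
        let url = url.to_owned();
        let client = client.clone();
        let semaphore = semaphore.clone();
        handles.push(tokio::spawn(async move {
            let _permit = semaphore.acquire().await.expect("semaphore closed");
            match client.get(url).send().await {
                Ok(mut response) => {
                    // Drain the body; a read error simply ends the loop.
                    while let Ok(Some(_chunk)) = response.chunk().await {}
                }
                Err(e) => {
                    // Record the failure instead of panicking inside the spawned task.
                    FAILED_REQUESTS.fetch_add(1, Ordering::Relaxed);
                    eprintln!("benchmark request failed: {e:?}");
                }
            }
        }));
    }

    futures_util::future::join_all(handles).await;
    Ok(())
}
```

Returning `Result` here also brings the signature in line with `reqwest_send_requests_concurrent`.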
```rust
async fn reqwest_send_requests_concurrent(
    client: &reqwest::Client,
    url: &str,
    num_requests: usize,
    concurrent_limit: usize,
) -> Result<(), Box<dyn Error>> {
    let semaphore = Arc::new(Semaphore::new(concurrent_limit));
    let mut handles = Vec::with_capacity(num_requests);

    for _i in 0..num_requests {
        let url = url.to_owned();
        let client = client.clone();
        let semaphore = semaphore.clone();
        let task = tokio::spawn(async move {
            let _permit = semaphore.acquire().await.unwrap();
            let mut response = client.get(url).send().await.unwrap();
            while let Ok(Some(_chunk)) = response.chunk().await {}
        });
        handles.push(task);
    }

    futures_util::future::join_all(handles).await;

    Ok(())
}
```
The use of .unwrap() on line 96 inside tokio::spawn will cause the task to panic if sending the request or reading the response fails. This can abruptly terminate the benchmark for a single failed request. It would be more robust to handle the Result returned by send() and chunk(), for example by logging the error or counting failures, instead of panicking.
```rust
        if let Ok((socket, _peer_addr)) = accept {
            tokio::spawn(async move {
                if let Err(e) = serve(socket, mode).await {
                    println!(" -> err={:?}", e);
```
Errors are being printed to standard output using println!. It's standard practice to print errors to standard error (stderr) using eprintln! or a logging framework like tracing::error!. This separates diagnostic output from program output, making it easier to redirect and process them independently.
```diff
-                    println!(" -> err={:?}", e);
+                    eprintln!(" -> err={:?}", e);
```