
bench: add benchmarks for HTTP/1.1 and HTTP/2#1065

Merged
0x676e67 merged 1 commit into main from bench on Feb 14, 2026
Conversation

@0x676e67
Owner

No description provided.

@0x676e67 0x676e67 merged commit 71fb97a into main Feb 14, 2026
19 of 21 checks passed
@0x676e67 0x676e67 deleted the bench branch February 14, 2026 09:43
@gemini-code-assist
Contributor

Summary of Changes

Hello @0x676e67, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the project's testing capabilities by introducing comprehensive benchmarks for HTTP/1.1 and HTTP/2. It establishes a robust benchmarking framework, allowing for performance measurement of different HTTP versions under various concurrency models. The changes also include routine dependency updates and a minor refactoring of internal channel mechanisms to align with futures_channel.

Highlights

  • New Benchmarks: Added new benchmark suites for both HTTP/1.1 and HTTP/2 protocols, utilizing the criterion benchmarking framework. These benchmarks evaluate server performance under various client and server concurrency configurations.
  • Benchmarking Infrastructure: Introduced a dedicated bench/support module containing common utilities for client and server setup, including functions for creating wreq and reqwest clients, spawning single-threaded and multi-threaded Tokio runtimes, and managing benchmark servers.
  • Dependency Updates: Updated several Rust crate dependencies in Cargo.toml, including bytes, http2, socket2, flate2, webpki-root-certs, tokio-util, libc, hyper-util, and tokio-test to their latest versions.
  • Oneshot Channel Migration: Migrated the usage of tokio::sync::oneshot to futures_channel::oneshot in the HTTP client and connection pool logic, along with updating the is_closed() method to is_canceled() for compatibility; a minimal sketch follows this list.
  • Changelog Configuration: Added a cliff.toml file to configure automatic changelog generation, defining the format and commit parsing rules.
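
A minimal sketch of that channel migration (illustrative code, not the PR's exact diff; it assumes only the futures_channel crate):

    use futures_channel::oneshot;

    fn demo() {
        // Previously: tokio::sync::oneshot::channel(), checked via tx.is_closed().
        let (tx, mut rx) = oneshot::channel::<&'static str>();
        // futures_channel names the liveness check is_canceled() instead.
        if !tx.is_canceled() {
            let _ = tx.send("ready");
        }
        // The receiver is a Future; try_recv() also allows a synchronous check.
        assert_eq!(rx.try_recv().unwrap(), Some("ready"));
    }
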
Changelog
  • Cargo.toml
    • Updated bytes dependency to 1.11.1.
    • Updated http2 dependency to 0.5.12.
    • Updated socket2 dependency to 0.6.2.
    • Updated flate2 dependency to 1.1.9.
    • Updated webpki-root-certs dependency to 1.0.6.
    • Updated tokio-util dependency to 0.7.18.
    • Updated libc dependency to 0.2.182.
    • Updated hyper-util dependency to 0.1.20.
    • Updated tokio-test dependency to 0.4.5.
    • Added criterion and reqwest as new development dependencies.
    • Configured [[bench]] targets for http1 and http2 benchmarks (see the Cargo.toml sketch after this changelog).
  • bench/http1.rs
    • Added new Criterion benchmark file for HTTP/1.1 performance testing.
    • Implemented benchmarks for single-threaded and multi-threaded servers.
    • Included tests for both wreq and reqwest clients in sequential and concurrent modes.
  • bench/http2.rs
    • Added new Criterion benchmark file for HTTP/2 performance testing.
    • Implemented benchmarks for single-threaded and multi-threaded servers.
    • Included tests for both wreq and reqwest clients in sequential and concurrent modes, specifically for HTTP/2.
  • bench/support.rs
    • Added new module for common benchmarking support utilities.
    • Defined HttpMode enum to differentiate between HTTP/1.1 and HTTP/2.
    • Provided functions to build current_thread and multi_thread Tokio runtimes.
  • bench/support/client.rs
    • Added new module for client-side benchmarking logic.
    • Implemented functions to create wreq and reqwest HTTP clients.
    • Provided asynchronous functions to send sequential and concurrent requests for benchmarking.
  • bench/support/server.rs
    • Added new module for server-side benchmarking logic.
    • Implemented functions to spawn single-threaded and multi-threaded HTTP servers.
    • Included a generic request handler that responds with 'Hello, world!'.
  • cliff.toml
    • Added new configuration file for the cliff changelog generator.
    • Defined the format for changelog entries and rules for filtering Git commits.
  • src/client/http/client.rs
    • Replaced tokio::sync::oneshot::channel with futures_channel::oneshot::channel.
  • src/client/http/client/pool.rs
    • Replaced tokio::sync::oneshot import with futures_channel::oneshot.
    • Updated tx.is_closed() to tx.is_canceled() for futures_channel::oneshot compatibility.
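
For reference, the [[bench]] registration mentioned above takes roughly this shape in Cargo.toml (a sketch; the version numbers are illustrative, not the PR's exact pins):

    # Hypothetical dev-dependency and bench-target setup.
    [dev-dependencies]
    criterion = "0.5"
    reqwest = "0.12"

    # harness = false hands main() over to criterion instead of libtest,
    # which criterion_main!-based benchmarks require. The path keys point
    # at the bench/ directory used by this PR rather than the default benches/.
    [[bench]]
    name = "http1"
    path = "bench/http1.rs"
    harness = false

    [[bench]]
    name = "http2"
    path = "bench/http2.rs"
    harness = false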

@gemini-code-assist
Contributor

Code Review

This pull request introduces benchmarks for HTTP/1.1 and HTTP/2, along with several dependency updates and a switch from tokio::sync::oneshot to futures_channel::oneshot. The new benchmark code is well-structured but has some areas for improvement regarding code duplication, error handling, and test reliability. I've provided specific comments on these points.

        };
        let _ = rt.block_on(server_with_shutdown(addr, shutdown_rx, mode));
    });
    std::thread::sleep(Duration::from_millis(100));

Severity: high

Using std::thread::sleep to wait for the server to start up can lead to flaky tests due to a race condition. The server might not be ready to accept connections after the fixed delay, especially on a loaded system. A more reliable approach is to use a synchronization mechanism. For example, the server thread could send a signal on a oneshot channel once it has successfully bound to the port and is ready to accept connections. The main thread would wait on this signal before proceeding.
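
A minimal sketch of that handshake (hypothetical names such as ready_tx; it uses tokio's oneshot, though the futures_channel variant this PR adopts elsewhere would work the same way):

    use tokio::net::TcpListener;
    use tokio::sync::oneshot;

    fn spawn_server(addr: &'static str) {
        let (ready_tx, ready_rx) = oneshot::channel::<()>();
        std::thread::spawn(move || {
            let rt = tokio::runtime::Builder::new_current_thread()
                .enable_all()
                .build()
                .expect("failed to build runtime");
            rt.block_on(async move {
                let listener = TcpListener::bind(addr).await.expect("bind failed");
                // The port is bound; tell the benchmark thread it may proceed.
                let _ = ready_tx.send(());
                loop {
                    let (_socket, _peer) = listener.accept().await.expect("accept failed");
                    // ... hand the socket off to the request handler ...
                }
            });
        });
        // Replaces the fixed sleep: block until the server is actually listening.
        ready_rx.blocking_recv().expect("server thread exited before binding");
    }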

Comment on lines +1 to +154
//! This example runs a server that responds to any request with "Hello, world!"

mod support;

use std::time::Duration;
use support::client::{bench_reqwest, bench_wreq};
use support::server::{spawn_multi_thread_server, spawn_single_thread_server, with_server};
use support::{HttpMode, build_current_thread_runtime, build_multi_thread_runtime};

use criterion::{
    BenchmarkGroup, Criterion, criterion_group, criterion_main, measurement::WallTime,
};

const NUM_REQUESTS_TO_SEND: usize = 1000;
const CONCURRENT_LIMIT: usize = 100;
const HTTP_MODE: HttpMode = HttpMode::Http1;
const CURRENT_THREAD_LABEL: &str = "current_thread";
const MULTI_THREAD_LABEL: &str = "multi_thread";

fn run_benches(
    group: &mut BenchmarkGroup<'_, WallTime>,
    rt: fn() -> tokio::runtime::Runtime,
    addr: &'static str,
    label_prefix: &str,
) {
    let runtime = rt();
    bench_wreq(
        group,
        &runtime,
        addr,
        HTTP_MODE,
        label_prefix,
        false,
        NUM_REQUESTS_TO_SEND,
        CONCURRENT_LIMIT,
    );

    bench_reqwest(
        group,
        &runtime,
        addr,
        HTTP_MODE,
        label_prefix,
        false,
        NUM_REQUESTS_TO_SEND,
        CONCURRENT_LIMIT,
    );

    bench_wreq(
        group,
        &runtime,
        addr,
        HTTP_MODE,
        label_prefix,
        true,
        NUM_REQUESTS_TO_SEND,
        CONCURRENT_LIMIT,
    );

    bench_reqwest(
        group,
        &runtime,
        addr,
        HTTP_MODE,
        label_prefix,
        true,
        NUM_REQUESTS_TO_SEND,
        CONCURRENT_LIMIT,
    );
}

// Criterion benchmark functions
fn bench_server_single_thread(c: &mut Criterion) {
    let mut group = c.benchmark_group("server_single_thread");
    group.sample_size(10);

    // single-threaded client
    with_server(
        "127.0.0.1:5928",
        HTTP_MODE,
        spawn_single_thread_server,
        || {
            run_benches(
                &mut group,
                build_current_thread_runtime,
                "127.0.0.1:5928",
                CURRENT_THREAD_LABEL,
            );
        },
    );

    // multi-threaded client
    with_server(
        "127.0.0.1:5930",
        HTTP_MODE,
        spawn_single_thread_server,
        || {
            run_benches(
                &mut group,
                build_multi_thread_runtime,
                "127.0.0.1:5930",
                MULTI_THREAD_LABEL,
            );
        },
    );
    group.finish();
}

fn bench_server_multi_thread(c: &mut Criterion) {
    let mut group = c.benchmark_group("server_multi_thread");
    group.sample_size(10);

    // single-threaded client
    with_server(
        "127.0.0.1:5929",
        HTTP_MODE,
        spawn_multi_thread_server,
        || {
            run_benches(
                &mut group,
                build_current_thread_runtime,
                "127.0.0.1:5929",
                CURRENT_THREAD_LABEL,
            );
        },
    );

    // multi-threaded client
    with_server(
        "127.0.0.1:5931",
        HTTP_MODE,
        spawn_multi_thread_server,
        || {
            run_benches(
                &mut group,
                build_multi_thread_runtime,
                "127.0.0.1:5931",
                MULTI_THREAD_LABEL,
            );
        },
    );
    group.finish();
}

criterion_group!(
    name = benches;
    config = Criterion::default()
        .measurement_time(Duration::from_secs(10))
        .warm_up_time(Duration::from_secs(3));
    targets =
        bench_server_single_thread,
        bench_server_multi_thread
);
criterion_main!(benches);

Severity: medium

The contents of this file are nearly identical to bench/http2.rs. This duplication makes maintenance harder and increases the chance of introducing inconsistencies. Consider refactoring by creating a single, parameterized benchmark function that can be configured for both HTTP/1.1 and HTTP/2. You could pass the HttpMode, server addresses, and benchmark group names as arguments to a shared setup function.
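
One possible shape for that refactor, sketched on the assumption that run_benches gains a mode parameter and that a type alias for the server-spawning function exists in support::server (bench_protocol_group and ServerSpawner are hypothetical names):

    // Shared driver that bench/http1.rs and bench/http2.rs could both call,
    // differing only in HttpMode, group name, spawner, and addresses.
    fn bench_protocol_group(
        c: &mut Criterion,
        mode: HttpMode,
        group_name: &str,
        spawn_server: ServerSpawner, // e.g. spawn_single_thread_server
        addrs: [&'static str; 2],    // one address per client runtime
    ) {
        let mut group = c.benchmark_group(group_name);
        group.sample_size(10);

        // single-threaded client
        with_server(addrs[0], mode, spawn_server, || {
            run_benches(
                &mut group,
                build_current_thread_runtime,
                addrs[0],
                mode,
                CURRENT_THREAD_LABEL,
            );
        });

        // multi-threaded client
        with_server(addrs[1], mode, spawn_server, || {
            run_benches(
                &mut group,
                build_multi_thread_runtime,
                addrs[1],
                mode,
                MULTI_THREAD_LABEL,
            );
        });
        group.finish();
    }

Each protocol file would then shrink to two calls per benchmark, e.g. bench_protocol_group(c, HttpMode::Http1, "server_single_thread", spawn_single_thread_server, ["127.0.0.1:5928", "127.0.0.1:5930"]).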

Comment on lines +57 to +79
async fn wreq_send_requests_concurrent(
    client: &wreq::Client,
    url: &str,
    num_requests: usize,
    concurrent_limit: usize,
) {
    let semaphore = Arc::new(Semaphore::new(concurrent_limit));
    let mut handles = Vec::with_capacity(num_requests);

    for _i in 0..num_requests {
        let url = url.to_owned();
        let client = client.clone();
        let semaphore = semaphore.clone();
        let task = tokio::spawn(async move {
            let _permit = semaphore.acquire().await.unwrap();
            let mut response = client.get(url).send().await.unwrap();
            while let Ok(Some(_chunk)) = response.chunk().await {}
        });
        handles.push(task);
    }

    futures_util::future::join_all(handles).await;
}

Severity: medium

This function's signature async fn wreq_send_requests_concurrent(...) returns (), while the similar reqwest_send_requests_concurrent returns a Result. For consistency and better error propagation, this function should also return a Result.

Additionally, the use of .unwrap() on line 72 inside tokio::spawn will cause the task to panic if sending the request or reading the response fails. This can abruptly terminate the benchmark for a single failed request. It would be more robust to handle the Result returned by send() and chunk(), for example by logging the error or counting failures, instead of panicking.
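
A sketch of the failure-counting variant (same Semaphore pattern as the original; the helper name and error-message text are illustrative, not prescribed):

    use std::sync::Arc;
    use std::sync::atomic::{AtomicUsize, Ordering};
    use tokio::sync::Semaphore;

    async fn send_requests_counting_failures(
        client: &wreq::Client,
        url: &str,
        num_requests: usize,
        concurrent_limit: usize,
    ) -> Result<(), Box<dyn std::error::Error>> {
        let semaphore = Arc::new(Semaphore::new(concurrent_limit));
        let failures = Arc::new(AtomicUsize::new(0));
        let mut handles = Vec::with_capacity(num_requests);

        for _ in 0..num_requests {
            let url = url.to_owned();
            let client = client.clone();
            let semaphore = semaphore.clone();
            let failures = failures.clone();
            handles.push(tokio::spawn(async move {
                let _permit = semaphore.acquire().await.expect("semaphore closed");
                match client.get(url).send().await {
                    // Drain the body; a mid-body read error also counts as a failure.
                    Ok(mut response) => loop {
                        match response.chunk().await {
                            Ok(Some(_chunk)) => {}
                            Ok(None) => break,
                            Err(_) => {
                                failures.fetch_add(1, Ordering::Relaxed);
                                break;
                            }
                        }
                    },
                    Err(_) => {
                        failures.fetch_add(1, Ordering::Relaxed);
                    }
                }
            }));
        }

        futures_util::future::join_all(handles).await;
        let failed = failures.load(Ordering::Relaxed);
        if failed > 0 {
            return Err(format!("{failed} of {num_requests} requests failed").into());
        }
        Ok(())
    }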

Comment on lines +81 to +105
async fn reqwest_send_requests_concurrent(
    client: &reqwest::Client,
    url: &str,
    num_requests: usize,
    concurrent_limit: usize,
) -> Result<(), Box<dyn Error>> {
    let semaphore = Arc::new(Semaphore::new(concurrent_limit));
    let mut handles = Vec::with_capacity(num_requests);

    for _i in 0..num_requests {
        let url = url.to_owned();
        let client = client.clone();
        let semaphore = semaphore.clone();
        let task = tokio::spawn(async move {
            let _permit = semaphore.acquire().await.unwrap();
            let mut response = client.get(url).send().await.unwrap();
            while let Ok(Some(_chunk)) = response.chunk().await {}
        });
        handles.push(task);
    }

    futures_util::future::join_all(handles).await;

    Ok(())
}

Severity: medium

The use of .unwrap() on line 96 inside tokio::spawn will cause the task to panic if sending the request or reading the response fails, which can abruptly terminate the benchmark over a single failed request. It would be more robust to handle the Result returned by send() and chunk(), for example by logging the error or counting failures instead of panicking; the failure-counting sketch above applies here as well, with reqwest::Client in place of wreq::Client.

    if let Ok((socket, _peer_addr)) = accept {
        tokio::spawn(async move {
            if let Err(e) = serve(socket, mode).await {
                println!(" -> err={:?}", e);

Severity: medium

Errors are being printed to standard output using println!. It's standard practice to print errors to standard error (stderr) using eprintln! or a logging framework like tracing::error!. This separates diagnostic output from program output, making it easier to redirect and process them independently.

Suggested change:
-                println!(" -> err={:?}", e);
+                eprintln!(" -> err={:?}", e);
