Provides utilities for collecting metrics from a Tokio application, including runtime and per-task metrics.
[dependencies]
tokio-metrics = "0.4.6"
Use TaskMonitor to instrument tasks before spawning them, and to observe
metrics for those tasks. All tasks instrumented with a given TaskMonitor
aggregate their metrics together. To split out metrics for different kinds of
tasks, use separate TaskMonitor instances.
// construct a TaskMonitor
let monitor = tokio_metrics::TaskMonitor::new();
// print task metrics every 500ms
{
    let frequency = std::time::Duration::from_millis(500);
    let monitor = monitor.clone();
    tokio::spawn(async move {
        for metrics in monitor.intervals() {
            println!("{:?}", metrics);
            tokio::time::sleep(frequency).await;
        }
    });
}
// instrument some tasks and spawn them
loop {
    tokio::spawn(monitor.instrument(do_work()));
}
The following task metrics are collected:
instrumented_count: The number of tasks instrumented.
dropped_count: The number of tasks dropped.
first_poll_count: The number of tasks polled for the first time.
total_first_poll_delay: The total duration elapsed between the instant tasks are instrumented and the instant they are first polled.
total_idled_count: The total number of times that tasks idled, waiting to be awoken.
total_idle_duration: The total duration that tasks idled.
max_idle_duration: The maximum idle duration that a task took.
total_scheduled_count: The total number of times that tasks were awoken (and then, presumably, scheduled for execution).
total_scheduled_duration: The total duration that tasks spent waiting to be polled after awakening.
total_poll_count: The total number of times that tasks were polled.
total_poll_duration: The total duration elapsed during polls.
total_fast_poll_count: The total number of times that polling tasks completed swiftly.
total_fast_poll_duration: The total duration of fast polls.
total_slow_poll_count: The total number of times that polling tasks completed slowly.
total_slow_poll_duration: The total duration of slow polls.
total_short_delay_count: The total count of short scheduling delays.
total_short_delay_duration: The total duration of short scheduling delays.
total_long_delay_count: The total count of long scheduling delays.
total_long_delay_duration: The total duration of long scheduling delays.
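These counters can also be read as running totals since the monitor was created; a minimal sketch (assuming TaskMonitor's cumulative() accessor; do_work() is the same placeholder used above):
// A point-in-time read of the running totals (not reset between reads).
let monitor = tokio_metrics::TaskMonitor::new();
tokio::spawn(monitor.instrument(do_work()));

let totals = monitor.cumulative();
println!(
    "instrumented: {}, first polls: {}, polls: {}, total poll time: {:?}",
    totals.instrumented_count,
    totals.first_poll_count,
    totals.total_poll_count,
    totals.total_poll_duration,
);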
From these base metrics, the following derived metrics are also available:
mean_first_poll_delay: The mean duration elapsed between the instant tasks are instrumented and the instant they are first polled.
mean_idle_duration: The mean duration of idles.
mean_scheduled_duration: The mean duration that tasks spent waiting to be executed after awakening.
mean_poll_duration: The mean duration of polls.
slow_poll_ratio: The ratio between the number of polls categorized as slow and fast.
long_delay_ratio: The ratio between the number of long scheduling delays and the total number of schedules.
mean_fast_poll_duration: The mean duration of fast polls.
mean_slow_poll_duration: The mean duration of slow polls.
mean_short_delay_duration: The mean duration of short scheduling delays.
mean_long_delay_duration: The mean duration of long scheduling delays.
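Because all tasks instrumented with the same TaskMonitor are aggregated together, splitting metrics by workload means creating one monitor per workload. A sketch, assuming the derived values above are exposed as methods on each sampled TaskMetrics; handle_connections() and run_db_queries() are hypothetical workload functions:
// One TaskMonitor per kind of work keeps their metrics separate.
let conn_monitor = tokio_metrics::TaskMonitor::new();
let db_monitor = tokio_metrics::TaskMonitor::new();

tokio::spawn(conn_monitor.instrument(handle_connections()));
tokio::spawn(db_monitor.instrument(run_db_queries()));

// Sample each monitor on its own interval stream.
for (name, monitor) in [("connections", conn_monitor.clone()), ("db", db_monitor.clone())] {
    tokio::spawn(async move {
        for interval in monitor.intervals() {
            // Derived metrics are computed from the base counters of each sample.
            println!(
                "{name}: mean poll = {:?}, slow-poll ratio = {:.3}",
                interval.mean_poll_duration(),
                interval.slow_poll_ratio(),
            );
            tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        }
    });
}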
Not all runtime metrics are stable. Using the unstable metrics requires tokio_unstable and the rt crate
feature of tokio-metrics. To enable tokio_unstable, pass --cfg tokio_unstable to rustc when
compiling. You can do this by setting the RUSTFLAGS environment variable before compiling your
application; e.g.:
RUSTFLAGS="--cfg tokio_unstable" cargo build
Alternatively, create the file .cargo/config.toml in the root directory of your crate.
If you're using a workspace, put this file in the root directory of your workspace instead.
[build]
rustflags = ["--cfg", "tokio_unstable"]
rustdocflags = ["--cfg", "tokio_unstable"]
Putting .cargo/config.toml files below the workspace or crate root directory may lead to tools like
Rust-Analyzer or VSCode not using your .cargo/config.toml since they invoke cargo from
the workspace or crate root and cargo only looks for the .cargo directory in the current & parent directories.
Cargo ignores configurations in child directories.
More information about where cargo looks for configuration files can be found
in the Cargo reference documentation.
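For example, in a workspace the file belongs at the top level (a hypothetical layout):
my-workspace/
├── .cargo/
│   └── config.toml    # found: cargo searches the current and parent directories
├── Cargo.toml         # workspace manifest
└── crates/
    └── my-app/
        ├── .cargo/    # ignored when cargo is invoked from the workspace root
        └── Cargo.toml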
If this configuration file is missing at compile time, tokio-metrics will not work. Also note that alternating between builds with and without this file will cause full rebuilds of your project.
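One way to check that the flag actually reached rustc is a compile-time probe; a small sketch (recent compilers may emit an unexpected-cfg warning for tokio_unstable unless the cfg is declared via check-cfg):
fn main() {
    // true only when `--cfg tokio_unstable` was passed to rustc for this crate
    if cfg!(tokio_unstable) {
        println!("tokio_unstable is enabled; unstable runtime metrics are available");
    } else {
        println!("tokio_unstable is NOT enabled; unstable runtime metrics will not be reported");
    }
}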
The rt feature of tokio-metrics is on by default; simply check that you do
not set default-features = false when declaring it as a dependency; e.g.:
[dependencies]
tokio-metrics = "0.4.6"
From within a Tokio runtime, use RuntimeMonitor to monitor key metrics of that runtime.
let handle = tokio::runtime::Handle::current();
let runtime_monitor = tokio_metrics::RuntimeMonitor::new(&handle);
// print runtime metrics every 500ms
let frequency = std::time::Duration::from_millis(500);
tokio::spawn(async move {
    for metrics in runtime_monitor.intervals() {
        println!("Metrics = {:?}", metrics);
        tokio::time::sleep(frequency).await;
    }
});
// run some tasks
tokio::spawn(do_work());
tokio::spawn(do_work());
tokio::spawn(do_work());
The following runtime metrics are collected:
workers_count: The number of worker threads used by the runtime.
total_park_count: The number of times worker threads parked.
max_park_count: The maximum number of times any worker thread parked.
min_park_count: The minimum number of times any worker thread parked.
total_busy_duration: The amount of time worker threads were busy.
max_busy_duration: The maximum amount of time a worker thread was busy.
min_busy_duration: The minimum amount of time a worker thread was busy.
global_queue_depth: The number of tasks currently scheduled in the runtime's global queue.
elapsed: The total amount of time elapsed since observing runtime metrics.
mean_poll_duration: The average duration of a single invocation of poll on a task.
mean_poll_duration_worker_min: The average duration of a single invocation of poll on a task on the worker with the lowest value.
mean_poll_duration_worker_max: The average duration of a single invocation of poll on a task on the worker with the highest value.
poll_time_histogram: A histogram of task polls since the previous probe, grouped by poll times.
total_noop_count: The number of times worker threads unparked but performed no work before parking again.
max_noop_count: The maximum number of times any worker thread unparked but performed no work before parking again.
min_noop_count: The minimum number of times any worker thread unparked but performed no work before parking again.
total_steal_count: The number of tasks worker threads stole from another worker thread.
max_steal_count: The maximum number of tasks any worker thread stole from another worker thread.
min_steal_count: The minimum number of tasks any worker thread stole from another worker thread.
total_steal_operations: The number of times worker threads stole tasks from another worker thread.
max_steal_operations: The maximum number of times any worker thread stole tasks from another worker thread.
min_steal_operations: The minimum number of times any worker thread stole tasks from another worker thread.
num_remote_schedules: The number of tasks scheduled from outside of the runtime.
total_local_schedule_count: The number of tasks scheduled from worker threads.
max_local_schedule_count: The maximum number of tasks scheduled from any one worker thread.
min_local_schedule_count: The minimum number of tasks scheduled from any one worker thread.
total_overflow_count: The number of times worker threads saturated their local queues.
max_overflow_count: The maximum number of times any one worker saturated its local queue.
min_overflow_count: The minimum number of times any one worker saturated its local queue.
total_polls_count: The number of tasks that have been polled across all worker threads.
max_polls_count: The maximum number of tasks that have been polled in any worker thread.
min_polls_count: The minimum number of tasks that have been polled in any worker thread.
total_local_queue_depth: The total number of tasks currently scheduled in workers' local queues.
max_local_queue_depth: The maximum number of tasks currently scheduled in any worker's local queue.
min_local_queue_depth: The minimum number of tasks currently scheduled in any worker's local queue.
blocking_queue_depth: The number of tasks currently waiting to be executed in the blocking thread pool.
live_tasks_count: The current number of alive tasks in the runtime.
blocking_threads_count: The number of additional threads spawned by the runtime.
idle_blocking_threads_count: The number of idle threads which have been spawned by the runtime for spawn_blocking calls.
budget_forced_yield_count: The number of times that a task was forced to yield because it exhausted its budget.
io_driver_ready_count: The number of ready events received from the I/O driver.
The following derived metrics are also available:
busy_ratio: The ratio between the amount of time worker threads were busy and the total time elapsed since observing runtime metrics.
mean_polls_per_park: The ratio between the number of tasks that have been polled and the number of times worker threads unparked but performed no work before parking again.
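Individual fields of an interval can also be used directly, for example to feed a logger or dashboard; a sketch using a few of the fields listed above (busy_ratio is assumed to be available as a method on the sampled RuntimeMetrics):
let handle = tokio::runtime::Handle::current();
let runtime_monitor = tokio_metrics::RuntimeMonitor::new(&handle);

tokio::spawn(async move {
    for interval in runtime_monitor.intervals() {
        // Log a few selected fields instead of Debug-printing the whole struct.
        println!(
            "workers = {}, parks = {}, busy = {:?} ({:.1}% of elapsed)",
            interval.workers_count,
            interval.total_park_count,
            interval.total_busy_duration,
            interval.busy_ratio() * 100.0,
        );
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
    }
});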
If you also enable the metrics-rs-integration feature, you can use metrics.rs exporters to export metrics
outside of your process. metrics.rs supports a variety of exporters, including Prometheus.
By default, each metric is exported under its own name prefixed with tokio_; for example,
tokio_workers_count for the workers_count metric and tokio_instrumented_count for the
instrumented_count metric. This can be customized using the
RuntimeMetricsReporterBuilder::with_metrics_transformer and TaskMetricsReporterBuilder::new functions.
If you want to use Prometheus, you could have this Cargo.toml:
[dependencies]
tokio-metrics = { version = "0.4.6", features = ["metrics-rs-integration"] }
metrics = "0.24"
# You don't actually need to use the Prometheus exporter with uds-listener enabled,
# it's just here as an example.
metrics-exporter-prometheus = { version = "0.16", features = ["uds-listener"] }
Then, you can launch a metrics exporter:
// This makes metrics visible via a local Unix socket with name prometheus.sock
// You probably want to do it differently.
//
// If you use this exporter, you can access the metrics for debugging
// by running `curl --unix-socket prometheus.sock localhost`.
metrics_exporter_prometheus::PrometheusBuilder::new()
    .with_http_uds_listener("prometheus.sock")
    .install()
    .unwrap();
// This line launches the runtime reporter that monitors the Tokio runtime and exports the metrics.
tokio::task::spawn(
    tokio_metrics::RuntimeMetricsReporterBuilder::default().describe_and_run(),
);
// This line creates a task monitor.
let task_monitor = tokio_metrics::TaskMonitor::new();
// This line launches the task reporter that exports the task metrics.
tokio::task::spawn(
    tokio_metrics::TaskMetricsReporterBuilder::new(|name| {
        // Rewrite the default "tokio_" prefix and attach an "application" label
        // to every exported task metric (Key is metrics::Key).
        let name = name.replacen("tokio_", "my_task_", 1);
        Key::from_parts(name, &[("application", "my_app")])
    })
    .describe_and_run(task_monitor.clone()),
);
// run some tasks
tokio::spawn(do_work());
// This line causes the task monitor to monitor this task.
tokio::spawn(task_monitor.instrument(do_work()));
tokio::spawn(do_work());
Of course, it will work with any other metrics.rs exporter.
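As a rough illustration of the naming rules above, a scrape of this exporter might then contain lines such as the following (the values are made up, and the exact set of exported series depends on the reporters and the exporter):
tokio_workers_count 8
tokio_total_park_count 1934
my_task_instrumented_count{application="my_app"} 42
my_task_total_poll_count{application="my_app"} 4711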
Currently, Tokio Console is primarily intended for local debugging. Tokio
metrics is intended to enable reporting of metrics in production to your
preferred tools. Longer term, it is likely that tokio-metrics will merge with
Tokio Console.
This project is licensed under the MIT license.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in tokio-metrics by you, shall be licensed as MIT, without any additional terms or conditions.