fix(distributed): implement cluster manager startup loops and state telemetry #798
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ce36221fbb
```rust
if matches!(peer.state, PeerConnectionState::Connected)
    && now.saturating_duration_since(peer.last_seen) > stale_after
{
    peer.state = PeerConnectionState::Failed {
        reason: "health timeout".to_string(),
```
Preserve sendability after idle health timeouts
After about 3 * heartbeat_interval, this loop marks any idle connected peer as Failed, but send_message later only permits Connected peers (src/distributed/cluster.rs:442-446). Since no background path in this lifecycle promotes timed-out peers back to Connected, a quiet cluster can fall into a persistent PeerNotConnected state where later heartbeats or work dispatches cannot be queued.
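One way to avoid the stuck-`Failed` state is to restore a peer to `Connected` whenever inbound traffic from it is observed (e.g. a heartbeat ack). The sketch below uses hypothetical simplified types; `Peer`, `check_health`, and `on_message_received` are illustrative names, not the PR's actual API:

```rust
use std::time::{Duration, Instant};

// Hypothetical, simplified mirror of the PR's peer state machine.
#[derive(Debug, Clone, PartialEq)]
enum PeerConnectionState {
    Connected,
    Failed { reason: String },
}

struct Peer {
    state: PeerConnectionState,
    last_seen: Instant,
}

impl Peer {
    // Health loop: demote peers that have been quiet for too long.
    fn check_health(&mut self, now: Instant, stale_after: Duration) {
        if matches!(self.state, PeerConnectionState::Connected)
            && now.saturating_duration_since(self.last_seen) > stale_after
        {
            self.state = PeerConnectionState::Failed {
                reason: "health timeout".to_string(),
            };
        }
    }

    // Receive path: any inbound message (including a heartbeat ack)
    // refreshes last_seen and restores sendability.
    fn on_message_received(&mut self, now: Instant) {
        self.last_seen = now;
        self.state = PeerConnectionState::Connected;
    }
}
```

With a re-promotion path like this, a quiet cluster that times out its peers can recover as soon as any heartbeat traffic resumes, instead of rejecting all later sends with `PeerNotConnected`.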
```rust
for peer in peers.values_mut() {
    peer.state = PeerConnectionState::Disconnected;
    peer.pending_messages = 0;
}
```
Reset peer health cache during shutdown
stop() resets connection state in self.peers but does not reconcile or clear self.peer_info, while cluster telemetry is computed from peer_info in cluster_state(). If callers inspect state after shutdown, they can still see previously healthy peers and capacity values from stale cache entries, which makes stopped-node telemetry inaccurate.
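A sketch of the suggested fix, with hypothetical simplified structs (the field layout and names `PeerState`/`PeerInfo` are assumptions): clear the telemetry cache in the same pass that resets connection state, so `cluster_state()` cannot read stale entries afterwards:

```rust
use std::collections::HashMap;

// Hypothetical simplified stand-ins for the manager's internal maps.
#[derive(Default)]
struct PeerState {
    connected: bool,
    pending_messages: usize,
}

#[derive(Default)]
struct PeerInfo {
    healthy: bool,
    capacity: u32,
}

#[derive(Default)]
struct ClusterManager {
    peers: HashMap<String, PeerState>,
    peer_info: HashMap<String, PeerInfo>,
}

impl ClusterManager {
    fn stop(&mut self) {
        for peer in self.peers.values_mut() {
            peer.connected = false;
            peer.pending_messages = 0;
        }
        // Drop cached health/capacity so telemetry computed from
        // peer_info cannot report stale peers after shutdown.
        self.peer_info.clear();
    }
}
```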
Summary
This change replaces startup and state-reporting stubs in the distributed cluster manager with concrete, testable behavior.
What changed
- ClusterManager::start(): startup initialization
- ClusterManager::stop(): lifecycle cleanup
- cluster_state(): now reports meaningful data from available runtime state:
  - leader now uses the observed leader (heartbeat-derived) with a local-controller fallback while running
  - total_load now aggregates the pending outbound peer queue load tracked by send_message()

Root cause
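The leader fallback and load aggregation described above can be sketched as follows; the function signatures and field names here are illustrative assumptions, not the PR's exact API:

```rust
use std::collections::HashMap;

struct PeerState {
    pending_messages: usize,
}

// Leader: prefer the heartbeat-observed leader, fall back to the local
// controller while the manager is running, and report none when stopped.
fn leader(observed: Option<&str>, local_controller: &str, running: bool) -> Option<String> {
    if !running {
        return None;
    }
    Some(observed.unwrap_or(local_controller).to_string())
}

// Total load: sum of pending outbound messages queued per peer.
fn total_load(peers: &HashMap<String, PeerState>) -> usize {
    peers.values().map(|p| p.pending_messages).sum()
}
```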
ClusterManager::start() and cluster_state() still contained TODO stubs, so no background processing existed for message-queue draining or health reconciliation, and cluster telemetry always returned placeholder leader/load values.

Validation
Targeted tests for the distributed cluster module pass:
cargo test --features distributed distributed::cluster::tests:: --lib -- --test-threads=1

Result:
Issue
Closes #788