Conversation

@veryCrunchy
Member

Summary

  • Add full P2P gossipsub monitoring pipeline, network scaffolding, persistence, UI, and runtime integration.

Changes

  • Integrate Gossipsub into PeerUP behaviour with subscribe/publish helpers and tolerant publish behavior.
  • Add P2P message types (SignedMessage, P2PCommand, P2PEvent, PeerResult) and P2PNetwork scaffolding with command/event channels.
  • Persist peer metadata and periodic network stats (upsert_peer, mark_peer_offline, insert/get_latest_network_stats).
  • Expose crypto.verify_result in public API for result verification.
  • Enhance orchestrator P2P setup: configurable NodeConfig, load keypair path from env, move P2P startup to new(), prepare timing/stats/result verification plumbing.
  • Run libp2p components on a Tokio LocalSet to support non-Send Swarm futures and add PeerUPConfig to runtime config.
  • Add TUI/state fields and UI panel improvements to show real peer/network metrics and last peer event.
  • Add production and local multi-peer example configs and a Windows helper script for multi-peer testing.

Motivation

  • Provide reliable broadcast/observation of monitoring results, store peer/network metrics, verify results, improve startup resilience, and surface live network state in the UI.

Add Gossipsub integration and helper methods for broadcasting and
receiving monitoring results, plus improved network behaviour
initialization.

- Introduce MONITORING_RESULTS_TOPIC constant and implement
  PeerNode methods:
  - subscribe_to_results / unsubscribe_from_results:
    subscribe/unsubscribe to the monitoring results topic and log
    subscription state.
  - publish_result:
    publish monitoring result JSON to gossipsub; tolerate "no peers"
    / InsufficientPeers errors (common during startup or isolation) and
    log at debug level instead of failing.
  - get_topic_peers / get_subscribed_topics:
    helpers to list peers subscribed to a topic and list currently
    subscribed topics.

- Wire gossipsub into PeerUPBehaviour:
  - add gossipsub field for pub/sub messaging.
  - create gossipsub instance during behaviour initialization.

- Harden behaviour creation logging and platform resilience:
  - when enabling mDNS, handle errors gracefully with warnings and
    fallback to DHT/bootstrapping instead of failing the node startup.
  - add informational logs for mDNS enabled/disabled paths.

Why:
- Provide a reliable way to broadcast and observe monitoring results
  across the network.
- Improve startup robustness by tolerating absence of peers when
  publishing and by gracefully handling mDNS platform limitations.
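
For illustration, a minimal sketch of the tolerant publish path described above, assuming the libp2p gossipsub API; the function name and topic handling here are not the crate's actual method, which lives on PeerNode as publish_result:

use libp2p::gossipsub::{self, IdentTopic, PublishError};

/// Publish a JSON-encoded monitoring result, treating "no peers yet" as a
/// non-error so startup and isolated operation do not fail.
fn publish_tolerant(
    gs: &mut gossipsub::Behaviour,
    topic_name: &str,
    json: &str,
) -> anyhow::Result<()> {
    let topic = IdentTopic::new(topic_name);
    match gs.publish(topic, json.as_bytes()) {
        Ok(_message_id) => Ok(()),
        // Expected during startup or isolation; log at debug level and continue.
        Err(PublishError::InsufficientPeers) => {
            tracing::debug!("no gossipsub peers subscribed yet; skipping publish");
            Ok(())
        }
        Err(e) => Err(anyhow::anyhow!("gossipsub publish failed: {e}")),
    }
}
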
Add verification::verify_result to the public crypto API by
re-exporting it from apps/service/src/crypto/mod.rs. This exposes
the verification function alongside existing signing utilities
so callers can perform both sign and verify operations from the
same crypto module.

This change is needed to provide symmetric access to signing and
verification functionality, simplifying usage and preventing
internal-only access to verification logic.
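
A sketch of what the re-export might look like; module layout and the signing export name are assumptions, only verify_result comes from this change:

// apps/service/src/crypto/mod.rs (sketch)
mod signing;
mod verification;

pub use signing::sign_result;          // existing signing utility (assumed name)
pub use verification::verify_result;   // newly exposed alongside it
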
Add new database models and repository methods to persist peer
metadata and periodic network statistics. Implement:
- upsert_peer: insert or update peer records with status, last
  seen, join time, contribution score, uptime, checks/day and
  location fields.
- mark_peer_offline: set a peer to offline and update last_seen.
- insert_network_stats: store a snapshot of network-level metrics
  (total/online peers, checks performed/received, bandwidth).
- get_latest_network_stats: fetch the most recent network stats
  row.
- extend repository trait signatures to include Peer, NetworkStats
  and related operations.
- add PeerResult::from_p2p_result helper to convert P2P results.

These changes enable tracking peer lifecycle and cluster-wide
metrics for monitoring and reporting, and ensure incoming P2P
results can be converted into database-ready records.
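
A rough sketch of the shapes involved, assuming async-trait-style repository methods; exact field names and types may differ from the PR:

use std::time::SystemTime;
use anyhow::Result;
use async_trait::async_trait;

pub struct Peer {
    pub peer_id: String,
    pub status: String,            // e.g. "online" / "offline"
    pub last_seen: SystemTime,
    pub joined_at: SystemTime,
    pub contribution_score: f64,
    pub uptime_pct: f64,
    pub checks_per_day: i64,
    pub country: Option<String>,   // location fields
}

pub struct NetworkStats {
    pub timestamp: SystemTime,
    pub total_peers: i64,
    pub online_peers: i64,
    pub checks_performed: i64,
    pub checks_received: i64,
    pub bandwidth_used_mb: i64,
}

#[async_trait]
pub trait Repository {
    async fn upsert_peer(&self, peer: &Peer) -> Result<()>;
    async fn mark_peer_offline(&self, peer_id: &str, last_seen: SystemTime) -> Result<()>;
    async fn insert_network_stats(&self, stats: &NetworkStats) -> Result<()>;
    async fn get_latest_network_stats(&self) -> Result<Option<NetworkStats>>;
}
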
…verification

- Add new imports and utilities: HashSet, Duration, Instant, SystemTime,
  debug and warn tracing levels.
- Load keypair path from UPPE_KEYPAIR_PATH env var instead of hardcoding the
  filename; keep loading/generation logic.
- Add verify_result to crypto usage to enable result verification.
- Extend database models import with NetworkStats and Peer for network stats
  tracking.
- Replace simple P2PNetwork::new usage with a configurable peerup NodeConfig
  builder. Configure port_range, bootstrap_peers and conditionally enable/disable
  mdns, kademlia and relay based on config.
- Build a peerup_config and construct P2PNetwork using with_config, passing
  peer id, key bytes and preferences. Start network in new() when enabled and
  store Arc-wrapped network in the orchestrator struct.
- Move P2P startup out of run() to new(); adjust run() comments to reflect
  this change.
- Replace repeated std::time::Instant/Duration usage with imported Instant and
  Duration types; prepare for location update scheduling.
- Introduce groundwork for P2P/network stats tracking and result verification
  (imports and variables added; more logic to follow in subsequent commits).

Rationale:
- Make P2P setup configurable and environment-aware to support different
  deployment setups and runtime overrides.
- Centralize P2P initialization so run() focuses on orchestration work and
  runtime loops.
- Add types and imports required for upcoming features: more granular logging,
  result verification, timing-based tasks, and network statistics collection.
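
As a small example of the env-driven keypair path (the fallback filename is illustrative; only the UPPE_KEYPAIR_PATH variable name comes from the commit):

use std::path::PathBuf;

/// Resolve the keypair location from the environment instead of a hardcoded name.
fn keypair_path() -> PathBuf {
    std::env::var("UPPE_KEYPAIR_PATH")
        .map(PathBuf::from)
        .unwrap_or_else(|_| PathBuf::from("peer_keypair.bin")) // assumed default
}
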
Introduce P2P messaging types and initial network manager scaffolding
to support publishing and receiving monitoring results over the P2P
layer.

- Add apps/service/src/p2p/messages.rs:
  - Define SignedMessage to wrap a CheckResult with an Ed25519 public
    key for verification.
  - Add P2PCommand and P2PEvent enums to represent commands sent to the
    P2P node and events emitted by it (publish/subscribe lifecycle,
    peer connect/disconnect, errors).
  - Add PeerResult struct for deserialized results from peers,
    including optional signature/public_key, peer_id and receive timestamp.

- Update apps/service/src/p2p/network.rs:
  - Replace placeholder with a P2PNetwork struct that holds peer_id,
    enabled flag, optional public_key, NodeConfig, and command/event
    channels.
  - Provide new() and with_config() constructors; new() builds a
    default NodeConfig.
  - Implement start() to initialize command/event channels, store senders
    and receivers, create a PeerUP Node with the stored config, start
    listening, and log node information.
  - Add futures::StreamExt import and wire up SignedMessage/PeerResult
    usage from the new messages module.

Why:
- Establish clear message types and a command/event model to decouple
  service logic from the P2P implementation.
- Provide initial PeerUP node setup and channel wiring so the rest of
  the service can integrate publishing, subscribing, and handling peer
  events. This scaffolding prepares for signing/verification and full
  network behaviour to be implemented next.
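
A condensed sketch of the message and event shapes described above; CheckResult stands in for the service's existing result type, and variant names are inferred from the descriptions and excerpts in this PR rather than copied from the source:

use std::time::SystemTime;
use serde::{Deserialize, Serialize};

/// Stand-in for the service's existing monitoring result type.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CheckResult {
    pub monitor_id: String,
    pub target: String,
    pub timestamp: SystemTime,
    pub status: String,
    pub latency_ms: Option<i64>,
    pub status_code: Option<i64>,
    pub error_message: Option<String>,
}

/// A result wrapped with signing material so receivers can verify it.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SignedMessage {
    pub result: CheckResult,
    pub signature: Vec<u8>,
    pub public_key: Vec<u8>, // Ed25519 public key
}

/// Commands the service sends to the P2P node.
#[derive(Debug)]
pub enum P2PCommand {
    Publish(SignedMessage),
    Subscribe,
    Unsubscribe,
    Shutdown,
}

/// Events the P2P node emits back to the service.
#[derive(Debug)]
pub enum P2PEvent {
    Started { peer_id: String },
    Subscribed,
    Unsubscribed,
    ResultReceived { peer_id: String, result: PeerResult },
    PeerConnected(String),
    PeerDisconnected(String),
    Error(String),
}

/// A result received from a peer, ready for verification and persistence.
#[derive(Debug, Clone)]
pub struct PeerResult {
    pub result: CheckResult,
    pub signature: Option<Vec<u8>>,
    pub public_key: Option<Vec<u8>>,
    pub peer_id: String,
    pub received_at: SystemTime,
}
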
Expand displayed node ID and replace static placeholders with real
state-driven fields in the network TUI. Show a longer node ID
preview (16 chars) and present a colored status label (connected or
offline). When P2P is enabled, render peer metrics from state:
connected peers, total seen peers, computed health percentage with
color coding, activity counts (shared/received) and the last peer
event if present. When disabled, show a concise disabled notice and
instructions.

These changes provide accurate, contextual information in the UI
instead of hardcoded example values, improving observability and
user feedback.
Add fields to the TUI state to track peer and network statistics:
connected_peers, total_peers_seen, results_shared, results_received,
and last_peer_event. Initialize these fields in the State::new ctor.

Provide methods to update and record peer information:
update_peer_stats(...) updates numeric counters, and
record_peer_event(...) stores the last peer event string.

Populate the state from the database when initializing and when switching
profiles by fetching latest_network_stats and calling update_peer_stats.
This enables the TUI to display up-to-date P2P metrics and recent peer
events for better visibility into network health and activity.
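
A sketch of the state additions (field types are assumptions; the real State has many more fields):

pub struct State {
    pub connected_peers: usize,
    pub total_peers_seen: usize,
    pub results_shared: u64,
    pub results_received: u64,
    pub last_peer_event: Option<String>,
    // ... existing TUI fields omitted ...
}

impl State {
    /// Update numeric counters, e.g. from the latest stored network stats.
    pub fn update_peer_stats(&mut self, connected: usize, total_seen: usize, shared: u64, received: u64) {
        self.connected_peers = connected;
        self.total_peers_seen = total_seen;
        self.results_shared = shared;
        self.results_received = received;
    }

    /// Remember the most recent peer event for display.
    pub fn record_peer_event(&mut self, event: impl Into<String>) {
        self.last_peer_event = Some(event.into());
    }
}
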
Add LocalSet-based execution for libp2p components in both the
service runner and the TUI so that non-Send libp2p Swarm futures can
execute on the current thread without requiring Send. Replace direct
await calls to orchestrator::Orchestrator::start and
tui::run_tui_with_p2p with local.run_until(...) wrappers.

Introduce PeerUP configuration into the service config:
- Add a PeerUPConfig struct with fields for port_range, mDNS,
  Kademlia, relay and bootstrap_peers, plus sensible defaults.
- Wire PeerUPConfig into Config and Config::default so config files
  can provide P2P settings.

Update Cargo.toml dependencies:
- Add peerup, futures, ratatui and crossterm and reorder entries to
  reflect new usage.

These changes allow the app to run libp2p networking that is not Send,
and expose configurable PeerUP P2P options to users.
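
The LocalSet pattern in isolation looks roughly like this; run_service is a placeholder for the actual orchestrator/TUI entry points:

use tokio::task::LocalSet;

// Placeholder for Orchestrator::start or run_tui_with_p2p, which are !Send
// because they drive a libp2p Swarm.
async fn run_service() {
    // ... poll the Swarm and handle commands/events here ...
}

#[tokio::main]
async fn main() {
    let local = LocalSet::new();
    // run_until executes the non-Send future on the current thread.
    local.run_until(run_service()).await;
}
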
Add a production example and three test configuration files for the
service, plus a Windows batch script to run multiple local peers for
testing.

- apps/service/config.production.toml: new production-ready example
  with zeromq bind/port, peer preferences tuned for privacy and
  stability (location_privacy=country_only, longer location update
  interval), peerup settings for port ranges, disabled mDNS,
  enabled Kademlia DHT, and placeholders for bootstrap peers.
- apps/service/test-peer1.toml, test-peer2.toml, test-peer3.toml:
  add local test configs with unique zeromq ports and non-overlapping
  peerup port ranges; all mimic production preferences but use local
  bootstrap peers and disable mDNS for reliable local testing.
- apps/service/test-multi-peer.bat: add Windows helper script to
  launch three peers (services and TUIs) in separate windows for
  manual integration testing and debugging; documents window colors,
  DB paths and startup order.

These changes provide a clear production config and convenient
repeatable local test setup to simplify development, debugging, and
deployment.
Copilot AI review requested due to automatic review settings January 12, 2026 12:27
Reorganize imports in orchestrator and fix long-line wrapping to improve
readability. Apply consistent formatting across several files and remove
trailing whitespace. Add #[allow(dead_code)] annotations to several P2P
message variants (Subscribe, Unsubscribe, Shutdown) to document they are
future API and silence unused warnings.

Simplify conditional when sharing results to the P2P network by using a
combined if-let guard, reducing nesting. Normalize spacing in various
places to remove blank/whitespace-only lines.

Wrap long SQL string literals across multiple lines in repository to
keep line lengths reasonable and maintain consistent formatting. Also
fix minor alignment and path handling: make keypair env var unwrap a
single expression placed on one line for clarity.

Overall these changes are purely stylistic and maintenance-focused to
improve code clarity and reduce warnings without altering behavior.

Copilot AI left a comment

Pull request overview

This pull request implements a comprehensive P2P gossipsub monitoring pipeline for the UPPE service, enabling peer-to-peer sharing and receiving of monitoring results with cryptographic verification.

Changes:

  • Integrated libp2p gossipsub protocol into PeerUP behavior for broadcasting and receiving monitoring results
  • Added P2P network scaffolding with command/event channels, message types, and LocalSet runtime support for non-Send Swarm futures
  • Implemented database persistence for peer metadata and network statistics with periodic snapshots
  • Enhanced TUI to display real-time P2P network metrics including connected peers, results shared/received, and peer events
  • Added configurable PeerUP settings (port ranges, mDNS, Kademlia, relay, bootstrap peers) and multi-peer test configurations

Reviewed changes

Copilot reviewed 28 out of 28 changed files in this pull request and generated 15 comments.

Show a summary per file
  • crates/peerup/src/node/core/gossipsub.rs: New gossipsub helper methods for subscribe/publish/query operations
  • crates/peerup/src/node/core/node_methods.rs: Added networking methods for listening, dialing, and Kademlia bootstrap
  • crates/peerup/src/node/config/methods.rs: Added config builder methods for Kademlia and relay features
  • crates/peerup/src/network/behaviour.rs: Integrated gossipsub behaviour with message authentication and deduplication
  • crates/peerup/src/network/events.rs: Added gossipsub event variants to PeerUPEvent enum
  • crates/peerup/src/network/conversions/gossipsub.rs: Event conversion from libp2p gossipsub to PeerUPEvent
  • crates/peerup/src/lib.rs: Exported gossipsub topic constant and SwarmEvent for consumers
  • apps/service/src/p2p/messages.rs: Defined P2P message types (SignedMessage, P2PCommand, P2PEvent, PeerResult)
  • apps/service/src/p2p/network.rs: Implemented P2PNetwork with event loop, command handling, and result publishing
  • apps/service/src/orchestrator/mod.rs: Integrated P2P event handling, signature verification, peer tracking, and stats persistence
  • apps/service/src/main.rs: Added LocalSet for running non-Send libp2p Swarm futures
  • apps/service/src/database/models.rs: Added Peer and NetworkStats models with conversion utilities
  • apps/service/src/database/repository.rs: Implemented persistence methods for peers and network statistics
  • apps/service/src/tui/state.rs: Added peer stats fields and update methods to TUI state
  • apps/service/src/tui/ui/network.rs: Enhanced network panel to show real peer metrics and health indicators
  • apps/service/src/tui/mod.rs: Load and display network stats in TUI
  • apps/service/src/config.rs: Added PeerUPConfig struct with P2P networking options
  • apps/service/test-peer{1,2,3}.toml: Multi-peer local test configurations with bootstrap peers
  • apps/service/config.production.toml: Production deployment configuration template
  • apps/service/test-multi-peer.bat: Windows batch script for multi-peer local testing
  • apps/service/Cargo.toml: Reordered dependencies alphabetically


Comment on lines 241 to 242
#[allow(dead_code)] // Public API method
pub fn peer_id(&self) -> &str {

Copilot AI Jan 12, 2026


The test has been modified to remove the actual assertion that validates the network starts correctly when disabled. The original code verified that network.start().await returns Ok(()), which is important to ensure that disabled networks don't fail on startup. This change weakens the test coverage.

Comment on lines +113 to +134
pub fn from_p2p_result(p2p_result: &crate::p2p::PeerResult) -> Option<Self> {
// Extract signature or return None if missing
let signature = p2p_result.signature.clone()?;

Self {
id: None,
monitor_uuid: p2p_result.result.monitor_id,
timestamp: p2p_result.result.timestamp,
status: p2p_result.result.status,
latency_ms: p2p_result.result.latency_ms,
status_code: p2p_result.result.status_code,
error_message: p2p_result.result.error_message.clone(),
peer_id: p2p_result.peer_id.clone(),
signature,
verified: false, // Will be verified later
created_at: p2p_result.received_at,
city: None, // TODO: Add geolocation lookup
country: None,
region: None,
}
.into()
}

Copilot AI Jan 12, 2026


The from_p2p_result method returns None if signature is missing, but the caller in orchestrator (line 253) only logs a warning when None is returned and doesn't distinguish between missing signature vs other errors. This makes debugging harder. Consider logging a more specific message when signature is missing.

Comment on lines +49 to +52
let health_pct = if state.total_peers_seen > 0 {
(state.connected_peers * 100) / state.total_peers_seen.max(1)
} else {
0

Copilot AI Jan 12, 2026


Division by zero risk. If total_peers_seen is 0, the max(1) prevents complete failure but the calculation logic is incorrect. When total_peers_seen is 0 and connected_peers is also 0, this will compute (0 * 100) / 1 = 0, which is correct. However, if connected_peers is somehow non-zero while total_peers_seen is 0 (which shouldn't happen logically but isn't prevented), the health percentage could be misleading. Consider adding an explicit check: if total_peers_seen == 0, set health_pct to 0 or 100 depending on your definition.

Suggested change
let health_pct = if state.total_peers_seen > 0 {
(state.connected_peers * 100) / state.total_peers_seen.max(1)
} else {
0
let health_pct = if state.total_peers_seen == 0 {
0
} else {
(state.connected_peers * 100) / state.total_peers_seen

Comment on lines +183 to +185
// Get mutable reference to p2p_network for event handling
let p2p_network = Arc::get_mut(&mut self.p2p_network)
.expect("P2P network should not have multiple references at this point");

Copilot AI Jan 12, 2026


Unnecessary use of Arc with Arc::get_mut. The p2p_network field is wrapped in Arc but then immediately unwrapped to get mutable access in the run method. Since the Arc is never cloned and mutable access is needed, consider storing P2PNetwork directly without Arc, or refactor to use interior mutability patterns like Mutex if shared ownership is actually needed.

Suggested change
// Get mutable reference to p2p_network for event handling
let p2p_network = Arc::get_mut(&mut self.p2p_network)
.expect("P2P network should not have multiple references at this point");
// Clone P2P network handle for event handling (shared ownership via Arc)
let p2p_network = Arc::clone(&self.p2p_network);

Comment on lines +145 to +161
if let Err(e) = node.unsubscribe_from_results() {
tracing::error!("Failed to unsubscribe: {}", e);
} else {
let _ = event_tx.send(P2PEvent::Unsubscribed).await;
}
}
P2PCommand::Shutdown => {
tracing::info!("Shutting down P2P node");
break;
}
}
}

// Handle events from the swarm
event = node.swarm.select_next_some() => {
use peerup::{swarm::SwarmEvent, PeerUPEvent};


Copilot AI Jan 12, 2026


Silent failure when decoding gossipsub messages. If String::from_utf8 or serde_json::from_str fails, the error is silently ignored with no logging. This makes debugging network issues very difficult. Consider logging a warning when message decoding fails to help diagnose malformed messages or protocol incompatibilities.

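For illustration only (not part of the PR), the decode path with the logging this comment asks for could look like the following, reusing the message and SignedMessage names from the surrounding code:

// Sketch: log decode failures instead of dropping them silently.
match String::from_utf8(message.data.clone()) {
    Ok(text) => match serde_json::from_str::<SignedMessage>(&text) {
        Ok(signed) => { /* forward as a ResultReceived event */ }
        Err(e) => tracing::warn!("failed to parse gossipsub payload as JSON: {e}"),
    },
    Err(e) => tracing::warn!("gossipsub payload is not valid UTF-8: {e}"),
}
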
Comment on lines +105 to +116
// Send started event
let _ = event_tx.send(P2PEvent::Started { peer_id: libp2p_peer_id.to_string() }).await;

// TODO: Implement actual P2P sharing via PeerUP
// This will broadcast the signed result to connected peers
// Spawn background task to run the node's event loop
tokio::task::spawn_local(async move {
tracing::info!("P2P event loop started");

loop {
tokio::select! {
// Handle commands from the service
Some(cmd) = command_rx.recv() => {
match cmd {

Copilot AI Jan 12, 2026


Missing error handling for JSON serialization failure. If serde_json::to_string fails to serialize the SignedMessage, the error is silently ignored and no event is sent. This could lead to monitoring results being lost without any indication. Consider logging an error or sending an error event when serialization fails.

Comment on lines +216 to +351
if last_stats_persist.elapsed() >= stats_persist_interval {
let snapshot = NetworkStats {
timestamp: SystemTime::now(),
total_peers: total_peers_seen.len() as i64,
online_peers: connected_peers.len() as i64,
checks_performed,
checks_received,
bandwidth_used_mb: 0,
};

if let Err(e) = self.database.insert_network_stats(&snapshot).await {
warn!("Failed to persist network stats: {}", e);
}

last_stats_persist = Instant::now();
}

// Log the result
info!(
"Monitor {} - {} - Status: {} - Latency: {:?}ms",
signed_result.monitor_id,
signed_result.target,
signed_result.status,
signed_result.latency_ms
);
}

// Handle P2P events
Some(p2p_event) = p2p_network.next_event() => {
use crate::p2p::P2PEvent;
match p2p_event {
P2PEvent::ResultReceived { peer_id, result } => {
info!("Received monitoring result from peer {}", peer_id);

total_peers_seen.insert(peer_id.clone());

// Convert P2P result to database model
if let Some(mut db_result) = crate::database::models::PeerResult::from_p2p_result(&result) {
// Verify signature if public key is available
let verified = if let Some(public_key_vec) = &result.public_key {
if public_key_vec.len() == 32 {
let mut public_key_bytes = [0u8; 32];
public_key_bytes.copy_from_slice(&public_key_vec[..32]);

// Verify the signature
match verify_result(&db_result, &public_key_bytes, &result.result.target) {
Ok(true) => {
info!("Successfully verified signature from peer {}", peer_id);
true
}
Ok(false) => {
warn!("Invalid signature from peer {}", peer_id);
false
}
Err(e) => {
error!("Signature verification error from peer {}: {}", peer_id, e);
false
}
}
} else {
warn!("Invalid public key length from peer {}: {} bytes", peer_id, public_key_vec.len());
false
}
} else {
warn!("Received peer result without public key from {}", peer_id);
false
};

db_result.verified = verified;

// Keep peer record fresh when results arrive
let peer_model = Peer::new_online(peer_id.clone(), SystemTime::now());
if let Err(e) = self.database.upsert_peer(&peer_model).await {
warn!("Failed to upsert peer {} on result: {}", peer_id, e);
}

if let Err(e) = self.database.save_peer_result(&db_result).await {
error!("Failed to save peer result: {}", e);
} else {
let status = if verified { "verified" } else { "unverified" };
debug!("Successfully saved {} peer result from {}", status, peer_id);
}

// Update stats for received results
checks_received += 1;
} else {
warn!("Received peer result without signature from {}", peer_id);
}
}
P2PEvent::PeerConnected(peer_id) => {
info!("Peer connected: {}", peer_id);

let now = SystemTime::now();
connected_peers.insert(peer_id.clone());
total_peers_seen.insert(peer_id.clone());

let peer_model = Peer::new_online(peer_id.clone(), now);
if let Err(e) = self.database.upsert_peer(&peer_model).await {
warn!("Failed to upsert peer {}: {}", peer_id, e);
}
}
P2PEvent::PeerDisconnected(peer_id) => {
info!("Peer disconnected: {}", peer_id);

connected_peers.remove(&peer_id);
if let Err(e) = self.database.mark_peer_offline(&peer_id, SystemTime::now()).await {
warn!("Failed to mark peer offline {}: {}", peer_id, e);
}
}
P2PEvent::Started { peer_id } => {
info!("P2P network started with peer ID: {}", peer_id);
}
P2PEvent::Error(err) => {
error!("P2P error: {}", err);
}
_ => {
tracing::trace!("P2P event: {:?}", p2p_event);
}
}

if last_stats_persist.elapsed() >= stats_persist_interval {
let snapshot = NetworkStats {
timestamp: SystemTime::now(),
total_peers: total_peers_seen.len() as i64,
online_peers: connected_peers.len() as i64,
checks_performed,
checks_received,
bandwidth_used_mb: 0,
};

if let Err(e) = self.database.insert_network_stats(&snapshot).await {
warn!("Failed to persist network stats: {}", e);
}

last_stats_persist = Instant::now();
}

Copilot AI Jan 12, 2026


Duplicate network stats persistence logic. The same NetworkStats snapshot creation and insertion code appears in both the monitoring result handler (lines 217-231) and the P2P event handler (lines 337-351). This duplication violates the DRY principle and makes maintenance harder. Consider extracting this into a helper function that both code paths can call.

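For illustration only, one possible extraction of the duplicated snapshot logic; field names mirror the quoted code, while the helper itself and the Repository/NetworkStats shapes (sketched earlier in this thread) are hypothetical:

use std::time::{Duration, Instant, SystemTime};
use tracing::warn;

// Sketch of a shared helper both handlers could call instead of repeating the block.
async fn persist_stats_if_due(
    db: &dyn Repository,
    last_persist: &mut Instant,
    interval: Duration,
    total_peers: i64,
    online_peers: i64,
    checks_performed: i64,
    checks_received: i64,
) {
    if last_persist.elapsed() < interval {
        return;
    }
    let snapshot = NetworkStats {
        timestamp: SystemTime::now(),
        total_peers,
        online_peers,
        checks_performed,
        checks_received,
        bandwidth_used_mb: 0,
    };
    if let Err(e) = db.insert_network_stats(&snapshot).await {
        warn!("Failed to persist network stats: {}", e);
    }
    *last_persist = Instant::now();
}
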
Comment on lines +109 to +115
fn create_gossipsub(keypair: &Keypair) -> Result<gossipsub::Behaviour> {
// Configure gossipsub for monitoring results
let gossipsub_config = gossipsub::ConfigBuilder::default()
.heartbeat_interval(Duration::from_secs(10))
.validation_mode(gossipsub::ValidationMode::Strict)
.message_id_fn(|msg| {
// Use message data hash as ID for deduplication

Copilot AI Jan 12, 2026


Message ID function uses data hash which may not be cryptographically secure for deduplication. The DefaultHasher is not guaranteed to be stable across Rust versions or executions, and is not cryptographically secure. For reliable message deduplication across the network, consider using a cryptographic hash function like SHA256 or use the message's cryptographic signature as the MessageId.

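A sketch of what the suggested change might look like with SHA-256 (illustrative only; assumes the sha2 crate and the current libp2p gossipsub Message type):

use libp2p::gossipsub::{Message, MessageId};
use sha2::{Digest, Sha256};

// Derive the MessageId from a SHA-256 of the payload so deduplication is
// stable across hosts and Rust versions.
fn sha256_message_id(msg: &Message) -> MessageId {
    let digest = Sha256::digest(&msg.data);
    MessageId::from(digest.as_slice().to_vec())
}

// Wiring it in (sketch):
// let cfg = gossipsub::ConfigBuilder::default()
//     .message_id_fn(sha256_message_id)
//     .build()?;
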
Comment on lines +71 to +146
info!("Starting listener on {}", addr);
self.listeners.push((listener_id, addr));
}
Err(e) => {
tracing::warn!("Failed to listen on {}: {}", addr, e);
}
}
}

if self.listeners.is_empty() {
anyhow::bail!("Failed to start any listeners");
}

info!("Started {} listener(s)", self.listeners.len());
Ok(())
}

/// Dial a peer at the specified address
pub fn dial(&mut self, addr: &str) -> Result<()> {
use libp2p::Multiaddr;

let multiaddr: Multiaddr =
addr.parse().map_err(|e| anyhow::anyhow!("Invalid multiaddr '{}': {}", addr, e))?;

self.swarm
.dial(multiaddr.clone())
.map_err(|e| anyhow::anyhow!("Failed to dial {}: {}", multiaddr, e))?;

info!("Dialing peer at {}", multiaddr);
Ok(())
}

/// Add bootstrap peers to Kademlia DHT for peer discovery
/// This is the proper way to bootstrap a Kademlia DHT network
pub fn add_kademlia_bootstrap_peers(
&mut self,
peers: &[(libp2p::PeerId, libp2p::Multiaddr)],
) -> Result<()> {
let kademlia_opt = self.swarm.behaviour_mut().kademlia.as_mut();

if let Some(kademlia) = kademlia_opt {
for (peer_id, addr) in peers {
kademlia.add_address(peer_id, addr.clone());
info!("Added Kademlia bootstrap peer: {} at {}", peer_id, addr);
}

// Trigger bootstrap to populate the routing table
match kademlia.bootstrap() {
Ok(_) => info!("Kademlia bootstrap initiated with {} peer(s)", peers.len()),
Err(e) => tracing::warn!("Kademlia bootstrap error: {:?}", e),
}
} else {
tracing::warn!("Kademlia is not enabled, cannot add bootstrap peers");
}

Ok(())
}

/// Dial multiple bootstrap peers (for initial network join)
/// Note: For production, prefer using add_kademlia_bootstrap_peers for DHT-based discovery
pub fn dial_bootstrap_peers(&mut self, addrs: &[String]) -> Result<()> {
if addrs.is_empty() {
return Ok(());
}

info!("Dialing {} bootstrap peer(s)", addrs.len());
let mut success_count = 0;

for addr in addrs {
match self.dial(addr) {
Ok(_) => success_count += 1,
Err(e) => tracing::warn!("Failed to dial bootstrap peer {}: {}", addr, e),
}
}

if success_count == 0 && !addrs.is_empty() {

Copilot AI Jan 12, 2026


Logging inconsistency between info! and tracing::warn!. Line 71 uses info! macro while lines 75, 116, 119, 138, and 143 use tracing::warn! and tracing::info!. For consistency and maintainability, prefer using the tracing macros (tracing::info!, tracing::warn!) throughout the module instead of mixing with the info! macro.

Comment on lines +58 to +82
pub fn start_listening(&mut self) -> Result<()> {
use libp2p::Multiaddr;

// Listen on all interfaces with configured port range
let (start_port, end_port) = self.config.port_range;

for port in start_port..=end_port {
let addr: Multiaddr = format!("/ip4/0.0.0.0/tcp/{port}")
.parse()
.map_err(|e| anyhow::anyhow!("Failed to parse multiaddr: {}", e))?;

match self.swarm.listen_on(addr.clone()) {
Ok(listener_id) => {
info!("Starting listener on {}", addr);
self.listeners.push((listener_id, addr));
}
Err(e) => {
tracing::warn!("Failed to listen on {}: {}", addr, e);
}
}
}

if self.listeners.is_empty() {
anyhow::bail!("Failed to start any listeners");
}

Copilot AI Jan 12, 2026


Missing validation for port_range configuration. The port_range tuple is not validated to ensure that start_port is less than or equal to end_port. If a configuration has end_port less than start_port (e.g., [9010, 9000]), the loop at line 64 will never execute and no listeners will be started, causing the bail at line 81. Consider adding validation in the config builder or at the start of start_listening to provide a clearer error message.

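For illustration, a guard along the lines the comment suggests (sketch, not the PR's code):

// Fail fast with a clear message instead of silently starting zero listeners.
let (start_port, end_port) = self.config.port_range;
if start_port > end_port {
    anyhow::bail!("invalid port_range: start {start_port} is greater than end {end_port}");
}
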
@veryCrunchy veryCrunchy merged commit b05bd10 into main Jan 12, 2026
8 checks passed
@veryCrunchy veryCrunchy deleted the feat/p2p-gossipsub-monitoring-stats branch January 12, 2026 12:36