diff --git a/dash-spv-ffi/FFI_API.md b/dash-spv-ffi/FFI_API.md index 24f8cc164..38a8e0ea2 100644 --- a/dash-spv-ffi/FFI_API.md +++ b/dash-spv-ffi/FFI_API.md @@ -4,13 +4,14 @@ This document provides a comprehensive reference for all FFI (Foreign Function I **Auto-generated**: This documentation is automatically generated from the source code. Do not edit manually. -**Total Functions**: 39 +**Total Functions**: 54 ## Table of Contents - [Client Management](#client-management) - [Configuration](#configuration) - [Synchronization](#synchronization) +- [Wallet Operations](#wallet-operations) - [Platform Integration](#platform-integration) - [Event Callbacks](#event-callbacks) - [Error Handling](#error-handling) @@ -54,15 +55,27 @@ Functions: 16 ### Synchronization -Functions: 4 +Functions: 7 | Function | Description | Module | |----------|-------------|--------| | `dash_spv_ffi_client_cancel_sync` | Cancels the sync operation | client | +| `dash_spv_ffi_client_clear_sync_event_callbacks` | Clear sync event callbacks | client | +| `dash_spv_ffi_client_get_manager_sync_progress` | Get the current manager-based sync progress | client | | `dash_spv_ffi_client_get_sync_progress` | Get the current sync progress snapshot | client | -| `dash_spv_ffi_client_sync_to_tip_with_progress` | Sync the SPV client to the chain tip with detailed progress updates | client | +| `dash_spv_ffi_client_set_sync_event_callbacks` | Set sync event callbacks for push-based event notifications | client | +| `dash_spv_ffi_manager_sync_progress_destroy` | Destroy an `FFISyncProgress` object and all its nested pointers | types | | `dash_spv_ffi_sync_progress_destroy` | Destroy a `FFISyncProgress` object returned by this crate | client | +### Wallet Operations + +Functions: 2 + +| Function | Description | Module | +|----------|-------------|--------| +| `dash_spv_ffi_client_clear_wallet_event_callbacks` | Clear wallet event callbacks | client | +| `dash_spv_ffi_client_set_wallet_event_callbacks` | Set wallet event callbacks for push-based event notifications | client | + ### Platform Integration Functions: 2 @@ -74,12 +87,14 @@ Functions: 2 ### Event Callbacks -Functions: 2 +Functions: 4 | Function | Description | Module | |----------|-------------|--------| -| `dash_spv_ffi_client_drain_events` | Drain pending events and invoke configured callbacks (non-blocking) | client | -| `dash_spv_ffi_client_set_event_callbacks` | Set event callbacks for the client | client | +| `dash_spv_ffi_client_clear_network_event_callbacks` | Clear network event callbacks | client | +| `dash_spv_ffi_client_clear_progress_callback` | Clear progress callback | client | +| `dash_spv_ffi_client_set_network_event_callbacks` | Set network event callbacks for push-based event notifications | client | +| `dash_spv_ffi_client_set_progress_callback` | Set progress callback for sync progress updates | client | ### Error Handling @@ -91,10 +106,13 @@ Functions: 1 ### Utility Functions -Functions: 10 +Functions: 18 | Function | Description | Module | |----------|-------------|--------| +| `dash_spv_ffi_block_headers_progress_destroy` | Destroy an `FFIBlockHeadersProgress` object | types | +| `dash_spv_ffi_blocks_progress_destroy` | Destroy an `FFIBlocksProgress` object | types | +| `dash_spv_ffi_chainlock_progress_destroy` | Destroy an `FFIChainLockProgress` object | types | | `dash_spv_ffi_checkpoint_before_height` | Get the last checkpoint at or before a given height | checkpoints | | `dash_spv_ffi_checkpoint_before_timestamp` | Get the last checkpoint at or 
before a given UNIX timestamp (seconds) | checkpoints | | `dash_spv_ffi_checkpoint_latest` | Get the latest checkpoint for the given network | checkpoints | @@ -102,7 +120,12 @@ Functions: 10 | `dash_spv_ffi_client_get_tip_hash` | Get the current chain tip hash (32 bytes) if available | client | | `dash_spv_ffi_client_get_tip_height` | Get the current chain tip height (absolute) | client | | `dash_spv_ffi_client_get_wallet_manager` | Get the wallet manager from the SPV client Returns a pointer to an... | client | +| `dash_spv_ffi_client_run` | Start the SPV client and begin syncing in the background | client | +| `dash_spv_ffi_filter_headers_progress_destroy` | Destroy an `FFIFilterHeadersProgress` object | types | +| `dash_spv_ffi_filters_progress_destroy` | Destroy an `FFIFiltersProgress` object | types | | `dash_spv_ffi_init_logging` | Initialize logging for the SPV library | utils | +| `dash_spv_ffi_instantsend_progress_destroy` | Destroy an `FFIInstantSendProgress` object | types | +| `dash_spv_ffi_masternode_progress_destroy` | Destroy an `FFIMasternodesProgress` object | types | | `dash_spv_ffi_version` | No description | utils | | `dash_spv_ffi_wallet_manager_free` | Release a wallet manager obtained from `dash_spv_ffi_client_get_wallet_manager` | client | @@ -432,6 +455,38 @@ The client pointer must be valid and non-null. --- +#### `dash_spv_ffi_client_clear_sync_event_callbacks` + +```c +dash_spv_ffi_client_clear_sync_event_callbacks(client: *mut FFIDashSpvClient,) -> i32 +``` + +**Description:** +Clear sync event callbacks. # Safety - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + +**Safety:** +- `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + +**Module:** `client` + +--- + +#### `dash_spv_ffi_client_get_manager_sync_progress` + +```c +dash_spv_ffi_client_get_manager_sync_progress(client: *mut FFIDashSpvClient,) -> *mut FFISyncProgress +``` + +**Description:** +Get the current manager-based sync progress. Returns the new parallel sync system's progress with per-manager details. Use `dash_spv_ffi_manager_sync_progress_destroy` to free the returned struct. # Safety - `client` must be a valid, non-null pointer. + +**Safety:** +- `client` must be a valid, non-null pointer. + +**Module:** `client` + +--- + #### `dash_spv_ffi_client_get_sync_progress` ```c @@ -448,22 +503,38 @@ Get the current sync progress snapshot. # Safety - `client` must be a valid, no --- -#### `dash_spv_ffi_client_sync_to_tip_with_progress` +#### `dash_spv_ffi_client_set_sync_event_callbacks` ```c -dash_spv_ffi_client_sync_to_tip_with_progress(client: *mut FFIDashSpvClient, progress_callback: Option<extern "C" fn(*const FFIDetailedSyncProgress, *mut c_void)>, completion_callback: Option<extern "C" fn(bool, *const c_char, *mut c_void)>, user_data: *mut c_void,) -> i32 +dash_spv_ffi_client_set_sync_event_callbacks(client: *mut FFIDashSpvClient, callbacks: FFISyncEventCallbacks,) -> i32 ``` **Description:** -Sync the SPV client to the chain tip with detailed progress updates. 
# Safety This function is unsafe because: - `client` must be a valid pointer to an initialized `FFIDashSpvClient` - `user_data` must satisfy thread safety requirements: - If non-null, it must point to data that is safe to access from multiple threads - The caller must ensure proper synchronization if the data is mutable - The data must remain valid for the entire duration of the sync operation - Both `progress_callback` and `completion_callback` must be thread-safe and can be called from any thread # Parameters - `client`: Pointer to the SPV client - `progress_callback`: Optional callback invoked periodically with sync progress - `completion_callback`: Optional callback invoked on completion - `user_data`: Optional user data pointer passed to all callbacks # Returns 0 on success, error code on failure +Set sync event callbacks for push-based event notifications. The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. Call this before calling run(). # Safety - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. - Callbacks must be thread-safe as they may be called from a background thread. **Safety:** -This function is unsafe because: - `client` must be a valid pointer to an initialized `FFIDashSpvClient` - `user_data` must satisfy thread safety requirements: - If non-null, it must point to data that is safe to access from multiple threads - The caller must ensure proper synchronization if the data is mutable - The data must remain valid for the entire duration of the sync operation - Both `progress_callback` and `completion_callback` must be thread-safe and can be called from any thread +- `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. - Callbacks must be thread-safe as they may be called from a background thread. **Module:** `client` --- +#### `dash_spv_ffi_manager_sync_progress_destroy` + +```c +dash_spv_ffi_manager_sync_progress_destroy(progress: *mut FFISyncProgress,) -> () +``` + +**Description:** +Destroy an `FFISyncProgress` object and all its nested pointers. # Safety - `progress` must be a pointer returned from this crate, or null. + +**Safety:** +- `progress` must be a pointer returned from this crate, or null. + +**Module:** `types` + +--- + #### `dash_spv_ffi_sync_progress_destroy` ```c @@ -480,6 +551,40 @@ Destroy a `FFISyncProgress` object returned by this crate. # Safety - `progress --- +### Wallet Operations - Detailed + +#### `dash_spv_ffi_client_clear_wallet_event_callbacks` + +```c +dash_spv_ffi_client_clear_wallet_event_callbacks(client: *mut FFIDashSpvClient,) -> i32 +``` + +**Description:** +Clear wallet event callbacks. # Safety - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + +**Safety:** +- `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + +**Module:** `client` + +--- + +#### `dash_spv_ffi_client_set_wallet_event_callbacks` + +```c +dash_spv_ffi_client_set_wallet_event_callbacks(client: *mut FFIDashSpvClient, callbacks: FFIWalletEventCallbacks,) -> i32 +``` + +**Description:** +Set wallet event callbacks for push-based event notifications. The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. Call this before calling run(). # Safety - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. 
- The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. - Callbacks must be thread-safe as they may be called from a background thread. + +**Safety:** +- `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. - Callbacks must be thread-safe as they may be called from a background thread. + +**Module:** `client` + +--- + ### Platform Integration - Detailed #### `ffi_dash_spv_get_platform_activation_height` @@ -516,33 +621,65 @@ This function is unsafe because: - The caller must ensure all pointers are valid ### Event Callbacks - Detailed -#### `dash_spv_ffi_client_drain_events` +#### `dash_spv_ffi_client_clear_network_event_callbacks` ```c -dash_spv_ffi_client_drain_events(client: *mut FFIDashSpvClient) -> i32 +dash_spv_ffi_client_clear_network_event_callbacks(client: *mut FFIDashSpvClient,) -> i32 ``` **Description:** -Drain pending events and invoke configured callbacks (non-blocking). # Safety - `client` must be a valid, non-null pointer. +Clear network event callbacks. # Safety - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. **Safety:** -- `client` must be a valid, non-null pointer. +- `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. **Module:** `client` --- -#### `dash_spv_ffi_client_set_event_callbacks` +#### `dash_spv_ffi_client_clear_progress_callback` ```c -dash_spv_ffi_client_set_event_callbacks(client: *mut FFIDashSpvClient, callbacks: FFIEventCallbacks,) -> i32 +dash_spv_ffi_client_clear_progress_callback(client: *mut FFIDashSpvClient,) -> i32 ``` **Description:** -Set event callbacks for the client. # Safety - `client` must be a valid, non-null pointer. +Clear progress callback. # Safety - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. **Safety:** -- `client` must be a valid, non-null pointer. +- `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + +**Module:** `client` + +--- + +#### `dash_spv_ffi_client_set_network_event_callbacks` + +```c +dash_spv_ffi_client_set_network_event_callbacks(client: *mut FFIDashSpvClient, callbacks: FFINetworkEventCallbacks,) -> i32 +``` + +**Description:** +Set network event callbacks for push-based event notifications. The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. Call this before calling run(). # Safety - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. - Callbacks must be thread-safe as they may be called from a background thread. + +**Safety:** +- `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. - Callbacks must be thread-safe as they may be called from a background thread. + +**Module:** `client` + +--- + +#### `dash_spv_ffi_client_set_progress_callback` + +```c +dash_spv_ffi_client_set_progress_callback(client: *mut FFIDashSpvClient, callback: crate::FFIProgressCallback,) -> i32 +``` + +**Description:** +Set progress callback for sync progress updates. The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. Call this before calling run(). # Safety - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. - The `callback` struct and its `user_data` must remain valid until the callback is cleared. 
- The callback must be thread-safe as it may be called from a background thread. + +**Safety:** +- `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. - The `callback` struct and its `user_data` must remain valid until the callback is cleared. - The callback must be thread-safe as it may be called from a background thread. **Module:** `client` @@ -562,6 +699,54 @@ dash_spv_ffi_get_last_error() -> *const c_char ### Utility Functions - Detailed +#### `dash_spv_ffi_block_headers_progress_destroy` + +```c +dash_spv_ffi_block_headers_progress_destroy(progress: *mut FFIBlockHeadersProgress,) -> () +``` + +**Description:** +Destroy an `FFIBlockHeadersProgress` object. # Safety - `progress` must be a pointer returned from this crate, or null. + +**Safety:** +- `progress` must be a pointer returned from this crate, or null. + +**Module:** `types` + +--- + +#### `dash_spv_ffi_blocks_progress_destroy` + +```c +dash_spv_ffi_blocks_progress_destroy(progress: *mut FFIBlocksProgress) -> () +``` + +**Description:** +Destroy an `FFIBlocksProgress` object. # Safety - `progress` must be a pointer returned from this crate, or null. + +**Safety:** +- `progress` must be a pointer returned from this crate, or null. + +**Module:** `types` + +--- + +#### `dash_spv_ffi_chainlock_progress_destroy` + +```c +dash_spv_ffi_chainlock_progress_destroy(progress: *mut FFIChainLockProgress,) -> () +``` + +**Description:** +Destroy an `FFIChainLockProgress` object. # Safety - `progress` must be a pointer returned from this crate, or null. + +**Safety:** +- `progress` must be a pointer returned from this crate, or null. + +**Module:** `types` + +--- + #### `dash_spv_ffi_checkpoint_before_height` ```c @@ -674,6 +859,54 @@ The caller must ensure that: - The client pointer is valid - The returned pointe --- +#### `dash_spv_ffi_client_run` + +```c +dash_spv_ffi_client_run(client: *mut FFIDashSpvClient) -> i32 +``` + +**Description:** +Start the SPV client and begin syncing in the background. This is the streamlined entry point that combines `start()` and continuous monitoring into a single non-blocking call. Use event callbacks (set via `set_sync_event_callbacks`, `set_network_event_callbacks`, `set_wallet_event_callbacks`) to receive notifications about sync progress, peer connections, and wallet activity. Workflow: 1. Configure event callbacks before calling `run()` 2. Call `run()` - it returns immediately after spawning background sync threads 3. Receive notifications via callbacks as sync progresses 4. Call `stop()` when done # Safety - `client` must be a valid, non-null pointer to a created client. # Returns 0 on success, error code on failure. + +**Safety:** +- `client` must be a valid, non-null pointer to a created client. + +**Module:** `client` + +--- + +#### `dash_spv_ffi_filter_headers_progress_destroy` + +```c +dash_spv_ffi_filter_headers_progress_destroy(progress: *mut FFIFilterHeadersProgress,) -> () +``` + +**Description:** +Destroy an `FFIFilterHeadersProgress` object. # Safety - `progress` must be a pointer returned from this crate, or null. + +**Safety:** +- `progress` must be a pointer returned from this crate, or null. + +**Module:** `types` + +--- + +#### `dash_spv_ffi_filters_progress_destroy` + +```c +dash_spv_ffi_filters_progress_destroy(progress: *mut FFIFiltersProgress) -> () +``` + +**Description:** +Destroy an `FFIFiltersProgress` object. # Safety - `progress` must be a pointer returned from this crate, or null. 
+ +**Safety:** +- `progress` must be a pointer returned from this crate, or null. + +**Module:** `types` + +--- + #### `dash_spv_ffi_init_logging` ```c @@ -690,6 +923,38 @@ Initialize logging for the SPV library. # Arguments - `level`: Log level string --- +#### `dash_spv_ffi_instantsend_progress_destroy` + +```c +dash_spv_ffi_instantsend_progress_destroy(progress: *mut FFIInstantSendProgress,) -> () +``` + +**Description:** +Destroy an `FFIInstantSendProgress` object. # Safety - `progress` must be a pointer returned from this crate, or null. + +**Safety:** +- `progress` must be a pointer returned from this crate, or null. + +**Module:** `types` + +--- + +#### `dash_spv_ffi_masternode_progress_destroy` + +```c +dash_spv_ffi_masternode_progress_destroy(progress: *mut FFIMasternodesProgress,) -> () +``` + +**Description:** +Destroy an `FFIMasternodesProgress` object. # Safety - `progress` must be a pointer returned from this crate, or null. + +**Safety:** +- `progress` must be a pointer returned from this crate, or null. + +**Module:** `types` + +--- + #### `dash_spv_ffi_version` ```c diff --git a/dash-spv-ffi/dash_spv_ffi.h b/dash-spv-ffi/dash_spv_ffi.h index bdbde3fbc..ab63d67a5 100644 --- a/dash-spv-ffi/dash_spv_ffi.h +++ b/dash-spv-ffi/dash_spv_ffi.h @@ -336,7 +336,7 @@ int32_t dash_spv_ffi_client_update_config(struct FFIDashSpvClient *client, * 0 on success, error code on failure */ -int32_t dash_spv_ffi_client_sync_to_tip(struct FFIDashSpvClient *client, +int32_t dash_spv_ffi_client_start_sync(struct FFIDashSpvClient *client, void (*completion_callback)(bool, const char*, void*), void *user_data) ; @@ -382,7 +382,7 @@ int32_t dash_spv_ffi_client_sync_to_tip(struct FFIDashSpvClient *client, * 0 on success, error code on failure */ -int32_t dash_spv_ffi_client_sync_to_tip_with_progress(struct FFIDashSpvClient *client, +int32_t dash_spv_ffi_client_start_sync_with_progress(struct FFIDashSpvClient *client, void (*progress_callback)(const struct FFIDetailedSyncProgress*, void*), void (*completion_callback)(bool, diff --git a/dash-spv-ffi/include/dash_spv_ffi.h b/dash-spv-ffi/include/dash_spv_ffi.h index 4d7cc65ef..f71e51aaa 100644 --- a/dash-spv-ffi/include/dash_spv_ffi.h +++ b/dash-spv-ffi/include/dash_spv_ffi.h @@ -16,18 +16,30 @@ namespace dash_spv_ffi { #endif // __cplusplus -typedef enum FFISyncStage { - Connecting = 0, - QueryingHeight = 1, - Downloading = 2, - Validating = 3, - Storing = 4, - DownloadingFilterHeaders = 5, - DownloadingFilters = 6, - DownloadingBlocks = 7, - Complete = 8, - Failed = 9, -} FFISyncStage; +/** + * SyncState exposed by the FFI as FFISyncState. + */ +typedef enum FFISyncState { + Initializing = 0, + WaitingForConnections = 1, + WaitForEvents = 2, + Syncing = 3, + Synced = 4, + Error = 5, +} FFISyncState; + +/** + * Identifies which sync manager generated an event. + */ +typedef enum FFIManagerId { + Headers = 0, + FilterHeaders = 1, + Filters = 2, + Blocks = 3, + Masternodes = 4, + ChainLocks = 5, + InstantSend = 6, +} FFIManagerId; typedef enum FFIMempoolStrategy { FetchAll = 0, @@ -42,83 +54,115 @@ typedef struct FFIClientConfig { } FFIClientConfig; -typedef struct FFIString { - char *ptr; - uintptr_t length; -} FFIString; +/** + * Progress for block headers synchronization. 
+ */ +typedef struct FFIBlockHeadersProgress { + enum FFISyncState state; + uint32_t current_height; + uint32_t target_height; + uint32_t processed; + uint32_t buffered; + double percentage; + uint64_t last_activity; +} FFIBlockHeadersProgress; -typedef struct FFISyncProgress { - uint32_t header_height; - uint32_t filter_header_height; - uint32_t masternode_height; - uint32_t peer_count; - bool filter_sync_available; - uint32_t filters_downloaded; - uint32_t last_synced_filter_height; -} FFISyncProgress; +/** + * Progress for filter headers synchronization. + */ +typedef struct FFIFilterHeadersProgress { + enum FFISyncState state; + uint32_t current_height; + uint32_t target_height; + uint32_t block_header_tip_height; + uint32_t processed; + double percentage; + uint64_t last_activity; +} FFIFilterHeadersProgress; -typedef struct FFIDetailedSyncProgress { - uint32_t total_height; +/** + * Progress for compact block filters synchronization. + */ +typedef struct FFIFiltersProgress { + enum FFISyncState state; + uint32_t current_height; + uint32_t target_height; + uint32_t filter_header_tip_height; + uint32_t downloaded; + uint32_t processed; + uint32_t matched; double percentage; - double headers_per_second; - int64_t estimated_seconds_remaining; - enum FFISyncStage stage; - struct FFIString stage_message; - struct FFISyncProgress overview; - uint64_t total_headers; - int64_t sync_start_timestamp; -} FFIDetailedSyncProgress; - -typedef void (*BlockCallback)(uint32_t height, const uint8_t (*hash)[32], void *user_data); - -typedef void (*TransactionCallback)(const uint8_t (*txid)[32], - bool confirmed, - int64_t amount, - const char *addresses, - uint32_t block_height, - void *user_data); - -typedef void (*BalanceCallback)(uint64_t confirmed, uint64_t unconfirmed, void *user_data); - -typedef void (*MempoolTransactionCallback)(const uint8_t (*txid)[32], - int64_t amount, - const char *addresses, - bool is_instant_send, - void *user_data); - -typedef void (*MempoolConfirmedCallback)(const uint8_t (*txid)[32], - uint32_t block_height, - const uint8_t (*block_hash)[32], - void *user_data); + uint64_t last_activity; +} FFIFiltersProgress; -typedef void (*MempoolRemovedCallback)(const uint8_t (*txid)[32], uint8_t reason, void *user_data); - -typedef void (*CompactFilterMatchedCallback)(const uint8_t (*block_hash)[32], - const char *matched_scripts, - const char *wallet_id, - void *user_data); - -typedef void (*WalletTransactionCallback)(const char *wallet_id, - uint32_t account_index, - const uint8_t (*txid)[32], - bool confirmed, - int64_t amount, - const char *addresses, - uint32_t block_height, - bool is_ours, - void *user_data); - -typedef struct FFIEventCallbacks { - BlockCallback on_block; - TransactionCallback on_transaction; - BalanceCallback on_balance_update; - MempoolTransactionCallback on_mempool_transaction_added; - MempoolConfirmedCallback on_mempool_transaction_confirmed; - MempoolRemovedCallback on_mempool_transaction_removed; - CompactFilterMatchedCallback on_compact_filter_matched; - WalletTransactionCallback on_wallet_transaction; - void *user_data; -} FFIEventCallbacks; +/** + * Progress for full block synchronization. + */ +typedef struct FFIBlocksProgress { + enum FFISyncState state; + uint32_t last_processed; + uint32_t requested; + uint32_t from_storage; + uint32_t downloaded; + uint32_t processed; + uint32_t relevant; + uint32_t transactions; + uint64_t last_activity; +} FFIBlocksProgress; + +/** + * Progress for masternode list synchronization. 
+ */ +typedef struct FFIMasternodesProgress { + enum FFISyncState state; + uint32_t current_height; + uint32_t target_height; + uint32_t block_header_tip_height; + uint32_t diffs_processed; + uint64_t last_activity; +} FFIMasternodesProgress; + +/** + * Progress for ChainLock synchronization. + */ +typedef struct FFIChainLockProgress { + enum FFISyncState state; + uint32_t best_validated_height; + uint32_t valid; + uint32_t invalid; + uint64_t last_activity; +} FFIChainLockProgress; + +/** + * Progress for InstantSend synchronization. + */ +typedef struct FFIInstantSendProgress { + enum FFISyncState state; + uint32_t pending; + uint32_t valid; + uint32_t invalid; + uint64_t last_activity; +} FFIInstantSendProgress; + +/** + * Aggregate progress for all sync managers. + * Provides a complete view of the parallel sync system's state. + */ +typedef struct FFISyncProgress { + enum FFISyncState state; + double percentage; + bool is_synced; + /** + * Per-manager progress (null if manager not started). + */ + struct FFIBlockHeadersProgress *headers; + struct FFIFilterHeadersProgress *filter_headers; + struct FFIFiltersProgress *filters; + struct FFIBlocksProgress *blocks; + struct FFIMasternodesProgress *masternodes; + struct FFIChainLockProgress *chainlocks; + struct FFIInstantSendProgress *instantsend; +} FFISyncProgress; /** * Opaque handle to the wallet manager owned by the SPV client. @@ -131,6 +175,264 @@ typedef struct FFIWalletManager { uint8_t _private[0]; } FFIWalletManager; +/** + * Callback for SyncEvent::SyncStart + */ +typedef void (*OnSyncStartCallback)(enum FFIManagerId manager_id, void *user_data); + +/** + * Callback for SyncEvent::BlockHeadersStored + */ +typedef void (*OnBlockHeadersStoredCallback)(uint32_t tip_height, void *user_data); + +/** + * Callback for SyncEvent::BlockHeaderSyncComplete + */ +typedef void (*OnBlockHeaderSyncCompleteCallback)(uint32_t tip_height, void *user_data); + +/** + * Callback for SyncEvent::FilterHeadersStored + */ +typedef void (*OnFilterHeadersStoredCallback)(uint32_t start_height, + uint32_t end_height, + uint32_t tip_height, + void *user_data); + +/** + * Callback for SyncEvent::FilterHeadersSyncComplete + */ +typedef void (*OnFilterHeadersSyncCompleteCallback)(uint32_t tip_height, void *user_data); + +/** + * Callback for SyncEvent::FiltersStored + */ +typedef void (*OnFiltersStoredCallback)(uint32_t start_height, uint32_t end_height, void *user_data); + +/** + * Callback for SyncEvent::FiltersSyncComplete + */ +typedef void (*OnFiltersSyncCompleteCallback)(uint32_t tip_height, void *user_data); + +/** + * A block that needs to be downloaded (height + hash). + */ +typedef struct FFIBlockNeeded { + /** + * Block height + */ + uint32_t height; + /** + * Block hash (32 bytes) + */ + uint8_t hash[32]; +} FFIBlockNeeded; + +/** + * Callback for SyncEvent::BlocksNeeded + * + * The `blocks` pointer points to an array of `FFIBlockNeeded` structs. + * The pointer is borrowed and only valid for the duration of the callback. + * Callers must memcpy/duplicate any data they need to retain after the + * callback returns. + */ +typedef void (*OnBlocksNeededCallback)(const struct FFIBlockNeeded *blocks, + uint32_t count, + void *user_data); + +/** + * Callback for SyncEvent::BlockProcessed + * + * The `hash` pointer is borrowed and only valid for the duration of the + * callback. Callers must memcpy/duplicate it to retain the value after + * the callback returns. 
+ */ +typedef void (*OnBlockProcessedCallback)(uint32_t height, + const uint8_t (*hash)[32], + uint32_t new_address_count, + void *user_data); + +/** + * Callback for SyncEvent::MasternodeStateUpdated + */ +typedef void (*OnMasternodeStateUpdatedCallback)(uint32_t height, void *user_data); + +/** + * Callback for SyncEvent::ChainLockReceived + * + * The `hash` and `signature` pointers are borrowed and only valid for the + * duration of the callback. Callers must memcpy/duplicate them to retain + * the values after the callback returns. + */ +typedef void (*OnChainLockReceivedCallback)(uint32_t height, + const uint8_t (*hash)[32], + const uint8_t (*signature)[96], + bool validated, + void *user_data); + +/** + * Callback for SyncEvent::InstantLockReceived + * + * The `txid` pointer is borrowed and only valid for the duration of the callback. + * The `instantlock_data` pointer points to the consensus-serialized InstantLock + * bytes and is only valid for the duration of the callback. + * Callers must memcpy/duplicate any data they need to retain. + */ +typedef void (*OnInstantLockReceivedCallback)(const uint8_t (*txid)[32], + const uint8_t *instantlock_data, + uintptr_t instantlock_len, + bool validated, + void *user_data); + +/** + * Callback for SyncEvent::ManagerError + * + * The `error` string pointer is borrowed and only valid for the duration + * of the callback. Callers must copy the string if they need to retain it + * after the callback returns. + */ +typedef void (*OnManagerErrorCallback)(enum FFIManagerId manager_id, + const char *error, + void *user_data); + +/** + * Callback for SyncEvent::SyncComplete + */ +typedef void (*OnSyncCompleteCallback)(uint32_t header_tip, void *user_data); + +/** + * Sync event callbacks - one callback per SyncEvent variant. + * + * Set only the callbacks you're interested in; unset callbacks will be ignored. + * + * All pointer parameters passed to callbacks (strings, hashes, arrays) are + * borrowed and only valid for the duration of the callback invocation. + * Callers must memcpy/duplicate any data they need to retain. + */ +typedef struct FFISyncEventCallbacks { + OnSyncStartCallback on_sync_start; + OnBlockHeadersStoredCallback on_block_headers_stored; + OnBlockHeaderSyncCompleteCallback on_block_header_sync_complete; + OnFilterHeadersStoredCallback on_filter_headers_stored; + OnFilterHeadersSyncCompleteCallback on_filter_headers_sync_complete; + OnFiltersStoredCallback on_filters_stored; + OnFiltersSyncCompleteCallback on_filters_sync_complete; + OnBlocksNeededCallback on_blocks_needed; + OnBlockProcessedCallback on_block_processed; + OnMasternodeStateUpdatedCallback on_masternode_state_updated; + OnChainLockReceivedCallback on_chainlock_received; + OnInstantLockReceivedCallback on_instantlock_received; + OnManagerErrorCallback on_manager_error; + OnSyncCompleteCallback on_sync_complete; + void *user_data; +} FFISyncEventCallbacks; + +/** + * Callback for NetworkEvent::PeerConnected + * + * The `address` string pointer is borrowed and only valid for the duration + * of the callback. Callers must copy the string if they need to retain it + * after the callback returns. + */ +typedef void (*OnPeerConnectedCallback)(const char *address, void *user_data); + +/** + * Callback for NetworkEvent::PeerDisconnected + * + * The `address` string pointer is borrowed and only valid for the duration + * of the callback. Callers must copy the string if they need to retain it + * after the callback returns. 
+ */ +typedef void (*OnPeerDisconnectedCallback)(const char *address, void *user_data); + +/** + * Callback for NetworkEvent::PeersUpdated + */ +typedef void (*OnPeersUpdatedCallback)(uint32_t connected_count, + uint32_t best_height, + void *user_data); + +/** + * Network event callbacks - one callback per NetworkEvent variant. + * + * Set only the callbacks you're interested in; unset callbacks will be ignored. + * + * All pointer parameters passed to callbacks (strings, addresses) are + * borrowed and only valid for the duration of the callback invocation. + * Callers must copy any data they need to retain. + */ +typedef struct FFINetworkEventCallbacks { + OnPeerConnectedCallback on_peer_connected; + OnPeerDisconnectedCallback on_peer_disconnected; + OnPeersUpdatedCallback on_peers_updated; + void *user_data; +} FFINetworkEventCallbacks; + +/** + * Callback for WalletEvent::TransactionReceived + * + * The `wallet_id`, `addresses` string pointers and the `txid` hash pointer + * are borrowed and only valid for the duration of the callback. Callers must + * copy any data they need to retain after the callback returns. + */ +typedef void (*OnTransactionReceivedCallback)(const char *wallet_id, + uint32_t account_index, + const uint8_t (*txid)[32], + int64_t amount, + const char *addresses, + void *user_data); + +/** + * Callback for WalletEvent::BalanceUpdated + * + * The `wallet_id` string pointer is borrowed and only valid for the duration + * of the callback. Callers must copy the string if they need to retain it + * after the callback returns. + */ +typedef void (*OnBalanceUpdatedCallback)(const char *wallet_id, + uint64_t spendable, + uint64_t unconfirmed, + uint64_t immature, + uint64_t locked, + void *user_data); + +/** + * Wallet event callbacks - one callback per WalletEvent variant. + * + * Set only the callbacks you're interested in; unset callbacks will be ignored. + * + * All pointer parameters passed to callbacks (wallet IDs, txids, addresses) + * are borrowed and only valid for the duration of the callback invocation. + * Callers must copy any data they need to retain. + */ +typedef struct FFIWalletEventCallbacks { + OnTransactionReceivedCallback on_transaction_received; + OnBalanceUpdatedCallback on_balance_updated; + void *user_data; +} FFIWalletEventCallbacks; + +/** + * Callback for sync progress updates. + * + * Called whenever the sync progress changes. The progress pointer is only + * valid for the duration of the callback. The caller must NOT free the + * progress pointer - it will be freed automatically after the callback returns. + */ +typedef void (*OnProgressUpdateCallback)(const struct FFISyncProgress *progress, void *user_data); + +/** + * Progress callback configuration. + */ +typedef struct FFIProgressCallback { + /** + * Callback function for progress updates. + */ + OnProgressUpdateCallback on_progress; + /** + * User data passed to the callback. + */ + void *user_data; +} FFIProgressCallback; + /** * FFIResult type for error handling */ @@ -193,14 +495,6 @@ int32_t dash_spv_ffi_checkpoint_before_timestamp(FFINetwork network, */ struct FFIDashSpvClient *dash_spv_ffi_client_new(const struct FFIClientConfig *config) ; -/** - * Drain pending events and invoke configured callbacks (non-blocking). - * - * # Safety - * - `client` must be a valid, non-null pointer. - */ - int32_t dash_spv_ffi_client_drain_events(struct FFIDashSpvClient *client) ; - /** * Update the running client's configuration. 
* @@ -231,38 +525,26 @@ int32_t dash_spv_ffi_client_update_config(struct FFIDashSpvClient *client, int32_t dash_spv_ffi_client_stop(struct FFIDashSpvClient *client) ; /** - * Sync the SPV client to the chain tip with detailed progress updates. - * - * # Safety + * Start the SPV client and begin syncing in the background. * - * This function is unsafe because: - * - `client` must be a valid pointer to an initialized `FFIDashSpvClient` - * - `user_data` must satisfy thread safety requirements: - * - If non-null, it must point to data that is safe to access from multiple threads - * - The caller must ensure proper synchronization if the data is mutable - * - The data must remain valid for the entire duration of the sync operation - * - Both `progress_callback` and `completion_callback` must be thread-safe and can be called from any thread + * This is the streamlined entry point that combines `start()` and continuous monitoring + * into a single non-blocking call. Use event callbacks (set via `set_sync_event_callbacks`, + * `set_network_event_callbacks`, `set_wallet_event_callbacks`) to receive notifications + * about sync progress, peer connections, and wallet activity. * - * # Parameters + * Workflow: + * 1. Configure event callbacks before calling `run()` + * 2. Call `run()` - it returns immediately after spawning background sync threads + * 3. Receive notifications via callbacks as sync progresses + * 4. Call `stop()` when done * - * - `client`: Pointer to the SPV client - * - `progress_callback`: Optional callback invoked periodically with sync progress - * - `completion_callback`: Optional callback invoked on completion - * - `user_data`: Optional user data pointer passed to all callbacks + * # Safety + * - `client` must be a valid, non-null pointer to a created client. * * # Returns - * - * 0 on success, error code on failure + * 0 on success, error code on failure. */ - -int32_t dash_spv_ffi_client_sync_to_tip_with_progress(struct FFIDashSpvClient *client, - void (*progress_callback)(const struct FFIDetailedSyncProgress*, - void*), - void (*completion_callback)(bool, - const char*, - void*), - void *user_data) -; + int32_t dash_spv_ffi_client_run(struct FFIDashSpvClient *client) ; /** * Cancels the sync operation. @@ -286,6 +568,19 @@ int32_t dash_spv_ffi_client_sync_to_tip_with_progress(struct FFIDashSpvClient *c */ struct FFISyncProgress *dash_spv_ffi_client_get_sync_progress(struct FFIDashSpvClient *client) ; +/** + * Get the current manager-based sync progress. + * + * Returns the new parallel sync system's progress with per-manager details. + * Use `dash_spv_ffi_manager_sync_progress_destroy` to free the returned struct. + * + * # Safety + * - `client` must be a valid, non-null pointer. + */ + +struct FFISyncProgress *dash_spv_ffi_client_get_manager_sync_progress(struct FFIDashSpvClient *client) +; + /** * Get the current chain tip hash (32 bytes) if available. * @@ -312,17 +607,6 @@ int32_t dash_spv_ffi_client_sync_to_tip_with_progress(struct FFIDashSpvClient *c */ int32_t dash_spv_ffi_client_clear_storage(struct FFIDashSpvClient *client) ; -/** - * Set event callbacks for the client. - * - * # Safety - * - `client` must be a valid, non-null pointer. - */ - -int32_t dash_spv_ffi_client_set_event_callbacks(struct FFIDashSpvClient *client, - struct FFIEventCallbacks callbacks) -; - /** * Destroy the client and free associated resources. 
* @@ -372,6 +656,102 @@ int32_t dash_spv_ffi_client_set_event_callbacks(struct FFIDashSpvClient *client, */ void dash_spv_ffi_wallet_manager_free(struct FFIWalletManager *manager) ; +/** + * Set sync event callbacks for push-based event notifications. + * + * The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. + * Call this before calling run(). + * + * # Safety + * - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + * - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. + * - Callbacks must be thread-safe as they may be called from a background thread. + */ + +int32_t dash_spv_ffi_client_set_sync_event_callbacks(struct FFIDashSpvClient *client, + struct FFISyncEventCallbacks callbacks) +; + +/** + * Clear sync event callbacks. + * + * # Safety + * - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + */ + int32_t dash_spv_ffi_client_clear_sync_event_callbacks(struct FFIDashSpvClient *client) ; + +/** + * Set network event callbacks for push-based event notifications. + * + * The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. + * Call this before calling run(). + * + * # Safety + * - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + * - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. + * - Callbacks must be thread-safe as they may be called from a background thread. + */ + +int32_t dash_spv_ffi_client_set_network_event_callbacks(struct FFIDashSpvClient *client, + struct FFINetworkEventCallbacks callbacks) +; + +/** + * Clear network event callbacks. + * + * # Safety + * - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + */ + int32_t dash_spv_ffi_client_clear_network_event_callbacks(struct FFIDashSpvClient *client) ; + +/** + * Set wallet event callbacks for push-based event notifications. + * + * The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. + * Call this before calling run(). + * + * # Safety + * - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + * - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. + * - Callbacks must be thread-safe as they may be called from a background thread. + */ + +int32_t dash_spv_ffi_client_set_wallet_event_callbacks(struct FFIDashSpvClient *client, + struct FFIWalletEventCallbacks callbacks) +; + +/** + * Clear wallet event callbacks. + * + * # Safety + * - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + */ + int32_t dash_spv_ffi_client_clear_wallet_event_callbacks(struct FFIDashSpvClient *client) ; + +/** + * Set progress callback for sync progress updates. + * + * The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. + * Call this before calling run(). + * + * # Safety + * - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. + * - The `callback` struct and its `user_data` must remain valid until the callback is cleared. + * - The callback must be thread-safe as it may be called from a background thread. + */ + +int32_t dash_spv_ffi_client_set_progress_callback(struct FFIDashSpvClient *client, + struct FFIProgressCallback callback) +; + +/** + * Clear progress callback. + * + * # Safety + * - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. 
+ */ + int32_t dash_spv_ffi_client_clear_progress_callback(struct FFIDashSpvClient *client) ; + struct FFIClientConfig *dash_spv_ffi_config_new(FFINetwork network) ; struct FFIClientConfig *dash_spv_ffi_config_mainnet(void) ; @@ -566,6 +946,70 @@ struct FFIResult ffi_dash_spv_get_platform_activation_height(struct FFIDashSpvCl uint32_t *out_height) ; +/** + * Destroy an `FFIBlockHeadersProgress` object. + * + * # Safety + * - `progress` must be a pointer returned from this crate, or null. + */ + void dash_spv_ffi_block_headers_progress_destroy(struct FFIBlockHeadersProgress *progress) ; + +/** + * Destroy an `FFIFilterHeadersProgress` object. + * + * # Safety + * - `progress` must be a pointer returned from this crate, or null. + */ + void dash_spv_ffi_filter_headers_progress_destroy(struct FFIFilterHeadersProgress *progress) ; + +/** + * Destroy an `FFIFiltersProgress` object. + * + * # Safety + * - `progress` must be a pointer returned from this crate, or null. + */ + void dash_spv_ffi_filters_progress_destroy(struct FFIFiltersProgress *progress) ; + +/** + * Destroy an `FFIBlocksProgress` object. + * + * # Safety + * - `progress` must be a pointer returned from this crate, or null. + */ + void dash_spv_ffi_blocks_progress_destroy(struct FFIBlocksProgress *progress) ; + +/** + * Destroy an `FFIMasternodesProgress` object. + * + * # Safety + * - `progress` must be a pointer returned from this crate, or null. + */ + void dash_spv_ffi_masternode_progress_destroy(struct FFIMasternodesProgress *progress) ; + +/** + * Destroy an `FFIChainLockProgress` object. + * + * # Safety + * - `progress` must be a pointer returned from this crate, or null. + */ + void dash_spv_ffi_chainlock_progress_destroy(struct FFIChainLockProgress *progress) ; + +/** + * Destroy an `FFIInstantSendProgress` object. + * + * # Safety + * - `progress` must be a pointer returned from this crate, or null. + */ + void dash_spv_ffi_instantsend_progress_destroy(struct FFIInstantSendProgress *progress) ; + +/** + * Destroy an `FFISyncProgress` object and all its nested pointers. + * + * # Safety + * - `progress` must be a pointer returned from this crate, or null. + */ + void dash_spv_ffi_manager_sync_progress_destroy(struct FFISyncProgress *progress) ; + /** * Initialize logging for the SPV library. 
* diff --git a/dash-spv-ffi/src/bin/ffi_cli.rs b/dash-spv-ffi/src/bin/ffi_cli.rs index ea355ce31..837a7c28d 100644 --- a/dash-spv-ffi/src/bin/ffi_cli.rs +++ b/dash-spv-ffi/src/bin/ffi_cli.rs @@ -1,25 +1,13 @@ use std::ffi::{CStr, CString}; use std::os::raw::{c_char, c_void}; use std::ptr; -use std::sync::atomic::{AtomicBool, Ordering}; -use std::thread; -use std::time::Duration; -use clap::{Arg, ArgAction, Command, ValueEnum}; +use clap::{Arg, ArgAction, Command}; use dash_spv_ffi::*; use key_wallet_ffi::wallet_manager::wallet_manager_add_wallet_from_mnemonic; use key_wallet_ffi::{FFIError, FFINetwork}; -#[derive(Copy, Clone, Debug, ValueEnum)] -enum NetworkOpt { - Mainnet, - Testnet, - Regtest, -} - -static SYNC_COMPLETED: AtomicBool = AtomicBool::new(false); - fn ffi_string_to_rust(s: *const c_char) -> String { if s.is_null() { return String::new(); @@ -27,38 +15,229 @@ fn ffi_string_to_rust(s: *const c_char) -> String { unsafe { CStr::from_ptr(s) }.to_str().unwrap_or_default().to_owned() } -extern "C" fn on_detailed_progress(progress: *const FFIDetailedSyncProgress, _ud: *mut c_void) { - if progress.is_null() { - return; - } - unsafe { - let p = &*progress; - println!( - "height {}/{} {:.2}% peers {} hps {:.1}", - p.overview.header_height, - p.total_height, - p.percentage, - p.overview.peer_count, - p.headers_per_second - ); +// ============================================================================ +// Sync Event Callbacks +// ============================================================================ + +extern "C" fn on_sync_start(manager_id: FFIManagerId, _user_data: *mut c_void) { + let manager_name = match manager_id { + FFIManagerId::Headers => "Headers", + FFIManagerId::FilterHeaders => "FilterHeaders", + FFIManagerId::Filters => "Filters", + FFIManagerId::Blocks => "Blocks", + FFIManagerId::Masternodes => "Masternodes", + FFIManagerId::ChainLocks => "ChainLocks", + FFIManagerId::InstantSend => "InstantSend", + }; + println!("[Sync] Manager started: {}", manager_name); +} + +extern "C" fn on_block_headers_stored(tip_height: u32, _user_data: *mut c_void) { + println!("[Sync] Block headers stored, tip: {}", tip_height); +} + +extern "C" fn on_block_header_sync_complete(tip_height: u32, _user_data: *mut c_void) { + println!("[Sync] Block header sync complete at height: {}", tip_height); +} + +extern "C" fn on_filter_headers_stored( + start_height: u32, + end_height: u32, + tip_height: u32, + _user_data: *mut c_void, +) { + println!("[Sync] Filter headers stored: {}-{}, tip: {}", start_height, end_height, tip_height); +} + +extern "C" fn on_filter_headers_sync_complete(tip_height: u32, _user_data: *mut c_void) { + println!("[Sync] Filter headers sync complete at height: {}", tip_height); +} + +extern "C" fn on_filters_stored(start_height: u32, end_height: u32, _user_data: *mut c_void) { + println!("[Sync] Filters stored: {}-{}", start_height, end_height); +} + +extern "C" fn on_filters_sync_complete(tip_height: u32, _user_data: *mut c_void) { + println!("[Sync] Filters sync complete at height: {}", tip_height); +} + +extern "C" fn on_blocks_needed(blocks: *const FFIBlockNeeded, count: u32, _user_data: *mut c_void) { + println!("[Sync] Blocks needed: {}", count); + if !blocks.is_null() && count > 0 { + let blocks_slice = unsafe { std::slice::from_raw_parts(blocks, count as usize) }; + for block in blocks_slice.iter() { + println!(" - height: {}, hash: {}", block.height, hex::encode(block.hash)); + } } } -extern "C" fn on_completion(success: bool, msg: *const c_char, _ud: *mut 
c_void) { - let m = ffi_string_to_rust(msg); - if success { - println!("Completed: {}", m); - SYNC_COMPLETED.store(true, Ordering::SeqCst); +extern "C" fn on_block_processed( + height: u32, + _hash: *const [u8; 32], + new_address_count: u32, + _user_data: *mut c_void, +) { + println!("[Sync] Block processed: height={}, new_addresses={}", height, new_address_count); +} + +extern "C" fn on_masternode_state_updated(height: u32, _user_data: *mut c_void) { + println!("[Sync] Masternode state updated at height: {}", height); +} + +extern "C" fn on_chainlock_received( + height: u32, + hash: *const [u8; 32], + signature: *const [u8; 96], + validated: bool, + _user_data: *mut c_void, +) { + let hash_hex = unsafe { hex::encode(*hash) }; + let signature_hex = unsafe { hex::encode(*signature) }; + println!( + "[Sync] ChainLock received: height={}, hash={}, signature={}, validated={}", + height, hash_hex, signature_hex, validated + ); +} + +extern "C" fn on_instantlock_received( + txid: *const [u8; 32], + _instantlock_data: *const u8, + instantlock_len: usize, + validated: bool, + _user_data: *mut c_void, +) { + let txid_hex = unsafe { hex::encode(*txid) }; + println!( + "[Sync] InstantLock received: txid={}, validated={}, data_len={}", + txid_hex, validated, instantlock_len + ); +} + +extern "C" fn on_manager_error( + manager_id: FFIManagerId, + error: *const c_char, + _user_data: *mut c_void, +) { + let error_str = ffi_string_to_rust(error); + println!("[Sync] Manager error: {:?} - {}", manager_id, error_str); +} + +extern "C" fn on_sync_complete(header_tip: u32, _user_data: *mut c_void) { + println!("[Sync] Sync complete at height: {}", header_tip); +} + +// ============================================================================ +// Network Event Callbacks +// ============================================================================ + +extern "C" fn on_peer_connected(address: *const c_char, _user_data: *mut c_void) { + let addr = ffi_string_to_rust(address); + println!("[Network] Peer connected: {}", addr); +} + +extern "C" fn on_peer_disconnected(address: *const c_char, _user_data: *mut c_void) { + let addr = ffi_string_to_rust(address); + println!("[Network] Peer disconnected: {}", addr); +} + +extern "C" fn on_peers_updated(connected_count: u32, best_height: u32, _user_data: *mut c_void) { + println!("[Network] Peers: {} connected, best height: {}", connected_count, best_height); +} + +// ============================================================================ +// Wallet Event Callbacks +// ============================================================================ + +extern "C" fn on_transaction_received( + wallet_id: *const c_char, + account_index: u32, + txid: *const [u8; 32], + amount: i64, + addresses: *const c_char, + _user_data: *mut c_void, +) { + let wallet_str = ffi_string_to_rust(wallet_id); + let addr_str = ffi_string_to_rust(addresses); + let wallet_short = if wallet_str.len() > 8 { + &wallet_str[..8] } else { - eprintln!("Failed: {}", m); + &wallet_str + }; + let txid_hex = unsafe { hex::encode(*txid) }; + println!( + "[Wallet] TX received: wallet={}..., txid={}, account={}, amount={} duffs, addresses={}", + wallet_short, txid_hex, account_index, amount, addr_str + ); +} + +extern "C" fn on_balance_updated( + wallet_id: *const c_char, + spendable: u64, + unconfirmed: u64, + immature: u64, + locked: u64, + _user_data: *mut c_void, +) { + let wallet_str = ffi_string_to_rust(wallet_id); + let wallet_short = if wallet_str.len() > 8 { + &wallet_str[..8] + } else { + 
&wallet_str + }; + println!( + "[Wallet] Balance updated: wallet={}..., spendable={}, unconfirmed={}, immature={}, locked={}", + wallet_short, spendable, unconfirmed, immature, locked + ); +} + +// ============================================================================ +// Progress Callback +// ============================================================================ + +extern "C" fn on_progress_update(progress: *const FFISyncProgress, _user_data: *mut c_void) { + if progress.is_null() { + return; + } + let p = unsafe { &*progress }; + + let state_str = match p.state { + FFISyncState::Initializing => "Initializing", + FFISyncState::WaitingForConnections => "WaitingForConnections", + FFISyncState::WaitForEvents => "WaitForEvents", + FFISyncState::Syncing => "Syncing", + FFISyncState::Synced => "Synced", + FFISyncState::Error => "Error", + }; + + print!("[Progress] {:.1}% {} ", p.percentage * 100.0, state_str); + + if !p.headers.is_null() { + let h = unsafe { &*p.headers }; + print!("headers:{}/{} ", h.current_height + h.buffered, h.target_height); + } + if !p.filter_headers.is_null() { + let fh = unsafe { &*p.filter_headers }; + print!("filter headers:{}/{} ", fh.current_height, fh.target_height); } + if !p.filters.is_null() { + let f = unsafe { &*p.filters }; + print!("filters:{}/{} ", f.current_height, f.target_height); + } + if !p.blocks.is_null() { + let f = unsafe { &*p.blocks }; + print!("blocks: last: {}, transactions: {} ", f.last_processed, f.transactions); + } + if !p.masternodes.is_null() { + let mn = unsafe { &*p.masternodes }; + print!("masternodes:{}/{} ", mn.current_height, mn.target_height); + } + + println!(); } fn main() { - env_logger::init(); - let matches = Command::new("dash-spv-ffi") - .about("Run SPV sync via FFI") + .about("Run SPV sync via FFI using event callbacks") .arg( Arg::new("network") .long("network") @@ -94,6 +273,19 @@ fn main() { .action(ArgAction::SetTrue) .help("Disable masternode list synchronization"), ) + .arg( + Arg::new("data-dir") + .short('d') + .long("data-dir") + .value_name("DIR") + .help("Data directory for storage (default: unique directory in /tmp)"), + ) + .arg( + Arg::new("mnemonic-file") + .long("mnemonic-file") + .value_name("PATH") + .help("Path to file containing BIP39 mnemonic phrase"), + ) .get_matches(); // Map network @@ -212,63 +404,104 @@ fn main() { dash_spv_ffi_wallet_manager_free(wallet_manager); } - // Set minimal event callbacks - let callbacks = FFIEventCallbacks { - on_block: None, - on_transaction: None, - on_balance_update: None, - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_compact_filter_matched: None, - on_wallet_transaction: None, + // Set up event callbacks + let sync_callbacks = FFISyncEventCallbacks { + on_sync_start: Some(on_sync_start), + on_block_headers_stored: Some(on_block_headers_stored), + on_block_header_sync_complete: Some(on_block_header_sync_complete), + on_filter_headers_stored: Some(on_filter_headers_stored), + on_filter_headers_sync_complete: Some(on_filter_headers_sync_complete), + on_filters_stored: Some(on_filters_stored), + on_filters_sync_complete: Some(on_filters_sync_complete), + on_blocks_needed: Some(on_blocks_needed), + on_block_processed: Some(on_block_processed), + on_masternode_state_updated: Some(on_masternode_state_updated), + on_chainlock_received: Some(on_chainlock_received), + on_instantlock_received: Some(on_instantlock_received), + on_manager_error: Some(on_manager_error), + on_sync_complete: 
Some(on_sync_complete), user_data: ptr::null_mut(), }; - let _ = dash_spv_ffi_client_set_event_callbacks(client, callbacks); - // Start client - let rc = dash_spv_ffi_client_start(client); + let network_callbacks = FFINetworkEventCallbacks { + on_peer_connected: Some(on_peer_connected), + on_peer_disconnected: Some(on_peer_disconnected), + on_peers_updated: Some(on_peers_updated), + user_data: ptr::null_mut(), + }; + + let wallet_callbacks = FFIWalletEventCallbacks { + on_transaction_received: Some(on_transaction_received), + on_balance_updated: Some(on_balance_updated), + user_data: ptr::null_mut(), + }; + + let rc = dash_spv_ffi_client_set_sync_event_callbacks(client, sync_callbacks); if rc != FFIErrorCode::Success as i32 { - eprintln!("Start failed: {}", ffi_string_to_rust(dash_spv_ffi_get_last_error())); + eprintln!( + "Failed to set sync callbacks: {}", + ffi_string_to_rust(dash_spv_ffi_get_last_error()) + ); std::process::exit(1); } - // Ensure completion flag is reset before starting sync - SYNC_COMPLETED.store(false, Ordering::SeqCst); + let rc = dash_spv_ffi_client_set_network_event_callbacks(client, network_callbacks); + if rc != FFIErrorCode::Success as i32 { + eprintln!( + "Failed to set network callbacks: {}", + ffi_string_to_rust(dash_spv_ffi_get_last_error()) + ); + std::process::exit(1); + } - // Run sync on this thread; detailed progress will print via callback - let rc = dash_spv_ffi_client_sync_to_tip_with_progress( - client, - Some(on_detailed_progress), - Some(on_completion), - ptr::null_mut(), - ); + let rc = dash_spv_ffi_client_set_wallet_event_callbacks(client, wallet_callbacks); if rc != FFIErrorCode::Success as i32 { - eprintln!("Sync failed: {}", ffi_string_to_rust(dash_spv_ffi_get_last_error())); + eprintln!( + "Failed to set wallet callbacks: {}", + ffi_string_to_rust(dash_spv_ffi_get_last_error()) + ); std::process::exit(1); } - // Wait for sync completion by polling basic progress flags; drain events meanwhile - loop { - let _ = dash_spv_ffi_client_drain_events(client); - let prog_ptr = dash_spv_ffi_client_get_sync_progress(client); - if !prog_ptr.is_null() { - let prog = &*prog_ptr; - let headers_done = SYNC_COMPLETED.load(Ordering::SeqCst); - let filters_complete = prog.filter_header_height >= prog.header_height - && prog.last_synced_filter_height >= prog.filter_header_height; - if headers_done && filters_complete { - dash_spv_ffi_sync_progress_destroy(prog_ptr); - break; - } - dash_spv_ffi_sync_progress_destroy(prog_ptr); - } - thread::sleep(Duration::from_millis(300)); + // Set up progress callback + let progress_callback = FFIProgressCallback { + on_progress: Some(on_progress_update), + user_data: ptr::null_mut(), + }; + + let rc = dash_spv_ffi_client_set_progress_callback(client, progress_callback); + if rc != FFIErrorCode::Success as i32 { + eprintln!( + "Failed to set progress callback: {}", + ffi_string_to_rust(dash_spv_ffi_get_last_error()) + ); + std::process::exit(1); } + println!("Event and progress callbacks configured, starting sync..."); + + // Run client - starts sync in background and returns immediately + let rc = dash_spv_ffi_client_run(client); + if rc != FFIErrorCode::Success as i32 { + eprintln!("Client run failed: {}", ffi_string_to_rust(dash_spv_ffi_get_last_error())); + std::process::exit(1); + } + + println!("Client running. 
Press Ctrl+C to shutdown..."); + + // Wait for Ctrl+C signal using tokio + tokio::runtime::Runtime::new() + .expect("Failed to create tokio runtime") + .block_on(tokio::signal::ctrl_c()) + .expect("Failed to listen for Ctrl+C"); + + println!("Shutting down..."); + // Cleanup dash_spv_ffi_client_stop(client); dash_spv_ffi_client_destroy(client); dash_spv_ffi_config_destroy(cfg); + + println!("Done."); } } diff --git a/dash-spv-ffi/src/callbacks.rs b/dash-spv-ffi/src/callbacks.rs index dbc85dc8c..10b17bbb4 100644 --- a/dash-spv-ffi/src/callbacks.rs +++ b/dash-spv-ffi/src/callbacks.rs @@ -1,385 +1,624 @@ +//! FFI callback types for event notifications. +//! +//! This module provides several callback structs, each with one callback per event variant: +//! - `FFIProgressCallback` - Sync progress updates +//! - `FFISyncEventCallbacks` - Sync coordinator events +//! - `FFINetworkEventCallbacks` - Network manager events +//! - `FFIWalletEventCallbacks` - Wallet manager events + +use crate::{dash_spv_ffi_manager_sync_progress_destroy, FFISyncProgress}; use dashcore::hashes::Hash; use std::ffi::CString; use std::os::raw::{c_char, c_void}; -pub type ProgressCallback = - extern "C" fn(progress: f64, message: *const c_char, user_data: *mut c_void); -pub type CompletionCallback = - extern "C" fn(success: bool, error: *const c_char, user_data: *mut c_void); -pub type DataCallback = extern "C" fn(data: *const c_void, len: usize, user_data: *mut c_void); +// ============================================================================ +// Sync Event Types (for FFISyncEventCallbacks) +// ============================================================================ +/// Identifies which sync manager generated an event. #[repr(C)] -pub struct FFICallbacks { - pub on_progress: Option, - pub on_completion: Option, - pub on_data: Option, +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum FFIManagerId { + Headers = 0, + FilterHeaders = 1, + Filters = 2, + Blocks = 3, + Masternodes = 4, + ChainLocks = 5, + InstantSend = 6, +} + +impl From for FFIManagerId { + fn from(id: dash_spv::sync::ManagerIdentifier) -> Self { + match id { + dash_spv::sync::ManagerIdentifier::BlockHeader => FFIManagerId::Headers, + dash_spv::sync::ManagerIdentifier::FilterHeader => FFIManagerId::FilterHeaders, + dash_spv::sync::ManagerIdentifier::Filter => FFIManagerId::Filters, + dash_spv::sync::ManagerIdentifier::Block => FFIManagerId::Blocks, + dash_spv::sync::ManagerIdentifier::Masternode => FFIManagerId::Masternodes, + dash_spv::sync::ManagerIdentifier::ChainLock => FFIManagerId::ChainLocks, + dash_spv::sync::ManagerIdentifier::InstantSend => FFIManagerId::InstantSend, + } + } +} + +// ============================================================================ +// Progress Callback +// ============================================================================ + +/// Callback for sync progress updates. +/// +/// Called whenever the sync progress changes. The progress pointer is only +/// valid for the duration of the callback. The caller must NOT free the +/// progress pointer - it will be freed automatically after the callback returns. +pub type OnProgressUpdateCallback = + Option; + +/// Progress callback configuration. +#[repr(C)] +pub struct FFIProgressCallback { + /// Callback function for progress updates. + pub on_progress: OnProgressUpdateCallback, + /// User data passed to the callback. pub user_data: *mut c_void, } -/// # Safety -/// FFICallbacks is only Send if all callback functions and user_data are thread-safe. 
-/// The caller must ensure that: -/// - All callback functions can be safely called from any thread -/// - The user_data pointer points to thread-safe data or is properly synchronized -unsafe impl Send for FFICallbacks {} - -/// # Safety -/// FFICallbacks is only Sync if all callback functions and user_data are thread-safe. -/// The caller must ensure that: -/// - All callback functions can be safely called concurrently from multiple threads -/// - The user_data pointer points to thread-safe data or is properly synchronized -unsafe impl Sync for FFICallbacks {} - -impl Default for FFICallbacks { +unsafe impl Send for FFIProgressCallback {} +unsafe impl Sync for FFIProgressCallback {} + +impl Default for FFIProgressCallback { fn default() -> Self { - FFICallbacks { + Self { on_progress: None, - on_completion: None, - on_data: None, user_data: std::ptr::null_mut(), } } } -impl FFICallbacks { - /// Call the progress callback with a progress value and message. +impl FFIProgressCallback { + /// Dispatch a progress update to the callback. /// - /// # Safety - /// The string pointer passed to the callback is only valid for the duration of the callback. - /// The C code MUST NOT store or use this pointer after the callback returns. - pub fn call_progress(&self, progress: f64, message: &str) { - if let Some(callback) = self.on_progress { - let c_message = CString::new(message).unwrap_or_else(|_| CString::new("").unwrap()); - callback(progress, c_message.as_ptr(), self.user_data); - } - } + /// Creates an FFISyncProgress from the Rust progress, calls the callback, + /// then cleans up all allocated memory. + pub fn dispatch(&self, progress: &dash_spv::sync::SyncProgress) { + if let Some(cb) = self.on_progress { + // Clone the progress to get an owned SyncProgress for conversion + let owned_progress = progress.clone(); + let ffi_progress = Box::new(FFISyncProgress::from(owned_progress)); + let ptr = Box::into_raw(ffi_progress); - /// Call the completion callback with success status and optional error message. - /// - /// # Safety - /// The string pointer passed to the callback is only valid for the duration of the callback. - /// The C code MUST NOT store or use this pointer after the callback returns. - pub fn call_completion(&self, success: bool, error: Option<&str>) { - if let Some(callback) = self.on_completion { - let c_error = error - .map(|e| CString::new(e).unwrap_or_else(|_| CString::new("").unwrap())) - .unwrap_or_else(|| CString::new("").unwrap()); - callback(success, c_error.as_ptr(), self.user_data); - } - } + // Call the callback + cb(ptr as *const FFISyncProgress, self.user_data); - /// Call the data callback with raw byte data. - /// - /// # Safety - /// The data pointer passed to the callback is only valid for the duration of the callback. - /// The C code MUST NOT store or use this pointer after the callback returns. 
-    pub fn call_data(&self, data: &[u8]) {
-        if let Some(callback) = self.on_data {
-            callback(data.as_ptr() as *const c_void, data.len(), self.user_data);
         }
     }
 }
 
-pub type BlockCallback =
-    Option<extern "C" fn(height: u32, hash: *const [u8; 32], user_data: *mut c_void)>;
-pub type TransactionCallback = Option<
-    extern "C" fn(
-        txid: *const [u8; 32],
-        confirmed: bool,
-        amount: i64,
-        addresses: *const c_char,
-        block_height: u32,
-        user_data: *mut c_void,
-    ),
->;
-pub type BalanceCallback =
-    Option<extern "C" fn(confirmed: u64, unconfirmed: u64, user_data: *mut c_void)>;
-pub type MempoolTransactionCallback = Option<
-    extern "C" fn(
-        txid: *const [u8; 32],
-        amount: i64,
-        addresses: *const c_char,
-        is_instant_send: bool,
-        user_data: *mut c_void,
-    ),
+// ============================================================================
+// FFISyncEventCallbacks - One callback per SyncEvent variant
+// ============================================================================
+
+/// Callback for SyncEvent::SyncStart
+pub type OnSyncStartCallback =
+    Option<extern "C" fn(manager: FFIManagerId, user_data: *mut c_void)>;
+
+/// Callback for SyncEvent::BlockHeadersStored
+pub type OnBlockHeadersStoredCallback =
+    Option<extern "C" fn(tip_height: u32, user_data: *mut c_void)>;
+
+/// Callback for SyncEvent::BlockHeaderSyncComplete
+pub type OnBlockHeaderSyncCompleteCallback =
+    Option<extern "C" fn(tip_height: u32, user_data: *mut c_void)>;
+
+/// Callback for SyncEvent::FilterHeadersStored
+pub type OnFilterHeadersStoredCallback = Option<
+    extern "C" fn(start_height: u32, end_height: u32, tip_height: u32, user_data: *mut c_void),
 >;
-pub type MempoolConfirmedCallback = Option<
+
+/// Callback for SyncEvent::FilterHeadersSyncComplete
+pub type OnFilterHeadersSyncCompleteCallback =
+    Option<extern "C" fn(tip_height: u32, user_data: *mut c_void)>;
+
+/// Callback for SyncEvent::FiltersStored
+pub type OnFiltersStoredCallback =
+    Option<extern "C" fn(start_height: u32, end_height: u32, user_data: *mut c_void)>;
+
+/// Callback for SyncEvent::FiltersSyncComplete
+pub type OnFiltersSyncCompleteCallback =
+    Option<extern "C" fn(tip_height: u32, user_data: *mut c_void)>;
+
+/// A block that needs to be downloaded (height + hash).
+#[repr(C)]
+#[derive(Debug, Clone, Copy)]
+pub struct FFIBlockNeeded {
+    /// Block height
+    pub height: u32,
+    /// Block hash (32 bytes)
+    pub hash: [u8; 32],
+}
+
+/// Callback for SyncEvent::BlocksNeeded
+///
+/// The `blocks` pointer points to an array of `FFIBlockNeeded` structs.
+/// The pointer is borrowed and only valid for the duration of the callback.
+/// Callers must memcpy/duplicate any data they need to retain after the
+/// callback returns.
+pub type OnBlocksNeededCallback =
+    Option<extern "C" fn(blocks: *const FFIBlockNeeded, count: u32, user_data: *mut c_void)>;
+
+/// Callback for SyncEvent::BlockProcessed
+///
+/// The `hash` pointer is borrowed and only valid for the duration of the
+/// callback. Callers must memcpy/duplicate it to retain the value after
+/// the callback returns.
+pub type OnBlockProcessedCallback = Option<
     extern "C" fn(
-        txid: *const [u8; 32],
-        block_height: u32,
-        block_hash: *const [u8; 32],
+        height: u32,
+        hash: *const [u8; 32],
+        new_address_count: u32,
         user_data: *mut c_void,
     ),
 >;
-pub type MempoolRemovedCallback =
-    Option<extern "C" fn(txid: *const [u8; 32], reason: u8, user_data: *mut c_void)>;
-pub type CompactFilterMatchedCallback = Option<
+
+/// Callback for SyncEvent::MasternodeStateUpdated
+pub type OnMasternodeStateUpdatedCallback =
+    Option<extern "C" fn(height: u32, user_data: *mut c_void)>;
+
+/// Callback for SyncEvent::ChainLockReceived
+///
+/// The `hash` and `signature` pointers are borrowed and only valid for the
+/// duration of the callback. Callers must memcpy/duplicate them to retain
+/// the values after the callback returns. 
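+/// (For context, and not stated elsewhere in this patch: the 96-byte value is
+/// the BLS threshold signature produced by the LLMQ quorum that locked the
+/// block.)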
+pub type OnChainLockReceivedCallback = Option<
     extern "C" fn(
-        block_hash: *const [u8; 32],
-        matched_scripts: *const c_char,
-        wallet_id: *const c_char,
+        height: u32,
+        hash: *const [u8; 32],
+        signature: *const [u8; 96],
+        validated: bool,
         user_data: *mut c_void,
     ),
 >;
-pub type WalletTransactionCallback = Option<
+
+/// Callback for SyncEvent::InstantLockReceived
+///
+/// The `txid` pointer is borrowed and only valid for the duration of the callback.
+/// The `instantlock_data` pointer points to the consensus-serialized InstantLock
+/// bytes and is only valid for the duration of the callback.
+/// Callers must memcpy/duplicate any data they need to retain.
+pub type OnInstantLockReceivedCallback = Option<
     extern "C" fn(
-        wallet_id: *const c_char,
-        account_index: u32,
         txid: *const [u8; 32],
-        confirmed: bool,
-        amount: i64,
-        addresses: *const c_char,
-        block_height: u32,
-        is_ours: bool,
+        instantlock_data: *const u8,
+        instantlock_len: usize,
+        validated: bool,
         user_data: *mut c_void,
     ),
 >;
 
+/// Callback for SyncEvent::ManagerError
+///
+/// The `error` string pointer is borrowed and only valid for the duration
+/// of the callback. Callers must copy the string if they need to retain it
+/// after the callback returns.
+pub type OnManagerErrorCallback =
+    Option<extern "C" fn(manager: FFIManagerId, error: *const c_char, user_data: *mut c_void)>;
+
+/// Callback for SyncEvent::SyncComplete
+pub type OnSyncCompleteCallback =
+    Option<extern "C" fn(header_tip: u32, user_data: *mut c_void)>;
+
+/// Sync event callbacks - one callback per SyncEvent variant.
+///
+/// Set only the callbacks you're interested in; unset callbacks will be ignored.
+///
+/// All pointer parameters passed to callbacks (strings, hashes, arrays) are
+/// borrowed and only valid for the duration of the callback invocation.
+/// Callers must memcpy/duplicate any data they need to retain.
 #[repr(C)]
-pub struct FFIEventCallbacks {
-    pub on_block: BlockCallback,
-    pub on_transaction: TransactionCallback,
-    pub on_balance_update: BalanceCallback,
-    pub on_mempool_transaction_added: MempoolTransactionCallback,
-    pub on_mempool_transaction_confirmed: MempoolConfirmedCallback,
-    pub on_mempool_transaction_removed: MempoolRemovedCallback,
-    pub on_compact_filter_matched: CompactFilterMatchedCallback,
-    pub on_wallet_transaction: WalletTransactionCallback,
+pub struct FFISyncEventCallbacks {
+    pub on_sync_start: OnSyncStartCallback,
+    pub on_block_headers_stored: OnBlockHeadersStoredCallback,
+    pub on_block_header_sync_complete: OnBlockHeaderSyncCompleteCallback,
+    pub on_filter_headers_stored: OnFilterHeadersStoredCallback,
+    pub on_filter_headers_sync_complete: OnFilterHeadersSyncCompleteCallback,
+    pub on_filters_stored: OnFiltersStoredCallback,
+    pub on_filters_sync_complete: OnFiltersSyncCompleteCallback,
+    pub on_blocks_needed: OnBlocksNeededCallback,
+    pub on_block_processed: OnBlockProcessedCallback,
+    pub on_masternode_state_updated: OnMasternodeStateUpdatedCallback,
+    pub on_chainlock_received: OnChainLockReceivedCallback,
+    pub on_instantlock_received: OnInstantLockReceivedCallback,
+    pub on_manager_error: OnManagerErrorCallback,
+    pub on_sync_complete: OnSyncCompleteCallback,
     pub user_data: *mut c_void,
 }
 
-// SAFETY: FFIEventCallbacks is safe to send between threads because:
-// 1. All callback function pointers are extern "C" functions which have no captured state
-// 2. The user_data raw pointer is treated as opaque data that must be managed by the caller
-// 3. The caller is responsible for ensuring that user_data points to thread-safe memory
-// 4. 
All callback invocations happen through the FFI boundary where the caller manages synchronization -unsafe impl Send for FFIEventCallbacks {} - -// SAFETY: FFIEventCallbacks is safe to share between threads because: -// 1. The struct is immutable after construction (all fields are read-only from Rust's perspective) -// 2. Function pointers themselves are inherently thread-safe as they don't contain mutable state -// 3. The user_data pointer is never dereferenced by Rust code, only passed through to callbacks -// 4. Thread safety of the data pointed to by user_data is the responsibility of the FFI caller -unsafe impl Sync for FFIEventCallbacks {} - -impl Default for FFIEventCallbacks { +// SAFETY: FFISyncEventCallbacks is safe to send between threads because: +// 1. All callback function pointers are extern "C" functions with no captured state +// 2. The user_data pointer is treated as opaque and managed by the caller +// 3. The caller is responsible for ensuring user_data points to thread-safe memory +unsafe impl Send for FFISyncEventCallbacks {} + +// SAFETY: FFISyncEventCallbacks is safe to share between threads because: +// 1. The struct is immutable after construction +// 2. Function pointers are inherently thread-safe +// 3. Thread safety of user_data is the caller's responsibility +unsafe impl Sync for FFISyncEventCallbacks {} + +impl Default for FFISyncEventCallbacks { fn default() -> Self { - FFIEventCallbacks { - on_block: None, - on_transaction: None, - on_balance_update: None, - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_compact_filter_matched: None, - on_wallet_transaction: None, + Self { + on_sync_start: None, + on_block_headers_stored: None, + on_block_header_sync_complete: None, + on_filter_headers_stored: None, + on_filter_headers_sync_complete: None, + on_filters_stored: None, + on_filters_sync_complete: None, + on_blocks_needed: None, + on_block_processed: None, + on_masternode_state_updated: None, + on_chainlock_received: None, + on_instantlock_received: None, + on_manager_error: None, + on_sync_complete: None, user_data: std::ptr::null_mut(), } } } -impl FFIEventCallbacks { - pub fn call_block(&self, height: u32, hash: &dashcore::BlockHash) { - if let Some(callback) = self.on_block { - tracing::info!("🎯 Calling block callback: height={}, hash={}", height, hash); - let hash_bytes = hash.as_byte_array(); - callback(height, hash_bytes.as_ptr() as *const [u8; 32], self.user_data); - tracing::info!("✅ Block callback completed"); - } else { - tracing::warn!("⚠️ Block callback not set"); +impl FFISyncEventCallbacks { + /// Dispatch a SyncEvent to the appropriate callback. 
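+    ///
+    /// A minimal sketch of the intended wiring (the handler name is
+    /// illustrative, not part of the API): register one handler, leave the
+    /// rest `None`, and `dispatch` routes each event to its callback or
+    /// silently skips it.
+    ///
+    /// ```ignore
+    /// extern "C" fn on_tip(tip_height: u32, _user_data: *mut c_void) {
+    ///     println!("header sync complete at height {}", tip_height);
+    /// }
+    ///
+    /// let callbacks = FFISyncEventCallbacks {
+    ///     on_block_header_sync_complete: Some(on_tip),
+    ///     ..FFISyncEventCallbacks::default()
+    /// };
+    /// ```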
+    pub fn dispatch(&self, event: &dash_spv::sync::SyncEvent) {
+        use dash_spv::sync::SyncEvent;
+
+        match event {
+            SyncEvent::SyncStart {
+                identifier,
+            } => {
+                if let Some(cb) = self.on_sync_start {
+                    cb((*identifier).into(), self.user_data);
+                }
+            }
+            SyncEvent::BlockHeadersStored {
+                tip_height,
+            } => {
+                if let Some(cb) = self.on_block_headers_stored {
+                    cb(*tip_height, self.user_data);
+                }
+            }
+            SyncEvent::BlockHeaderSyncComplete {
+                tip_height,
+            } => {
+                if let Some(cb) = self.on_block_header_sync_complete {
+                    cb(*tip_height, self.user_data);
+                }
+            }
+            SyncEvent::FilterHeadersStored {
+                start_height,
+                end_height,
+                tip_height,
+            } => {
+                if let Some(cb) = self.on_filter_headers_stored {
+                    cb(*start_height, *end_height, *tip_height, self.user_data);
+                }
+            }
+            SyncEvent::FilterHeadersSyncComplete {
+                tip_height,
+            } => {
+                if let Some(cb) = self.on_filter_headers_sync_complete {
+                    cb(*tip_height, self.user_data);
+                }
+            }
+            SyncEvent::FiltersStored {
+                start_height,
+                end_height,
+            } => {
+                if let Some(cb) = self.on_filters_stored {
+                    cb(*start_height, *end_height, self.user_data);
+                }
+            }
+            SyncEvent::FiltersSyncComplete {
+                tip_height,
+            } => {
+                if let Some(cb) = self.on_filters_sync_complete {
+                    cb(*tip_height, self.user_data);
+                }
+            }
+            SyncEvent::BlocksNeeded {
+                blocks,
+            } => {
+                if let Some(cb) = self.on_blocks_needed {
+                    let ffi_blocks: Vec<FFIBlockNeeded> = blocks
+                        .iter()
+                        .map(|key| FFIBlockNeeded {
+                            height: key.height(),
+                            hash: *key.hash().as_byte_array(),
+                        })
+                        .collect();
+                    cb(ffi_blocks.as_ptr(), ffi_blocks.len() as u32, self.user_data);
+                }
+            }
+            SyncEvent::BlockProcessed {
+                block_hash,
+                height,
+                new_addresses,
+            } => {
+                if let Some(cb) = self.on_block_processed {
+                    let hash_bytes = block_hash.as_byte_array();
+                    cb(
+                        *height,
+                        hash_bytes as *const [u8; 32],
+                        new_addresses.len() as u32,
+                        self.user_data,
+                    );
+                }
+            }
+            SyncEvent::MasternodeStateUpdated {
+                height,
+            } => {
+                if let Some(cb) = self.on_masternode_state_updated {
+                    cb(*height, self.user_data);
+                }
+            }
+            SyncEvent::ChainLockReceived {
+                chain_lock,
+                validated,
+            } => {
+                if let Some(cb) = self.on_chainlock_received {
+                    let hash_bytes = chain_lock.block_hash.as_byte_array();
+                    let sig_bytes = chain_lock.signature.as_bytes();
+                    cb(
+                        chain_lock.block_height,
+                        hash_bytes as *const [u8; 32],
+                        sig_bytes as *const [u8; 96],
+                        *validated,
+                        self.user_data,
+                    );
+                }
+            }
+            SyncEvent::InstantLockReceived {
+                instant_lock,
+                validated,
+            } => {
+                if let Some(cb) = self.on_instantlock_received {
+                    let txid_bytes = instant_lock.txid.as_byte_array();
+                    let serialized = dashcore::consensus::serialize(instant_lock);
+                    cb(
+                        txid_bytes as *const [u8; 32],
+                        serialized.as_ptr(),
+                        serialized.len(),
+                        *validated,
+                        self.user_data,
+                    );
+                }
+            }
+            SyncEvent::ManagerError {
+                manager,
+                error,
+            } => {
+                if let Some(cb) = self.on_manager_error {
+                    let c_error = CString::new(error.as_str()).unwrap_or_default();
+                    cb((*manager).into(), c_error.as_ptr(), self.user_data);
+                }
+            }
+            SyncEvent::SyncComplete {
+                header_tip,
+            } => {
+                if let Some(cb) = self.on_sync_complete {
+                    cb(*header_tip, self.user_data);
+                }
+            }
         }
     }
+}
 
-    pub fn call_transaction(
-        &self,
-        txid: &dashcore::Txid,
-        confirmed: bool,
-        amount: i64,
-        addresses: &[String],
-        block_height: Option<u32>,
-    ) {
-        if let Some(callback) = self.on_transaction {
-            tracing::info!(
-                "🎯 Calling transaction callback: txid={}, confirmed={}, amount={}, addresses={:?}",
-                txid,
-                confirmed,
-                amount,
-                addresses
-            );
-            let txid_bytes = txid.as_byte_array();
-            let addresses_str = addresses.join(",");
addresses.join(","); - let c_addresses = - CString::new(addresses_str).unwrap_or_else(|_| CString::new("").unwrap()); - callback( - txid_bytes.as_ptr() as *const [u8; 32], - confirmed, - amount, - c_addresses.as_ptr(), - block_height.unwrap_or(0), - self.user_data, - ); - tracing::info!("✅ Transaction callback completed"); - } else { - tracing::warn!("⚠️ Transaction callback not set"); +// ============================================================================ +// FFINetworkEventCallbacks - One callback per NetworkEvent variant +// ============================================================================ + +/// Callback for NetworkEvent::PeerConnected +/// +/// The `address` string pointer is borrowed and only valid for the duration +/// of the callback. Callers must copy the string if they need to retain it +/// after the callback returns. +pub type OnPeerConnectedCallback = + Option; + +/// Callback for NetworkEvent::PeerDisconnected +/// +/// The `address` string pointer is borrowed and only valid for the duration +/// of the callback. Callers must copy the string if they need to retain it +/// after the callback returns. +pub type OnPeerDisconnectedCallback = + Option; + +/// Callback for NetworkEvent::PeersUpdated +pub type OnPeersUpdatedCallback = + Option; + +/// Network event callbacks - one callback per NetworkEvent variant. +/// +/// Set only the callbacks you're interested in; unset callbacks will be ignored. +/// +/// All pointer parameters passed to callbacks (strings, addresses) are +/// borrowed and only valid for the duration of the callback invocation. +/// Callers must copy any data they need to retain. +#[repr(C)] +pub struct FFINetworkEventCallbacks { + pub on_peer_connected: OnPeerConnectedCallback, + pub on_peer_disconnected: OnPeerDisconnectedCallback, + pub on_peers_updated: OnPeersUpdatedCallback, + pub user_data: *mut c_void, +} + +// SAFETY: Same rationale as FFISyncEventCallbacks +unsafe impl Send for FFINetworkEventCallbacks {} +unsafe impl Sync for FFINetworkEventCallbacks {} + +impl Default for FFINetworkEventCallbacks { + fn default() -> Self { + Self { + on_peer_connected: None, + on_peer_disconnected: None, + on_peers_updated: None, + user_data: std::ptr::null_mut(), } } +} + +impl FFINetworkEventCallbacks { + /// Dispatch a NetworkEvent to the appropriate callback. + pub fn dispatch(&self, event: &dash_spv::network::NetworkEvent) { + use dash_spv::network::NetworkEvent; - pub fn call_balance_update(&self, confirmed: u64, unconfirmed: u64) { - if let Some(callback) = self.on_balance_update { - tracing::info!( - "🎯 Calling balance update callback: confirmed={}, unconfirmed={}", - confirmed, - unconfirmed - ); - callback(confirmed, unconfirmed, self.user_data); - tracing::info!("✅ Balance update callback completed"); - } else { - tracing::warn!("⚠️ Balance update callback not set"); + match event { + NetworkEvent::PeerConnected { + address, + } => { + if let Some(cb) = self.on_peer_connected { + let c_addr = CString::new(address.to_string()).unwrap_or_default(); + cb(c_addr.as_ptr(), self.user_data); + } + } + NetworkEvent::PeerDisconnected { + address, + } => { + if let Some(cb) = self.on_peer_disconnected { + let c_addr = CString::new(address.to_string()).unwrap_or_default(); + cb(c_addr.as_ptr(), self.user_data); + } + } + NetworkEvent::PeersUpdated { + connected_count, + best_height, + .. 
+ } => { + if let Some(cb) = self.on_peers_updated { + cb(*connected_count as u32, best_height.unwrap_or(0), self.user_data); + } + } } } +} + +// ============================================================================ +// FFIWalletEventCallbacks - One callback per WalletEvent variant +// ============================================================================ - // Mempool callbacks use debug level for "not set" messages as they are optional and frequently unused - pub fn call_mempool_transaction_added( - &self, - txid: &dashcore::Txid, +/// Callback for WalletEvent::TransactionReceived +/// +/// The `wallet_id`, `addresses` string pointers and the `txid` hash pointer +/// are borrowed and only valid for the duration of the callback. Callers must +/// copy any data they need to retain after the callback returns. +pub type OnTransactionReceivedCallback = Option< + extern "C" fn( + wallet_id: *const c_char, + account_index: u32, + txid: *const [u8; 32], amount: i64, - addresses: &[String], - is_instant_send: bool, - ) { - if let Some(callback) = self.on_mempool_transaction_added { - tracing::info!("🎯 Calling mempool transaction added callback: txid={}, amount={}, is_instant_send={}", - txid, amount, is_instant_send); - let txid_bytes = txid.as_byte_array(); - let addresses_str = addresses.join(","); - let c_addresses = - CString::new(addresses_str).unwrap_or_else(|_| CString::new("").unwrap()); - callback( - txid_bytes.as_ptr() as *const [u8; 32], - amount, - c_addresses.as_ptr(), - is_instant_send, - self.user_data, - ); - tracing::info!("✅ Mempool transaction added callback completed"); - } else { - tracing::debug!("Mempool transaction added callback not set"); - } - } + addresses: *const c_char, + user_data: *mut c_void, + ), +>; - pub fn call_mempool_transaction_confirmed( - &self, - txid: &dashcore::Txid, - block_height: u32, - block_hash: &dashcore::BlockHash, - ) { - if let Some(callback) = self.on_mempool_transaction_confirmed { - tracing::info!( - "🎯 Calling mempool transaction confirmed callback: txid={}, height={}, hash={}", - txid, - block_height, - block_hash - ); - let txid_bytes = txid.as_byte_array(); - let hash_bytes = block_hash.as_byte_array(); - callback( - txid_bytes.as_ptr() as *const [u8; 32], - block_height, - hash_bytes.as_ptr() as *const [u8; 32], - self.user_data, - ); - tracing::info!("✅ Mempool transaction confirmed callback completed"); - } else { - tracing::debug!("Mempool transaction confirmed callback not set"); - } - } +/// Callback for WalletEvent::BalanceUpdated +/// +/// The `wallet_id` string pointer is borrowed and only valid for the duration +/// of the callback. Callers must copy the string if they need to retain it +/// after the callback returns. +pub type OnBalanceUpdatedCallback = Option< + extern "C" fn( + wallet_id: *const c_char, + spendable: u64, + unconfirmed: u64, + immature: u64, + locked: u64, + user_data: *mut c_void, + ), +>; - pub fn call_mempool_transaction_removed(&self, txid: &dashcore::Txid, reason: u8) { - if let Some(callback) = self.on_mempool_transaction_removed { - tracing::info!( - "🎯 Calling mempool transaction removed callback: txid={}, reason={}", - txid, - reason - ); - let txid_bytes = txid.as_byte_array(); - callback(txid_bytes.as_ptr() as *const [u8; 32], reason, self.user_data); - tracing::info!("✅ Mempool transaction removed callback completed"); - } else { - tracing::debug!("Mempool transaction removed callback not set"); - } - } +/// Wallet event callbacks - one callback per WalletEvent variant. 
+/// +/// Set only the callbacks you're interested in; unset callbacks will be ignored. +/// +/// All pointer parameters passed to callbacks (wallet IDs, txids, addresses) +/// are borrowed and only valid for the duration of the callback invocation. +/// Callers must copy any data they need to retain. +#[repr(C)] +pub struct FFIWalletEventCallbacks { + pub on_transaction_received: OnTransactionReceivedCallback, + pub on_balance_updated: OnBalanceUpdatedCallback, + pub user_data: *mut c_void, +} - pub fn call_compact_filter_matched( - &self, - block_hash: &dashcore::BlockHash, - matched_scripts: &[String], - wallet_id: &str, - ) { - if let Some(callback) = self.on_compact_filter_matched { - tracing::info!( - "🎯 Calling compact filter matched callback: block={}, scripts={:?}, wallet={}", - block_hash, - matched_scripts, - wallet_id - ); - let hash_bytes = block_hash.as_byte_array(); - let scripts_str = matched_scripts.join(","); - let c_scripts = CString::new(scripts_str).unwrap_or_else(|_| CString::new("").unwrap()); - let c_wallet_id = CString::new(wallet_id).unwrap_or_else(|_| CString::new("").unwrap()); - - callback( - hash_bytes.as_ptr() as *const [u8; 32], - c_scripts.as_ptr(), - c_wallet_id.as_ptr(), - self.user_data, - ); - tracing::info!("✅ Compact filter matched callback completed"); - } else { - tracing::debug!("Compact filter matched callback not set"); +// SAFETY: Same rationale as FFISyncEventCallbacks +unsafe impl Send for FFIWalletEventCallbacks {} +unsafe impl Sync for FFIWalletEventCallbacks {} + +impl Default for FFIWalletEventCallbacks { + fn default() -> Self { + Self { + on_transaction_received: None, + on_balance_updated: None, + user_data: std::ptr::null_mut(), } } +} - #[allow(clippy::too_many_arguments)] - pub fn call_wallet_transaction( - &self, - wallet_id: &str, - account_index: u32, - txid: &dashcore::Txid, - confirmed: bool, - amount: i64, - addresses: &[String], - block_height: u32, - is_ours: bool, - ) { - if let Some(callback) = self.on_wallet_transaction { - tracing::info!( - "🎯 Calling wallet transaction callback: wallet={}, account={}, txid={}, confirmed={}, amount={}, is_ours={}", +impl FFIWalletEventCallbacks { + /// Dispatch a WalletEvent to the appropriate callback. 
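+    ///
+    /// A minimal sketch (the handler name is illustrative): the wallet id
+    /// arrives as a hex-encoded, NUL-terminated C string that must be copied
+    /// if it is needed after the callback returns.
+    ///
+    /// ```ignore
+    /// extern "C" fn on_balance(
+    ///     _wallet_id: *const c_char,
+    ///     spendable: u64,
+    ///     unconfirmed: u64,
+    ///     immature: u64,
+    ///     locked: u64,
+    ///     _user_data: *mut c_void,
+    /// ) {
+    ///     println!("balance: {spendable}/{unconfirmed}/{immature}/{locked}");
+    /// }
+    ///
+    /// let callbacks = FFIWalletEventCallbacks {
+    ///     on_balance_updated: Some(on_balance),
+    ///     ..FFIWalletEventCallbacks::default()
+    /// };
+    /// ```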
+    pub fn dispatch(&self, event: &key_wallet_manager::WalletEvent) {
+        use key_wallet_manager::WalletEvent;
+
+        match event {
+            WalletEvent::TransactionReceived {
                 wallet_id,
                 account_index,
                 txid,
-                confirmed,
                 amount,
-                is_ours
-            );
-            let txid_bytes = txid.as_byte_array();
-            let addresses_str = addresses.join(",");
-            let c_addresses =
-                CString::new(addresses_str).unwrap_or_else(|_| CString::new("").unwrap());
-            let c_wallet_id = CString::new(wallet_id).unwrap_or_else(|_| CString::new("").unwrap());
-
-            callback(
-                c_wallet_id.as_ptr(),
-                account_index,
-                txid_bytes.as_ptr() as *const [u8; 32],
-                confirmed,
-                amount,
-                c_addresses.as_ptr(),
-                block_height,
-                is_ours,
-                self.user_data,
-            );
-            tracing::info!("✅ Wallet transaction callback completed");
-        } else {
-            tracing::debug!("Wallet transaction callback not set");
+                addresses,
+            } => {
+                if let Some(cb) = self.on_transaction_received {
+                    let wallet_id_hex = hex::encode(wallet_id);
+                    let c_wallet_id = CString::new(wallet_id_hex).unwrap_or_default();
+                    let txid_bytes = txid.as_byte_array();
+                    let addresses_str: Vec<String> =
+                        addresses.iter().map(|a| a.to_string()).collect();
+                    let c_addresses = CString::new(addresses_str.join(",")).unwrap_or_default();
+                    cb(
+                        c_wallet_id.as_ptr(),
+                        *account_index,
+                        txid_bytes as *const [u8; 32],
+                        *amount,
+                        c_addresses.as_ptr(),
+                        self.user_data,
+                    );
+                }
+            }
+            WalletEvent::BalanceUpdated {
+                wallet_id,
+                spendable,
+                unconfirmed,
+                immature,
+                locked,
+            } => {
+                if let Some(cb) = self.on_balance_updated {
+                    let wallet_id_hex = hex::encode(wallet_id);
+                    let c_wallet_id = CString::new(wallet_id_hex).unwrap_or_default();
+                    cb(
+                        c_wallet_id.as_ptr(),
+                        *spendable,
+                        *unconfirmed,
+                        *immature,
+                        *locked,
+                        self.user_data,
+                    );
+                }
+            }
         }
     }
 }
diff --git a/dash-spv-ffi/src/client.rs b/dash-spv-ffi/src/client.rs
index 3a3546903..4ea599898 100644
--- a/dash-spv-ffi/src/client.rs
+++ b/dash-spv-ffi/src/client.rs
@@ -1,101 +1,115 @@
 use crate::{
-    null_check, set_last_error, FFIClientConfig, FFIDetailedSyncProgress, FFIErrorCode,
-    FFIEventCallbacks, FFISyncProgress, FFIWalletManager,
+    null_check, set_last_error, FFIClientConfig, FFIErrorCode, FFINetworkEventCallbacks,
+    FFIProgressCallback, FFISyncEventCallbacks, FFISyncProgress, FFIWalletEventCallbacks,
+    FFIWalletManager,
 };
 
 // Import wallet types from key-wallet-ffi
 use key_wallet_ffi::FFIWalletManager as KeyWalletFFIWalletManager;
 
 use dash_spv::storage::DiskStorageManager;
-use dash_spv::types::SyncStage;
 use dash_spv::DashSpvClient;
 use dash_spv::Hash;
 use futures::future::{AbortHandle, Abortable};
-use once_cell::sync::Lazy;
-use std::collections::HashMap;
-use std::ffi::CString;
-use std::os::raw::{c_char, c_void};
-use std::sync::atomic::{AtomicU64, Ordering};
 use std::sync::{Arc, Mutex};
+use std::thread::JoinHandle;
 use std::time::Duration;
+use tokio::runtime::Handle;
 use tokio::runtime::Runtime;
-use tokio::sync::mpsc::{error::TryRecvError, UnboundedReceiver};
+use tokio::sync::{broadcast, watch};
 use tokio_util::sync::CancellationToken;
 
-/// Global callback registry for thread-safe callback management
-static CALLBACK_REGISTRY: Lazy<Arc<Mutex<CallbackRegistry>>> =
-    Lazy::new(|| Arc::new(Mutex::new(CallbackRegistry::new())));
-
-/// Atomic counter for generating unique callback IDs
-static CALLBACK_ID_COUNTER: AtomicU64 = AtomicU64::new(1);
-
-/// Thread-safe callback registry
-struct CallbackRegistry {
-    callbacks: HashMap<u64, CallbackInfo>,
-}
-
-/// Information stored for each callback
-enum CallbackInfo {
-    /// Detailed progress callbacks (used by sync_to_tip_with_progress)
-    Detailed {
-        progress_callback: Option<extern "C" fn(progress: *const FFIDetailedSyncProgress, user_data: *mut c_void)>,
-        completion_callback: Option<extern "C" fn(success: bool, error: *const c_char, user_data: *mut c_void)>,
-        user_data: *mut c_void,
-    },
-}
-
-/// # Safety
-///
-/// `CallbackInfo` is only `Send` if the following conditions are met:
-/// - All callback functions must be safe to call from any thread
-/// - The `user_data` pointer must either:
-///   - Point to thread-safe data (i.e., data that implements `Send`)
-///   - Be properly synchronized by the caller (e.g., using mutexes)
-///   - Be null
+/// Spawns a monitoring thread for broadcast-based events (sync, network, wallet).
 ///
-/// The caller is responsible for ensuring these conditions are met. Violating
-/// these requirements will result in undefined behavior.
-unsafe impl Send for CallbackInfo {}
+/// Returns a thread handle that monitors the receiver and dispatches events to callbacks.
+fn spawn_broadcast_monitor<E, C, F>(
+    name: &'static str,
+    receiver: broadcast::Receiver<E>,
+    callbacks: Arc<Mutex<Option<C>>>,
+    shutdown: CancellationToken,
+    rt: Handle,
+    dispatch_fn: F,
+) -> JoinHandle<()>
+where
+    E: Clone + Send + 'static,
+    C: Send + 'static,
+    F: Fn(&C, &E) + Send + 'static,
+{
+    let mut receiver = receiver;
+    std::thread::spawn(move || {
+        rt.block_on(async move {
+            tracing::debug!("{} monitoring thread started", name);
+            loop {
+                tokio::select! {
+                    result = receiver.recv() => {
+                        match result {
+                            Ok(event) => {
+                                let guard = callbacks.lock().unwrap();
+                                if let Some(ref cb) = *guard {
+                                    dispatch_fn(cb, &event);
+                                }
+                            }
+                            Err(broadcast::error::RecvError::Closed) => break,
+                            Err(broadcast::error::RecvError::Lagged(_)) => continue,
+                        }
+                    }
+                    _ = shutdown.cancelled() => break,
+                }
+            }
+            tracing::debug!("{} monitoring thread exiting", name);
+        });
+    })
+}
 
-/// # Safety
+/// Spawns a monitoring thread for watch-based progress updates.
 ///
-/// `CallbackInfo` is only `Sync` if the following conditions are met:
-/// - All callback functions must be safe to call concurrently from multiple threads
-/// - The `user_data` pointer must either:
-///   - Point to thread-safe data (i.e., data that implements `Sync`)
-///   - Be properly synchronized by the caller (e.g., using mutexes)
-///   - Be null
-///
-/// The caller is responsible for ensuring these conditions are met. Violating
-/// these requirements will result in undefined behavior.
-unsafe impl Sync for CallbackInfo {}
-
-impl CallbackRegistry {
-    fn new() -> Self {
-        Self {
-            callbacks: HashMap::new(),
-        }
-    }
-
-    fn register(&mut self, info: CallbackInfo) -> u64 {
-        let id = CALLBACK_ID_COUNTER.fetch_add(1, Ordering::Relaxed);
-        self.callbacks.insert(id, info);
-        id
-    }
-
-    fn get(&self, id: u64) -> Option<&CallbackInfo> {
-        self.callbacks.get(&id)
-    }
-
-    fn unregister(&mut self, id: u64) -> Option<CallbackInfo> {
-        self.callbacks.remove(&id)
-    }
-}
+/// Sends the initial progress value, then monitors for changes.
+fn spawn_progress_monitor<P, C, F>(
+    receiver: watch::Receiver<P>,
+    callbacks: Arc<Mutex<Option<C>>>,
+    shutdown: CancellationToken,
+    rt: Handle,
+    dispatch_fn: F,
+) -> JoinHandle<()>
+where
+    P: Clone + Send + Sync + 'static,
+    C: Send + 'static,
+    F: Fn(&C, &P) + Send + 'static,
+{
+    let mut receiver = receiver;
+    std::thread::spawn(move || {
+        rt.block_on(async move {
+            tracing::debug!("Progress monitoring thread started");
+
+            // Send initial progress
+            {
+                let progress = receiver.borrow().clone();
+                let guard = callbacks.lock().unwrap();
+                if let Some(ref cb) = *guard {
+                    dispatch_fn(cb, &progress);
+                }
+            }
 
-/// Sync callback data that uses callback IDs instead of raw pointers
-struct SyncCallbackData {
-    callback_id: u64,
-    _marker: std::marker::PhantomData<()>,
+            loop {
+                tokio::select! {
+                    result = receiver.changed() => {
+                        match result {
+                            Ok(()) => {
+                                let progress = receiver.borrow().clone();
+                                let guard = callbacks.lock().unwrap();
+                                if let Some(ref cb) = *guard {
+                                    dispatch_fn(cb, &progress);
+                                }
+                            }
+                            Err(_) => break,
+                        }
+                    }
+                    _ = shutdown.cancelled() => break,
+                }
+            }
+            tracing::debug!("Progress monitoring thread exiting");
+        });
+    })
 }
 
 /// FFIDashSpvClient structure
@@ -111,12 +125,12 @@ type SharedClient = Arc<Mutex<Option<DashSpvClient>>>;
 pub struct FFIDashSpvClient {
     pub(crate) inner: SharedClient,
     pub(crate) runtime: Arc<Runtime>,
-    event_callbacks: Arc<Mutex<FFIEventCallbacks>>,
     active_threads: Arc<Mutex<Vec<JoinHandle<()>>>>,
-    sync_callbacks: Arc<Mutex<Option<SyncCallbackData>>>,
    shutdown_token: CancellationToken,
-    // Stored event receiver for pull-based draining (no background thread by default)
-    event_rx: Arc<Mutex<Option<UnboundedReceiver<dash_spv::types::SpvEvent>>>>,
+    sync_event_callbacks: Arc<Mutex<Option<FFISyncEventCallbacks>>>,
+    network_event_callbacks: Arc<Mutex<Option<FFINetworkEventCallbacks>>>,
+    wallet_event_callbacks: Arc<Mutex<Option<FFIWalletEventCallbacks>>>,
+    progress_callback: Arc<Mutex<Option<FFIProgressCallback>>>,
 }
 
 /// Create a new SPV client and return an opaque pointer.
@@ -170,11 +184,12 @@ pub unsafe extern "C" fn dash_spv_ffi_client_new(
     let ffi_client = FFIDashSpvClient {
         inner: Arc::new(Mutex::new(Some(client))),
         runtime,
-        event_callbacks: Arc::new(Mutex::new(FFIEventCallbacks::default())),
         active_threads: Arc::new(Mutex::new(Vec::new())),
-        sync_callbacks: Arc::new(Mutex::new(None)),
         shutdown_token: CancellationToken::new(),
-        event_rx: Arc::new(Mutex::new(None)),
+        sync_event_callbacks: Arc::new(Mutex::new(None)),
+        network_event_callbacks: Arc::new(Mutex::new(None)),
+        wallet_event_callbacks: Arc::new(Mutex::new(None)),
+        progress_callback: Arc::new(Mutex::new(None)),
     };
     Box::into_raw(Box::new(ffi_client))
 }
@@ -207,165 +222,11 @@
             }
         }
     }
-
-    /// Drain pending events and invoke configured callbacks (non-blocking).
-    fn drain_events_internal(&self) {
-        let mut rx_guard = self.event_rx.lock().unwrap();
-        let Some(rx) = rx_guard.as_mut() else {
-            return;
-        };
-        let callbacks = self.event_callbacks.lock().unwrap();
-        // Prevent flooding the UI/main thread by limiting events per drain call.
-        // Remaining events stay queued and will be drained on the next tick.
-        let max_events_per_call: usize = 500;
-        let mut processed: usize = 0;
-        loop {
-            if processed >= max_events_per_call {
-                break;
-            }
-            match rx.try_recv() {
-                Ok(event) => match event {
-                    dash_spv::types::SpvEvent::BalanceUpdate {
-                        confirmed,
-                        unconfirmed,
-                        ..
-                    } => {
-                        callbacks.call_balance_update(confirmed, unconfirmed);
-                    }
-                    dash_spv::types::SpvEvent::TransactionDetected {
-                        ref txid,
-                        confirmed,
-                        ref addresses,
-                        amount,
-                        block_height,
-                        ..
-                    } => {
-                        if let Ok(txid_parsed) = txid.parse::<dashcore::Txid>() {
-                            callbacks.call_transaction(
-                                &txid_parsed,
-                                confirmed,
-                                amount,
-                                addresses,
-                                block_height,
-                            );
-                            let wallet_id_hex = "unknown";
-                            let account_index = 0;
-                            let block_height = block_height.unwrap_or(0);
-                            let is_ours = amount != 0;
-                            callbacks.call_wallet_transaction(
-                                wallet_id_hex,
-                                account_index,
-                                &txid_parsed,
-                                confirmed,
-                                amount,
-                                addresses,
-                                block_height,
-                                is_ours,
-                            );
-                        }
-                    }
-                    dash_spv::types::SpvEvent::BlockProcessed {
-                        height,
-                        ref hash,
-                        ..
-                    } => {
-                        if let Ok(hash_parsed) = hash.parse::<dashcore::BlockHash>() {
-                            callbacks.call_block(height, &hash_parsed);
-                        }
-                    }
-                    dash_spv::types::SpvEvent::SyncProgress {
-                        ..
-                    } => {}
-                    dash_spv::types::SpvEvent::ChainLockReceived {
-                        ..
-                    } => {}
-                    dash_spv::types::SpvEvent::InstantLockReceived {
-                        ..
-                    } => {
-                        // InstantLock received and validated
-                        // TODO: Add FFI callback if needed for instant lock notifications
-                    }
-                    dash_spv::types::SpvEvent::MempoolTransactionAdded {
-                        ref txid,
-                        amount,
-                        ref addresses,
-                        is_instant_send,
-                        ..
-                    } => {
-                        callbacks.call_mempool_transaction_added(
-                            txid,
-                            amount,
-                            addresses,
-                            is_instant_send,
-                        );
-                    }
-                    dash_spv::types::SpvEvent::MempoolTransactionConfirmed {
-                        ref txid,
-                        block_height,
-                        ref block_hash,
-                    } => {
-                        callbacks.call_mempool_transaction_confirmed(
-                            txid,
-                            block_height,
-                            block_hash,
-                        );
-                    }
-                    dash_spv::types::SpvEvent::MempoolTransactionRemoved {
-                        ref txid,
-                        ref reason,
-                    } => {
-                        let ffi_reason: crate::types::FFIMempoolRemovalReason =
-                            reason.clone().into();
-                        let reason_code = ffi_reason as u8;
-                        callbacks.call_mempool_transaction_removed(txid, reason_code);
-                    }
-                    dash_spv::types::SpvEvent::CompactFilterMatched {
-                        hash,
-                    } => {
-                        if let Ok(block_hash_parsed) = hash.parse::<dashcore::BlockHash>() {
-                            callbacks.call_compact_filter_matched(
-                                &block_hash_parsed,
-                                &[],
-                                "unknown",
-                            );
-                        }
-                    }
-                },
-                Err(TryRecvError::Empty) => break,
-                Err(TryRecvError::Disconnected) => {
-                    *rx_guard = None;
-                    break;
-                }
-            }
-            processed += 1;
-        }
-    }
-}
-
-/// Drain pending events and invoke configured callbacks (non-blocking).
-///
-/// # Safety
-/// - `client` must be a valid, non-null pointer.
-#[no_mangle]
-pub unsafe extern "C" fn dash_spv_ffi_client_drain_events(client: *mut FFIDashSpvClient) -> i32 {
-    null_check!(client);
-    let client = &*client;
-    client.drain_events_internal();
-    FFIErrorCode::Success as i32
 }
 
 fn stop_client_internal(client: &mut FFIDashSpvClient) -> Result<(), dash_spv::SpvError> {
     client.shutdown_token.cancel();
 
-    // Ensure callbacks are cleared so no further progress/completion notifications fire. 
- { - let mut cb_guard = client.sync_callbacks.lock().unwrap(); - if let Some(ref callback_data) = *cb_guard { - CALLBACK_REGISTRY.lock().unwrap().unregister(callback_data.callback_id); - } - *cb_guard = None; - } - client.join_active_threads(); let inner = client.inner.clone(); @@ -468,24 +329,7 @@ pub unsafe extern "C" fn dash_spv_ffi_client_start(client: *mut FFIDashSpvClient }); match result { - Ok(()) => { - // After successful start, take event receiver for pull-based draining - let mut guard = client.inner.lock().unwrap(); - if let Some(ref mut spv_client) = *guard { - match spv_client.take_event_receiver() { - Some(rx) => { - *client.event_rx.lock().unwrap() = Some(rx); - tracing::debug!("Replaced FFI event receiver after client start"); - } - None => { - tracing::debug!( - "No new event receiver returned after client start; keeping existing receiver" - ); - } - } - } - FFIErrorCode::Success as i32 - } + Ok(()) => FFIErrorCode::Success as i32, Err(e) => { set_last_error(&e.to_string()); FFIErrorCode::from(e) as i32 @@ -525,11 +369,12 @@ pub fn client_test_sync(client: &FFIDashSpvClient) -> i32 { tracing::info!("Starting test sync..."); // Get initial height - let start_height = match spv_client.sync_progress().await { - Ok(progress) => progress.header_height, + let progress = spv_client.sync_progress(); + let start_height = match progress.headers() { + Ok(progress) => progress.current_height(), Err(e) => { tracing::error!("Failed to get initial height: {}", e); - return Err(e); + return Err(e.into()); } }; tracing::info!("Initial height: {}", start_height); @@ -538,13 +383,14 @@ pub fn client_test_sync(client: &FFIDashSpvClient) -> i32 { tokio::time::sleep(Duration::from_secs(10)).await; // Check if headers increased - let end_height = match spv_client.sync_progress().await { - Ok(progress) => progress.header_height, + let progress = spv_client.sync_progress(); + let end_height = match progress.headers() { + Ok(progress) => progress.current_height(), Err(e) => { tracing::error!("Failed to get final height: {}", e); let mut guard = client.inner.lock().unwrap(); *guard = Some(spv_client); - return Err(e); + return Err(e.into()); } }; tracing::info!("Final height: {}", end_height); @@ -573,236 +419,205 @@ pub fn client_test_sync(client: &FFIDashSpvClient) -> i32 { } } -/// Sync the SPV client to the chain tip with detailed progress updates. +/// Start the SPV client and begin syncing in the background. /// -/// # Safety -/// -/// This function is unsafe because: -/// - `client` must be a valid pointer to an initialized `FFIDashSpvClient` -/// - `user_data` must satisfy thread safety requirements: -/// - If non-null, it must point to data that is safe to access from multiple threads -/// - The caller must ensure proper synchronization if the data is mutable -/// - The data must remain valid for the entire duration of the sync operation -/// - Both `progress_callback` and `completion_callback` must be thread-safe and can be called from any thread +/// This is the streamlined entry point that combines `start()` and continuous monitoring +/// into a single non-blocking call. Use event callbacks (set via `set_sync_event_callbacks`, +/// `set_network_event_callbacks`, `set_wallet_event_callbacks`) to receive notifications +/// about sync progress, peer connections, and wallet activity. /// -/// # Parameters +/// Workflow: +/// 1. Configure event callbacks before calling `run()` +/// 2. Call `run()` - it returns immediately after spawning background sync threads +/// 3. 
Receive notifications via callbacks as sync progresses
+/// 4. Call `stop()` when done
 ///
-/// # Parameters
+/// # Safety
+/// - `client` must be a valid, non-null pointer to a created client.
 ///
-/// - `client`: Pointer to the SPV client
-/// - `progress_callback`: Optional callback invoked periodically with sync progress
-/// - `completion_callback`: Optional callback invoked on completion
-/// - `user_data`: Optional user data pointer passed to all callbacks
-///
 /// # Returns
-///
-/// 0 on success, error code on failure
+/// 0 on success, error code on failure.
 #[no_mangle]
-pub unsafe extern "C" fn dash_spv_ffi_client_sync_to_tip_with_progress(
-    client: *mut FFIDashSpvClient,
-    progress_callback: Option<extern "C" fn(progress: *const FFIDetailedSyncProgress, user_data: *mut c_void)>,
-    completion_callback: Option<extern "C" fn(success: bool, error: *const c_char, user_data: *mut c_void)>,
-    user_data: *mut c_void,
-) -> i32 {
+pub unsafe extern "C" fn dash_spv_ffi_client_run(client: *mut FFIDashSpvClient) -> i32 {
     null_check!(client);
     let client = &(*client);
 
-    // Register callbacks in the global registry
-    let callback_info = CallbackInfo::Detailed {
-        progress_callback,
-        completion_callback,
-        user_data,
-    };
-    let callback_id = CALLBACK_REGISTRY.lock().unwrap().register(callback_info);
-
-    // Store callback ID in the client
-    let callback_data = SyncCallbackData {
-        callback_id,
-        _marker: std::marker::PhantomData,
-    };
-    *client.sync_callbacks.lock().unwrap() = Some(callback_data);
+    tracing::info!("dash_spv_ffi_client_run: starting client");
 
+    // Start the client first
     let inner = client.inner.clone();
-    let runtime = client.runtime.clone();
-    let sync_callbacks = client.sync_callbacks.clone();
-
-    // Take progress receiver from client
-    let progress_receiver = {
+    let start_result = client.runtime.block_on(async {
+        let mut spv_client = {
+            let mut guard = inner.lock().unwrap();
+            match guard.take() {
+                Some(c) => c,
+                None => {
+                    return Err(dash_spv::SpvError::Storage(dash_spv::StorageError::NotFound(
+                        "Client not initialized".to_string(),
+                    )))
+                }
+            }
+        };
+        let res = spv_client.start().await;
         let mut guard = inner.lock().unwrap();
-        guard.as_mut().and_then(|c| c.take_progress_receiver())
+        *guard = Some(spv_client);
+        res
+    });
+
+    if let Err(e) = start_result {
+        tracing::error!("dash_spv_ffi_client_run: start failed: {}", e);
+        set_last_error(&e.to_string());
+        return FFIErrorCode::from(e) as i32;
+    }
+
+    tracing::info!("dash_spv_ffi_client_run: client started, setting up event monitoring");
+
+    // Get event subscriptions before taking the client for the sync thread.
+    // The sync thread needs exclusive access, so we must subscribe first. 
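+    // (A note on why this ordering is safe, assuming the subscriptions are
+    // tokio broadcast/watch channels as imported above: receivers stay valid
+    // after the client is moved into the sync thread, and events published
+    // after subscription are buffered up to the channel capacity, so nothing
+    // emitted between start() and the monitor spawns below is lost.)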
+ let inner = client.inner.clone(); + let runtime_handle = client.runtime.handle().clone(); + let shutdown_token = client.shutdown_token.clone(); + + let (sync_event_rx, network_event_rx, progress_rx, wallet_event_rx) = { + let guard = inner.lock().unwrap(); + match guard.as_ref() { + Some(c) => { + // Get wallet event subscription using blocking_read since subscribe_events is on WalletManager + let wallet_rx = c.wallet().blocking_read().subscribe_events(); + ( + c.subscribe_sync_events(), + c.subscribe_network_events(), + c.subscribe_progress(), + wallet_rx, + ) + } + None => { + tracing::error!("dash_spv_ffi_client_run: client not available for subscriptions"); + set_last_error("Client not available"); + return FFIErrorCode::RuntimeError as i32; + } + } }; - // Setup progress monitoring with safe callback access - if let Some(mut receiver) = progress_receiver { - let runtime_handle = runtime.handle().clone(); - let sync_callbacks_clone = sync_callbacks.clone(); - let shutdown_token_monitor = client.shutdown_token.clone(); - - let handle = std::thread::spawn(move || { - runtime_handle.block_on(async move { - loop { - tokio::select! { - maybe_progress = receiver.recv() => { - match maybe_progress { - Some(progress) => { - // Handle callback in a thread-safe way - let should_stop = matches!( - progress.sync_stage, - SyncStage::Complete | SyncStage::Failed(_) - ); - - // Create FFI progress (stack-allocated to avoid double-free issues) - let mut ffi_progress = FFIDetailedSyncProgress::from(progress); - - // Call the callback using the registry - { - let cb_guard = sync_callbacks_clone.lock().unwrap(); - - if let Some(ref callback_data) = *cb_guard { - let registry = CALLBACK_REGISTRY.lock().unwrap(); - if let Some(CallbackInfo::Detailed { - progress_callback: Some(callback), - user_data, - .. - }) = registry.get(callback_data.callback_id) - { - // SAFETY: The callback and user_data are safely stored in the registry - // and accessed through thread-safe mechanisms. The registry ensures - // proper lifetime management without raw pointer passing across threads. - callback(&ffi_progress, *user_data); - - // Free any heap-allocated strings inside the progress struct - // to avoid leaking per-callback allocations (e.g., stage_message). - // Move stage_message out of the struct to avoid double-free. 
- unsafe { - // Move stage_message out of the struct (not using ptr::read to avoid double-free) - let stage_message = std::mem::replace( - &mut ffi_progress.stage_message, - crate::types::FFIString { - ptr: std::ptr::null_mut(), - length: 0, - }, - ); - // Destroy stage_message allocated in FFIDetailedSyncProgress::from - crate::types::dash_spv_ffi_string_destroy(stage_message); - // ffi_progress will be dropped normally here; no Drop impl exists - } - } - } - } - - if should_stop { - break; - } - } - None => break, - } - } - _ = shutdown_token_monitor.cancelled() => { - break; - } - } - } - }); - }); + // Spawn event monitoring threads for each callback type that is set + let rt = client.runtime.handle().clone(); + + if client.sync_event_callbacks.lock().unwrap().is_some() { + let handle = spawn_broadcast_monitor( + "Sync event", + sync_event_rx.resubscribe(), + client.sync_event_callbacks.clone(), + shutdown_token.clone(), + rt.clone(), + |cb, event| cb.dispatch(event), + ); + client.active_threads.lock().unwrap().push(handle); + } - // Store thread handle + if client.network_event_callbacks.lock().unwrap().is_some() { + let handle = spawn_broadcast_monitor( + "Network event", + network_event_rx.resubscribe(), + client.network_event_callbacks.clone(), + shutdown_token.clone(), + rt.clone(), + |cb, event| cb.dispatch(event), + ); client.active_threads.lock().unwrap().push(handle); } - // Spawn sync task in a separate thread with safe callback access - let runtime_handle = runtime.handle().clone(); - let sync_callbacks_clone = sync_callbacks.clone(); - let shutdown_token_sync = client.shutdown_token.clone(); + if client.progress_callback.lock().unwrap().is_some() { + let handle = spawn_progress_monitor( + progress_rx.clone(), + client.progress_callback.clone(), + shutdown_token.clone(), + rt.clone(), + |cb, progress| cb.dispatch(progress), + ); + client.active_threads.lock().unwrap().push(handle); + } + + if client.wallet_event_callbacks.lock().unwrap().is_some() { + let handle = spawn_broadcast_monitor( + "Wallet event", + wallet_event_rx.resubscribe(), + client.wallet_event_callbacks.clone(), + shutdown_token.clone(), + rt.clone(), + |cb, event| cb.dispatch(event), + ); + client.active_threads.lock().unwrap().push(handle); + } + + tracing::info!("dash_spv_ffi_client_run: spawning sync thread"); + + // Now take the client for the sync thread + let spv_client = { + let mut guard = inner.lock().unwrap(); + match guard.take() { + Some(c) => c, + None => { + tracing::error!("dash_spv_ffi_client_run: client not available for sync thread"); + set_last_error("Client not available"); + return FFIErrorCode::RuntimeError as i32; + } + } + }; + let sync_handle = std::thread::spawn(move || { - let shutdown_token_callback = shutdown_token_sync.clone(); - // Run monitoring loop - let monitor_result = runtime_handle.block_on({ - let inner = inner.clone(); - async move { - let mut spv_client = { - let mut guard = inner.lock().unwrap(); - match guard.take() { - Some(client) => client, - None => { - return Err(dash_spv::SpvError::Config( - "Client not initialized".to_string(), - )) - } - } - }; - let (_command_sender, command_receiver) = tokio::sync::mpsc::unbounded_channel(); - let run_token = shutdown_token_sync.clone(); - let (abort_handle, abort_registration) = AbortHandle::new_pair(); - let mut monitor_future = Box::pin(Abortable::new( - spv_client.monitor_network(command_receiver, run_token), - abort_registration, - )); - let result = tokio::select! 
{ - res = &mut monitor_future => match res { + runtime_handle.block_on(async move { + tracing::debug!("Sync thread: starting"); + + let mut spv_client = spv_client; + + tracing::debug!("Sync thread: got client, starting monitor_network"); + + let (_command_sender, command_receiver) = tokio::sync::mpsc::unbounded_channel(); + let run_token = shutdown_token.clone(); + let (abort_handle, abort_registration) = AbortHandle::new_pair(); + + let mut monitor_future = Box::pin(Abortable::new( + spv_client.monitor_network(command_receiver, run_token), + abort_registration, + )); + + let result = tokio::select! { + res = &mut monitor_future => match res { + Ok(inner) => inner, + Err(_) => Ok(()), + }, + _ = shutdown_token.cancelled() => { + tracing::debug!("Sync thread: shutdown requested"); + abort_handle.abort(); + match monitor_future.as_mut().await { Ok(inner) => inner, Err(_) => Ok(()), - }, - _ = shutdown_token_sync.cancelled() => { - abort_handle.abort(); - match monitor_future.as_mut().await { - Ok(inner) => inner, - Err(_) => Ok(()), - } - } - }; - drop(monitor_future); - let mut guard = inner.lock().unwrap(); - *guard = Some(spv_client); - result - } - }); - - // Send completion callback and cleanup - { - let mut cb_guard = sync_callbacks_clone.lock().unwrap(); - if let Some(ref callback_data) = *cb_guard { - let mut registry = CALLBACK_REGISTRY.lock().unwrap(); - if let Some(CallbackInfo::Detailed { - completion_callback: Some(callback), - user_data, - .. - }) = registry.unregister(callback_data.callback_id) - { - if shutdown_token_callback.is_cancelled() { - let msg = CString::new("Sync stopped by request").unwrap_or_else(|_| { - CString::new("Sync stopped").expect("hardcoded string is safe") - }); - callback(false, msg.as_ptr(), user_data); - } else { - match monitor_result { - Ok(_) => { - let msg = CString::new("Sync completed successfully") - .unwrap_or_else(|_| { - CString::new("Sync completed") - .expect("hardcoded string is safe") - }); - callback(true, msg.as_ptr(), user_data); - } - Err(e) => { - let msg = match CString::new(format!("Sync failed: {}", e)) { - Ok(s) => s, - Err(_) => CString::new("Sync failed") - .expect("hardcoded string is safe"), - }; - callback(false, msg.as_ptr(), user_data); - } - } } } + }; + + drop(monitor_future); + + if let Err(e) = result { + tracing::error!("Sync thread: sync error: {}", e); } - // Clear the callbacks after completion - *cb_guard = None; - } + + tracing::debug!("Sync thread: putting client back"); + + // Put client back + let mut guard = inner.lock().unwrap(); + *guard = Some(spv_client); + + tracing::debug!("Sync thread: exiting"); + }); }); - // Store thread handle + // Store thread handle for cleanup client.active_threads.lock().unwrap().push(sync_handle); + tracing::info!("dash_spv_ffi_client_run: sync thread spawned, returning"); + FFIErrorCode::Success as i32 } @@ -858,14 +673,54 @@ pub unsafe extern "C" fn dash_spv_ffi_client_get_sync_progress( } } }; - let res = spv_client.sync_progress().await; + let res = spv_client.sync_progress(); let mut guard = inner.lock().unwrap(); *guard = Some(spv_client); - res + Ok(res) }); match result { - Ok(progress) => Box::into_raw(Box::new(progress.into())), + Ok(progress) => Box::into_raw(Box::new(FFISyncProgress::from(progress))), + Err(e) => { + set_last_error(&e.to_string()); + std::ptr::null_mut() + } + } +} + +/// Get the current manager-based sync progress. +/// +/// Returns the new parallel sync system's progress with per-manager details. 
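+///
+/// A minimal sketch of the expected call/free pairing (assuming a client
+/// created with `dash_spv_ffi_client_new`; `percentage` is one of the
+/// progress fields read by the bundled example binary):
+///
+/// ```ignore
+/// let p = unsafe { dash_spv_ffi_client_get_manager_sync_progress(client) };
+/// if !p.is_null() {
+///     let pct = unsafe { (*p).percentage }; // read fields while the pointer is live
+///     unsafe { dash_spv_ffi_manager_sync_progress_destroy(p) };
+/// }
+/// ```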
+/// Use `dash_spv_ffi_manager_sync_progress_destroy` to free the returned struct.
+///
+/// # Safety
+/// - `client` must be a valid, non-null pointer.
+#[no_mangle]
+pub unsafe extern "C" fn dash_spv_ffi_client_get_manager_sync_progress(
+    client: *mut FFIDashSpvClient,
+) -> *mut FFISyncProgress {
+    null_check!(client, std::ptr::null_mut());
+
+    let client = &(*client);
+    let inner = client.inner.clone();
+
+    // Access client under lock and clone the progress
+    let result: Result<FFISyncProgress, dash_spv::SpvError> = {
+        let guard = inner.lock().unwrap();
+        match guard.as_ref() {
+            Some(spv_client) => {
+                // Clone the progress since we need it after releasing the lock
+                let new_progress = spv_client.progress().clone();
+                Ok(FFISyncProgress::from(new_progress))
+            }
+            None => Err(dash_spv::SpvError::Storage(dash_spv::StorageError::NotFound(
+                "Client not initialized".to_string(),
+            ))),
+        }
+    };
+
+    match result {
+        Ok(progress) => Box::into_raw(Box::new(progress)),
         Err(e) => {
             set_last_error(&e.to_string());
             std::ptr::null_mut()
@@ -1015,31 +870,6 @@ pub unsafe extern "C" fn dash_spv_ffi_client_clear_storage(client: *mut FFIDashS
     }
 }
 
-/// Set event callbacks for the client.
-///
-/// # Safety
-/// - `client` must be a valid, non-null pointer.
-#[no_mangle]
-pub unsafe extern "C" fn dash_spv_ffi_client_set_event_callbacks(
-    client: *mut FFIDashSpvClient,
-    callbacks: FFIEventCallbacks,
-) -> i32 {
-    null_check!(client);
-
-    let client = &(*client);
-
-    tracing::debug!("Setting event callbacks on FFI client");
-    tracing::debug!("  Block callback: {}", callbacks.on_block.is_some());
-    tracing::debug!("  Transaction callback: {}", callbacks.on_transaction.is_some());
-    tracing::debug!("  Balance update callback: {}", callbacks.on_balance_update.is_some());
-
-    let mut event_callbacks = client.event_callbacks.lock().unwrap();
-    *event_callbacks = callbacks;
-
-    tracing::debug!("Event callbacks set successfully");
-    FFIErrorCode::Success as i32
-}
-
 /// Destroy the client and free associated resources.
 ///
 /// # Safety
@@ -1052,11 +882,6 @@ pub unsafe extern "C" fn dash_spv_ffi_client_destroy(client: *mut FFIDashSpvClie
     // Cancel shutdown token to stop all threads
     client.shutdown_token.cancel();
 
-    // Clean up any registered callbacks
-    if let Some(ref callback_data) = *client.sync_callbacks.lock().unwrap() {
-        CALLBACK_REGISTRY.lock().unwrap().unregister(callback_data.callback_id);
-    }
-
     // Stop the SPV client
     client.runtime.block_on(async {
         if let Some(mut spv_client) = {
@@ -1155,3 +980,159 @@ pub unsafe extern "C" fn dash_spv_ffi_wallet_manager_free(manager: *mut FFIWalle
 
     key_wallet_ffi::wallet_manager::wallet_manager_free(manager as *mut KeyWalletFFIWalletManager);
 }
+
+// ============================================================================
+// Event Callback Functions
+// ============================================================================
+
+/// Set sync event callbacks for push-based event notifications.
+///
+/// The monitoring thread is spawned when `dash_spv_ffi_client_run` is called.
+/// Call this before calling run().
+///
+/// # Safety
+/// - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`.
+/// - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared.
+/// - Callbacks must be thread-safe as they may be called from a background thread. 
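+///
+/// A minimal Rust-side sketch, mirroring the bundled example binary (the
+/// `on_sync_complete` handler name is illustrative):
+///
+/// ```ignore
+/// let callbacks = FFISyncEventCallbacks {
+///     on_sync_complete: Some(on_sync_complete),
+///     ..FFISyncEventCallbacks::default()
+/// };
+/// let rc = unsafe { dash_spv_ffi_client_set_sync_event_callbacks(client, callbacks) };
+/// assert_eq!(rc, FFIErrorCode::Success as i32);
+/// let rc = unsafe { dash_spv_ffi_client_run(client) };
+/// ```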
+#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_client_set_sync_event_callbacks( + client: *mut FFIDashSpvClient, + callbacks: FFISyncEventCallbacks, +) -> i32 { + null_check!(client); + + let client = &(*client); + *client.sync_event_callbacks.lock().unwrap() = Some(callbacks); + + FFIErrorCode::Success as i32 +} + +/// Clear sync event callbacks. +/// +/// # Safety +/// - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_client_clear_sync_event_callbacks( + client: *mut FFIDashSpvClient, +) -> i32 { + null_check!(client); + + let client = &(*client); + *client.sync_event_callbacks.lock().unwrap() = None; + + FFIErrorCode::Success as i32 +} + +/// Set network event callbacks for push-based event notifications. +/// +/// The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. +/// Call this before calling run(). +/// +/// # Safety +/// - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. +/// - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. +/// - Callbacks must be thread-safe as they may be called from a background thread. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_client_set_network_event_callbacks( + client: *mut FFIDashSpvClient, + callbacks: FFINetworkEventCallbacks, +) -> i32 { + null_check!(client); + + let client = &(*client); + *client.network_event_callbacks.lock().unwrap() = Some(callbacks); + + FFIErrorCode::Success as i32 +} + +/// Clear network event callbacks. +/// +/// # Safety +/// - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_client_clear_network_event_callbacks( + client: *mut FFIDashSpvClient, +) -> i32 { + null_check!(client); + + let client = &(*client); + *client.network_event_callbacks.lock().unwrap() = None; + + FFIErrorCode::Success as i32 +} + +/// Set wallet event callbacks for push-based event notifications. +/// +/// The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. +/// Call this before calling run(). +/// +/// # Safety +/// - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. +/// - The `callbacks` struct and its `user_data` must remain valid until callbacks are cleared. +/// - Callbacks must be thread-safe as they may be called from a background thread. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_client_set_wallet_event_callbacks( + client: *mut FFIDashSpvClient, + callbacks: FFIWalletEventCallbacks, +) -> i32 { + null_check!(client); + + let client = &(*client); + *client.wallet_event_callbacks.lock().unwrap() = Some(callbacks); + + FFIErrorCode::Success as i32 +} + +/// Clear wallet event callbacks. +/// +/// # Safety +/// - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_client_clear_wallet_event_callbacks( + client: *mut FFIDashSpvClient, +) -> i32 { + null_check!(client); + + let client = &(*client); + *client.wallet_event_callbacks.lock().unwrap() = None; + + FFIErrorCode::Success as i32 +} + +/// Set progress callback for sync progress updates. +/// +/// The monitoring thread is spawned when `dash_spv_ffi_client_run` is called. +/// Call this before calling run(). +/// +/// # Safety +/// - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. +/// - The `callback` struct and its `user_data` must remain valid until the callback is cleared. 
+/// - The callback must be thread-safe as it may be called from a background thread. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_client_set_progress_callback( + client: *mut FFIDashSpvClient, + callback: crate::FFIProgressCallback, +) -> i32 { + null_check!(client); + + let client = &(*client); + *client.progress_callback.lock().unwrap() = Some(callback); + + FFIErrorCode::Success as i32 +} + +/// Clear progress callback. +/// +/// # Safety +/// - `client` must be a valid, non-null pointer to an `FFIDashSpvClient`. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_client_clear_progress_callback( + client: *mut FFIDashSpvClient, +) -> i32 { + null_check!(client); + + let client = &(*client); + *client.progress_callback.lock().unwrap() = None; + + FFIErrorCode::Success as i32 +} diff --git a/dash-spv-ffi/src/platform_integration.rs b/dash-spv-ffi/src/platform_integration.rs index 2144a7421..b868c8fb2 100644 --- a/dash-spv-ffi/src/platform_integration.rs +++ b/dash-spv-ffi/src/platform_integration.rs @@ -103,17 +103,27 @@ pub unsafe extern "C" fn ffi_dash_spv_get_quorum_public_key( // Get the masternode list engine directly for efficient access let engine = match spv_client.masternode_list_engine() { - Some(engine) => engine, - None => { + Ok(engine) => engine, + Err(e) => { return FFIResult::error( FFIErrorCode::RuntimeError, - "Masternode list engine not initialized. Core SDK may still be syncing.", + &format!( + "Masternode list engine not initialized: {}. Core SDK may still be syncing.", + e + ), ); } }; + // Lock the engine for reading + let engine_guard = engine.blocking_read(); + // Use the global quorum status index for efficient lookup - match engine.quorum_statuses.get(&llmq_type).and_then(|type_map| type_map.get(&quorum_hash)) { + match engine_guard + .quorum_statuses + .get(&llmq_type) + .and_then(|type_map| type_map.get(&quorum_hash)) + { Some((heights, public_key, _status)) => { // Check if the requested height is one of the heights where this quorum exists if !heights.contains(&core_chain_locked_height) { @@ -140,10 +150,10 @@ pub unsafe extern "C" fn ffi_dash_spv_get_quorum_public_key( } None => { // Quorum not found in global index - provide diagnostic info - let total_lists = engine.masternode_lists.len(); + let total_lists = engine_guard.masternode_lists.len(); let (min_height, max_height) = if total_lists > 0 { - let min = engine.masternode_lists.keys().min().copied().unwrap_or(0); - let max = engine.masternode_lists.keys().max().copied().unwrap_or(0); + let min = engine_guard.masternode_lists.keys().min().copied().unwrap_or(0); + let max = engine_guard.masternode_lists.keys().max().copied().unwrap_or(0); (min, max) } else { (0, 0) diff --git a/dash-spv-ffi/src/types.rs b/dash-spv-ffi/src/types.rs index c6839a8d7..b695d5db8 100644 --- a/dash-spv-ffi/src/types.rs +++ b/dash-spv-ffi/src/types.rs @@ -1,6 +1,10 @@ use dash_spv::client::config::MempoolStrategy; +use dash_spv::sync::{ + BlockHeadersProgress, BlocksProgress, ChainLockProgress, FilterHeadersProgress, + FiltersProgress, InstantSendProgress, MasternodesProgress, SyncProgress, SyncState, +}; use dash_spv::types::{DetailedSyncProgress, MempoolRemovalReason, SyncStage}; -use dash_spv::SyncProgress; +use dash_spv::SyncProgress as LegacySyncProgress; use std::ffi::{CStr, CString}; use std::os::raw::c_char; @@ -43,7 +47,7 @@ impl FFIString { } #[repr(C)] -pub struct FFISyncProgress { +pub struct FFILegacySyncProgress { pub header_height: u32, pub filter_header_height: u32, pub masternode_height: u32, @@ -53,9 
+57,9 @@ pub struct FFISyncProgress {
     pub last_synced_filter_height: u32,
 }
 
-impl From<SyncProgress> for FFISyncProgress {
-    fn from(progress: SyncProgress) -> Self {
-        FFISyncProgress {
+impl From<LegacySyncProgress> for FFILegacySyncProgress {
+    fn from(progress: LegacySyncProgress) -> Self {
+        FFILegacySyncProgress {
             header_height: progress.header_height,
             filter_header_height: progress.filter_header_height,
             masternode_height: progress.masternode_height,
@@ -111,6 +115,295 @@ impl From<SyncStage> for FFISyncStage {
     }
 }
 
+/// SyncState exposed by the FFI as FFISyncState.
+#[repr(C)]
+#[derive(Debug, Clone, Copy, Default, PartialEq, Eq)]
+pub enum FFISyncState {
+    #[default]
+    Initializing = 0,
+    WaitingForConnections = 1,
+    WaitForEvents = 2,
+    Syncing = 3,
+    Synced = 4,
+    Error = 5,
+}
+
+impl From<SyncState> for FFISyncState {
+    fn from(state: SyncState) -> Self {
+        match state {
+            SyncState::Initializing => FFISyncState::Initializing,
+            SyncState::WaitingForConnections => FFISyncState::WaitingForConnections,
+            SyncState::WaitForEvents => FFISyncState::WaitForEvents,
+            SyncState::Syncing => FFISyncState::Syncing,
+            SyncState::Synced => FFISyncState::Synced,
+            SyncState::Error => FFISyncState::Error,
+        }
+    }
+}
+
+/// Progress for block headers synchronization.
+#[repr(C)]
+#[derive(Debug, Clone, Default)]
+pub struct FFIBlockHeadersProgress {
+    pub state: FFISyncState,
+    pub current_height: u32,
+    pub target_height: u32,
+    pub processed: u32,
+    pub buffered: u32,
+    pub percentage: f64,
+    pub last_activity: u64,
+}
+
+impl From<&BlockHeadersProgress> for FFIBlockHeadersProgress {
+    fn from(progress: &BlockHeadersProgress) -> Self {
+        FFIBlockHeadersProgress {
+            state: progress.state().into(),
+            current_height: progress.current_height(),
+            target_height: progress.target_height(),
+            processed: progress.processed(),
+            buffered: progress.buffered(),
+            percentage: progress.percentage(),
+            last_activity: progress.last_activity().elapsed().as_secs(),
+        }
+    }
+}
+
+/// Progress for filter headers synchronization.
+#[repr(C)]
+#[derive(Debug, Clone, Default)]
+pub struct FFIFilterHeadersProgress {
+    pub state: FFISyncState,
+    pub current_height: u32,
+    pub target_height: u32,
+    pub block_header_tip_height: u32,
+    pub processed: u32,
+    pub percentage: f64,
+    pub last_activity: u64,
+}
+
+impl From<&FilterHeadersProgress> for FFIFilterHeadersProgress {
+    fn from(progress: &FilterHeadersProgress) -> Self {
+        FFIFilterHeadersProgress {
+            state: progress.state().into(),
+            current_height: progress.current_height(),
+            target_height: progress.target_height(),
+            block_header_tip_height: progress.block_header_tip_height(),
+            processed: progress.processed(),
+            percentage: progress.percentage(),
+            last_activity: progress.last_activity().elapsed().as_secs(),
+        }
+    }
+}
+
+/// Progress for compact block filters synchronization.
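+///
+/// Field values are snapshots copied from the core `FiltersProgress`;
+/// `last_activity` is whole seconds elapsed since the manager last made
+/// progress (see the `From` impl below).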
+#[repr(C)] +#[derive(Debug, Clone, Default)] +pub struct FFIFiltersProgress { + pub state: FFISyncState, + pub current_height: u32, + pub target_height: u32, + pub filter_header_tip_height: u32, + pub downloaded: u32, + pub processed: u32, + pub matched: u32, + pub percentage: f64, + pub last_activity: u64, +} + +impl From<&FiltersProgress> for FFIFiltersProgress { + fn from(progress: &FiltersProgress) -> Self { + FFIFiltersProgress { + state: progress.state().into(), + current_height: progress.current_height(), + target_height: progress.target_height(), + filter_header_tip_height: progress.filter_header_tip_height(), + downloaded: progress.downloaded(), + processed: progress.processed(), + matched: progress.matched(), + percentage: progress.percentage(), + last_activity: progress.last_activity().elapsed().as_secs(), + } + } +} + +/// Progress for full block synchronization. +#[repr(C)] +#[derive(Debug, Clone, Default)] +pub struct FFIBlocksProgress { + pub state: FFISyncState, + pub last_processed: u32, + pub requested: u32, + pub from_storage: u32, + pub downloaded: u32, + pub processed: u32, + pub relevant: u32, + pub transactions: u32, + pub last_activity: u64, +} + +impl From<&BlocksProgress> for FFIBlocksProgress { + fn from(progress: &BlocksProgress) -> Self { + FFIBlocksProgress { + state: progress.state().into(), + last_processed: progress.last_processed(), + requested: progress.requested(), + from_storage: progress.from_storage(), + downloaded: progress.downloaded(), + processed: progress.processed(), + relevant: progress.relevant(), + transactions: progress.transactions(), + last_activity: progress.last_activity().elapsed().as_secs(), + } + } +} + +/// Progress for masternode list synchronization. +#[repr(C)] +#[derive(Debug, Clone, Default)] +pub struct FFIMasternodesProgress { + pub state: FFISyncState, + pub current_height: u32, + pub target_height: u32, + pub block_header_tip_height: u32, + pub diffs_processed: u32, + pub last_activity: u64, +} + +impl From<&MasternodesProgress> for FFIMasternodesProgress { + fn from(progress: &MasternodesProgress) -> Self { + FFIMasternodesProgress { + state: progress.state().into(), + current_height: progress.current_height(), + target_height: progress.target_height(), + block_header_tip_height: progress.block_header_tip_height(), + diffs_processed: progress.diffs_processed(), + last_activity: progress.last_activity().elapsed().as_secs(), + } + } +} + +/// Progress for ChainLock synchronization. +#[repr(C)] +#[derive(Debug, Clone, Default)] +pub struct FFIChainLockProgress { + pub state: FFISyncState, + pub best_validated_height: u32, + pub valid: u32, + pub invalid: u32, + pub last_activity: u64, +} + +impl From<&ChainLockProgress> for FFIChainLockProgress { + fn from(progress: &ChainLockProgress) -> Self { + FFIChainLockProgress { + state: progress.state().into(), + best_validated_height: progress.best_validated_height(), + valid: progress.valid(), + invalid: progress.invalid(), + last_activity: progress.last_activity().elapsed().as_secs(), + } + } +} + +/// Progress for InstantSend synchronization. 
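+///
+/// Note that `pending` is narrowed from the core counter to `u32` in the
+/// conversion below.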
+#[repr(C)]
+#[derive(Debug, Clone, Default)]
+pub struct FFIInstantSendProgress {
+    pub state: FFISyncState,
+    pub pending: u32,
+    pub valid: u32,
+    pub invalid: u32,
+    pub last_activity: u64,
+}
+
+impl From<&InstantSendProgress> for FFIInstantSendProgress {
+    fn from(progress: &InstantSendProgress) -> Self {
+        FFIInstantSendProgress {
+            state: progress.state().into(),
+            pending: progress.pending() as u32,
+            valid: progress.valid(),
+            invalid: progress.invalid(),
+            last_activity: progress.last_activity().elapsed().as_secs(),
+        }
+    }
+}
+
+/// Aggregate progress for all sync managers.
+/// Provides a complete view of the parallel sync system's state.
+#[repr(C)]
+pub struct FFISyncProgress {
+    pub state: FFISyncState,
+    pub percentage: f64,
+    pub is_synced: bool,
+    /// Per-manager progress (null if manager not started).
+    pub headers: *mut FFIBlockHeadersProgress,
+    pub filter_headers: *mut FFIFilterHeadersProgress,
+    pub filters: *mut FFIFiltersProgress,
+    pub blocks: *mut FFIBlocksProgress,
+    pub masternodes: *mut FFIMasternodesProgress,
+    pub chainlocks: *mut FFIChainLockProgress,
+    pub instantsend: *mut FFIInstantSendProgress,
+}
+
+impl From<SyncProgress> for FFISyncProgress {
+    fn from(progress: SyncProgress) -> Self {
+        let headers = progress
+            .headers()
+            .ok()
+            .map(|p| Box::into_raw(Box::new(FFIBlockHeadersProgress::from(p))))
+            .unwrap_or(std::ptr::null_mut());
+
+        let filter_headers = progress
+            .filter_headers()
+            .ok()
+            .map(|p| Box::into_raw(Box::new(FFIFilterHeadersProgress::from(p))))
+            .unwrap_or(std::ptr::null_mut());
+
+        let filters = progress
+            .filters()
+            .ok()
+            .map(|p| Box::into_raw(Box::new(FFIFiltersProgress::from(p))))
+            .unwrap_or(std::ptr::null_mut());
+
+        let blocks = progress
+            .blocks()
+            .ok()
+            .map(|p| Box::into_raw(Box::new(FFIBlocksProgress::from(p))))
+            .unwrap_or(std::ptr::null_mut());
+
+        let masternodes = progress
+            .masternodes()
+            .ok()
+            .map(|p| Box::into_raw(Box::new(FFIMasternodesProgress::from(p))))
+            .unwrap_or(std::ptr::null_mut());
+
+        let chainlocks = progress
+            .chainlocks()
+            .ok()
+            .map(|p| Box::into_raw(Box::new(FFIChainLockProgress::from(p))))
+            .unwrap_or(std::ptr::null_mut());
+
+        let instantsend = progress
+            .instantsend()
+            .ok()
+            .map(|p| Box::into_raw(Box::new(FFIInstantSendProgress::from(p))))
+            .unwrap_or(std::ptr::null_mut());
+
+        Self {
+            state: progress.state().into(),
+            percentage: progress.percentage(),
+            is_synced: progress.is_synced(),
+            headers,
+            filter_headers,
+            filters,
+            blocks,
+            masternodes,
+            chainlocks,
+            instantsend,
+        }
+    }
+}
+
 #[repr(C)]
 pub struct FFIDetailedSyncProgress {
     pub total_height: u32,
@@ -119,7 +412,7 @@ pub struct FFIDetailedSyncProgress {
     pub estimated_seconds_remaining: i64, // -1 if unknown
     pub stage: FFISyncStage,
     pub stage_message: FFIString,
-    pub overview: FFISyncProgress,
+    pub overview: FFILegacySyncProgress,
     pub total_headers: u64,
     pub sync_start_timestamp: i64,
 }
@@ -156,7 +449,7 @@ impl From<DetailedSyncProgress> for FFIDetailedSyncProgress {
             SyncStage::Failed(err) => err.clone(),
         };
 
-        let overview = FFISyncProgress::from(progress.sync_progress.clone());
+        let overview = FFILegacySyncProgress::from(progress.sync_progress.clone());
 
         FFIDetailedSyncProgress {
             total_height: progress.peer_best_height,
@@ -238,3 +531,130 @@ impl From<MempoolRemovalReason> for FFIMempoolRemovalReason {
         }
     }
 }
+
+// ============================================================================
+// Destroy functions for new manager progress types
+// ============================================================================
+
+/// Destroy an
`FFIBlockHeadersProgress` object. +/// +/// # Safety +/// - `progress` must be a pointer returned from this crate, or null. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_block_headers_progress_destroy( + progress: *mut FFIBlockHeadersProgress, +) { + if !progress.is_null() { + let _ = Box::from_raw(progress); + } +} + +/// Destroy an `FFIFilterHeadersProgress` object. +/// +/// # Safety +/// - `progress` must be a pointer returned from this crate, or null. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_filter_headers_progress_destroy( + progress: *mut FFIFilterHeadersProgress, +) { + if !progress.is_null() { + let _ = Box::from_raw(progress); + } +} + +/// Destroy an `FFIFiltersProgress` object. +/// +/// # Safety +/// - `progress` must be a pointer returned from this crate, or null. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_filters_progress_destroy(progress: *mut FFIFiltersProgress) { + if !progress.is_null() { + let _ = Box::from_raw(progress); + } +} + +/// Destroy an `FFIBlocksProgress` object. +/// +/// # Safety +/// - `progress` must be a pointer returned from this crate, or null. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_blocks_progress_destroy(progress: *mut FFIBlocksProgress) { + if !progress.is_null() { + let _ = Box::from_raw(progress); + } +} + +/// Destroy an `FFIMasternodesProgress` object. +/// +/// # Safety +/// - `progress` must be a pointer returned from this crate, or null. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_masternode_progress_destroy( + progress: *mut FFIMasternodesProgress, +) { + if !progress.is_null() { + let _ = Box::from_raw(progress); + } +} + +/// Destroy an `FFIChainLockProgress` object. +/// +/// # Safety +/// - `progress` must be a pointer returned from this crate, or null. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_chainlock_progress_destroy( + progress: *mut FFIChainLockProgress, +) { + if !progress.is_null() { + let _ = Box::from_raw(progress); + } +} + +/// Destroy an `FFIInstantSendProgress` object. +/// +/// # Safety +/// - `progress` must be a pointer returned from this crate, or null. +#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_instantsend_progress_destroy( + progress: *mut FFIInstantSendProgress, +) { + if !progress.is_null() { + let _ = Box::from_raw(progress); + } +} + +/// Destroy an `FFISyncProgress` object and all its nested pointers. +/// +/// # Safety +/// - `progress` must be a pointer returned from this crate, or null. 
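+///
+/// A usage sketch (illustrative; assumes an initialized `client`): fetch,
+/// inspect, then destroy exactly once. The nested per-manager pointers are
+/// freed by this call and must not be freed individually afterwards.
+///
+/// ```ignore
+/// unsafe {
+///     let progress = dash_spv_ffi_client_get_manager_sync_progress(client);
+///     if !progress.is_null() {
+///         // Read any fields you need before destroying the struct.
+///         let overall = (*progress).percentage;
+///         let _ = overall;
+///         dash_spv_ffi_manager_sync_progress_destroy(progress);
+///     }
+/// }
+/// ```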
+#[no_mangle] +pub unsafe extern "C" fn dash_spv_ffi_manager_sync_progress_destroy( + progress: *mut FFISyncProgress, +) { + if !progress.is_null() { + let p = Box::from_raw(progress); + + // Free all nested progress pointers + if !p.headers.is_null() { + dash_spv_ffi_block_headers_progress_destroy(p.headers); + } + if !p.filter_headers.is_null() { + dash_spv_ffi_filter_headers_progress_destroy(p.filter_headers); + } + if !p.filters.is_null() { + dash_spv_ffi_filters_progress_destroy(p.filters); + } + if !p.blocks.is_null() { + dash_spv_ffi_blocks_progress_destroy(p.blocks); + } + if !p.masternodes.is_null() { + dash_spv_ffi_masternode_progress_destroy(p.masternodes); + } + if !p.chainlocks.is_null() { + dash_spv_ffi_chainlock_progress_destroy(p.chainlocks); + } + if !p.instantsend.is_null() { + dash_spv_ffi_instantsend_progress_destroy(p.instantsend); + } + } +} diff --git a/dash-spv-ffi/tests/c_tests/test_event_draining.c b/dash-spv-ffi/tests/c_tests/test_event_draining.c deleted file mode 100644 index 48e404fe3..000000000 --- a/dash-spv-ffi/tests/c_tests/test_event_draining.c +++ /dev/null @@ -1,153 +0,0 @@ -#include -#include -#include -#include -#include -#include -#include -#include -#include "../../../key-wallet-ffi/include/key_wallet_ffi.h" -#include "../../dash_spv_ffi.h" - -// Define constants for better readability -#define FFIErrorCode_Success 0 -#define FFIErrorCode_NullPointer 1 -#define FFIValidationMode_None 0 - -// Test helper macros -#define TEST_ASSERT(condition) do { \ - if (!(condition)) { \ - fprintf(stderr, "Assertion failed: %s at %s:%d\n", #condition, __FILE__, __LINE__); \ - exit(1); \ - } \ -} while(0) - -#define TEST_SUCCESS(name) printf("✓ %s\n", name) -#define TEST_START(name) printf("Running %s...\n", name) - -FFIDashSpvClient* create_simple_test_client() { - // Create config - FFIClientConfig* config = dash_spv_ffi_config_new(REGTEST); - TEST_ASSERT(config != NULL); - - // Set data directory to temporary location - char temp_path[256]; - snprintf(temp_path, sizeof(temp_path), "/tmp/dash_spv_test_%d", getpid()); - int result = dash_spv_ffi_config_set_data_dir(config, temp_path); - TEST_ASSERT(result == FFIErrorCode_Success); - - // Set validation mode to none for faster testing - result = dash_spv_ffi_config_set_validation_mode(config, FFIValidationMode_None); - TEST_ASSERT(result == FFIErrorCode_Success); - - // Create client - FFIDashSpvClient* client = dash_spv_ffi_client_new(config); - TEST_ASSERT(client != NULL); - - // Clean up config - dash_spv_ffi_config_destroy(config); - - return client; -} - -void test_drain_events_null_client() { - TEST_START("test_drain_events_null_client"); - - // Test with null client pointer - int result = dash_spv_ffi_client_drain_events(NULL); - TEST_ASSERT(result == FFIErrorCode_NullPointer); - - // Check error was set - const char* error = dash_spv_ffi_get_last_error(); - TEST_ASSERT(error != NULL); - TEST_ASSERT(strstr(error, "Null") != NULL || strstr(error, "null") != NULL || strstr(error, "invalid") != NULL); - - TEST_SUCCESS("test_drain_events_null_client"); -} - -void test_drain_events_no_events() { - TEST_START("test_drain_events_no_events"); - - FFIDashSpvClient* client = create_simple_test_client(); - - // Call drain events - should succeed with no events - int result = dash_spv_ffi_client_drain_events(client); - TEST_ASSERT(result == FFIErrorCode_Success); - - dash_spv_ffi_client_destroy(client); - TEST_SUCCESS("test_drain_events_no_events"); -} - -void test_drain_events_multiple_calls() { - 
TEST_START("test_drain_events_multiple_calls"); - - FFIDashSpvClient* client = create_simple_test_client(); - - // Make multiple drain calls - should be idempotent - for (int i = 0; i < 10; i++) { - int result = dash_spv_ffi_client_drain_events(client); - TEST_ASSERT(result == FFIErrorCode_Success); - } - - dash_spv_ffi_client_destroy(client); - TEST_SUCCESS("test_drain_events_multiple_calls"); -} - -void test_drain_events_performance() { - TEST_START("test_drain_events_performance"); - - FFIDashSpvClient* client = create_simple_test_client(); - - // Test performance with many calls - const int num_calls = 1000; - clock_t start = clock(); - - for (int i = 0; i < num_calls; i++) { - int result = dash_spv_ffi_client_drain_events(client); - TEST_ASSERT(result == FFIErrorCode_Success); - } - - clock_t end = clock(); - double elapsed = ((double)(end - start)) / CLOCKS_PER_SEC; - - printf("Performance: %d drain_events calls took %.3f seconds (%.1f μs per call)\n", - num_calls, elapsed, (elapsed * 1000000) / num_calls); - - // Should be very fast - less than 100ms for 1000 calls - TEST_ASSERT(elapsed < 0.1); - - dash_spv_ffi_client_destroy(client); - TEST_SUCCESS("test_drain_events_performance"); -} - -void test_drain_events_memory_safety() { - TEST_START("test_drain_events_memory_safety"); - - // Test that repeated client creation/destruction with drain events doesn't leak - for (int iteration = 0; iteration < 5; iteration++) { - FFIDashSpvClient* client = create_simple_test_client(); - - // Multiple rapid drain calls - for (int i = 0; i < 20; i++) { - int result = dash_spv_ffi_client_drain_events(client); - TEST_ASSERT(result == FFIErrorCode_Success); - } - - dash_spv_ffi_client_destroy(client); - } - - TEST_SUCCESS("test_drain_events_memory_safety"); -} - -int main() { - printf("=== C Tests for dash_spv_ffi_client_drain_events ===\n"); - - test_drain_events_null_client(); - test_drain_events_no_events(); - test_drain_events_multiple_calls(); - test_drain_events_performance(); - test_drain_events_memory_safety(); - - printf("\n=== All event draining tests passed! ===\n"); - return 0; -} diff --git a/dash-spv-ffi/tests/c_tests/test_integration.c b/dash-spv-ffi/tests/c_tests/test_integration.c deleted file mode 100644 index 823faf8c7..000000000 --- a/dash-spv-ffi/tests/c_tests/test_integration.c +++ /dev/null @@ -1,290 +0,0 @@ -#include -#include -#include -#include -#include -#include -#include -#include "../../dash_spv_ffi.h" - -#define TEST_ASSERT(condition) do { \ - if (!(condition)) { \ - fprintf(stderr, "Assertion failed: %s at %s:%d\n", #condition, __FILE__, __LINE__); \ - exit(1); \ - } \ -} while(0) - -#define TEST_SUCCESS(name) printf("✓ %s\n", name) -#define TEST_START(name) printf("Running %s...\n", name) - -// Integration test context -typedef struct { - FFIDashSpvClient* client; - FFIClientConfig* config; - int sync_completed; - int block_count; - int tx_count; - uint64_t total_balance; -} IntegrationContext; - -// Event callbacks -void on_block_event(uint32_t height, const char* hash, void* user_data) { - IntegrationContext* ctx = (IntegrationContext*)user_data; - ctx->block_count++; - printf("New block at height %u: %s\n", height, hash ? hash : "null"); -} - -void on_transaction_event(const char* txid, int confirmed, void* user_data) { - IntegrationContext* ctx = (IntegrationContext*)user_data; - ctx->tx_count++; - printf("Transaction %s: confirmed=%d\n", txid ? 
txid : "null", confirmed); -} - -void on_balance_update_event(uint64_t confirmed, uint64_t unconfirmed, void* user_data) { - IntegrationContext* ctx = (IntegrationContext*)user_data; - ctx->total_balance = confirmed + unconfirmed; - printf("Balance update: confirmed=%llu, unconfirmed=%llu\n", - (unsigned long long)confirmed, (unsigned long long)unconfirmed); -} - -// Test full workflow -void test_full_workflow() { - TEST_START("test_full_workflow"); - - IntegrationContext ctx = {0}; - - // Create configuration - ctx.config = dash_spv_ffi_config_new(FFINetwork_Regtest); - TEST_ASSERT(ctx.config != NULL); - - // Configure client - dash_spv_ffi_config_set_data_dir(ctx.config, "/tmp/dash-spv-integration"); - dash_spv_ffi_config_set_validation_mode(ctx.config, FFIValidationMode_Basic); - dash_spv_ffi_config_set_max_peers(ctx.config, 8); - - // Add some test peers - dash_spv_ffi_config_add_peer(ctx.config, "127.0.0.1:19999"); - dash_spv_ffi_config_add_peer(ctx.config, "127.0.0.1:19998"); - - // Create client - ctx.client = dash_spv_ffi_client_new(ctx.config); - TEST_ASSERT(ctx.client != NULL); - - // Set up event callbacks - FFIEventCallbacks event_callbacks = {0}; - event_callbacks.on_block = on_block_event; - event_callbacks.on_transaction = on_transaction_event; - event_callbacks.on_balance_update = on_balance_update_event; - event_callbacks.user_data = &ctx; - - int32_t result = dash_spv_ffi_client_set_event_callbacks(ctx.client, event_callbacks); - TEST_ASSERT(result == FFIErrorCode_Success); - - // Add addresses to watch - const char* addresses[] = { - "XjSgy6PaVCB3V4KhCiCDkaVbx9ewxe9R1E", - "XuQQkwA4FYkq2XERzMY2CiAZhJTEkgZ6uN", - "XpAy3DUNod14KdJJh3XUjtkAiUkD2kd4JT" - }; - - for (int i = 0; i < 3; i++) { - result = dash_spv_ffi_client_watch_address(ctx.client, addresses[i]); - TEST_ASSERT(result == FFIErrorCode_Success); - } - - // Start the client - result = dash_spv_ffi_client_start(ctx.client); - printf("Client start result: %d\n", result); - - // Monitor for a while - time_t start_time = time(NULL); - time_t monitor_duration = 5; // 5 seconds - - while (time(NULL) - start_time < monitor_duration) { - // Check sync progress - FFISyncProgress* progress = dash_spv_ffi_client_get_sync_progress(ctx.client); - if (progress != NULL) { - printf("Sync progress: headers=%u, filters=%u, peers=%u\n", - progress->header_height, - progress->filter_header_height, - progress->peer_count); - dash_spv_ffi_sync_progress_destroy(progress); - } - - sleep(1); - } - - // Stop the client - result = dash_spv_ffi_client_stop(ctx.client); - TEST_ASSERT(result == FFIErrorCode_Success); - - // Print summary - printf("\nWorkflow summary:\n"); - printf(" Blocks received: %d\n", ctx.block_count); - printf(" Transactions: %d\n", ctx.tx_count); - printf(" Total balance: %llu\n", (unsigned long long)ctx.total_balance); - - // Clean up - dash_spv_ffi_client_destroy(ctx.client); - dash_spv_ffi_config_destroy(ctx.config); - - TEST_SUCCESS("test_full_workflow"); -} - -// Test persistence -void test_persistence() { - TEST_START("test_persistence"); - - const char* data_dir = "/tmp/dash-spv-persistence"; - - // Phase 1: Create client and add data - { - FFIClientConfig* config = dash_spv_ffi_config_new(FFINetwork_Regtest); - dash_spv_ffi_config_set_data_dir(config, data_dir); - - FFIDashSpvClient* client = dash_spv_ffi_client_new(config); - TEST_ASSERT(client != NULL); - - // Add watched addresses - dash_spv_ffi_client_watch_address(client, "XjSgy6PaVCB3V4KhCiCDkaVbx9ewxe9R1E"); - dash_spv_ffi_client_watch_address(client, 
"XuQQkwA4FYkq2XERzMY2CiAZhJTEkgZ6uN"); - - // Start and sync for a bit - dash_spv_ffi_client_start(client); - sleep(2); - - // Get current state - FFISyncProgress* progress = dash_spv_ffi_client_get_sync_progress(client); - uint32_t height1 = 0; - if (progress != NULL) { - height1 = progress->header_height; - dash_spv_ffi_sync_progress_destroy(progress); - } - - printf("Phase 1 height: %u\n", height1); - - dash_spv_ffi_client_stop(client); - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - } - - // Phase 2: Create new client with same data directory - { - FFIClientConfig* config = dash_spv_ffi_config_new(FFINetwork_Regtest); - dash_spv_ffi_config_set_data_dir(config, data_dir); - - FFIDashSpvClient* client = dash_spv_ffi_client_new(config); - TEST_ASSERT(client != NULL); - - // Check if state was persisted - FFISyncProgress* progress = dash_spv_ffi_client_get_sync_progress(client); - if (progress != NULL) { - printf("Phase 2 height: %u\n", progress->header_height); - dash_spv_ffi_sync_progress_destroy(progress); - } - - // Check watched addresses - FFIArray* watched = dash_spv_ffi_client_get_watched_addresses(client); - if (watched != NULL) { - printf("Persisted watched addresses: %zu\n", watched->len); - dash_spv_ffi_array_destroy(*watched); - } - - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - } - - TEST_SUCCESS("test_persistence"); -} - -// Test transaction handling -void test_transaction_handling() { - TEST_START("test_transaction_handling"); - - FFIClientConfig* config = dash_spv_ffi_config_testnet(); - dash_spv_ffi_config_set_data_dir(config, "/tmp/dash-spv-tx-test"); - - FFIDashSpvClient* client = dash_spv_ffi_client_new(config); - TEST_ASSERT(client != NULL); - - // Test transaction validation (minimal tx for testing) - const char* test_tx_hex = "01000000000100000000000000001976a914000000000000000000000000000000000000000088ac00000000"; - - // Try to broadcast (will likely fail, but tests the API) - int32_t result = dash_spv_ffi_client_broadcast_transaction(client, test_tx_hex); - printf("Broadcast result: %d\n", result); - - // If failed, check error - if (result != FFIErrorCode_Success) { - const char* error = dash_spv_ffi_get_last_error(); - if (error != NULL) { - printf("Broadcast error: %s\n", error); - } - dash_spv_ffi_clear_error(); - } - - // Test transaction query - const char* test_txid = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"; - FFITransaction* tx = dash_spv_ffi_client_get_transaction(client, test_txid); - if (tx == NULL) { - printf("Transaction not found (expected)\n"); - } else { - dash_spv_ffi_transaction_destroy(tx); - } - - // Test confirmation status - int32_t confirmations = dash_spv_ffi_client_get_transaction_confirmations(client, test_txid); - printf("Transaction confirmations: %d\n", confirmations); - - int32_t is_confirmed = dash_spv_ffi_client_is_transaction_confirmed(client, test_txid); - printf("Transaction confirmed: %d\n", is_confirmed); - - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - - TEST_SUCCESS("test_transaction_handling"); -} - -// Test rescan functionality -void test_rescan() { - TEST_START("test_rescan"); - - FFIClientConfig* config = dash_spv_ffi_config_testnet(); - dash_spv_ffi_config_set_data_dir(config, "/tmp/dash-spv-rescan-test"); - - FFIDashSpvClient* client = dash_spv_ffi_client_new(config); - TEST_ASSERT(client != NULL); - - // Add addresses to watch - dash_spv_ffi_client_watch_address(client, 
"XjSgy6PaVCB3V4KhCiCDkaVbx9ewxe9R1E"); - dash_spv_ffi_client_watch_address(client, "XuQQkwA4FYkq2XERzMY2CiAZhJTEkgZ6uN"); - - // Start rescan from height 0 - int32_t result = dash_spv_ffi_client_rescan_blockchain(client, 0); - printf("Rescan from height 0 result: %d\n", result); - - // Start rescan from specific height - result = dash_spv_ffi_client_rescan_blockchain(client, 100000); - printf("Rescan from height 100000 result: %d\n", result); - - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - - TEST_SUCCESS("test_rescan"); -} - -// Main test runner -int main() { - printf("Running Dash SPV FFI Integration C Tests\n"); - printf("========================================\n\n"); - - test_full_workflow(); - test_persistence(); - test_transaction_handling(); - test_rescan(); - - printf("\n========================================\n"); - printf("All integration tests completed!\n"); - - return 0; -} diff --git a/dash-spv-ffi/tests/integration/test_full_workflow.rs b/dash-spv-ffi/tests/integration/test_full_workflow.rs index cf0ceb09c..4dbc3c27f 100644 --- a/dash-spv-ffi/tests/integration/test_full_workflow.rs +++ b/dash-spv-ffi/tests/integration/test_full_workflow.rs @@ -90,13 +90,6 @@ mod tests { } } - let callbacks = FFICallbacks { - on_progress: Some(on_sync_progress), - on_completion: Some(on_sync_complete), - on_data: None, - user_data: &ctx as *const _ as *mut c_void, - }; - // Start the client let result = dash_spv_ffi_client_start(ctx.client); @@ -152,52 +145,6 @@ mod tests { assert_eq!(result, FFIErrorCode::Success as i32); } - // Set up event callbacks - let events = ctx.events.clone(); - - extern "C" fn on_block(height: u32, hash: *const c_char, user_data: *mut c_void) { - let ctx = unsafe { &*(user_data as *const IntegrationTestContext) }; - let hash_str = if hash.is_null() { - "null".to_string() - } else { - unsafe { CStr::from_ptr(hash).to_str().unwrap().to_string() } - }; - ctx.events.lock().unwrap().push(format!("New block at height {}: {}", height, hash_str)); - } - - extern "C" fn on_transaction(txid: *const c_char, confirmed: bool, user_data: *mut c_void) { - let ctx = unsafe { &*(user_data as *const IntegrationTestContext) }; - let txid_str = if txid.is_null() { - "null".to_string() - } else { - unsafe { CStr::from_ptr(txid).to_str().unwrap().to_string() } - }; - ctx.events.lock().unwrap().push( - format!("Transaction {}: confirmed={}", txid_str, confirmed) - ); - } - - extern "C" fn on_balance(confirmed: u64, unconfirmed: u64, user_data: *mut c_void) { - let ctx = unsafe { &*(user_data as *const IntegrationTestContext) }; - ctx.events.lock().unwrap().push( - format!("Balance update: confirmed={}, unconfirmed={}", confirmed, unconfirmed) - ); - } - - let event_callbacks = FFIEventCallbacks { - on_block: Some(on_block), - on_transaction: Some(on_transaction), - on_balance_update: Some(on_balance), - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_compact_filter_matched: None, - on_wallet_transaction: None, - user_data: &ctx as *const _ as *mut c_void, - }; - - dash_spv_ffi_client_set_event_callbacks(ctx.client, event_callbacks); - // Start monitoring dash_spv_ffi_client_start(ctx.client); @@ -226,13 +173,6 @@ mod tests { dash_spv_ffi_client_stop(ctx.client); - // Check events - let events_vec = ctx.events.lock().unwrap(); - println!("Wallet monitoring events: {} total", events_vec.len()); - for event in events_vec.iter().take(10) { - println!(" {}", event); - } - 
ctx.cleanup(); } } @@ -251,37 +191,13 @@ mod tests { let test_tx_hex = "01000000000100000000000000001976a914000000000000000000000000000000000000000088ac00000000"; let c_tx = CString::new(test_tx_hex).unwrap(); - // Set up broadcast tracking - let broadcast_result = Arc::new(Mutex::new(None)); - let result_clone = broadcast_result.clone(); - - extern "C" fn on_broadcast_complete(success: bool, error: *const c_char, user_data: *mut c_void) { - let result = unsafe { &*(user_data as *const Arc>>) }; - let error_str = if error.is_null() { - String::new() - } else { - unsafe { CStr::from_ptr(error).to_str().unwrap().to_string() } - }; - *result.lock().unwrap() = Some((success, error_str)); - } - - let callbacks = FFICallbacks { - on_progress: None, - on_completion: Some(on_broadcast_complete), - on_data: None, - user_data: &result_clone as *const _ as *mut c_void, - }; - // Broadcast transaction let result = dash_spv_ffi_client_broadcast_transaction(ctx.client, c_tx.as_ptr()); // In a real test, we'd wait for the broadcast result thread::sleep(Duration::from_secs(2)); - // Check result - if let Some((success, error)) = &*broadcast_result.lock().unwrap() { - println!("Broadcast result: success={}, error={}", success, error); - } + println!("Broadcast result: {}", result); dash_spv_ffi_client_stop(ctx.client); ctx.cleanup(); diff --git a/dash-spv-ffi/tests/test_event_callbacks.rs b/dash-spv-ffi/tests/test_event_callbacks.rs deleted file mode 100644 index b189d0c65..000000000 --- a/dash-spv-ffi/tests/test_event_callbacks.rs +++ /dev/null @@ -1,512 +0,0 @@ -use dash_spv_ffi::callbacks::FFIEventCallbacks; -use dash_spv_ffi::*; -use key_wallet_ffi::FFINetwork; -use serial_test::serial; -use std::ffi::{c_char, c_void, CStr, CString}; -use std::sync::atomic::{AtomicBool, AtomicU32, AtomicU64, Ordering}; -use std::sync::Arc; -use std::thread; -use std::time::Duration; -use tempfile::TempDir; - -// Test data tracking -struct TestEventData { - block_received: AtomicBool, - block_height: AtomicU32, - transaction_received: AtomicBool, - balance_updated: AtomicBool, - confirmed_balance: AtomicU64, - unconfirmed_balance: AtomicU64, - compact_filter_matched: AtomicBool, - compact_filter_block_hash: std::sync::Mutex, - compact_filter_scripts: std::sync::Mutex, - wallet_transaction_received: AtomicBool, - wallet_transaction_wallet_id: std::sync::Mutex, - wallet_transaction_account_index: AtomicU32, - wallet_transaction_txid: std::sync::Mutex, -} - -impl TestEventData { - fn new() -> Arc { - Arc::new(Self { - block_received: AtomicBool::new(false), - block_height: AtomicU32::new(0), - transaction_received: AtomicBool::new(false), - balance_updated: AtomicBool::new(false), - confirmed_balance: AtomicU64::new(0), - unconfirmed_balance: AtomicU64::new(0), - compact_filter_matched: AtomicBool::new(false), - compact_filter_block_hash: std::sync::Mutex::new(String::new()), - compact_filter_scripts: std::sync::Mutex::new(String::new()), - wallet_transaction_received: AtomicBool::new(false), - wallet_transaction_wallet_id: std::sync::Mutex::new(String::new()), - wallet_transaction_account_index: AtomicU32::new(0), - wallet_transaction_txid: std::sync::Mutex::new(String::new()), - }) - } -} - -extern "C" fn test_block_callback(height: u32, _hash: *const [u8; 32], user_data: *mut c_void) { - println!("Test block callback called: height={}", height); - let data = unsafe { &*(user_data as *const TestEventData) }; - data.block_received.store(true, Ordering::SeqCst); - data.block_height.store(height, Ordering::SeqCst); -} - 
-extern "C" fn test_transaction_callback( - _txid: *const [u8; 32], - _confirmed: bool, - _amount: i64, - _addresses: *const c_char, - _block_height: u32, - user_data: *mut c_void, -) { - println!("Test transaction callback called"); - let data = unsafe { &*(user_data as *const TestEventData) }; - data.transaction_received.store(true, Ordering::SeqCst); -} - -extern "C" fn test_compact_filter_matched_callback( - block_hash: *const [u8; 32], - matched_scripts: *const c_char, - wallet_id: *const c_char, - user_data: *mut c_void, -) { - println!("Test compact filter matched callback called"); - let data = unsafe { &*(user_data as *const TestEventData) }; - - // Convert block hash to hex string - let hash_bytes = unsafe { &*block_hash }; - let hash_hex = hex::encode(hash_bytes); - - // Convert matched scripts to string - let scripts_str = if matched_scripts.is_null() { - String::new() - } else { - unsafe { CStr::from_ptr(matched_scripts).to_string_lossy().into_owned() } - }; - - // Convert wallet ID to string - let _wallet_id_str = if wallet_id.is_null() { - String::new() - } else { - unsafe { CStr::from_ptr(wallet_id).to_string_lossy().into_owned() } - }; - - *data.compact_filter_block_hash.lock().unwrap() = hash_hex; - *data.compact_filter_scripts.lock().unwrap() = scripts_str; - data.compact_filter_matched.store(true, Ordering::SeqCst); -} - -extern "C" fn test_wallet_transaction_callback( - wallet_id: *const c_char, - account_index: u32, - txid: *const [u8; 32], - confirmed: bool, - amount: i64, - _addresses: *const c_char, - _block_height: u32, - is_ours: bool, - user_data: *mut c_void, -) { - println!("Test wallet transaction callback called: wallet={}, account={}, confirmed={}, amount={}, is_ours={}", - unsafe { CStr::from_ptr(wallet_id).to_string_lossy() }, account_index, confirmed, amount, is_ours); - let data = unsafe { &*(user_data as *const TestEventData) }; - - // Convert wallet ID to string - let wallet_id_str = unsafe { CStr::from_ptr(wallet_id).to_string_lossy().into_owned() }; - - // Convert txid to hex string - let txid_bytes = unsafe { &*txid }; - let txid_hex = hex::encode(txid_bytes); - - *data.wallet_transaction_wallet_id.lock().unwrap() = wallet_id_str; - data.wallet_transaction_account_index.store(account_index, Ordering::SeqCst); - *data.wallet_transaction_txid.lock().unwrap() = txid_hex; - data.wallet_transaction_received.store(true, Ordering::SeqCst); -} - -extern "C" fn test_balance_callback(confirmed: u64, unconfirmed: u64, user_data: *mut c_void) { - println!("Test balance callback called: confirmed={}, unconfirmed={}", confirmed, unconfirmed); - let data = unsafe { &*(user_data as *const TestEventData) }; - data.balance_updated.store(true, Ordering::SeqCst); - data.confirmed_balance.store(confirmed, Ordering::SeqCst); - data.unconfirmed_balance.store(unconfirmed, Ordering::SeqCst); -} - -#[test] -fn test_event_callbacks_setup() { - // Initialize logging - unsafe { - dash_spv_ffi_init_logging(c"debug".as_ptr(), true, std::ptr::null(), 0); - } - - // Create test data - let test_data = TestEventData::new(); - let user_data = Arc::as_ptr(&test_data) as *mut c_void; - - // Create temp directory for test data - let temp_dir = TempDir::new().unwrap(); - - unsafe { - // Create config - let config = dash_spv_ffi_config_new(FFINetwork::Testnet); - assert!(!config.is_null()); - - // Set data directory to temp directory - let path = CString::new(temp_dir.path().to_str().unwrap()).unwrap(); - dash_spv_ffi_config_set_data_dir(config, path.as_ptr()); - - // Create client - let 
client = dash_spv_ffi_client_new(config); - assert!(!client.is_null()); - - // Set event callbacks before starting - let callbacks = FFIEventCallbacks { - on_block: Some(test_block_callback), - on_transaction: Some(test_transaction_callback), - on_balance_update: Some(test_balance_callback), - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_compact_filter_matched: None, - on_wallet_transaction: None, - user_data, - }; - - let result = dash_spv_ffi_client_set_event_callbacks(client, callbacks); - assert_eq!(result, 0, "Failed to set event callbacks"); - - // Start client - let start_result = dash_spv_ffi_client_start(client); - assert_eq!(start_result, 0, "Failed to start client"); - - println!("Client started, waiting for events..."); - - // Wait a bit for events to be processed - thread::sleep(Duration::from_secs(5)); - - // Check if we received any events - if test_data.block_received.load(Ordering::SeqCst) { - let height = test_data.block_height.load(Ordering::SeqCst); - println!("✅ Block event received! Height: {}", height); - } else { - println!("⚠️ No block events received"); - } - - if test_data.transaction_received.load(Ordering::SeqCst) { - println!("✅ Transaction event received!"); - } else { - println!("⚠️ No transaction events received"); - } - - if test_data.balance_updated.load(Ordering::SeqCst) { - let confirmed = test_data.confirmed_balance.load(Ordering::SeqCst); - let unconfirmed = test_data.unconfirmed_balance.load(Ordering::SeqCst); - println!( - "✅ Balance event received! Confirmed: {}, Unconfirmed: {}", - confirmed, unconfirmed - ); - } else { - println!("⚠️ No balance events received"); - } - - // Stop and cleanup - let stop_result = dash_spv_ffi_client_stop(client); - assert_eq!(stop_result, 0, "Failed to stop client"); - - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - } - - // The test passes if we set up callbacks successfully - // Events may or may not fire depending on network conditions - println!("Test completed - callbacks were set up successfully"); -} - -#[test] -#[serial] -fn test_enhanced_event_callbacks() { - unsafe { - dash_spv_ffi_init_logging(c"info".as_ptr(), true, std::ptr::null(), 0); - - // Create test data - let event_data = TestEventData::new(); - - // Create config - let config = dash_spv_ffi_config_new(FFINetwork::Regtest); - assert!(!config.is_null()); - - // Set data directory - let temp_dir = TempDir::new().unwrap(); - let path = CString::new(temp_dir.path().to_str().unwrap()).unwrap(); - dash_spv_ffi_config_set_data_dir(config, path.as_ptr()); - - // Create client - let client = dash_spv_ffi_client_new(config); - assert!(!client.is_null()); - - // Set up enhanced event callbacks - let event_callbacks = FFIEventCallbacks { - on_block: Some(test_block_callback), - on_transaction: Some(test_transaction_callback), - on_balance_update: Some(test_balance_callback), - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_compact_filter_matched: Some(test_compact_filter_matched_callback), - on_wallet_transaction: Some(test_wallet_transaction_callback), - user_data: Arc::as_ptr(&event_data) as *mut c_void, - }; - - let set_result = dash_spv_ffi_client_set_event_callbacks(client, event_callbacks); - assert_eq!( - set_result, - FFIErrorCode::Success as i32, - "Failed to set enhanced event callbacks" - ); - - // Note: Wallet-specific tests have been moved to key-wallet-ffi - // 
The wallet functionality is no longer part of dash-spv-ffi - // dash-spv-ffi now focuses purely on SPV network operations - println!("⚠️ Wallet tests have been moved to key-wallet-ffi"); - - // Clean up - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - - println!("✅ Enhanced event callbacks test completed successfully"); - } -} - -#[test] -#[serial] -fn test_drain_events_integration() { - unsafe { - println!("Testing drain_events integration with event callbacks..."); - - let event_data = TestEventData::new(); - - // Create config - let config = dash_spv_ffi_config_new(FFINetwork::Regtest); - assert!(!config.is_null()); - - // Set data directory - let temp_dir = TempDir::new().unwrap(); - let path = CString::new(temp_dir.path().to_str().unwrap()).unwrap(); - dash_spv_ffi_config_set_data_dir(config, path.as_ptr()); - - // Create client - let client = dash_spv_ffi_client_new(config); - assert!(!client.is_null()); - - // Set up all event callbacks using the unified API - let user_data = Arc::as_ptr(&event_data) as *mut c_void; - let callbacks = FFIEventCallbacks { - on_balance_update: Some(test_balance_callback), - on_transaction: Some(test_transaction_callback), - on_block: Some(test_block_callback), - on_compact_filter_matched: Some(test_compact_filter_matched_callback), - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_wallet_transaction: None, - user_data, - }; - dash_spv_ffi_client_set_event_callbacks(client, callbacks); - - // Test drain_events with no pending events - let result = dash_spv_ffi_client_drain_events(client); - assert_eq!(result, FFIErrorCode::Success as i32); - - // Verify no events were processed (callbacks not called) - assert!(!event_data.block_received.load(Ordering::SeqCst)); - assert!(!event_data.transaction_received.load(Ordering::SeqCst)); - assert!(!event_data.balance_updated.load(Ordering::SeqCst)); - assert!(!event_data.compact_filter_matched.load(Ordering::SeqCst)); - - // Test multiple drain calls - for _ in 0..10 { - let result = dash_spv_ffi_client_drain_events(client); - assert_eq!(result, FFIErrorCode::Success as i32); - } - - // State should remain unchanged - assert!(!event_data.block_received.load(Ordering::SeqCst)); - assert!(!event_data.transaction_received.load(Ordering::SeqCst)); - assert!(!event_data.balance_updated.load(Ordering::SeqCst)); - - // Clean up - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - - println!("✅ drain_events integration test completed successfully"); - } -} - -#[test] -#[serial] -fn test_drain_events_concurrent_with_callbacks() { - unsafe { - println!("Testing drain_events concurrent access with callback setup..."); - - let event_data = TestEventData::new(); - - // Create config and client - let config = dash_spv_ffi_config_new(FFINetwork::Regtest); - assert!(!config.is_null()); - - let temp_dir = TempDir::new().unwrap(); - let path = CString::new(temp_dir.path().to_str().unwrap()).unwrap(); - dash_spv_ffi_config_set_data_dir(config, path.as_ptr()); - - let client = dash_spv_ffi_client_new(config); - assert!(!client.is_null()); - - // Set up callbacks while draining events concurrently - let user_data = Arc::as_ptr(&event_data) as *mut c_void; - - // Set up callbacks and drain events - let callbacks = FFIEventCallbacks { - on_balance_update: Some(test_balance_callback), - on_transaction: Some(test_transaction_callback), - on_block: Some(test_block_callback), - on_compact_filter_matched: 
None, - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_wallet_transaction: None, - user_data, - }; - dash_spv_ffi_client_set_event_callbacks(client, callbacks); - - let result = dash_spv_ffi_client_drain_events(client); - assert_eq!(result, FFIErrorCode::Success as i32); - - // Test concurrent draining from multiple threads - let client_ptr = client as usize; - let handles: Vec<_> = (0..3) - .map(|thread_id| { - thread::spawn(move || { - let client = client_ptr as *mut FFIDashSpvClient; - for i in 0..20 { - let result = dash_spv_ffi_client_drain_events(client); - assert_eq!(result, FFIErrorCode::Success as i32); - - // Small delay to allow interleaving - if i % 5 == 0 { - thread::sleep(Duration::from_millis(1)); - } - } - println!("Thread {} completed drain operations", thread_id); - }) - }) - .collect(); - - // Wait for all threads - for handle in handles { - handle.join().unwrap(); - } - - // Final drain to ensure everything is cleaned up - let result = dash_spv_ffi_client_drain_events(client); - assert_eq!(result, FFIErrorCode::Success as i32); - - // Clean up - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - - println!("✅ Concurrent drain_events test completed successfully"); - } -} - -#[test] -#[serial] -fn test_drain_events_callback_lifecycle() { - unsafe { - println!("Testing drain_events through callback lifecycle..."); - - let event_data = TestEventData::new(); - - let config = dash_spv_ffi_config_new(FFINetwork::Regtest); - assert!(!config.is_null()); - - let temp_dir = TempDir::new().unwrap(); - let path = CString::new(temp_dir.path().to_str().unwrap()).unwrap(); - dash_spv_ffi_config_set_data_dir(config, path.as_ptr()); - - let client = dash_spv_ffi_client_new(config); - assert!(!client.is_null()); - - let user_data = Arc::as_ptr(&event_data) as *mut c_void; - - // Phase 1: No callbacks set - should work fine - let result = dash_spv_ffi_client_drain_events(client); - assert_eq!(result, FFIErrorCode::Success as i32); - - // Phase 2: Set some callbacks - let callbacks = FFIEventCallbacks { - on_balance_update: Some(test_balance_callback), - on_transaction: Some(test_transaction_callback), - on_block: None, - on_compact_filter_matched: None, - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_wallet_transaction: None, - user_data, - }; - dash_spv_ffi_client_set_event_callbacks(client, callbacks); - - // Drain with callbacks set - let result = dash_spv_ffi_client_drain_events(client); - assert_eq!(result, FFIErrorCode::Success as i32); - - // Phase 3: Clear callbacks by setting to None - let callbacks = FFIEventCallbacks { - on_balance_update: None, - on_transaction: None, - on_block: None, - on_compact_filter_matched: None, - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_wallet_transaction: None, - user_data: std::ptr::null_mut(), - }; - dash_spv_ffi_client_set_event_callbacks(client, callbacks); - - // Drain with cleared callbacks - let result = dash_spv_ffi_client_drain_events(client); - assert_eq!(result, FFIErrorCode::Success as i32); - - // Phase 4: Re-set callbacks with different functions - let callbacks = FFIEventCallbacks { - on_balance_update: None, - on_transaction: None, - on_block: Some(test_block_callback), - on_compact_filter_matched: None, - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: 
None, - on_mempool_transaction_removed: None, - on_wallet_transaction: None, - user_data, - }; - dash_spv_ffi_client_set_event_callbacks(client, callbacks); - - // Final drain - let result = dash_spv_ffi_client_drain_events(client); - assert_eq!(result, FFIErrorCode::Success as i32); - - // Verify no unexpected events were triggered - assert!(!event_data.balance_updated.load(Ordering::SeqCst)); - assert!(!event_data.transaction_received.load(Ordering::SeqCst)); - assert!(!event_data.block_received.load(Ordering::SeqCst)); - - // Clean up - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - - println!("✅ Callback lifecycle drain_events test completed successfully"); - } -} diff --git a/dash-spv-ffi/tests/test_types.rs b/dash-spv-ffi/tests/test_types.rs index ab1c7c5fe..efac13ecc 100644 --- a/dash-spv-ffi/tests/test_types.rs +++ b/dash-spv-ffi/tests/test_types.rs @@ -1,5 +1,10 @@ #[cfg(test)] mod tests { + use dash_spv::sync::{ + BlockHeadersProgress, BlocksProgress, ChainLockProgress, FilterHeadersProgress, + FiltersProgress, InstantSendProgress, MasternodesProgress, SyncProgress, SyncState, + }; + use dash_spv::SyncProgress as LegacySyncProgress; use dash_spv_ffi::*; use key_wallet_ffi::FFINetwork; @@ -40,8 +45,8 @@ mod tests { } #[test] - fn test_sync_progress_conversion() { - let progress = dash_spv::SyncProgress { + fn test_legacy_sync_progress_conversion() { + let progress = LegacySyncProgress { header_height: 100, filter_header_height: 90, masternode_height: 80, @@ -53,7 +58,7 @@ mod tests { last_update: std::time::SystemTime::now(), }; - let ffi_progress = FFISyncProgress::from(progress); + let ffi_progress = FFILegacySyncProgress::from(progress); assert_eq!(ffi_progress.header_height, 100); assert_eq!(ffi_progress.filter_header_height, 90); @@ -62,4 +67,158 @@ mod tests { assert_eq!(ffi_progress.filters_downloaded, 50); assert_eq!(ffi_progress.last_synced_filter_height, 45); } + + #[test] + fn test_sync_progress_conversion() { + let mut progress = SyncProgress::default(); + + let mut headers = BlockHeadersProgress::default(); + headers.set_state(SyncState::Syncing); + headers.update_current_height(100); + headers.update_target_height(200); + headers.add_processed(20); + headers.update_buffered(5); + progress.update_headers(headers); + + let mut filter_headers = FilterHeadersProgress::default(); + filter_headers.set_state(SyncState::WaitingForConnections); + filter_headers.update_current_height(150); + filter_headers.update_target_height(200); + filter_headers.update_block_header_tip_height(180); + filter_headers.add_processed(30); + progress.update_filter_headers(filter_headers); + + let mut filters = FiltersProgress::default(); + filters.set_state(SyncState::WaitForEvents); + filters.update_current_height(120); + filters.update_target_height(200); + filters.update_filter_header_tip_height(150); + filters.add_downloaded(40); + filters.add_processed(35); + filters.add_matched(10); + progress.update_filters(filters); + + let mut blocks = BlocksProgress::default(); + blocks.set_state(SyncState::Syncing); + blocks.update_last_processed(400); + blocks.add_requested(50); + blocks.add_from_storage(20); + blocks.add_downloaded(15); + blocks.add_processed(12); + blocks.add_relevant(8); + blocks.add_transactions(25); + progress.update_blocks(blocks); + + let mut masternodes = MasternodesProgress::default(); + masternodes.set_state(SyncState::Synced); + masternodes.update_current_height(500); + masternodes.update_target_height(550); + 
masternodes.update_block_header_tip_height(560); + masternodes.add_diffs_processed(3); + progress.update_masternodes(masternodes); + + let mut chainlocks = ChainLockProgress::default(); + chainlocks.set_state(SyncState::Error); + chainlocks.update_best_validated_height(600); + chainlocks.add_valid(10); + chainlocks.add_invalid(2); + progress.update_chainlocks(chainlocks); + + let mut instantsend = InstantSendProgress::default(); + instantsend.set_state(SyncState::Initializing); + instantsend.update_pending(700); + instantsend.add_valid(200); + instantsend.add_invalid(15); + progress.update_instantsend(instantsend); + + let ffi_progress = FFISyncProgress::from(progress); + + assert_eq!(ffi_progress.state, FFISyncState::Syncing); + assert_eq!(ffi_progress.percentage, 0.625); + + // Verify headers progress + assert!(!ffi_progress.headers.is_null()); + unsafe { + let headers = &*ffi_progress.headers; + assert_eq!(headers.state, FFISyncState::Syncing); + assert_eq!(headers.current_height, 100); + assert_eq!(headers.target_height, 200); + assert_eq!(headers.processed, 20); + assert_eq!(headers.buffered, 5); + } + + // Verify filter_headers progress + assert!(!ffi_progress.filter_headers.is_null()); + unsafe { + let filter_headers = &*ffi_progress.filter_headers; + assert_eq!(filter_headers.state, FFISyncState::WaitingForConnections); + assert_eq!(filter_headers.current_height, 150); + assert_eq!(filter_headers.target_height, 200); + assert_eq!(filter_headers.block_header_tip_height, 180); + assert_eq!(filter_headers.processed, 30); + } + + // Verify filters progress + assert!(!ffi_progress.filters.is_null()); + unsafe { + let filters = &*ffi_progress.filters; + assert_eq!(filters.state, FFISyncState::WaitForEvents); + assert_eq!(filters.current_height, 120); + assert_eq!(filters.target_height, 200); + assert_eq!(filters.filter_header_tip_height, 150); + assert_eq!(filters.downloaded, 40); + assert_eq!(filters.processed, 35); + assert_eq!(filters.matched, 10); + } + + // Verify blocks progress + assert!(!ffi_progress.blocks.is_null()); + unsafe { + let blocks = &*ffi_progress.blocks; + assert_eq!(blocks.state, FFISyncState::Syncing); + assert_eq!(blocks.last_processed, 400); + assert_eq!(blocks.requested, 50); + assert_eq!(blocks.from_storage, 20); + assert_eq!(blocks.downloaded, 15); + assert_eq!(blocks.processed, 12); + assert_eq!(blocks.relevant, 8); + assert_eq!(blocks.transactions, 25); + } + + // Verify masternodes progress + assert!(!ffi_progress.masternodes.is_null()); + unsafe { + let masternodes = &*ffi_progress.masternodes; + assert_eq!(masternodes.state, FFISyncState::Synced); + assert_eq!(masternodes.current_height, 500); + assert_eq!(masternodes.target_height, 550); + assert_eq!(masternodes.block_header_tip_height, 560); + assert_eq!(masternodes.diffs_processed, 3); + } + + // Verify chainlocks progress + assert!(!ffi_progress.chainlocks.is_null()); + unsafe { + let chainlocks = &*ffi_progress.chainlocks; + assert_eq!(chainlocks.state, FFISyncState::Error); + assert_eq!(chainlocks.best_validated_height, 600); + assert_eq!(chainlocks.valid, 10); + assert_eq!(chainlocks.invalid, 2); + } + + // Verify instantsend progress + assert!(!ffi_progress.instantsend.is_null()); + unsafe { + let instantsend = &*ffi_progress.instantsend; + assert_eq!(instantsend.state, FFISyncState::Initializing); + assert_eq!(instantsend.pending, 700); + assert_eq!(instantsend.valid, 200); + assert_eq!(instantsend.invalid, 15); + } + + // Cleanup all allocated memory + unsafe { + 
dash_spv_ffi_manager_sync_progress_destroy(Box::into_raw(Box::new(ffi_progress)));
+        }
+    }
 }
diff --git a/dash-spv-ffi/tests/unit/test_async_operations.rs b/dash-spv-ffi/tests/unit/test_async_operations.rs
index e97b02e10..b8ef6be8f 100644
--- a/dash-spv-ffi/tests/unit/test_async_operations.rs
+++ b/dash-spv-ffi/tests/unit/test_async_operations.rs
@@ -1,10 +1,9 @@
 #[cfg(test)]
 mod tests {
-    use crate::types::FFIDetailedSyncProgress;
     use crate::*;
     use key_wallet_ffi::FFINetwork;
     use serial_test::serial;
-    use std::ffi::{CStr, CString};
+    use std::ffi::CString;
     use std::os::raw::{c_char, c_void};
     use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
     use std::sync::{Arc, Barrier, Mutex};
@@ -12,53 +11,6 @@ mod tests {
     use std::time::{Duration, Instant};
     use tempfile::TempDir;
 
-    struct TestCallbackData {
-        progress_count: Arc<AtomicU32>,
-        completion_called: Arc<AtomicBool>,
-        last_progress: Arc<Mutex<f64>>,
-        error_message: Arc<Mutex<Option<String>>>,
-        data_received: Arc<Mutex<Vec<u8>>>,
-    }
-
-    extern "C" fn test_progress_callback(
-        progress: *const FFIDetailedSyncProgress,
-        user_data: *mut c_void,
-    ) {
-        let data = unsafe { &*(user_data as *const TestCallbackData) };
-        data.progress_count.fetch_add(1, Ordering::SeqCst);
-        if !progress.is_null() {
-            unsafe {
-                *data.last_progress.lock().unwrap() = (*progress).percentage;
-            }
-        }
-    }
-
-    extern "C" fn test_completion_callback(
-        success: bool,
-        error: *const c_char,
-        user_data: *mut c_void,
-    ) {
-        let data = unsafe { &*(user_data as *const TestCallbackData) };
-        data.completion_called.store(true, Ordering::SeqCst);
-
-        if !success && !error.is_null() {
-            unsafe {
-                let error_str = CStr::from_ptr(error).to_str().unwrap();
-                *data.error_message.lock().unwrap() = Some(error_str.to_string());
-            }
-        }
-    }
-
-    extern "C" fn test_data_callback(data_ptr: *const c_void, len: usize, user_data: *mut c_void) {
-        let data = unsafe { &*(user_data as *const TestCallbackData) };
-        if !data_ptr.is_null() && len > 0 {
-            unsafe {
-                let slice = std::slice::from_raw_parts(data_ptr as *const u8, len);
-                data.data_received.lock().unwrap().extend_from_slice(slice);
-            }
-        }
-    }
-
     fn create_test_client() -> (*mut FFIDashSpvClient, *mut FFIClientConfig, TempDir) {
         let temp_dir = TempDir::new().unwrap();
         unsafe {
@@ -75,141 +27,6 @@ mod tests {
         }
     }
 
-    #[test]
-    #[serial]
-    fn test_callback_with_null_functions() {
-        unsafe {
-            let (client, config, _temp_dir) = create_test_client();
-            assert!(!client.is_null());
-
-            // Instead, test that we can safely destroy a client with null callbacks
-            // The test is really about null pointer safety, not sync functionality
-            println!("Testing null callback safety without starting client");
-
-            // Just verify we can safely clean up without crashes
-            // This tests the null callback handling in destruction paths
-
-            dash_spv_ffi_client_destroy(client);
-            dash_spv_ffi_config_destroy(config);
-        }
-    }
-
-    #[test]
-    #[serial]
-    fn test_callback_with_null_user_data() {
-        unsafe {
-            let (client, config, _temp_dir) = create_test_client();
-            assert!(!client.is_null());
-
-            // Test null user_data handling in a different way
-            println!("Testing null user_data safety without starting client");
-
-            // We could test with get_sync_progress which shouldn't hang
-            let progress = dash_spv_ffi_client_get_sync_progress(client);
-            if !progress.is_null() {
-                dash_spv_ffi_sync_progress_destroy(progress);
-            }
-
-            dash_spv_ffi_client_destroy(client);
-            dash_spv_ffi_config_destroy(config);
-        }
-    }
-
-    #[test]
-    #[serial]
-    #[ignore] // Requires network connection
-    fn test_progress_callback_range() {
-        unsafe {
-            let
(client, config, _temp_dir) = create_test_client(); - assert!(!client.is_null()); - - let test_data = TestCallbackData { - progress_count: Arc::new(AtomicU32::new(0)), - completion_called: Arc::new(AtomicBool::new(false)), - last_progress: Arc::new(Mutex::new(0.0)), - error_message: Arc::new(Mutex::new(None)), - data_received: Arc::new(Mutex::new(Vec::new())), - }; - - dash_spv_ffi_client_sync_to_tip_with_progress( - client, - Some(test_progress_callback), - Some(test_completion_callback), - &test_data as *const _ as *mut c_void, - ); - - // Give time for callbacks - thread::sleep(Duration::from_millis(100)); - - // Check progress was in valid range - let last_progress = *test_data.last_progress.lock().unwrap(); - assert!((0.0..=100.0).contains(&last_progress)); - - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - } - } - - #[test] - #[serial] - #[ignore] // Requires network connection - fn test_completion_callback_error_handling() { - unsafe { - let (client, config, _temp_dir) = create_test_client(); - assert!(!client.is_null()); - - let test_data = TestCallbackData { - progress_count: Arc::new(AtomicU32::new(0)), - completion_called: Arc::new(AtomicBool::new(false)), - last_progress: Arc::new(Mutex::new(0.0)), - error_message: Arc::new(Mutex::new(None)), - data_received: Arc::new(Mutex::new(Vec::new())), - }; - - // Stop client first to ensure sync fails - dash_spv_ffi_client_stop(client); - - // Wait for completion - let start = Instant::now(); - while !test_data.completion_called.load(Ordering::SeqCst) - && start.elapsed() < Duration::from_secs(5) - { - thread::sleep(Duration::from_millis(10)); - } - - // Should have called completion - assert!(test_data.completion_called.load(Ordering::SeqCst)); - - dash_spv_ffi_client_destroy(client); - dash_spv_ffi_config_destroy(config); - } - } - - #[test] - #[serial] - fn test_data_callback_zero_length() { - let test_data = TestCallbackData { - progress_count: Arc::new(AtomicU32::new(0)), - completion_called: Arc::new(AtomicBool::new(false)), - last_progress: Arc::new(Mutex::new(0.0)), - error_message: Arc::new(Mutex::new(None)), - data_received: Arc::new(Mutex::new(Vec::new())), - }; - - // Test with zero length - test_data_callback(std::ptr::null(), 0, &test_data as *const _ as *mut c_void); - assert!(test_data.data_received.lock().unwrap().is_empty()); - - // Test with valid data - let data = vec![1u8, 2, 3, 4, 5]; - test_data_callback( - data.as_ptr() as *const c_void, - data.len(), - &test_data as *const _ as *mut c_void, - ); - assert_eq!(*test_data.data_received.lock().unwrap(), data); - } - #[test] #[serial] #[ignore] // Disabled due to unreliable behavior in test environments @@ -529,64 +346,61 @@ mod tests { #[test] #[serial] - fn test_event_callbacks() { + fn test_sync_event_callbacks() { unsafe { let (client, config, _temp_dir) = create_test_client(); assert!(!client.is_null()); - let block_called = Arc::new(AtomicBool::new(false)); - let tx_called = Arc::new(AtomicBool::new(false)); - let balance_called = Arc::new(AtomicBool::new(false)); + let sync_started = Arc::new(AtomicBool::new(false)); + let headers_stored = Arc::new(AtomicBool::new(false)); + let sync_complete = Arc::new(AtomicBool::new(false)); struct EventData { - block: Arc, - tx: Arc, - balance: Arc, + sync_started: Arc, + headers_stored: Arc, + sync_complete: Arc, } let event_data = EventData { - block: block_called.clone(), - tx: tx_called.clone(), - balance: balance_called.clone(), + sync_started: sync_started.clone(), + headers_stored: 
headers_stored.clone(), + sync_complete: sync_complete.clone(), }; - extern "C" fn on_block(_height: u32, hash: *const [u8; 32], user_data: *mut c_void) { + extern "C" fn on_sync_start(_manager_id: FFIManagerId, user_data: *mut c_void) { let data = unsafe { &*(user_data as *const EventData) }; - data.block.store(true, Ordering::SeqCst); - assert!(!hash.is_null()); + data.sync_started.store(true, Ordering::SeqCst); } - extern "C" fn on_tx( - txid: *const [u8; 32], - _confirmed: bool, - _amount: i64, - _addresses: *const c_char, - _block_height: u32, - user_data: *mut c_void, - ) { + extern "C" fn on_block_headers_stored(_tip_height: u32, user_data: *mut c_void) { let data = unsafe { &*(user_data as *const EventData) }; - data.tx.store(true, Ordering::SeqCst); - assert!(!txid.is_null()); + data.headers_stored.store(true, Ordering::SeqCst); } - extern "C" fn on_balance(_confirmed: u64, _unconfirmed: u64, user_data: *mut c_void) { + extern "C" fn on_sync_complete(_header_tip: u32, user_data: *mut c_void) { let data = unsafe { &*(user_data as *const EventData) }; - data.balance.store(true, Ordering::SeqCst); + data.sync_complete.store(true, Ordering::SeqCst); } - let event_callbacks = FFIEventCallbacks { - on_block: Some(on_block), - on_transaction: Some(on_tx), - on_balance_update: Some(on_balance), - on_mempool_transaction_added: None, - on_mempool_transaction_confirmed: None, - on_mempool_transaction_removed: None, - on_compact_filter_matched: None, - on_wallet_transaction: None, + let sync_callbacks = FFISyncEventCallbacks { + on_sync_start: Some(on_sync_start), + on_block_headers_stored: Some(on_block_headers_stored), + on_block_header_sync_complete: None, + on_filter_headers_stored: None, + on_filter_headers_sync_complete: None, + on_filters_stored: None, + on_filters_sync_complete: None, + on_blocks_needed: None, + on_block_processed: None, + on_masternode_state_updated: None, + on_chainlock_received: None, + on_instantlock_received: None, + on_manager_error: None, + on_sync_complete: Some(on_sync_complete), user_data: &event_data as *const _ as *mut c_void, }; - let result = dash_spv_ffi_client_set_event_callbacks(client, event_callbacks); + let result = dash_spv_ffi_client_set_sync_event_callbacks(client, sync_callbacks); assert_eq!(result, FFIErrorCode::Success as i32); dash_spv_ffi_client_destroy(client); diff --git a/dash-spv-ffi/tests/unit/test_client_lifecycle.rs b/dash-spv-ffi/tests/unit/test_client_lifecycle.rs index d8283848e..db7389bf8 100644 --- a/dash-spv-ffi/tests/unit/test_client_lifecycle.rs +++ b/dash-spv-ffi/tests/unit/test_client_lifecycle.rs @@ -192,20 +192,18 @@ mod tests { // Get initial state let progress1 = dash_spv_ffi_client_get_sync_progress(client); - // State should be consistent - if !progress1.is_null() { - let progress = &*progress1; - - // Basic consistency checks - assert!( - progress.header_height <= progress.filter_header_height - || progress.filter_header_height == 0 - ); - // headers_downloaded is u64, always >= 0 - - dash_spv_ffi_sync_progress_destroy(progress1); - } + let progress = &*progress1; + let headers = &*progress.headers; + let filter_headers = &*progress.filter_headers; + + // Basic consistency checks + assert!( + headers.current_height <= filter_headers.target_height + || filter_headers.current_height == 0 + ); + // headers_downloaded is u64, always >= 0 + dash_spv_ffi_sync_progress_destroy(progress1); dash_spv_ffi_client_destroy(client); dash_spv_ffi_config_destroy(config); } diff --git a/dash-spv-ffi/tests/unit/test_type_conversions.rs 
b/dash-spv-ffi/tests/unit/test_type_conversions.rs index 8d2b60ef6..12e655215 100644 --- a/dash-spv-ffi/tests/unit/test_type_conversions.rs +++ b/dash-spv-ffi/tests/unit/test_type_conversions.rs @@ -90,7 +90,7 @@ mod tests { last_update: std::time::SystemTime::now(), }; - let ffi_progress = FFISyncProgress::from(progress); + let ffi_progress = FFILegacySyncProgress::from(progress); assert_eq!(ffi_progress.header_height, u32::MAX); assert_eq!(ffi_progress.filter_header_height, u32::MAX); assert_eq!(ffi_progress.masternode_height, u32::MAX); diff --git a/dash-spv/ARCHITECTURE.md b/dash-spv/ARCHITECTURE.md index dc1eb5aa9..820a2a6e9 100644 --- a/dash-spv/ARCHITECTURE.md +++ b/dash-spv/ARCHITECTURE.md @@ -29,33 +29,33 @@ ### Current State: Production-Ready Structure ✅ **Code Organization: EXCELLENT (A+)** -- ✅ All major modules refactored into focused components -- ✅ sync/filters/: 10 modules (4,281 lines) -- ✅ sync/sequential/: 11 modules (4,785 lines) +- ✅ Parallel event-driven sync architecture with 7 independent managers +- ✅ SyncManager trait with standard event loop pattern +- ✅ SyncEvent broadcast channel for inter-manager communication - ✅ client/: 8 modules (2,895 lines) - ✅ storage/disk/: 7 modules (2,458 lines) - ✅ All files under 1,500 lines (most under 500) **Critical Remaining Work:** -- 🚨 **Security**: BLS signature validation (ChainLocks + InstantLocks) - 1-2 weeks effort +- 🚨 **Security**: BLS signature validation (ChainLocks + InstantLocks) ### Key Architectural Strengths **EXCELLENT DESIGN:** -- ✅ **Trait-based abstractions** (NetworkManager, StorageManager, WalletInterface) -- ✅ **Sequential sync manager** with clear phase transitions +- ✅ **Trait-based abstractions** (NetworkManager, StorageManager, WalletInterface, SyncManager) +- ✅ **Parallel sync managers** running in independent tokio tasks +- ✅ **Event-driven coordination** via typed SyncEvent broadcast channel +- ✅ **Topic-based message routing** filters network messages by type +- ✅ **Reactive progress aggregation** via watch channel streams - ✅ **Modular organization** with focused responsibilities -- ✅ **Comprehensive error types** with clear categorization - ✅ **External wallet integration** with clean interface boundaries -- ✅ **Lock ordering documented** to prevent deadlocks -- ✅ **Performance optimizations** (cached headers, segmented storage, flow control) -- ✅ **Strong test coverage** (242/243 tests passing) +- ✅ **Performance optimizations** (parallel sync, cached headers, segmented storage) **AREAS FOR IMPROVEMENT:** - ⚠️ **BLS validation** required for mainnet security - ⚠️ **Integration tests** could be more comprehensive - ⚠️ **Resource limits** not yet enforced (connections, bandwidth) -- ℹ️ **Type aliases** could improve ergonomics (optional - generic design is intentional and beneficial) +- ℹ️ See `TODO_SYNC_ISSUES.md` for tracked sync-related issues ### Statistics @@ -63,10 +63,9 @@ |----------|-------|-------| | Total Files | 110+ | Well-organized module structure | | Total Lines | ~40,000 | All files appropriately sized | +| Sync Managers | 7 | Block headers, filter headers, filters, blocks, masternodes, chainlock, instantsend | | Largest File | network/manager.rs | 1,322 lines - Acceptable complexity | | Module Count | 10+ | Well-separated concerns | -| Test Coverage | 242/243 passing | 99.6% pass rate | -| Major Modules Refactored | 4 | sync/filters/, sync/sequential/, client/, storage/disk/ | --- @@ -77,7 +76,7 @@ ``` ┌─────────────────────────────────────────────────────────────┐ │ 
DashSpvClient │ -│ (Main Orchestrator - 2,819 lines) │ +│ (Main Entry Point) │ └─────────────────────────────────────────────────────────────┘ │ │ │ ▼ ▼ ▼ @@ -87,44 +86,77 @@ └───────────┘ └───────────┘ └───────────┘ │ ▼ - ┌─────────────────────────────────────────┐ - │ SyncManager │ - │ - HeadersSync │ - │ - MasternodeSync │ - │ - FilterSync (4,027 lines - TOO BIG) │ - └─────────────────────────────────────────┘ + ┌─────────────────────────────────────────────────────────┐ + │ SyncCoordinator │ + │ - Spawns managers in parallel tokio tasks │ + │ - Aggregates progress reactively via watch channels │ + │ - Coordinates graceful shutdown │ + └─────────────────────────────────────────────────────────┘ │ ▼ - ┌──────────────┬──────────────┬──────────────┐ - │ Validation │ ChainLock │ Bloom │ - │ Manager │ Manager │ Manager │ - └──────────────┴──────────────┴──────────────┘ + ┌─────────────────────────────────────────────────────────┐ + │ Parallel Sync Managers (7) │ + ├──────────────┬──────────────┬──────────────┬────────────┤ + │ BlockHeaders │ FilterHeaders│ Filters │ Blocks │ + │ Manager │ Manager │ Manager │ Manager │ + ├──────────────┼──────────────┼──────────────┼────────────┤ + │ Masternodes │ ChainLock │ InstantSend │ │ + │ Manager │ Manager │ Manager │ │ + └──────────────┴──────────────┴──────────────┴────────────┘ + │ + ▼ + ┌─────────────────────────────────────────────────────────┐ + │ SyncEvent Broadcast Channel │ + │ Inter-manager communication via typed events │ + └─────────────────────────────────────────────────────────┘ ``` ### Data Flow ``` -Network Messages → MessageHandler → SyncManager - │ - ▼ - ┌─────────────────────┐ - │ Validation Manager │ - └─────────────────────┘ - │ - ▼ - ┌─────────────────────┐ - │ Storage Manager │ - └─────────────────────┘ - │ - ▼ - ┌─────────────────────┐ - │ ChainState Update │ - └─────────────────────┘ - │ - ▼ - ┌─────────────────────┐ - │ Event Emission │ - └─────────────────────┘ +┌──────────────────────────────────────────────────────────────────────────┐ +│ Network Layer │ +│ - Topic-based message routing to subscribed managers │ +│ - NetworkEvent broadcast for peer connection changes │ +└──────────────────────────────────────────────────────────────────────────┘ + │ │ + │ Messages │ NetworkEvents + ▼ ▼ +┌──────────────────────────────────────────────────────────────────────────┐ +│ Manager Event Loop (per manager) │ +│ tokio::select! 
{ │ +│ message = receiver.recv() => handle_message() │ +│ event = sync_events.recv() => handle_sync_event() │ +│ network = network_rx.recv()=> handle_network_event() │ +│ _ = tick_interval.tick() => tick() // timeouts, retries │ +│ } │ +└──────────────────────────────────────────────────────────────────────────┘ + │ + │ SyncEvents (broadcast) + ▼ +┌──────────────────────────────────────────────────────────────────────────┐ +│ Event Flow Between Managers │ +│ │ +│ BlockHeadersManager ──BlockHeadersStored──> FilterHeadersManager │ +│ ──BlockHeaderSyncComplete──> MasternodesManager │ +│ │ +│ FilterHeadersManager ──FilterHeadersStored──> FiltersManager │ +│ │ +│ FiltersManager ──BlocksNeeded──> BlocksManager │ +│ │ +│ BlocksManager ──BlockProcessed──> FiltersManager (for gap limit rescan) │ +│ │ +│ SyncCoordinator ──SyncComplete──> External listeners │ +└──────────────────────────────────────────────────────────────────────────┘ + │ + │ Progress (watch channels) + ▼ +┌──────────────────────────────────────────────────────────────────────────┐ +│ Progress Aggregation Task │ +│ - Merges progress from all manager watch channels │ +│ - Updates SyncProgress reactively when any manager changes │ +│ - Emits SyncComplete when all managers reach Synced state │ +└──────────────────────────────────────────────────────────────────────────┘ ``` --- @@ -956,192 +988,158 @@ storage/disk/ --- -### 7. SYNC MODULE (16 files, ~12,000 lines) 🚨 **NEEDS MAJOR REFACTORING** +### 7. SYNC MODULE - Parallel Event-Driven Architecture ✅ **PRODUCTION READY** #### Overview -The sync module coordinates all blockchain synchronization. This is the most complex part of the codebase. -#### `src/sync/mod.rs` (167 lines) ✅ GOOD +The sync module uses a parallel, event-driven architecture where 7 independent managers run concurrently in their own tokio tasks, communicating via a broadcast event channel. -**Purpose**: Module exports and common sync utilities. +#### Architecture Summary -**Analysis**: -- **GOOD**: Clean module organization +``` +SyncCoordinator +├── BlockHeadersManager - Downloads and validates block headers via checkpoints +├── FilterHeadersManager - Downloads BIP158 filter headers +├── FiltersManager - Downloads filters, matches against wallet +├── BlocksManager - Downloads matched blocks, processes through wallet +├── MasternodesManager - Synchronizes masternode list via QRInfo/MnListDiff +├── ChainLockManager - Receives and validates ChainLocks +└── InstantSendManager - Receives and validates InstantLocks +``` -#### `src/sync/sequential/` (Module - Refactored) ✅ **COMPLETE** +#### Core Components -**Purpose**: Sequential synchronization manager - coordinates all sync phases. 
+##### `src/sync/sync_coordinator.rs` - Parallel Orchestration -**REFACTORING STATUS**: Complete (2025-01-21) -- ✅ Converted from single 2,246-line file to 11 focused modules -- ✅ All 242 tests passing -- ✅ Production ready +The `SyncCoordinator` spawns each manager in its own tokio task for true parallel processing: -**Module Structure**: -``` -sync/sequential/ (4,785 lines total across 11 modules) -├── mod.rs (52 lines) - Module coordinator and re-exports -├── manager.rs (234 lines) - Core SyncManager struct and accessors -├── lifecycle.rs (225 lines) - Initialization, startup, and shutdown -├── phase_execution.rs (519 lines) - Phase execution, transitions, timeout handling -├── message_handlers.rs (808 lines) - Handlers for sync phase messages -├── post_sync.rs (530 lines) - Handlers for post-sync messages (after initial sync) -├── phases.rs (621 lines) - SyncPhase enum and phase-related types -├── progress.rs (369 lines) - Progress tracking utilities -├── recovery.rs (559 lines) - Recovery and error handling logic -├── request_control.rs (410 lines) - Request flow control -└── transitions.rs (458 lines) - Phase transition management +- **Task spawning**: Each manager runs independently via `JoinSet` +- **Progress aggregation**: Reactive progress updates via merged watch channel streams +- **Event bus**: Broadcast channel for inter-manager communication +- **Shutdown**: Graceful termination via `CancellationToken` + +##### `src/sync/sync_manager.rs` - Manager Trait + +The `SyncManager` trait defines the interface all managers implement: + +```rust +#[async_trait] +pub trait SyncManager: Send + Sync + Debug { + fn identifier(&self) -> ManagerIdentifier; + fn state(&self) -> SyncState; + fn wanted_message_types(&self) -> &'static [MessageType]; + + async fn initialize(&mut self) -> SyncResult<()>; + async fn start_sync(&mut self, requests: &RequestSender) -> SyncResult>; + async fn handle_message(&mut self, msg: Message, requests: &RequestSender) -> SyncResult>; + async fn handle_sync_event(&mut self, event: &SyncEvent, requests: &RequestSender) -> SyncResult>; + async fn tick(&mut self, requests: &RequestSender) -> SyncResult>; + fn progress(&self) -> SyncManagerProgress; + + // Default implementation provides the main event loop + async fn run(mut self, context: SyncManagerTaskContext) -> SyncResult; +} ``` -**What it does**: -- Coordinates header sync (via `HeaderSyncManagerWithReorg`) -- Coordinates masternode list sync (via `MasternodeSyncManager`) -- Coordinates filter sync (via `FilterSyncManager`) -- Manages sync state machine through SyncPhase enum -- Handles phase transitions with validation -- Implements error recovery and retry logic -- Tracks progress across all sync phases -- Routes network messages to appropriate handlers -- Handles post-sync maintenance (new blocks, filters, etc.) 
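+
+Conceptually, the coordinator consumes each manager and drives its `run()` future inside a `JoinSet`, cancelling every task through a shared `CancellationToken` on shutdown. The following is a minimal sketch of that spawning pattern, not the crate's actual code: `Ctx` and `run_manager` are simplified stand-ins for the real context and manager types.
+
+```rust
+use tokio::task::JoinSet;
+use tokio_util::sync::CancellationToken;
+
+// Simplified stand-in for the real manager task context.
+struct Ctx {
+    shutdown: CancellationToken,
+}
+
+async fn run_manager(name: &'static str, ctx: Ctx) -> &'static str {
+    // A real manager loops in tokio::select! over network messages,
+    // sync events, network events, and a tick interval until cancelled.
+    ctx.shutdown.cancelled().await;
+    name
+}
+
+async fn spawn_all(shutdown: CancellationToken) {
+    let mut tasks: JoinSet<&'static str> = JoinSet::new();
+    for name in ["block_headers", "filter_headers", "filters"] {
+        let ctx = Ctx { shutdown: shutdown.clone() };
+        tasks.spawn(run_manager(name, ctx));
+    }
+    // Graceful shutdown: cancel every task, then drain the JoinSet.
+    shutdown.cancel();
+    while let Some(joined) = tasks.join_next().await {
+        if let Ok(name) = joined {
+            eprintln!("{name} manager stopped");
+        }
+    }
+}
+```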
+The trait provides a default `run()` implementation with the standard event loop pattern: +- Process incoming network messages +- Handle sync events from other managers +- React to network events (peer changes) +- Periodic tick for timeouts and retries -**Complex Types Used**: -- **Generic constraints**: `` -- **State machine**: SyncPhase enum with strict sequential transitions -- **Shared state**: Arc> for wallet and stats -- **Sub-managers**: Delegates to specialized sync managers - -**Strengths**: -- ✅ **EXCELLENT**: Clean module separation by responsibility -- ✅ **EXCELLENT**: Sequential approach simplifies reasoning -- ✅ **GOOD**: Clear phase boundaries and transitions -- ✅ **GOOD**: Comprehensive error recovery -- ✅ **GOOD**: All phases well-documented -- ✅ **GOOD**: Lock ordering documented to prevent deadlocks - -#### `src/sync/filters/` (Module - Phase 1 Complete) ✅ **REFACTORED** - -**Purpose**: Compact filter synchronization logic. - -**REFACTORING STATUS**: Phase 1 Complete (2025-01-XX) -- ✅ Converted from single 4,060-line file to module directory -- ✅ Extracted types and constants to `types.rs` (89 lines) -- ✅ Main logic in `manager_full.rs` (4,027 lines - awaiting Phase 2) -- ✅ All 243 tests passing +##### `src/sync/events.rs` - Event Types -**Previous state**: Single file with 4,027 lines - UNACCEPTABLE -**Current state**: Module structure established - Phase 2 extraction needed +`SyncEvent` enables loose coupling between managers: -**What it does**: -- Filter header sync (CFHeaders) -- Compact filter download (CFilters) -- Filter matching against wallet addresses -- Gap detection and recovery -- Request batching and flow control -- Timeout and retry logic -- Progress tracking and statistics -- Peer selection and routing - -**Phase 2 Accomplishment (2025-01-21)**: -- ✅ All 8 modules successfully extracted -- ✅ `manager.rs` - Core coordinator (342 lines) -- ✅ `headers.rs` - CFHeaders sync (1,345 lines) -- ✅ `download.rs` - CFilter download (659 lines) -- ✅ `matching.rs` - Filter matching (454 lines) -- ✅ `gaps.rs` - Gap detection (490 lines) -- ✅ `retry.rs` - Retry logic (381 lines) -- ✅ `stats.rs` - Statistics (234 lines) -- ✅ `requests.rs` - Request management (248 lines) -- ✅ `types.rs` - Type definitions (86 lines) -- ✅ `mod.rs` - Module coordinator (42 lines) -- ✅ `manager_full.rs` deleted -- ✅ All 243 tests passing -- ✅ Compilation successful +| Event | Emitter | Consumers | +|-------|---------|-----------| +| `BlockHeadersStored` | BlockHeadersManager | FilterHeadersManager, MasternodesManager | +| `BlockHeaderSyncComplete` | BlockHeadersManager | MasternodesManager | +| `FilterHeadersStored` | FilterHeadersManager | FiltersManager | +| `FiltersSyncComplete` | FiltersManager | BlocksManager | +| `BlocksNeeded` | FiltersManager | BlocksManager | +| `BlockProcessed` | BlocksManager | FiltersManager (gap limit rescan) | +| `ChainLockReceived` | ChainLockManager | External listeners | +| `InstantLockReceived` | InstantSendManager | External listeners | +| `SyncComplete` | Coordinator | External listeners | + +##### `src/sync/progress.rs` - Aggregate Progress + +`SyncProgress` aggregates progress from all managers with type-safe accessors for each manager's progress type. 
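+
+As a hedged sketch of the reactive aggregation idea (the types here are simplified stand-ins; the real `SyncProgress` carries one typed progress struct per manager rather than a bare percentage), merging every manager's watch channel into a single stream lets the aggregate recompute whenever any manager publishes:
+
+```rust
+use futures::stream::{select_all, StreamExt};
+use tokio::sync::watch;
+use tokio_stream::wrappers::WatchStream;
+
+#[derive(Clone, Default)]
+struct ManagerProgress {
+    percent: f64, // stand-in for the real per-manager progress fields
+}
+
+// Recompute the aggregate whenever any manager publishes an update.
+async fn aggregate(receivers: Vec<watch::Receiver<ManagerProgress>>) {
+    let handles = receivers.clone();
+    let mut merged = select_all(receivers.into_iter().map(WatchStream::new));
+    while merged.next().await.is_some() {
+        let sum: f64 = handles.iter().map(|rx| rx.borrow().percent).sum();
+        let overall = sum / handles.len().max(1) as f64;
+        eprintln!("overall sync progress: {overall:.1}%");
+    }
+}
+```
+
+This is the pattern that the `tokio-stream` (with its `sync` feature) and `futures` additions to `Cargo.toml` further down in this diff support.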
+
+#### Manager Modules
+
+Each manager follows a consistent structure:
 
-**Final Module Structure:**
 ```
-sync/filters/
-├── mod.rs (42 lines) - Module coordinator
-├── types.rs (86 lines) - Type definitions
-├── manager.rs (342 lines) - Core coordinator
-├── stats.rs (234 lines) - Statistics tracking
-├── retry.rs (381 lines) - Timeout/retry logic
-├── requests.rs (248 lines) - Request queues
-├── gaps.rs (490 lines) - Gap detection
-├── headers.rs (1,345 lines) - CFHeaders sync
-├── download.rs (659 lines) - CFilter download
-└── matching.rs (454 lines) - Filter matching
+sync/<manager>/
+├── mod.rs - Module exports
+├── manager.rs - Manager struct and core logic
+├── sync_manager.rs - SyncManager trait implementation
+├── pipeline.rs - Download pipeline (if applicable)
+└── progress.rs - Progress tracking types
 ```
 
-**Analysis**:
-- ✅ **COMPLETE**: All refactoring objectives met
-- ✅ **MAINTAINABLE**: Clear module boundaries and responsibilities
-- ✅ **TESTABLE**: Each module can be tested independently
-- ✅ **DOCUMENTED**: Each module has focused documentation
-- ✅ **PRODUCTION READY**: All tests passing, no regressions
+##### `src/sync/block_headers/` - Block Header Sync
 
-#### `src/sync/headers.rs` (705 lines) ⚠️ LARGE
+- Downloads headers in parallel using checkpoint-based segments
+- Pipeline buffers out-of-order responses and commits in order
+- Handles both initial sync and post-sync new block announcements
+- Emits `BlockHeadersStored` events as headers are committed
 
-**Purpose**: Header synchronization logic.
+##### `src/sync/filter_headers/` - Filter Header Sync
 
-**What it does**:
-- Downloads headers from peers
-- Validates header chain
-- Handles headers2 compression
-- Detects reorgs
+- Listens for `BlockHeadersStored` events to know the download range
+- Downloads BIP158 filter headers in batches
+- Validates filter header chain continuity
+- Emits `FilterHeadersStored` events
 
-**Analysis**:
-- **GOOD**: Comprehensive header sync
-- **GOOD**: Headers2 support
-- **ISSUE**: Could be split into headers1 and headers2 modules
+##### `src/sync/filters/` - Filter Download and Matching
 
-**Refactoring needed**:
-- ⚠️ **MEDIUM**: Split headers1 and headers2 into separate files
-- ⚠️ **LOW**: Add more documentation
+- Listens for `FilterHeadersStored` to know which filter headers are available
+- Downloads compact block filters
+- Matches filters against wallet addresses
+- Emits `BlocksNeeded` when matches are found
+- Handles gap limit rescanning when the wallet discovers new addresses
 
-#### `src/sync/headers_with_reorg.rs` (1,148 lines) 🚨 **TOO LARGE**
+##### `src/sync/blocks/` - Block Download and Processing
 
-**Purpose**: Header sync with reorganization detection.
+- Listens for `BlocksNeeded` events from FiltersManager +- Downloads full blocks for matched heights +- Processes blocks through wallet for transaction extraction +- Emits `BlockProcessed` with any new addresses discovered -**Analysis**: -- **ISSUE**: 1,148 lines is too large -- **GOOD**: Handles complex reorg scenarios -- **ISSUE**: Overlaps with sync/headers.rs +##### `src/sync/masternodes/` - Masternode List Sync -**Refactoring needed**: -- ⚠️ **HIGH**: Merge with headers.rs or clearly separate concerns -- ⚠️ **HIGH**: Split into smaller modules +- Waits for `BlockHeaderSyncComplete` before starting +- Uses QRInfo for quorum-based sync or MnListDiff for incremental updates +- Updates masternode list engine with validated diffs +- Emits `MasternodeStateUpdated` events -#### `src/sync/masternodes.rs` (775 lines) ⚠️ LARGE +##### `src/sync/chainlock/` - ChainLock Processing -**Purpose**: Masternode list synchronization. +- Listens for ChainLock messages from network +- Validates signatures (requires quorum data from masternodes) +- Emits `ChainLockReceived` events -**What it does**: -- Downloads masternode diffs -- Updates masternode list engine -- Validates quorums +##### `src/sync/instantsend/` - InstantSend Processing -**Analysis**: -- **GOOD**: Dash-specific functionality -- **GOOD**: Proper validation -- **ISSUE**: Could be split +- Listens for InstantLock messages from network +- Validates signatures +- Emits `InstantLockReceived` events -**Refactoring needed**: -- ⚠️ **MEDIUM**: Split diff download and validation - -#### Other sync files: -- `chainlock_validation.rs` (231 lines) ✅ **GOOD** -- `discovery.rs` (98 lines) ✅ **GOOD** -- `embedded_data.rs` (118 lines) ✅ **GOOD** -- `state.rs` (157 lines) ✅ **GOOD** -- `validation.rs` (283 lines) ✅ **GOOD** - -**Overall Sync Module Assessment**: -- ✅ **EXCELLENT**: sync/filters/ fully refactored (10 modules, 4,281 lines) -- ✅ **EXCELLENT**: sync/sequential/ fully refactored (11 modules, 4,785 lines) -- ✅ **EXCELLENT**: State machine clearly modeled in phases.rs -- ✅ **EXCELLENT**: Error recovery consolidated in recovery.rs -- ✅ **GOOD**: Sequential approach is sound -- ✅ **GOOD**: Individual algorithms appear correct +#### Legacy Module + +The previous sequential sync implementation is preserved in `src/sync/legacy/` for reference. This approach used phase-based sequential synchronization where each phase completed before the next began. 
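+
+Both lock managers publish to the same broadcast bus that external listeners consume. For illustration only, a listener might look like this sketch (the enum is a trimmed stand-in; the real `SyncEvent` variants carry payloads such as heights, hashes, and locks):
+
+```rust
+use tokio::sync::broadcast;
+
+// Trimmed stand-in for the crate's SyncEvent enum.
+#[derive(Clone, Debug)]
+enum SyncEvent {
+    ChainLockReceived,
+    InstantLockReceived,
+    SyncComplete,
+}
+
+async fn listen(mut events: broadcast::Receiver<SyncEvent>) {
+    loop {
+        match events.recv().await {
+            Ok(SyncEvent::SyncComplete) => break,
+            Ok(event) => eprintln!("sync event: {event:?}"),
+            // A lagged receiver has missed events; log and keep listening,
+            // the usual trade-off with bounded broadcast channels.
+            Err(broadcast::error::RecvError::Lagged(missed)) => {
+                eprintln!("missed {missed} events");
+            }
+            Err(broadcast::error::RecvError::Closed) => break,
+        }
+    }
+}
+```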
+ +#### Design Strengths + +- **True parallelism**: Headers, filters, and masternodes sync concurrently +- **Loose coupling**: Managers communicate only via typed events +- **Independent progress**: Each manager tracks its own state +- **Graceful recovery**: Managers handle their own timeouts and retries +- **Type-safe events**: Compile-time verification of event contracts +- **Topic-based routing**: Network messages filtered by type before reaching managers --- @@ -1420,36 +1418,41 @@ Validation module handles header validation, ChainLock verification, and Instant ## Complexity Metrics +### Sync Module Structure + +| Manager | Module Path | Key Files | Description | +|---------|-------------|-----------|-------------| +| BlockHeadersManager | sync/block_headers/ | manager.rs, pipeline.rs, sync_manager.rs | Parallel header sync via checkpoints | +| FilterHeadersManager | sync/filter_headers/ | manager.rs, pipeline.rs, sync_manager.rs | BIP158 filter header sync | +| FiltersManager | sync/filters/ | manager.rs, pipeline.rs, sync_manager.rs | Filter download and wallet matching | +| BlocksManager | sync/blocks/ | manager.rs, pipeline.rs, sync_manager.rs | Block download for matched heights | +| MasternodesManager | sync/masternodes/ | manager.rs, pipeline.rs, sync_manager.rs | Masternode list via QRInfo/MnListDiff | +| ChainLockManager | sync/chainlock/ | manager.rs, sync_manager.rs | ChainLock message handling | +| InstantSendManager | sync/instantsend/ | manager.rs, sync_manager.rs | InstantLock message handling | + ### File Complexity (Largest Files) | File | Lines | Complexity | Notes | |------|-------|------------|-------| -| sync/filters/ | 10 modules (4,281 total) | ✅ EXCELLENT | Well-organized filter sync modules | -| sync/sequential/ | 11 modules (4,785 total) | ✅ EXCELLENT | Sequential sync pipeline modules | -| client/ | 8 modules (2,895 total) | ✅ EXCELLENT | Client functionality modules | -| storage/disk/ | 7 modules (2,458 total) | ✅ EXCELLENT | Persistent storage modules | -| network/manager.rs | 1,322 | ✅ ACCEPTABLE | Complex peer management logic | -| sync/headers_with_reorg.rs | 1,148 | ✅ ACCEPTABLE | Reorg handling complexity justified | -| types.rs | 1,064 | ✅ ACCEPTABLE | Core type definitions | -| mempool_filter.rs | 793 | ✅ GOOD | Mempool management | -| bloom/tests.rs | 799 | ✅ GOOD | Comprehensive bloom tests | -| sync/masternodes.rs | 775 | ✅ GOOD | Masternode sync logic | - -**Note:** All files are now at acceptable complexity levels. The 1,000-1,500 line files contain inherently complex logic that justifies their size. 
+| sync/ (total) | 60+ files | ✅ EXCELLENT | 7 parallel managers with consistent structure | +| client/ | 8 modules | ✅ EXCELLENT | Client functionality modules | +| storage/disk/ | 7 modules | ✅ EXCELLENT | Persistent storage modules | +| network/manager.rs | ~1,300 | ✅ ACCEPTABLE | Complex peer management logic | +| types.rs | ~1,065 | ✅ ACCEPTABLE | Core type definitions | ### Module Health -| Module | Files | Lines | Health | Characteristics | -|--------|-------|-------|--------|-----------------| -| sync/ | 37 | ~12,000 | ✅ EXCELLENT | Filters and sequential both fully modularized | -| client/ | 8 | ~2,895 | ✅ EXCELLENT | Clean separation: lifecycle, sync, progress, mempool, events | -| storage/ | 13 | ~3,500 | ✅ EXCELLENT | Disk storage split into focused modules | -| network/ | 14 | ~5,000 | ✅ GOOD | Handles peer management, connections, message routing | -| chain/ | 10 | ~3,500 | ✅ GOOD | ChainLock, checkpoint, orphan pool management | -| bloom/ | 6 | ~2,000 | ✅ GOOD | Bloom filter implementation for transaction filtering | -| validation/ | 6 | ~2,000 | ⚠️ FAIR | Needs BLS validation implementation (security) | -| error/ | 1 | 303 | ✅ EXCELLENT | Clean error hierarchy with thiserror | -| types/ | 1 | 1,065 | ✅ ACCEPTABLE | Core type definitions, reasonable size | +| Module | Files | Health | Characteristics | +|--------|-------|--------|-----------------| +| sync/ | 60+ | ✅ EXCELLENT | Parallel managers, SyncManager trait, event-driven | +| client/ | 8 | ✅ EXCELLENT | Clean separation: lifecycle, sync, progress, mempool, events | +| storage/ | 13 | ✅ EXCELLENT | Disk storage split into focused modules | +| network/ | 14 | ✅ GOOD | Handles peer management, connections, message routing | +| chain/ | 10 | ✅ GOOD | ChainLock, checkpoint, orphan pool management | +| bloom/ | 6 | ✅ GOOD | Bloom filter implementation for transaction filtering | +| validation/ | 6 | ⚠️ FAIR | Needs BLS validation implementation (security) | +| error/ | 1 | ✅ EXCELLENT | Clean error hierarchy with thiserror | +| types/ | 1 | ✅ ACCEPTABLE | Core type definitions, reasonable size | --- diff --git a/dash-spv/Cargo.toml b/dash-spv/Cargo.toml index e01d9fd92..71300548a 100644 --- a/dash-spv/Cargo.toml +++ b/dash-spv/Cargo.toml @@ -24,7 +24,9 @@ clap = { version = "4.0", features = ["derive", "env"] } # Async runtime tokio = { version = "1.0", features = ["full"] } tokio-util = "0.7" +tokio-stream = { version = "0.1", features = ["sync"] } async-trait = "0.1" +futures = "0.3" # Error handling thiserror = "1.0" diff --git a/dash-spv/src/client/chainlock.rs b/dash-spv/src/client/chainlock.rs index 1a6da5add..88a8fb8c8 100644 --- a/dash-spv/src/client/chainlock.rs +++ b/dash-spv/src/client/chainlock.rs @@ -6,15 +6,12 @@ //! - ChainLock validation updates //! 
- Pending ChainLock validation -use std::net::SocketAddr; -use std::sync::Arc; - use crate::error::{Result, SpvError}; use crate::network::NetworkManager; use crate::storage::StorageManager; use crate::types::SpvEvent; -use crate::validation::{InstantLockValidator, Validator}; use key_wallet_manager::wallet_interface::WalletInterface; +use std::net::SocketAddr; use super::DashSpvClient; @@ -78,81 +75,14 @@ impl DashSpvClient Result<()> { - tracing::info!("Processing InstantSendLock for tx {}", islock.txid); - - // Get the masternode engine from sync manager for proper quorum verification - let masternode_engine = self.sync_manager.get_masternode_engine().ok_or_else(|| { - SpvError::Validation(crate::error::ValidationError::MasternodeVerification( - "Masternode engine not available for InstantLock verification".to_string(), - )) - })?; - - // Validate the InstantLock (structure + BLS signature) - // This is REQUIRED for security - never accept InstantLocks without signature verification - let validator = InstantLockValidator::new(masternode_engine); - if let Err(e) = validator.validate(&islock) { - // Penalize the peer that relayed the invalid InstantLock - let reason = format!("Invalid InstantLock: {}", e); - tracing::warn!("{}", reason); - - // Ban the peer using the reputation system - self.network.penalize_peer_invalid_instantlock(peer_address, &reason).await; - - return Err(SpvError::Validation(e)); - } - - tracing::info!( - "✅ InstantSendLock validated successfully: txid={}, inputs={}", - islock.txid, - islock.inputs.len() - ); - - // Emit InstantLock event - self.emit_event(SpvEvent::InstantLockReceived { - txid: islock.txid, - inputs: islock.inputs.clone(), - }); - - Ok(()) - } - - /// Update ChainLock validation with masternode engine after sync completes. - /// This should be called when masternode sync finishes to enable full validation. - /// Returns true if the engine was successfully set. - pub fn update_chainlock_validation(&self) -> Result { - // Check if masternode sync has an engine available - if let Some(engine) = self.sync_manager.get_masternode_engine() { - // Clone the engine for the ChainLockManager - let engine_arc = Arc::new(engine.clone()); - self.chainlock_manager.set_masternode_engine(engine_arc); - - tracing::info!("Updated ChainLockManager with masternode engine for full validation"); - - // Note: Pending ChainLocks will be validated when they are next processed - // or can be triggered by calling validate_pending_chainlocks separately - // when mutable access to storage is available - - Ok(true) - } else { - tracing::warn!("Masternode engine not available for ChainLock validation update"); - Ok(false) - } - } - /// Validate all pending ChainLocks after masternode engine is available. /// This requires mutable access to self for storage access. pub async fn validate_pending_chainlocks(&mut self) -> Result<()> { diff --git a/dash-spv/src/client/core.rs b/dash-spv/src/client/core.rs index 512dd21ae..0550e6bd8 100644 --- a/dash-spv/src/client/core.rs +++ b/dash-spv/src/client/core.rs @@ -8,24 +8,26 @@ //! - Configuration updates //! 
- Terminal UI accessors
 
-use std::sync::Arc;
-use tokio::sync::{mpsc, Mutex, RwLock};
-
 #[cfg(feature = "terminal-ui")]
 use crate::terminal::TerminalUI;
+use dashcore::sml::masternode_list_engine::MasternodeListEngine;
+use std::sync::Arc;
+use tokio::sync::{mpsc, Mutex, RwLock};
 
+use super::{ClientConfig, StatusDisplay};
 use crate::chain::ChainLockManager;
 use crate::error::{Result, SpvError};
 use crate::mempool_filter::MempoolFilter;
 use crate::network::NetworkManager;
-use crate::storage::StorageManager;
+use crate::storage::{
+    PersistentBlockHeaderStorage, PersistentBlockStorage, PersistentFilterHeaderStorage,
+    PersistentFilterStorage, StorageManager,
+};
 use crate::sync::legacy::filters::FilterNotificationSender;
-use crate::sync::legacy::SyncManager;
-use crate::types::{ChainState, DetailedSyncProgress, MempoolState, SpvEvent};
+use crate::sync::SyncCoordinator;
+use crate::types::{ChainState, MempoolState, SpvEvent};
 use key_wallet_manager::wallet_interface::WalletInterface;
 
-use super::{ClientConfig, StatusDisplay};
-
 /// Main Dash SPV client with generic trait-based architecture.
 ///
 /// # Generic Design Philosophy
@@ -104,40 +106,19 @@ pub struct DashSpvClient>,
     /// External wallet implementation (required)
     pub(super) wallet: Arc<RwLock<W>>,
-    /// Synchronization manager for coordinating blockchain sync operations.
-    ///
-    /// # Architectural Design
-    ///
-    /// The sync manager is stored as a non-shared field (not wrapped in `Arc<Mutex<...>>`)
-    /// for the following reasons:
-    ///
-    /// 1. **Single Owner Pattern**: The sync manager is exclusively owned by the client,
-    ///    ensuring clear ownership and preventing concurrent access issues.
-    ///
-    /// 2. **Sequential Operations**: Blockchain synchronization is inherently sequential -
-    ///    headers must be validated in order, and sync phases must complete before
-    ///    progressing to the next phase.
-    ///
-    /// 3. **Simplified State Management**: Avoiding shared ownership eliminates complex
-    ///    synchronization issues and makes the sync state machine easier to reason about.
-    ///
-    /// ## Future Considerations
-    ///
-    /// If concurrent access becomes necessary (e.g., for monitoring sync progress from
-    /// multiple threads), consider:
-    /// - Using interior mutability patterns (`Arc<Mutex<...>>`)
-    /// - Extracting read-only state into a separate shared structure
-    /// - Implementing a message-passing architecture for sync commands
-    ///
-    /// The current design prioritizes simplicity and correctness over concurrent access.
-    pub(super) sync_manager: SyncManager,
+    pub(super) masternode_engine: Option<Arc<RwLock<MasternodeListEngine>>>,
+    pub(super) sync_coordinator: SyncCoordinator<
+        PersistentBlockHeaderStorage,
+        PersistentFilterHeaderStorage,
+        PersistentFilterStorage,
+        PersistentBlockStorage,
+        W,
+    >,
     pub(super) chainlock_manager: Arc<ChainLockManager>,
     pub(super) running: Arc<RwLock<bool>>,
     #[cfg(feature = "terminal-ui")]
    pub(super) terminal_ui: Option<Arc<TerminalUI>>,
     pub(super) filter_processor: Option<FilterNotificationSender>,
-    pub(super) progress_sender: Option<mpsc::UnboundedSender<DetailedSyncProgress>>,
-    pub(super) progress_receiver: Option<mpsc::UnboundedReceiver<DetailedSyncProgress>>,
     pub(super) event_tx: mpsc::UnboundedSender<SpvEvent>,
     pub(super) event_rx: Option<mpsc::UnboundedReceiver<SpvEvent>>,
     pub(super) mempool_state: Arc<RwLock<MempoolState>>,
@@ -167,12 +148,6 @@ impl DashSpvClient &mut SyncManager {
-        &mut self.sync_manager
-    }
-
     // ============ State Queries ============
 
     /// Check if the client is running.
@@ -213,9 +188,6 @@ impl DashSpvClient DashSpvClient DashSpvClient Option> {
-        self.progress_receiver.take()
+    /// Subscribe to sync progress updates via watch channel.
+    pub fn subscribe_progress(&self) -> watch::Receiver<SyncProgress> {
+        self.sync_coordinator.subscribe_progress()
     }
 
-    /// Emit a progress update.
-    pub(super) fn emit_progress(&self, progress: DetailedSyncProgress) {
-        if let Some(ref sender) = self.progress_sender {
-            let _ = sender.send(progress);
-        }
+    /// Get current sync progress.
+    pub fn progress(&self) -> SyncProgress {
+        self.sync_coordinator.progress()
+    }
+
+    /// Subscribe to sync events from the sync coordinator.
+    pub fn subscribe_sync_events(&self) -> broadcast::Receiver<SyncEvent> {
+        self.sync_coordinator.subscribe_events()
+    }
+
+    /// Subscribe to network events.
+    pub fn subscribe_network_events(&self) -> broadcast::Receiver<NetworkEvent> {
+        self.network.subscribe_network_events()
+    }
 }
diff --git a/dash-spv/src/client/lifecycle.rs b/dash-spv/src/client/lifecycle.rs
index b9b45e351..ef77b50ce 100644
--- a/dash-spv/src/client/lifecycle.rs
+++ b/dash-spv/src/client/lifecycle.rs
@@ -12,19 +12,26 @@
 use std::collections::HashSet;
 use std::sync::Arc;
 use tokio::sync::{mpsc, Mutex, RwLock};
 
-use crate::chain::ChainLockManager;
+use super::{ClientConfig, DashSpvClient};
+use crate::chain::checkpoints::{mainnet_checkpoints, testnet_checkpoints, CheckpointManager};
+use crate::chain::ChainLockManager as LegacyChainLockManager;
 use crate::error::{Result, SpvError};
 use crate::mempool_filter::MempoolFilter;
 use crate::network::NetworkManager;
-use crate::storage::StorageManager;
-use crate::sync::legacy::SyncManager;
-use crate::types::{ChainState, MempoolState, SharedFilterHeights};
+use crate::storage::{
+    PersistentBlockHeaderStorage, PersistentBlockStorage, PersistentFilterHeaderStorage,
+    PersistentFilterStorage, StorageManager,
+};
+use crate::sync::{
+    BlockHeadersManager, BlocksManager, ChainLockManager, FilterHeadersManager, FiltersManager,
+    InstantSendManager, Managers, MasternodesManager, SyncCoordinator,
+};
+use crate::types::{ChainState, MempoolState};
 use dashcore::network::constants::NetworkExt;
+use dashcore::sml::masternode_list_engine::MasternodeListEngine;
 use dashcore_hashes::Hash;
 use key_wallet_manager::wallet_interface::WalletInterface;
 
-use super::{ClientConfig, DashSpvClient};
-
 impl DashSpvClient {
     /// Create a new SPV client with the given configuration, network, storage, and wallet.
pub async fn new( @@ -39,21 +46,79 @@ impl DashSpvClient - let storage = Arc::new(Mutex::new(storage)); + let masternode_engine = { + if config.enable_masternodes { + Some(Arc::new(RwLock::new(MasternodeListEngine::default_for_network( + config.network, + )))) + } else { + None + } + }; - // Create sync manager - tracing::info!("Creating sequential sync manager"); - let received_filter_heights = SharedFilterHeights::new(Mutex::new(HashSet::new())); - let sync_manager = - SyncManager::new(&config, received_filter_heights, wallet.clone(), state.clone()) - .map_err(SpvError::Sync)?; + let mut managers: Managers< + PersistentBlockHeaderStorage, + PersistentFilterHeaderStorage, + PersistentFilterStorage, + PersistentBlockStorage, + W, + > = Managers::default(); + + let header_storage = storage.header_storage_ref().expect("Headers storage must exist"); + let checkpoints = match config.network { + dashcore::Network::Dash => mainnet_checkpoints(), + dashcore::Network::Testnet => testnet_checkpoints(), + _ => Vec::new(), + }; + let checkpoint_manager = Arc::new(CheckpointManager::new(checkpoints)); + managers.block_headers = + Some(BlockHeadersManager::new(header_storage.clone(), checkpoint_manager)); + + if config.enable_filters { + let filter_headers_storage = storage + .filter_header_storage_ref() + .expect("Filters headers storage must exist if filters are enabled"); + let filters_storage = storage + .filter_storage_ref() + .expect("Filters storage must exist if filters are enabled"); + let blocks_storage = storage + .block_storage_ref() + .expect("Blocks storage must exist if filters are enabled"); + + managers.filter_headers = Some(FilterHeadersManager::new( + header_storage.clone(), + filter_headers_storage.clone(), + )); + managers.filters = Some(FiltersManager::new( + wallet.clone(), + header_storage.clone(), + filter_headers_storage, + filters_storage, + )); + managers.blocks = + Some(BlocksManager::new(wallet.clone(), header_storage.clone(), blocks_storage)); + } - // Create ChainLock manager - let chainlock_manager = Arc::new(ChainLockManager::new(true)); + // Build masternode manager if enabled + if config.enable_masternodes { + let masternode_list_engine = masternode_engine + .clone() + .expect("Masternode list engine must exist if masternodes are enabled"); + managers.masternode = Some(MasternodesManager::new( + header_storage.clone(), + masternode_list_engine.clone(), + config.network, + )); + managers.chainlock = + Some(ChainLockManager::new(header_storage.clone(), masternode_list_engine.clone())); + managers.instantsend = Some(InstantSendManager::new(masternode_list_engine.clone())); + } - // Create progress channels - let (progress_sender, progress_receiver) = mpsc::unbounded_channel(); + // Create sync coordinator (managers are passed to start() later) + let sync_coordinator = SyncCoordinator::new(managers); + + // Create ChainLock manager + let chainlock_manager = Arc::new(LegacyChainLockManager::new(true)); // Create event channels let (event_tx, event_rx) = mpsc::unbounded_channel(); @@ -61,20 +126,22 @@ impl DashSpvClient + let storage = Arc::new(Mutex::new(storage)); + Ok(Self { config, state, network, storage, wallet, - sync_manager, + masternode_engine, + sync_coordinator, chainlock_manager, running: Arc::new(RwLock::new(false)), #[cfg(feature = "terminal-ui")] terminal_ui: None, filter_processor: None, - progress_sender: Some(progress_sender), - progress_receiver: Some(progress_receiver), event_tx, event_rx: Some(event_rx), mempool_state, @@ -128,26 +195,6 @@ impl 
DashSpvClient 0 {
-            tracing::info!("Found {} headers in storage, loading into sync manager...", tip_height);
-            let storage = self.storage.lock().await;
-            self.sync_manager.load_headers_from_storage(&storage).await
-        }
-
-        // Connect to network
-        self.network.connect().await?;
-
-        {
-            let mut running = self.running.write().await;
-            *running = true;
-        }
-
         // Update terminal UI after connection with initial data
         #[cfg(feature = "terminal-ui")]
         if let Some(ui) = &self.terminal_ui {
@@ -169,6 +216,22 @@ impl DashSpvClient DashSpvClient DashSpvClient {
-    /// Get current sync progress.
-    pub async fn sync_progress(&self) -> Result<SyncProgress> {
-        let display = self.create_status_display().await;
-        display.sync_progress().await
-    }
-
-    /// Map a sync phase to a sync stage for progress reporting.
-    pub(super) fn map_phase_to_stage(
-        phase: &SyncPhase,
-        sync_progress: &SyncProgress,
-        peer_best_height: u32,
-    ) -> SyncStage {
-        match phase {
-            SyncPhase::Idle => {
-                if sync_progress.peer_count == 0 {
-                    SyncStage::Connecting
-                } else {
-                    SyncStage::QueryingPeerHeight
-                }
-            }
-            SyncPhase::DownloadingHeaders {
-                start_height,
-                target_height,
-                ..
-            } => SyncStage::DownloadingHeaders {
-                start: *start_height,
-                end: target_height.unwrap_or(peer_best_height),
-            },
-            SyncPhase::DownloadingMnList {
-                diffs_processed,
-                ..
-            } => SyncStage::ValidatingHeaders {
-                batch_size: *diffs_processed as usize,
-            },
-            SyncPhase::DownloadingCFHeaders {
-                current_height,
-                target_height,
-                ..
-            } => SyncStage::DownloadingFilterHeaders {
-                current: *current_height,
-                target: *target_height,
-            },
-            SyncPhase::DownloadingFilters {
-                completed_heights,
-                total_filters,
-                ..
-            } => SyncStage::DownloadingFilters {
-                completed: completed_heights.len() as u32,
-                total: *total_filters,
-            },
-            SyncPhase::DownloadingBlocks {
-                pending_blocks,
-                ..
-            } => SyncStage::DownloadingBlocks {
-                pending: pending_blocks.len(),
-            },
-            SyncPhase::FullySynced {
-                ..
-            } => SyncStage::Complete,
-        }
-    }
-}
diff --git a/dash-spv/src/client/queries.rs b/dash-spv/src/client/queries.rs
index 9bda2d2cf..7cd8f9979 100644
--- a/dash-spv/src/client/queries.rs
+++ b/dash-spv/src/client/queries.rs
@@ -11,11 +11,12 @@
 use crate::network::NetworkManager;
 use crate::storage::StorageManager;
 use crate::types::AddressBalance;
 use dashcore::sml::llmq_type::LLMQType;
-use dashcore::sml::masternode_list::MasternodeList;
 use dashcore::sml::masternode_list_engine::MasternodeListEngine;
 use dashcore::sml::quorum_entry::qualified_quorum_entry::QualifiedQuorumEntry;
 use dashcore::QuorumHash;
 use key_wallet_manager::wallet_interface::WalletInterface;
+use std::sync::Arc;
+use tokio::sync::RwLock;
 
 use super::DashSpvClient;
 
@@ -49,27 +50,26 @@ impl DashSpvClient Option<&MasternodeListEngine> {
-        self.sync_manager.masternode_list_engine()
-    }
-
-    /// Get the masternode list at a specific block height.
-    /// Returns None if the masternode list for that height is not available.
-    pub fn get_masternode_list_at_height(&self, height: u32) -> Option<&MasternodeList> {
-        self.masternode_list_engine().and_then(|engine| engine.masternode_lists.get(&height))
+    /// Returns an error if the masternode engine is not initialized.
+    pub fn masternode_list_engine(&self) -> Result<Arc<RwLock<MasternodeListEngine>>> {
+        match self.masternode_engine {
+            Some(ref masternode_engine) => Ok(masternode_engine.clone()),
+            None => Err(SpvError::Config("Masternode list engine not initialized".to_string())),
+        }
     }
 
     /// Get a quorum entry by type and hash at a specific block height.
-    /// Returns None if the quorum is not found.
-    pub fn get_quorum_at_height(
+    /// Returns `SpvError::QuorumLookupError` if the quorum is not found.
+    pub async fn get_quorum_at_height(
         &self,
         height: u32,
         quorum_type: LLMQType,
         quorum_hash: QuorumHash,
     ) -> Result<QualifiedQuorumEntry> {
+        let masternode_engine = self.masternode_list_engine()?;
+        let masternode_engine_guard = masternode_engine.read().await;
         // First check if we have the masternode list at this height
-        match self.get_masternode_list_at_height(height) {
+        match masternode_engine_guard.masternode_lists.get(&height) {
             Some(ml) => {
                 // We have the masternode list, now look for the quorum
                 match ml.quorums.get(&quorum_type) {
@@ -106,16 +106,10 @@ impl DashSpvClient {
-                tracing::warn!(
-                    "No masternode list found at height {} - cannot retrieve quorum",
-                    height
-                );
-                Err(SpvError::QuorumLookupError(format!(
-                    "No masternode list found at height {}",
-                    height
-                )))
-            }
+            None => Err(SpvError::QuorumLookupError(format!(
+                "No masternode list found at height {}",
+                height
+            ))),
         }
     }
diff --git a/dash-spv/src/client/sync_coordinator.rs b/dash-spv/src/client/sync_coordinator.rs
index 34229a5bf..43ab132c5 100644
--- a/dash-spv/src/client/sync_coordinator.rs
+++ b/dash-spv/src/client/sync_coordinator.rs
@@ -1,28 +1,24 @@
 //! Sync coordination and orchestration.
-//!
-//! This module contains the core sync orchestration logic:
-//! - monitor_network: Main event loop for processing network messages
-//! - Sync state persistence and restoration
-//! - Filter sync coordination
-//! - Block processing delegation
-//! - Balance change reporting
-//!
-//! This is the largest module as it handles all coordination between network,
-//! storage, and the sync manager.
 
-use super::{DashSpvClient, MessageHandler};
+use super::DashSpvClient;
 use crate::client::interface::DashSpvClientCommand;
 use crate::error::{Result, SpvError};
-use crate::network::constants::MESSAGE_RECEIVE_TIMEOUT;
-use crate::network::{Message, MessageType, NetworkManager};
+use crate::network::NetworkManager;
 use crate::storage::StorageManager;
-use crate::types::{DetailedSyncProgress, SyncProgress};
+use crate::sync::SyncProgress;
 use key_wallet_manager::wallet_interface::WalletInterface;
-use std::time::{Duration, Instant, SystemTime};
+use std::time::Duration;
 use tokio::sync::mpsc::UnboundedReceiver;
 use tokio_util::sync::CancellationToken;
 
+const SYNC_COORDINATOR_TICK_MS: Duration = Duration::from_millis(100);
+
 impl DashSpvClient {
+    /// Get current sync progress.
+    pub fn sync_progress(&self) -> SyncProgress {
+        self.sync_coordinator.progress().clone()
+    }
+
     /// Run continuous monitoring for new blocks, ChainLocks, InstantLocks, etc.
     ///
     /// This is the sole network message receiver to prevent race conditions.
@@ -40,59 +36,8 @@ impl DashSpvClient = None; - - let mut message_receiver = self - .network - .message_receiver(&[ - MessageType::Headers, - MessageType::Headers2, - MessageType::CFHeaders, - MessageType::CFilter, - MessageType::Block, - MessageType::MnListDiff, - MessageType::QRInfo, - MessageType::CLSig, - MessageType::ISLock, - MessageType::Inv, - ]) - .await; + let mut sync_coordinator_tick_interval = tokio::time::interval(SYNC_COORDINATOR_TICK_MS); + let mut progress_updates = self.sync_coordinator.subscribe_progress(); loop { // Check if we should stop @@ -103,315 +48,6 @@ impl DashSpvClient 0 { - tracing::info!("🚀 Peers connected, starting initial sync operations..."); - - // Start initial sync with sequential sync manager - let mut storage = self.storage.lock().await; - match self.sync_manager.start_sync(&mut self.network, &mut *storage).await { - Ok(started) => { - tracing::info!("✅ Sequential sync start_sync returned: {}", started); - - // Send initial requests after sync is prepared - if let Err(e) = self - .sync_manager - .send_initial_requests(&mut self.network, &mut *storage) - .await - { - tracing::error!("Failed to send initial sync requests: {}", e); - - // Reset sync manager state to prevent inconsistent state - self.sync_manager.reset_pending_requests(); - tracing::warn!( - "Reset sync manager state after send_initial_requests failure" - ); - } - } - Err(e) => { - tracing::error!("Failed to start sequential sync: {}", e); - } - } - - initial_sync_started = true; - } - - // Check if it's time to update the status display - if last_status_update.elapsed() >= status_update_interval { - self.update_status_display().await; - - // Sequential sync handles filter gaps internally - - // Filter sync progress is handled by sequential sync manager internally - let ( - filters_requested, - filters_received, - basic_progress, - timeout, - total_missing, - actual_coverage, - missing_ranges, - ) = { - // For sequential sync, return default values - (0, 0, 0.0, false, 0, 0.0, Vec::<(u32, u32)>::new()) - }; - - if filters_requested > 0 { - // Check if sync is truly complete: both basic progress AND gap analysis must indicate completion - // This fixes a bug where "Complete!" was shown when only gap analysis returned 0 missing filters - // but basic progress (filters_received < filters_requested) indicated incomplete sync. - let is_complete = filters_received >= filters_requested && total_missing == 0; - - // Debug logging for completion detection - if filters_received >= filters_requested && total_missing > 0 { - tracing::debug!("🔍 Completion discrepancy detected: basic progress complete ({}/{}) but {} missing filters detected", - filters_received, filters_requested, total_missing); - } - - if !is_complete { - tracing::info!("📊 Filter sync: Basic {:.1}% ({}/{}), Actual coverage {:.1}%, Missing: {} filters in {} ranges", - basic_progress, filters_received, filters_requested, actual_coverage, total_missing, missing_ranges.len()); - - // Show first few missing ranges for debugging - if !missing_ranges.is_empty() { - let show_count = missing_ranges.len().min(3); - for (i, (start, end)) in - missing_ranges.iter().enumerate().take(show_count) - { - tracing::warn!( - " Gap {}: range {}-{} ({} filters)", - i + 1, - start, - end, - end - start + 1 - ); - } - if missing_ranges.len() > show_count { - tracing::warn!( - " ... 
and {} more gaps", - missing_ranges.len() - show_count - ); - } - } - } else { - tracing::info!( - "📊 Filter sync progress: {:.1}% ({}/{} filters received) - Complete!", - basic_progress, - filters_received, - filters_requested - ); - } - - if timeout { - tracing::warn!( - "⚠️ Filter sync timeout: no filters received in 30+ seconds" - ); - } - } - - // Wallet confirmations are now handled by the wallet itself via process_block - - // Emit detailed progress update - if last_rate_calc.elapsed() >= Duration::from_secs(1) { - // Storage tip now represents the absolute blockchain height. - let current_tip_height = { - let storage = self.storage.lock().await; - storage.get_tip_height().await.unwrap_or(0) - }; - let current_height = current_tip_height; - let peer_best = - self.network.get_peer_best_height().await.unwrap_or(current_height); - - // Calculate headers downloaded this second - if current_tip_height > last_height { - headers_this_second = current_tip_height - last_height; - last_height = current_tip_height; - } - - let headers_per_second = headers_this_second as f64; - let peer_count = self.network.peer_count() as u32; - let phase_snapshot = self.sync_manager.current_phase().clone(); - - let status_display = self.create_status_display().await; - let mut sync_progress = match status_display.sync_progress().await { - Ok(p) => p, - Err(e) => { - tracing::warn!("Failed to compute sync progress snapshot: {}", e); - SyncProgress::default() - } - }; - - // Update peer count with the latest network information. - sync_progress.peer_count = peer_count; - sync_progress.header_height = current_height; - sync_progress.filter_sync_available = self.config.enable_filters; - - let sync_stage = - Self::map_phase_to_stage(&phase_snapshot, &sync_progress, peer_best); - let filters_downloaded = sync_progress.filters_downloaded; - - let progress = DetailedSyncProgress { - sync_progress, - peer_best_height: peer_best, - percentage: if peer_best > 0 { - (current_height as f64 / peer_best as f64 * 100.0).min(100.0) - } else { - 0.0 - }, - headers_per_second, - bytes_per_second: 0, // TODO: Track actual bytes - estimated_time_remaining: if headers_per_second > 0.0 - && peer_best > current_height - { - let remaining = peer_best - current_height; - Some(Duration::from_secs_f64(remaining as f64 / headers_per_second)) - } else { - None - }, - sync_stage, - total_headers_processed: current_height as u64, - total_bytes_downloaded, - sync_start_time, - last_update_time: SystemTime::now(), - }; - - last_emitted_filters_downloaded = filters_downloaded; - self.emit_progress(progress); - - headers_this_second = 0; - last_rate_calc = Instant::now(); - } - - // Emit filter headers progress only when heights change - let (abs_header_height, filter_header_height) = { - let storage = self.storage.lock().await; - let storage_tip = storage.get_tip_height().await.unwrap_or(0); - let filter_tip = - storage.get_filter_tip_height().await.ok().flatten().unwrap_or(0); - (storage_tip, filter_tip) - }; - - { - // Build and emit a fresh DetailedSyncProgress snapshot reflecting current filter progress - let peer_best = - self.network.get_peer_best_height().await.unwrap_or(abs_header_height); - - let phase_snapshot = self.sync_manager.current_phase().clone(); - let status_display = self.create_status_display().await; - let mut sync_progress = match status_display.sync_progress().await { - Ok(p) => p, - Err(e) => { - tracing::warn!( - "Failed to compute sync progress snapshot (filter): {}", - e - ); - SyncProgress::default() - } - }; - // 
Ensure we include up-to-date header height and peer count - let peer_count = self.network.peer_count() as u32; - sync_progress.peer_count = peer_count; - sync_progress.header_height = abs_header_height; - sync_progress.filter_sync_available = self.config.enable_filters; - - let filters_downloaded = sync_progress.filters_downloaded; - let current_phase_name = phase_snapshot.name().to_string(); - let phase_changed = - last_emitted_phase_name.as_ref() != Some(¤t_phase_name); - - if abs_header_height != last_emitted_header_height - || filter_header_height != last_emitted_filter_header_height - || filters_downloaded != last_emitted_filters_downloaded - || phase_changed - { - let sync_stage = - Self::map_phase_to_stage(&phase_snapshot, &sync_progress, peer_best); - - let progress = DetailedSyncProgress { - sync_progress, - peer_best_height: peer_best, - percentage: if peer_best > 0 { - (abs_header_height as f64 / peer_best as f64 * 100.0).min(100.0) - } else { - 0.0 - }, - headers_per_second: 0.0, - bytes_per_second: 0, - estimated_time_remaining: None, - sync_stage, - total_headers_processed: abs_header_height as u64, - total_bytes_downloaded, - sync_start_time, - last_update_time: SystemTime::now(), - }; - last_emitted_header_height = abs_header_height; - last_emitted_filter_header_height = filter_header_height; - last_emitted_filters_downloaded = filters_downloaded; - last_emitted_phase_name = Some(current_phase_name.clone()); - - self.emit_progress(progress); - } - } - - last_status_update = Instant::now(); - } - - // Check for sync timeouts and handle recovery (only periodically, not every loop) - if last_timeout_check.elapsed() >= timeout_check_interval { - let mut storage = self.storage.lock().await; - self.sync_manager.check_timeout(&mut self.network, &mut *storage).await?; - drop(storage); - } - - // Check for request timeouts and handle retries - if last_timeout_check.elapsed() >= timeout_check_interval { - // Request timeout handling was part of the request tracking system - // For async block processing testing, we'll skip this for now - last_timeout_check = Instant::now(); - } - - // Check for wallet consistency issues periodically - if last_consistency_check.elapsed() >= consistency_check_interval { - tokio::spawn(async move { - // Run consistency check in background to avoid blocking the monitoring loop - // Note: This is a simplified approach - in production you might want more sophisticated scheduling - tracing::debug!("Running periodic wallet consistency check..."); - }); - last_consistency_check = Instant::now(); - } - - // Check if masternode sync has completed and update ChainLock validation - if !masternode_engine_updated && self.config.enable_masternodes { - // Check if we have a masternode engine available now - if let Ok(has_engine) = self.update_chainlock_validation() { - if has_engine { - masternode_engine_updated = true; - tracing::info!( - "✅ Masternode sync complete - ChainLock validation enabled" - ); - - // Validate any pending ChainLocks - if let Err(e) = self.validate_pending_chainlocks().await { - tracing::error!( - "Failed to validate pending ChainLocks after masternode sync: {}", - e - ); - } - } - } - } - - // Periodically retry validation of pending ChainLocks - if masternode_engine_updated - && last_chainlock_validation_check.elapsed() >= chainlock_validation_interval - { - tracing::debug!("Checking for pending ChainLocks to validate..."); - if let Err(e) = self.validate_pending_chainlocks().await { - tracing::debug!("Periodic pending ChainLock validation 
check failed: {}", e); - } - last_chainlock_validation_check = Instant::now(); - } - tokio::select! { received = command_receiver.recv() => { match received { @@ -421,54 +57,27 @@ impl DashSpvClient { - match received { - None => { - tracing::info!("Network message subscription channel closed."); - break; - } - Some(message) => { - // Wrap message handling in comprehensive error handling - match self.handle_network_message(message).await { - Ok(_) => { - // Message handled successfully - } - Err(e) => { - tracing::error!("Error handling network message: {}", e); - - // Categorize error severity - match &e { - SpvError::Network(_) => { - tracing::warn!("Network error during message handling - may recover automatically"); - } - SpvError::Storage(_) => { - tracing::error!("Storage error during message handling - this may affect data consistency"); - } - SpvError::Validation(_) => { - tracing::warn!("Validation error during message handling - message rejected"); - } - _ => { - tracing::error!("Unexpected error during message handling"); - } - } - - // Continue monitoring despite errors - tracing::debug!( - "Continuing network monitoring despite message handling error" - ); - } - } - }, + _ = progress_updates.changed() => { + tracing::info!("Sync progress:{}", *progress_updates.borrow()); + } + _ = sync_coordinator_tick_interval.tick() => { + // Tick the sync coordinator to aggregate progress + if let Err(e) = self.sync_coordinator.tick().await { + tracing::warn!("Sync coordinator tick error: {}", e); } } - _ = tokio::time::sleep(MESSAGE_RECEIVE_TIMEOUT) => {} _ = token.cancelled() => { - log::debug!("DashSpvClient run loop cancelled"); + tracing::debug!("DashSpvClient run loop cancelled"); break } } } + // Shutdown the sync coordinator + if let Err(e) = self.sync_coordinator.shutdown().await { + tracing::warn!("Error shutting down sync coordinator: {}", e); + } + Ok(()) } @@ -510,7 +119,7 @@ impl DashSpvClient { - let result = self.get_quorum_at_height(height, quorum_type, quorum_hash); + let result = self.get_quorum_at_height(height, quorum_type, quorum_hash).await; if sender.send(result).is_err() { return Err(SpvError::ChannelFailure( format!("GetQuorumByHeight({height}, {quorum_type}, {quorum_hash})"), @@ -522,70 +131,6 @@ impl DashSpvClient Result<()> { - // Check if this is a special message that needs client-level processing - let needs_special_processing = matches!( - &message.inner(), - dashcore::network::message::NetworkMessage::CLSig(_) - | dashcore::network::message::NetworkMessage::ISLock(_) - ); - - // Handle the message with storage locked - let handler_result = { - let mut storage = self.storage.lock().await; - - // Create a MessageHandler instance with all required parameters - let mut handler = MessageHandler::new( - &mut self.sync_manager, - &mut *storage, - &mut self.network, - &self.config, - &self.mempool_filter, - &self.mempool_state, - &self.event_tx, - ); - - // Delegate message handling to the MessageHandler - handler.handle_network_message(&message).await - }; - - // Handle result and process special messages after releasing storage lock - match handler_result { - Ok(_) => { - if needs_special_processing { - // Special handling for messages that need client-level processing - use dashcore::network::message::NetworkMessage; - match message.inner() { - NetworkMessage::CLSig(clsig) => { - // Additional client-level ChainLock processing - self.process_chainlock(message.peer_address(), clsig.clone()).await?; - } - NetworkMessage::ISLock(islock_msg) => { - // Only process 
InstantLocks when fully synced and masternode engine is available
-                            if self.sync_manager.is_synced()
-                                && self.sync_manager.get_masternode_engine().is_some()
-                            {
-                                self.process_instantsendlock(
-                                    message.peer_address(),
-                                    islock_msg.clone(),
-                                )
-                                .await?;
-                            } else {
-                                tracing::debug!(
-                                    "Skipping InstantLock processing - not fully synced or masternode engine unavailable"
-                                );
-                            }
-                        }
-                        _ => {}
-                    }
-                }
-                Ok(())
-            }
-            Err(e) => Err(e),
-        }
-    }
-
     /// Report balance changes for watched addresses.
     #[allow(dead_code)]
     pub(super) async fn report_balance_changes(
diff --git a/dash-spv/src/error.rs b/dash-spv/src/error.rs
index 5e411449d..eff214f6f 100644
--- a/dash-spv/src/error.rs
+++ b/dash-spv/src/error.rs
@@ -226,6 +226,10 @@ pub enum SyncError {
     /// Headers2 decompression failed - can trigger fallback to regular headers
     #[error("Headers2 decompression failed: {0}")]
     Headers2DecompressionFailed(String),
+
+    /// Masternode sync failed (QRInfo or MnListDiff processing error)
+    #[error("Masternode sync failed: {0}")]
+    MasternodeSyncFailed(String),
 }

 impl SyncError {
@@ -239,6 +243,7 @@ impl SyncError {
             SyncError::Network(_) => "network",
             SyncError::Storage(_) => "storage",
             SyncError::Headers2DecompressionFailed(_) => "headers2",
+            SyncError::MasternodeSyncFailed(_) => "masternode",
             // Deprecated variant - should not be used
             #[allow(deprecated)]
             SyncError::SyncFailed(_) => "unknown",
@@ -298,6 +303,24 @@ pub enum WalletError {

 /// Type alias for wallet operation results.
 pub type WalletResult<T> = std::result::Result<T, WalletError>;

+impl From<NetworkError> for SyncError {
+    fn from(err: NetworkError) -> Self {
+        SyncError::Network(err.to_string())
+    }
+}
+
+impl From<StorageError> for SyncError {
+    fn from(err: StorageError) -> Self {
+        SyncError::Storage(err.to_string())
+    }
+}
+
+impl From<ValidationError> for SyncError {
+    fn from(err: ValidationError) -> Self {
+        SyncError::Validation(err.to_string())
+    }
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
diff --git a/dash-spv/src/lib.rs b/dash-spv/src/lib.rs
index 96fdb4975..c82cb97ba 100644
--- a/dash-spv/src/lib.rs
+++ b/dash-spv/src/lib.rs
@@ -35,7 +35,6 @@
 //!
 //! // Create and start the client
 //! let mut client = DashSpvClient::new(config.clone(), network, storage, wallet).await?;
-//! client.start().await?;
 //!
 //! let (_command_sender, command_receiver) = tokio::sync::mpsc::unbounded_channel();
 //! let shutdown_token = CancellationToken::new();
diff --git a/dash-spv/src/main.rs b/dash-spv/src/main.rs
index 967974536..22bc58258 100644
--- a/dash-spv/src/main.rs
+++ b/dash-spv/src/main.rs
@@ -299,8 +299,6 @@ async fn run() -> Result<(), Box<dyn std::error::Error>> {
         process::exit(1);
     }

-    tracing::info!("Sync strategy: Sequential");
-
     // Create the wallet manager
     let mut wallet_manager = WalletManager::::new(config.network);
     let wallet_id = wallet_manager.create_wallet_from_mnemonic(
diff --git a/dash-spv/src/network/event.rs b/dash-spv/src/network/event.rs
new file mode 100644
index 000000000..3c0eaac34
--- /dev/null
+++ b/dash-spv/src/network/event.rs
@@ -0,0 +1,67 @@
+//! Network event system for peer connection state changes.
+//!
+//! This module provides events for network layer changes that sync managers
+//! need to react to, such as peer connections and disconnections.
+
+use dashcore::prelude::CoreBlockHeight;
+use std::net::SocketAddr;
+
+/// Events emitted by the network layer.
+///
+/// These events inform sync managers about network state changes,
+/// allowing them to wait for connections before sending requests.
+#[derive(Debug, Clone)]
+pub enum NetworkEvent {
+    /// A peer has connected.
+    PeerConnected {
+        /// Socket address of the connected peer.
+        address: SocketAddr,
+    },
+
+    /// A peer has disconnected.
+    PeerDisconnected {
+        /// Socket address of the disconnected peer.
+        address: SocketAddr,
+    },
+
+    /// Summary of connected peers (emitted after connect/disconnect).
+    ///
+    /// This event provides the current state of connections after any change.
+    PeersUpdated {
+        /// Number of currently connected peers.
+        connected_count: usize,
+        /// Addresses of all connected peers.
+        addresses: Vec<SocketAddr>,
+        /// Best height of connected peers.
+        best_height: Option<CoreBlockHeight>,
+    },
+}
+
+impl NetworkEvent {
+    /// Get a short description of this event for logging.
+    pub fn description(&self) -> String {
+        match self {
+            NetworkEvent::PeerConnected {
+                address,
+            } => {
+                format!("PeerConnected({})", address)
+            }
+            NetworkEvent::PeerDisconnected {
+                address,
+            } => {
+                format!("PeerDisconnected({})", address)
+            }
+            NetworkEvent::PeersUpdated {
+                connected_count,
+                addresses: _,
+                best_height,
+            } => {
+                format!(
+                    "PeersUpdated(connected={}, best_height={})",
+                    connected_count,
+                    best_height.unwrap_or(0)
+                )
+            }
+        }
+    }
+}
diff --git a/dash-spv/src/network/manager.rs b/dash-spv/src/network/manager.rs
index 6c46466d6..1f4232cc6 100644
--- a/dash-spv/src/network/manager.rs
+++ b/dash-spv/src/network/manager.rs
@@ -6,7 +6,7 @@ use std::path::PathBuf;
 use std::sync::atomic::{AtomicUsize, Ordering};
 use std::sync::Arc;
 use std::time::{Duration, SystemTime};
-use tokio::sync::Mutex;
+use tokio::sync::{broadcast, Mutex};
 use tokio::task::JoinSet;
 use tokio::time;

@@ -21,7 +21,8 @@ use crate::network::reputation::{
     misbehavior_scores, positive_scores, PeerReputationManager, ReputationAware,
 };
 use crate::network::{
-    HandshakeManager, Message, MessageDispatcher, MessageType, NetworkManager, Peer,
+    HandshakeManager, Message, MessageDispatcher, MessageType, NetworkEvent, NetworkManager,
+    NetworkRequest, Peer, RequestSender,
 };
 use crate::storage::{PeerStorage, PersistentPeerStorage, PersistentStorage};
 use async_trait::async_trait;
@@ -30,9 +31,11 @@ use dashcore::network::message::NetworkMessage;
 use dashcore::network::message_headers2::CompressionState;
 use dashcore::prelude::CoreBlockHeight;
 use dashcore::Network;
-use tokio::sync::mpsc::UnboundedReceiver;
+use tokio::sync::mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender};
 use tokio_util::sync::CancellationToken;

+const DEFAULT_NETWORK_EVENT_CAPACITY: usize = 10000;
+
 /// Peer network manager
 pub struct PeerNetworkManager {
     /// Peer pool
@@ -71,6 +74,14 @@ pub struct PeerNetworkManager {
     headers2_disabled: Arc<Mutex<HashSet<SocketAddr>>>,
     /// Dispatcher for unbounded and message-type filtered message distribution.
     message_dispatcher: Arc<Mutex<MessageDispatcher>>,
+    /// Request queue sender, cloneable handle for sending requests to the network manager.
+    request_tx: UnboundedSender<NetworkRequest>,
+    /// Request queue receiver (consumed by send loop).
+    request_rx: Arc<Mutex<Option<UnboundedReceiver<NetworkRequest>>>>,
+    /// Round-robin counter for distributing requests across peers.
+    round_robin_counter: Arc<AtomicUsize>,
+    /// Network event bus for notifying about network/peer related changes.
+ network_event_sender: broadcast::Sender, } impl PeerNetworkManager { @@ -90,6 +101,9 @@ impl PeerNetworkManager { // Determine exclusive mode: either explicitly requested or peers were provided let exclusive_mode = config.restrict_to_configured_peers || !config.peers.is_empty(); + // Create request queue for outgoing messages + let (request_tx, request_rx) = unbounded_channel(); + Ok(Self { pool: Arc::new(PeerPool::new()), discovery: Arc::new(discovery), @@ -109,6 +123,10 @@ impl PeerNetworkManager { connected_peer_count: Arc::new(AtomicUsize::new(0)), headers2_disabled: Arc::new(Mutex::new(HashSet::new())), message_dispatcher: Arc::new(Mutex::new(MessageDispatcher::default())), + request_tx, + request_rx: Arc::new(Mutex::new(Some(request_rx))), + round_robin_counter: Arc::new(AtomicUsize::new(0)), + network_event_sender: broadcast::Sender::new(DEFAULT_NETWORK_EVENT_CAPACITY), }) } @@ -120,6 +138,16 @@ impl PeerNetworkManager { self.message_dispatcher.lock().await.message_receiver(message_types) } + /// Get a RequestSender for queueing outgoing network requests. + pub fn request_sender(&self) -> RequestSender { + RequestSender::new(self.request_tx.clone()) + } + + /// Get the network event bus for sharing with other components. + pub fn network_event_sender(&self) -> &broadcast::Sender { + &self.network_event_sender + } + /// Start the network manager pub async fn start(&self) -> Result<(), Error> { log::info!("Starting peer network manager for {:?}", self.network); @@ -170,6 +198,9 @@ impl PeerNetworkManager { // Start maintenance loop self.start_maintenance_loop().await; + // Start request processing task for managers to queue outgoing messages + self.start_request_processor().await; + Ok(()) } @@ -204,6 +235,7 @@ impl PeerNetworkManager { let connected_peer_count = self.connected_peer_count.clone(); let headers2_disabled = self.headers2_disabled.clone(); let message_dispatcher = self.message_dispatcher.clone(); + let network_event_sender = self.network_event_sender.clone(); // Spawn connection task let mut tasks = self.tasks.lock().await; @@ -231,6 +263,19 @@ impl PeerNetworkManager { // Increment connected peer counter on successful add connected_peer_count.fetch_add(1, Ordering::Relaxed); + // Emit peer connected event + let count = connected_peer_count.load(Ordering::Relaxed); + let addresses = pool.get_connected_addresses().await; + let best_height = pool.get_best_height().await; + let _ = network_event_sender.send(NetworkEvent::PeerConnected { + address: addr, + }); + let _ = network_event_sender.send(NetworkEvent::PeersUpdated { + connected_count: count, + addresses, + best_height, + }); + // Add to known addresses addrv2_handler.add_known_address(addr, ServiceFlags::from(1)).await; @@ -244,6 +289,7 @@ impl PeerNetworkManager { connected_peer_count.clone(), headers2_disabled.clone(), message_dispatcher, + network_event_sender.clone(), ) .await; } @@ -288,6 +334,7 @@ impl PeerNetworkManager { connected_peer_count: Arc, headers2_disabled: Arc>>, message_dispatcher: Arc>, + network_event_sender: broadcast::Sender, ) { tokio::spawn(async move { log::debug!("Starting peer reader loop for {}", addr); @@ -591,6 +638,19 @@ impl PeerNetworkManager { if removed.is_some() { // Decrement connected peer counter when a peer is removed connected_peer_count.fetch_sub(1, Ordering::Relaxed); + + // Emit peer disconnected event + let count = connected_peer_count.load(Ordering::Relaxed); + let addresses = pool.get_connected_addresses().await; + let best_height = pool.get_best_height().await; + let 
_ = network_event_sender.send(NetworkEvent::PeerDisconnected { + address: addr, + }); + let _ = network_event_sender.send(NetworkEvent::PeersUpdated { + connected_count: count, + addresses, + best_height, + }); } headers2_disabled.lock().await.remove(&addr); @@ -606,6 +666,69 @@ impl PeerNetworkManager { }); } + /// Start the request processing task for outgoing messages from managers via RequestSender. + async fn start_request_processor(&self) { + // Take the receiver (only one task can own it) + let request_rx = { + let mut rx_guard = self.request_rx.lock().await; + rx_guard.take() + }; + + let Some(mut request_rx) = request_rx else { + log::warn!("Request processor already started or receiver unavailable"); + return; + }; + + let this = self.clone(); + let shutdown_token = self.shutdown_token.clone(); + + let mut tasks = self.tasks.lock().await; + tasks.spawn(async move { + log::info!("Starting request processor task"); + loop { + tokio::select! { + request = request_rx.recv() => { + match request { + Some(NetworkRequest::SendMessage(msg)) => { + log::debug!("Request processor: sending {}", msg.cmd()); + // Spawn each send concurrently to allow parallel requests across peers. + let this = this.clone(); + tokio::spawn(async move { + let result = match &msg { + // Distribute across peers for parallel sync + NetworkMessage::GetCFHeaders(_) + | NetworkMessage::GetCFilters(_) + | NetworkMessage::GetData(_) + | NetworkMessage::GetMnListD(_) + | NetworkMessage::GetQRInfo(_) + | NetworkMessage::GetHeaders(_) + | NetworkMessage::GetHeaders2(_) => { + this.send_distributed(msg).await + } + _ => { + this.send_to_single_peer(msg).await + } + }; + if let Err(e) = result { + log::error!("Request processor: failed to send message: {}", e); + } + }); + } + None => { + log::info!("Request processor: channel closed"); + break; + } + } + } + _ = shutdown_token.cancelled() => { + log::info!("Request processor: shutting down"); + break; + } + } + } + }); + } + /// Start peer connection maintenance loop async fn start_maintenance_loop(&self) { let pool = self.pool.clone(); @@ -944,6 +1067,104 @@ impl PeerNetworkManager { .map_err(|e| NetworkError::ProtocolError(format!("Failed to send to {}: {}", addr, e))) } + /// Send a message distributed across connected peers using round-robin selection. 
+ /// + /// Peer selection and message handling based on message type: + /// - Filters (GetCFHeaders/GetCFilters): requires peers that support compact filters + /// - Headers (GetHeaders/GetHeaders2): prefers headers2 peers, upgrades GetHeaders if supported + /// - Other (blocks, masternode data, etc.): uses all connected peers + async fn send_distributed(&self, message: NetworkMessage) -> NetworkResult<()> { + let peers = self.pool.get_all_peers().await; + + if peers.is_empty() { + return Err(NetworkError::ConnectionFailed("No connected peers".to_string())); + } + + // Select eligible peers based on message type + let (selected_peers, require_capability) = match &message { + NetworkMessage::GetCFHeaders(_) | NetworkMessage::GetCFilters(_) => { + // Filter requests require compact filter support + let filter_peers: Vec<_> = { + let mut result = Vec::new(); + for (addr, peer) in &peers { + let peer_guard = peer.read().await; + if peer_guard.supports_compact_filters() { + result.push((*addr, peer.clone())); + } + } + result + }; + (filter_peers, true) + } + NetworkMessage::GetHeaders(_) | NetworkMessage::GetHeaders2(_) => { + // Prefer headers2 peers, fall back to all + let headers2_peers: Vec<_> = { + let mut result = Vec::new(); + for (addr, peer) in &peers { + let peer_guard = peer.read().await; + if peer_guard.supports_headers2() + && !self.headers2_disabled.lock().await.contains(addr) + { + result.push((*addr, peer.clone())); + } + } + result + }; + if headers2_peers.is_empty() { + (peers.clone(), false) + } else { + (headers2_peers, false) + } + } + _ => { + // All other messages use all connected peers + (peers.clone(), false) + } + }; + + if selected_peers.is_empty() { + return if require_capability { + Err(NetworkError::ProtocolError("No peers support required capability".to_string())) + } else { + Err(NetworkError::ConnectionFailed("No connected peers".to_string())) + }; + } + + // Round-robin selection + let idx = self.round_robin_counter.fetch_add(1, Ordering::Relaxed) % selected_peers.len(); + let (addr, peer) = &selected_peers[idx]; + + // Upgrade GetHeaders to GetHeaders2 if peer supports it + let message = match message { + NetworkMessage::GetHeaders(get_headers) => { + let peer_supports_headers2 = { + let peer_guard = peer.read().await; + peer_guard.can_request_headers2() + }; + if peer_supports_headers2 && !self.headers2_disabled.lock().await.contains(addr) { + log::debug!("Upgrading GetHeaders to GetHeaders2 for peer {}", addr); + NetworkMessage::GetHeaders2(get_headers) + } else { + NetworkMessage::GetHeaders(get_headers) + } + } + other => other, + }; + + log::debug!( + "Distributing {} request to peer {} (round-robin idx {})", + message.cmd(), + addr, + idx + ); + + let mut peer_guard = peer.write().await; + peer_guard + .send_message(message) + .await + .map_err(|e| NetworkError::ProtocolError(format!("Failed to send to {}: {}", addr, e))) + } + /// Broadcast a message to all connected peers pub async fn broadcast(&self, message: NetworkMessage) -> Vec> { let peers = self.pool.get_all_peers().await; @@ -1078,6 +1299,10 @@ impl Clone for PeerNetworkManager { connected_peer_count: self.connected_peer_count.clone(), headers2_disabled: self.headers2_disabled.clone(), message_dispatcher: self.message_dispatcher.clone(), + request_tx: self.request_tx.clone(), + request_rx: self.request_rx.clone(), + round_robin_counter: self.round_robin_counter.clone(), + network_event_sender: self.network_event_sender.clone(), } } } @@ -1093,6 +1318,10 @@ impl NetworkManager for 
PeerNetworkManager { self.message_dispatcher.lock().await.message_receiver(types) } + fn request_sender(&self) -> RequestSender { + PeerNetworkManager::request_sender(self) + } + async fn connect(&mut self) -> NetworkResult<()> { self.start().await.map_err(|e| NetworkError::ConnectionFailed(e.to_string())) } @@ -1222,4 +1451,8 @@ impl NetworkManager for PeerNetworkManager { false } + + fn subscribe_network_events(&self) -> broadcast::Receiver { + self.network_event_sender.subscribe() + } } diff --git a/dash-spv/src/network/mod.rs b/dash-spv/src/network/mod.rs index dbcea98b8..70a12b477 100644 --- a/dash-spv/src/network/mod.rs +++ b/dash-spv/src/network/mod.rs @@ -3,6 +3,7 @@ pub mod addrv2; pub mod constants; pub mod discovery; +mod event; pub mod handshake; pub mod manager; mod message_dispatcher; @@ -14,11 +15,21 @@ mod message_type; #[cfg(test)] mod tests; -use crate::error::NetworkResult; +pub use event::NetworkEvent; + use async_trait::async_trait; +use tokio::sync::{broadcast, mpsc}; + +use crate::error::NetworkResult; +use crate::NetworkError; use dashcore::network::message::NetworkMessage; +use dashcore::network::message_blockdata::{GetHeadersMessage, Inventory}; +use dashcore::network::message_filter::{GetCFHeaders, GetCFilters}; +use dashcore::network::message_qrinfo::GetQRInfo; +use dashcore::network::message_sml::GetMnListDiff; use dashcore::prelude::CoreBlockHeight; use dashcore::BlockHash; +use dashcore_hashes::Hash; pub use handshake::{HandshakeManager, HandshakeState}; pub use manager::PeerNetworkManager; pub use message_dispatcher::{Message, MessageDispatcher}; @@ -27,6 +38,98 @@ pub use peer::Peer; use std::net::SocketAddr; use tokio::sync::mpsc::UnboundedReceiver; +const FILTER_TYPE_DEFAULT: u8 = 0; + +/// Request to send to network. +#[derive(Debug)] +pub enum NetworkRequest { + /// Send a message to the network. + SendMessage(NetworkMessage), +} + +/// Handle for managers to queue outgoing network requests. +#[derive(Clone)] +pub struct RequestSender { + tx: mpsc::UnboundedSender, +} + +impl RequestSender { + /// Create a new RequestSender. + pub fn new(tx: mpsc::UnboundedSender) -> Self { + Self { + tx, + } + } + + /// Queue a message to be sent to the network. 
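// Manager-side sketch (illustrative; `network`, `tip_hash`, `start_height`,
// and `stop_hash` are assumed in scope): components clone a RequestSender to
// queue typed requests, and subscribe to the event bus for peer changes.
//
//     let requests = network.request_sender();
//     requests.request_block_headers(tip_hash)?;
//     requests.request_filters(start_height, stop_hash)?;
//
//     let mut events = network.subscribe_network_events();
//     while let Ok(event) = events.recv().await {
//         tracing::debug!("network event: {}", event.description());
//     }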
+ fn send_message(&self, msg: NetworkMessage) -> NetworkResult<()> { + self.tx + .send(NetworkRequest::SendMessage(msg)) + .map_err(|e| NetworkError::ProtocolError(e.to_string())) + } + + pub fn request_inventory(&self, inventory: Vec) -> NetworkResult<()> { + self.send_message(NetworkMessage::GetData(inventory)) + } + + pub fn request_block_headers(&self, start_hash: BlockHash) -> NetworkResult<()> { + self.send_message(NetworkMessage::GetHeaders(GetHeadersMessage::new( + vec![start_hash], + BlockHash::all_zeros(), + ))) + } + + pub fn request_filter_headers( + &self, + start_height: u32, + stop_hash: BlockHash, + ) -> NetworkResult<()> { + self.send_message(NetworkMessage::GetCFHeaders(GetCFHeaders { + filter_type: FILTER_TYPE_DEFAULT, + start_height, + stop_hash, + })) + } + + pub fn request_filters(&self, start_height: u32, stop_hash: BlockHash) -> NetworkResult<()> { + self.send_message(NetworkMessage::GetCFilters(GetCFilters { + filter_type: FILTER_TYPE_DEFAULT, + start_height, + stop_hash, + })) + } + + pub fn request_mnlist_diff( + &self, + base_block_hash: BlockHash, + block_hash: BlockHash, + ) -> NetworkResult<()> { + self.send_message(NetworkMessage::GetMnListD(GetMnListDiff { + base_block_hash, + block_hash, + })) + } + + pub fn request_qr_info( + &self, + known_block_hashes: Vec, + target_block_hash: BlockHash, + extra_share: bool, + ) -> NetworkResult<()> { + self.send_message(NetworkMessage::GetQRInfo(GetQRInfo { + base_block_hashes: known_block_hashes, + block_request_hash: target_block_hash, + extra_share, + })) + } + + pub fn request_blocks(&self, hashes: Vec) -> NetworkResult<()> { + self.send_message(NetworkMessage::GetData( + hashes.into_iter().map(Inventory::Block).collect(), + )) + } +} + /// Network manager trait for abstracting network operations. #[async_trait] pub trait NetworkManager: Send + Sync + 'static { @@ -36,6 +139,11 @@ pub trait NetworkManager: Send + Sync + 'static { /// Creates and returns a receiver that yields only messages of the matching the provided message types. async fn message_receiver(&mut self, types: &[MessageType]) -> UnboundedReceiver; + /// Get a sender for queuing outgoing network requests. + /// + /// Messages sent via this sender are delivered to the network asynchronously. + fn request_sender(&self) -> RequestSender; + /// Connect to the network. async fn connect(&mut self) -> NetworkResult<()>; @@ -117,4 +225,9 @@ pub trait NetworkManager: Send + Sync + 'static { ) .await; } + + /// Subscribe to network events (peer connections, disconnections). + /// + /// Returns a broadcast receiver for network events. + fn subscribe_network_events(&self) -> broadcast::Receiver; } diff --git a/dash-spv/src/storage/block_headers.rs b/dash-spv/src/storage/block_headers.rs index 36d8dabfb..f75332f3a 100644 --- a/dash-spv/src/storage/block_headers.rs +++ b/dash-spv/src/storage/block_headers.rs @@ -41,7 +41,7 @@ impl BlockHeaderTip { } #[async_trait] -pub trait BlockHeaderStorage { +pub trait BlockHeaderStorage: Send + Sync + 'static { async fn store_headers(&mut self, headers: &[BlockHeader]) -> StorageResult<()>; async fn store_headers_at_height( @@ -50,6 +50,17 @@ pub trait BlockHeaderStorage { height: u32, ) -> StorageResult<()>; + //TODO - change API of the BlockHeaderStorage trait to accept (store) and return (load) + // HashedBlockHeaders instead of BlockHeaders to avoid unnecessary hashing and remove + // the two store_hashed_headers methods below. 
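// Sketch of the intended call pattern (illustrative; `headers` and `storage`
// are assumed in scope): block hashes are computed once up front and reused by
// the storage layer instead of being recomputed per header.
//
//     let hashed: Vec<HashedBlockHeader> =
//         headers.iter().map(HashedBlockHeader::from).collect();
//     storage.store_hashed_headers(&hashed).await?;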
+ async fn store_hashed_headers(&mut self, headers: &[HashedBlockHeader]) -> StorageResult<()>; + + async fn store_hashed_headers_at_height( + &mut self, + headers: &[HashedBlockHeader], + height: u32, + ) -> StorageResult<()>; + async fn load_headers(&self, range: Range) -> StorageResult>; async fn get_header(&self, height: u32) -> StorageResult> { @@ -144,11 +155,24 @@ impl BlockHeaderStorage for PersistentBlockHeaderStorage { headers: &[BlockHeader], height: u32, ) -> StorageResult<()> { - let mut height = height; let headers = headers.iter().map(HashedBlockHeader::from).collect::>(); + self.store_hashed_headers_at_height(&headers, height).await + } + + async fn store_hashed_headers(&mut self, headers: &[HashedBlockHeader]) -> StorageResult<()> { + let height = self.block_headers.read().await.next_height(); + self.store_hashed_headers_at_height(headers, height).await + } + + async fn store_hashed_headers_at_height( + &mut self, + headers: &[HashedBlockHeader], + height: u32, + ) -> StorageResult<()> { + let mut height = height; - self.block_headers.write().await.store_items_at_height(&headers, height).await?; + self.block_headers.write().await.store_items_at_height(headers, height).await?; for header in headers { self.header_hash_index.insert(*header.hash(), height); diff --git a/dash-spv/src/storage/filter_headers.rs b/dash-spv/src/storage/filter_headers.rs index 0674d8036..da077871c 100644 --- a/dash-spv/src/storage/filter_headers.rs +++ b/dash-spv/src/storage/filter_headers.rs @@ -8,7 +8,7 @@ use std::path::PathBuf; use tokio::sync::RwLock; #[async_trait] -pub trait FilterHeaderStorage { +pub trait FilterHeaderStorage: Send + Sync + 'static { async fn store_filter_headers(&mut self, headers: &[FilterHeader]) -> StorageResult<()>; async fn store_filter_headers_at_height( diff --git a/dash-spv/src/storage/filters.rs b/dash-spv/src/storage/filters.rs index 83a7d2495..33bab100d 100644 --- a/dash-spv/src/storage/filters.rs +++ b/dash-spv/src/storage/filters.rs @@ -9,10 +9,12 @@ use crate::{ }; #[async_trait] -pub trait FilterStorage { +pub trait FilterStorage: Send + Sync + 'static { async fn store_filter(&mut self, height: u32, filter: &[u8]) -> StorageResult<()>; async fn load_filters(&self, range: Range) -> StorageResult>>; + + async fn filter_tip_height(&self) -> StorageResult; } pub struct PersistentFilterStorage { @@ -56,4 +58,8 @@ impl FilterStorage for PersistentFilterStorage { async fn load_filters(&self, range: Range) -> StorageResult>> { self.filters.write().await.get_items(range).await } + + async fn filter_tip_height(&self) -> StorageResult { + Ok(self.filters.read().await.tip_height().unwrap_or(0)) + } } diff --git a/dash-spv/src/storage/mod.rs b/dash-spv/src/storage/mod.rs index 841df7065..c967ff58d 100644 --- a/dash-spv/src/storage/mod.rs +++ b/dash-spv/src/storage/mod.rs @@ -26,23 +26,21 @@ use std::time::Duration; use tokio::sync::RwLock; use crate::error::StorageResult; -use crate::storage::block_headers::{BlockHeaderTip, PersistentBlockHeaderStorage}; use crate::storage::chainstate::PersistentChainStateStorage; -use crate::storage::filter_headers::PersistentFilterHeaderStorage; -use crate::storage::filters::PersistentFilterStorage; use crate::storage::lockfile::LockFile; -use crate::storage::masternode::PersistentMasternodeStateStorage; use crate::storage::metadata::PersistentMetadataStorage; use crate::storage::transactions::PersistentTransactionStorage; -use crate::types::{HashedBlock, MempoolState, UnconfirmedTransaction}; +use crate::types::{HashedBlock, 
HashedBlockHeader, MempoolState, UnconfirmedTransaction}; use crate::{ChainState, ClientConfig}; -pub use crate::storage::block_headers::BlockHeaderStorage; +pub use crate::storage::block_headers::{ + BlockHeaderStorage, BlockHeaderTip, PersistentBlockHeaderStorage, +}; pub use crate::storage::blocks::{BlockStorage, PersistentBlockStorage}; pub use crate::storage::chainstate::ChainStateStorage; -pub use crate::storage::filter_headers::FilterHeaderStorage; -pub use crate::storage::filters::FilterStorage; -pub use crate::storage::masternode::MasternodeStateStorage; +pub use crate::storage::filter_headers::{FilterHeaderStorage, PersistentFilterHeaderStorage}; +pub use crate::storage::filters::{FilterStorage, PersistentFilterStorage}; +pub use crate::storage::masternode::{MasternodeStateStorage, PersistentMasternodeStateStorage}; pub use crate::storage::metadata::MetadataStorage; pub use crate::storage::peers::{PeerStorage, PersistentPeerStorage}; pub use crate::storage::transactions::TransactionStorage; @@ -77,6 +75,26 @@ pub trait StorageManager: /// Stops all background tasks and persists the data. async fn shutdown(&mut self); + + /// Get shared reference to header storage for parallel access. + fn header_storage_ref(&self) -> Option>> { + None + } + + /// Get shared reference to filter header storage for parallel access. + fn filter_header_storage_ref(&self) -> Option>> { + None + } + + /// Get shared reference to filter storage for parallel access. + fn filter_storage_ref(&self) -> Option>> { + None + } + + /// Get shared reference to block storage for parallel access. + fn block_storage_ref(&self) -> Option>> { + None + } } /// Disk-based storage manager with segmented files and async background saving. @@ -195,6 +213,41 @@ impl DiskStorageManager { } } + /// Get a reference to the block headers storage. + pub fn header_storage(&self) -> Arc> { + Arc::clone(&self.block_headers) + } + + /// Get a reference to the filter headers storage. + pub fn filter_header_storage(&self) -> Arc> { + Arc::clone(&self.filter_headers) + } + + /// Get a reference to the filters storage. + pub fn filter_storage(&self) -> Arc> { + Arc::clone(&self.filters) + } + + /// Get a reference to the block storage. + pub fn block_storage(&self) -> Arc> { + Arc::clone(&self.blocks) + } + + /// Get a reference to the transaction storage. + pub fn transaction_storage(&self) -> Arc> { + Arc::clone(&self.transactions) + } + + /// Get a reference to the metadata storage. + pub fn metadata_storage(&self) -> Arc> { + Arc::clone(&self.metadata) + } + + /// Get a reference to the masternode state storage. 
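// Sketch (illustrative; `storage` is assumed in scope): managers take shared
// handles for parallel access instead of holding the whole storage manager.
//
//     if let Some(headers) = storage.header_storage_ref() {
//         let tip = headers.read().await.get_tip().await;
//     }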
+ pub fn masternode_storage(&self) -> Arc> { + Arc::clone(&self.masternodestate) + } + async fn persist(&self) { let storage_path = &self.storage_path; @@ -261,6 +314,22 @@ impl StorageManager for DiskStorageManager { self.persist().await; } + + fn header_storage_ref(&self) -> Option>> { + Some(Arc::clone(&self.block_headers)) + } + + fn filter_header_storage_ref(&self) -> Option>> { + Some(Arc::clone(&self.filter_headers)) + } + + fn filter_storage_ref(&self) -> Option>> { + Some(Arc::clone(&self.filters)) + } + + fn block_storage_ref(&self) -> Option>> { + Some(Arc::clone(&self.blocks)) + } } #[async_trait] @@ -277,6 +346,18 @@ impl BlockHeaderStorage for DiskStorageManager { self.block_headers.write().await.store_headers_at_height(headers, height).await } + async fn store_hashed_headers(&mut self, headers: &[HashedBlockHeader]) -> StorageResult<()> { + self.block_headers.write().await.store_hashed_headers(headers).await + } + + async fn store_hashed_headers_at_height( + &mut self, + headers: &[HashedBlockHeader], + height: u32, + ) -> StorageResult<()> { + self.block_headers.write().await.store_hashed_headers_at_height(headers, height).await + } + async fn load_headers(&self, range: Range) -> StorageResult> { self.block_headers.read().await.load_headers(range).await } @@ -341,6 +422,10 @@ impl filters::FilterStorage for DiskStorageManager { async fn load_filters(&self, range: Range) -> StorageResult>> { self.filters.read().await.load_filters(range).await } + + async fn filter_tip_height(&self) -> StorageResult { + self.filters.read().await.filter_tip_height().await + } } #[async_trait] diff --git a/dash-spv/src/sync/block_headers/manager.rs b/dash-spv/src/sync/block_headers/manager.rs new file mode 100644 index 000000000..3655b9eea --- /dev/null +++ b/dash-spv/src/sync/block_headers/manager.rs @@ -0,0 +1,268 @@ +//! Headers manager for parallel sync. +//! +//! Downloads and validates block headers from peers. Handles both initial sync +//! and post-sync header updates. Emits BlockHeadersStored events for other managers. +//! +//! Uses HeadersPipeline for parallel downloads across checkpoint-defined segments +//! during initial sync. The same pipeline is reused for post-sync updates. + +use std::collections::HashMap; +use std::sync::Arc; +use std::time::Instant; + +use crate::chain::CheckpointManager; +use crate::error::{SyncError, SyncResult}; +use crate::network::RequestSender; +use crate::storage::{BlockHeaderStorage, BlockHeaderTip}; +use crate::sync::block_headers::HeadersPipeline; +use crate::sync::{BlockHeadersProgress, SyncEvent, SyncManager, SyncState}; +use crate::types::HashedBlockHeader; +use crate::validation::{BlockHeaderValidator, Validator}; +use dashcore::block::Header; +use dashcore::network::message_blockdata::Inventory; +use dashcore::BlockHash; +use tokio::sync::RwLock; + +/// Headers manager for downloading and validating block headers. +/// +/// This manager handles: +/// - Initial header sync using parallel pipeline (checkpoint-based segments) +/// - Post-sync header updates via inventory announcements +/// +/// Generic over `H: BlockHeaderStorage` to allow different storage implementations. +pub struct BlockHeadersManager { + /// Current progress of the manager. + pub(super) progress: BlockHeadersProgress, + /// Block header storage. + pub(super) header_storage: Arc>, + /// Pipeline for parallel header downloads (used for both initial sync and post-sync). 
+ pub(super) pipeline: HeadersPipeline, + /// Pending block announcements waiting for headers message (post-sync). + pub(super) pending_announcements: HashMap, +} + +impl std::fmt::Debug for BlockHeadersManager { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + f.debug_struct("BlockHeadersManager") + .field("progress", &self.progress) + .field("pipeline", &self.pipeline) + .finish_non_exhaustive() + } +} + +impl BlockHeadersManager { + /// Create a new headers manager with the given storage and checkpoint manager. + pub fn new(header_storage: Arc>, checkpoint_manager: Arc) -> Self { + Self { + progress: BlockHeadersProgress::default(), + header_storage, + pipeline: HeadersPipeline::new(checkpoint_manager.clone()), + pending_announcements: HashMap::new(), + } + } + + pub(super) async fn tip(&self) -> SyncResult { + self.header_storage + .read() + .await + .get_tip() + .await + .ok_or_else(|| SyncError::MissingDependency("storage not initialized".to_string())) + } + + /// Validate and store headers batch. + async fn store_headers(&mut self, headers: &[HashedBlockHeader]) -> SyncResult { + debug_assert!(!headers.is_empty()); + + // Validate batch for internal continuity and PoW + BlockHeaderValidator::new().validate(headers)?; + + // Store headers + self.header_storage.write().await.store_hashed_headers(headers).await?; + + let tip = self.tip().await?; + + // Update state + self.progress.update_current_height(tip.height()); + self.progress.add_processed(headers.len() as u32); + + Ok(tip) + } + + /// Handle incoming headers message (used for both initial sync and post-sync). + pub(super) async fn handle_headers_pipeline( + &mut self, + headers: &[Header], + requests: &RequestSender, + ) -> SyncResult> { + if !self.pipeline.is_initialized() { + // Pipeline not initialized (shouldn't happen in normal flow) + tracing::warn!("Received headers but pipeline not initialized"); + return Ok(vec![]); + } + + let was_syncing = self.state() == SyncState::Syncing; + + // Route headers to the pipeline, validates checkpoint match. 
+ let matched = self.pipeline.receive_headers(headers)?; + + if matched.is_none() && !headers.is_empty() { + tracing::debug!( + "Headers not matched by pipeline (prev_hash: {}), may be post-sync update", + headers[0].prev_blockhash + ); + } + + // Send more requests if capacity available + let sent = self.pipeline.send_pending(requests)?; + if sent > 0 { + tracing::debug!("Pipeline sent {} more requests", sent); + } + + // Process ready-to-store segments + let mut events = Vec::new(); + let ready_batches = self.pipeline.take_ready_to_store(); + + for (_start_height, batch_headers) in ready_batches { + if !batch_headers.is_empty() { + // Validate chain continuity with current tip + let tip = self.tip().await?; + if batch_headers[0].header().prev_blockhash != *tip.hash() { + return Err(SyncError::Validation(format!( + "Segment chain break: expected prev {}, got {}", + tip.hash(), + batch_headers[0].header().prev_blockhash + ))); + } + + // Clear any pending announcements for headers we're storing + for header in &batch_headers { + self.pending_announcements.remove(header.hash()); + } + + let new_tip = self.store_headers(&batch_headers).await?; + // Update target if we've exceeded it (post-sync case) + if new_tip.height() > self.progress.target_height() { + self.progress.update_target_height(new_tip.height()); + } + events.push(SyncEvent::BlockHeadersStored { + tip_height: new_tip.height(), + }); + } + } + + if was_syncing && self.pipeline.is_complete() { + // If blocks were announced during sync, request them before finalizing the sync + if !self.pending_announcements.is_empty() { + tracing::info!( + "Pipeline complete but {} blocks announced during sync, requesting headers", + self.pending_announcements.len() + ); + self.pipeline.reset_tip_segment(); + self.pipeline.send_pending(requests)?; + } else { + // Synced to the tip and no pending announcements, finalize and emit event + let tip = self.tip().await?; + self.progress.update_target_height(tip.height()); + self.progress.set_state(SyncState::Synced); + tracing::info!("Headers sync complete at height {}", tip.height()); + events.push(SyncEvent::BlockHeaderSyncComplete { + tip_height: tip.height(), + }); + } + } + + self.progress.bump_last_activity(); + Ok(events) + } + + /// Handle inventory announcements for new blocks. + /// + /// During initial sync, Dash Core sends inv (not headers2) because it doesn't + /// think we have the parent block. We track these announcements so we can + /// request headers after sync completes. + /// + /// When synced, we expect unsolicited headers2 announcements. The tick handler + /// uses a timeout to send fallback GetHeaders if headers don't arrive. 
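// Flow sketch (illustrative): block announcements that arrive mid-sync are
// parked, then turned into header requests once the pipeline drains.
//
//     inv(Block h) arrives  -> pending_announcements.insert(h, Instant::now())
//     pipeline completes    -> reset_tip_segment(); send_pending(&requests)
//     headers stored        -> pending_announcements.remove(hash)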
+ pub(super) async fn handle_inventory( + &mut self, + inv: &[Inventory], + _requests: &RequestSender, + ) -> SyncResult<()> { + for inv_item in inv { + if let Inventory::Block(block_hash) = inv_item { + // Check if already pending + if self.pending_announcements.contains_key(block_hash) { + continue; + } + + // Check if we already have this block + if let Ok(Some(_)) = + self.header_storage.read().await.get_header_height_by_hash(block_hash).await + { + continue; + } + + tracing::info!("New block announced via inv: {}", block_hash); + self.pending_announcements.insert(*block_hash, Instant::now()); + } + } + + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::chain::checkpoints::testnet_checkpoints; + use crate::network::MessageType; + use crate::storage::{DiskStorageManager, PersistentBlockHeaderStorage}; + use crate::sync::{ManagerIdentifier, SyncManagerProgress}; + + type TestBlockHeadersManager = BlockHeadersManager; + + fn create_test_checkpoint_manager() -> Arc { + Arc::new(CheckpointManager::new(testnet_checkpoints())) + } + + async fn create_test_manager() -> TestBlockHeadersManager { + let storage = DiskStorageManager::with_temp_dir().await.unwrap(); + let checkpoint_manager = create_test_checkpoint_manager(); + BlockHeadersManager::new(storage.header_storage(), checkpoint_manager) + } + + #[tokio::test] + async fn test_block_headers_manager_new() { + let manager = create_test_manager().await; + assert_eq!(manager.identifier(), ManagerIdentifier::BlockHeader); + assert_eq!(manager.state(), SyncState::Initializing); + assert_eq!(manager.wanted_message_types(), vec![MessageType::Headers, MessageType::Inv]); + } + + #[tokio::test] + async fn test_headers_manager_progress() { + let mut manager = create_test_manager().await; + manager.progress.update_current_height(100); + manager.progress.update_target_height(200); + manager.progress.add_processed(50); + + let progress = manager.progress(); + if let SyncManagerProgress::BlockHeaders(progress) = progress { + assert_eq!(progress.state(), SyncState::Initializing); + assert_eq!(progress.current_height(), 100); + assert_eq!(progress.target_height(), 200); + assert_eq!(progress.processed(), 50); + assert!(progress.last_activity().elapsed().as_secs() < 1); + } else { + panic!("Expected SyncManagerProgress::BlockHeaders"); + } + } + + #[tokio::test] + async fn test_headers_manager_has_pipeline() { + let manager = create_test_manager().await; + assert!(!manager.pipeline.is_initialized()); + assert_eq!(manager.pipeline.segment_count(), 0); + } +} diff --git a/dash-spv/src/sync/block_headers/mod.rs b/dash-spv/src/sync/block_headers/mod.rs new file mode 100644 index 000000000..2493e9d40 --- /dev/null +++ b/dash-spv/src/sync/block_headers/mod.rs @@ -0,0 +1,9 @@ +mod manager; +mod pipeline; +mod progress; +mod segment_state; +mod sync_manager; + +pub use manager::BlockHeadersManager; +pub(crate) use pipeline::HeadersPipeline; +pub use progress::BlockHeadersProgress; diff --git a/dash-spv/src/sync/block_headers/pipeline.rs b/dash-spv/src/sync/block_headers/pipeline.rs new file mode 100644 index 000000000..40f0b42c5 --- /dev/null +++ b/dash-spv/src/sync/block_headers/pipeline.rs @@ -0,0 +1,396 @@ +//! Headers pipeline for parallel downloads across checkpoint-defined segments. +//! +//! Uses checkpoints to create independent download segments that can be +//! downloaded in parallel from multiple peers. Each segment tracks its own +//! progress and buffers headers until ready for ordered storage. 
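// Conceptual layout (illustrative): checkpoints carve the header chain into
// independently downloadable segments; buffered headers are flushed to storage
// only once every earlier segment has completed, preserving chain order.
//
//     segment 0: start..cp1   (bounded; completion validated against cp1)
//     segment 1: cp1..cp2     (bounded; completion validated against cp2)
//     segment N: cpN..tip     (open-ended; completes on an empty response)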
+ +use std::sync::Arc; + +use dashcore::block::Header; +use dashcore::BlockHash; + +use crate::chain::CheckpointManager; +use crate::error::SyncResult; +use crate::network::RequestSender; +use crate::sync::block_headers::segment_state::SegmentState; +use crate::types::HashedBlockHeader; + +/// Pipeline for parallel header downloads across checkpoint-defined segments. +/// +/// Divides the blockchain into segments based on checkpoints and downloads +/// them in parallel. Headers are buffered and stored in order to maintain +/// chain consistency. +pub struct HeadersPipeline { + /// Download segments (ordered by height). + segments: Vec, + /// Index of the next segment to store (all previous must be complete). + next_to_store: usize, + /// Checkpoint manager reference. + checkpoint_manager: Arc, + /// Whether the pipeline has been initialized. + initialized: bool, +} + +impl std::fmt::Debug for HeadersPipeline { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + f.debug_struct("HeadersPipeline") + .field("segments", &self.segments) + .field("next_to_store", &self.next_to_store) + .field("initialized", &self.initialized) + .finish_non_exhaustive() + } +} + +impl HeadersPipeline { + /// Create a new headers pipeline with the given checkpoint manager. + pub fn new(checkpoint_manager: Arc) -> Self { + Self { + segments: Vec::new(), + next_to_store: 0, + checkpoint_manager, + initialized: false, + } + } + + /// Initialize the pipeline for downloading from current_height to target_height. + pub fn init(&mut self, current_height: u32, current_hash: BlockHash, target_height: u32) { + self.segments.clear(); + self.next_to_store = 0; + self.initialized = true; + + // Get checkpoint heights and find which ones are relevant + let checkpoint_heights = self.checkpoint_manager.checkpoint_heights(); + + // Find checkpoints between current_height and target_height + let mut boundaries: Vec<(u32, BlockHash)> = Vec::new(); + + // Start from current position + boundaries.push((current_height, current_hash)); + + // Add checkpoints that are above current_height + for &height in checkpoint_heights { + if height > current_height && height <= target_height { + if let Some(cp) = self.checkpoint_manager.get_checkpoint(height) { + boundaries.push((height, cp.block_hash)); + } + } + } + + // Sort by height + boundaries.sort_by_key(|(h, _)| *h); + + // Create segments between consecutive boundaries + for i in 0..boundaries.len() { + let (start_height, start_hash) = boundaries[i]; + let (target_height, target_hash) = if i + 1 < boundaries.len() { + let (h, hash) = boundaries[i + 1]; + (Some(h), Some(hash)) + } else { + // Last segment goes to tip (unknown target) + (None, None) + }; + + let segment = + SegmentState::new(i, start_height, start_hash, target_height, target_hash); + + tracing::info!( + "Created segment {}: {} -> {:?} (start_hash: {})", + i, + start_height, + target_height, + start_hash + ); + + self.segments.push(segment); + } + + tracing::info!( + "HeadersPipeline initialized with {} segments for heights {} to {}", + self.segments.len(), + current_height, + target_height + ); + } + + /// Get the number of segments in the pipeline. + pub fn segment_count(&self) -> usize { + self.segments.len() + } + + /// Send pending requests for active segments. + /// Returns the number of requests sent. 
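// Usage sketch (illustrative; `tip`, `peer_best_height`, and `requests` are
// assumed in scope): the pipeline is seeded from the stored tip and the best
// known peer height, then initial requests fan out across segments.
//
//     pipeline.init(tip.height(), *tip.hash(), peer_best_height);
//     let sent = pipeline.send_pending(&requests)?;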
+    pub fn send_pending(&mut self, requests: &RequestSender) -> SyncResult<usize> {
+        let mut sent = 0;
+        for segment in &mut self.segments {
+            // Skip completed segments
+            if segment.complete {
+                continue;
+            }
+            while segment.can_send() {
+                segment.send_request(requests)?;
+                sent += 1;
+            }
+        }
+        Ok(sent)
+    }
+
+    /// Try to match incoming headers to the correct segment.
+    /// Returns the segment index if matched, or None if headers don't belong to any segment.
+    /// Returns an error if checkpoint validation fails.
+    pub fn receive_headers(&mut self, headers: &[Header]) -> SyncResult<Option<usize>> {
+        if headers.is_empty() {
+            // Empty response means the peer has no more headers after our locator.
+            // Route to the tip segment (target_height is None) if it has in-flight requests.
+            // Middle segments complete via checkpoint validation, not empty responses.
+            for segment in &mut self.segments {
+                if !segment.complete
+                    && segment.target_height.is_none()
+                    && segment.coordinator.active_count() > 0
+                {
+                    tracing::debug!(
+                        "Routing empty response to tip segment {} at height {}",
+                        segment.segment_id,
+                        segment.current_height
+                    );
+                    segment.receive_headers(headers)?;
+                    return Ok(Some(segment.segment_id));
+                }
+            }
+            return Ok(None);
+        }
+
+        let prev_hash = headers[0].prev_blockhash;
+
+        // Find the segment that matches
+        for (idx, segment) in self.segments.iter_mut().enumerate() {
+            if segment.matches(&prev_hash) {
+                // If tip segment was completed but receives new headers (post-sync),
+                // reset it so take_ready_to_store() can process the new headers
+                if segment.complete && segment.target_height.is_none() {
+                    segment.complete = false;
+                    self.next_to_store = idx;
+                    tracing::debug!(
+                        "Tip segment {} receiving post-sync headers, reset for continued processing",
+                        segment.segment_id
+                    );
+                }
+                segment.receive_headers(headers)?;
+                return Ok(Some(segment.segment_id));
+            }
+        }
+
+        tracing::warn!(
+            "Received {} headers with prev_hash {} but no segment matched",
+            headers.len(),
+            prev_hash
+        );
+        Ok(None)
+    }
+
+    /// Get segments that are ready to store (complete and in order).
+    /// Returns tuples of (start_height, headers).
+    pub fn take_ready_to_store(&mut self) -> Vec<(u32, Vec<HashedBlockHeader>)> {
+        let mut ready = Vec::new();
+
+        while self.next_to_store < self.segments.len() {
+            // Check if segment has buffered headers
+            if self.segments[self.next_to_store].buffered_headers.is_empty() {
+                break;
+            }
+
+            // For non-first segments, check if previous segment completed
+            if self.next_to_store > 0 {
+                let prev_complete = self.segments[self.next_to_store - 1].complete;
+                let prev_empty = self.segments[self.next_to_store - 1].buffered_headers.is_empty();
+                if !prev_complete || !prev_empty {
+                    break;
+                }
+            }
+
+            let segment = &mut self.segments[self.next_to_store];
+            let start_height = segment.start_height + 1; // +1 because we store headers after start
+            let segment_id = segment.segment_id;
+            let headers = segment.take_buffered();
+            let is_complete = segment.complete;
+            let is_empty = segment.buffered_headers.is_empty();
+
+            if !headers.is_empty() {
+                tracing::info!(
+                    "Segment {}: {} headers ready to store from height {}",
+                    segment_id,
+                    headers.len(),
+                    start_height
+                );
+                ready.push((start_height, headers));
+            }
+
+            // If this segment is complete and drained, move to next
+            if is_complete && is_empty {
+                self.next_to_store += 1;
+            } else {
+                break;
+            }
+        }
+
+        ready
+    }
+
+    /// Check if all segments are complete.
+    pub fn is_complete(&self) -> bool {
+        self.segments.iter().all(|s| s.complete && s.buffered_headers.is_empty())
+    }
+
+    /// Get the total number of buffered headers across all segments.
+    pub fn total_buffered(&self) -> u32 {
+        self.segments.iter().map(|s| s.buffered_headers.len() as u32).sum()
+    }
+
+    /// Check for timeouts in all segments.
+    pub fn handle_timeouts(&mut self) {
+        for segment in &mut self.segments {
+            segment.handle_timeouts();
+        }
+    }
+
+    /// Check if pipeline is initialized.
+    pub fn is_initialized(&self) -> bool {
+        self.initialized
+    }
+
+    /// Reset the tip segment for continued syncing after initial sync completes.
+    /// This allows the pipeline to be reused for post-sync header updates.
+    /// Returns true if the tip segment was reset, false if not found or not complete.
+    pub fn reset_tip_segment(&mut self) -> bool {
+        // Find the tip segment (target_height is None)
+        for (idx, segment) in self.segments.iter_mut().enumerate() {
+            if segment.target_height.is_none() && segment.complete {
+                segment.complete = false;
+                // Reset next_to_store so buffered headers can be processed
+                self.next_to_store = idx;
+                tracing::debug!(
+                    "Reset tip segment {} at height {} for continued syncing",
+                    segment.segment_id,
+                    segment.current_height
+                );
+                return true;
+            }
+        }
+        false
+    }
+
+    /// Check if the tip segment has active requests in flight.
+    pub fn tip_segment_has_pending_request(&self) -> bool {
+        self.segments
+            .iter()
+            .find(|s| s.target_height.is_none())
+            .is_some_and(|s| !s.complete && s.coordinator.active_count() > 0)
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::chain::checkpoints::{mainnet_checkpoints, testnet_checkpoints};
+    use tokio::sync::mpsc::unbounded_channel;
+
+    use crate::network::{NetworkRequest, RequestSender};
+
+    fn create_test_checkpoint_manager(is_testnet: bool) -> Arc<CheckpointManager> {
+        let checkpoints = if is_testnet {
+            testnet_checkpoints()
+        } else {
+            mainnet_checkpoints()
+        };
+        Arc::new(CheckpointManager::new(checkpoints))
+    }
+
+    fn create_test_request_sender(
+    ) -> (RequestSender, tokio::sync::mpsc::UnboundedReceiver<NetworkRequest>) {
+        let (tx, rx) = unbounded_channel();
+        (RequestSender::new(tx), rx)
+    }
+
+    #[test]
+    fn test_pipeline_new() {
+        let cm = create_test_checkpoint_manager(true);
+        let pipeline = HeadersPipeline::new(cm);
+
+        assert!(!pipeline.is_initialized());
+        assert_eq!(pipeline.segment_count(), 0);
+    }
+
+    #[test]
+    fn test_pipeline_init_testnet() {
+        let cm = create_test_checkpoint_manager(true);
+        let mut pipeline = HeadersPipeline::new(cm.clone());
+
+        // Get genesis hash for testnet
+        let genesis = cm.get_checkpoint(0).unwrap();
+        pipeline.init(0, genesis.block_hash, 1_200_000);
+
+        assert!(pipeline.is_initialized());
+        // Should have segments: 0->500k, 500k->800k, 800k->1.1M, 1.1M->tip
+        assert!(pipeline.segment_count() >= 3);
+    }
+
+    #[test]
+    fn test_pipeline_init_from_middle() {
+        let cm = create_test_checkpoint_manager(true);
+        let mut pipeline = HeadersPipeline::new(cm.clone());
+
+        // Start from checkpoint 500k
+        let cp_500k = cm.get_checkpoint(500_000).unwrap();
+        pipeline.init(500_000, cp_500k.block_hash, 1_200_000);
+
+        // Should have fewer segments since we're starting from 500k
+        assert!(pipeline.is_initialized());
+        // Segments: 500k->800k, 800k->1.1M, 1.1M->tip
+        assert!(pipeline.segment_count() >= 2);
+    }
+
+    #[test]
+    fn test_pipeline_send_pending() {
+        let cm = create_test_checkpoint_manager(true);
+        let mut pipeline = HeadersPipeline::new(cm.clone());
+
+        let genesis = cm.get_checkpoint(0).unwrap();
+        pipeline.init(0, genesis.block_hash, 1_200_000);
+
+        let (sender, mut rx) = create_test_request_sender();
+
+        let sent = pipeline.send_pending(&sender).unwrap();
+
+        // Should send at least one request per segment
+        assert!(sent >= pipeline.segment_count());
+
+        // Verify messages were queued
+        let mut count = 0;
+        while rx.try_recv().is_ok() {
+            count += 1;
+        }
+        assert_eq!(count, sent);
+    }
+
+    #[test]
+    fn test_pipeline_is_complete_initially() {
+        let cm = create_test_checkpoint_manager(true);
+        let mut pipeline = HeadersPipeline::new(cm.clone());
+
+        let genesis = cm.get_checkpoint(0).unwrap();
+        pipeline.init(0, genesis.block_hash, 1_200_000);
+
+        assert!(!pipeline.is_complete());
+    }
+
+    #[test]
+    fn test_take_ready_to_store_empty() {
+        let cm = create_test_checkpoint_manager(true);
+        let mut pipeline = HeadersPipeline::new(cm.clone());
+
+        let genesis = cm.get_checkpoint(0).unwrap();
+        pipeline.init(0, genesis.block_hash, 1_200_000);
+
+        let ready = pipeline.take_ready_to_store();
+        assert!(ready.is_empty());
+    }
+}
diff --git a/dash-spv/src/sync/block_headers/progress.rs b/dash-spv/src/sync/block_headers/progress.rs
new file mode 100644
index 000000000..3b52c2ae8
--- /dev/null
+++ b/dash-spv/src/sync/block_headers/progress.rs
@@ -0,0 +1,121 @@
+use crate::sync::SyncState;
+use std::fmt;
+use std::time::Instant;
+
+/// Progress for block-header synchronization.
+#[derive(Debug, Clone, PartialEq)]
+pub struct BlockHeadersProgress {
+    /// Current sync state.
+    state: SyncState,
+    /// The tip height of the block-header storage.
+    current_height: u32,
+    /// Equals current_height (the blockchain tip) once synced; tracks the best
+    /// height of the connected peers during initial sync.
+    target_height: u32,
+    /// Number of block-headers processed (stored) in the current sync session.
+    processed: u32,
+    /// Number of headers currently buffered in the pipeline (waiting to be stored).
+    buffered: u32,
+    /// The last time a block-header was stored to disk or the last manager state change.
+    last_activity: Instant,
+}
+
+impl Default for BlockHeadersProgress {
+    fn default() -> Self {
+        Self {
+            state: SyncState::default(),
+            current_height: 0,
+            target_height: 0,
+            processed: 0,
+            buffered: 0,
+            last_activity: Instant::now(),
+        }
+    }
+}
+
+impl BlockHeadersProgress {
+    /// Get completion percentage (0.0 to 1.0).
+    /// Includes buffered headers for more accurate progress during parallel sync:
+    /// e.g. a current height of 100_000 with 20_000 buffered against a target of
+    /// 200_000 reports 0.6.
+    pub fn percentage(&self) -> f64 {
+        if self.target_height == 0 {
+            return 1.0;
+        }
+        // Include buffered headers in progress calculation
+        (self.effective_height() as f64 / self.target_height as f64).min(1.0)
+    }
+    /// Get the current sync state.
+    pub fn state(&self) -> SyncState {
+        self.state
+    }
+    /// Get the current height (last successfully processed height).
+    pub fn current_height(&self) -> u32 {
+        self.current_height
+    }
+    /// Get the target height (the best height of the connected peers).
+    pub fn target_height(&self) -> u32 {
+        self.target_height
+    }
+    /// Number of block-headers processed (stored) in the current sync session.
+    pub fn processed(&self) -> u32 {
+        self.processed
+    }
+    /// Get the effective height (current_height + buffered).
+    pub fn effective_height(&self) -> u32 {
+        self.current_height + self.buffered
+    }
+    /// The last time a block-header was stored to disk or the last manager state change.
+    pub fn last_activity(&self) -> Instant {
+        self.last_activity
+    }
+    /// Update the sync state and bump the last activity time.
+    pub fn set_state(&mut self, state: SyncState) {
+        self.state = state;
+        self.bump_last_activity();
+    }
+    /// Update the current height (last successfully processed height).
+    pub fn update_current_height(&mut self, height: u32) {
+        self.current_height = height;
+        self.bump_last_activity();
+    }
+    /// Update the target height (the best height of the connected peers).
+    /// Only updates if the new height is greater than the current target (monotonic increase).
+    pub fn update_target_height(&mut self, height: u32) {
+        if height > self.target_height {
+            self.target_height = height;
+            self.bump_last_activity();
+        }
+    }
+    /// Add a number to the processed counter.
+    pub fn add_processed(&mut self, count: u32) {
+        self.processed += count;
+        self.bump_last_activity();
+    }
+    /// Get the buffered counter.
+    pub fn buffered(&self) -> u32 {
+        self.buffered
+    }
+    /// Update the buffered counter.
+    pub fn update_buffered(&mut self, count: u32) {
+        self.buffered = count;
+    }
+    /// Bump the last activity time.
+    pub fn bump_last_activity(&mut self) {
+        self.last_activity = Instant::now();
+    }
+}
+
+impl fmt::Display for BlockHeadersProgress {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        let pct = self.percentage() * 100.0;
+        write!(
+            f,
+            "{:?} {}/{} ({:.1}%) processed: {}, buffered: {}, last_activity: {}s",
+            self.state,
+            self.effective_height(),
+            self.target_height,
+            pct,
+            self.processed,
+            self.buffered,
+            self.last_activity.elapsed().as_secs()
+        )
+    }
+}
diff --git a/dash-spv/src/sync/block_headers/segment_state.rs b/dash-spv/src/sync/block_headers/segment_state.rs
new file mode 100644
index 000000000..ac17d5cb6
--- /dev/null
+++ b/dash-spv/src/sync/block_headers/segment_state.rs
@@ -0,0 +1,311 @@
+use crate::error::{SyncError, SyncResult};
+use crate::network::RequestSender;
+use crate::sync::download_coordinator::{DownloadConfig, DownloadCoordinator};
+use crate::types::HashedBlockHeader;
+use dashcore::{BlockHash, Header};
+use std::time::Duration;
+
+/// Timeout for header requests.
+const HEADERS_TIMEOUT: Duration = Duration::from_secs(30);
+
+/// State for a single download segment between two checkpoints.
+#[derive(Debug)]
+pub(super) struct SegmentState {
+    /// Unique segment identifier (index in segments array).
+    pub(super) segment_id: usize,
+    /// Starting height of this segment.
+    pub(super) start_height: u32,
+    /// Target height (None for tip segment).
+    pub(super) target_height: Option<u32>,
+    /// Target hash (next checkpoint hash for validation).
+    target_hash: Option<BlockHash>,
+    /// Current tip hash for GetHeaders locator.
+    current_tip_hash: BlockHash,
+    /// Current height reached in this segment.
+    pub(super) current_height: u32,
+    /// Download coordinator for tracking in-flight requests.
+    pub(super) coordinator: DownloadCoordinator<BlockHash>,
+    /// Buffered headers waiting to be stored.
+    pub(super) buffered_headers: Vec<HashedBlockHeader>,
+    /// Whether this segment has completed downloading.
+    pub(super) complete: bool,
+}
+
+impl SegmentState {
+    /// Create a new segment state.
+    pub(super) fn new(
+        segment_id: usize,
+        start_height: u32,
+        start_hash: BlockHash,
+        target_height: Option<u32>,
+        target_hash: Option<BlockHash>,
+    ) -> Self {
+        Self {
+            segment_id,
+            start_height,
+            target_height,
+            target_hash,
+            current_tip_hash: start_hash,
+            current_height: start_height,
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_max_concurrent(1) // Only 1 request at a time (sequential getheaders)
+                    .with_timeout(HEADERS_TIMEOUT)
+                    .with_max_retries(3),
+            ),
+            buffered_headers: Vec::new(),
+            complete: false,
+        }
+    }
+
+    /// Check if the segment can send more requests.
+    /// Only one getheaders request can be in-flight at a time (sequential protocol).
+    pub(super) fn can_send(&self) -> bool {
+        !self.complete && !self.coordinator.is_in_flight(&self.current_tip_hash)
+    }
+
+    /// Send a GetHeaders request for this segment.
+    pub(super) fn send_request(&mut self, requests: &RequestSender) -> SyncResult<()> {
+        requests.request_block_headers(self.current_tip_hash)?;
+        self.coordinator.mark_sent(&[self.current_tip_hash]);
+        tracing::debug!(
+            "Segment {}: sent GetHeaders from height {} hash {}",
+            self.segment_id,
+            self.current_height,
+            self.current_tip_hash
+        );
+        Ok(())
+    }
+
+    /// Try to match incoming headers to this segment.
+    /// Returns true if the headers belong to this segment.
+    pub(super) fn matches(&self, prev_blockhash: &BlockHash) -> bool {
+        // Match if prev_blockhash equals our current tip hash
+        &self.current_tip_hash == prev_blockhash
+    }
+
+    /// Process received headers for this segment.
+    /// Returns the number of headers processed, or an error if checkpoint validation fails.
+    pub(super) fn receive_headers(&mut self, headers: &[Header]) -> SyncResult<usize> {
+        if headers.is_empty() {
+            // Empty response means we've reached the peer's tip for this segment
+            self.complete = true;
+            // Clear in-flight tracking for the current tip hash
+            self.coordinator.receive(&self.current_tip_hash);
+            tracing::info!(
+                "Segment {}: complete (empty response at height {})",
+                self.segment_id,
+                self.current_height
+            );
+            return Ok(0);
+        }
+
+        // Mark the request as received
+        let prev_hash = headers[0].prev_blockhash;
+        self.coordinator.receive(&prev_hash);
+
+        // Process headers
+        let mut processed = 0;
+        for header in headers {
+            let hashed = HashedBlockHeader::from(*header);
+            let hash = *hashed.hash();
+            let height = self.current_height + processed as u32 + 1;
+
+            // Check if we've reached the target (next checkpoint)
+            if let (Some(target_height), Some(target_hash)) = (self.target_height, self.target_hash)
+            {
+                if height == target_height {
+                    if hash == target_hash {
+                        tracing::info!(
+                            "Segment {}: reached target checkpoint at height {}",
+                            self.segment_id,
+                            target_height
+                        );
+                        self.buffered_headers.push(hashed);
+                        processed += 1;
+                        self.complete = true;
+                        break;
+                    } else {
+                        tracing::error!(
+                            "Segment {}: checkpoint mismatch at height {}! expected {}, got {}",
expected {}, got {}", + self.segment_id, + target_height, + target_hash, + hash + ); + return Err(SyncError::Validation(format!( + "Block at height {} does not match checkpoint: expected {}, got {}", + target_height, target_hash, hash + ))); + } + } + } + + self.buffered_headers.push(hashed); + processed += 1; + } + + // Update current tip for next request + if processed > 0 { + self.current_tip_hash = headers[processed - 1].block_hash(); + self.current_height += processed as u32; + } + + tracing::debug!( + "Segment {}: received {} headers, now at height {}, buffered {}", + self.segment_id, + processed, + self.current_height, + self.buffered_headers.len() + ); + + Ok(processed) + } + + /// Take buffered headers from this segment. + pub(super) fn take_buffered(&mut self) -> Vec { + std::mem::take(&mut self.buffered_headers) + } + + /// Check for timed out requests and handle retries. + pub(super) fn handle_timeouts(&mut self) { + let timed_out = self.coordinator.check_timeouts(); + for hash in timed_out { + tracing::warn!( + "Segment {}: request timed out for hash {}, will retry", + self.segment_id, + hash + ); + // Re-enqueue for retry + self.coordinator.enqueue_retry(hash); + } + } +} + +#[cfg(test)] +mod tests { + use crate::error::SyncError; + use crate::sync::block_headers::segment_state::SegmentState; + use crate::types::HashedBlockHeader; + use dashcore::{BlockHash, Header}; + + #[test] + fn test_segment_state_new() { + let hash = BlockHash::dummy(0); + let segment = SegmentState::new(0, 0, hash, Some(500_000), None); + + assert_eq!(segment.segment_id, 0); + assert_eq!(segment.start_height, 0); + assert_eq!(segment.current_height, 0); + assert!(!segment.complete); + assert!(segment.buffered_headers.is_empty()); + } + + #[test] + fn test_segment_can_send() { + let hash = BlockHash::dummy(0); + let segment = SegmentState::new(0, 0, hash, Some(1000), None); + + assert!(segment.can_send()); + } + + #[test] + fn test_segment_matches() { + let hash = BlockHash::dummy(0); + let segment = SegmentState::new(0, 0, hash, Some(1000), None); + + assert!(segment.matches(&hash)); + assert!(!segment.matches(&BlockHash::dummy(1))); + } + + #[test] + fn test_segment_receive_empty() { + let hash = BlockHash::dummy(1); + let mut segment = SegmentState::new(0, 0, hash, Some(1000), None); + + let processed = segment.receive_headers(&[]).unwrap(); + + assert_eq!(processed, 0); + assert!(segment.complete); + } + + #[test] + fn test_segment_receive_headers() { + let hash = BlockHash::dummy(1); + let mut segment = SegmentState::new(0, 0, hash, None, None); + segment.coordinator.mark_sent(&[hash]); + + // Create dummy headers that chain from all-zeros + let headers: Vec

= (1..=10).map(Header::dummy).collect(); + + // Manually fix the prev_blockhash of first header + let mut first = headers[0]; + first.prev_blockhash = hash; + + let processed = segment.receive_headers(&[first]).unwrap(); + + assert_eq!(processed, 1); + assert_eq!(segment.buffered_headers.len(), 1); + assert_eq!(segment.current_height, 1); + assert!(!segment.complete); + } + + #[test] + fn test_segment_checkpoint_mismatch_returns_error() { + let start_hash = BlockHash::dummy(0); + // Segment with checkpoint at height 1 expecting a specific hash + let expected_checkpoint_hash = BlockHash::dummy(99); + let mut segment = + SegmentState::new(0, 0, start_hash, Some(1), Some(expected_checkpoint_hash)); + segment.coordinator.mark_sent(&[start_hash]); + + // Create a header that will be at height 1 but with a different hash + let mut header = Header::dummy(1); + header.prev_blockhash = start_hash; + + // The header's hash won't match the expected checkpoint hash + let hashed = HashedBlockHeader::from(header); + let actual_hash = hashed.hash(); + assert_ne!(*actual_hash, expected_checkpoint_hash); + + // Receiving this header should fail with a validation error + let result = segment.receive_headers(&[header]); + assert!(result.is_err()); + + let err = result.unwrap_err(); + match err { + SyncError::Validation(msg) => { + assert!(msg.contains("does not match checkpoint")); + assert!(msg.contains("height 1")); + } + _ => panic!("Expected SyncError::Validation, got {:?}", err), + } + + // Segment should not be complete and no headers should be buffered + assert!(!segment.complete); + assert!(segment.buffered_headers.is_empty()); + } + + #[test] + fn test_segment_checkpoint_match_completes_segment() { + let start_hash = BlockHash::dummy(0); + // Create a header first to get its hash for the checkpoint + let mut header = Header::dummy(1); + header.prev_blockhash = start_hash; + let hashed = HashedBlockHeader::from(header); + let header_hash = *hashed.hash(); + + // Create segment with checkpoint matching the header's hash + let mut segment = SegmentState::new(0, 0, start_hash, Some(1), Some(header_hash)); + segment.coordinator.mark_sent(&[start_hash]); + + // Receiving this header should succeed and complete the segment + let result = segment.receive_headers(&[header]); + assert!(result.is_ok()); + assert_eq!(result.unwrap(), 1); + + // Segment should be complete with the header buffered + assert!(segment.complete); + assert_eq!(segment.buffered_headers.len(), 1); + } +} diff --git a/dash-spv/src/sync/block_headers/sync_manager.rs b/dash-spv/src/sync/block_headers/sync_manager.rs new file mode 100644 index 000000000..f07f7b343 --- /dev/null +++ b/dash-spv/src/sync/block_headers/sync_manager.rs @@ -0,0 +1,209 @@ +use crate::error::SyncResult; +use crate::network::{Message, MessageType, NetworkEvent, RequestSender}; +use crate::storage::BlockHeaderStorage; +use crate::sync::{ + BlockHeadersManager, ManagerIdentifier, SyncEvent, SyncManager, SyncManagerProgress, SyncState, +}; +use crate::SyncError; +use async_trait::async_trait; +use dashcore::network::message::NetworkMessage; +use dashcore::BlockHash; +use std::time::{Duration, Instant}; + +/// Timeout waiting for unsolicited header messages after a block announcement. 
+pub(super) const UNSOLICITED_HEADERS_WAIT_TIMEOUT: Duration = Duration::from_secs(3);
+
+#[async_trait]
+impl<H: BlockHeaderStorage> SyncManager for BlockHeadersManager<H> {
+    fn identifier(&self) -> ManagerIdentifier {
+        ManagerIdentifier::BlockHeader
+    }
+
+    fn state(&self) -> SyncState {
+        self.progress.state()
+    }
+
+    fn set_state(&mut self, state: SyncState) {
+        self.progress.set_state(state);
+    }
+
+    fn update_target_height(&mut self, height: u32) {
+        self.progress.update_target_height(height);
+    }
+
+    fn wanted_message_types(&self) -> &'static [MessageType] {
+        &[MessageType::Headers, MessageType::Inv]
+    }
+
+    async fn initialize(&mut self) -> SyncResult<()> {
+        let tip = self
+            .header_storage
+            .read()
+            .await
+            .get_tip()
+            .await
+            .ok_or_else(|| SyncError::MissingDependency("No tip in storage".to_string()))?;
+
+        self.progress.set_state(SyncState::WaitingForConnections);
+        self.progress.update_current_height(tip.height());
+        self.progress.update_target_height(tip.height());
+
+        tracing::info!("BlockHeadersManager initialized at height {}", tip.height());
+
+        Ok(())
+    }
+
+    async fn start_sync(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        if self.state() != SyncState::WaitingForConnections {
+            tracing::warn!("{} sync already started.", self.identifier());
+            return Ok(vec![]);
+        }
+        self.progress.set_state(SyncState::Syncing);
+
+        let tip = self.tip().await?;
+        let target_height = self.progress.target_height();
+
+        // Initialize the pipeline with checkpoint-based segments
+        self.pipeline.init(tip.height(), *tip.hash(), target_height);
+
+        tracing::info!(
+            "Starting parallel header sync from {} to {} ({} segments)",
+            tip.height(),
+            target_height,
+            self.pipeline.segment_count()
+        );
+
+        // Send initial batch of requests
+        let sent = self.pipeline.send_pending(requests)?;
+        tracing::info!("Pipeline: sent {} initial requests", sent);
+
+        Ok(vec![SyncEvent::SyncStart {
+            identifier: self.identifier(),
+        }])
+    }
+
+    async fn handle_message(
+        &mut self,
+        msg: Message,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        match msg.inner() {
+            NetworkMessage::Headers(headers) => {
+                // Always route through pipeline when initialized
+                self.handle_headers_pipeline(headers, requests).await
+            }
+
+            NetworkMessage::Inv(inv) => {
+                self.handle_inventory(inv, requests).await?;
+                Ok(vec![])
+            }
+
+            _ => Ok(vec![]),
+        }
+    }
+
+    async fn handle_sync_event(
+        &mut self,
+        _event: &SyncEvent,
+        _requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        // BlockHeadersManager doesn't react to events from other managers
+        Ok(vec![])
+    }
+
+    async fn tick(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        if !self.pipeline.is_initialized() {
+            return Ok(vec![]);
+        }
+
+        self.pipeline.handle_timeouts();
+
+        // During initial sync, send more requests and log progress
+        if self.state() == SyncState::Syncing {
+            let sent = self.pipeline.send_pending(requests)?;
+            if sent > 0 {
+                tracing::debug!("Tick: pipeline sent {} more requests", sent);
+            }
+
+            return Ok(vec![]);
+        }
+
+        // Post-sync: check for stale block announcements
+        if self.state() == SyncState::Synced {
+            let now = Instant::now();
+            let stale: Vec<BlockHash> = self
+                .pending_announcements
+                .iter()
+                .filter(|(_, announced_at)| {
+                    now.duration_since(**announced_at) > UNSOLICITED_HEADERS_WAIT_TIMEOUT
+                })
+                .map(|(hash, _)| *hash)
+                .collect();
+
+            if !stale.is_empty() {
+                tracing::info!(
+                    "Sending fallback GetHeaders for {} stale announcements",
+                    stale.len()
+                );
+
+                // Reset tip segment and send requests via pipeline
+                self.pipeline.reset_tip_segment();
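+                // reset_tip_segment() re-opens the completed tip segment, so the
+                // send_pending() call below issues a fresh GetHeaders from our
+                // current tip for the announced block(s).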
+                self.pipeline.send_pending(requests)?;
+
+                for hash in stale {
+                    self.pending_announcements.remove(&hash);
+                }
+            }
+        }
+
+        Ok(vec![])
+    }
+
+    async fn handle_network_event(
+        &mut self,
+        event: &NetworkEvent,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        if let NetworkEvent::PeersUpdated {
+            connected_count,
+            best_height,
+            ..
+        } = event
+        {
+            if let Some(best_height) = best_height {
+                self.progress.update_target_height(*best_height);
+            }
+            if *connected_count == 0 {
+                self.stop_sync();
+            } else if *connected_count > 0 {
+                if self.state() == SyncState::WaitingForConnections {
+                    return self.start_sync(requests).await;
+                }
+                // When already synced but behind peer height, request missing headers
+                if self.state() == SyncState::Synced {
+                    if let Some(best_height) = best_height {
+                        if *best_height > self.progress.current_height()
+                            && !self.pipeline.tip_segment_has_pending_request()
+                        {
+                            tracing::info!(
+                                "Peer height {} > our height {}, requesting headers to catch up",
+                                best_height,
+                                self.progress.current_height()
+                            );
+                            // Reset tip segment and send requests via pipeline
+                            self.pipeline.reset_tip_segment();
+                            self.pipeline.send_pending(requests)?;
+                        }
+                    }
+                }
+            }
+        }
+        Ok(vec![])
+    }
+
+    fn progress(&self) -> SyncManagerProgress {
+        let mut progress = self.progress.clone();
+        progress.update_buffered(self.pipeline.total_buffered());
+        SyncManagerProgress::BlockHeaders(progress)
+    }
+}
diff --git a/dash-spv/src/sync/blocks/manager.rs b/dash-spv/src/sync/blocks/manager.rs
new file mode 100644
index 000000000..8e6ad8a92
--- /dev/null
+++ b/dash-spv/src/sync/blocks/manager.rs
@@ -0,0 +1,221 @@
+//! Blocks manager for parallel sync.
+//!
+//! Downloads blocks that matched wallet filters and processes them in height order.
+//! Subscribes to BlocksNeeded events and emits BlockProcessed events.
+
+use std::sync::Arc;
+
+use tokio::sync::RwLock;
+
+use super::pipeline::BlocksPipeline;
+use crate::error::SyncResult;
+use crate::network::RequestSender;
+use crate::storage::{BlockHeaderStorage, BlockStorage};
+use crate::sync::{BlocksProgress, SyncEvent, SyncManager, SyncState};
+use key_wallet_manager::wallet_interface::WalletInterface;
+
+/// Blocks manager for downloading and processing matching blocks.
+///
+/// This manager:
+/// - Subscribes to BlocksNeeded events from FiltersManager
+/// - Downloads blocks using pipelined requests
+/// - Processes blocks through wallet in height order
+/// - Emits BlockProcessed events
+///
+/// Generic over:
+/// - `H: BlockHeaderStorage` for height lookups
+/// - `B: BlockStorage` for storing and loading blocks
+/// - `W: WalletInterface` for wallet operations
+pub struct BlocksManager<H: BlockHeaderStorage, B: BlockStorage, W: WalletInterface> {
+    /// Current progress of the manager.
+    pub(super) progress: BlocksProgress,
+    /// Block header storage (for height lookups).
+    pub(super) header_storage: Arc<RwLock<H>>,
+    /// Block storage (for storing and loading blocks).
+    pub(super) block_storage: Arc<RwLock<B>>,
+    /// Wallet for processing blocks.
+    pub(super) wallet: Arc<RwLock<W>>,
+    /// Pipeline for downloading blocks (handles buffering and height ordering).
+    pub(super) pipeline: BlocksPipeline,
+    /// Whether FiltersSyncComplete has been received.
+    pub(super) filters_sync_complete: bool,
+}
+
+impl<H: BlockHeaderStorage, B: BlockStorage, W: WalletInterface> BlocksManager<H, B, W> {
+    /// Create a new blocks manager with the given storage references.
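+    ///
+    /// Construction sketch (editorial; the concrete storage and wallet values
+    /// here are illustrative stand-ins):
+    /// ```ignore
+    /// let manager = BlocksManager::new(
+    ///     Arc::new(RwLock::new(wallet)),
+    ///     header_storage, // Arc<RwLock<impl BlockHeaderStorage>>
+    ///     block_storage,  // Arc<RwLock<impl BlockStorage>>
+    /// );
+    /// ```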
+    pub fn new(
+        wallet: Arc<RwLock<W>>,
+        header_storage: Arc<RwLock<H>>,
+        block_storage: Arc<RwLock<B>>,
+    ) -> Self {
+        Self {
+            progress: BlocksProgress::default(),
+            header_storage,
+            block_storage,
+            wallet,
+            pipeline: BlocksPipeline::new(),
+            filters_sync_complete: false,
+        }
+    }
+
+    pub(super) async fn send_pending(&mut self, requests: &RequestSender) -> SyncResult<()> {
+        let sent = self.pipeline.send_pending(requests).await?;
+        if sent > 0 {
+            self.progress.add_requested(sent as u32);
+        }
+        Ok(())
+    }
+
+    /// Process buffered blocks in height order.
+    ///
+    /// Uses the pipeline's height-ordering logic to ensure blocks are processed
+    /// in the correct sequence.
+    pub(super) async fn process_buffered_blocks(&mut self) -> SyncResult<Vec<SyncEvent>> {
+        let mut events = Vec::new();
+
+        // Process blocks in height order using pipeline's ordering logic
+        while let Some((block, height)) = self.pipeline.take_next_ordered_block() {
+            let hash = block.block_hash();
+
+            // Process block through wallet
+            let mut wallet = self.wallet.write().await;
+            let result = wallet.process_block(&block, height).await;
+            drop(wallet);
+
+            let total_relevant = result.relevant_tx_count();
+            if total_relevant > 0 {
+                tracing::info!(
+                    "Found {} relevant transactions ({} new, {} existing) in block {} at height {}, new addresses: {}",
+                    total_relevant,
+                    result.new_txids.len(),
+                    result.existing_txids.len(),
+                    hash,
+                    height,
+                    result.new_addresses.len()
+                );
+            }
+
+            // Collect new addresses for gap limit rescanning
+            let new_addresses: Vec<_> = result.new_addresses.into_iter().collect();
+            if !new_addresses.is_empty() {
+                tracing::debug!(
+                    "Block {} generated {} new addresses for gap limit maintenance",
+                    height,
+                    new_addresses.len()
+                );
+            }
+
+            self.progress.add_processed(1);
+            if total_relevant > 0 {
+                self.progress.add_relevant(1);
+            }
+            // Only count new transactions to avoid double-counting during rescans
+            self.progress.add_transactions(result.new_txids.len() as u32);
+            self.progress.update_last_processed(height);
+
+            events.push(SyncEvent::BlockProcessed {
+                block_hash: hash,
+                height,
+                new_addresses,
+            });
+        }
+
+        // Check if pipeline is empty
+        if self.pipeline.is_complete() && self.state() == SyncState::Syncing {
+            if self.filters_sync_complete {
+                // Filters are done and pipeline is empty - we're fully synced
+                self.progress.set_state(SyncState::Synced);
+                tracing::info!(
+                    "Block sync complete, processed {} blocks",
+                    self.progress.processed()
+                );
+            } else {
+                // Pipeline empty but filters still syncing - wait for more blocks
+                self.progress.set_state(SyncState::WaitForEvents);
+            }
+        }
+
+        Ok(events)
+    }
+}
+
+impl<H: BlockHeaderStorage, B: BlockStorage, W: WalletInterface> std::fmt::Debug
+    for BlocksManager<H, B, W>
+{
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("BlocksManager")
+            .field("progress", &self.progress)
+            .field("pipeline", &self.pipeline)
+            .finish()
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::network::{MessageType, NetworkManager};
+    use crate::storage::{
+        DiskStorageManager, PersistentBlockHeaderStorage, PersistentBlockStorage,
+    };
+    use crate::sync::{ManagerIdentifier, SyncEvent, SyncManagerProgress};
+    use crate::test_utils::MockNetworkManager;
+    use key_wallet_manager::test_utils::MockWallet;
+    use key_wallet_manager::wallet_manager::FilterMatchKey;
+    use std::collections::BTreeSet;
+
+    type TestBlocksManager =
+        BlocksManager<PersistentBlockHeaderStorage, PersistentBlockStorage, MockWallet>;
+    type TestSyncManager = dyn SyncManager;
+
+    async fn create_test_manager() -> TestBlocksManager {
+        let storage = DiskStorageManager::with_temp_dir().await.unwrap();
+        let wallet = Arc::new(RwLock::new(MockWallet::new()));
+        BlocksManager::new(wallet, storage.header_storage(), storage.block_storage())
+    }
+
+    #[tokio::test]
+    async fn test_blocks_manager_new() {
+        let manager = create_test_manager().await;
+        assert_eq!(manager.identifier(), ManagerIdentifier::Block);
+        assert_eq!(manager.state(), SyncState::Initializing);
+        assert_eq!(manager.wanted_message_types(), vec![MessageType::Block]);
+    }
+
+    #[tokio::test]
+    async fn test_blocks_manager_progress() {
+        let mut manager = create_test_manager().await;
+        manager.progress.update_last_processed(500);
+        manager.progress.add_processed(10);
+
+        let manager_ref: &TestSyncManager = &manager;
+        let progress = manager_ref.progress();
+        if let SyncManagerProgress::Blocks(blocks_progress) = progress {
+            assert_eq!(blocks_progress.last_processed(), 500);
+            assert_eq!(blocks_progress.processed(), 10);
+        } else {
+            panic!("Expected SyncManagerProgress::Blocks");
+        }
+    }
+
+    #[tokio::test]
+    async fn test_blocks_manager_handle_blocks_needed_event() {
+        let mut manager = create_test_manager().await;
+        manager.progress.set_state(SyncState::Synced);
+
+        let network = MockNetworkManager::new();
+        let requests = network.request_sender();
+
+        let block_hash = dashcore::BlockHash::dummy(0);
+        let mut blocks = BTreeSet::new();
+        blocks.insert(FilterMatchKey::new(100, block_hash));
+        let event = SyncEvent::BlocksNeeded {
+            blocks,
+        };
+
+        let events = manager.handle_sync_event(&event, &requests).await.unwrap();
+
+        // Should queue the block
+        assert_eq!(manager.state(), SyncState::Syncing);
+        assert!(events.is_empty());
+    }
+}
diff --git a/dash-spv/src/sync/blocks/mod.rs b/dash-spv/src/sync/blocks/mod.rs
new file mode 100644
index 000000000..43aca4bba
--- /dev/null
+++ b/dash-spv/src/sync/blocks/mod.rs
@@ -0,0 +1,7 @@
+mod manager;
+mod pipeline;
+mod progress;
+mod sync_manager;
+
+pub use manager::BlocksManager;
+pub use progress::BlocksProgress;
diff --git a/dash-spv/src/sync/blocks/pipeline.rs b/dash-spv/src/sync/blocks/pipeline.rs
new file mode 100644
index 000000000..4076a77ed
--- /dev/null
+++ b/dash-spv/src/sync/blocks/pipeline.rs
@@ -0,0 +1,517 @@
+//! Blocks pipeline implementation.
+//!
+//! Handles concurrent block downloads with timeout and retry logic.
+//! Uses the generic DownloadCoordinator for core mechanics.
+
+use std::collections::{BTreeMap, BTreeSet, HashMap};
+use std::time::Duration;
+
+use crate::error::SyncResult;
+use crate::network::RequestSender;
+use crate::sync::download_coordinator::{DownloadConfig, DownloadCoordinator};
+use dashcore::blockdata::block::Block;
+use dashcore::BlockHash;
+use key_wallet_manager::wallet_manager::FilterMatchKey;
+
+/// Maximum number of concurrent block downloads.
+const MAX_CONCURRENT_BLOCK_DOWNLOADS: usize = 20;
+
+/// Timeout for block downloads before retry.
+const BLOCK_TIMEOUT: Duration = Duration::from_secs(30);
+
+/// Maximum number of retries for block downloads.
+const BLOCK_MAX_RETRIES: u32 = 3;
+
+/// Maximum blocks per GetData request, kept somewhat low so downloads are
+/// distributed across multiple peers.
+const BLOCKS_PER_REQUEST: usize = 8;
+
+/// Pipeline for downloading blocks with height-ordered processing.
+///
+/// Uses DownloadCoordinator for core download mechanics.
+/// This is a thin wrapper that handles building GetData inventory messages.
+/// Tracks block heights to enable ordered processing and buffers downloaded blocks.
+pub(super) struct BlocksPipeline {
+    /// Core download coordinator (handles pending, in-flight, timeouts).
+    coordinator: DownloadCoordinator<BlockHash>,
+    /// Number of completed block downloads.
+    completed_count: u32,
+    /// Heights queued or in-flight (waiting for download).
+    pending_heights: BTreeSet<u32>,
+    /// Downloaded blocks ready to process (height -> Block).
+    downloaded: BTreeMap<u32, Block>,
+    /// Map hash -> height for looking up height when block arrives.
+    hash_to_height: HashMap<BlockHash, u32>,
+}
+
+impl std::fmt::Debug for BlocksPipeline {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("BlocksPipeline")
+            .field("coordinator", &self.coordinator)
+            .field("completed_count", &self.completed_count)
+            .field("pending_heights", &self.pending_heights.len())
+            .field("downloaded", &self.downloaded.len())
+            .finish()
+    }
+}
+
+impl Default for BlocksPipeline {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl BlocksPipeline {
+    /// Create a new blocks pipeline.
+    pub(super) fn new() -> Self {
+        Self {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_max_concurrent(MAX_CONCURRENT_BLOCK_DOWNLOADS)
+                    .with_timeout(BLOCK_TIMEOUT)
+                    .with_max_retries(BLOCK_MAX_RETRIES),
+            ),
+            completed_count: 0,
+            pending_heights: BTreeSet::new(),
+            downloaded: BTreeMap::new(),
+            hash_to_height: HashMap::new(),
+        }
+    }
+
+    /// Queue blocks with their heights for download.
+    ///
+    /// Recording heights here is what enables height-ordered processing later.
+    pub(super) fn queue(&mut self, blocks: impl IntoIterator<Item = FilterMatchKey>) {
+        for key in blocks {
+            self.coordinator.enqueue([*key.hash()]);
+            self.pending_heights.insert(key.height());
+            self.hash_to_height.insert(*key.hash(), key.height());
+        }
+    }
+
+    /// Check if the pipeline has completed all work.
+    ///
+    /// Returns true when no blocks are pending, downloading, or waiting to be processed.
+    pub(super) fn is_complete(&self) -> bool {
+        self.coordinator.is_empty() && self.downloaded.is_empty() && self.pending_heights.is_empty()
+    }
+
+    /// Check if there are pending requests to make.
+    pub(super) fn has_pending_requests(&self) -> bool {
+        self.coordinator.available_to_send() > 0
+    }
+
+    /// Send pending block requests up to the concurrency limit.
+    ///
+    /// Sends multiple smaller GetData messages to distribute requests across peers.
+    /// Returns the number of blocks requested.
+    pub(super) async fn send_pending(&mut self, requests: &RequestSender) -> SyncResult<usize> {
+        let mut total_sent = 0;
+
+        while self.coordinator.available_to_send() > 0 {
+            // Take a batch of up to BLOCKS_PER_REQUEST
+            let count = self.coordinator.available_to_send().min(BLOCKS_PER_REQUEST);
+            let hashes = self.coordinator.take_pending(count);
+            if hashes.is_empty() {
+                break;
+            }
+
+            requests.request_blocks(hashes.clone())?;
+            self.coordinator.mark_sent(&hashes);
+            total_sent += hashes.len();
+
+            tracing::debug!(
+                "Requested {} blocks ({} downloading, {} pending)",
+                hashes.len(),
+                self.coordinator.active_count(),
+                self.coordinator.pending_count()
+            );
+        }
+
+        Ok(total_sent)
+    }
+
+    /// Handle a received block using internal height mapping.
+    ///
+    /// Looks up the height from the internal hash_to_height map and stores
+    /// the block in the downloaded buffer for height-ordered processing.
+    /// Returns `true` if this was a tracked block, `false` if unrequested.
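+    ///
+    /// Typical use (editorial sketch; `process` is a hypothetical handler):
+    /// ```ignore
+    /// if pipeline.receive_block(&block) {
+    ///     while let Some((block, height)) = pipeline.take_next_ordered_block() {
+    ///         process(block, height);
+    ///     }
+    /// }
+    /// ```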
+    pub(super) fn receive_block(&mut self, block: &Block) -> bool {
+        let hash = block.block_hash();
+        if !self.coordinator.receive(&hash) {
+            tracing::debug!("Ignoring unrequested block: {}", hash);
+            return false;
+        }
+
+        if let Some(height) = self.hash_to_height.remove(&hash) {
+            self.pending_heights.remove(&height);
+            self.downloaded.insert(height, block.clone());
+            self.completed_count += 1;
+            true
+        } else {
+            // The coordinator tracked this hash but no height was recorded for
+            // it; count the download as completed but skip the ordered buffer.
+            self.completed_count += 1;
+            true
+        }
+    }
+
+    /// Take the next block that's safe to process in height order.
+    ///
+    /// Returns None if:
+    /// - No downloaded blocks available, or
+    /// - Waiting for a lower-height block still pending
+    pub(super) fn take_next_ordered_block(&mut self) -> Option<(Block, u32)> {
+        let lowest_downloaded = *self.downloaded.keys().next()?;
+
+        // Check if any pending blocks have lower heights
+        if let Some(&min_pending) = self.pending_heights.first() {
+            if min_pending < lowest_downloaded {
+                return None; // Wait for lower block
+            }
+        }
+
+        // Safe to return this block
+        let block = self.downloaded.remove(&lowest_downloaded).unwrap();
+        Some((block, lowest_downloaded))
+    }
+
+    /// Add a block that was loaded from storage (skip download).
+    ///
+    /// Used when blocks are already persisted from a previous sync.
+    pub(super) fn add_from_storage(&mut self, block: Block, height: u32) {
+        self.downloaded.insert(height, block);
+    }
+
+    /// Check for timed out downloads and re-queue them.
+    ///
+    /// Returns the list of timed out block hashes.
+    pub(super) fn handle_timeouts(&mut self) -> Vec<BlockHash> {
+        let timed_out = self.coordinator.check_and_retry_timeouts();
+
+        if !timed_out.is_empty() {
+            tracing::debug!("Re-queued {} timed out block downloads", timed_out.len());
+        }
+
+        timed_out
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use dashcore_hashes::Hash;
+
+    use super::*;
+
+    fn test_hash(n: u8) -> BlockHash {
+        BlockHash::from_byte_array([n; 32])
+    }
+
+    fn make_test_block(n: u8) -> Block {
+        use dashcore::blockdata::block::Header;
+        let header = Header {
+            version: dashcore::blockdata::block::Version::from_consensus(1),
+            prev_blockhash: BlockHash::from_byte_array([n; 32]),
+            merkle_root: dashcore::TxMerkleNode::all_zeros(),
+            time: n as u32,
+            bits: dashcore::CompactTarget::from_consensus(0),
+            nonce: n as u32,
+        };
+        Block {
+            header,
+            txdata: vec![],
+        }
+    }
+
+    #[test]
+    fn test_blocks_pipeline_new() {
+        let pipeline = BlocksPipeline::new();
+        assert_eq!(pipeline.coordinator.pending_count(), 0);
+        assert_eq!(pipeline.coordinator.active_count(), 0);
+        assert_eq!(pipeline.completed_count, 0);
+        assert!(pipeline.is_complete());
+    }
+
+    #[test]
+    fn test_queue_block() {
+        let mut pipeline = BlocksPipeline::new();
+        let block = make_test_block(1);
+        pipeline.queue([FilterMatchKey::new(100, block.block_hash())]);
+
+        assert_eq!(pipeline.coordinator.pending_count(), 1);
+        assert!(!pipeline.is_complete());
+        assert!(pipeline.has_pending_requests());
+    }
+
+    #[test]
+    fn test_queue_multiple() {
+        let mut pipeline = BlocksPipeline::new();
+        let block1 = make_test_block(1);
+        let block2 = make_test_block(2);
+        let block3 = make_test_block(3);
+        pipeline.queue([
+            FilterMatchKey::new(100, block1.block_hash()),
+            FilterMatchKey::new(101, block2.block_hash()),
+            FilterMatchKey::new(102, block3.block_hash()),
+        ]);
+
+        assert_eq!(pipeline.coordinator.pending_count(), 3);
+        assert_eq!(pipeline.pending_heights.len(), 3);
assert!(pipeline.pending_heights.contains(&100)); + assert!(pipeline.pending_heights.contains(&101)); + assert!(pipeline.pending_heights.contains(&102)); + } + + #[test] + fn test_receive_block_with_height() { + let mut pipeline = BlocksPipeline::new(); + let block = make_test_block(1); + let hash = block.block_hash(); + + // Queue with height tracking + pipeline.queue([FilterMatchKey::new(100, block.block_hash())]); + + // Simulate sending via coordinator + let hashes = pipeline.coordinator.take_pending(1); + pipeline.coordinator.mark_sent(&hashes); + assert_eq!(pipeline.coordinator.active_count(), 1); + + // Receive block + assert!(pipeline.receive_block(&block)); + assert_eq!(pipeline.coordinator.active_count(), 0); + assert_eq!(pipeline.completed_count, 1); + assert_eq!(pipeline.downloaded.len(), 1); + assert!(pipeline.pending_heights.is_empty()); + assert_eq!(pipeline.downloaded.get(&100).unwrap().block_hash(), hash); + } + + #[test] + fn test_receive_block_unrequested() { + let mut pipeline = BlocksPipeline::new(); + let block = make_test_block(1); + + assert!(!pipeline.receive_block(&block)); + assert_eq!(pipeline.completed_count, 0); + assert!(pipeline.downloaded.is_empty()); + } + + #[test] + fn test_max_concurrent() { + let mut pipeline = BlocksPipeline::new(); + + // Queue more blocks than max concurrent + for i in 0..=MAX_CONCURRENT_BLOCK_DOWNLOADS { + let block = make_test_block(i as u8); + pipeline.queue([FilterMatchKey::new(i as u32, block.block_hash())]); + } + + // Take and mark as downloading up to limit + let to_send = pipeline.coordinator.available_to_send(); + let hashes = pipeline.coordinator.take_pending(to_send); + pipeline.coordinator.mark_sent(&hashes); + + assert_eq!(pipeline.coordinator.active_count(), MAX_CONCURRENT_BLOCK_DOWNLOADS); + assert_eq!(pipeline.coordinator.pending_count(), 1); + assert!(!pipeline.has_pending_requests()); + } + + #[test] + fn test_timeout_requeues() { + // Create pipeline with very short timeout for testing + let mut pipeline = BlocksPipeline { + coordinator: DownloadCoordinator::new( + DownloadConfig::default() + .with_max_concurrent(MAX_CONCURRENT_BLOCK_DOWNLOADS) + .with_timeout(Duration::from_millis(10)), + ), + completed_count: 0, + pending_heights: BTreeSet::new(), + downloaded: BTreeMap::new(), + hash_to_height: HashMap::new(), + }; + + // Use coordinator directly to set up in-flight state + let hash = test_hash(1); + pipeline.coordinator.enqueue([hash]); + let hashes = pipeline.coordinator.take_pending(1); + pipeline.coordinator.mark_sent(&hashes); + + // Wait for timeout + std::thread::sleep(Duration::from_millis(20)); + + let timed_out = pipeline.handle_timeouts(); + + assert_eq!(timed_out.len(), 1); + assert_eq!(timed_out[0], hash); + assert_eq!(pipeline.coordinator.active_count(), 0); + assert_eq!(pipeline.coordinator.pending_count(), 1); + } + + #[test] + fn test_take_next_ordered_block_in_order() { + let mut pipeline = BlocksPipeline::new(); + let block1 = make_test_block(1); + let block2 = make_test_block(2); + let hash1 = block1.block_hash(); + let hash2 = block2.block_hash(); + + // Use add_from_storage to test ordering logic without network + // Add block 2 first (out of order) + pipeline.add_from_storage(block2.clone(), 101); + // Also track height 100 as pending to simulate waiting + pipeline.pending_heights.insert(100); + + // Cannot take block 2 yet - waiting for block at height 100 + assert!(pipeline.take_next_ordered_block().is_none()); + + // Add block 1 + pipeline.pending_heights.remove(&100); + 
+        pipeline.add_from_storage(block1.clone(), 100);
+
+        // Now block 1 is ready (lowest height)
+        let (block, height) = pipeline.take_next_ordered_block().unwrap();
+        assert_eq!(height, 100);
+        assert_eq!(block.block_hash(), hash1);
+
+        // Block 2 is now ready
+        let (block, height) = pipeline.take_next_ordered_block().unwrap();
+        assert_eq!(height, 101);
+        assert_eq!(block.block_hash(), hash2);
+
+        // No more blocks
+        assert!(pipeline.take_next_ordered_block().is_none());
+    }
+
+    #[test]
+    fn test_take_next_ordered_block_waits_for_pending() {
+        let mut pipeline = BlocksPipeline::new();
+        let block2 = make_test_block(2);
+
+        // Add block at height 101, but height 100 is still pending
+        pipeline.pending_heights.insert(100);
+        pipeline.add_from_storage(block2.clone(), 101);
+
+        // Cannot take block 2 - block at height 100 is still pending
+        assert!(pipeline.take_next_ordered_block().is_none());
+
+        // Clear the pending height
+        pipeline.pending_heights.remove(&100);
+
+        // Now block 2 is ready
+        let (_, height) = pipeline.take_next_ordered_block().unwrap();
+        assert_eq!(height, 101);
+    }
+
+    #[test]
+    fn test_add_from_storage() {
+        let mut pipeline = BlocksPipeline::new();
+        let block = make_test_block(1);
+        let hash = block.block_hash();
+
+        pipeline.add_from_storage(block.clone(), 100);
+
+        assert_eq!(pipeline.downloaded.len(), 1);
+
+        let (taken_block, height) = pipeline.take_next_ordered_block().unwrap();
+        assert_eq!(height, 100);
+        assert_eq!(taken_block.block_hash(), hash);
+    }
+
+    #[test]
+    fn test_is_complete() {
+        let mut pipeline = BlocksPipeline::new();
+        assert!(pipeline.is_complete());
+
+        // Adding to downloaded makes it incomplete
+        let block = make_test_block(1);
+        pipeline.add_from_storage(block, 100);
+        assert!(!pipeline.is_complete());
+
+        // Take the block
+        pipeline.take_next_ordered_block();
+        assert!(pipeline.is_complete());
+    }
+
+    #[test]
+    fn test_is_complete_with_pending_heights() {
+        let mut pipeline = BlocksPipeline::new();
+        assert!(pipeline.is_complete());
+
+        // Pending heights make it incomplete
+        pipeline.pending_heights.insert(100);
+        assert!(!pipeline.is_complete());
+
+        pipeline.pending_heights.remove(&100);
+        assert!(pipeline.is_complete());
+    }
+
+    #[test]
+    fn test_handle_timeouts_with_multiple_retries() {
+        let mut pipeline = BlocksPipeline {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_max_concurrent(MAX_CONCURRENT_BLOCK_DOWNLOADS)
+                    .with_timeout(Duration::from_millis(1))
+                    .with_max_retries(2),
+            ),
+            completed_count: 0,
+            pending_heights: BTreeSet::new(),
+            downloaded: BTreeMap::new(),
+            hash_to_height: HashMap::new(),
+        };
+
+        // Use coordinator to set up in-flight state
+        let hash = test_hash(1);
+        pipeline.coordinator.enqueue([hash]);
+        let hashes = pipeline.coordinator.take_pending(1);
+        pipeline.coordinator.mark_sent(&hashes);
+
+        // First timeout - returns item (it's re-queued)
+        std::thread::sleep(Duration::from_millis(5));
+        let timed_out = pipeline.handle_timeouts();
+        assert_eq!(timed_out.len(), 1);
+        assert_eq!(pipeline.coordinator.pending_count(), 1);
+
+        // Re-send the retry
+        let items = pipeline.coordinator.take_pending(1);
+        pipeline.coordinator.mark_sent(&items);
+
+        // Second timeout - still re-queued
+        std::thread::sleep(Duration::from_millis(5));
+        let timed_out = pipeline.handle_timeouts();
+        assert_eq!(timed_out.len(), 1);
+        assert_eq!(pipeline.coordinator.pending_count(), 1);
+
+        // Re-send
+        let items = pipeline.coordinator.take_pending(1);
+        pipeline.coordinator.mark_sent(&items);
+
+        // Third timeout - exceeds max retries, NOT re-queued
+        std::thread::sleep(Duration::from_millis(5));
+        let timed_out = pipeline.handle_timeouts();
+        assert_eq!(timed_out.len(), 0);
+        assert_eq!(pipeline.coordinator.pending_count(), 0);
+    }
+
+    #[test]
+    fn test_receive_block_duplicate() {
+        let mut pipeline = BlocksPipeline::new();
+        let block = make_test_block(1);
+
+        // Queue and mark as sent via coordinator
+        pipeline.queue([FilterMatchKey::new(100, block.block_hash())]);
+        let hashes = pipeline.coordinator.take_pending(1);
+        pipeline.coordinator.mark_sent(&hashes);
+
+        // First receive
+        let result = pipeline.receive_block(&block);
+        assert!(result);
+        assert_eq!(pipeline.completed_count, 1);
+        assert_eq!(pipeline.downloaded.len(), 1);
+
+        // Duplicate receive (not tracked anymore since already completed)
+        let result = pipeline.receive_block(&block);
+        assert!(!result);
+        assert_eq!(pipeline.completed_count, 1);
+        assert_eq!(pipeline.downloaded.len(), 1);
+    }
+}
diff --git a/dash-spv/src/sync/blocks/progress.rs b/dash-spv/src/sync/blocks/progress.rs
new file mode 100644
index 000000000..3dee59bca
--- /dev/null
+++ b/dash-spv/src/sync/blocks/progress.rs
@@ -0,0 +1,143 @@
+use crate::sync::SyncState;
+use dashcore::prelude::CoreBlockHeight;
+use std::fmt;
+use std::time::Instant;
+
+/// Progress for blocks synchronization.
+#[derive(Debug, Clone, PartialEq)]
+pub struct BlocksProgress {
+    /// Current sync state.
+    state: SyncState,
+    /// Last processed block height.
+    last_processed: CoreBlockHeight,
+    /// Total blocks requested from filter matches in the current sync session.
+    requested: u32,
+    /// Blocks loaded from local storage in the current sync session.
+    from_storage: u32,
+    /// Blocks downloaded from the network in the current sync session.
+    downloaded: u32,
+    /// Total blocks processed through wallet in the current sync session.
+    processed: u32,
+    /// Blocks that contained wallet-relevant transactions in the current sync session.
+    relevant: u32,
+    /// Number of transactions found in the current sync session.
+    transactions: u32,
+    /// The last time a block was stored/processed or the last manager state change.
+    last_activity: Instant,
+}
+
+impl Default for BlocksProgress {
+    fn default() -> Self {
+        Self {
+            state: Default::default(),
+            last_processed: 0,
+            requested: 0,
+            from_storage: 0,
+            downloaded: 0,
+            processed: 0,
+            relevant: 0,
+            transactions: 0,
+            last_activity: Instant::now(),
+        }
+    }
+}
+
+impl BlocksProgress {
+    pub fn state(&self) -> SyncState {
+        self.state
+    }
+
+    pub fn last_processed(&self) -> CoreBlockHeight {
+        self.last_processed
+    }
+
+    pub fn requested(&self) -> u32 {
+        self.requested
+    }
+
+    pub fn from_storage(&self) -> u32 {
+        self.from_storage
+    }
+
+    pub fn downloaded(&self) -> u32 {
+        self.downloaded
+    }
+
+    pub fn processed(&self) -> u32 {
+        self.processed
+    }
+
+    pub fn relevant(&self) -> u32 {
+        self.relevant
+    }
+
+    pub fn transactions(&self) -> u32 {
+        self.transactions
+    }
+
+    pub fn last_activity(&self) -> Instant {
+        self.last_activity
+    }
+
+    pub fn set_state(&mut self, state: SyncState) {
+        self.state = state;
+        self.bump_last_activity();
+    }
+
+    pub fn update_last_processed(&mut self, height: CoreBlockHeight) {
+        self.last_processed = height;
+        self.bump_last_activity();
+    }
+
+    pub fn add_requested(&mut self, count: u32) {
+        self.requested += count;
+        self.bump_last_activity();
+    }
+
+    pub fn add_from_storage(&mut self, count: u32) {
+        self.from_storage += count;
+        self.bump_last_activity();
+    }
+
+    pub fn add_downloaded(&mut self, count: u32) {
+        self.downloaded += count;
+        self.bump_last_activity();
+    }
+
+    pub fn add_processed(&mut self, count: u32) {
+        self.processed += count;
+        self.bump_last_activity();
+    }
+
+    pub fn add_relevant(&mut self, count: u32) {
+        self.relevant += count;
+        self.bump_last_activity();
+    }
+
+    pub fn add_transactions(&mut self, count: u32) {
+        self.transactions += count;
+        self.bump_last_activity();
+    }
+
+    pub fn bump_last_activity(&mut self) {
+        self.last_activity = Instant::now();
+    }
+}
+
+impl fmt::Display for BlocksProgress {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        write!(
+            f,
+            "{:?} last_processed: {} | requested: {}, from_storage: {}, downloaded: {}, processed: {}, relevant: {}, transactions: {}, last_activity: {}s",
+            self.state,
+            self.last_processed,
+            self.requested,
+            self.from_storage,
+            self.downloaded,
+            self.processed,
+            self.relevant,
+            self.transactions,
+            self.last_activity.elapsed().as_secs(),
+        )
+    }
+}
diff --git a/dash-spv/src/sync/blocks/sync_manager.rs b/dash-spv/src/sync/blocks/sync_manager.rs
new file mode 100644
index 000000000..c0b93e7d8
--- /dev/null
+++ b/dash-spv/src/sync/blocks/sync_manager.rs
@@ -0,0 +1,211 @@
+use crate::error::SyncResult;
+use crate::network::{Message, MessageType, RequestSender};
+use crate::storage::{BlockHeaderStorage, BlockStorage};
+use crate::sync::{
+    BlocksManager, ManagerIdentifier, SyncEvent, SyncManager, SyncManagerProgress, SyncState,
+};
+use crate::types::HashedBlock;
+use crate::SyncError;
+use async_trait::async_trait;
+use dashcore::network::message::NetworkMessage;
+use key_wallet_manager::wallet_interface::WalletInterface;
+
+#[async_trait]
+impl<H: BlockHeaderStorage, B: BlockStorage, W: WalletInterface> SyncManager
+    for BlocksManager<H, B, W>
+{
+    fn identifier(&self) -> ManagerIdentifier {
+        ManagerIdentifier::Block
+    }
+
+    fn state(&self) -> SyncState {
+        self.progress.state()
+    }
+
+    fn set_state(&mut self, state: SyncState) {
+        self.progress.set_state(state);
+    }
+
+    fn wanted_message_types(&self) -> &'static [MessageType] {
+        &[MessageType::Block]
+    }
+
+    async fn initialize(&mut self) -> SyncResult<()> {
+        // Get wallet state
+        let wallet = self.wallet.read().await;
+        let synced_height = wallet.synced_height();
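+        // Carry the wallet's synced height into the progress tracker below so
+        // block processing resumes where the previous session left off.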
+        drop(wallet);
+
+        self.progress.update_last_processed(synced_height);
+        self.progress.set_state(SyncState::WaitingForConnections);
+
+        tracing::info!("BlocksManager initialized at height {}", self.progress.last_processed());
+
+        Ok(())
+    }
+
+    async fn start_sync(&mut self, _requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        // Check if filters already completed (event received before start_sync)
+        if self.filters_sync_complete && self.pipeline.is_complete() {
+            self.progress.set_state(SyncState::Synced);
+            tracing::info!("BlocksManager: already synced (filters complete, no blocks needed)");
+            return Ok(vec![]);
+        }
+
+        // Otherwise wait for BlocksNeeded or FiltersSyncComplete events
+        self.set_state(SyncState::WaitForEvents);
+        Ok(vec![])
+    }
+
+    fn stop_sync(&mut self) {
+        self.progress.set_state(SyncState::WaitingForConnections);
+        self.filters_sync_complete = false;
+    }
+
+    async fn handle_message(
+        &mut self,
+        msg: Message,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        let NetworkMessage::Block(block) = msg.inner() else {
+            return Ok(vec![]);
+        };
+
+        let hashed_block = HashedBlock::from(block);
+
+        // Check if this is a block we requested (pipeline handles buffering with height)
+        if !self.pipeline.receive_block(block) {
+            tracing::debug!("Received unrequested block {}", hashed_block.hash());
+            return Ok(vec![]);
+        }
+
+        // Look up height for storage
+        let height = self
+            .header_storage
+            .read()
+            .await
+            .get_header_height_by_hash(hashed_block.hash())
+            .await?
+            .ok_or_else(|| {
+                SyncError::InvalidState(format!(
+                    "Block {} has no stored header - cannot determine height",
+                    hashed_block.hash()
+                ))
+            })?;
+
+        tracing::debug!("Received block {} at height {}", hashed_block.hash(), height);
+
+        // Persist blocks to speed up wallet rescans
+        self.block_storage.write().await.store_block(height, hashed_block).await?;
+
+        self.progress.add_downloaded(1);
+
+        // Process buffered blocks
+        let events = self.process_buffered_blocks().await?;
+
+        if self.pipeline.has_pending_requests() {
+            self.send_pending(requests).await?;
+        }
+
+        Ok(events)
+    }
+
+    async fn handle_sync_event(
+        &mut self,
+        event: &SyncEvent,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        // React to BlocksNeeded events
+        if let SyncEvent::BlocksNeeded {
+            blocks,
+        } = event
+        {
+            if blocks.is_empty() {
+                return Ok(vec![]);
+            }
+
+            tracing::debug!("Blocks needed: {} blocks", blocks.len());
+
+            let mut to_download = Vec::new();
+
+            let block_storage = self.block_storage.read().await;
+            for key in blocks {
+                // Check if block is already stored (from previous sync)
+                if let Ok(Some(hashed_block)) = block_storage.load_block(key.height()).await {
+                    if hashed_block.hash() != key.hash() {
+                        tracing::warn!(
+                            "Stored block hash mismatch at height {}: expected {}, got {}",
+                            key.height(),
+                            key.hash(),
+                            hashed_block.hash(),
+                        );
+                        return Err(SyncError::Validation(format!(
+                            "Stored block hash mismatch: expected {:?}, got {}",
+                    }
+                    // Block loaded from storage, add to pipeline for processing
+                    self.pipeline.add_from_storage(hashed_block.block().clone(), key.height());
+                    self.progress.add_from_storage(1);
+                    continue;
+                }
+
+                // Block not in storage, queue for download with height
+                to_download.push(key.clone());
+            }
+            drop(block_storage);
+
+            // Queue all blocks that need downloading
+            self.pipeline.queue(to_download);
+
+            self.progress.set_state(SyncState::Syncing);
+
+            // Send batched request for blocks not in storage
+            if self.pipeline.has_pending_requests() {
+                self.send_pending(requests).await?;
+            }
+
+            // Process any blocks we loaded from storage
+            return self.process_buffered_blocks().await;
+        }
+
+        // React to FiltersSyncComplete - filters are done, no more BlocksNeeded events coming
+        if let SyncEvent::FiltersSyncComplete {
+            ..
+        } = event
+        {
+            self.filters_sync_complete = true;
+
+            // If pipeline is already empty, transition to Synced now
+            if self.pipeline.is_complete()
+                && matches!(self.state(), SyncState::Syncing | SyncState::WaitForEvents)
+            {
+                self.progress.set_state(SyncState::Synced);
+                tracing::info!(
+                    "Block sync complete, processed {} blocks",
+                    self.progress.processed()
+                );
+            }
+        }
+
+        Ok(vec![])
+    }
+
+    async fn tick(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        // Handle timeouts
+        let timed_out = self.pipeline.handle_timeouts();
+        if !timed_out.is_empty() {
+            tracing::debug!("Re-queued {} timed out block downloads", timed_out.len());
+        }
+
+        self.send_pending(requests).await?;
+
+        // Try to process any buffered blocks
+        self.process_buffered_blocks().await
+    }
+
+    fn progress(&self) -> SyncManagerProgress {
+        SyncManagerProgress::Blocks(self.progress.clone())
+    }
+}
diff --git a/dash-spv/src/sync/chainlock/manager.rs b/dash-spv/src/sync/chainlock/manager.rs
new file mode 100644
index 000000000..cab3c25b2
--- /dev/null
+++ b/dash-spv/src/sync/chainlock/manager.rs
@@ -0,0 +1,310 @@
+//! ChainLock manager for parallel sync.
+//!
+//! Handles ChainLock messages (clsig) from the network. Validates ChainLocks
+//! only after masternode data is available. Since ChainLocks are cumulative
+//! (all blocks below the best ChainLock are implicitly locked), we only track
+//! the best validated ChainLock.
+
+use std::sync::Arc;
+
+use dashcore::ephemerealdata::chain_lock::ChainLock;
+use dashcore::hash_types::ChainLockHash;
+use dashcore::sml::masternode_list_engine::MasternodeListEngine;
+use std::collections::HashSet;
+use tokio::sync::RwLock;
+
+use crate::error::SyncResult;
+use crate::storage::BlockHeaderStorage;
+use crate::sync::{ChainLockProgress, SyncEvent};
+
+/// ChainLock manager for the parallel sync coordinator.
+///
+/// This manager:
+/// - Subscribes to CLSig messages from the network
+/// - Validates ChainLocks only after masternode sync is complete
+/// - Tracks only the best (highest) validated ChainLock
+/// - Emits ChainLockReceived events
+pub struct ChainLockManager<H> {
+    /// Current progress of the manager.
+    pub(super) progress: ChainLockProgress,
+    /// Block header storage for hash verification.
+    header_storage: Arc<RwLock<H>>,
+    /// Masternode engine for BLS signature validation.
+    masternode_engine: Arc<RwLock<MasternodeListEngine>>,
+    /// The best (highest height) validated ChainLock.
+    best_chainlock: Option<ChainLock>,
+    /// ChainLock hashes that have been requested (to avoid duplicate requests).
+    pub(super) requested_chainlocks: HashSet<ChainLockHash>,
+    /// Whether masternode sync is complete and we can validate signatures.
+    masternode_ready: bool,
+}
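+
+// Illustrative only (hypothetical call site, not part of this module): the
+// cumulative ChainLock property means a single height comparison answers
+// "is this block locked?" - no per-block lock storage is needed:
+//
+//     if manager.is_block_chainlocked(height) {
+//         // no reorg can replace a block at or below `height`
+//     }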
+
+impl<H: BlockHeaderStorage> ChainLockManager<H> {
+    /// Create a new ChainLock manager.
+    pub fn new(
+        header_storage: Arc<RwLock<H>>,
+        masternode_engine: Arc<RwLock<MasternodeListEngine>>,
+    ) -> Self {
+        Self {
+            progress: ChainLockProgress::default(),
+            header_storage,
+            masternode_engine,
+            best_chainlock: None,
+            requested_chainlocks: HashSet::new(),
+            masternode_ready: false,
+        }
+    }
+
+    /// Notify the manager that masternode sync is complete.
+    pub(super) fn set_masternode_ready(&mut self) {
+        self.masternode_ready = true;
+    }
+
+    /// Process an incoming ChainLock message.
+    pub(super) async fn process_chainlock(
+        &mut self,
+        chainlock: &ChainLock,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        let height = chainlock.block_height;
+        let block_hash = chainlock.block_hash;
+
+        tracing::info!("Processing ChainLock for height {} hash {}", height, block_hash);
+
+        // Skip if we already have a better or equal ChainLock
+        if let Some(best) = &self.best_chainlock {
+            if height <= best.block_height {
+                tracing::debug!(
+                    "Ignoring ChainLock at height {} (best is {})",
+                    height,
+                    best.block_height
+                );
+                return Ok(vec![]);
+            }
+        }
+
+        // Verify block hash matches our chain (if we have the header)
+        if !self.verify_block_hash(chainlock).await {
+            tracing::warn!("ChainLock hash mismatch at height {}, rejecting", height);
+            return Ok(vec![]);
+        }
+
+        // Only validate if masternode sync is complete
+        if !self.masternode_ready {
+            tracing::debug!(
+                "Skipping ChainLock validation at height {} (masternode sync not complete)",
+                height
+            );
+            return Ok(vec![SyncEvent::ChainLockReceived {
+                chain_lock: chainlock.clone(),
+                validated: false,
+            }]);
+        }
+
+        // Validate with masternode engine
+        let validated = self.validate_signature(chainlock).await;
+
+        if validated {
+            self.progress.add_valid(1);
+            self.progress.update_best_validated_height(height);
+
+            // Update best ChainLock
+            self.best_chainlock = Some(chainlock.clone());
+        } else {
+            self.progress.add_invalid(1);
+        }
+
+        Ok(vec![SyncEvent::ChainLockReceived {
+            chain_lock: chainlock.clone(),
+            validated,
+        }])
+    }
+
+    /// Verify that the ChainLock block hash matches our stored header.
+    /// Returns true if the hash matches or we don't have the header yet.
+    /// Returns false if we have the header and the hash doesn't match.
+    async fn verify_block_hash(&self, chainlock: &ChainLock) -> bool {
+        let storage = self.header_storage.read().await;
+        match storage.get_header(chainlock.block_height).await {
+            Ok(Some(header)) => header.block_hash() == chainlock.block_hash,
+            Ok(None) => {
+                // Don't reject if we don't have the header yet
+                true
+            }
+            Err(e) => {
+                tracing::warn!(
+                    "Storage error checking ChainLock header at height {}: {}",
+                    chainlock.block_height,
+                    e
+                );
+                // Accept since we can't verify - will validate when header arrives
+                true
+            }
+        }
+    }
+
+    /// Validate the ChainLock BLS signature using the masternode engine.
+    async fn validate_signature(&self, chainlock: &ChainLock) -> bool {
+        let engine = self.masternode_engine.read().await;
+
+        match engine.verify_chain_lock(chainlock) {
+            Ok(()) => {
+                tracing::info!(
+                    "ChainLock signature verified for height {}",
+                    chainlock.block_height
+                );
+                true
+            }
+            Err(e) => {
+                tracing::warn!(
+                    "ChainLock signature verification failed for height {}: {}",
+                    chainlock.block_height,
+                    e
+                );
+                false
+            }
+        }
+    }
+
+    /// Get the best validated ChainLock.
+    pub fn best_chainlock(&self) -> Option<&ChainLock> {
+        self.best_chainlock.as_ref()
+    }
+
+    /// Check if a block at the given height is chainlocked.
+    /// All blocks at or below the best validated ChainLock height are considered locked.
+    pub fn is_block_chainlocked(&self, height: u32) -> bool {
+        self.best_chainlock.as_ref().map(|cl| height <= cl.block_height).unwrap_or(false)
+    }
+}
+
+impl<H> std::fmt::Debug for ChainLockManager<H> {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("ChainLockManager")
+            .field("progress", &self.progress)
+            .field("best_height", &self.best_chainlock.as_ref().map(|cl| cl.block_height))
+            .field("masternode_ready", &self.masternode_ready)
+            .finish()
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::network::MessageType;
+    use crate::storage::{DiskStorageManager, PersistentBlockHeaderStorage};
+    use crate::sync::{ManagerIdentifier, SyncManager, SyncManagerProgress, SyncState};
+    use crate::Network;
+    use dashcore::bls_sig_utils::BLSSignature;
+    use dashcore::hashes::Hash;
+    use dashcore::BlockHash;
+
+    type TestChainLockManager = ChainLockManager<PersistentBlockHeaderStorage>;
+
+    async fn create_test_manager() -> TestChainLockManager {
+        let storage = DiskStorageManager::with_temp_dir().await.unwrap();
+        let engine =
+            Arc::new(RwLock::new(MasternodeListEngine::default_for_network(Network::Testnet)));
+        ChainLockManager::new(storage.header_storage(), engine)
+    }
+
+    fn create_test_chainlock(height: u32) -> ChainLock {
+        ChainLock {
+            block_height: height,
+            block_hash: BlockHash::all_zeros(),
+            signature: BLSSignature::from([0u8; 96]),
+        }
+    }
+
+    #[tokio::test]
+    async fn test_chainlock_manager_new() {
+        let manager = create_test_manager().await;
+        assert_eq!(manager.identifier(), ManagerIdentifier::ChainLock);
+        assert_eq!(manager.state(), SyncState::Initializing);
+        assert_eq!(manager.wanted_message_types(), vec![MessageType::CLSig, MessageType::Inv]);
+    }
+
+    #[tokio::test]
+    async fn test_chainlock_skips_validation_before_masternode_ready() {
+        let mut manager = create_test_manager().await;
+
+        // Before masternode sync, ChainLocks should not be validated
+        let chainlock = create_test_chainlock(100);
+        let events = manager.process_chainlock(&chainlock).await.unwrap();
+
+        assert_eq!(events.len(), 1);
+        assert_eq!(manager.progress.valid(), 0);
+        assert_eq!(manager.progress.invalid(), 0);
+        assert!(manager.best_chainlock().is_none());
+    }
+
+    #[tokio::test]
+    async fn test_chainlock_validates_after_masternode_ready() {
+        let mut manager = create_test_manager().await;
+        manager.set_masternode_ready();
+
+        // After masternode sync, ChainLocks should be validated (will fail with empty engine)
+        let chainlock = create_test_chainlock(100);
+        let _ = manager.process_chainlock(&chainlock).await.unwrap();
+
+        assert_eq!(manager.progress.invalid(), 1);
+        assert_eq!(manager.progress.valid(), 0);
+    }
+
+    #[tokio::test]
+    async fn test_chainlock_keeps_only_best() {
+        let mut manager = create_test_manager().await;
+
+        // Manually set a best chainlock
+        manager.best_chainlock = Some(create_test_chainlock(200));
+
+        // Lower height should be ignored
+        let chainlock_lower = create_test_chainlock(150);
+        let events = manager.process_chainlock(&chainlock_lower).await.unwrap();
+        assert_eq!(events.len(), 0);
+
+        // Equal height should also be ignored
+        let chainlock_equal = create_test_chainlock(200);
+        let events = manager.process_chainlock(&chainlock_equal).await.unwrap();
+        assert_eq!(events.len(), 0);
+
+        // Higher height should be processed
+        let chainlock_higher = create_test_chainlock(300);
+        let events = manager.process_chainlock(&chainlock_higher).await.unwrap();
+        assert_eq!(events.len(), 1);
+    }
+
+    #[tokio::test]
+    async fn test_chainlock_progress() {
+        let mut manager = create_test_manager().await;
+        manager.set_state(SyncState::Syncing);
+        manager.progress.update_best_validated_height(500);
+        manager.progress.add_valid(8);
+        manager.progress.add_invalid(2);
+
+        let progress = manager.progress();
+        if let SyncManagerProgress::ChainLock(cp) = progress {
+            assert_eq!(cp.state(), SyncState::Syncing);
+            assert_eq!(cp.best_validated_height(), 500);
+            assert_eq!(cp.valid(), 8);
+            assert_eq!(cp.invalid(), 2);
+        } else {
+            panic!("Expected SyncManagerProgress::ChainLock");
+        }
+    }
+
+    #[tokio::test]
+    async fn test_is_block_chainlocked() {
+        let mut manager = create_test_manager().await;
+
+        // No ChainLock yet
+        assert!(!manager.is_block_chainlocked(100));
+
+        // Manually set best chainlock for testing
+        manager.best_chainlock = Some(create_test_chainlock(500));
+
+        // All blocks at or below 500 should be chainlocked
+        assert!(manager.is_block_chainlocked(1));
+        assert!(manager.is_block_chainlocked(500));
+        assert!(!manager.is_block_chainlocked(501));
+    }
+}
diff --git a/dash-spv/src/sync/chainlock/mod.rs b/dash-spv/src/sync/chainlock/mod.rs
new file mode 100644
index 000000000..40cd5b334
--- /dev/null
+++ b/dash-spv/src/sync/chainlock/mod.rs
@@ -0,0 +1,6 @@
+mod manager;
+mod progress;
+mod sync_manager;
+
+pub use manager::ChainLockManager;
+pub use progress::ChainLockProgress;
diff --git a/dash-spv/src/sync/chainlock/progress.rs b/dash-spv/src/sync/chainlock/progress.rs
new file mode 100644
index 000000000..553d3216e
--- /dev/null
+++ b/dash-spv/src/sync/chainlock/progress.rs
@@ -0,0 +1,91 @@
+use crate::sync::SyncState;
+use std::fmt;
+use std::time::Instant;
+
+/// Progress for ChainLock synchronization.
+#[derive(Debug, Clone, PartialEq)]
+pub struct ChainLockProgress {
+    /// Current sync state.
+    state: SyncState,
+    /// The highest block height of a valid ChainLock.
+    best_validated_height: u32,
+    /// Number of ChainLocks successfully verified.
+    valid: u32,
+    /// Number of ChainLocks that failed validation.
+    invalid: u32,
+    /// The last time a ChainLock was processed or the last manager state change.
+    last_activity: Instant,
+}
+
+impl Default for ChainLockProgress {
+    fn default() -> Self {
+        Self {
+            state: Default::default(),
+            best_validated_height: 0,
+            valid: 0,
+            invalid: 0,
+            last_activity: Instant::now(),
+        }
+    }
+}
+
+impl ChainLockProgress {
+    /// Get the current sync state.
+    pub fn state(&self) -> SyncState {
+        self.state
+    }
+    /// Get the highest block height of a valid ChainLock.
+    pub fn best_validated_height(&self) -> u32 {
+        self.best_validated_height
+    }
+    /// Number of ChainLocks successfully verified.
+    pub fn valid(&self) -> u32 {
+        self.valid
+    }
+    /// Number of ChainLocks that failed validation.
+    pub fn invalid(&self) -> u32 {
+        self.invalid
+    }
+    /// The last time a ChainLock was processed or the last manager state change.
+    pub fn last_activity(&self) -> Instant {
+        self.last_activity
+    }
+    /// Update the sync state and bump the last activity time.
+    pub fn set_state(&mut self, state: SyncState) {
+        self.state = state;
+        self.bump_last_activity();
+    }
+    /// Update the highest block height of a valid ChainLock.
+    pub fn update_best_validated_height(&mut self, height: u32) {
+        self.best_validated_height = height;
+        self.bump_last_activity();
+    }
+    /// Add a number to the valid counter.
+    pub fn add_valid(&mut self, count: u32) {
+        self.valid += count;
+        self.bump_last_activity();
+    }
+    /// Add a number to the invalid counter.
+    pub fn add_invalid(&mut self, count: u32) {
+        self.invalid += count;
+        self.bump_last_activity();
+    }
+    /// Bump the last activity time.
+    pub fn bump_last_activity(&mut self) {
+        self.last_activity = Instant::now();
+    }
+}
+
+impl fmt::Display for ChainLockProgress {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        write!(
+            f,
+            "{:?} best_validated_height: {} | valid: {}, invalid: {}, last_activity: {}s",
+            self.state,
+            self.best_validated_height,
+            self.valid,
+            self.invalid,
+            self.last_activity.elapsed().as_secs()
+        )
+    }
+}
diff --git a/dash-spv/src/sync/chainlock/sync_manager.rs b/dash-spv/src/sync/chainlock/sync_manager.rs
new file mode 100644
index 000000000..8082f3450
--- /dev/null
+++ b/dash-spv/src/sync/chainlock/sync_manager.rs
@@ -0,0 +1,98 @@
+use crate::error::SyncResult;
+use crate::network::{Message, MessageType, RequestSender};
+use crate::storage::BlockHeaderStorage;
+use crate::sync::{
+    ChainLockManager, ManagerIdentifier, SyncEvent, SyncManager, SyncManagerProgress, SyncState,
+};
+use async_trait::async_trait;
+use dashcore::network::message::NetworkMessage;
+use dashcore::network::message_blockdata::Inventory;
+
+#[async_trait]
+impl<H: BlockHeaderStorage> SyncManager for ChainLockManager<H> {
+    fn identifier(&self) -> ManagerIdentifier {
+        ManagerIdentifier::ChainLock
+    }
+
+    fn state(&self) -> SyncState {
+        self.progress.state()
+    }
+
+    fn set_state(&mut self, state: SyncState) {
+        self.progress.set_state(state);
+    }
+
+    fn wanted_message_types(&self) -> &'static [MessageType] {
+        &[MessageType::CLSig, MessageType::Inv]
+    }
+
+    async fn handle_message(
+        &mut self,
+        msg: Message,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        match msg.inner() {
+            NetworkMessage::CLSig(chainlock) => self.process_chainlock(chainlock).await,
+            NetworkMessage::Inv(inv) => {
+                // Check for ChainLock inventory items, filtering out already-requested ones
+                let chainlocks_to_request: Vec<Inventory> = inv
+                    .iter()
+                    .filter(|item| {
+                        if let Inventory::ChainLock(hash) = item {
+                            // Only request if we haven't already requested this ChainLock
+                            !self.requested_chainlocks.contains(hash)
+                        } else {
+                            false
+                        }
+                    })
+                    .cloned()
+                    .collect();
+
+                if !chainlocks_to_request.is_empty() {
+                    tracing::info!(
+                        "Received {} ChainLock announcements, requesting via getdata",
+                        chainlocks_to_request.len()
+                    );
+                    requests.request_inventory(chainlocks_to_request.clone())?;
+
+                    for item in &chainlocks_to_request {
+                        if let Inventory::ChainLock(hash) = item {
+                            self.requested_chainlocks.insert(*hash);
+                        }
+                    }
+                }
+                Ok(vec![])
+            }
+            _ => Ok(vec![]),
+        }
+    }
+
+    async fn handle_sync_event(
+        &mut self,
+        event: &SyncEvent,
+        _requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        // Enable ChainLock validation when masternode state is available
+        if let SyncEvent::MasternodeStateUpdated {
+            ..
+        } = event
+        {
+            self.set_masternode_ready();
+            if matches!(self.state(), SyncState::Syncing | SyncState::WaitForEvents) {
+                self.set_state(SyncState::Synced);
+                tracing::info!("ChainLock manager synced (masternode data available)");
+            }
+        }
+
+        Ok(vec![])
+    }
+
+    async fn tick(&mut self, _requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        // No periodic work needed
+        Ok(vec![])
+    }
+
+    fn progress(&self) -> SyncManagerProgress {
+        SyncManagerProgress::ChainLock(self.progress.clone())
+    }
+}
diff --git a/dash-spv/src/sync/download_coordinator.rs b/dash-spv/src/sync/download_coordinator.rs
new file mode 100644
index 000000000..29572f817
--- /dev/null
+++ b/dash-spv/src/sync/download_coordinator.rs
@@ -0,0 +1,415 @@
+//! Generic download coordinator for pipelined downloads.
+//!
+//! Provides a single abstraction for managing concurrent downloads with:
+//! - Pending queue management
+//! - In-flight tracking with timestamps
+//! - Timeout detection and retry logic
+//! - Configurable concurrency limits
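+//!
+//! Illustrative flow (hypothetical `u32` keys; the coordinator only does
+//! bookkeeping - actual network I/O stays with the caller):
+//!
+//! ```ignore
+//! let mut dl: DownloadCoordinator<u32> = DownloadCoordinator::default();
+//! dl.enqueue([1, 2, 3]);
+//! let batch = dl.take_pending(dl.available_to_send());
+//! // ... send the real requests here, then:
+//! dl.mark_sent(&batch);
+//! assert!(dl.receive(&1));                      // response for item 1 arrived
+//! let requeued = dl.check_and_retry_timeouts(); // stale items go back in front
+//! ```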
+
+use std::collections::{HashMap, VecDeque};
+use std::hash::Hash;
+use std::time::{Duration, Instant};
+
+/// Configuration for download coordination.
+#[derive(Debug, Clone)]
+pub struct DownloadConfig {
+    /// Maximum concurrent in-flight requests.
+    max_concurrent: usize,
+    /// Timeout duration for requests.
+    timeout: Duration,
+    /// Maximum retry attempts before giving up.
+    max_retries: u32,
+}
+
+impl Default for DownloadConfig {
+    fn default() -> Self {
+        Self {
+            max_concurrent: 10,
+            timeout: Duration::from_secs(30),
+            max_retries: 3,
+        }
+    }
+}
+
+impl DownloadConfig {
+    /// Create config with custom max concurrent.
+    pub(crate) fn with_max_concurrent(mut self, max: usize) -> Self {
+        self.max_concurrent = max;
+        self
+    }
+
+    /// Create config with custom timeout.
+    pub(crate) fn with_timeout(mut self, timeout: Duration) -> Self {
+        self.timeout = timeout;
+        self
+    }
+
+    /// Create config with custom max retries.
+    pub(crate) fn with_max_retries(mut self, max: u32) -> Self {
+        self.max_retries = max;
+        self
+    }
+}
+
+/// Generic download coordinator.
+///
+/// Handles the common mechanics of pipelined downloads:
+/// - Queue management (pending items)
+/// - In-flight tracking with timestamps
+/// - Timeout detection and retry
+/// - Concurrency limits
+///
+/// Generic over the key type `K` which identifies download items.
+/// Use `u32` for height-based downloads, `BlockHash` for hash-based.
+#[derive(Debug)]
+pub(crate) struct DownloadCoordinator<K> {
+    /// Items waiting to be requested.
+    pending: VecDeque<K>,
+    /// Items currently in-flight (key -> sent time).
+    in_flight: HashMap<K, Instant>,
+    /// Retry counts per key.
+    retry_counts: HashMap<K, u32>,
+    /// Configuration.
+    config: DownloadConfig,
+    /// Last time progress was made.
+    last_progress: Instant,
+}
+
+impl<K: Clone + Eq + Hash> Default for DownloadCoordinator<K> {
+    fn default() -> Self {
+        Self::new(DownloadConfig::default())
+    }
+}
+
+impl<K: Clone + Eq + Hash> DownloadCoordinator<K> {
+    /// Create a new coordinator with the given configuration.
+    pub(crate) fn new(config: DownloadConfig) -> Self {
+        Self {
+            pending: VecDeque::new(),
+            in_flight: HashMap::new(),
+            retry_counts: HashMap::new(),
+            config,
+            last_progress: Instant::now(),
+        }
+    }
+
+    /// Clear all state.
+    pub(crate) fn clear(&mut self) {
+        self.pending.clear();
+        self.in_flight.clear();
+        self.retry_counts.clear();
+        self.last_progress = Instant::now();
+    }
+
+    /// Queue items for download.
+    pub(crate) fn enqueue(&mut self, items: impl IntoIterator<Item = K>) {
+        for item in items {
+            self.pending.push_back(item);
+        }
+    }
+
+    /// Queue an item for retry (goes to front of queue).
+    ///
+    /// Returns false if max retries exceeded.
+    pub(crate) fn enqueue_retry(&mut self, item: K) -> bool {
+        let count = self.retry_counts.entry(item.clone()).or_insert(0);
+        if *count >= self.config.max_retries {
+            tracing::warn!("Max retries ({}) exceeded, giving up", self.config.max_retries);
+            return false;
+        }
+        *count += 1;
+        self.pending.push_front(item);
+        true
+    }
+
+    /// Get the number of items available to send (respecting concurrency limit).
+    pub(crate) fn available_to_send(&self) -> usize {
+        self.config.max_concurrent.saturating_sub(self.in_flight.len()).min(self.pending.len())
+    }
+
+    /// Take items from the pending queue (up to count).
+    ///
+    /// Items are removed from pending but NOT yet marked as in-flight.
+    /// Call `mark_sent` after successfully sending the request.
+    pub(crate) fn take_pending(&mut self, count: usize) -> Vec<K> {
+        let actual = count.min(self.pending.len());
+        let mut items = Vec::with_capacity(actual);
+        for _ in 0..actual {
+            if let Some(item) = self.pending.pop_front() {
+                items.push(item);
+            }
+        }
+        items
+    }
+
+    /// Mark items as sent (now in-flight).
+    pub(crate) fn mark_sent(&mut self, items: &[K]) {
+        let now = Instant::now();
+        for item in items {
+            self.in_flight.insert(item.clone(), now);
+        }
+    }
+
+    /// Handle a received item.
+    ///
+    /// Returns true if the item was being tracked, false if unexpected.
+    pub(crate) fn receive(&mut self, key: &K) -> bool {
+        if self.in_flight.remove(key).is_some() {
+            self.last_progress = Instant::now();
+            true
+        } else {
+            false
+        }
+    }
+
+    /// Check if an item is currently in-flight.
+    pub(crate) fn is_in_flight(&self, key: &K) -> bool {
+        self.in_flight.contains_key(key)
+    }
+
+    /// Check for timed-out items.
+    ///
+    /// Returns items that have timed out. They are removed from in-flight tracking.
+    /// Caller should call `enqueue_retry` for items that should be retried.
+    pub(crate) fn check_timeouts(&mut self) -> Vec<K> {
+        let now = Instant::now();
+        let timed_out: Vec<K> = self
+            .in_flight
+            .iter()
+            .filter(|(_, sent_time)| now.duration_since(**sent_time) > self.config.timeout)
+            .map(|(key, _)| key.clone())
+            .collect();
+
+        for key in &timed_out {
+            self.in_flight.remove(key);
+        }
+
+        if !timed_out.is_empty() {
+            tracing::debug!("{} items timed out after {:?}", timed_out.len(), self.config.timeout);
+        }
+
+        timed_out
+    }
+
+    /// Check for timed-out items and re-enqueue them for retry.
+    ///
+    /// Combines `check_timeouts()` and `enqueue_retry()` in one call.
+    /// Returns only items that were successfully re-queued. Items that
+    /// exceeded their max retry count are excluded from the result.
+    pub(crate) fn check_and_retry_timeouts(&mut self) -> Vec<K> {
+        let timed_out = self.check_timeouts();
+        timed_out.into_iter().filter(|item| self.enqueue_retry(item.clone())).collect()
+    }
+
+    /// Check if the coordinator has no work (empty pending and in-flight).
+    pub(crate) fn is_empty(&self) -> bool {
+        self.pending.is_empty() && self.in_flight.is_empty()
+    }
+
+    /// Get the number of pending items.
+    pub(crate) fn pending_count(&self) -> usize {
+        self.pending.len()
+    }
+
+    /// Get the number of in-flight items.
+    pub(crate) fn active_count(&self) -> usize {
+        self.in_flight.len()
+    }
+
+    /// Get the total remaining items (pending + in-flight).
+    pub(crate) fn remaining(&self) -> usize {
+        self.pending.len() + self.in_flight.len()
+    }
+}
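+
+// Illustrative only: tuning the coordinator through the builder methods above
+// (hypothetical numbers for a latency-sensitive fetch):
+//
+//     let config = DownloadConfig::default()
+//         .with_max_concurrent(4)
+//         .with_timeout(Duration::from_secs(10))
+//         .with_max_retries(2);
+//     let dl: DownloadCoordinator<u32> = DownloadCoordinator::new(config);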
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_new_coordinator() {
+        let coord: DownloadCoordinator<u32> = DownloadCoordinator::default();
+        assert!(coord.is_empty());
+        assert_eq!(coord.pending_count(), 0);
+        assert_eq!(coord.active_count(), 0);
+    }
+
+    #[test]
+    fn test_enqueue() {
+        let mut coord: DownloadCoordinator<u32> = DownloadCoordinator::default();
+        coord.enqueue([1, 2, 3, 4, 5]);
+
+        assert_eq!(coord.pending_count(), 5);
+    }
+
+    #[test]
+    fn test_enqueue_retry_goes_to_front() {
+        let mut coord: DownloadCoordinator<u32> = DownloadCoordinator::default();
+        coord.enqueue([1, 2]);
+        coord.enqueue_retry(99);
+
+        let items = coord.take_pending(3);
+        assert_eq!(items, vec![99, 1, 2]);
+    }
+
+    #[test]
+    fn test_max_retries() {
+        let mut coord: DownloadCoordinator<u32> =
+            DownloadCoordinator::new(DownloadConfig::default().with_max_retries(2));
+
+        assert!(coord.enqueue_retry(1));
+        assert!(coord.enqueue_retry(1));
+        assert!(!coord.enqueue_retry(1)); // Exceeds max
+    }
+
+    #[test]
+    fn test_take_pending() {
+        let mut coord: DownloadCoordinator<u32> = DownloadCoordinator::default();
+        coord.enqueue([1, 2, 3, 4, 5]);
+
+        let items = coord.take_pending(3);
+        assert_eq!(items, vec![1, 2, 3]);
+        assert_eq!(coord.pending_count(), 2);
+    }
+
+    #[test]
+    fn test_mark_sent() {
+        let mut coord: DownloadCoordinator<u32> = DownloadCoordinator::default();
+        coord.enqueue([1, 2, 3]);
+
+        let items = coord.take_pending(2);
+        coord.mark_sent(&items);
+
+        assert_eq!(coord.pending_count(), 1);
+        assert_eq!(coord.active_count(), 2);
+        assert!(coord.is_in_flight(&1));
+        assert!(coord.is_in_flight(&2));
+        assert!(!coord.is_in_flight(&3));
+    }
+
+    #[test]
+    fn test_receive() {
+        let mut coord: DownloadCoordinator<u32> = DownloadCoordinator::default();
+        coord.mark_sent(&[1]);
+        coord.mark_sent(&[2]);
+
+        assert!(coord.receive(&1));
+        assert_eq!(coord.active_count(), 1);
+
+        assert!(!coord.receive(&99)); // Not tracked
+        assert_eq!(coord.active_count(), 1);
+    }
+
+    #[test]
+    fn test_available_to_send() {
+        let mut coord: DownloadCoordinator<u32> =
+            DownloadCoordinator::new(DownloadConfig::default().with_max_concurrent(3));
+
+        coord.enqueue([1, 2, 3, 4, 5]);
+        assert_eq!(coord.available_to_send(), 3);
+
+        coord.mark_sent(&[1]);
+        coord.mark_sent(&[2]);
+        assert_eq!(coord.available_to_send(), 1);
+
+        coord.mark_sent(&[3]);
+        assert_eq!(coord.available_to_send(), 0);
+    }
+
+    #[test]
+    fn test_check_timeouts() {
+        let mut coord: DownloadCoordinator<u32> = DownloadCoordinator::new(
+            DownloadConfig::default().with_timeout(Duration::from_millis(10)),
+        );
+
+        coord.mark_sent(&[1]);
+        coord.mark_sent(&[2]);
+
+        // Immediately, nothing timed out
+        let timed_out = coord.check_timeouts();
+        assert!(timed_out.is_empty());
+
+        // Wait for timeout
+        std::thread::sleep(Duration::from_millis(20));
+
+        let timed_out = coord.check_timeouts();
+        assert_eq!(timed_out.len(), 2);
+        assert!(coord.in_flight.is_empty());
+    }
+
+    #[test]
+    fn test_clear() {
+        let mut coord: DownloadCoordinator<u32> = DownloadCoordinator::default();
+        coord.enqueue([1, 2, 3]);
+        coord.mark_sent(&[4]);
+        coord.enqueue_retry(5);
+
+        coord.clear();
+
+        assert!(coord.is_empty());
+        assert_eq!(coord.pending_count(), 0);
+        assert_eq!(coord.active_count(), 0);
+    }
+
+    #[test]
+    fn test_remaining() {
+        let mut coord: DownloadCoordinator<u32> = DownloadCoordinator::default();
+        coord.enqueue([1, 2, 3]);
+        coord.mark_sent(&[4]);
+        coord.mark_sent(&[5]);
+
+        assert_eq!(coord.remaining(), 5);
+    }
+
+    #[test]
+    fn test_config_builders() {
+        let config = DownloadConfig::default()
+            .with_max_concurrent(20)
+            .with_timeout(Duration::from_secs(60))
+            .with_max_retries(5);
+
+        assert_eq!(config.max_concurrent, 20);
+        assert_eq!(config.timeout, Duration::from_secs(60));
+        assert_eq!(config.max_retries, 5);
+    }
+
+    #[test]
+    fn test_check_and_retry_timeouts_excludes_exceeded_retries() {
+        let mut coord: DownloadCoordinator<u32> = DownloadCoordinator::new(
+            DownloadConfig::default().with_timeout(Duration::from_millis(10)).with_max_retries(1),
+        );
+
+        // Send two items and let them time out
+        coord.mark_sent(&[1, 2]);
+        std::thread::sleep(Duration::from_millis(20));
+
+        // First round: both should be re-queued successfully
+        let requeued = coord.check_and_retry_timeouts();
+        assert_eq!(requeued.len(), 2);
+        assert!(requeued.contains(&1));
+        assert!(requeued.contains(&2));
+
+        // Drain pending and send again so they can time out a second time
+        let items = coord.take_pending(2);
+        coord.mark_sent(&items);
+        std::thread::sleep(Duration::from_millis(20));
+
+        // Second round: both have exceeded max_retries (1), so neither should be returned
+        let requeued = coord.check_and_retry_timeouts();
+        assert!(requeued.is_empty());
+        // Items should not have been re-added to pending
+        assert_eq!(coord.pending_count(), 0);
+    }
+
+    #[test]
+    fn test_with_string_keys() {
+        let mut coord: DownloadCoordinator<String> = DownloadCoordinator::default();
+        coord.enqueue(["block_a".to_string(), "block_b".to_string()]);
+
+        let items = coord.take_pending(1);
+        coord.mark_sent(&items);
+
+        assert!(coord.receive(&"block_a".to_string()));
+        assert!(!coord.receive(&"block_c".to_string()));
+    }
+}
diff --git a/dash-spv/src/sync/events.rs b/dash-spv/src/sync/events.rs
new file mode 100644
index 000000000..11c9f66ac
--- /dev/null
+++ b/dash-spv/src/sync/events.rs
@@ -0,0 +1,243 @@
+use crate::sync::ManagerIdentifier;
+use dashcore::ephemerealdata::chain_lock::ChainLock;
+use dashcore::ephemerealdata::instant_lock::InstantLock;
+use dashcore::{Address, BlockHash};
+use key_wallet_manager::wallet_manager::FilterMatchKey;
+use std::collections::BTreeSet;
+
+/// Events that managers can emit and subscribe to.
+///
+/// Each event represents a meaningful state change that other managers
+/// may need to react to.
+#[derive(Debug, Clone)]
+pub enum SyncEvent {
+    /// A sync manager has started a sync operation.
+    ///
+    /// Emitted by: any sync manager via its `start()` implementation
+    SyncStart {
+        /// Identifies which manager started syncing.
+        identifier: ManagerIdentifier,
+    },
+    /// New block headers have been stored.
+    ///
+    /// Emitted by: `BlockHeadersManager`
+    /// Consumed by: `MasternodesManager`, `FilterHeadersManager`
+    BlockHeadersStored {
+        /// New tip height after storage
+        tip_height: u32,
+    },
+
+    /// Headers have reached the chain tip (initial sync complete).
+    ///
+    /// Emitted by: `BlockHeadersManager`
+    /// Consumed by: `MasternodesManager` (to start masternode sync)
+    BlockHeaderSyncComplete {
+        /// Tip height when sync completed
+        tip_height: u32,
+    },
+
+    /// New filter headers have been stored.
+    ///
+    /// Emitted by: `FilterHeadersManager`
+    /// Consumed by: `FiltersManager`
+    FilterHeadersStored {
+        /// Lowest height stored in this batch
+        start_height: u32,
+        /// Highest height stored in this batch
+        end_height: u32,
+        /// New tip height after storage
+        tip_height: u32,
+    },
+
+    /// Filter headers have reached the chain tip (initial sync complete).
+    ///
+    /// Emitted by: `FilterHeadersManager`
+    /// Consumed by: `FiltersManager`
+    FilterHeadersSyncComplete {
+        /// Tip height when sync completed
+        tip_height: u32,
+    },
+
+    /// Filters have been stored and are ready for matching.
+    ///
+    /// Emitted by: `FiltersManager`
+    /// Consumed by: (informational, used for progress tracking)
+    FiltersStored {
+        /// Lowest height stored
+        start_height: u32,
+        /// Highest height stored
+        end_height: u32,
+    },
+
+    /// Filter sync has reached the chain tip (all filters processed).
+    ///
+    /// Emitted by: `FiltersManager`
+    /// Consumed by: `BlocksManager` (to transition to Synced)
+    FiltersSyncComplete {
+        /// Tip height when sync completed
+        tip_height: u32,
+    },
+
+    /// Filters matched the wallet, blocks need downloading.
+    ///
+    /// Emitted by: `FiltersManager`
+    /// Consumed by: `BlocksManager`
+    BlocksNeeded {
+        /// Blocks to download (sorted by height)
+        blocks: BTreeSet<FilterMatchKey>,
+    },
+
+    /// Block downloaded and processed through wallet.
+    ///
+    /// Emitted by: `BlocksManager`
+    /// Consumed by: `FiltersManager` (for gap limit rescanning)
+    BlockProcessed {
+        /// Hash of the processed block
+        block_hash: BlockHash,
+        /// Height of the processed block
+        height: u32,
+        /// New addresses discovered from wallet gap limit maintenance
+        new_addresses: Vec<Address>,
+    },
+
+    /// Masternode state updated to a new height.
+    ///
+    /// Emitted by: `MasternodesManager`
+    /// Consumed by: (informational, may be used for ChainLock validation)
+    MasternodeStateUpdated {
+        /// New masternode state height
+        height: u32,
+    },
+
+    /// A manager encountered a recoverable error.
+    ///
+    /// Emitted by: Any manager
+    /// Consumed by: Coordinator (for logging/monitoring)
+    ManagerError {
+        /// Which manager encountered the error
+        manager: ManagerIdentifier,
+        /// Error description
+        error: String,
+    },
+
+    /// ChainLock received and processed.
+    ///
+    /// Emitted by: `ChainLockManager`
+    /// Consumed by: External listeners, wallet state updates
+    ChainLockReceived {
+        /// The complete ChainLock data
+        chain_lock: ChainLock,
+        /// Whether the BLS signature was validated
+        validated: bool,
+    },
+
+    /// InstantSend lock received and processed.
+    ///
+    /// Emitted by: `InstantSendManager`
+    /// Consumed by: External listeners, mempool state updates
+    InstantLockReceived {
+        /// The complete InstantLock data
+        instant_lock: InstantLock,
+        /// Whether the BLS signature was validated
+        validated: bool,
+    },
+
+    /// Sync has reached the chain tip (all managers idle).
+    ///
+    /// Emitted by: Coordinator
+    /// Consumed by: External listeners
+    SyncComplete {
+        /// Final header tip height
+        header_tip: u32,
+    },
+}
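+
+// Illustrative only: a subscriber typically matches the few events it cares
+// about in its `handle_sync_event` and ignores the rest (hypothetical handler):
+//
+//     match event {
+//         SyncEvent::BlocksNeeded { blocks } => { /* queue downloads */ }
+//         SyncEvent::FiltersSyncComplete { .. } => { /* no more BlocksNeeded */ }
+//         _ => {}
+//     }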
+
+impl SyncEvent {
+    /// Get a short description of this event for logging.
+    pub fn description(&self) -> String {
+        match self {
+            SyncEvent::SyncStart {
+                identifier,
+            } => {
+                format!("SyncStart(identifier={})", identifier)
+            }
+            SyncEvent::BlockHeadersStored {
+                tip_height,
+            } => {
+                format!("BlockHeadersStored(tip={})", tip_height)
+            }
+            SyncEvent::BlockHeaderSyncComplete {
+                tip_height,
+            } => {
+                format!("BlockHeaderSyncComplete(tip={})", tip_height)
+            }
+            SyncEvent::FilterHeadersStored {
+                start_height,
+                end_height,
+                tip_height,
+            } => {
+                format!("FilterHeadersStored({}-{}, tip={})", start_height, end_height, tip_height)
+            }
+            SyncEvent::FilterHeadersSyncComplete {
+                tip_height,
+            } => {
+                format!("FilterHeadersSyncComplete(tip={})", tip_height)
+            }
+            SyncEvent::FiltersStored {
+                start_height,
+                end_height,
+            } => {
+                format!("FiltersStored({}-{})", start_height, end_height)
+            }
+            SyncEvent::FiltersSyncComplete {
+                tip_height,
+            } => {
+                format!("FiltersSyncComplete(tip={})", tip_height)
+            }
+            SyncEvent::BlocksNeeded {
+                blocks,
+            } => {
+                format!("BlocksNeeded(count={})", blocks.len())
+            }
+            SyncEvent::BlockProcessed {
+                height,
+                new_addresses,
+                ..
+            } => {
+                format!("BlockProcessed(height={}, new_addrs={})", height, new_addresses.len())
+            }
+            SyncEvent::MasternodeStateUpdated {
+                height,
+            } => {
+                format!("MasternodeStateUpdated(height={})", height)
+            }
+            SyncEvent::ManagerError {
+                manager,
+                error,
+                ..
+            } => {
+                format!("ManagerError({}, {})", manager, error)
+            }
+            SyncEvent::ChainLockReceived {
+                chain_lock,
+                validated,
+            } => {
+                format!(
+                    "ChainLockReceived(height={}, validated={})",
+                    chain_lock.block_height, validated
+                )
+            }
+            SyncEvent::InstantLockReceived {
+                instant_lock,
+                validated,
+            } => {
+                format!("InstantLockReceived(txid={}, validated={})", instant_lock.txid, validated)
+            }
+            SyncEvent::SyncComplete {
+                header_tip,
+            } => {
+                format!("SyncComplete(tip={})", header_tip)
+            }
+        }
+    }
+}
diff --git a/dash-spv/src/sync/filter_headers/manager.rs b/dash-spv/src/sync/filter_headers/manager.rs
new file mode 100644
index 000000000..9f051acc8
--- /dev/null
+++ b/dash-spv/src/sync/filter_headers/manager.rs
@@ -0,0 +1,274 @@
+//! Filter headers manager for parallel sync.
+//!
+//! Downloads compact block filter headers (BIP 157/158). Reacts to BlockHeadersStored
+//! events to know when new headers are available. Emits FilterHeadersStored events.
+
+use std::sync::Arc;
+
+use dashcore::network::message_filter::CFHeaders;
+use tokio::sync::RwLock;
+
+use super::pipeline::FilterHeadersPipeline;
+use crate::error::SyncResult;
+use crate::network::RequestSender;
+use crate::storage::{BlockHeaderStorage, FilterHeaderStorage};
+use crate::sync::filter_headers::util::compute_filter_headers;
+use crate::sync::{FilterHeadersProgress, SyncEvent, SyncManager, SyncState};
+
+/// Filter headers manager for downloading compact block filter headers.
+///
+/// This manager:
+/// - Subscribes to BlockHeadersStored events to know when to start/resume
+/// - Downloads filter headers using pipelined requests
+/// - Emits FilterHeadersStored events for FiltersManager
+///
+/// Generic over:
+/// - `H: BlockHeaderStorage` for reading block headers
+/// - `FH: FilterHeaderStorage` for storing filter headers
+pub struct FilterHeadersManager<H, FH> {
+    /// Current progress of the manager.
+    pub(super) progress: FilterHeadersProgress,
+    /// Block header storage (for reading headers).
+    header_storage: Arc<RwLock<H>>,
+    /// Filter header storage (for storing filter headers).
+    pub(super) filter_header_storage: Arc<RwLock<FH>>,
+    /// Pipeline for downloading filter headers.
+    pub(super) pipeline: FilterHeadersPipeline,
+    /// Checkpoint start height - set when syncing from checkpoint to store prev header once.
+    checkpoint_start_height: Option<u32>,
+}
+
+impl<H: BlockHeaderStorage, FH: FilterHeaderStorage> FilterHeadersManager<H, FH> {
+    /// Create a new filter headers manager with the given storage references.
+    pub fn new(header_storage: Arc<RwLock<H>>, filter_header_storage: Arc<RwLock<FH>>) -> Self {
+        Self {
+            progress: FilterHeadersProgress::default(),
+            header_storage,
+            filter_header_storage,
+            pipeline: FilterHeadersPipeline::default(),
+            checkpoint_start_height: None,
+        }
+    }
+
+    /// Process a CFHeaders response - store headers and update state.
+    pub(super) async fn process_cfheaders(
+        &mut self,
+        cfheaders: &CFHeaders,
+        start_height: u32,
+    ) -> SyncResult<u32> {
+        let filter_headers = compute_filter_headers(cfheaders);
+        let count = filter_headers.len() as u32;
+
+        let mut storage = self.filter_header_storage.write().await;
+
+        // For checkpoint sync, store previous_filter_header at start_height - 1
+        // so filter verification can chain correctly. Only on first batch.
+        if let Some(checkpoint_height) = self.checkpoint_start_height {
+            if start_height == checkpoint_height && start_height > 0 {
+                storage
+                    .store_filter_headers_at_height(
+                        &[cfheaders.previous_filter_header],
+                        start_height - 1,
+                    )
+                    .await?;
+                tracing::debug!(
+                    "Stored checkpoint previous filter header at height {}",
+                    start_height - 1
+                );
+                // Clear so we don't check again
+                self.checkpoint_start_height = None;
+            }
+        }
+
+        storage.store_filter_headers_at_height(&filter_headers, start_height).await?;
+
+        drop(storage);
+
+        self.progress.add_processed(count);
+
+        Ok(count)
+    }
+
+    /// Start or resume filter header download.
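+    ///
+    /// The resume point is derived from storage: with no filter headers
+    /// stored yet, download starts at the header storage's start height
+    /// (checkpoint-aware); with a filter tip of `n`, it resumes at
+    /// `max(n + 1, header_start_height)`.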
+    async fn start_download(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        // Get current filter tip
+        let filter_headers_tip =
+            self.filter_header_storage.read().await.get_filter_tip_height().await?.unwrap_or(0);
+
+        // Get header start height (for checkpoint sync)
+        let header_start_height =
+            self.header_storage.read().await.get_start_height().await.unwrap_or(0);
+
+        // Calculate start height
+        let start_height = match filter_headers_tip {
+            0 => header_start_height,
+            n => (n + 1).max(header_start_height),
+        };
+
+        self.progress.update_current_height(filter_headers_tip);
+
+        // Check if already at target (nothing to download)
+        if start_height > self.progress.block_header_tip_height() {
+            // Only emit FilterHeadersSyncComplete if we've also reached the chain tip
+            // This prevents premature sync complete while block headers are still syncing
+            if self.progress.current_height() >= self.progress.target_height() {
+                if self.state() == SyncState::Synced {
+                    tracing::debug!(
+                        "Filter headers already synced to {}, no state change",
+                        self.progress.target_height()
+                    );
+                    return Ok(vec![]);
+                }
+                self.set_state(SyncState::Synced);
+                tracing::info!(
+                    "Filter headers synced to {}, emitting sync complete",
+                    self.progress.target_height()
+                );
+                return Ok(vec![SyncEvent::FilterHeadersSyncComplete {
+                    tip_height: self.progress.current_height(),
+                }]);
+            }
+            // Caught up to available headers but chain tip not reached yet
+            return Ok(vec![]);
+        }
+
+        tracing::info!(
+            "Starting filter header sync from {} to {}",
+            start_height,
+            self.progress.block_header_tip_height()
+        );
+
+        // Track checkpoint start height for storing prev header on first batch
+        if start_height > 0 {
+            self.checkpoint_start_height = Some(start_height);
+        }
+
+        // Initialize pipeline with storage references
+        let header_storage = self.header_storage.read().await;
+        self.pipeline
+            .init(&*header_storage, start_height, self.progress.block_header_tip_height())
+            .await?;
+        drop(header_storage);
+
+        // Send initial requests
+        self.pipeline.send_pending(requests)?;
+
+        self.set_state(SyncState::Syncing);
+
+        Ok(vec![])
+    }
+
+    /// Handle notification that new headers are available.
+    ///
+    /// Unified handler for both BlockHeaderSyncComplete and BlockHeadersStored events.
+    /// Uses pipeline state to determine whether to init or extend.
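+    /// Roughly: in Synced/Syncing the pipeline is re-initialized if it has
+    /// drained, or extended to the new tip if still active; in
+    /// WaitingForConnections/WaitForEvents a full `start_download` runs so
+    /// checkpoints and the storage-derived resume point are honored.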
+    pub(super) async fn handle_new_headers(
+        &mut self,
+        tip_height: u32,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        self.progress.update_block_header_tip_height(tip_height);
+        self.update_target_height(tip_height);
+
+        // Nothing to do if caught up to available headers
+        if self.progress.current_height() >= self.progress.block_header_tip_height() {
+            let mut events = Vec::new();
+            // Only emit SyncComplete if we've also reached the chain tip
+            if self.state() == SyncState::WaitForEvents
+                && self.progress.current_height() >= self.progress.target_height()
+            {
+                events.push(SyncEvent::FilterHeadersSyncComplete {
+                    tip_height,
+                });
+                self.set_state(SyncState::Synced);
+            }
+            return Ok(events);
+        }
+
+        match self.state() {
+            SyncState::Synced | SyncState::Syncing => {
+                // Configure pipeline based on its current state
+                let header_storage = self.header_storage.read().await;
+                if self.pipeline.is_complete() {
+                    // Pipeline done/empty, need fresh init
+                    self.pipeline
+                        .init(
+                            &*header_storage,
+                            self.progress.current_height() + 1,
+                            self.progress.block_header_tip_height(),
+                        )
+                        .await?;
+                } else {
+                    // Pipeline active, extend it
+                    self.pipeline
+                        .extend_target(&*header_storage, self.progress.block_header_tip_height())
+                        .await?;
+                }
+                drop(header_storage);
+                self.pipeline.send_pending(requests)?;
+                Ok(vec![])
+            }
+            SyncState::WaitingForConnections | SyncState::WaitForEvents => {
+                // Need full startup (calculates start from storage, handles checkpoints)
+                self.start_download(requests).await
+            }
+            _ => Ok(vec![]),
+        }
+    }
+}
+
+impl<H, FH> std::fmt::Debug for FilterHeadersManager<H, FH> {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("FilterHeadersManager").field("progress", &self.progress).finish()
+    }
+}
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::network::MessageType;
+    use crate::storage::{
+        DiskStorageManager, PersistentBlockHeaderStorage, PersistentFilterHeaderStorage,
+    };
+    use crate::sync::{ManagerIdentifier, SyncManagerProgress};
+
+    type TestFilterHeadersManager =
+        FilterHeadersManager<PersistentBlockHeaderStorage, PersistentFilterHeaderStorage>;
+    type TestSyncManager = dyn SyncManager;
+
+    async fn create_test_manager() -> TestFilterHeadersManager {
+        let storage = DiskStorageManager::with_temp_dir().await.unwrap();
+        FilterHeadersManager::new(storage.header_storage(), storage.filter_header_storage())
+    }
+
+    #[tokio::test]
+    async fn test_filter_headers_manager_new() {
+        let manager = create_test_manager().await;
+        assert_eq!(manager.identifier(), ManagerIdentifier::FilterHeader);
+        assert_eq!(manager.state(), SyncState::Initializing);
+        assert_eq!(manager.wanted_message_types(), vec![MessageType::CFHeaders]);
+    }
+
+    #[tokio::test]
+    async fn test_filter_headers_manager_progress() {
+        let mut manager = create_test_manager().await;
+        manager.progress.update_current_height(500);
+        manager.progress.update_target_height(2000);
+        manager.progress.update_block_header_tip_height(1000);
+        manager.progress.add_processed(500);
+
+        let manager_ref: &TestSyncManager = &manager;
+        let progress = manager_ref.progress();
+        if let SyncManagerProgress::FilterHeaders(progress) = progress {
+            assert_eq!(progress.state(), SyncState::Initializing);
+            assert_eq!(progress.current_height(), 500);
+            assert_eq!(progress.target_height(), 2000);
+            assert_eq!(progress.block_header_tip_height(), 1000);
+            assert_eq!(progress.processed(), 500);
+            assert!(progress.last_activity().elapsed().as_secs() < 1);
+        } else {
+            panic!("Expected SyncManagerProgress::FilterHeaders");
+        }
+    }
+}
diff --git a/dash-spv/src/sync/filter_headers/mod.rs b/dash-spv/src/sync/filter_headers/mod.rs
new file mode 100644
index 000000000..417e62660
--- /dev/null
+++ b/dash-spv/src/sync/filter_headers/mod.rs
@@ -0,0 +1,8 @@
+mod manager;
+mod pipeline;
+mod progress;
+mod sync_manager;
+mod util;
+
+pub use manager::FilterHeadersManager;
+pub use progress::FilterHeadersProgress;
diff --git a/dash-spv/src/sync/filter_headers/pipeline.rs b/dash-spv/src/sync/filter_headers/pipeline.rs
new file mode 100644
index 000000000..9f7dac1bf
--- /dev/null
+++ b/dash-spv/src/sync/filter_headers/pipeline.rs
@@ -0,0 +1,510 @@
+//! CFHeaders pipeline implementation.
+//!
+//! Handles pipelined download of compact block filter headers (BIP 157/158).
+//! Uses DownloadCoordinator for batch tracking with out-of-order buffering.
+
+use dashcore::network::message::NetworkMessage;
+use dashcore::network::message_filter::CFHeaders;
+use dashcore::BlockHash;
+use std::collections::HashMap;
+use std::time::Duration;
+
+use crate::error::{SyncError, SyncResult};
+use crate::network::RequestSender;
+use crate::storage::BlockHeaderStorage;
+use crate::sync::download_coordinator::{DownloadConfig, DownloadCoordinator};
+
+/// Batch size for filter header requests.
+const FILTER_HEADERS_BATCH_SIZE: u32 = 2000;
+
+/// Maximum concurrent CFHeaders requests.
+const MAX_CONCURRENT_CFHEADERS_REQUESTS: usize = 10;
+
+/// Timeout for CFHeaders requests. Kept short for fast retry in multi-peer
+/// setups, while still allowing for network latency on the single response.
+const FILTER_HEADERS_TIMEOUT: Duration = Duration::from_secs(20);
+
+/// Maximum number of retries for CFHeaders requests.
+const FILTER_HEADERS_MAX_RETRIES: u32 = 3;
+
+/// Pipeline for downloading compact block filter headers.
+///
+/// Uses DownloadCoordinator for batch-level tracking (keyed by stop_hash),
+/// with a HashMap buffer for out-of-order responses that need sequential processing.
+#[derive(Debug)]
+pub(super) struct FilterHeadersPipeline {
+    /// Core coordinator tracks batches by stop_hash.
+    coordinator: DownloadCoordinator<BlockHash>,
+    /// Maps stop_hash -> start_height for each batch.
+    batch_starts: HashMap<BlockHash, u32>,
+    /// Out-of-order response buffer (start_height -> data).
+    buffered: HashMap<u32, CFHeaders>,
+    /// Next height to process sequentially.
+    next_expected: u32,
+    /// Target height for sync.
+    target_height: u32,
+}
+
+impl Default for FilterHeadersPipeline {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl FilterHeadersPipeline {
+    /// Create a new CFHeaders pipeline.
+    pub(super) fn new() -> Self {
+        Self {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_max_concurrent(MAX_CONCURRENT_CFHEADERS_REQUESTS)
+                    .with_timeout(FILTER_HEADERS_TIMEOUT)
+                    .with_max_retries(FILTER_HEADERS_MAX_RETRIES),
+            ),
+            batch_starts: HashMap::new(),
+            buffered: HashMap::new(),
+            next_expected: 0,
+            target_height: 0,
+        }
+    }
+
+    /// Extend the pipeline to a new target height.
+    ///
+    /// Queues additional batches from the current target to the new target.
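+    /// (Illustrative, with the 2000-header batch size above: extending the
+    /// target from 4000 to 7000 queues stop-hashes for heights 4001..=6000
+    /// and 6001..=7000.)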
+    pub(super) async fn extend_target(
+        &mut self,
+        storage: &impl BlockHeaderStorage,
+        new_target: u32,
+    ) -> SyncResult<()> {
+        let old_target = self.target_height;
+        if new_target <= old_target {
+            return Ok(());
+        }
+
+        self.target_height = new_target;
+
+        // Queue batches from (old_target + 1) to new_target
+        let mut current = old_target + 1;
+        let mut added = 0;
+
+        while current <= new_target {
+            let batch_end = (current + FILTER_HEADERS_BATCH_SIZE - 1).min(new_target);
+
+            // Get stop hash for this batch
+            let stop_hash = storage
+                .get_header(batch_end)
+                .await?
+                .ok_or_else(|| {
+                    SyncError::Storage(format!("Missing header at height {}", batch_end))
+                })?
+                .block_hash();
+
+            self.coordinator.enqueue([stop_hash]);
+            self.batch_starts.insert(stop_hash, current);
+            added += 1;
+
+            current = batch_end + 1;
+        }
+
+        if added > 0 {
+            tracing::info!(
+                "Extended CFHeaders queue: +{} batches for heights {} to {}",
+                added,
+                old_target + 1,
+                new_target
+            );
+        }
+
+        Ok(())
+    }
+
+    /// Get the next expected height for sequential processing.
+    pub(super) fn next_expected(&self) -> u32 {
+        self.next_expected
+    }
+
+    /// Check if the pipeline is complete.
+    pub(super) fn is_complete(&self) -> bool {
+        self.coordinator.is_empty()
+            && self.buffered.is_empty()
+            && (self.target_height == 0 || self.next_expected > self.target_height)
+    }
+
+    /// Initialize the pipeline for a sync range.
+    pub(super) async fn init(
+        &mut self,
+        storage: &impl BlockHeaderStorage,
+        start_height: u32,
+        target_height: u32,
+    ) -> SyncResult<()> {
+        self.coordinator.clear();
+        self.batch_starts.clear();
+        self.buffered.clear();
+        self.next_expected = start_height;
+        self.target_height = target_height;
+
+        // Build request queue
+        let mut current = start_height;
+        while current <= target_height {
+            let batch_end = (current + FILTER_HEADERS_BATCH_SIZE - 1).min(target_height);
+
+            // Get stop hash for this batch
+            let stop_hash = storage
+                .get_header(batch_end)
+                .await?
+                .ok_or_else(|| {
+                    SyncError::Storage(format!("Missing header at height {}", batch_end))
+                })?
+                .block_hash();
+
+            self.coordinator.enqueue([stop_hash]);
+            self.batch_starts.insert(stop_hash, current);
+
+            current = batch_end + 1;
+        }
+
+        tracing::info!(
+            "Built CFHeaders request queue: {} batches for heights {} to {}",
+            self.coordinator.pending_count(),
+            start_height,
+            target_height
+        );
+
+        Ok(())
+    }
+
+    /// Send pending requests using a RequestSender (synchronous).
+    pub(super) fn send_pending(&mut self, requests: &RequestSender) -> SyncResult<usize> {
+        let count = self.coordinator.available_to_send();
+        if count == 0 {
+            return Ok(0);
+        }
+
+        let stop_hashes = self.coordinator.take_pending(count);
+        let mut sent = 0;
+
+        for stop_hash in stop_hashes {
+            let Some(&start_height) = self.batch_starts.get(&stop_hash) else {
+                return Err(SyncError::InvalidState(format!(
+                    "No batch_starts entry for pending stop_hash {}",
+                    stop_hash
+                )));
+            };
+
+            requests.request_filter_headers(start_height, stop_hash)?;
+
+            self.coordinator.mark_sent(&[stop_hash]);
+
+            tracing::debug!(
+                "Sent GetCFHeaders: start={}, stop={} ({} active, {} pending)",
+                start_height,
+                stop_hash,
+                self.coordinator.active_count(),
+                self.coordinator.pending_count()
+            );
+
+            sent += 1;
+        }
+
+        Ok(sent)
+    }
+
+    /// Try to match an incoming message to a pipeline response.
+    ///
+    /// Returns `Some((start_height, data))` if matched, `None` otherwise.
+    pub(super) fn match_response(&self, msg: &NetworkMessage) -> Option<(u32, CFHeaders)> {
+        let NetworkMessage::CFHeaders(cfheaders) = msg else {
+            return None;
+        };
+
+        if cfheaders.filter_hashes.is_empty() {
+            return None;
+        }
+
+        // Match by stop_hash - the response includes it
+        if !self.coordinator.is_in_flight(&cfheaders.stop_hash) {
+            return None;
+        }
+
+        let start_height = *self.batch_starts.get(&cfheaders.stop_hash)?;
+        Some((start_height, cfheaders.clone()))
+    }
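+
+    // Illustrative processing loop (informal; the real call sites live in the
+    // manager): a matched response is either processed immediately or
+    // buffered, and `advance` releases whatever became ready:
+    //
+    //     if let Some((start, data)) = pipeline.match_response(msg) {
+    //         if let Some(ready) = pipeline.receive(start, data) {
+    //             let n = store_batch(ready)?; // hypothetical helper
+    //             for (_s, next) in pipeline.advance(n) { /* store these too */ }
+    //         }
+    //     }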
+
+    /// Handle a received response.
+    ///
+    /// Returns `Some(data)` if this response is the next expected and should
+    /// be processed immediately. Returns `None` if buffered for later.
+    pub(super) fn receive(&mut self, start_height: u32, data: CFHeaders) -> Option<CFHeaders> {
+        self.coordinator.receive(&data.stop_hash);
+        self.batch_starts.remove(&data.stop_hash);
+
+        if start_height == self.next_expected {
+            Some(data)
+        } else if start_height > self.next_expected {
+            // Out-of-order - buffer for later
+            self.buffered.insert(start_height, data);
+            None
+        } else {
+            // Already processed (duplicate)
+            None
+        }
+    }
+
+    /// Advance to the next expected height after processing.
+    ///
+    /// Returns any buffered responses that are now ready.
+    pub(super) fn advance(&mut self, processed_count: u32) -> Vec<(u32, CFHeaders)> {
+        self.next_expected += processed_count;
+
+        // Check if next_expected is now in the buffer
+        let mut ready = Vec::new();
+        if let Some(data) = self.buffered.remove(&self.next_expected) {
+            ready.push((self.next_expected, data));
+        }
+        ready
+    }
+
+    /// Re-enqueue timed out requests for retry.
+    ///
+    /// Returns heights that exceeded max retries and were permanently dropped.
+    pub(super) fn handle_timeouts(&mut self) -> Vec<u32> {
+        let mut failed = Vec::new();
+        for stop_hash in self.coordinator.check_timeouts() {
+            if !self.coordinator.enqueue_retry(stop_hash) {
+                if let Some(start_height) = self.batch_starts.remove(&stop_hash) {
+                    tracing::warn!(
+                        "CFHeaders batch at height {} exceeded max retries, dropping",
+                        start_height
+                    );
+                    failed.push(start_height);
+                }
+            }
+        }
+        failed
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use dashcore_hashes::Hash;
+
+    use super::*;
+
+    #[test]
+    fn test_cfheaders_pipeline_new() {
+        let pipeline = FilterHeadersPipeline::new();
+        assert!(pipeline.is_complete());
+    }
+
+    #[test]
+    fn test_match_response_empty() {
+        let pipeline = FilterHeadersPipeline::new();
+
+        let empty_cfheaders = CFHeaders {
+            filter_type: 0,
+            stop_hash: dashcore::BlockHash::all_zeros(),
+            previous_filter_header: dashcore::hash_types::FilterHeader::all_zeros(),
+            filter_hashes: vec![],
+        };
+
+        // Empty response should return None
+        assert!(pipeline.match_response(&NetworkMessage::CFHeaders(empty_cfheaders)).is_none());
+    }
+
+    #[test]
+    fn test_match_response_wrong_message() {
+        let pipeline = FilterHeadersPipeline::new();
+
+        // Wrong message type should return None
+        assert!(pipeline.match_response(&NetworkMessage::Verack).is_none());
+    }
+
+    #[test]
+    fn test_receive_in_order() {
+        use dashcore::hash_types::FilterHash;
+
+        let mut pipeline = FilterHeadersPipeline::new();
+        pipeline.next_expected = 1;
+        pipeline.target_height = 100;
+
+        let stop_hash = BlockHash::all_zeros();
+
+        // Mark batch as in-flight (by stop_hash)
+        pipeline.coordinator.mark_sent(&[stop_hash]);
+        pipeline.batch_starts.insert(stop_hash, 1);
+
+        let cfheaders = CFHeaders {
+            filter_type: 0,
+            stop_hash,
+            previous_filter_header: dashcore::hash_types::FilterHeader::all_zeros(),
+            filter_hashes: vec![FilterHash::all_zeros()],
+        };
+
+        // Should return data immediately
+        let result = pipeline.receive(1, cfheaders.clone());
+        assert!(result.is_some());
+    }
+
+    #[test]
+    fn test_receive_out_of_order() {
+        use dashcore::hash_types::FilterHash;
+
+        let mut pipeline = FilterHeadersPipeline::new();
+        pipeline.next_expected = 1;
+        pipeline.target_height = 4000;
+
+        let stop_hash = BlockHash::all_zeros();
+
+        // Mark batch as in-flight (by stop_hash)
+        pipeline.coordinator.mark_sent(&[stop_hash]);
+        pipeline.batch_starts.insert(stop_hash, 2000);
+
+        let cfheaders = CFHeaders {
+            filter_type: 0,
+            stop_hash,
+            previous_filter_header: dashcore::hash_types::FilterHeader::all_zeros(),
+            filter_hashes: vec![FilterHash::all_zeros()],
+        };
+
+        // Should buffer (out of order)
+        let result = pipeline.receive(2000, cfheaders);
+        assert!(result.is_none());
+        assert_eq!(pipeline.buffered.len(), 1);
+    }
+
+    #[test]
+    fn test_advance_returns_buffered() {
+        use dashcore::hash_types::FilterHash;
+
+        let mut pipeline = FilterHeadersPipeline::new();
+        pipeline.next_expected = 1;
+        pipeline.target_height = 4000;
+
+        // Buffer a response at height 2000
+        let cfheaders = CFHeaders {
+            filter_type: 0,
+            stop_hash: BlockHash::all_zeros(),
+            previous_filter_header: dashcore::hash_types::FilterHeader::all_zeros(),
+            filter_hashes: vec![FilterHash::all_zeros()],
+        };
+        pipeline.buffered.insert(2000, cfheaders);
+
+        // Advance to 2000
+        let ready = pipeline.advance(1999);
+        assert_eq!(ready.len(), 1);
+        assert_eq!(ready[0].0, 2000);
+        assert_eq!(pipeline.buffered.len(), 0);
+    }
+
+    #[test]
+    fn test_handle_timeouts_basic_retry() {
+        use std::time::Duration;
+
+        let mut pipeline = FilterHeadersPipeline {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_timeout(Duration::from_millis(1))
+                    .with_max_retries(3),
+            ),
+            batch_starts: HashMap::new(),
+            buffered: HashMap::new(),
+            next_expected: 1,
+            target_height: 2000,
+        };
+
+        let stop_hash = BlockHash::all_zeros();
+        pipeline.coordinator.mark_sent(&[stop_hash]);
+        pipeline.batch_starts.insert(stop_hash, 1);
+
+        std::thread::sleep(Duration::from_millis(5));
+
+        let failed = pipeline.handle_timeouts();
+        assert!(failed.is_empty()); // First retry succeeds
+        assert_eq!(pipeline.coordinator.pending_count(), 1);
+    }
+
+    #[test]
+    fn test_handle_timeouts_max_retries_exceeded() {
+        use std::time::Duration;
+
+        let mut pipeline = FilterHeadersPipeline {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_timeout(Duration::from_millis(1))
+                    .with_max_retries(1),
+            ),
+            batch_starts: HashMap::new(),
+            buffered: HashMap::new(),
+            next_expected: 100,
+            target_height: 2000,
+        };
+
+        let stop_hash = BlockHash::from_byte_array([0x01; 32]);
+        pipeline.batch_starts.insert(stop_hash, 100);
+
+        // First timeout + retry
+        pipeline.coordinator.mark_sent(&[stop_hash]);
+        std::thread::sleep(Duration::from_millis(5));
+        let failed = pipeline.handle_timeouts();
+        assert!(failed.is_empty());
+
+        // Re-send retry
+        let items = pipeline.coordinator.take_pending(1);
+        pipeline.coordinator.mark_sent(&items);
+
+        // Second timeout exceeds max
+        std::thread::sleep(Duration::from_millis(5));
+        let failed = pipeline.handle_timeouts();
+        assert_eq!(failed, vec![100]);
+    }
+
+    #[test]
+    fn test_send_pending_errors_on_missing_batch_starts() {
+        let mut pipeline = FilterHeadersPipeline::new();
+        pipeline.next_expected = 1;
+        pipeline.target_height = 2000;
+
+        let hash_without_entry = BlockHash::from_byte_array([0x02; 32]);
+
+        // Enqueue a stop_hash without a corresponding batch_starts entry
+        pipeline.coordinator.enqueue([hash_without_entry]);
+
+        let (tx, _rx) = tokio::sync::mpsc::unbounded_channel();
+        let requests = RequestSender::new(tx);
+
+        let err = pipeline.send_pending(&requests).unwrap_err();
+        assert!(matches!(err, SyncError::InvalidState(_)));
+    }
+
+    #[test]
+    fn test_handle_timeouts_multiple_batches() {
+        use std::time::Duration;
+
+        let mut pipeline = FilterHeadersPipeline {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_timeout(Duration::from_millis(1))
+                    .with_max_retries(0),
+            ),
+            batch_starts: HashMap::new(),
+            buffered: HashMap::new(),
+            next_expected: 1,
+            target_height: 4000,
+        };
+
+        let hash1 = BlockHash::from_byte_array([0x01; 32]);
+        let hash2 = BlockHash::from_byte_array([0x02; 32]);
+
+        pipeline.coordinator.mark_sent(&[hash1, hash2]);
+        pipeline.batch_starts.insert(hash1, 1);
+        pipeline.batch_starts.insert(hash2, 2001);
+
+        std::thread::sleep(Duration::from_millis(5));
+
+        let mut failed = pipeline.handle_timeouts();
+        failed.sort();
+        assert_eq!(failed.len(), 2);
+        assert!(failed.contains(&1));
+        assert!(failed.contains(&2001));
+    }
+}
diff --git a/dash-spv/src/sync/filter_headers/progress.rs b/dash-spv/src/sync/filter_headers/progress.rs
new file mode 100644
index 000000000..b1caa685c
--- /dev/null
+++ b/dash-spv/src/sync/filter_headers/progress.rs
@@ -0,0 +1,129 @@
+use std::fmt;
+use std::time::Instant;
+
+use crate::sync::SyncState;
+
+/// Progress for filter-header synchronization.
+#[derive(Debug, Clone, PartialEq)]
+pub struct FilterHeadersProgress {
+    /// Current sync state.
+    state: SyncState,
+    /// The tip height of the filter-header storage.
+    current_height: u32,
+    /// Target height (peer's best height). Used for progress display.
+    target_height: u32,
+    /// The tip height of the block-header storage (the download limit for filter headers).
+    block_header_tip_height: u32,
+    /// Number of filter-headers processed (stored) in the current sync session.
+    processed: u32,
+    /// The last time a filter-header was stored to disk or the last manager state change.
+    last_activity: Instant,
+}
+
+impl Default for FilterHeadersProgress {
+    fn default() -> Self {
+        Self {
+            state: SyncState::default(),
+            current_height: 0,
+            target_height: 0,
+            block_header_tip_height: 0,
+            processed: 0,
+            last_activity: Instant::now(),
+        }
+    }
+}
+
+impl FilterHeadersProgress {
+    /// Get completion percentage (0.0 to 1.0).
+    /// Uses target_height (peer's best height) for accurate progress display.
+    pub fn percentage(&self) -> f64 {
+        if self.target_height == 0 {
+            return 1.0;
+        }
+        (self.current_height as f64 / self.target_height as f64).min(1.0)
+    }
+
+    /// Get the current sync state.
+    pub fn state(&self) -> SyncState {
+        self.state
+    }
+
+    /// Get the current height (last successfully processed filter-header height).
+    pub fn current_height(&self) -> u32 {
+        self.current_height
+    }
+
+    /// Get the target height (peer's best height, for progress display).
+    pub fn target_height(&self) -> u32 {
+        self.target_height
+    }
+
+    /// Get the block-header tip height (the download limit for filter headers).
+    pub fn block_header_tip_height(&self) -> u32 {
+        self.block_header_tip_height
+    }
+
+    /// Number of filter-headers processed (stored) in the current sync session.
+    pub fn processed(&self) -> u32 {
+        self.processed
+    }
+
+    /// The last time a filter-header was stored to disk or the last manager state change.
+    pub fn last_activity(&self) -> Instant {
+        self.last_activity
+    }
+
+    /// Update the sync state and bump the last activity time.
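+    /// State changes count as activity, so staleness checks based on
+    /// `last_activity` do not fire across state transitions.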
+    pub fn set_state(&mut self, state: SyncState) {
+        self.state = state;
+        self.bump_last_activity();
+    }
+
+    /// Update the current height (last successfully processed filter-header height).
+    pub fn update_current_height(&mut self, height: u32) {
+        self.current_height = height;
+        self.bump_last_activity();
+    }
+
+    /// Update the target height (peer's best height, for progress display).
+    /// Only updates if the new height is greater than the current target (monotonic increase).
+    pub fn update_target_height(&mut self, height: u32) {
+        if height > self.target_height {
+            self.target_height = height;
+            self.bump_last_activity();
+        }
+    }
+
+    /// Update the block-header tip height (called when new block headers are stored).
+    pub fn update_block_header_tip_height(&mut self, height: u32) {
+        self.block_header_tip_height = height;
+        self.bump_last_activity();
+    }
+
+    /// Add a number to the processed counter.
+    pub fn add_processed(&mut self, count: u32) {
+        self.processed += count;
+        self.bump_last_activity();
+    }
+
+    /// Bump the last activity time.
+    pub fn bump_last_activity(&mut self) {
+        self.last_activity = Instant::now();
+    }
+}
+
+impl fmt::Display for FilterHeadersProgress {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        let pct = self.percentage() * 100.0;
+        write!(
+            f,
+            "{:?} {}/{} ({:.1}%) processed: {}, last_activity: {}s",
+            self.state,
+            self.current_height,
+            self.target_height,
+            pct,
+            self.processed,
+            self.last_activity.elapsed().as_secs()
+        )
+    }
+}
diff --git a/dash-spv/src/sync/filter_headers/sync_manager.rs b/dash-spv/src/sync/filter_headers/sync_manager.rs
new file mode 100644
index 000000000..dd2c876a8
--- /dev/null
+++ b/dash-spv/src/sync/filter_headers/sync_manager.rs
@@ -0,0 +1,190 @@
+use crate::error::SyncResult;
+use crate::network::{Message, MessageType, RequestSender};
+use crate::storage::{BlockHeaderStorage, FilterHeaderStorage};
+use crate::sync::{
+    FilterHeadersManager, ManagerIdentifier, SyncEvent, SyncManager, SyncManagerProgress, SyncState,
+};
+use crate::SyncError;
+use async_trait::async_trait;
+
+#[async_trait]
+impl SyncManager for FilterHeadersManager {
+    fn identifier(&self) -> ManagerIdentifier {
+        ManagerIdentifier::FilterHeader
+    }
+
+    fn state(&self) -> SyncState {
+        self.progress.state()
+    }
+
+    fn set_state(&mut self, state: SyncState) {
+        self.progress.set_state(state);
+    }
+
+    fn update_target_height(&mut self, height: u32) {
+        self.progress.update_target_height(height);
+    }
+
+    fn wanted_message_types(&self) -> &'static [MessageType] {
+        &[MessageType::CFHeaders]
+    }
+
+    async fn initialize(&mut self) -> SyncResult<()> {
+        // Load current filter tip
+        let filter_tip =
+            self.filter_header_storage.read().await.get_filter_tip_height().await?.unwrap_or(0);
+
+        self.progress.update_current_height(filter_tip);
+        self.set_state(SyncState::WaitingForConnections);
+
+        tracing::info!(
+            "FilterHeadersManager initialized at height {}, waiting for headers",
+            self.progress.current_height()
+        );
+
+        Ok(())
+    }
+
+    async fn handle_message(
+        &mut self,
+        msg: Message,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        // Match response to get start height
+        let Some((start_height, cfheaders)) = self.pipeline.match_response(msg.inner()) else {
+            // Only mark as Synced if pipeline is complete AND we've reached the chain tip
+            if self.pipeline.is_complete()
+                && self.state() == SyncState::Syncing
+                && self.progress.current_height() >= self.progress.target_height()
+            {
+                self.set_state(SyncState::Synced);
+                tracing::info!(
+                    "Filter header sync complete at height {}",
+                    self.progress.current_height()
+                );
+                return Ok(vec![SyncEvent::FilterHeadersSyncComplete {
+                    tip_height: self.progress.current_height(),
+                }]);
+            }
+            return Ok(vec![]);
+        };
+
+        let mut events = Vec::new();
+
+        // Try to receive (may buffer if out of order)
+        if let Some(data) = self.pipeline.receive(start_height, cfheaders) {
+            // In order - process immediately
+            let count = self.process_cfheaders(&data, start_height).await?;
+            if count == 0 {
+                return Err(SyncError::Network("CFHeaders batch contained no headers".to_string()));
+            }
+            let batch_start = start_height;
+            let batch_end = start_height + count.saturating_sub(1);
+
+            // Advance and capture any buffered batches that are now ready
+            let mut ready_batches = self.pipeline.advance(count);
+            self.progress.update_current_height(self.pipeline.next_expected().saturating_sub(1));
+
+            tracing::debug!(
+                "Processed {} filter headers at {}, now at {}/{}",
+                count,
+                start_height,
+                self.progress.current_height(),
+                self.progress.block_header_tip_height()
+            );
+
+            // Emit event for this batch
+            events.push(SyncEvent::FilterHeadersStored {
+                start_height: batch_start,
+                end_height: batch_end,
+                tip_height: self.progress.current_height(),
+            });
+
+            // Process buffered responses (including any returned by first advance)
+            while !ready_batches.is_empty() {
+                // Take ownership and process each batch
+                for (height, data) in std::mem::take(&mut ready_batches) {
+                    let count = self.process_cfheaders(&data, height).await?;
+                    if count == 0 {
+                        return Err(SyncError::Network(
+                            "CFHeaders batch contained no headers".to_string(),
+                        ));
+                    }
+                    // Get more ready batches (advance returns any that are now ready)
+                    let more_ready = self.pipeline.advance(count);
+                    ready_batches.extend(more_ready);
+                    self.progress
+                        .update_current_height(self.pipeline.next_expected().saturating_sub(1));
+
+                    events.push(SyncEvent::FilterHeadersStored {
+                        start_height: height,
+                        end_height: height + count.saturating_sub(1),
+                        tip_height: self.progress.current_height(),
+                    });
+                }
+            }
+        } else {
+            tracing::debug!(
+                "Buffered out-of-order CFHeaders at {} (expecting {})",
+                start_height,
+                self.pipeline.next_expected()
+            );
+        }
+
+        // Send more requests
+        self.pipeline.send_pending(requests)?;
+
+        // Check if complete - use target_height (peer's best) to ensure we've reached chain tip
+        if self.pipeline.is_complete()
+            && self.state() == SyncState::Syncing
+            && self.progress.current_height() >= self.progress.target_height()
+        {
+            self.set_state(SyncState::Synced);
+            tracing::info!(
+                "Filter header sync complete at height {}",
+                self.progress.current_height()
+            );
+            events.push(SyncEvent::FilterHeadersSyncComplete {
+                tip_height: self.progress.current_height(),
+            });
+        }
+
+        Ok(events)
+    }
+
+    async fn handle_sync_event(
+        &mut self,
+        event: &SyncEvent,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        match event {
+            SyncEvent::BlockHeaderSyncComplete {
+                tip_height,
+            } => self.handle_new_headers(*tip_height, requests).await,
+            SyncEvent::BlockHeadersStored {
+                tip_height,
+            } => self.handle_new_headers(*tip_height, requests).await,
+            _ => Ok(vec![]),
+        }
+    }
+
+    async fn tick(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        // Handle timed out requests
+        let failed = self.pipeline.handle_timeouts();
+        if !failed.is_empty() {
+            return Err(SyncError::Timeout(format!(
+                "CFHeaders batches exceeded max retries at heights: {:?}",
+                failed
+            )));
+        }
+
+        // Send pending requests (including retries)
+        self.pipeline.send_pending(requests)?;
+
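+        // tick() itself emits no sync events; timeout failures surface as the
+        // error above, and retries go out through send_pending.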
+        Ok(vec![])
+    }
+
+    fn progress(&self) -> SyncManagerProgress {
+        SyncManagerProgress::FilterHeaders(self.progress.clone())
+    }
+}
diff --git a/dash-spv/src/sync/filter_headers/util.rs b/dash-spv/src/sync/filter_headers/util.rs
new file mode 100644
index 000000000..dbd616afc
--- /dev/null
+++ b/dash-spv/src/sync/filter_headers/util.rs
@@ -0,0 +1,23 @@
+use dashcore::hash_types::FilterHeader;
+use dashcore::network::message_filter::CFHeaders;
+use dashcore_hashes::{sha256d, Hash};
+
+/// Compute filter headers from a CFHeaders message.
+///
+/// Each filter header is computed by chaining:
+/// `header[i] = sha256d(filter_hash[i] || header[i-1])`
+pub(super) fn compute_filter_headers(cfheaders: &CFHeaders) -> Vec<FilterHeader> {
+    let mut prev_header = cfheaders.previous_filter_header;
+    let mut computed_headers = Vec::with_capacity(cfheaders.filter_hashes.len());
+
+    for filter_hash in &cfheaders.filter_hashes {
+        let mut data = [0u8; 64];
+        data[..32].copy_from_slice(filter_hash.as_byte_array());
+        data[32..].copy_from_slice(prev_header.as_byte_array());
+        let header = FilterHeader::from_byte_array(sha256d::Hash::hash(&data).to_byte_array());
+        computed_headers.push(header);
+        prev_header = header;
+    }
+
+    computed_headers
+}
diff --git a/dash-spv/src/sync/filters/batch.rs b/dash-spv/src/sync/filters/batch.rs
new file mode 100644
index 000000000..687c641f0
--- /dev/null
+++ b/dash-spv/src/sync/filters/batch.rs
@@ -0,0 +1,200 @@
+use dashcore::bip158::BlockFilter;
+use dashcore::Address;
+use key_wallet_manager::wallet_manager::FilterMatchKey;
+use std::collections::{HashMap, HashSet};
+
+/// A completed batch of compact block filters ready for verification.
+///
+/// Represents a contiguous range of filters that have all been received
+/// and can now be verified against their expected filter headers.
+/// Ordered by start_height for sequential processing.
+#[derive(Debug)]
+pub(super) struct FiltersBatch {
+    /// Start height of this batch (inclusive).
+    start_height: u32,
+    /// Ending height of this batch (inclusive).
+    end_height: u32,
+    /// Filters of this batch.
+    filters: HashMap<FilterMatchKey, BlockFilter>,
+    /// Whether this batch was verified already (loaded from storage).
+    verified: bool,
+    /// Whether this batch was scanned already.
+    scanned: bool,
+    /// Number of blocks still being downloaded for this batch.
+    pending_blocks: u32,
+    /// Whether rescan has been completed for this batch.
+    rescan_complete: bool,
+    /// Addresses discovered during block processing that need rescan.
+    collected_addresses: HashSet<Address>,
+}
+
+impl FiltersBatch {
+    /// Create a new batch with given filter data.
+    pub(super) fn new(
+        start_height: u32,
+        end_height: u32,
+        filters: HashMap<FilterMatchKey, BlockFilter>,
+    ) -> Self {
+        Self {
+            start_height,
+            end_height,
+            filters,
+            verified: false,
+            scanned: false,
+            pending_blocks: 0,
+            rescan_complete: false,
+            collected_addresses: HashSet::new(),
+        }
+    }
+    /// Start height of this batch (inclusive).
+    pub(super) fn start_height(&self) -> u32 {
+        self.start_height
+    }
+    /// Ending height of this batch (inclusive).
+    pub(super) fn end_height(&self) -> u32 {
+        self.end_height
+    }
+    /// Reference to the loaded filters map of this batch.
+    pub(super) fn filters(&self) -> &HashMap<FilterMatchKey, BlockFilter> {
+        &self.filters
+    }
+    /// Mutable reference to the loaded filters map of this batch.
+    pub(super) fn filters_mut(&mut self) -> &mut HashMap<FilterMatchKey, BlockFilter> {
+        &mut self.filters
+    }
+    /// Returns whether this batch is verified (filters verified against their headers).
+    pub(super) fn verified(&self) -> bool {
+        self.verified
+    }
+    /// Mark this batch as verified (filters matched their expected headers).
+    pub(super) fn mark_verified(&mut self) {
+        self.verified = true;
+    }
+    /// Mark this batch as scanned (filters have been matched against the wallet addresses).
+    pub(super) fn mark_scanned(&mut self) {
+        self.scanned = true;
+    }
+    /// Returns whether this batch was scanned already.
+    pub(super) fn scanned(&self) -> bool {
+        self.scanned
+    }
+    /// Returns the number of pending blocks for this batch.
+    pub(super) fn pending_blocks(&self) -> u32 {
+        self.pending_blocks
+    }
+    /// Set the number of pending blocks for this batch.
+    pub(super) fn set_pending_blocks(&mut self, count: u32) {
+        self.pending_blocks = count;
+    }
+    /// Decrement pending blocks count, returning the new count.
+    pub(super) fn decrement_pending_blocks(&mut self) -> u32 {
+        self.pending_blocks = self.pending_blocks.saturating_sub(1);
+        self.pending_blocks
+    }
+    /// Returns whether rescan has been completed for this batch.
+    pub(super) fn rescan_complete(&self) -> bool {
+        self.rescan_complete
+    }
+    /// Mark rescan as complete for this batch.
+    pub(super) fn mark_rescan_complete(&mut self) {
+        self.rescan_complete = true;
+    }
+    /// Add addresses discovered during block processing for later rescan.
+    pub(super) fn add_addresses(&mut self, addresses: impl IntoIterator<Item = Address>) {
+        self.collected_addresses.extend(addresses);
+    }
+    /// Take collected addresses for rescan, leaving the set empty.
+    pub(super) fn take_collected_addresses(&mut self) -> HashSet<Address> {
+        std::mem::take(&mut self.collected_addresses)
+    }
+}
+
+impl PartialEq for FiltersBatch {
+    fn eq(&self, other: &Self) -> bool {
+        self.start_height == other.start_height
+    }
+}
+
+impl Eq for FiltersBatch {}
+
+impl PartialOrd for FiltersBatch {
+    fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
+        Some(self.cmp(other))
+    }
+}
+
+impl Ord for FiltersBatch {
+    fn cmp(&self, other: &Self) -> std::cmp::Ordering {
+        self.start_height.cmp(&other.start_height)
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use crate::sync::filters::batch::FiltersBatch;
+    use dashcore::bip158::BlockFilter;
+    use dashcore::Header;
+    use key_wallet_manager::wallet_manager::FilterMatchKey;
+    use std::collections::{BTreeSet, HashMap};
+
+    #[test]
+    fn test_filters_batch_new() {
+        let filters = HashMap::new();
+        let batch = FiltersBatch::new(100, 199, filters);
+
+        assert_eq!(batch.start_height(), 100);
+        assert_eq!(batch.end_height(), 199);
+        assert!(!batch.verified());
+    }
+
+    #[test]
+    fn test_filters_batch_mark_verified() {
+        let mut batch = FiltersBatch::new(100, 199, HashMap::new());
+        assert!(!batch.verified());
+        batch.mark_verified();
+        assert!(batch.verified());
+    }
+
+    #[test]
+    fn test_filters_batch_getters() {
+        let mut filters = HashMap::new();
+        let key = FilterMatchKey::new(100, Header::dummy(100).block_hash());
+        filters.insert(key, BlockFilter::new(&[0x01]));
+
+        let batch = FiltersBatch::new(100, 100, filters);
+
+        assert_eq!(batch.start_height(), 100);
+        assert_eq!(batch.end_height(), 100);
+        assert_eq!(batch.filters().len(), 1);
+        assert!(!batch.verified());
+    }
+
+    #[test]
+    fn test_filters_batch_ordering() {
+        let batch1 = FiltersBatch::new(0, 99, HashMap::new());
+        let batch2 = FiltersBatch::new(100, 199, HashMap::new());
+        let batch3 = FiltersBatch::new(200, 299, HashMap::new());
+
+        let mut set = BTreeSet::new();
+        set.insert(batch2);
+        set.insert(batch1);
+        set.insert(batch3);
+
+        let heights: Vec<_> = set.iter().map(|b| b.start_height()).collect();
+        assert_eq!(heights, vec![0, 100, 200]);
+    }
+
+    #[test]
+    fn test_filters_batch_equality() {
+        let batch1 = FiltersBatch::new(100, 199, HashMap::new());
+        let mut filters = HashMap::new();
+        filters.insert(
+            FilterMatchKey::new(100, Header::dummy(100).block_hash()),
+            BlockFilter::new(&[0x01]),
+        );
+        let batch2 = FiltersBatch::new(100, 199, filters);
+
+        // Equal based on start_height only
+        assert_eq!(batch1, batch2);
+    }
+}
diff --git a/dash-spv/src/sync/filters/batch_tracker.rs b/dash-spv/src/sync/filters/batch_tracker.rs
new file mode 100644
index 000000000..2a535e103
--- /dev/null
+++ b/dash-spv/src/sync/filters/batch_tracker.rs
@@ -0,0 +1,164 @@
+use dashcore::bip158::BlockFilter;
+use dashcore::BlockHash;
+use key_wallet_manager::wallet_manager::FilterMatchKey;
+use std::collections::{HashMap, HashSet};
+
+/// Tracks individual filters within a batch.
+///
+/// CFilters are requested in batches, and each request results in one response per filter.
+/// This struct tracks which heights of the batch have been received and buffers the filter data for batch processing.
+#[derive(Debug)]
+pub(super) struct BatchTracker {
+    /// Ending height of this batch (inclusive).
+    end_height: u32,
+    /// Heights within this batch that have been received.
+    received: HashSet<u32>,
+    /// Buffered filters of this batch.
+    filters: HashMap<FilterMatchKey, BlockFilter>,
+}
+
+impl BatchTracker {
+    /// Create a new batch tracker.
+    pub(super) fn new(end_height: u32) -> Self {
+        Self {
+            end_height,
+            received: HashSet::new(),
+            filters: HashMap::new(),
+        }
+    }
+
+    /// Insert a filter with its data.
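+    /// Re-inserting the same height and block hash replaces the buffered filter.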
+    pub(super) fn insert_filter(&mut self, height: u32, block_hash: BlockHash, filter_data: &[u8]) {
+        self.received.insert(height);
+        let key = FilterMatchKey::new(height, block_hash);
+        let filter = BlockFilter::new(filter_data);
+        self.filters.insert(key, filter);
+    }
+
+    /// Take the buffered filters.
+    pub(super) fn take_filters(&mut self) -> HashMap<FilterMatchKey, BlockFilter> {
+        std::mem::take(&mut self.filters)
+    }
+
+    /// Check if all filters in this batch have been received.
+    pub(super) fn is_complete(&self, start_height: u32) -> bool {
+        if start_height > self.end_height {
+            return false;
+        }
+        (start_height..=self.end_height).all(|h| self.received.contains(&h))
+    }
+    /// Ending height of this batch (inclusive).
+    pub(super) fn end_height(&self) -> u32 {
+        self.end_height
+    }
+    /// Number of filters received in this batch.
+    pub(super) fn received(&self) -> u32 {
+        self.received.len() as u32
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use crate::sync::filters::batch_tracker::BatchTracker;
+    use dashcore::Header;
+
+    /// Generate dummy filter data for testing.
+    fn dummy_filter_data(height: u32) -> Vec<u8> {
+        vec![height as u8, (height >> 8) as u8, 0x01, 0x02]
+    }
+
+    #[test]
+    fn test_batch_tracker_new() {
+        let tracker = BatchTracker::new(999);
+        assert_eq!(tracker.end_height(), 999);
+        assert_eq!(tracker.received(), 0);
+        assert!(tracker.filters.is_empty());
+    }
+
+    #[test]
+    fn test_batch_tracker_insert_filter() {
+        let mut tracker = BatchTracker::new(10);
+        let hash = Header::dummy(5).block_hash();
+        let data = dummy_filter_data(5);
+
+        tracker.insert_filter(5, hash, &data);
+
+        assert_eq!(tracker.received(), 1);
+        assert!(tracker.received.contains(&5));
+        assert_eq!(tracker.filters.len(), 1);
+    }
+
+    #[test]
+    fn test_batch_tracker_is_complete() {
+        let mut tracker = BatchTracker::new(2);
+        let start_height = 0;
+
+        // Not complete initially
+        assert!(!tracker.is_complete(start_height));
+
+        // Add filters
+        for h in 0..=2 {
+            let hash = Header::dummy(h).block_hash();
+            tracker.insert_filter(h, hash, &dummy_filter_data(h));
+        }
+
+        // Now complete (3 filters: 0, 1, 2)
+        assert!(tracker.is_complete(start_height));
+    }
+
+    #[test]
+    fn test_batch_tracker_is_complete_inverted_range() {
+        let tracker = BatchTracker::new(5);
+        // start_height > end_height should return false, not underflow
+        assert!(!tracker.is_complete(10));
+    }
+
+    #[test]
+    fn test_batch_tracker_is_complete_out_of_range_entries() {
+        let mut tracker = BatchTracker::new(2);
+        // Insert filters outside the expected range
+        for h in [10, 20, 30] {
+            tracker.insert_filter(h, Header::dummy(h).block_hash(), &dummy_filter_data(h));
+        }
+        // Has 3 entries in received, which would pass the old count-based check
+        // for start_height=0..=2 (expected 3), but none are in range
+        assert!(!tracker.is_complete(0));
+    }
+
+    #[test]
+    fn test_batch_tracker_is_complete_boundary() {
+        let mut tracker = BatchTracker::new(5);
+        // Insert all but the last height
+        for h in 3..=4 {
+            tracker.insert_filter(h, Header::dummy(h).block_hash(), &dummy_filter_data(h));
+        }
+        assert!(!tracker.is_complete(3));
+
+        // Insert the final height
+        tracker.insert_filter(5, Header::dummy(5).block_hash(), &dummy_filter_data(5));
+        assert!(tracker.is_complete(3));
+    }
+
+    #[test]
+    fn test_batch_tracker_is_complete_single_height() {
+        let mut tracker = BatchTracker::new(7);
+        assert!(!tracker.is_complete(7));
+
+        tracker.insert_filter(7, Header::dummy(7).block_hash(), &dummy_filter_data(7));
+        assert!(tracker.is_complete(7));
+    }
+
+    #[test]
+    fn test_batch_tracker_take_filters() {
+        let mut tracker = BatchTracker::new(1);
+
+        tracker.insert_filter(0, Header::dummy(0).block_hash(), &dummy_filter_data(0));
+        tracker.insert_filter(1, Header::dummy(1).block_hash(), &dummy_filter_data(1));
+
+        assert_eq!(tracker.filters.len(), 2);
+
+        let taken = tracker.take_filters();
+        assert_eq!(taken.len(), 2);
+        assert!(tracker.filters.is_empty());
+    }
+}
diff --git a/dash-spv/src/sync/filters/manager.rs b/dash-spv/src/sync/filters/manager.rs
new file mode 100644
index 000000000..5d4b2a5f1
--- /dev/null
+++ b/dash-spv/src/sync/filters/manager.rs
@@ -0,0 +1,938 @@
+//! Filters manager for parallel sync.
+//!
+//! Downloads compact block filters (BIP 157/158), verifies them against headers,
+//! and matches them against the wallet to identify blocks for download.
+//! Emits FiltersStored, FiltersSyncComplete and BlocksNeeded events.
+
+use std::collections::{btree_map, BTreeMap, BTreeSet, HashMap, HashSet};
+use std::sync::Arc;
+
+use dashcore::bip158::BlockFilter;
+use dashcore::{Address, BlockHash};
+
+use super::batch::FiltersBatch;
+use super::pipeline::FiltersPipeline;
+use crate::error::SyncResult;
+use crate::network::RequestSender;
+use crate::storage::{BlockHeaderStorage, FilterHeaderStorage, FilterStorage};
+use crate::sync::filters::util::get_prev_filter_header;
+use crate::sync::{FiltersProgress, SyncEvent, SyncManager, SyncState};
+use crate::validation::{FilterValidationInput, FilterValidator, Validator};
+
+use dashcore::hash_types::FilterHeader;
+use key_wallet_manager::wallet_interface::WalletInterface;
+use key_wallet_manager::wallet_manager::{check_compact_filters_for_addresses, FilterMatchKey};
+use tokio::sync::RwLock;
+
+/// Batch size for processing filters.
+const BATCH_PROCESSING_SIZE: u32 = 5000;
+
+/// Maximum number of batches to scan ahead while waiting for blocks.
+const MAX_LOOKAHEAD_BATCHES: usize = 3;
+
+/// Filters manager for downloading and matching compact block filters.
+///
+/// Generic over:
+/// - `H: BlockHeaderStorage` for block hash lookups
+/// - `FH: FilterHeaderStorage` for filter header verification
+/// - `F: FilterStorage` for storing and loading filters
+/// - `W: WalletInterface` for wallet operations
+pub struct FiltersManager<
+    H: BlockHeaderStorage,
+    FH: FilterHeaderStorage,
+    F: FilterStorage,
+    W: WalletInterface,
+> {
+    /// Current progress of the manager.
+    pub(super) progress: FiltersProgress,
+    /// Block header storage (for block hash lookups).
+    pub(super) header_storage: Arc<RwLock<H>>,
+    /// Filter header storage (for verification).
+    filter_header_storage: Arc<RwLock<FH>>,
+    /// Filter storage (for storing filters).
+    pub(super) filter_storage: Arc<RwLock<F>>,
+    /// Wallet for matching filters.
+    pub(super) wallet: Arc<RwLock<W>>,
+    /// Pipeline for downloading filters.
+    pub(super) filter_pipeline: FiltersPipeline,
+    /// Completed batches waiting for verification and storage.
+    pending_batches: BTreeSet<FiltersBatch>,
+    /// Next batch start height to store (for filter verification/storage).
+    next_batch_to_store: u32,
+
+    // === Multi-batch processing state ===
+    /// Active batches being processed (keyed by start_height).
+    pub(super) active_batches: BTreeMap<u32, FiltersBatch>,
+    /// Height that has been committed to wallet (all blocks up to this height processed).
+    committed_height: u32,
+    /// Current block height being processed (for progress tracking).
+    processing_height: u32,
+    /// Blocks remaining that need to be processed.
+    /// Maps block_hash -> (height, batch_start) for batch association.
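+    /// For example, a matched block at height 5100 that belongs to the batch
+    /// starting at 5000 is tracked as `hash -> (5100, 5000)`.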
+    pub(super) blocks_remaining: BTreeMap<BlockHash, (u32, u32)>,
+    /// Block hashes that have been matched and queued for download.
+    filters_matched: HashSet<BlockHash>,
+}
+
+impl<H: BlockHeaderStorage, FH: FilterHeaderStorage, F: FilterStorage, W: WalletInterface>
+    FiltersManager<H, FH, F, W>
+{
+    /// Create a new filters manager with the given storage references.
+    pub fn new(
+        wallet: Arc<RwLock<W>>,
+        header_storage: Arc<RwLock<H>>,
+        filter_header_storage: Arc<RwLock<FH>>,
+        filter_storage: Arc<RwLock<F>>,
+    ) -> Self {
+        Self {
+            progress: FiltersProgress::default(),
+            header_storage,
+            filter_header_storage,
+            filter_storage,
+            wallet,
+            filter_pipeline: FiltersPipeline::new(),
+            pending_batches: BTreeSet::new(),
+            next_batch_to_store: 0,
+            // Multi-batch processing
+            active_batches: BTreeMap::new(),
+            committed_height: 0,
+            processing_height: 0,
+            blocks_remaining: BTreeMap::new(),
+            filters_matched: HashSet::new(),
+        }
+    }
+
+    async fn load_filters(
+        &self,
+        start_height: u32,
+        end_height: u32,
+    ) -> SyncResult<HashMap<FilterMatchKey, BlockFilter>> {
+        let loaded_filters =
+            self.filter_storage.read().await.load_filters(start_height..end_height + 1).await?;
+
+        let loaded_headers =
+            self.header_storage.read().await.load_headers(start_height..end_height + 1).await?;
+
+        let mut filters = HashMap::new();
+        for (idx, (filter_data, header)) in
+            loaded_filters.iter().zip(loaded_headers.iter()).enumerate()
+        {
+            let height = start_height + idx as u32;
+            let key = FilterMatchKey::new(height, header.block_hash());
+            let filter = BlockFilter::new(filter_data);
+            filters.insert(key, filter);
+        }
+        Ok(filters)
+    }
+
+    /// Start or resume filter download.
+    pub(super) async fn start_download(
+        &mut self,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        self.set_state(SyncState::Syncing);
+        // Get wallet state
+        let (wallet_birth_height, wallet_synced_height) = {
+            let wallet = self.wallet.read().await;
+            (wallet.earliest_required_height().await, wallet.synced_height())
+        };
+
+        // Get stored filters tip
+        let stored_filters_tip = self.filter_storage.read().await.filter_tip_height().await?;
+
+        // Get header start height (for checkpoint sync)
+        let header_start_height =
+            self.header_storage.read().await.get_start_height().await.unwrap_or(0);
+
+        // Calculate scan start (where we need to start processing)
+        // Must be at least header_start_height for checkpoint-based sync
+        let scan_start = if wallet_synced_height > 0 {
+            wallet_birth_height.max(wallet_synced_height + 1)
+        } else {
+            wallet_birth_height
+        }
+        .max(header_start_height);
+
+        // Check if already at target (nothing to download)
+        if scan_start > self.progress.filter_header_tip_height() {
+            // Only emit FiltersSyncComplete if we've also reached the chain tip
+            // This prevents premature sync complete while filter headers are still syncing
+            if self.progress.current_height() >= self.progress.target_height() {
+                self.set_state(SyncState::Synced);
+                tracing::info!("Filters already synced to {}", self.progress.target_height());
+                return Ok(vec![SyncEvent::FiltersSyncComplete {
+                    tip_height: self.progress.current_height(),
+                }]);
+            }
+            // Caught up to available filter headers but chain tip not reached yet
+            return Ok(vec![]);
+        }
+
+        // Determine download start (where we need to download from)
+        // Must be at least header_start_height for checkpoint-based sync
+        let download_start = if stored_filters_tip > 0 {
+            (stored_filters_tip + 1).max(header_start_height)
+        } else {
+            scan_start
+        };
+
+        // Initialize storage tracking
+        // If we have pending batches from a previous run, continue from their boundaries
+        // instead of recalculating from storage (which might not reflect in-flight batches)
+        if !self.pending_batches.is_empty() {
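+            // Keep the existing store cursor so verification resumes at the
+            // first still-pending batch rather than the storage tip.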
+            let first_pending = self.pending_batches.first().unwrap().start_height();
+            tracing::info!(
+                "Resuming with {} pending batches, next_batch_to_store staying at {} (first pending: {})",
+                self.pending_batches.len(),
+                self.next_batch_to_store,
+                first_pending
+            );
+            // Don't reset next_batch_to_store - keep the existing value
+        } else {
+            tracing::info!(
+                "Initializing next_batch_to_store to {} (stored_filters_tip={}, scan_start={})",
+                download_start,
+                stored_filters_tip,
+                scan_start
+            );
+            self.next_batch_to_store = download_start;
+        }
+
+        self.processing_height = scan_start;
+
+        // Initialize download pipeline for all remaining filters
+        if download_start <= self.progress.filter_header_tip_height() {
+            // Only reinitialize if pipeline is empty - avoid losing in-flight batches
+            if self.filter_pipeline.active_count() == 0 && self.pending_batches.is_empty() {
+                self.filter_pipeline.init(download_start, self.progress.filter_header_tip_height());
+                tracing::info!(
+                    "Starting filter download from {} to {} (batch-based processing)",
+                    download_start,
+                    self.progress.filter_header_tip_height()
+                );
+            } else {
+                // Extend target without resetting state - batches still in flight
+                self.filter_pipeline.extend_target(self.progress.filter_header_tip_height());
+                tracing::info!(
+                    "Resuming filter download to {} (active batches: {}, pending: {})",
+                    self.progress.filter_header_tip_height(),
+                    self.filter_pipeline.active_count(),
+                    self.pending_batches.len()
+                );
+            }
+
+            let header_storage = self.header_storage.read().await;
+            self.filter_pipeline.send_pending(requests, &*header_storage).await?;
+            drop(header_storage);
+        } else {
+            // No new filters to download - initialize pipeline to a "complete" state
+            // so it doesn't try to download from its default start height
+            self.filter_pipeline.init(download_start, download_start.saturating_sub(1));
+            tracing::info!("Rescan mode: no new filters to download, scanning stored filters only");
+        }
+
+        // Initialize the first processing batch
+        let batch_end =
+            (scan_start + BATCH_PROCESSING_SIZE - 1).min(self.progress.filter_header_tip_height());
+
+        // Load any already-stored filters into the current batch, or create an empty batch
+        let filters = if stored_filters_tip > 0 && scan_start <= stored_filters_tip {
+            let end_height = stored_filters_tip.min(batch_end);
+            tracing::info!(
+                "Loading stored filters {} to {} into current batch",
+                scan_start,
+                end_height
+            );
+            // Update current_height to reflect stored filters are available
+            self.progress.update_current_height(stored_filters_tip);
+            self.load_filters(scan_start, end_height).await?
+        } else {
+            HashMap::new()
+        };
+
+        let mut batch = FiltersBatch::new(scan_start, batch_end, filters);
+        if stored_filters_tip >= batch_end {
+            batch.mark_verified();
+        }
+        self.active_batches.insert(scan_start, batch);
+        self.committed_height = scan_start.saturating_sub(1);
+
+        // Only scan if all filters for the batch are already loaded
+        if self.progress.current_height() >= batch_end {
+            self.scan_batch(scan_start).await
+        } else {
+            tracing::debug!(
+                "Initial batch {}-{}: waiting for filters (current_height={})",
+                scan_start,
+                batch_end,
+                self.progress.current_height()
+            );
+            Ok(vec![])
+        }
+    }
+
+    /// Store completed filter batches to disk and do speculative matching.
+    /// This is decoupled from block processing - we store and match as fast as possible.
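+    /// Batches must be stored in height order so the filter-header chain used
+    /// for verification stays contiguous.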
+    pub(super) async fn store_and_match_batches(&mut self) -> SyncResult<Vec<SyncEvent>> {
+        // Collect newly completed batches from pipeline
+        let completed = self.filter_pipeline.take_completed_batches();
+        // Filter out batches that have already been stored (can happen with retries)
+        for batch in completed {
+            if batch.start_height() < self.next_batch_to_store {
+                tracing::debug!(
+                    "Discarding duplicate batch {}-{} (already stored, next_batch_to_store={})",
+                    batch.start_height(),
+                    batch.end_height(),
+                    self.next_batch_to_store
+                );
+                continue;
+            }
+            self.pending_batches.insert(batch);
+        }
+
+        let mut events = Vec::new();
+
+        // Store batches in order (for filter verification chain)
+        while let Some(batch) = self.pending_batches.first() {
+            if batch.start_height() != self.next_batch_to_store {
+                tracing::trace!(
+                    "Waiting for batch {}, first pending is {} ({} pending)",
+                    self.next_batch_to_store,
+                    batch.start_height(),
+                    self.pending_batches.len()
+                );
+                break;
+            }
+
+            let mut batch = self.pending_batches.pop_first().unwrap();
+
+            tracing::debug!(
+                "Storing filter batch {} to {} ({} filters)",
+                batch.start_height(),
+                batch.end_height(),
+                batch.filters().len()
+            );
+
+            // Verify and store filters
+            if !batch.verified() {
+                // Load filter headers for verification
+                let filter_headers = self
+                    .filter_header_storage
+                    .read()
+                    .await
+                    .load_filter_headers(batch.start_height()..batch.end_height() + 1)
+                    .await?;
+
+                let filter_headers_map: HashMap<u32, FilterHeader> = filter_headers
+                    .into_iter()
+                    .enumerate()
+                    .map(|(idx, header)| (batch.start_height() + idx as u32, header))
+                    .collect();
+
+                let filter_header_storage = self.filter_header_storage.read().await;
+                let prev_filter_header =
+                    get_prev_filter_header(&*filter_header_storage, batch.start_height()).await?;
+                drop(filter_header_storage);
+
+                let validator = FilterValidator::new();
+                let validation_input = FilterValidationInput {
+                    filters: batch.filters(),
+                    expected_headers: &filter_headers_map,
+                    prev_filter_header,
+                };
+                validator.validate(validation_input)?;
+
+                // Store verified filters to disk
+                let mut filter_storage = self.filter_storage.write().await;
+                for (key, filter) in batch.filters() {
+                    filter_storage.store_filter(key.height(), &filter.content).await?;
+                }
+                drop(filter_storage);
+
+                events.push(SyncEvent::FiltersStored {
+                    start_height: batch.start_height(),
+                    end_height: batch.end_height(),
+                });
+            }
+
+            // === Load filters into all active batches that overlap ===
+            for active_batch in self.active_batches.values_mut() {
+                if batch.start_height() <= active_batch.end_height()
+                    && batch.end_height() >= active_batch.start_height()
+                {
+                    // This batch overlaps with active batch, load into memory
+                    let load_start = batch.start_height().max(active_batch.start_height());
+                    let load_end = batch.end_height().min(active_batch.end_height());
+
+                    let mut loaded_count = 0;
+                    for (key, filter) in batch.filters_mut() {
+                        if key.height() >= load_start && key.height() <= load_end {
+                            active_batch.filters_mut().insert(key.clone(), filter.clone());
+                            loaded_count += 1;
+                        }
+                    }
+                    tracing::debug!(
+                        "Loaded {} filters from batch {}-{} into active_batch {}-{} (active_batch now has {} filters)",
+                        loaded_count,
+                        batch.start_height(),
+                        batch.end_height(),
+                        active_batch.start_height(),
+                        active_batch.end_height(),
+                        active_batch.filters().len()
+                    );
+                }
+            }
+
+            self.progress.add_processed(batch.end_height() - batch.start_height() + 1);
+            self.progress.update_current_height(batch.end_height());
+            self.next_batch_to_store = batch.end_height() + 1;
+        }
+
+        // If we stored any batches, try to process the batch containing the current processing height.
+        // This is called only when batches complete, not on every filter
+        if !events.is_empty() {
+            tracing::debug!(
+                "Calling try_process_batch after storing batches (current_height={}, target_height={})",
+                self.progress.current_height(),
+                self.progress.target_height()
+            );
+            events.extend(self.try_process_batch().await?);
+        }
+
+        Ok(events)
+    }
+
+    /// Try to process batches - commit completed, scan ready, create lookahead.
+    /// Returns events for blocks that need to be downloaded.
+    pub(super) async fn try_process_batch(&mut self) -> SyncResult<Vec<SyncEvent>> {
+        let mut events = Vec::new();
+
+        // Phase 1: Commit completed batches in order
+        events.extend(self.try_commit_batches().await?);
+
+        // Phase 2: Scan any ready batches where filters are available
+        events.extend(self.scan_ready_batches().await?);
+
+        // Phase 3: Create lookahead batches up to MAX_LOOKAHEAD_BATCHES
+        events.extend(self.try_create_lookahead_batches().await?);
+
+        // If no active batches and all filters downloaded, check if we can transition to Synced
+        // Only emit SyncComplete if we've also reached the chain tip (target_height)
+        if self.active_batches.is_empty()
+            && self.state() == SyncState::Syncing
+            && self.progress.current_height() >= self.progress.filter_header_tip_height()
+            && self.progress.current_height() >= self.progress.target_height()
+        {
+            self.set_state(SyncState::Synced);
+            tracing::info!("Filter sync complete at height {}", self.progress.current_height());
+            events.push(SyncEvent::FiltersSyncComplete {
+                tip_height: self.progress.current_height(),
+            });
+        }
+
+        Ok(events)
+    }
+
+    /// Commit completed batches in order (lowest batch_start first).
+    async fn try_commit_batches(&mut self) -> SyncResult<Vec<SyncEvent>> {
+        let mut events = Vec::new();
+
+        loop {
+            // Get the lowest batch
+            let Some((&batch_start, batch)) = self.active_batches.first_key_value() else {
+                break;
+            };
+
+            // Check if batch was scanned - can't commit until scanned
+            if !batch.scanned() {
+                break;
+            }
+
+            // Check if batch has pending blocks
+            if batch.pending_blocks() > 0 {
+                break;
+            }
+
+            // Check if rescan is needed and not done
+            if !batch.rescan_complete() {
+                // Take collected addresses from the batch
+                let addresses = self
+                    .active_batches
+                    .get_mut(&batch_start)
+                    .map(|b| b.take_collected_addresses())
+                    .unwrap_or_default();
+
+                if !addresses.is_empty() {
+                    // Rescan current batch
+                    events.extend(self.rescan_batch(batch_start, addresses.clone()).await?);
+
+                    // Also rescan later batches that are already scanned
+                    let later_batches: Vec<u32> = self
+                        .active_batches
+                        .iter()
+                        .filter(|(&start, batch)| start > batch_start && batch.scanned())
+                        .map(|(&start, _)| start)
+                        .collect();
+
+                    for later_start in later_batches {
+                        events.extend(self.rescan_batch(later_start, addresses.clone()).await?);
+                    }
+
+                    // Check if rescan found more blocks
+                    if let Some(batch) = self.active_batches.get(&batch_start) {
+                        if batch.pending_blocks() > 0 {
+                            // Found more blocks, can't commit yet
+                            break;
+                        }
+                    }
+                }
+                // Mark rescan as complete
+                if let Some(batch) = self.active_batches.get_mut(&batch_start) {
+                    batch.mark_rescan_complete();
+                }
+            }
+
+            // Commit this batch
+            let batch = self.active_batches.remove(&batch_start).unwrap();
+            self.committed_height = batch.end_height();
+            self.wallet.write().await.update_synced_height(batch.end_height());
+            self.processing_height = batch.end_height() + 1;
+
+            tracing::info!(
+                "Committed batch {}-{}, committed_height now {}",
{}", + batch.start_height(), + batch.end_height(), + self.committed_height + ); + } + + Ok(events) + } + + /// Scan any active batches where filters are available but not yet scanned. + async fn scan_ready_batches(&mut self) -> SyncResult> { + let mut events = Vec::new(); + + // Collect batch starts that need scanning + let batch_starts: Vec = self + .active_batches + .iter() + .filter(|(_, batch)| { + !batch.scanned() && self.progress.current_height() >= batch.end_height() + }) + .map(|(&start, _)| start) + .collect(); + + for batch_start in batch_starts { + events.extend(self.scan_batch(batch_start).await?); + } + + Ok(events) + } + + /// Create lookahead batches up to MAX_LOOKAHEAD_BATCHES. + async fn try_create_lookahead_batches(&mut self) -> SyncResult> { + let mut events = Vec::new(); + + while self.active_batches.len() < MAX_LOOKAHEAD_BATCHES { + // Find where next batch should start + let next_start = if let Some((&_, last_batch)) = self.active_batches.last_key_value() { + last_batch.end_height() + 1 + } else { + self.processing_height + }; + + // Check if we've reached the target + if next_start > self.progress.filter_header_tip_height() { + break; + } + + let next_end = (next_start + BATCH_PROCESSING_SIZE - 1) + .min(self.progress.filter_header_tip_height()); + + tracing::info!( + "Creating lookahead batch {}-{} (active_batches={})", + next_start, + next_end, + self.active_batches.len() + ); + + // Load available filters into the new batch + let available_end = self.progress.current_height().min(next_end); + let filters = if next_start <= available_end { + self.load_filters(next_start, available_end).await? + } else { + HashMap::new() + }; + + let mut batch = FiltersBatch::new(next_start, next_end, filters); + if self.progress.current_height() >= next_end { + batch.mark_verified(); + } + self.active_batches.insert(next_start, batch); + + // Scan immediately if filters are available + if self.progress.current_height() >= next_end { + events.extend(self.scan_batch(next_start).await?); + } + } + + Ok(events) + } + + /// Rescan a specific batch for newly discovered addresses. + pub(super) async fn rescan_batch( + &mut self, + batch_start: u32, + new_addresses: HashSet
+    ) -> SyncResult<Vec<SyncEvent>> {
+        if new_addresses.is_empty() {
+            return Ok(vec![]);
+        }
+
+        let Some(batch) = self.active_batches.get_mut(&batch_start) else {
+            return Ok(vec![]);
+        };
+
+        tracing::info!(
+            "Rescan filters ({}-{}) for {} new addresses",
+            batch.start_height(),
+            batch.end_height(),
+            new_addresses.len()
+        );
+
+        if batch.filters().is_empty() {
+            return Ok(vec![]);
+        }
+
+        // Match filters against new addresses only
+        let addresses_vec: Vec<_> = new_addresses.into_iter().collect();
+        let matches = check_compact_filters_for_addresses(batch.filters(), addresses_vec);
+        let mut events = Vec::new();
+        let mut blocks_needed = BTreeSet::new();
+        let mut new_blocks_count = 0;
+
+        if !matches.is_empty() {
+            self.progress.add_matched(matches.len() as u32);
+        }
+        for key in matches {
+            // Skip blocks that were already matched (even if already processed)
+            if self.filters_matched.contains(key.hash()) {
+                continue;
+            }
+            // Queue blocks discovered by rescan for download
+            if let btree_map::Entry::Vacant(e) = self.blocks_remaining.entry(*key.hash()) {
+                e.insert((key.height(), batch_start));
+                self.filters_matched.insert(*key.hash());
+                blocks_needed.insert(key);
+                new_blocks_count += 1;
+            }
+        }
+
+        // Update batch pending_blocks count
+        if new_blocks_count > 0 {
+            if let Some(batch) = self.active_batches.get_mut(&batch_start) {
+                batch.set_pending_blocks(batch.pending_blocks() + new_blocks_count);
+            }
+            tracing::info!("Rescan found {} additional blocks", new_blocks_count);
+            events.push(SyncEvent::BlocksNeeded {
+                blocks: blocks_needed,
+            });
+        }
+
+        Ok(events)
+    }
+
+    /// Scan a specific batch with the wallet's current addresses.
+    async fn scan_batch(&mut self, batch_start: u32) -> SyncResult<Vec<SyncEvent>> {
+        let mut events = Vec::new();
+
+        let Some(batch) = self.active_batches.get_mut(&batch_start) else {
+            tracing::debug!("scan_batch: batch {} not found", batch_start);
+            return Ok(events);
+        };
+
+        tracing::debug!(
+            "scan_batch: batch {}-{} has {} filters",
+            batch.start_height(),
+            batch.end_height(),
+            batch.filters().len()
+        );
+
+        batch.mark_scanned();
+
+        // Get all filters in the batch
+        if batch.filters().is_empty() {
+            tracing::debug!("scan_batch: batch filters are empty, returning early");
+            return Ok(events);
+        }
+
+        // Match against the wallet's current addresses
+        let wallet = self.wallet.read().await;
+        let addresses = wallet.monitored_addresses();
+        let matches = check_compact_filters_for_addresses(batch.filters(), addresses);
+        drop(wallet);
+
+        tracing::info!(
+            "Batch {}-{}: found {} matching blocks",
+            batch.start_height(),
+            batch.end_height(),
+            matches.len()
+        );
+
+        if matches.is_empty() {
+            return Ok(events);
+        }
+
+        self.progress.add_matched(matches.len() as u32);
+
+        // Filter out already-processed blocks and track the new ones
+        let mut blocks_needed = BTreeSet::new();
+        let mut new_blocks_count = 0;
+        for key in matches {
+            if self.filters_matched.contains(key.hash()) {
+                continue;
+            }
+            if self.blocks_remaining.contains_key(key.hash()) {
+                continue;
+            }
+            self.blocks_remaining.insert(*key.hash(), (key.height(), batch_start));
+            self.filters_matched.insert(*key.hash());
+            blocks_needed.insert(key);
+            new_blocks_count += 1;
+        }
+
+        // Update batch pending_blocks count
+        if let Some(batch) = self.active_batches.get_mut(&batch_start) {
+            batch.set_pending_blocks(batch.pending_blocks() + new_blocks_count);
+        }
+
+        if !blocks_needed.is_empty() {
+            events.push(SyncEvent::BlocksNeeded {
+                blocks: blocks_needed,
+            });
+        }
+
+        Ok(events)
+    }
+
+    /// Handle notification that new filter headers are available.
+    /// Used by both FilterHeadersSyncComplete and FilterHeadersStored events.
+    pub(super) async fn handle_new_filter_headers(
+        &mut self,
+        tip_height: u32,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        self.progress.update_filter_header_tip_height(tip_height);
+        self.update_target_height(tip_height);
+
+        match self.state() {
+            SyncState::Syncing | SyncState::Synced
+                if self.progress.current_height() < self.progress.filter_header_tip_height() =>
+            {
+                self.filter_pipeline.extend_target(tip_height);
+                {
+                    let header_storage = self.header_storage.read().await;
+                    self.filter_pipeline.send_pending(requests, &*header_storage).await?;
+                }
+
+                if self.state() == SyncState::Synced && self.active_batches.is_empty() {
+                    tracing::debug!("Processing new filter (target: {})", tip_height);
+                    return self.try_create_lookahead_batches().await;
+                }
+            }
+            SyncState::WaitingForConnections | SyncState::WaitForEvents
+                if self.progress.current_height() < self.progress.filter_header_tip_height() =>
+            {
+                return self.start_download(requests).await;
+            }
+            _ => {}
+        }
+        Ok(vec![])
+    }
+}
+
+impl<H: BlockHeaderStorage, FH: FilterHeaderStorage, F: FilterStorage, W: WalletInterface>
+    std::fmt::Debug for FiltersManager<H, FH, F, W>
+{
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("FiltersManager").field("progress", &self.progress).finish()
+    }
+}
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::network::MessageType;
+    use crate::storage::{
+        DiskStorageManager, PersistentBlockHeaderStorage, PersistentFilterHeaderStorage,
+        PersistentFilterStorage,
+    };
+    use crate::sync::{ManagerIdentifier, SyncManagerProgress};
+    use key_wallet_manager::test_utils::MockWallet;
+
+    type TestFiltersManager = FiltersManager<
+        PersistentBlockHeaderStorage,
+        PersistentFilterHeaderStorage,
+        PersistentFilterStorage,
+        MockWallet,
+    >;
+    type TestSyncManager = dyn SyncManager;
+
+    async fn create_test_manager() -> TestFiltersManager {
+        let storage = DiskStorageManager::with_temp_dir().await.unwrap();
+        let wallet = Arc::new(RwLock::new(MockWallet::new()));
+        FiltersManager::new(
+            wallet,
+            storage.header_storage(),
+            storage.filter_header_storage(),
+            storage.filter_storage(),
+        )
+    }
+
+    #[tokio::test]
+    async fn test_filters_manager_new() {
+        let manager = create_test_manager().await;
+        assert_eq!(manager.identifier(), ManagerIdentifier::Filter);
+        assert_eq!(manager.state(), SyncState::Initializing);
+        assert_eq!(manager.wanted_message_types(), vec![MessageType::CFilter]);
+    }
+
+    #[tokio::test]
+    async fn test_filters_manager_progress() {
+        let mut manager = create_test_manager().await;
+        manager.set_state(SyncState::Syncing);
+        manager.progress.update_current_height(500);
+        manager.progress.update_target_height(1000);
+        manager.progress.add_processed(350);
+        manager.progress.add_downloaded(250);
+        manager.progress.add_matched(150);
+
+        let manager_ref: &TestSyncManager = &manager;
+        let progress = manager_ref.progress();
+        if let SyncManagerProgress::Filters(progress) = progress {
+            assert_eq!(progress.state(), SyncState::Syncing);
+            assert_eq!(progress.current_height(), 500);
+            assert_eq!(progress.target_height(), 1000);
+            assert_eq!(progress.processed(), 350);
+            assert_eq!(progress.downloaded(), 250);
+            assert_eq!(progress.matched(), 150);
+            assert!(progress.last_activity().elapsed().as_secs() < 1);
+        } else {
+            panic!("Expected SyncManagerProgress::Filters");
+        }
+    }
+
+    #[tokio::test]
+    async fn test_max_lookahead_constant() {
+        // Verify the constant is set to the expected value
+        assert_eq!(MAX_LOOKAHEAD_BATCHES, 3);
+    }
+
+    #[tokio::test]
+    async fn test_batch_commit_blocks_on_pending() {
+        let mut manager = create_test_manager().await;
+        manager.set_state(SyncState::Syncing);
+
+        // Manually create two batches
+        let mut batch1 = FiltersBatch::new(0, 4999, HashMap::new());
+        let batch2 = FiltersBatch::new(5000, 9999, HashMap::new());
+
+        // batch1 has pending blocks, batch2 does not
+        batch1.set_pending_blocks(1);
+
+        manager.active_batches.insert(0, batch1);
+        manager.active_batches.insert(5000, batch2);
+
+        // Try to commit - should not commit anything since batch1 has pending blocks
+        manager.try_commit_batches().await.unwrap();
+        assert_eq!(manager.active_batches.len(), 2);
+        // committed_height stays at initial value since nothing was committed
+        assert!(manager.active_batches.contains_key(&0));
+    }
+
+    #[tokio::test]
+    async fn test_batch_commit_succeeds_when_ready() {
+        let mut manager = create_test_manager().await;
+        manager.set_state(SyncState::Syncing);
+
+        // Create a batch with no pending blocks, scanned, and rescan complete
+        let mut batch1 = FiltersBatch::new(0, 4999, HashMap::new());
+        batch1.set_pending_blocks(0);
+        batch1.mark_scanned();
+        batch1.mark_rescan_complete();
+
+        manager.active_batches.insert(0, batch1);
+
+        // Commit should work
+        manager.try_commit_batches().await.unwrap();
+        assert_eq!(manager.active_batches.len(), 0);
+        assert_eq!(manager.committed_height, 4999);
+    }
+
+    #[tokio::test]
+    async fn test_batch_commit_order_preserved() {
+        let mut manager = create_test_manager().await;
+        manager.set_state(SyncState::Syncing);
+
+        // Create two batches, both ready to commit
+        let mut batch1 = FiltersBatch::new(0, 4999, HashMap::new());
+        batch1.set_pending_blocks(0);
+        batch1.mark_scanned();
+        batch1.mark_rescan_complete();
+
+        let mut batch2 = FiltersBatch::new(5000, 9999, HashMap::new());
+        batch2.set_pending_blocks(0);
+        batch2.mark_scanned();
+        batch2.mark_rescan_complete();
+
+        manager.active_batches.insert(5000, batch2); // Insert higher one first
+        manager.active_batches.insert(0, batch1);
+
+        // Commit should commit both in order
+        manager.try_commit_batches().await.unwrap();
+        assert_eq!(manager.active_batches.len(), 0);
+        assert_eq!(manager.committed_height, 9999); // Both committed
+    }
+
+    #[tokio::test]
+    async fn test_blocks_remaining_tracks_batch() {
+        let mut manager = create_test_manager().await;
+        manager.set_state(SyncState::Syncing);
+
+        // Add blocks from different batches
+        let hash1 = dashcore::block::Header::dummy(0).block_hash();
+        let hash2 = dashcore::block::Header::dummy(1).block_hash();
+
+        manager.blocks_remaining.insert(hash1, (100, 0)); // batch 0
+        manager.blocks_remaining.insert(hash2, (5100, 5000)); // batch 5000
+
+        // Verify batch association
+        assert_eq!(manager.blocks_remaining.get(&hash1), Some(&(100, 0)));
+        assert_eq!(manager.blocks_remaining.get(&hash2), Some(&(5100, 5000)));
+    }
+
+    #[tokio::test]
+    async fn test_batch_collects_addresses() {
+        use crate::sync::filters::batch::FiltersBatch;
+        use dashcore::Network;
+
+        let mut batch = FiltersBatch::new(0, 4999, HashMap::new());
+
+        // Initially empty
+        assert!(batch.take_collected_addresses().is_empty());
+
+        // Add addresses using test utility
+        let addr1 = dashcore::Address::dummy(Network::Testnet, 1);
+        let addr2 = dashcore::Address::dummy(Network::Testnet, 2);
+
+        batch.add_addresses([addr1.clone(), addr2.clone()]);
+
+        let collected = batch.take_collected_addresses();
+        assert_eq!(collected.len(), 2);
+        assert!(collected.contains(&addr1));
+        assert!(collected.contains(&addr2));
+
+        // After take, should be empty
+        assert!(batch.take_collected_addresses().is_empty());
+    }
+}
diff --git a/dash-spv/src/sync/filters/mod.rs b/dash-spv/src/sync/filters/mod.rs
new file mode 100644
index 000000000..a930e87da
--- /dev/null
+++ b/dash-spv/src/sync/filters/mod.rs
@@ -0,0 +1,10 @@
+mod batch;
+mod batch_tracker;
+mod manager;
+mod pipeline;
+mod progress;
+mod sync_manager;
+mod util;
+
+pub use manager::FiltersManager;
+pub use progress::FiltersProgress;
diff --git a/dash-spv/src/sync/filters/pipeline.rs b/dash-spv/src/sync/filters/pipeline.rs
new file mode 100644
index 000000000..cc480e8f8
--- /dev/null
+++ b/dash-spv/src/sync/filters/pipeline.rs
@@ -0,0 +1,910 @@
+//! CFilters pipeline implementation.
+//!
+//! Handles pipelined download of compact block filters (BIP 157/158).
+//! Uses DownloadCoordinator for batch-level tracking, with additional
+//! per-batch tracking for individual filter responses.
+//!
+//! Filters are buffered in a HashMap until the entire batch
+//! is complete, enabling batch verification and direct wallet matching.
+
+use std::collections::{BTreeSet, HashMap};
+use std::time::Duration;
+
+use dashcore::BlockHash;
+
+use crate::error::{SyncError, SyncResult};
+use crate::network::RequestSender;
+use crate::storage::BlockHeaderStorage;
+use crate::sync::download_coordinator::{DownloadConfig, DownloadCoordinator};
+use crate::sync::filters::batch::FiltersBatch;
+use crate::sync::filters::batch_tracker::BatchTracker;
+
+/// Batch size for filter requests.
+const FILTER_BATCH_SIZE: u32 = 1000;
+
+/// Maximum concurrent filter batch requests.
+const MAX_CONCURRENT_FILTER_BATCHES: usize = 20;
+
+/// Timeout for filter batch requests.
+/// Each batch requires 1000 individual filter messages, so allow plenty of time.
+const FILTER_TIMEOUT: Duration = Duration::from_secs(30);
+
+/// Maximum number of retries for CFilter requests.
+const FILTERS_MAX_RETRIES: u32 = 3;
+
+/// Pipeline for downloading compact block filters.
+///
+/// Uses DownloadCoordinator for batch-level download mechanics,
+/// with BatchTracker for tracking individual filters within
+/// each batch.
+///
+/// Filters are buffered until the entire batch is complete, then returned
+/// via `take_completed_batches()` for verification and matching.
+#[derive(Debug)]
+pub(super) struct FiltersPipeline {
+    /// Core coordinator tracks batch start heights.
+    coordinator: DownloadCoordinator<u32>,
+    /// Tracks individual filter receipts per batch (start_height -> tracker).
+    batch_trackers: HashMap<u32, BatchTracker>,
+    /// Completed filter batches.
+    completed_batches: BTreeSet<FiltersBatch>,
+    /// Target height for sync.
+    target_height: u32,
+    /// Total filters received.
+    filters_received: u32,
+    /// Highest filter height received.
+    highest_received: u32,
+}
+
+impl Default for FiltersPipeline {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl FiltersPipeline {
+    /// Create a new CFilters pipeline.
+    pub(super) fn new() -> Self {
+        Self {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_max_concurrent(MAX_CONCURRENT_FILTER_BATCHES)
+                    .with_timeout(FILTER_TIMEOUT)
+                    .with_max_retries(FILTERS_MAX_RETRIES),
+            ),
+            batch_trackers: HashMap::new(),
+            completed_batches: BTreeSet::new(),
+            target_height: 0,
+            filters_received: 0,
+            highest_received: 0,
+        }
+    }
+
+    /// Get the number of active batches.
+    pub(super) fn active_count(&self) -> usize {
+        self.coordinator.active_count()
+    }
+
+    /// Take completed batches with their buffered filter data for processing.
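+    /// Batches come back ordered by start height via the `Ord` impl on `FiltersBatch`.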
+    pub(super) fn take_completed_batches(&mut self) -> BTreeSet<FiltersBatch> {
+        std::mem::take(&mut self.completed_batches)
+    }
+
+    /// Initialize the pipeline for a sync range.
+    ///
+    /// Pre-queues all batches for the range using the coordinator's pending queue.
+    pub(super) fn init(&mut self, start_height: u32, target_height: u32) {
+        self.coordinator.clear();
+        self.batch_trackers.clear();
+        self.completed_batches.clear();
+        self.target_height = target_height;
+        self.highest_received = start_height.saturating_sub(1);
+        self.filters_received = 0;
+
+        // Pre-queue all batches
+        let mut current = start_height;
+        while current <= target_height {
+            self.coordinator.enqueue([current]);
+            let batch_end = (current + FILTER_BATCH_SIZE - 1).min(target_height);
+            current = batch_end + 1;
+        }
+    }
+
+    /// Extend the target height without resetting pipeline state.
+    ///
+    /// Queues additional batches from the old target to the new target.
+    pub(super) fn extend_target(&mut self, new_target: u32) {
+        if new_target <= self.target_height {
+            return;
+        }
+
+        let old_target = self.target_height;
+        self.target_height = new_target;
+
+        // Queue new batches from (old_target + 1) to new_target
+        let mut current = old_target + 1;
+        while current <= new_target {
+            self.coordinator.enqueue([current]);
+            let batch_end = (current + FILTER_BATCH_SIZE - 1).min(new_target);
+            current = batch_end + 1;
+        }
+    }
+
+    /// Send pending filter requests up to the concurrency limit.
+    pub(super) async fn send_pending(
+        &mut self,
+        requests: &RequestSender,
+        storage: &impl BlockHeaderStorage,
+    ) -> SyncResult<usize> {
+        let count = self.coordinator.available_to_send();
+        if count == 0 {
+            return Ok(0);
+        }
+
+        let start_heights = self.coordinator.take_pending(count);
+        let mut sent = 0;
+
+        for start_height in start_heights {
+            let batch_end = (start_height + FILTER_BATCH_SIZE - 1).min(self.target_height);
+
+            // Get stop hash for this batch
+            let stop_hash = storage
+                .get_header(batch_end)
+                .await?
+                .ok_or_else(|| {
+                    SyncError::Storage(format!("Missing header at height {}", batch_end))
+                })?
+                .block_hash();
+
+            requests.request_filters(start_height, stop_hash)?;
+
+            // Track in coordinator and batch tracker (reuse existing tracker if present)
+            self.coordinator.mark_sent(&[start_height]);
+            self.batch_trackers.entry(start_height).or_insert_with(|| BatchTracker::new(batch_end));
+
+            tracing::debug!(
+                "Sent GetCFilters: {} to {} ({} active batches)",
+                start_height,
+                batch_end,
+                self.coordinator.active_count()
+            );
+
+            sent += 1;
+        }
+
+        Ok(sent)
+    }
+
+    /// Handle a received CFilter message with filter data.
+    ///
+    /// Buffers the filter data for batch verification and wallet matching.
+    /// Returns `Some(height)` when a batch completes, `None` otherwise.
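+    /// Filters for heights with no active tracker (e.g. a batch dropped after
+    /// exceeding its retries) are ignored.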
+    pub(super) fn receive_with_data(
+        &mut self,
+        height: u32,
+        block_hash: BlockHash,
+        filter_data: &[u8],
+    ) -> Option<u32> {
+        // Find which batch this filter belongs to
+        let batch_start = self.find_batch_for_height(height)?;
+
+        let tracker = self.batch_trackers.get_mut(&batch_start)?;
+        tracker.insert_filter(height, block_hash, filter_data);
+        self.filters_received += 1;
+        self.highest_received = self.highest_received.max(height);
+
+        // Check if batch is complete
+        if !tracker.is_complete(batch_start) {
+            // Log progress toward completion
+            let received = tracker.received();
+            let expected = (tracker.end_height() - batch_start + 1) as usize;
+            if received > 0 && received % 100 == 0 {
+                tracing::debug!(
+                    "Filter batch {} progress: {}/{} filters received",
+                    batch_start,
+                    received,
+                    expected
+                );
+            }
+            return None;
+        }
+
+        let end_height = tracker.end_height();
+        // Take the filters before removing the tracker
+        let filters =
+            self.batch_trackers.get_mut(&batch_start).map(|t| t.take_filters()).unwrap_or_default();
+
+        self.batch_trackers.remove(&batch_start);
+        self.coordinator.receive(&batch_start);
+
+        tracing::info!(
+            "Filter batch {}-{} complete ({} filters)",
+            batch_start,
+            end_height,
+            filters.len()
+        );
+        let batch = FiltersBatch::new(batch_start, end_height, filters);
+        self.completed_batches.insert(batch);
+
+        Some(height)
+    }
+
+    /// Find which batch a filter height belongs to.
+    fn find_batch_for_height(&self, height: u32) -> Option<u32> {
+        for (&start, tracker) in &self.batch_trackers {
+            if height >= start && height <= tracker.end_height() {
+                return Some(start);
+            }
+        }
+        None
+    }
+
+    /// Check for timed out batches and handle retries.
+    ///
+    /// Returns batch starts that timed out and were re-queued.
+    /// Uses coordinator's retry mechanism to avoid duplicate requests.
+    /// Note: Does not remove batch trackers - keeps them to receive any late-arriving filters.
+    pub(super) fn handle_timeouts(&mut self) -> Vec<u32> {
+        let mut timed_out_starts = Vec::new();
+
+        for start in self.coordinator.check_timeouts() {
+            if self.coordinator.enqueue_retry(start) {
+                tracing::warn!("Filter batch at {} timed out, queued for retry", start);
+                timed_out_starts.push(start);
+            } else {
+                // Max retries exceeded - remove tracker, log error
+                tracing::error!("Filter batch at {} exceeded max retries, giving up", start);
+                self.batch_trackers.remove(&start);
+            }
+        }
+
+        timed_out_starts
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::network::{NetworkRequest, RequestSender};
+    use crate::storage::{PersistentBlockHeaderStorage, PersistentStorage};
+    use dashcore::bip158::BlockFilter;
+    use dashcore::block::Header;
+    use dashcore::network::message::NetworkMessage;
+    use dashcore_hashes::Hash;
+    use key_wallet_manager::wallet_manager::FilterMatchKey;
+    use std::time::Duration;
+    use tempfile::TempDir;
+    use tokio::sync::mpsc::unbounded_channel;
+    // =========================================================================
+    // Helper functions
+    // =========================================================================
+
+    /// Create a pipeline with short timeout for testing timeouts.
+    fn create_pipeline_with_short_timeout() -> FiltersPipeline {
+        FiltersPipeline {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_timeout(Duration::from_millis(1))
+                    .with_max_retries(3),
+            ),
+            batch_trackers: HashMap::new(),
+            completed_batches: BTreeSet::new(),
+            target_height: 0,
+            filters_received: 0,
+            highest_received: 0,
+        }
+    }
+
+    /// Create a test request sender with its receiver.
+    fn create_test_request_sender(
+    ) -> (RequestSender, tokio::sync::mpsc::UnboundedReceiver<NetworkRequest>) {
+        let (tx, rx) = unbounded_channel();
+        (RequestSender::new(tx), rx)
+    }
+
+    /// Generate dummy filter data for testing.
+    fn dummy_filter_data(height: u32) -> Vec<u8> {
+        vec![height as u8, (height >> 8) as u8, 0x01, 0x02]
+    }
+
+    // =========================================================================
+    // FiltersPipeline Construction Tests
+    // =========================================================================
+
+    #[test]
+    fn test_pipeline_new() {
+        let pipeline = FiltersPipeline::new();
+
+        assert_eq!(pipeline.active_count(), 0);
+        assert!(pipeline.batch_trackers.is_empty());
+        assert!(pipeline.completed_batches.is_empty());
+        assert_eq!(pipeline.target_height, 0);
+        assert_eq!(pipeline.filters_received, 0);
+        assert_eq!(pipeline.highest_received, 0);
+    }
+
+    #[test]
+    fn test_pipeline_default_trait() {
+        let default_pipeline = FiltersPipeline::default();
+        let new_pipeline = FiltersPipeline::new();
+
+        assert_eq!(default_pipeline.active_count(), new_pipeline.active_count());
+        assert_eq!(default_pipeline.target_height, new_pipeline.target_height);
+    }
+
+    #[test]
+    fn test_pipeline_init() {
+        let mut pipeline = FiltersPipeline::new();
+
+        pipeline.init(100, 500);
+
+        // Should have 1 batch queued (100-500 is 401 filters, fits in 1 batch)
+        assert_eq!(pipeline.coordinator.pending_count(), 1);
+        assert_eq!(pipeline.target_height, 500);
+        assert_eq!(pipeline.highest_received, 99);
+        assert_eq!(pipeline.filters_received, 0);
+    }
+
+    #[test]
+    fn test_pipeline_init_resets_state() {
+        let mut pipeline = FiltersPipeline::new();
+
+        // Add some state
+        pipeline.batch_trackers.insert(0, BatchTracker::new(99));
+        pipeline.completed_batches.insert(FiltersBatch::new(100, 199, HashMap::new()));
+        pipeline.coordinator.mark_sent(&[0]);
+        pipeline.filters_received = 50;
+
+        // Init should clear everything
+        pipeline.init(200, 300);
+
+        assert!(pipeline.batch_trackers.is_empty());
+        assert!(pipeline.completed_batches.is_empty());
+        assert_eq!(pipeline.active_count(), 0);
+        assert_eq!(pipeline.filters_received, 0);
+        // 1 batch queued for heights 200-300
+        assert_eq!(pipeline.coordinator.pending_count(), 1);
+        assert_eq!(pipeline.target_height, 300);
+    }
+
+    // =========================================================================
+    // Target Extension Tests
+    // =========================================================================
+
+    #[test]
+    fn test_extend_target_increases() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.init(0, 100);
+
+        pipeline.extend_target(200);
+
+        assert_eq!(pipeline.target_height, 200);
+    }
+
+    #[test]
+    fn test_extend_target_ignores_lower() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.init(0, 100);
+
+        pipeline.extend_target(50);
+
+        assert_eq!(pipeline.target_height, 100);
+
+        pipeline.extend_target(100);
+
+        assert_eq!(pipeline.target_height, 100);
+    }
+
+    // =========================================================================
+    // Receive Tests
+    // =========================================================================
+
+    #[test]
+    fn test_receive_single_filter() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.target_height = 99;
+
+        // Set up batch tracker manually (simulating an in-flight batch)
+        pipeline.batch_trackers.insert(0, BatchTracker::new(99));
+        pipeline.coordinator.mark_sent(&[0]);
+
+        let height = 50;
+        let hash = Header::dummy(height).block_hash();
+        let result = pipeline.receive_with_data(height, hash, &dummy_filter_data(height));
+
+        // Returns None since batch is not complete (only 1 of 100 filters received)
+        assert_eq!(result, None);
+        // But counters are updated
+        assert_eq!(pipeline.filters_received, 1);
+        assert_eq!(pipeline.highest_received, 50);
+    }
+
+    #[test]
+    fn test_receive_unknown_height() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.target_height = 99;
+
+        // No batch tracker set up - filter is unexpected
+        let hash = Header::dummy(50).block_hash();
+        let result = pipeline.receive_with_data(50, hash, &dummy_filter_data(50));
+
+        assert_eq!(result, None);
+        assert_eq!(pipeline.filters_received, 0);
+    }
+
+    #[test]
+    fn test_receive_batch_completion() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.target_height = 2;
+
+        // Set up a small batch (3 filters: 0, 1, 2)
+        pipeline.batch_trackers.insert(0, BatchTracker::new(2));
+        pipeline.coordinator.mark_sent(&[0]);
+
+        // Receive all filters
+        for h in 0..=2 {
+            let hash = Header::dummy(h).block_hash();
+            pipeline.receive_with_data(h, hash, &dummy_filter_data(h));
+        }
+
+        // Batch should be complete and moved to completed_batches
+        assert!(pipeline.batch_trackers.is_empty());
+        assert_eq!(pipeline.completed_batches.len(), 1);
+
+        let completed = pipeline.take_completed_batches();
+        assert_eq!(completed.len(), 1);
+        let batch = completed.into_iter().next().unwrap();
+        assert_eq!(batch.start_height(), 0);
+        assert_eq!(batch.end_height(), 2);
+        assert_eq!(batch.filters().len(), 3);
+    }
+
+    #[test]
+    fn test_receive_out_of_order() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.target_height = 4;
+
+        pipeline.batch_trackers.insert(0, BatchTracker::new(4));
+        pipeline.coordinator.mark_sent(&[0]);
+
+        // Receive out of order
+        for h in [3, 1, 4, 0, 2] {
+            let hash = Header::dummy(h).block_hash();
+            pipeline.receive_with_data(h, hash, &dummy_filter_data(h));
+        }
+
+        // Should complete successfully
+        assert!(pipeline.batch_trackers.is_empty());
+        assert_eq!(pipeline.completed_batches.len(), 1);
+    }
+
+    #[test]
+    fn test_receive_updates_counters() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.target_height = 99;
+
+        pipeline.batch_trackers.insert(0, BatchTracker::new(99));
+        pipeline.coordinator.mark_sent(&[0]);
+
+        // Receive some filters
+        for h in [10, 5, 20, 15] {
+            let hash = Header::dummy(h).block_hash();
+            pipeline.receive_with_data(h, hash, &dummy_filter_data(h));
+        }
+
+        assert_eq!(pipeline.filters_received, 4);
+        assert_eq!(pipeline.highest_received, 20);
+    }
+
+    #[test]
+    fn test_receive_small_batch_at_target() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.target_height = 1005;
+
+        // Small batch of 6 filters (1000-1005)
+        pipeline.batch_trackers.insert(1000, BatchTracker::new(1005));
+        pipeline.coordinator.mark_sent(&[1000]);
+
+        // Receive all 6 filters
+        for h in 1000..=1005 {
+            let hash = Header::dummy(h).block_hash();
+            pipeline.receive_with_data(h, hash, &dummy_filter_data(h));
+        }
+
+        assert_eq!(pipeline.completed_batches.len(), 1);
+        let batch = pipeline.completed_batches.iter().next().unwrap();
+        assert_eq!(batch.filters().len(), 6);
+    }
+
+    #[test]
+    fn test_receive_multiple_batches() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.target_height = 9;
+
+        // Set up two batches manually
+        pipeline.batch_trackers.insert(0, BatchTracker::new(4));
+        pipeline.batch_trackers.insert(5, BatchTracker::new(9));
+        pipeline.coordinator.mark_sent(&[0, 5]);
+
+        // Receive first batch
+        for h in 0..=4 {
+            let hash = Header::dummy(h).block_hash();
+            pipeline.receive_with_data(h, hash, &dummy_filter_data(h));
+        }
+
+        assert_eq!(pipeline.completed_batches.len(), 1);
+        assert_eq!(pipeline.batch_trackers.len(), 1);
+
+        // Receive second batch
+        for h in 5..=9 {
+            let hash = Header::dummy(h).block_hash();
+            pipeline.receive_with_data(h, hash, &dummy_filter_data(h));
+        }
+
+        assert_eq!(pipeline.completed_batches.len(), 2);
+        assert!(pipeline.batch_trackers.is_empty());
+    }
+
+    // =========================================================================
+    // find_batch_for_height Tests
+    // =========================================================================
+
+    #[test]
+    fn test_find_batch_for_height_found() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.batch_trackers.insert(0, BatchTracker::new(999));
+        pipeline.batch_trackers.insert(1000, BatchTracker::new(1999));
+
+        assert_eq!(pipeline.find_batch_for_height(500), Some(0));
+        assert_eq!(pipeline.find_batch_for_height(1500), Some(1000));
+    }
+
+    #[test]
+    fn test_find_batch_for_height_none() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.batch_trackers.insert(100, BatchTracker::new(199));
+
+        // Below range
+        assert_eq!(pipeline.find_batch_for_height(50), None);
+        // Above range
+        assert_eq!(pipeline.find_batch_for_height(250), None);
+    }
+
+    #[test]
+    fn test_find_batch_for_height_boundary() {
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.batch_trackers.insert(100, BatchTracker::new(199));
+
+        // First height in batch
+        assert_eq!(pipeline.find_batch_for_height(100), Some(100));
+        // Last height in batch
+        assert_eq!(pipeline.find_batch_for_height(199), Some(100));
+    }
+
+    // =========================================================================
+    // Timeout Tests
+    // =========================================================================
+
+    #[test]
+    fn test_handle_timeouts_no_batches() {
+        let mut pipeline = FiltersPipeline::new();
+        let timed_out = pipeline.handle_timeouts();
+        assert!(timed_out.is_empty());
+    }
+
+    #[test]
+    fn test_handle_timeouts_requeue() {
+        let mut pipeline = create_pipeline_with_short_timeout();
+        pipeline.target_height = 999;
+
+        // Set up batch and mark as in-flight (simulating a sent request)
+        pipeline.batch_trackers.insert(0, BatchTracker::new(999));
+        pipeline.coordinator.mark_sent(&[0]);
+
+        // Wait for timeout
+        std::thread::sleep(Duration::from_millis(5));
+
+        let timed_out = pipeline.handle_timeouts();
+
+        assert_eq!(timed_out, vec![0]);
+        // Batch should be re-queued in coordinator's pending queue
+        assert_eq!(pipeline.coordinator.pending_count(), 1);
+        assert_eq!(pipeline.active_count(), 0);
+    }
+
+    #[test]
+    fn test_handle_timeouts_keeps_tracker() {
+        let mut pipeline = create_pipeline_with_short_timeout();
+        pipeline.target_height = 99;
+
+        pipeline.batch_trackers.insert(0, BatchTracker::new(99));
+        pipeline.coordinator.mark_sent(&[0]);
+
+        // Receive some filters before timeout
+        for h in 0..10 {
+            let hash = Header::dummy(h).block_hash();
+            pipeline.receive_with_data(h, hash, &dummy_filter_data(h));
+        }
+
+        std::thread::sleep(Duration::from_millis(5));
+
+        let timed_out = pipeline.handle_timeouts();
+
+        // Should timeout but tracker is preserved for late arrivals
+        assert_eq!(timed_out, vec![0]);
+        assert!(pipeline.batch_trackers.contains_key(&0));
+        assert_eq!(pipeline.batch_trackers.get(&0).unwrap().received(), 10);
+    }
+
+    #[test]
+    fn test_timeout_does_not_duplicate_inflight_batches() {
+        // This test verifies the bug fix: when an early batch times out,
+        // only that batch is re-queued, not later in-flight batches.
+        let mut pipeline = FiltersPipeline {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_timeout(Duration::from_millis(1))
+                    .with_max_retries(3)
+                    .with_max_concurrent(10),
+            ),
+            batch_trackers: HashMap::new(),
+            completed_batches: BTreeSet::new(),
+            target_height: 2999,
+            filters_received: 0,
+            highest_received: 0,
+        };
+
+        // Simulate 3 in-flight batches: 0-999, 1000-1999, 2000-2999
+        pipeline.batch_trackers.insert(0, BatchTracker::new(999));
+        pipeline.batch_trackers.insert(1000, BatchTracker::new(1999));
+        pipeline.batch_trackers.insert(2000, BatchTracker::new(2999));
+        pipeline.coordinator.mark_sent(&[0, 1000, 2000]);
+
+        assert_eq!(pipeline.active_count(), 3);
+        assert_eq!(pipeline.coordinator.pending_count(), 0);
+
+        // Wait for timeout
+        std::thread::sleep(Duration::from_millis(5));
+
+        // Handle timeouts - all 3 should timeout and be re-queued
+        let timed_out = pipeline.handle_timeouts();
+        assert_eq!(timed_out.len(), 3);
+
+        // All 3 batches should be in the pending queue, not duplicated
+        assert_eq!(pipeline.coordinator.pending_count(), 3);
+        assert_eq!(pipeline.active_count(), 0);
+
+        // Take pending items - should get exactly 3, not more
+        let pending = pipeline.coordinator.take_pending(10);
+        assert_eq!(pending.len(), 3);
+        assert!(pending.contains(&0));
+        assert!(pending.contains(&1000));
+        assert!(pending.contains(&2000));
+    }
+
+    // =========================================================================
+    // send_pending Tests
+    // =========================================================================
+
+    #[tokio::test]
+    async fn test_send_pending_single_batch() {
+        let headers = Header::dummy_batch(0..1000);
+        let tmp_dir = TempDir::new().unwrap();
+        let mut storage = PersistentBlockHeaderStorage::open(tmp_dir.path()).await.unwrap();
+        storage.store_headers(&headers).await.unwrap();
+
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.init(0, 999);
+
+        let (sender, mut rx) = create_test_request_sender();
+
+        let count = pipeline.send_pending(&sender, &storage).await.unwrap();
+
+        assert_eq!(count, 1);
+        assert_eq!(pipeline.active_count(), 1);
+        assert!(pipeline.batch_trackers.contains_key(&0));
+        // No more pending since the single batch was sent
+        assert_eq!(pipeline.coordinator.pending_count(), 0);
+
+        // Verify message was sent
+        let request = rx.try_recv().unwrap();
+        let NetworkRequest::SendMessage(msg) = request;
+        if let NetworkMessage::GetCFilters(gcf) = msg {
+            assert_eq!(gcf.start_height, 0);
+            assert_eq!(gcf.filter_type, 0);
+        } else {
+            panic!("Expected GetCFilters message");
+        }
+    }
+
+    #[tokio::test]
+    async fn test_send_pending_respects_limit() {
+        // Create enough headers for many batches
+        let headers = Header::dummy_batch(0..25000);
+        let tmp_dir = TempDir::new().unwrap();
+        let mut storage = PersistentBlockHeaderStorage::open(tmp_dir.path()).await.unwrap();
+        storage.store_headers(&headers).await.unwrap();
+
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.init(0, 24999);
+
+        let (sender, _rx) = create_test_request_sender();
+
+        let count = pipeline.send_pending(&sender, &storage).await.unwrap();
+
+        // Should respect MAX_CONCURRENT_FILTER_BATCHES (20)
+        // 25 batches needed, but only 20 can be in-flight at once
+        assert_eq!(count, MAX_CONCURRENT_FILTER_BATCHES);
+        assert_eq!(pipeline.active_count(), MAX_CONCURRENT_FILTER_BATCHES);
+        assert_eq!(pipeline.batch_trackers.len(), MAX_CONCURRENT_FILTER_BATCHES);
+        // 5 batches still pending
+        assert_eq!(pipeline.coordinator.pending_count(), 5);
+    }
+
+    #[tokio::test]
+    async fn test_send_pending_calculates_end() {
+        let headers = Header::dummy_batch(0..1500);
+        let tmp_dir = TempDir::new().unwrap();
+        let mut storage = PersistentBlockHeaderStorage::open(tmp_dir.path()).await.unwrap();
+        storage.store_headers(&headers).await.unwrap();
+
+        let mut pipeline = FiltersPipeline::new();
+        // Target is 1200, so second batch ends at 1200 not 1999
+        pipeline.init(0, 1200);
+
+        let (sender, _rx) = create_test_request_sender();
+
+        let count = pipeline.send_pending(&sender, &storage).await.unwrap();
+
+        assert_eq!(count, 2);
+
+        // First batch: 0-999
+        assert!(pipeline.batch_trackers.contains_key(&0));
+        assert_eq!(pipeline.batch_trackers.get(&0).unwrap().end_height(), 999);
+
+        // Second batch: 1000-1200 (capped by target)
+        assert!(pipeline.batch_trackers.contains_key(&1000));
+        assert_eq!(pipeline.batch_trackers.get(&1000).unwrap().end_height(), 1200);
+    }
+
+    #[tokio::test]
+    async fn test_send_pending_sends_all_queued() {
+        let headers = Header::dummy_batch(0..3000);
+        let tmp_dir = TempDir::new().unwrap();
+        let mut storage = PersistentBlockHeaderStorage::open(tmp_dir.path()).await.unwrap();
+        storage.store_headers(&headers).await.unwrap();
+
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.init(0, 2500);
+
+        let (sender, _rx) = create_test_request_sender();
+
+        let count = pipeline.send_pending(&sender, &storage).await.unwrap();
+
+        // Should send all 3 batches: 0-999, 1000-1999, 2000-2500
+        assert_eq!(count, 3);
+        assert_eq!(pipeline.active_count(), 3);
+        assert_eq!(pipeline.coordinator.pending_count(), 0);
+    }
+
+    #[tokio::test]
+    async fn test_send_pending_no_work_when_queue_empty() {
+        let headers = Header::dummy_batch(0..100);
+        let tmp_dir = TempDir::new().unwrap();
+        let mut storage = PersistentBlockHeaderStorage::open(tmp_dir.path()).await.unwrap();
+        storage.store_headers(&headers).await.unwrap();
+
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.init(0, 50);
+
+        let (sender, _rx) = create_test_request_sender();
+
+        // First send exhausts the queue
+        let count = pipeline.send_pending(&sender, &storage).await.unwrap();
+        assert_eq!(count, 1);
+
+        // Second send has nothing to do
+        let count = pipeline.send_pending(&sender, &storage).await.unwrap();
+        assert_eq!(count, 0);
+    }
+
+    // =========================================================================
+    // Integration Tests
+    // =========================================================================
+
+    #[tokio::test]
+    async fn test_full_batch_lifecycle() {
+        let headers = Header::dummy_batch(0..100);
+        let tmp_dir = TempDir::new().unwrap();
+        let mut storage = PersistentBlockHeaderStorage::open(tmp_dir.path()).await.unwrap();
+        storage.store_headers(&headers).await.unwrap();
+
+        let mut pipeline = FiltersPipeline::new();
+        pipeline.init(0, 99);
+
+        let (sender, _rx) = create_test_request_sender();
+
+        // Send request
+        let sent = pipeline.send_pending(&sender, &storage).await.unwrap();
+        assert_eq!(sent, 1);
+        assert_eq!(pipeline.active_count(), 1);
+
+        // Receive all filters
+        for h in 0..=99 {
+            let hash = Header::dummy(h).block_hash();
+            pipeline.receive_with_data(h, hash, &dummy_filter_data(h));
+        }
+
+        // Batch should be complete
+        assert_eq!(pipeline.active_count(), 0);
+        assert_eq!(pipeline.completed_batches.len(), 1);
+        assert_eq!(pipeline.filters_received, 100);
+        assert_eq!(pipeline.highest_received, 99);
+
+        // Take completed
+        let completed = pipeline.take_completed_batches();
+        assert_eq!(completed.len(), 1);
+        assert!(pipeline.completed_batches.is_empty());
+    }
+
+    #[tokio::test]
+    async fn test_timeout_and_retry_flow() {
+        let headers = Header::dummy_batch(0..1000);
+        let tmp_dir = TempDir::new().unwrap();
+        let mut storage = PersistentBlockHeaderStorage::open(tmp_dir.path()).await.unwrap();
+        storage.store_headers(&headers).await.unwrap();
+
+        let mut pipeline = create_pipeline_with_short_timeout();
+        pipeline.init(0, 999);
+
+        let (sender, _rx) = create_test_request_sender();
+
+        // Send initial request
+        pipeline.send_pending(&sender, &storage).await.unwrap();
+        assert_eq!(pipeline.active_count(), 1);
+        assert_eq!(pipeline.coordinator.pending_count(), 0);
+
+        // Wait for timeout
+        std::thread::sleep(Duration::from_millis(5));
+
+        // Handle timeout - should re-queue the batch via coordinator
+        let timed_out = pipeline.handle_timeouts();
+        assert_eq!(timed_out.len(), 1);
+        assert_eq!(pipeline.coordinator.pending_count(), 1);
+        assert_eq!(pipeline.active_count(), 0);
+
+        // Tracker should still exist for late arrivals
+        assert!(pipeline.batch_trackers.contains_key(&0));
+
+        // Can retry by sending again
+        pipeline.send_pending(&sender, &storage).await.unwrap();
+        assert_eq!(pipeline.active_count(), 1);
+
+        // Existing tracker is reused (not replaced)
+        assert!(pipeline.batch_trackers.contains_key(&0));
+    }
+
+    #[test]
+    fn test_take_completed_batches_clears() {
+        let mut pipeline = FiltersPipeline::new();
+
+        // Add some completed batches
+        pipeline.completed_batches.insert(FiltersBatch::new(0, 99, HashMap::new()));
+        pipeline.completed_batches.insert(FiltersBatch::new(100, 199, HashMap::new()));
+
+        let taken = pipeline.take_completed_batches();
+        assert_eq!(taken.len(), 2);
+        assert!(pipeline.completed_batches.is_empty());
+    }
+
+    #[test]
+    fn test_filters_batch_filters_mut() {
+        let mut batch = FiltersBatch::new(0, 0, HashMap::new());
+
+        batch
+            .filters_mut()
+            .insert(FilterMatchKey::new(0, BlockHash::all_zeros()), BlockFilter::new(&[0x01]));
+
+        assert_eq!(batch.filters().len(), 1);
+    }
+}
diff --git a/dash-spv/src/sync/filters/progress.rs b/dash-spv/src/sync/filters/progress.rs
new file mode 100644
index 000000000..ad93fb8b3
--- /dev/null
+++ b/dash-spv/src/sync/filters/progress.rs
@@ -0,0 +1,159 @@
+use crate::sync::SyncState;
+use std::fmt;
+use std::time::Instant;
+
+/// Progress for filters synchronization.
+#[derive(Debug, Clone, PartialEq)]
+pub struct FiltersProgress {
+    /// Current sync state.
+    state: SyncState,
+    /// Tip height of the filter storage.
+    current_height: u32,
+    /// Target height (peer's best height). Used for progress display.
+    target_height: u32,
+    /// The tip height of the filter header storage (the download limit for filters).
+    /// Filters can only be downloaded up to this height.
+    filter_header_tip_height: u32,
+    /// Number of filters downloaded in the current sync session.
+    downloaded: u32,
+    /// Number of filters processed in the current sync session.
+    processed: u32,
+    /// Number of filters matched in the current sync session.
+    matched: u32,
+    /// The last time a filter was stored to disk or the last manager state change.
+    last_activity: Instant,
+}
+
+impl Default for FiltersProgress {
+    fn default() -> Self {
+        Self {
+            state: Default::default(),
+            current_height: 0,
+            target_height: 0,
+            filter_header_tip_height: 0,
+            downloaded: 0,
+            processed: 0,
+            matched: 0,
+            last_activity: Instant::now(),
+        }
+    }
+}
+
+impl FiltersProgress {
+    /// Get completion percentage (0.0 to 1.0).
+    /// Uses target_height (peer's best height) for accurate progress display.
+    pub fn percentage(&self) -> f64 {
+        if self.target_height == 0 {
+            return 1.0;
+        }
+        (self.current_height as f64 / self.target_height as f64).min(1.0)
+    }
+
+    /// Get the current sync state.
+    pub fn state(&self) -> SyncState {
+        self.state
+    }
+
+    /// Get the current height (last successfully processed height).
+    pub fn current_height(&self) -> u32 {
+        self.current_height
+    }
+
+    /// Get the target height (peer's best height, for progress display).
+    pub fn target_height(&self) -> u32 {
+        self.target_height
+    }
+
+    /// Get the filter header tip height (the download limit for filters).
+    pub fn filter_header_tip_height(&self) -> u32 {
+        self.filter_header_tip_height
+    }
+
+    /// Number of filters downloaded in the current sync session.
+    pub fn downloaded(&self) -> u32 {
+        self.downloaded
+    }
+
+    /// Number of filters processed in the current sync session.
+    pub fn processed(&self) -> u32 {
+        self.processed
+    }
+
+    /// Number of filters matched in the current sync session.
+    pub fn matched(&self) -> u32 {
+        self.matched
+    }
+
+    /// The last time a filter was stored to disk or the last manager state change.
+    pub fn last_activity(&self) -> Instant {
+        self.last_activity
+    }
+
+    /// Update the sync state and bump the last activity time.
+    pub fn set_state(&mut self, state: SyncState) {
+        self.state = state;
+        self.bump_last_activity();
+    }
+
+    /// Update the current height (last successfully processed height).
+    pub fn update_current_height(&mut self, height: u32) {
+        self.current_height = height;
+        self.bump_last_activity();
+    }
+
+    /// Update the target height (peer's best height, for progress display).
+    /// Only updates if the new height is greater than the current target (monotonic increase).
+    pub fn update_target_height(&mut self, height: u32) {
+        if height > self.target_height {
+            self.target_height = height;
+            self.bump_last_activity();
+        }
+    }
+
+    /// Update the filter header tip height (called when new filter headers are stored).
+    pub fn update_filter_header_tip_height(&mut self, height: u32) {
+        self.filter_header_tip_height = height;
+        self.bump_last_activity();
+    }
+
+    /// Add a number to the downloaded counter.
+    pub fn add_downloaded(&mut self, count: u32) {
+        self.downloaded += count;
+        self.bump_last_activity();
+    }
+
+    /// Add a number to the processed counter.
+    pub fn add_processed(&mut self, count: u32) {
+        self.processed += count;
+        self.bump_last_activity();
+    }
+
+    /// Add a number to the matched counter.
+    pub fn add_matched(&mut self, count: u32) {
+        self.matched += count;
+        self.bump_last_activity();
+    }
+
+    /// Bump the last activity time.
+    pub fn bump_last_activity(&mut self) {
+        self.last_activity = Instant::now();
+    }
+}
+
+impl fmt::Display for FiltersProgress {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        let pct = self.percentage() * 100.0;
+        write!(
+            f,
+            "{:?} {}/{} ({:.1}%) downloaded: {}, processed: {}, matched: {}, last_activity: {}s",
+            self.state,
+            self.current_height,
+            self.target_height,
+            pct,
+            self.downloaded,
+            self.processed,
+            self.matched,
+            self.last_activity.elapsed().as_secs(),
+        )
+    }
+}
diff --git a/dash-spv/src/sync/filters/sync_manager.rs b/dash-spv/src/sync/filters/sync_manager.rs
new file mode 100644
index 000000000..a2e3fe084
--- /dev/null
+++ b/dash-spv/src/sync/filters/sync_manager.rs
@@ -0,0 +1,237 @@
+use crate::error::{SyncError, SyncResult};
+use crate::network::{Message, MessageType, RequestSender};
+use crate::storage::{BlockHeaderStorage, FilterHeaderStorage, FilterStorage};
+use crate::sync::{
+    FiltersManager, ManagerIdentifier, SyncEvent, SyncManager, SyncManagerProgress, SyncState,
+};
+use async_trait::async_trait;
+use dashcore::network::message::NetworkMessage;
+use key_wallet_manager::wallet_interface::WalletInterface;
+
+#[async_trait]
+impl<
+        H: BlockHeaderStorage,
+        FH: FilterHeaderStorage,
+        F: FilterStorage,
+        W: WalletInterface + 'static,
+    > SyncManager for FiltersManager<H, FH, F, W>
+{
+    fn identifier(&self) -> ManagerIdentifier {
+        ManagerIdentifier::Filter
+    }
+
+    fn state(&self) -> SyncState {
+        self.progress.state()
+    }
+
+    fn set_state(&mut self, state: SyncState) {
+        self.progress.set_state(state);
+    }
+
+    fn update_target_height(&mut self, height: u32) {
+        self.progress.update_target_height(height);
+    }
+
+    fn wanted_message_types(&self) -> &'static [MessageType] {
+        &[MessageType::CFilter]
+    }
+
+    async fn initialize(&mut self) -> SyncResult<()> {
+        let wallet = self.wallet.read().await;
+        let synced_height = wallet.synced_height();
+        drop(wallet);
+
+        self.progress.update_current_height(synced_height);
+        self.set_state(SyncState::WaitingForConnections);
+
+        tracing::info!(
+            "FiltersManager initialized at height {}, waiting for filter headers",
+            self.progress.current_height()
+        );
+
+        Ok(())
+    }
+
+    async fn start_sync(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        if self.state() != SyncState::WaitingForConnections {
+            tracing::warn!("{} sync already started.", self.identifier());
+            return Ok(vec![]);
+        }
+
+        // Check if there are already stored filters we need to process
+        // This handles restart where filters are persisted but wallet state isn't
+        let stored_filters_tip = self.filter_storage.read().await.filter_tip_height().await?;
+
+        if stored_filters_tip > self.progress.current_height() {
+            tracing::info!(
+                "FiltersManager: wallet at height {}, stored filters at {} - starting rescan of stored filters",
+                self.progress.current_height(),
+                stored_filters_tip
+            );
+            // Set filter header tip to stored filters tip - we only scan what's already stored
+            self.progress.update_filter_header_tip_height(stored_filters_tip);
+            let mut events = vec![SyncEvent::SyncStart {
+                identifier: self.identifier(),
+            }];
+            events.extend(self.start_download(requests).await?);
+            return Ok(events);
+        }
+
+        // Already at or beyond stored filters tip - check if fully synced
+        if stored_filters_tip > 0 && stored_filters_tip == self.progress.current_height() {
+            self.progress.update_filter_header_tip_height(stored_filters_tip);
+            // Only emit SyncComplete if we've also reached the chain tip
+            if self.progress.current_height() >= self.progress.target_height() {
+                self.set_state(SyncState::Synced);
+                tracing::info!(
+                    "FiltersManager: already synced at height {}",
+                    self.progress.current_height()
+                );
+                return Ok(vec![SyncEvent::FiltersSyncComplete {
+                    tip_height: stored_filters_tip,
+                }]);
+            }
+            // Caught up to stored filters but chain tip not reached yet
+            self.set_state(SyncState::WaitForEvents);
+            return Ok(vec![]);
+        }
+
+        // No stored filters to process - wait for FilterHeadersSyncComplete events
+        self.set_state(SyncState::WaitForEvents);
+        Ok(vec![])
+    }
+
+    async fn handle_message(
+        &mut self,
+        msg: Message,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        let NetworkMessage::CFilter(cfilter) = msg.inner() else {
+            return Ok(vec![]);
+        };
+
+        // Find height for this filter
+        let height =
+            self.header_storage.read().await.get_header_height_by_hash(&cfilter.block_hash).await?;
+
+        let Some(h) = height else {
+            tracing::warn!(
+                block_hash = %cfilter.block_hash,
+                peer = %msg.peer_address(),
+                "Received CFilter for unknown block hash, rejecting as invalid"
+            );
+            // TODO: should we penalize the peer a bit?
+            return Err(SyncError::Validation(format!(
+                "CFilter references unknown block hash {}",
+                cfilter.block_hash
+            )));
+        };
+
+        // Buffer filter in pipeline
+        self.filter_pipeline.receive_with_data(h, cfilter.block_hash, &cfilter.filter);
+
+        // Send more requests if there are free slots
+        let header_storage = self.header_storage.read().await;
+        self.filter_pipeline.send_pending(requests, &*header_storage).await?;
+        drop(header_storage);
+
+        Ok(self.store_and_match_batches().await?)
+    }
+
+    async fn handle_sync_event(
+        &mut self,
+        event: &SyncEvent,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        match event {
+            SyncEvent::FilterHeadersSyncComplete {
+                tip_height,
+            } => {
+                return self.handle_new_filter_headers(*tip_height, requests).await;
+            }
+
+            SyncEvent::FilterHeadersStored {
+                tip_height,
+                ..
+            } => {
+                return self.handle_new_filter_headers(*tip_height, requests).await;
+            }
+
+            // React to BlockProcessed events from the BlocksManager
+            SyncEvent::BlockProcessed {
+                block_hash,
+                height,
+                new_addresses,
+                ..
+            } => {
+                // Check if this block is part of our tracked blocks
+                if let Some((_, batch_start)) = self.blocks_remaining.remove(block_hash) {
+                    // Decrement this batch's pending_blocks count
+                    if let Some(batch) = self.active_batches.get_mut(&batch_start) {
+                        batch.decrement_pending_blocks();
+                        tracing::debug!(
+                            "Block {} at height {} processed, batch {} has {} blocks remaining",
+                            block_hash,
+                            height,
+                            batch_start,
+                            batch.pending_blocks()
+                        );
+                    }
+
+                    // Collect new addresses in the batch for deferred rescan at commit time.
+                    // This batches rescans for efficiency and ensures all blocks from
+                    // a BlocksNeeded event are processed before triggering new rescans.
+                    if !new_addresses.is_empty() {
+                        if let Some(batch) = self.active_batches.get_mut(&batch_start) {
+                            batch.add_addresses(new_addresses.iter().cloned());
+                        }
+                    }
+
+                    // Try to commit/scan/create batches
+                    return self.try_process_batch().await;
+                }
+            }
+
+            _ => {}
+        }
+
+        Ok(vec![])
+    }
+
+    async fn tick(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        // TODO: Get rid of the send pending in here? Or decouple it from the header storage?
+        // Run tick when Syncing OR when Synced with pending work (new blocks arriving)
+        let has_pending_work = !self.active_batches.is_empty();
+        let should_tick = match self.state() {
+            SyncState::Syncing => true,
+            SyncState::Synced => has_pending_work,
+            _ => false,
+        };
+        if !should_tick {
+            return Ok(vec![]);
+        }
+
+        // Handle timeouts
+        let timed_out = self.filter_pipeline.handle_timeouts();
+        if !timed_out.is_empty() {
+            tracing::debug!("Re-queued {} timed out filter batches", timed_out.len());
+        }
+
+        // Send pending requests (decoupled from processing)
+        let header_storage = self.header_storage.read().await;
+        self.filter_pipeline.send_pending(requests, &*header_storage).await?;
+        drop(header_storage);
+
+        // Store completed batches and do speculative matching
+        let mut events = self.store_and_match_batches().await?;
+
+        // Try to process blocks in current batch
+        events.extend(self.try_process_batch().await?);
+
+        Ok(events)
+    }
+
+    fn progress(&self) -> SyncManagerProgress {
+        SyncManagerProgress::Filters(self.progress.clone())
+    }
+}
diff --git a/dash-spv/src/sync/filters/util.rs b/dash-spv/src/sync/filters/util.rs
new file mode 100644
index 000000000..63eeb2b6e
--- /dev/null
+++ b/dash-spv/src/sync/filters/util.rs
@@ -0,0 +1,24 @@
+use crate::error::SyncResult;
+use crate::storage::FilterHeaderStorage;
+use crate::SyncError;
+use dashcore::hash_types::FilterHeader;
+use dashcore_hashes::Hash;
+
+/// Get previous filter header for verification.
+///
+/// Returns `FilterHeader::all_zeros()` for height 0, otherwise loads from storage.
+pub(super) async fn get_prev_filter_header<S: FilterHeaderStorage>(
+    storage: &S,
+    height: u32,
+) -> SyncResult<FilterHeader> {
+    if height == 0 {
+        return Ok(FilterHeader::all_zeros());
+    }
+
+    storage.get_filter_header(height - 1).await?.ok_or_else(|| {
+        SyncError::InvalidState(format!(
+            "Missing filter header at height {} for verification",
+            height - 1
+        ))
+    })
+}
diff --git a/dash-spv/src/sync/identifier.rs b/dash-spv/src/sync/identifier.rs
new file mode 100644
index 000000000..a09e3e8fd
--- /dev/null
+++ b/dash-spv/src/sync/identifier.rs
@@ -0,0 +1,43 @@
+use std::fmt::Display;
+
+/// Unique identifier for each sync manager.
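+///
+/// The `Display` implementation renders the bare variant name, which is how
+/// managers identify themselves in log messages:
+/// ```ignore
+/// assert_eq!(ManagerIdentifier::Filter.to_string(), "Filter");
+/// ```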
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
+pub enum ManagerIdentifier {
+    BlockHeader,
+    FilterHeader,
+    Filter,
+    Block,
+    Masternode,
+    ChainLock,
+    InstantSend,
+}
+
+impl Display for ManagerIdentifier {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        match self {
+            ManagerIdentifier::BlockHeader => write!(f, "BlockHeader"),
+            ManagerIdentifier::FilterHeader => write!(f, "FilterHeader"),
+            ManagerIdentifier::Filter => write!(f, "Filter"),
+            ManagerIdentifier::Block => write!(f, "Block"),
+            ManagerIdentifier::Masternode => write!(f, "Masternode"),
+            ManagerIdentifier::ChainLock => write!(f, "ChainLock"),
+            ManagerIdentifier::InstantSend => write!(f, "InstantSend"),
+        }
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_manager_identifier_display() {
+        assert_eq!(ManagerIdentifier::BlockHeader.to_string(), "BlockHeader");
+        assert_eq!(ManagerIdentifier::FilterHeader.to_string(), "FilterHeader");
+        assert_eq!(ManagerIdentifier::Filter.to_string(), "Filter");
+        assert_eq!(ManagerIdentifier::Block.to_string(), "Block");
+        assert_eq!(ManagerIdentifier::Masternode.to_string(), "Masternode");
+        assert_eq!(ManagerIdentifier::ChainLock.to_string(), "ChainLock");
+        assert_eq!(ManagerIdentifier::InstantSend.to_string(), "InstantSend");
+    }
+}
diff --git a/dash-spv/src/sync/instantsend/manager.rs b/dash-spv/src/sync/instantsend/manager.rs
new file mode 100644
index 000000000..0a5e3c08e
--- /dev/null
+++ b/dash-spv/src/sync/instantsend/manager.rs
@@ -0,0 +1,435 @@
+//! InstantSend manager.
+//!
+//! Handles InstantSendLock messages (islock) from the network. Validates locks
+//! when masternode data is available, queues them when not.
+
+use std::collections::HashMap;
+use std::sync::Arc;
+use std::time::{Duration, SystemTime};
+
+use dashcore::ephemerealdata::instant_lock::InstantLock;
+use dashcore::hashes::Hash;
+use dashcore::sml::masternode_list_engine::MasternodeListEngine;
+use dashcore::Txid;
+use tokio::sync::RwLock;
+
+use crate::error::SyncResult;
+use crate::sync::{InstantSendProgress, SyncEvent};
+
+/// Maximum number of pending InstantLocks awaiting validation.
+const MAX_PENDING_INSTANTLOCKS: usize = 500;
+
+/// Maximum number of InstantLocks to cache.
+const MAX_CACHE_SIZE: usize = 5000;
+
+/// TTL for cached InstantLocks (1 hour).
+const CACHE_TTL: Duration = Duration::from_secs(3600);
+
+/// Maximum retry attempts before dropping a pending InstantLock (~1 hour at 2.5min blocks).
+const MAX_RETRIES: u32 = 24;
+
+/// Entry in the InstantLock cache.
+#[derive(Debug, Clone)]
+pub struct InstantLockEntry {
+    /// The InstantLock data.
+    pub instant_lock: InstantLock,
+    /// When the InstantLock was received.
+    pub received_at: SystemTime,
+    /// Whether the BLS signature was validated.
+    pub validated: bool,
+}
+
+/// Pending InstantLock awaiting validation with retry tracking.
+#[derive(Debug, Clone)]
+struct PendingInstantLock {
+    /// The InstantLock data.
+    instant_lock: InstantLock,
+    /// Number of validation retry attempts.
+    retry_count: u32,
+}
+
+/// InstantSend manager.
+///
+/// This manager:
+/// - Subscribes to ISLock messages from the network
+/// - Validates InstantLocks when masternode engine is available
+/// - Queues InstantLocks for later validation when engine not ready
+/// - Emits InstantLockReceived events
+pub struct InstantSendManager {
+    /// Current progress of the manager.
+    pub(super) progress: InstantSendProgress,
+    /// Shared Masternode list engine.
+    engine: Arc<RwLock<MasternodeListEngine>>,
+    /// InstantLocks indexed by txid.
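+    /// Bounded by `MAX_CACHE_SIZE` (the oldest entry is evicted on insert) and
+    /// pruned of entries older than `CACHE_TTL` by `prune_old_entries`.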
+    instantlocks: HashMap<Txid, InstantLockEntry>,
+    /// Pending InstantLocks awaiting validation with retry tracking.
+    pending_instantlocks: Vec<PendingInstantLock>,
+}
+
+impl InstantSendManager {
+    /// Create a new InstantSend manager.
+    pub fn new(engine: Arc<RwLock<MasternodeListEngine>>) -> Self {
+        Self {
+            progress: InstantSendProgress::default(),
+            engine,
+            instantlocks: HashMap::new(),
+            pending_instantlocks: Vec::new(),
+        }
+    }
+
+    /// Process an incoming InstantLock message.
+    pub(super) async fn process_instantlock(
+        &mut self,
+        instantlock: &InstantLock,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        let txid = instantlock.txid;
+
+        tracing::info!("Processing InstantLock for txid {}", txid);
+
+        // Check for duplicates
+        if self.instantlocks.contains_key(&txid) {
+            tracing::debug!("Already have InstantLock for txid {}", txid);
+            return Ok(vec![]);
+        }
+
+        // Structural validation
+        if !self.validate_structure(instantlock) {
+            tracing::warn!("Invalid InstantLock structure for txid {}", txid);
+            self.progress.add_invalid(1);
+            return Ok(vec![]);
+        }
+
+        // Try to validate with masternode engine
+        let validated = self.validate_signature(instantlock).await;
+
+        if validated {
+            self.progress.add_valid(1);
+        } else {
+            self.queue_pending(PendingInstantLock {
+                instant_lock: instantlock.clone(),
+                retry_count: 0,
+            });
+            self.progress.update_pending(self.pending_instantlocks.len());
+        }
+
+        // Store in cache
+        let entry = InstantLockEntry {
+            instant_lock: instantlock.clone(),
+            received_at: SystemTime::now(),
+            validated,
+        };
+        self.store_instantlock(txid, entry);
+
+        Ok(vec![SyncEvent::InstantLockReceived {
+            instant_lock: instantlock.clone(),
+            validated,
+        }])
+    }
+
+    /// Validate the structural integrity of an InstantLock.
+    fn validate_structure(&self, instantlock: &InstantLock) -> bool {
+        // Must have at least one input
+        if instantlock.inputs.is_empty() {
+            return false;
+        }
+
+        // Txid must not be null
+        if instantlock.txid == Txid::all_zeros() {
+            return false;
+        }
+
+        // Signature must not be zeroed
+        if instantlock.signature.is_zeroed() {
+            return false;
+        }
+
+        true
+    }
+
+    /// Validate the InstantLock BLS signature using the masternode engine.
+    async fn validate_signature(&self, instantlock: &InstantLock) -> bool {
+        let engine = self.engine.read().await;
+
+        match engine.verify_is_lock(instantlock) {
+            Ok(()) => {
+                tracing::info!(
+                    "InstantLock signature verified for txid {} (cyclehash={})",
+                    instantlock.txid,
+                    instantlock.cyclehash
+                );
+                true
+            }
+            Err(e) => {
+                tracing::warn!(
+                    "InstantLock signature verification failed for txid {} (cyclehash={}, inputs={}): {}",
+                    instantlock.txid,
+                    instantlock.cyclehash,
+                    instantlock.inputs.len(),
+                    e
+                );
+                false
+            }
+        }
+    }
+
+    /// Queue an InstantLock for later validation.
+    fn queue_pending(&mut self, pending: PendingInstantLock) {
+        // Remove oldest if at capacity
+        if self.pending_instantlocks.len() >= MAX_PENDING_INSTANTLOCKS {
+            let dropped = self.pending_instantlocks.remove(0);
+            tracing::warn!(
+                "Pending InstantLocks queue at capacity ({}), dropping oldest for txid {}",
+                MAX_PENDING_INSTANTLOCKS,
+                dropped.instant_lock.txid
+            );
+            self.progress.add_invalid(1);
+        }
+        self.pending_instantlocks.push(pending);
+    }
+
+    /// Store an InstantLock in the cache.
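+    ///
+    /// When the cache exceeds `MAX_CACHE_SIZE`, the entry with the oldest
+    /// `received_at` timestamp is evicted.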
+    fn store_instantlock(&mut self, txid: Txid, entry: InstantLockEntry) {
+        self.instantlocks.insert(txid, entry);
+
+        // Enforce cache limit by removing oldest
+        if self.instantlocks.len() > MAX_CACHE_SIZE {
+            let oldest =
+                self.instantlocks.iter().min_by_key(|(_, e)| e.received_at).map(|(k, _)| *k);
+            if let Some(key) = oldest {
+                self.instantlocks.remove(&key);
+            }
+        }
+    }
+
+    /// Validate pending InstantLocks after masternode engine becomes available.
+    pub(super) async fn validate_pending(&mut self) -> SyncResult<Vec<SyncEvent>> {
+        let pending = std::mem::take(&mut self.pending_instantlocks);
+        let mut events = Vec::new();
+
+        for mut pending_lock in pending {
+            pending_lock.retry_count += 1;
+            let txid = pending_lock.instant_lock.txid;
+
+            // Check if max retries exceeded
+            if pending_lock.retry_count > MAX_RETRIES {
+                tracing::warn!(
+                    "Dropping InstantLock for txid {} after {} retries",
+                    txid,
+                    pending_lock.retry_count
+                );
+                self.progress.add_invalid(1);
+                continue;
+            }
+
+            let validated = self.validate_signature(&pending_lock.instant_lock).await;
+
+            if validated {
+                self.progress.add_valid(1);
+                // Update the cached entry
+                if let Some(entry) = self.instantlocks.get_mut(&txid) {
+                    entry.validated = true;
+                }
+                events.push(SyncEvent::InstantLockReceived {
+                    instant_lock: pending_lock.instant_lock.clone(),
+                    validated: true,
+                });
+            } else {
+                // Still can't validate, re-queue
+                self.queue_pending(pending_lock);
+            }
+        }
+
+        self.progress.update_pending(self.pending_instantlocks.len());
+        Ok(events)
+    }
+
+    /// Prune old entries from the cache.
+    pub(super) fn prune_old_entries(&mut self) {
+        let now = SystemTime::now();
+        self.instantlocks.retain(|_, entry| {
+            now.duration_since(entry.received_at).map(|d| d < CACHE_TTL).unwrap_or(true)
+        });
+    }
+
+    /// Get an InstantLock by transaction ID.
+    pub fn get_instantlock(&self, txid: &Txid) -> Option<&InstantLockEntry> {
+        self.instantlocks.get(txid)
+    }
+
+    /// Check if a transaction has a validated InstantLock.
+    pub fn is_transaction_locked(&self, txid: &Txid) -> bool {
+        self.instantlocks.get(txid).map(|e| e.validated).unwrap_or(false)
+    }
+
+    /// Get the number of pending InstantLocks awaiting validation.
+    pub fn pending_count(&self) -> usize {
+        self.pending_instantlocks.len()
+    }
+
+    /// Get the number of cached InstantLocks.
+    pub fn cached_count(&self) -> usize {
+        self.instantlocks.len()
+    }
+}
+
+impl std::fmt::Debug for InstantSendManager {
+    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+        f.debug_struct("InstantSendManager")
+            .field("progress", &self.progress)
+            .field("cached", &self.instantlocks.len())
+            .field("pending", &self.pending_instantlocks.len())
+            .finish()
+    }
+}
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::network::MessageType;
+    use crate::sync::{ManagerIdentifier, SyncManager, SyncManagerProgress, SyncState};
+    use dashcore::bls_sig_utils::BLSSignature;
+    use dashcore::hash_types::CycleHash;
+    use dashcore::hashes::Hash;
+    use dashcore::OutPoint;
+
+    fn create_test_instantlock(txid: Txid) -> InstantLock {
+        InstantLock {
+            version: 1,
+            inputs: vec![OutPoint::default()],
+            txid,
+            cyclehash: CycleHash::all_zeros(),
+            signature: BLSSignature::from([1u8; 96]), // Non-zero signature
+        }
+    }
+
+    fn create_test_manager() -> InstantSendManager {
+        let engine = Arc::new(RwLock::new(MasternodeListEngine::default_for_network(
+            dashcore::Network::Testnet,
+        )));
+        InstantSendManager::new(engine)
+    }
+
+    #[tokio::test]
+    async fn test_instantsend_manager_new() {
+        let manager = create_test_manager();
+        assert_eq!(manager.identifier(), ManagerIdentifier::InstantSend);
+        assert_eq!(manager.state(), SyncState::Initializing);
+        assert_eq!(manager.wanted_message_types(), vec![MessageType::ISLock, MessageType::Inv]);
+    }
+
+    #[tokio::test]
+    async fn test_instantsend_duplicate_handling() {
+        let mut manager = create_test_manager();
+
+        let txid = Txid::from_byte_array([1u8; 32]);
+        let islock1 = create_test_instantlock(txid);
+        let islock2 = create_test_instantlock(txid);
+
+        // First should process
+        let events1 = manager.process_instantlock(&islock1).await.unwrap();
+        assert_eq!(events1.len(), 1);
+
+        // Second should be ignored as duplicate
+        let events2 = manager.process_instantlock(&islock2).await.unwrap();
+        assert_eq!(events2.len(), 0);
+    }
+
+    #[tokio::test]
+    async fn test_instantsend_pending_queue() {
+        let mut manager = create_test_manager();
+
+        // Without masternode engine, InstantLocks should be queued
+        let txid = Txid::from_byte_array([1u8; 32]);
+        let islock = create_test_instantlock(txid);
+        let _ = manager.process_instantlock(&islock).await.unwrap();
+
+        assert_eq!(manager.pending_count(), 1);
+    }
+
+    #[tokio::test]
+    async fn test_instantsend_structural_validation() {
+        let manager = create_test_manager();
+
+        // Valid structure
+        let txid = Txid::from_byte_array([1u8; 32]);
+        let valid = create_test_instantlock(txid);
+        assert!(manager.validate_structure(&valid));
+
+        // Empty inputs
+        let mut invalid = create_test_instantlock(txid);
+        invalid.inputs = vec![];
+        assert!(!manager.validate_structure(&invalid));
+
+        // Null txid
+        let invalid_txid = InstantLock {
+            version: 1,
+            inputs: vec![OutPoint::default()],
+            txid: Txid::all_zeros(),
+            cyclehash: CycleHash::all_zeros(),
+            signature: BLSSignature::from([1u8; 96]),
+        };
+        assert!(!manager.validate_structure(&invalid_txid));
+
+        // Zeroed signature
+        let invalid_sig = InstantLock {
+            version: 1,
+            inputs: vec![OutPoint::default()],
+            txid: Txid::from_byte_array([1u8; 32]),
+            cyclehash: CycleHash::all_zeros(),
+            signature: BLSSignature::from([0u8; 96]),
+        };
+        assert!(!manager.validate_structure(&invalid_sig));
+    }
+
+    #[tokio::test]
+    async fn test_instantsend_progress() {
+        let mut manager = create_test_manager();
+        manager.set_state(SyncState::Syncing);
+        manager.progress.update_pending(2);
+        manager.progress.add_valid(8);
+        manager.progress.add_invalid(2);
+
+        let progress = manager.progress();
+        if let SyncManagerProgress::InstantSend(progress) = progress {
+            assert_eq!(progress.state(), SyncState::Syncing);
+            assert_eq!(progress.valid(), 8);
+            assert_eq!(progress.invalid(), 2);
+            assert_eq!(progress.pending(), 2);
+            assert!(progress.last_activity().elapsed().as_secs() < 1);
+        } else {
+            panic!("Expected SyncManagerProgress::InstantSend");
+        }
+    }
+
+    #[tokio::test]
+    async fn test_instantsend_accessors() {
+        let mut manager = create_test_manager();
+
+        let txid = Txid::from_byte_array([1u8; 32]);
+        let islock = create_test_instantlock(txid);
+        let _ = manager.process_instantlock(&islock).await.unwrap();
+
+        // Should be retrievable by txid
+        assert!(manager.get_instantlock(&txid).is_some());
+
+        // Unknown txid
+        let unknown = Txid::from_byte_array([2u8; 32]);
+        assert!(manager.get_instantlock(&unknown).is_none());
+    }
+
+    #[tokio::test]
+    async fn test_instantsend_cache_limit() {
+        let mut manager = create_test_manager();
+
+        // Add more than MAX_CACHE_SIZE instantlocks
+        for i in 0..MAX_CACHE_SIZE + 10 {
+            let mut bytes = [0u8; 32];
+            bytes[0..4].copy_from_slice(&(i as u32).to_le_bytes());
+            let txid = Txid::from_byte_array(bytes);
+            let islock = create_test_instantlock(txid);
+            let _ = manager.process_instantlock(&islock).await.unwrap();
+        }
+
+        // Should be capped at MAX_CACHE_SIZE
+        assert!(manager.cached_count() <= MAX_CACHE_SIZE);
+    }
+}
diff --git a/dash-spv/src/sync/instantsend/mod.rs b/dash-spv/src/sync/instantsend/mod.rs
new file mode 100644
index 000000000..80c3f59c7
--- /dev/null
+++ b/dash-spv/src/sync/instantsend/mod.rs
@@ -0,0 +1,6 @@
+mod manager;
+mod progress;
+mod sync_manager;
+
+pub use manager::InstantSendManager;
+pub use progress::InstantSendProgress;
diff --git a/dash-spv/src/sync/instantsend/progress.rs b/dash-spv/src/sync/instantsend/progress.rs
new file mode 100644
index 000000000..b113a764e
--- /dev/null
+++ b/dash-spv/src/sync/instantsend/progress.rs
@@ -0,0 +1,91 @@
+use crate::sync::SyncState;
+use std::fmt;
+use std::time::Instant;
+
+/// Progress for InstantSend synchronization.
+#[derive(Debug, Clone, PartialEq)]
+pub struct InstantSendProgress {
+    /// Current sync state.
+    state: SyncState,
+    /// Number of InstantSend locks pending validation.
+    pending: usize,
+    /// Number of InstantSend locks successfully verified.
+    valid: u32,
+    /// Number of InstantSend locks dropped after max retries (couldn't be validated).
+    invalid: u32,
+    /// The last time an InstantLock was processed or the last manager state change.
+    last_activity: Instant,
+}
+
+impl Default for InstantSendProgress {
+    fn default() -> Self {
+        Self {
+            state: Default::default(),
+            pending: 0,
+            valid: 0,
+            invalid: 0,
+            last_activity: Instant::now(),
+        }
+    }
+}
+
+impl InstantSendProgress {
+    /// Get the current sync state.
+    pub fn state(&self) -> SyncState {
+        self.state
+    }
+    /// Number of InstantSend locks pending validation.
+    pub fn pending(&self) -> usize {
+        self.pending
+    }
+    /// Number of InstantSend locks successfully verified.
+    pub fn valid(&self) -> u32 {
+        self.valid
+    }
+    /// Number of InstantSend locks dropped after max retries (couldn't be validated).
+    pub fn invalid(&self) -> u32 {
+        self.invalid
+    }
+    /// The last time an InstantLock was processed or the last manager state change.
+    pub fn last_activity(&self) -> Instant {
+        self.last_activity
+    }
+    /// Update the sync state and bump the last activity time.
+    pub fn set_state(&mut self, state: SyncState) {
+        self.state = state;
+        self.bump_last_activity();
+    }
+    /// Update the number of pending InstantSend locks.
+    pub fn update_pending(&mut self, count: usize) {
+        self.pending = count;
+        self.bump_last_activity();
+    }
+    /// Add a number to the valid counter.
+    pub fn add_valid(&mut self, count: u32) {
+        self.valid += count;
+        self.bump_last_activity();
+    }
+    /// Add a number to the invalid counter.
+    pub fn add_invalid(&mut self, count: u32) {
+        self.invalid += count;
+        self.bump_last_activity();
+    }
+    /// Bump the last activity time.
+    pub fn bump_last_activity(&mut self) {
+        self.last_activity = Instant::now();
+    }
+}
+
+impl fmt::Display for InstantSendProgress {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+        write!(
+            f,
+            "{:?} valid: {}, invalid: {}, pending: {}, last_activity: {}s",
+            self.state,
+            self.valid,
+            self.invalid,
+            self.pending,
+            self.last_activity.elapsed().as_secs()
+        )
+    }
+}
diff --git a/dash-spv/src/sync/instantsend/sync_manager.rs b/dash-spv/src/sync/instantsend/sync_manager.rs
new file mode 100644
index 000000000..4636f630b
--- /dev/null
+++ b/dash-spv/src/sync/instantsend/sync_manager.rs
@@ -0,0 +1,100 @@
+use crate::error::SyncResult;
+use crate::network::{Message, MessageType, RequestSender};
+use crate::sync::{
+    InstantSendManager, ManagerIdentifier, SyncEvent, SyncManager, SyncManagerProgress, SyncState,
+};
+use async_trait::async_trait;
+use dashcore::network::message::NetworkMessage;
+use dashcore::network::message_blockdata::Inventory;
+
+#[async_trait]
+impl SyncManager for InstantSendManager {
+    fn identifier(&self) -> ManagerIdentifier {
+        ManagerIdentifier::InstantSend
+    }
+
+    fn state(&self) -> SyncState {
+        self.progress.state()
+    }
+
+    fn set_state(&mut self, state: SyncState) {
+        self.progress.set_state(state);
+    }
+
+    fn wanted_message_types(&self) -> &'static [MessageType] {
+        &[MessageType::ISLock, MessageType::Inv]
+    }
+
+    async fn handle_message(
+        &mut self,
+        msg: Message,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        match msg.inner() {
+            NetworkMessage::ISLock(instantlock) => self.process_instantlock(instantlock).await,
+            NetworkMessage::Inv(inv) => {
+                // Check for InstantSendLock inventory items
+                let islocks_to_request: Vec<Inventory> = inv
+                    .iter()
+                    .filter(|item| matches!(item, Inventory::InstantSendLock(_)))
+                    .cloned()
+                    .collect();
+
+                if !islocks_to_request.is_empty() {
+                    tracing::info!(
+                        "Received {} InstantSendLock announcements, requesting via getdata",
+                        islocks_to_request.len()
+                    );
+                    requests.request_inventory(islocks_to_request)?;
+                }
+                Ok(vec![])
+            }
+            _ => Ok(vec![]),
+        }
+    }
+
+    async fn handle_sync_event(
+        &mut self,
+        event: &SyncEvent,
+        _requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        // Validate pending InstantLocks when masternode state is updated
+        if let SyncEvent::MasternodeStateUpdated {
+            ..
+        } = event
+        {
+            let pending = self.pending_count();
+            let events = if pending > 0 {
+                tracing::info!(
+                    "Masternode state updated, validating {} pending InstantLocks",
+                    pending
+                );
+                self.validate_pending().await?
+            } else {
+                vec![]
+            };
+
+            // Transition to Synced when no pending validations after masternode sync
+            if self.pending_count() == 0
+                && matches!(self.state(), SyncState::Syncing | SyncState::WaitForEvents)
+            {
+                self.set_state(SyncState::Synced);
+                tracing::info!("InstantSend manager synced (no pending validations)");
+            }
+
+            return Ok(events);
+        }
+
+        Ok(vec![])
+    }
+
+    async fn tick(&mut self, _requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        // Prune old entries periodically
+        self.prune_old_entries();
+        Ok(vec![])
+    }
+
+    fn progress(&self) -> SyncManagerProgress {
+        SyncManagerProgress::InstantSend(self.progress.clone())
+    }
+}
diff --git a/dash-spv/src/sync/masternodes/manager.rs b/dash-spv/src/sync/masternodes/manager.rs
new file mode 100644
index 000000000..8ecbfa013
--- /dev/null
+++ b/dash-spv/src/sync/masternodes/manager.rs
@@ -0,0 +1,258 @@
+//! Masternode manager for parallel sync.
+//!
+//! Handles masternode list synchronization via QRInfo and MnListDiff messages.
+//! Subscribes to BlockHeaderSyncComplete events to start sync after headers are caught up.
+//! Emits MasternodeStateUpdated events.
+
+use std::sync::Arc;
+use std::time::Instant;
+
+use dashcore::network::constants::NetworkExt;
+use dashcore::sml::masternode_list_engine::MasternodeListEngine;
+use tokio::sync::RwLock;
+
+use super::pipeline::MnListDiffPipeline;
+use crate::error::{SyncError, SyncResult};
+use crate::network::RequestSender;
+use crate::storage::BlockHeaderStorage;
+use crate::sync::{MasternodesProgress, SyncEvent, SyncManager, SyncState};
+use dashcore::BlockHash;
+use std::collections::{BTreeSet, HashSet};
+
+/// Sync state for masternode list synchronization.
+#[derive(Debug, Default)]
+pub(super) struct MasternodeSyncState {
+    /// Block hashes for which we have received MnListDiffs.
+    pub(super) known_block_hashes: HashSet<BlockHash>,
+    /// Heights where the engine has masternode lists (for chaining diffs).
+    pub(super) known_mn_list_heights: BTreeSet<u32>,
+    /// Last successfully processed QRInfo block hash (for progressive sync).
+    pub(super) last_qrinfo_block_hash: Option<BlockHash>,
+    /// Pipeline for MnListDiff requests.
+    pub(super) mnlistdiff_pipeline: MnListDiffPipeline,
+    /// Whether we are waiting for a QRInfo response.
+    pub(super) waiting_for_qrinfo: bool,
+    /// When we started waiting for QRInfo response.
+    pub(super) qrinfo_wait_start: Option<Instant>,
+    /// Current retry count for QRInfo.
+    pub(super) qrinfo_retry_count: u8,
+    /// When to retry after a ChainLock unavailability error.
+    /// The QRInfo response includes the current tip which may not have ChainLock yet.
+    pub(super) chainlock_retry_after: Option<Instant>,
+}
+
+impl MasternodeSyncState {
+    fn new() -> Self {
+        Self::default()
+    }
+
+    pub(super) fn has_pending_requests(&self) -> bool {
+        !self.mnlistdiff_pipeline.is_complete() || self.waiting_for_qrinfo
+    }
+
+    pub(super) fn clear_pending(&mut self) {
+        self.mnlistdiff_pipeline.clear();
+        self.waiting_for_qrinfo = false;
+        self.qrinfo_wait_start = None;
+    }
+
+    fn start_waiting_for_qrinfo(&mut self) {
+        self.waiting_for_qrinfo = true;
+        self.qrinfo_wait_start = Some(Instant::now());
+    }
+
+    pub(super) fn qrinfo_received(&mut self) {
+        self.waiting_for_qrinfo = false;
+        self.qrinfo_wait_start = None;
+    }
+}
+
+/// Masternode manager for synchronizing masternode lists.
+/// +/// This manager: +/// - Waits for BlockHeaderSyncComplete event before starting sync +/// - Handles QRInfo and MnListDiff messages +/// - Verifies quorums +/// - Emits MasternodeStateUpdated events +/// +/// Generic over `H: BlockHeaderStorage` to allow different storage implementations. +pub struct MasternodesManager { + /// Current progress of the manager. + pub(super) progress: MasternodesProgress, + /// Block header storage (for height lookups). + pub(super) header_storage: Arc>, + /// Shared Masternode list engine. + pub(super) engine: Arc>, + /// Network type for genesis hash. + network: dashcore::Network, + /// Sync state tracking. + pub(super) sync_state: MasternodeSyncState, +} + +impl MasternodesManager { + /// Create a new masternode manager with the given header storage. + pub fn new( + header_storage: Arc>, + engine: Arc>, + network: dashcore::Network, + ) -> Self { + Self { + progress: MasternodesProgress::default(), + header_storage, + engine, + network, + sync_state: MasternodeSyncState::new(), + } + } + + /// Send QRInfo request for the current tip. + /// + /// Called when BlockHeaderSyncComplete is received, ensuring we have all headers. + pub(super) async fn send_qrinfo_for_tip( + &mut self, + requests: &RequestSender, + ) -> SyncResult> { + // Get info from storage + let (tip_height, tip_block_hash) = { + let storage = self.header_storage.read().await; + match storage.get_tip().await { + Some(tip) => (tip.height(), *tip.hash()), + None => { + tracing::warn!("MasternodesManager: No headers available for QRInfo request"); + return Ok(vec![]); + } + } + }; + + if tip_height == 0 { + tracing::info!("MasternodesManager: At genesis, nothing to sync"); + return Ok(vec![]); + } + + // Only transition to Syncing if not already Synced (incremental updates stay Synced) + if self.state() != SyncState::Synced { + self.set_state(SyncState::Syncing); + } + + // Build known hashes from tracked block hashes + let mut known_hashes: Vec<_> = self.sync_state.known_block_hashes.iter().copied().collect(); + + // Add base hash + let base_hash = self + .sync_state + .last_qrinfo_block_hash + .or_else(|| self.network.known_genesis_block_hash()); + if let Some(hash) = base_hash { + known_hashes.push(hash); + } + + // Send QRInfo request for the tip + // Note: The server's response includes `mn_list_diff_tip` which is always the current tip, + // regardless of the requested block. If the tip was just mined and doesn't have a ChainLock + // yet, we'll retry after a delay. + tracing::info!("Requesting QRInfo for tip at height {}", tip_height); + requests.request_qr_info(known_hashes, tip_block_hash, true)?; + + self.sync_state.start_waiting_for_qrinfo(); + + Ok(vec![]) + } + + /// Verify quorums and mark complete. + /// + /// For initial sync (state == Syncing), emits MasternodeStateUpdated and logs completion. + /// For incremental updates (state == Synced), updates quietly without events. 
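The known-hashes list assembled in `send_qrinfo_for_tip` above anchors the QRInfo request: previously tracked block hashes plus a base, which is the last processed QRInfo hash for progressive sync or the network's genesis hash on first sync. A minimal, self-contained sketch of that selection logic (type names here are stand-ins, not the crate's own):

```rust
use std::collections::HashSet;

// Stand-in for the real BlockHash type; illustration only.
type BlockHash = [u8; 32];

/// Pick the QRInfo baseline: the last processed QRInfo hash when doing a
/// progressive sync, otherwise fall back to the genesis hash (full sync).
fn qrinfo_known_hashes(
    known_block_hashes: &HashSet<BlockHash>,
    last_qrinfo_block_hash: Option<BlockHash>,
    genesis: Option<BlockHash>,
) -> Vec<BlockHash> {
    let mut known: Vec<BlockHash> = known_block_hashes.iter().copied().collect();
    if let Some(base) = last_qrinfo_block_hash.or(genesis) {
        known.push(base);
    }
    known
}

fn main() {
    let tracked: HashSet<BlockHash> = [[1u8; 32], [2u8; 32]].into_iter().collect();
    // First sync: no prior QRInfo, so the genesis hash becomes the base.
    let hashes = qrinfo_known_hashes(&tracked, None, Some([0u8; 32]));
    assert_eq!(hashes.len(), 3);
}
```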
+ pub(super) async fn verify_and_complete(&mut self) -> SyncResult> { + let mut events = Vec::new(); + let is_initial_sync = self.state() == SyncState::Syncing; + + let mut engine = self.engine.write().await; + + // Get the latest height from the engine and verify at that height + if let Some(&height) = engine.masternode_lists.keys().last() { + if let Err(e) = engine.verify_non_rotating_masternode_list_quorums(height, &[]) { + drop(engine); + self.set_state(SyncState::Error); + return Err(SyncError::MasternodeSyncFailed(format!( + "Quorum verification failed at height {}: {}", + height, e + ))); + } + + tracing::info!("Non-rotating quorum verification completed at height {}", height); + + self.progress.update_current_height(height); + + events.push(SyncEvent::MasternodeStateUpdated { + height, + }); + } else if is_initial_sync { + drop(engine); + self.set_state(SyncState::Error); + return Err(SyncError::MasternodeSyncFailed("No masternode lists available".into())); + } + + drop(engine); + + if is_initial_sync { + self.set_state(SyncState::Synced); + tracing::info!("Masternode sync complete at height {}", self.progress.current_height()); + } + + Ok(events) + } +} + +impl std::fmt::Debug for MasternodesManager { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + f.debug_struct("MasternodesManager").field("progress", &self.progress).finish() + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::network::MessageType; + use crate::storage::{DiskStorageManager, PersistentBlockHeaderStorage}; + use crate::sync::sync_manager::SyncManager; + use crate::sync::{ManagerIdentifier, SyncManagerProgress}; + + type TestMasternodesManager = MasternodesManager; + + async fn create_test_manager() -> TestMasternodesManager { + let storage = DiskStorageManager::with_temp_dir().await.unwrap(); + let engine = Arc::new(RwLock::new(MasternodeListEngine::default_for_network( + dashcore::Network::Testnet, + ))); + MasternodesManager::new(storage.header_storage(), engine, dashcore::Network::Testnet) + } + + #[tokio::test] + async fn test_masternode_manager_new() { + let manager = create_test_manager().await; + assert_eq!(manager.identifier(), ManagerIdentifier::Masternode); + assert_eq!(manager.state(), SyncState::Initializing); + assert_eq!( + manager.wanted_message_types(), + vec![MessageType::MnListDiff, MessageType::QRInfo] + ); + } + + #[tokio::test] + async fn test_masternode_manager_progress() { + let mut manager = create_test_manager().await; + manager.progress.update_current_height(500); + manager.progress.update_target_height(1000); + manager.progress.add_diffs_processed(10); + + let progress = manager.progress(); + if let SyncManagerProgress::Masternodes(progress) = progress { + assert_eq!(progress.current_height(), 500); + assert_eq!(progress.target_height(), 1000); + assert_eq!(progress.diffs_processed(), 10); + assert!(progress.last_activity().elapsed().as_secs() < 1); + } else { + panic!("Expected SyncManagerProgress::Masternodes"); + } + } +} diff --git a/dash-spv/src/sync/masternodes/mod.rs b/dash-spv/src/sync/masternodes/mod.rs new file mode 100644 index 000000000..8237579ba --- /dev/null +++ b/dash-spv/src/sync/masternodes/mod.rs @@ -0,0 +1,7 @@ +mod manager; +mod pipeline; +mod progress; +mod sync_manager; + +pub use manager::MasternodesManager; +pub use progress::MasternodesProgress; diff --git a/dash-spv/src/sync/masternodes/pipeline.rs b/dash-spv/src/sync/masternodes/pipeline.rs new file mode 100644 index 000000000..63c4b6493 --- /dev/null +++ 
b/dash-spv/src/sync/masternodes/pipeline.rs
@@ -0,0 +1,417 @@
+//! MnListDiff pipeline implementation.
+//!
+//! Handles pipelined download of MnListDiff messages for quorum validation.
+//! Uses DownloadCoordinator for request tracking with timeout and retry logic.
+
+use std::collections::HashMap;
+use std::time::Duration;
+
+use crate::error::SyncResult;
+use crate::network::RequestSender;
+use crate::sync::download_coordinator::{DownloadConfig, DownloadCoordinator};
+use dashcore::network::message_sml::MnListDiff;
+use dashcore::BlockHash;
+
+/// Maximum concurrent MnListDiff requests.
+const MAX_CONCURRENT_MNLISTDIFF: usize = 20;
+
+/// Timeout for MnListDiff requests.
+const MNLISTDIFF_TIMEOUT: Duration = Duration::from_secs(15);
+
+/// Maximum number of retries for MnListDiff requests.
+const MNLISTDIFF_MAX_RETRIES: u32 = 3;
+
+/// Pipeline for downloading MnListDiff messages for quorum validation.
+///
+/// Uses `DownloadCoordinator` for request tracking (keyed by target block_hash),
+/// with a HashMap to store the base hash for each request.
+#[derive(Debug)]
+pub(super) struct MnListDiffPipeline {
+    /// Core coordinator tracks requests by target block_hash.
+    coordinator: DownloadCoordinator<BlockHash>,
+    /// Maps target_hash -> base_hash for each request.
+    base_hashes: HashMap<BlockHash, BlockHash>,
+}
+
+impl Default for MnListDiffPipeline {
+    fn default() -> Self {
+        Self::new()
+    }
+}
+
+impl MnListDiffPipeline {
+    /// Create a new MnListDiff pipeline.
+    pub(super) fn new() -> Self {
+        Self {
+            coordinator: DownloadCoordinator::new(
+                DownloadConfig::default()
+                    .with_max_concurrent(MAX_CONCURRENT_MNLISTDIFF)
+                    .with_timeout(MNLISTDIFF_TIMEOUT)
+                    .with_max_retries(MNLISTDIFF_MAX_RETRIES),
+            ),
+            base_hashes: HashMap::new(),
+        }
+    }
+
+    /// Clear all state.
+    pub(super) fn clear(&mut self) {
+        self.coordinator.clear();
+        self.base_hashes.clear();
+    }
+
+    /// Queue MnListDiff requests.
+    ///
+    /// Each request is a (base_hash, target_hash) pair.
+    pub(super) fn queue_requests(&mut self, requests: Vec<(BlockHash, BlockHash)>) {
+        for (base_hash, target_hash) in requests {
+            self.coordinator.enqueue([target_hash]);
+            self.base_hashes.insert(target_hash, base_hash);
+        }
+
+        if !self.base_hashes.is_empty() {
+            tracing::info!("Queued {} MnListDiff requests", self.base_hashes.len());
+        }
+    }
+
+    /// Send as many pending requests as the concurrency limit allows.
+    pub(super) fn send_pending(&mut self, requests: &RequestSender) -> SyncResult<()> {
+        let count = self.coordinator.available_to_send();
+        if count == 0 {
+            return Ok(());
+        }
+
+        let target_hashes = self.coordinator.take_pending(count);
+
+        for target_hash in target_hashes {
+            let Some(&base_hash) = self.base_hashes.get(&target_hash) else {
+                tracing::warn!("Missing base hash for target {}, skipping", target_hash);
+                continue;
+            };
+
+            requests.request_mnlist_diff(base_hash, target_hash)?;
+            self.coordinator.mark_sent(&[target_hash]);
+
+            tracing::debug!(
+                "Sent GetMnListDiff: base={}, target={} ({} active, {} pending)",
+                base_hash,
+                target_hash,
+                self.coordinator.active_count(),
+                self.coordinator.pending_count()
+            );
+        }
+
+        Ok(())
+    }
+
+    /// Check if response matches an in-flight request.
+    pub(super) fn match_response(&self, diff: &MnListDiff) -> bool {
+        self.coordinator.is_in_flight(&diff.block_hash)
+    }
+
+    /// Receive a MnListDiff response.
+    ///
+    /// Returns true if the diff was expected, false if unexpected.
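`DownloadCoordinator` itself is defined elsewhere in this change (`download_coordinator.rs`); as a mental model for the queue → send → receive cycle the pipeline drives above and below, here is a simplified, self-contained sketch of a bounded in-flight window (all types hypothetical, not the real coordinator):

```rust
use std::collections::{HashSet, VecDeque};

/// Toy model of a bounded-concurrency download window.
struct Window<K: std::hash::Hash + Eq + Copy> {
    pending: VecDeque<K>,
    in_flight: HashSet<K>,
    max_concurrent: usize,
}

impl<K: std::hash::Hash + Eq + Copy> Window<K> {
    fn new(max_concurrent: usize) -> Self {
        Self { pending: VecDeque::new(), in_flight: HashSet::new(), max_concurrent }
    }
    /// Queue a request key (a target block hash in the real pipeline).
    fn enqueue(&mut self, key: K) {
        self.pending.push_back(key);
    }
    /// Move up to `max_concurrent - in_flight` keys from pending to in-flight.
    fn take_sendable(&mut self) -> Vec<K> {
        let available = self.max_concurrent.saturating_sub(self.in_flight.len());
        let batch: Vec<K> = (0..available).filter_map(|_| self.pending.pop_front()).collect();
        self.in_flight.extend(batch.iter().copied());
        batch
    }
    /// Mark a response as received; false means it was never requested.
    fn receive(&mut self, key: &K) -> bool {
        self.in_flight.remove(key)
    }
    fn is_complete(&self) -> bool {
        self.pending.is_empty() && self.in_flight.is_empty()
    }
}

fn main() {
    let mut w = Window::new(2);
    for k in 0u32..5 {
        w.enqueue(k);
    }
    assert_eq!(w.take_sendable(), vec![0, 1]); // window caps in-flight at 2
    assert!(w.receive(&0));
    assert_eq!(w.take_sendable(), vec![2]); // one slot freed, one more sent
    assert!(!w.is_complete());
}
```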
+    pub(super) fn receive(&mut self, diff: &MnListDiff) -> bool {
+        let target_hash = diff.block_hash;
+
+        if !self.coordinator.receive(&target_hash) {
+            return false;
+        }
+
+        self.base_hashes.remove(&target_hash);
+
+        tracing::debug!(
+            "Received MnListDiff for {} ({} remaining)",
+            target_hash,
+            self.coordinator.remaining()
+        );
+
+        true
+    }
+
+    /// Requeue a received MnListDiff for retry.
+    ///
+    /// Removes from in-flight tracking and pushes back to the front of the
+    /// pending queue. Returns `true` if successfully requeued, `false` if
+    /// max retries were exceeded (in which case the request is dropped).
+    pub(super) fn requeue(&mut self, diff: &MnListDiff) -> bool {
+        let target_hash = diff.block_hash;
+
+        // Remove from in-flight
+        self.coordinator.receive(&target_hash);
+
+        // Re-enqueue for retry
+        if self.coordinator.enqueue_retry(target_hash) {
+            tracing::debug!("Requeued MnListDiff for {} for retry", diff.block_hash);
+            true
+        } else {
+            tracing::warn!("MnListDiff for {} exceeded max retries, dropping", diff.block_hash);
+            self.base_hashes.remove(&target_hash);
+            false
+        }
+    }
+
+    /// Handle timeouts, re-queuing timed-out requests for retry.
+    ///
+    /// Requests that exceed max retries are dropped and their base-hash
+    /// mappings removed.
+    pub(super) fn handle_timeouts(&mut self) {
+        for target_hash in self.coordinator.check_timeouts() {
+            if !self.coordinator.enqueue_retry(target_hash) {
+                tracing::warn!(
+                    "MnListDiff request for {} exceeded max retries, dropping",
+                    target_hash
+                );
+                self.base_hashes.remove(&target_hash);
+            }
+        }
+    }
+
+    /// Check if pipeline has no pending work.
+    pub(super) fn is_complete(&self) -> bool {
+        self.coordinator.is_empty()
+    }
+
+    /// Get the number of in-flight requests.
+    pub(super) fn active_count(&self) -> usize {
+        self.coordinator.active_count()
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use dashcore::transaction::{OutPoint, Transaction};
+    use dashcore::{ScriptBuf, TxIn, TxOut, Witness};
+    use dashcore_hashes::Hash;
+
+    use super::*;
+
+    /// Create a minimal MnListDiff for testing.
+ fn create_test_diff(base_hash: BlockHash, target_hash: BlockHash) -> MnListDiff { + // Create a minimal coinbase transaction + let coinbase_tx = Transaction { + version: 1, + lock_time: 0, + input: vec![TxIn { + previous_output: OutPoint::null(), + script_sig: ScriptBuf::new(), + sequence: 0xffffffff, + witness: Witness::new(), + }], + output: vec![TxOut { + value: 0, + script_pubkey: ScriptBuf::new(), + }], + special_transaction_payload: None, + }; + + MnListDiff { + version: 1, + base_block_hash: base_hash, + block_hash: target_hash, + total_transactions: 1, + merkle_hashes: vec![], + merkle_flags: vec![], + coinbase_tx, + deleted_masternodes: vec![], + new_masternodes: vec![], + deleted_quorums: vec![], + new_quorums: vec![], + quorums_chainlock_signatures: vec![], + } + } + + #[test] + fn test_pipeline_new() { + let pipeline = MnListDiffPipeline::new(); + assert!(pipeline.is_complete()); + assert_eq!(pipeline.active_count(), 0); + } + + #[test] + fn test_queue_requests() { + let mut pipeline = MnListDiffPipeline::new(); + + let base1 = BlockHash::from_byte_array([0x01; 32]); + let target1 = BlockHash::from_byte_array([0x02; 32]); + let base2 = BlockHash::from_byte_array([0x03; 32]); + let target2 = BlockHash::from_byte_array([0x04; 32]); + + pipeline.queue_requests(vec![(base1, target1), (base2, target2)]); + + assert!(!pipeline.is_complete()); + assert_eq!(pipeline.coordinator.pending_count(), 2); + assert_eq!(pipeline.base_hashes.len(), 2); + assert_eq!(pipeline.base_hashes.get(&target1), Some(&base1)); + assert_eq!(pipeline.base_hashes.get(&target2), Some(&base2)); + } + + #[test] + fn test_match_response() { + let mut pipeline = MnListDiffPipeline::new(); + + let base = BlockHash::from_byte_array([0x01; 32]); + let target = BlockHash::from_byte_array([0x02; 32]); + + pipeline.queue_requests(vec![(base, target)]); + + // Take and mark as sent + let items = pipeline.coordinator.take_pending(1); + pipeline.coordinator.mark_sent(&items); + + // Create a test diff + let diff = create_test_diff(base, target); + assert!(pipeline.match_response(&diff)); + + // Unknown hash should not match + let unknown_diff = create_test_diff(base, BlockHash::from_byte_array([0xFF; 32])); + assert!(!pipeline.match_response(&unknown_diff)); + } + + #[test] + fn test_receive() { + let mut pipeline = MnListDiffPipeline::new(); + + let base = BlockHash::from_byte_array([0x01; 32]); + let target = BlockHash::from_byte_array([0x02; 32]); + + pipeline.queue_requests(vec![(base, target)]); + + // Take and mark as sent + let items = pipeline.coordinator.take_pending(1); + pipeline.coordinator.mark_sent(&items); + + let diff = create_test_diff(base, target); + assert!(pipeline.receive(&diff)); + assert!(pipeline.is_complete()); + assert!(pipeline.base_hashes.is_empty()); + } + + #[test] + fn test_receive_unexpected() { + let mut pipeline = MnListDiffPipeline::new(); + + let diff = create_test_diff( + BlockHash::from_byte_array([0x01; 32]), + BlockHash::from_byte_array([0x02; 32]), + ); + + // Receiving unexpected diff should return false + assert!(!pipeline.receive(&diff)); + } + + #[test] + fn test_clear() { + let mut pipeline = MnListDiffPipeline::new(); + + let base = BlockHash::from_byte_array([0x01; 32]); + let target = BlockHash::from_byte_array([0x02; 32]); + + pipeline.queue_requests(vec![(base, target)]); + pipeline.clear(); + + assert!(pipeline.is_complete()); + assert!(pipeline.base_hashes.is_empty()); + } + + #[test] + fn test_handle_timeouts() { + use std::time::Duration; + + let mut pipeline = 
MnListDiffPipeline { + coordinator: DownloadCoordinator::new( + DownloadConfig::default() + .with_timeout(Duration::from_millis(1)) + .with_max_retries(0), + ), + base_hashes: HashMap::new(), + }; + + let base = BlockHash::from_byte_array([0x01; 32]); + let target = BlockHash::from_byte_array([0x02; 32]); + + pipeline.base_hashes.insert(target, base); + pipeline.coordinator.mark_sent(&[target]); + + std::thread::sleep(Duration::from_millis(5)); + + pipeline.handle_timeouts(); + assert!(pipeline.base_hashes.is_empty()); + } + + #[test] + fn test_handle_timeouts_with_retry() { + use std::time::Duration; + + let mut pipeline = MnListDiffPipeline { + coordinator: DownloadCoordinator::new( + DownloadConfig::default() + .with_timeout(Duration::from_millis(1)) + .with_max_retries(3), + ), + base_hashes: HashMap::new(), + }; + + let base = BlockHash::from_byte_array([0x01; 32]); + let target = BlockHash::from_byte_array([0x02; 32]); + + pipeline.base_hashes.insert(target, base); + pipeline.coordinator.mark_sent(&[target]); + + std::thread::sleep(Duration::from_millis(5)); + + // First timeout should retry, not fail + pipeline.handle_timeouts(); + assert_eq!(pipeline.coordinator.pending_count(), 1); + assert!(pipeline.base_hashes.contains_key(&target)); + } + + #[test] + fn test_requeue_puts_back_in_pending() { + let mut pipeline = MnListDiffPipeline::new(); + + let base = BlockHash::from_byte_array([0x01; 32]); + let target = BlockHash::from_byte_array([0x02; 32]); + + pipeline.queue_requests(vec![(base, target)]); + + // Take and mark as sent (simulates sending the request) + let items = pipeline.coordinator.take_pending(1); + pipeline.coordinator.mark_sent(&items); + assert_eq!(pipeline.active_count(), 1); + assert_eq!(pipeline.coordinator.pending_count(), 0); + + let diff = create_test_diff(base, target); + + // Requeue should move from in-flight back to pending + assert!(pipeline.requeue(&diff)); + assert_eq!(pipeline.active_count(), 0); + assert_eq!(pipeline.coordinator.pending_count(), 1); + // base_hash mapping should be preserved for the retry + assert!(pipeline.base_hashes.contains_key(&target)); + // Pipeline should not be considered complete + assert!(!pipeline.is_complete()); + } + + #[test] + fn test_requeue_drops_after_max_retries() { + let mut pipeline = MnListDiffPipeline { + coordinator: DownloadCoordinator::new(DownloadConfig::default().with_max_retries(0)), + base_hashes: HashMap::new(), + }; + + let base = BlockHash::from_byte_array([0x01; 32]); + let target = BlockHash::from_byte_array([0x02; 32]); + + pipeline.base_hashes.insert(target, base); + pipeline.coordinator.mark_sent(&[target]); + + let diff = create_test_diff(base, target); + + // With max_retries=0, requeue should fail and clean up + assert!(!pipeline.requeue(&diff)); + assert!(!pipeline.base_hashes.contains_key(&target)); + assert_eq!(pipeline.coordinator.pending_count(), 0); + } +} diff --git a/dash-spv/src/sync/masternodes/progress.rs b/dash-spv/src/sync/masternodes/progress.rs new file mode 100644 index 000000000..aab9b8abf --- /dev/null +++ b/dash-spv/src/sync/masternodes/progress.rs @@ -0,0 +1,114 @@ +use crate::sync::SyncState; +use dashcore::prelude::CoreBlockHeight; +use std::fmt; +use std::time::Instant; + +/// Progress for masternode list synchronization. +#[derive(Debug, Clone, PartialEq)] +pub struct MasternodesProgress { + /// Current sync state. + state: SyncState, + /// The highest block height of a valid masternode list diff. + current_height: u32, + /// Target height (peer's best height). 
Used for progress display. + target_height: u32, + /// The tip height of the block header storage (determines when masternode sync can complete). + block_header_tip_height: u32, + /// Number of mnlistdiffs processed in the current sync session. + diffs_processed: u32, + /// The last time a mnlistdiff was stored/processed or the last manager state change. + last_activity: Instant, +} + +impl Default for MasternodesProgress { + fn default() -> Self { + Self { + state: Default::default(), + current_height: 0, + target_height: 0, + block_header_tip_height: 0, + diffs_processed: 0, + last_activity: Instant::now(), + } + } +} + +impl MasternodesProgress { + pub fn state(&self) -> SyncState { + self.state + } + + pub fn current_height(&self) -> u32 { + self.current_height + } + + /// Get the target height (peer's best height, for progress display). + pub fn target_height(&self) -> u32 { + self.target_height + } + + /// Get the block header tip height (determines when masternode sync can complete). + pub fn block_header_tip_height(&self) -> u32 { + self.block_header_tip_height + } + + /// Number of mnlistdiffs processed in the current sync session. + pub fn diffs_processed(&self) -> u32 { + self.diffs_processed + } + + /// The last time a mnlistdiff was stored/processed or the last manager state change. + pub fn last_activity(&self) -> Instant { + self.last_activity + } + + /// Update the sync state and bump the last activity time. + pub fn set_state(&mut self, state: SyncState) { + self.state = state; + self.bump_last_activity(); + } + + /// Update the current height (last successfully processed height). + pub fn update_current_height(&mut self, height: CoreBlockHeight) { + self.current_height = height; + self.bump_last_activity(); + } + + /// Update the target height (peer's best height, for progress display). + /// Only updates if the new height is greater than the current target (monotonic increase). + pub fn update_target_height(&mut self, height: CoreBlockHeight) { + if height > self.target_height { + self.target_height = height; + self.bump_last_activity(); + } + } + + /// Update the block header tip height (called when new block headers are stored). 
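`update_target_height` above is deliberately monotonic, so a lagging peer's smaller announcement can never shrink the displayed target. The same high-watermark pattern in isolation:

```rust
/// A high-watermark that ignores regressions, like the target height above.
#[derive(Default)]
struct Watermark(u32);

impl Watermark {
    /// Returns true only when the value actually advanced.
    fn update(&mut self, candidate: u32) -> bool {
        if candidate > self.0 {
            self.0 = candidate;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut target = Watermark::default();
    assert!(target.update(1000)); // first announcement
    assert!(!target.update(900)); // a lagging peer cannot lower it
    assert!(target.update(1001)); // a new tip advances it
    assert_eq!(target.0, 1001);
}
```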
+ pub fn update_block_header_tip_height(&mut self, height: CoreBlockHeight) { + self.block_header_tip_height = height; + self.bump_last_activity(); + } + + pub fn add_diffs_processed(&mut self, count: u32) { + self.diffs_processed += count; + self.bump_last_activity(); + } + + pub fn bump_last_activity(&mut self) { + self.last_activity = Instant::now(); + } +} + +impl fmt::Display for MasternodesProgress { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + write!( + f, + "{:?} {}/{} | diffs_processed: {}, last_activity: {}s", + self.state, + self.current_height, + self.target_height, + self.diffs_processed, + self.last_activity.elapsed().as_secs() + ) + } +} diff --git a/dash-spv/src/sync/masternodes/sync_manager.rs b/dash-spv/src/sync/masternodes/sync_manager.rs new file mode 100644 index 000000000..5bd5fea81 --- /dev/null +++ b/dash-spv/src/sync/masternodes/sync_manager.rs @@ -0,0 +1,563 @@ +use crate::error::SyncResult; +use crate::network::{Message, MessageType, RequestSender}; +use crate::storage::BlockHeaderStorage; +use crate::sync::{ + ManagerIdentifier, MasternodesManager, SyncEvent, SyncManager, SyncManagerProgress, SyncState, +}; +use crate::SyncError; +use async_trait::async_trait; +use dashcore::network::message::NetworkMessage; +use dashcore::network::message_qrinfo::QRInfo; +use dashcore::sml::masternode_list_engine::{MasternodeListEngine, WORK_DIFF_DEPTH}; +use dashcore::sml::quorum_validation_error::QuorumValidationError; +use dashcore::{BlockHash, QuorumHash}; +use dashcore_hashes::Hash; +use std::collections::{BTreeSet, HashSet}; +use std::time::{Duration, Instant}; + +/// Timeout duration for waiting for QRInfo response. +const QRINFO_TIMEOUT_SECS: u64 = 15; + +/// Maximum number of retry attempts before giving up. +const MAX_RETRY_ATTEMPTS: u8 = 3; + +/// Delay between retries when ChainLock is not yet available for the tip. +/// ChainLocks typically propagate within a few seconds after a block is mined. +const CHAINLOCK_RETRY_DELAY_SECS: u64 = 5; + +/// Build MnListDiff request pairs (base_hash, target_hash) for quorum validation. 
+/// +/// Chains diffs from known heights where we have masternode lists, per DIP-0004: +/// - Uses all-zeros base for full list requests when no known height exists below target +/// - Finds the nearest known height below the target to use as base +pub(super) async fn build_mnlistdiff_request_pairs( + storage: &S, + quorum_hashes: &BTreeSet, + known_heights: &BTreeSet, +) -> SyncResult> { + let mut request_pairs = Vec::new(); + let mut seen_targets = HashSet::new(); + + for quorum_hash in quorum_hashes { + let quorum_block_hash = *quorum_hash; + + let quorum_height = match storage.get_header_height_by_hash(&quorum_block_hash).await { + Ok(Some(height)) => height, + Ok(None) => { + tracing::warn!("Height not found for quorum hash {}, skipping", quorum_block_hash); + continue; + } + Err(e) => { + tracing::warn!( + "Failed to get height for quorum hash {}: {}, skipping", + quorum_block_hash, + e + ); + continue; + } + }; + + let validation_height = quorum_height.saturating_sub(8); + + // Skip if we already have this height + if known_heights.contains(&validation_height) { + continue; + } + + // Skip duplicates + if seen_targets.contains(&validation_height) { + continue; + } + seen_targets.insert(validation_height); + + // Find nearest known height BELOW validation_height to use as base + let base_height = known_heights.range(..validation_height).next_back().copied(); + + let base_hash = if let Some(height) = base_height { + match storage.get_header(height).await { + Ok(Some(h)) => h.block_hash(), + Ok(None) => { + tracing::warn!("Base header not found at height {}, using all-zeros", height); + BlockHash::all_zeros() + } + Err(e) => { + tracing::warn!( + "Failed to get base header at height {}: {}, using all-zeros", + height, + e + ); + BlockHash::all_zeros() + } + } + } else { + // No known height below target - request full list per DIP-0004 + BlockHash::all_zeros() + }; + + let target_hash = match storage.get_header(validation_height).await { + Ok(Some(h)) => h.block_hash(), + Ok(None) => { + tracing::warn!("Target header not found at height {}, skipping", validation_height); + continue; + } + Err(e) => { + tracing::warn!( + "Failed to get target header at height {}: {}, skipping", + validation_height, + e + ); + continue; + } + }; + + tracing::debug!( + "Adding MnListDiff request: base_height={:?}, target_height={}", + base_height, + validation_height + ); + + request_pairs.push((base_hash, target_hash)); + } + + // Sort by target height for sequential application + let storage_ref = storage; + let mut pairs_with_height = Vec::new(); + for (base, target) in request_pairs { + if let Ok(Some(height)) = storage_ref.get_header_height_by_hash(&target).await { + pairs_with_height.push((height, base, target)); + } + } + pairs_with_height.sort_by_key(|(h, _, _)| *h); + + Ok(pairs_with_height.into_iter().map(|(_, base, target)| (base, target)).collect()) +} + +/// Feed QRInfo block heights to the engine from storage. +/// +/// This feeds all block heights referenced in the QRInfo diffs, plus the cycle boundary +/// height which is needed for rotated quorum storage key calculation. 
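The base-selection step in `build_mnlistdiff_request_pairs` above leans on `BTreeSet`'s ordered range queries: the nearest known list height strictly below the validation target becomes the diff base, and no result maps to an all-zeros base (a full-list request per DIP-0004). A standalone sketch of that lookup:

```rust
use std::collections::BTreeSet;

/// Find the closest known masternode-list height strictly below `target`,
/// mirroring the diff-chaining base selection above.
fn nearest_base(known_heights: &BTreeSet<u32>, target: u32) -> Option<u32> {
    known_heights.range(..target).next_back().copied()
}

fn main() {
    let known: BTreeSet<u32> = [1000, 2000, 3000].into_iter().collect();
    assert_eq!(nearest_base(&known, 2500), Some(2000)); // chain from 2000
    assert_eq!(nearest_base(&known, 2000), Some(1000)); // strictly below
    assert_eq!(nearest_base(&known, 500), None); // no base: request full list
}
```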
+pub(super) async fn feed_qrinfo_heights_to_engine( + engine: &mut MasternodeListEngine, + qr_info: &QRInfo, + storage: &S, +) -> SyncResult { + let mut block_hashes = vec![ + qr_info.mn_list_diff_tip.block_hash, + qr_info.mn_list_diff_h.block_hash, + qr_info.mn_list_diff_at_h_minus_c.block_hash, + qr_info.mn_list_diff_at_h_minus_2c.block_hash, + qr_info.mn_list_diff_at_h_minus_3c.block_hash, + qr_info.mn_list_diff_tip.base_block_hash, + qr_info.mn_list_diff_h.base_block_hash, + qr_info.mn_list_diff_at_h_minus_c.base_block_hash, + qr_info.mn_list_diff_at_h_minus_2c.base_block_hash, + qr_info.mn_list_diff_at_h_minus_3c.base_block_hash, + ]; + + if let Some((_, diff)) = &qr_info.quorum_snapshot_and_mn_list_diff_at_h_minus_4c { + block_hashes.push(diff.block_hash); + block_hashes.push(diff.base_block_hash); + } + + for diff in &qr_info.mn_list_diff_list { + block_hashes.push(diff.block_hash); + block_hashes.push(diff.base_block_hash); + } + + block_hashes.sort(); + block_hashes.dedup(); + + let mut fed_count = 0; + for block_hash in block_hashes { + if let Ok(Some(height)) = storage.get_header_height_by_hash(&block_hash).await { + engine.feed_block_height(height, block_hash); + fed_count += 1; + tracing::debug!("Fed height {} for block {}", height, block_hash); + } + } + + // Feed cycle boundary heights for all diffs (current and historical cycles) + // Each diff's block_hash is at the "work block" height; the cycle boundary is WORK_DIFF_DEPTH higher + let mut work_block_hashes = vec![ + qr_info.mn_list_diff_h.block_hash, + qr_info.mn_list_diff_at_h_minus_c.block_hash, + qr_info.mn_list_diff_at_h_minus_2c.block_hash, + qr_info.mn_list_diff_at_h_minus_3c.block_hash, + ]; + + if let Some((_, diff)) = &qr_info.quorum_snapshot_and_mn_list_diff_at_h_minus_4c { + work_block_hashes.push(diff.block_hash); + } + + for work_block_hash in work_block_hashes { + if let Ok(Some(work_block_height)) = + storage.get_header_height_by_hash(&work_block_hash).await + { + let cycle_boundary_height = work_block_height + WORK_DIFF_DEPTH; + if let Ok(Some(cycle_boundary_header)) = storage.get_header(cycle_boundary_height).await + { + let cycle_boundary_hash = cycle_boundary_header.block_hash(); + engine.feed_block_height(cycle_boundary_height, cycle_boundary_hash); + fed_count += 1; + tracing::debug!( + "Fed cycle boundary height {} for block {}", + cycle_boundary_height, + cycle_boundary_hash + ); + } + } + } + + tracing::info!("Fed {} block heights to engine", fed_count); + Ok(fed_count) +} + +#[async_trait] +impl SyncManager for MasternodesManager { + fn identifier(&self) -> ManagerIdentifier { + ManagerIdentifier::Masternode + } + + fn state(&self) -> SyncState { + self.progress.state() + } + + fn set_state(&mut self, state: SyncState) { + self.progress.set_state(state); + } + + fn update_target_height(&mut self, height: u32) { + self.progress.update_target_height(height); + } + + fn wanted_message_types(&self) -> &'static [MessageType] { + &[MessageType::MnListDiff, MessageType::QRInfo] + } + + async fn handle_message( + &mut self, + msg: Message, + requests: &RequestSender, + ) -> SyncResult> { + match msg.inner() { + NetworkMessage::QRInfo(qr_info) => { + tracing::info!("Processing QRInfo message"); + self.sync_state.qrinfo_received(); + + // Feed block heights to engine using internal storage + let storage = self.header_storage.read().await; + let mut engine = self.engine.write().await; + let fed = feed_qrinfo_heights_to_engine(&mut engine, qr_info, &*storage).await?; + drop(storage); + tracing::info!("Fed {} 
block heights to engine", fed); + + // Feed QRInfo to engine first to populate masternode lists + if let Err(e) = engine.feed_qr_info( + qr_info.clone(), + true, + true, + None::< + fn( + &BlockHash, + ) -> Result< + u32, + dashcore::sml::quorum_validation_error::ClientDataRetrievalError, + >, + >, + ) { + // Check if this is a tip ChainLock error (h - 0 means the tip block) + // The QRInfo response always includes `mn_list_diff_tip` which is the current + // chain tip. If the tip was just mined, the ChainLock hasn't propagated yet. + let is_tip_chainlock_error = matches!( + e, + QuorumValidationError::RequiredRotatedChainLockSigNotPresent(0, _) + ); + + if is_tip_chainlock_error { + self.sync_state.qrinfo_retry_count += 1; + + if self.sync_state.qrinfo_retry_count <= MAX_RETRY_ATTEMPTS { + tracing::info!( + "ChainLock not yet available for tip, scheduling retry {}/{} in {}s", + self.sync_state.qrinfo_retry_count, + MAX_RETRY_ATTEMPTS, + CHAINLOCK_RETRY_DELAY_SECS + ); + // Schedule a delayed retry - the tick handler will trigger it + self.sync_state.chainlock_retry_after = Some( + Instant::now() + Duration::from_secs(CHAINLOCK_RETRY_DELAY_SECS), + ); + drop(engine); + self.set_state(SyncState::Syncing); + return Ok(vec![]); + } + } + + // For other errors or max retries reached, fail + tracing::error!( + "QRInfo failed after {} retries: {}", + self.sync_state.qrinfo_retry_count, + e + ); + return Err(SyncError::MasternodeSyncFailed(e.to_string())); + } + + // Populate known_mn_list_heights from engine after QRInfo processing + self.sync_state.known_mn_list_heights = + engine.masternode_lists.keys().copied().collect(); + tracing::debug!( + "Engine has masternode lists at {} heights", + self.sync_state.known_mn_list_heights.len() + ); + + // Get quorum hashes and build request pairs, chaining from known heights + let quorum_hashes = + engine.latest_masternode_list_non_rotating_quorum_hashes(&[], false); + let storage = self.header_storage.read().await; + let request_pairs = build_mnlistdiff_request_pairs( + &*storage, + &quorum_hashes, + &self.sync_state.known_mn_list_heights, + ) + .await?; + + // Drop locks before potentially long operations + drop(engine); + drop(storage); + + // Queue and send MnListDiff requests via pipeline + self.sync_state.mnlistdiff_pipeline.queue_requests(request_pairs); + self.sync_state.mnlistdiff_pipeline.send_pending(requests)?; + + // Track last processed block hash + let block_hash = qr_info.mn_list_diff_h.block_hash; + self.sync_state.known_block_hashes.insert(block_hash); + self.sync_state.last_qrinfo_block_hash = Some(block_hash); + + self.progress.bump_last_activity(); + + // If no pending requests, complete + if !self.sync_state.has_pending_requests() { + return self.verify_and_complete().await; + } + } + + NetworkMessage::MnListDiff(diff) => { + // Check if this diff matches an in-flight request + if !self.sync_state.mnlistdiff_pipeline.match_response(diff) { + tracing::debug!("Received unexpected MnListDiff for {}", diff.block_hash); + return Ok(vec![]); + } + + tracing::debug!("Processing MnListDiff message for {}", diff.block_hash); + + // Get target height from storage + let storage = self.header_storage.read().await; + let target_height = match storage.get_header_height_by_hash(&diff.block_hash).await + { + Ok(Some(h)) => h, + Ok(None) => { + tracing::warn!( + "Height not found for MnListDiff block {}, requeuing for retry", + diff.block_hash + ); + self.sync_state.mnlistdiff_pipeline.requeue(diff); + 
self.sync_state.mnlistdiff_pipeline.send_pending(requests)?; + return Ok(vec![]); + } + Err(e) => { + tracing::warn!( + "Failed to get height for MnListDiff block {}: {}, requeuing for retry", + diff.block_hash, + e + ); + self.sync_state.mnlistdiff_pipeline.requeue(diff); + self.sync_state.mnlistdiff_pipeline.send_pending(requests)?; + return Ok(vec![]); + } + }; + drop(storage); + + // Apply diff to engine + let mut engine = self.engine.write().await; + engine.feed_block_height(target_height, diff.block_hash); + + match engine.apply_diff(diff.clone(), Some(target_height), false, None) { + Ok(_) => { + self.sync_state.known_mn_list_heights.insert(target_height); + self.sync_state.known_block_hashes.insert(diff.block_hash); + tracing::debug!("Applied MnListDiff at height {}", target_height); + } + Err(e) => { + tracing::warn!( + "Failed to apply MnListDiff at height {}: {}", + target_height, + e + ); + } + } + drop(engine); + + self.progress.add_diffs_processed(1); + self.sync_state.mnlistdiff_pipeline.receive(diff); + self.sync_state.mnlistdiff_pipeline.send_pending(requests)?; + + // Check if all responses received + if self.sync_state.mnlistdiff_pipeline.is_complete() { + tracing::info!("All MnListDiff responses received"); + return self.verify_and_complete().await; + } + } + + _ => {} + } + + Ok(vec![]) + } + + async fn handle_sync_event( + &mut self, + event: &SyncEvent, + requests: &RequestSender, + ) -> SyncResult> { + // Track block header tip height as headers come in + if let SyncEvent::BlockHeadersStored { + tip_height, + } = event + { + self.progress.update_block_header_tip_height(*tip_height); + // Keep target_height up to date post-sync + if *tip_height > self.progress.target_height() { + self.progress.update_target_height(*tip_height); + } + + // If Synced but behind, trigger incremental update to catch up with new blocks + if self.state() == SyncState::Synced + && self.progress.current_height() < self.progress.block_header_tip_height() + { + tracing::debug!( + "New headers stored (tip: {}), updating masternode list from {}", + tip_height, + self.progress.current_height() + ); + self.sync_state.qrinfo_retry_count = 0; + self.sync_state.clear_pending(); + return self.send_qrinfo_for_tip(requests).await; + } + } + + // Start masternode sync when headers are fully caught up + if let SyncEvent::BlockHeaderSyncComplete { + tip_height, + } = event + { + self.progress.update_block_header_tip_height(*tip_height); + // Keep target_height up to date post-sync + if *tip_height > self.progress.target_height() { + self.progress.update_target_height(*tip_height); + } + + // Determine if we should (re)start sync: + // 1. WaitingForConnections: first time starting + // 2. WaitForEvents: waiting for this event + // 3. Syncing but stuck at height 0 with no pending requests: timed out before headers ready + // 4. 
Synced but behind target: new headers arrived after sync completed + let should_restart = match self.state() { + SyncState::WaitingForConnections | SyncState::WaitForEvents => true, + SyncState::Syncing => { + self.progress.current_height() == 0 && !self.sync_state.has_pending_requests() + } + SyncState::Synced => { + self.progress.current_height() < self.progress.block_header_tip_height() + } + _ => false, + }; + + if should_restart { + // Use debug for incremental updates (when already Synced) + if self.state() == SyncState::Synced { + tracing::debug!( + "Headers sync complete at {}, updating masternode list", + self.progress.block_header_tip_height() + ); + } else { + tracing::info!( + "Headers sync complete at {}, starting masternode sync", + self.progress.block_header_tip_height() + ); + } + self.sync_state.qrinfo_retry_count = 0; + self.sync_state.clear_pending(); + return self.send_qrinfo_for_tip(requests).await; + } + } + + Ok(vec![]) + } + + async fn tick(&mut self, requests: &RequestSender) -> SyncResult> { + // Handle ticks for both Syncing (initial) and Synced (incremental updates) + if !matches!(self.state(), SyncState::Syncing | SyncState::Synced) { + return Ok(vec![]); + } + + // If Synced with no pending requests, nothing to do + if self.state() == SyncState::Synced && !self.sync_state.has_pending_requests() { + return Ok(vec![]); + } + + // Check for ChainLock retry (tip didn't have ChainLock yet) + if let Some(retry_after) = self.sync_state.chainlock_retry_after { + if Instant::now() >= retry_after { + tracing::info!("Retrying QRInfo after ChainLock delay"); + self.sync_state.chainlock_retry_after = None; + return self.send_qrinfo_for_tip(requests).await; + } + // Still waiting for retry delay + return Ok(vec![]); + } + + // Check for QRInfo timeout + if self.sync_state.waiting_for_qrinfo { + if let Some(wait_start) = self.sync_state.qrinfo_wait_start { + let timeout = Duration::from_secs(QRINFO_TIMEOUT_SECS); + if wait_start.elapsed() > timeout { + if self.sync_state.qrinfo_retry_count < MAX_RETRY_ATTEMPTS { + tracing::warn!("Timeout waiting for QRInfo response, retrying..."); + self.sync_state.qrinfo_retry_count += 1; + self.sync_state.clear_pending(); + return self.send_qrinfo_for_tip(requests).await; + } else { + tracing::warn!( + "QRInfo timeout after {} retries, skipping masternode sync", + MAX_RETRY_ATTEMPTS + ); + self.sync_state.clear_pending(); + return self.verify_and_complete().await; + } + } + } + return Ok(vec![]); + } + + // Check for MnListDiff timeouts via pipeline + if self.sync_state.mnlistdiff_pipeline.active_count() > 0 { + self.sync_state.mnlistdiff_pipeline.handle_timeouts(); + + // Send any re-queued requests + self.sync_state.mnlistdiff_pipeline.send_pending(requests)?; + + // Check if complete after handling timeouts + if self.sync_state.mnlistdiff_pipeline.is_complete() { + tracing::info!("MnListDiff pipeline complete"); + return self.verify_and_complete().await; + } + } + + Ok(vec![]) + } + + fn progress(&self) -> SyncManagerProgress { + SyncManagerProgress::Masternodes(self.progress.clone()) + } +} diff --git a/dash-spv/src/sync/mod.rs b/dash-spv/src/sync/mod.rs index eb9bd8e0a..88defc285 100644 --- a/dash-spv/src/sync/mod.rs +++ b/dash-spv/src/sync/mod.rs @@ -1,23 +1,32 @@ //! Synchronization management for the Dash SPV client. -//! -//! This module implements a strict sequential sync pipeline where each phase -//! must complete 100% before the next phase begins. -//! -//! # Sequential Sync Benefits: -//! 
- Simpler state management (one active phase)
-//! - Easier error recovery (restart current phase)
-//! - Matches dependencies (need headers before filters)
-//! - More reliable than concurrent sync
-//!
-//! # CRITICAL: Lock Ordering
-//! To prevent deadlocks, acquire locks in this order:
-//! 1. state (via read/write methods)
-//! 2. storage (via async methods)
-//! 3. network (via send_message)
-//!
-//! # Module Structure
-//! - `legacy` - Original sequential sync implementation
-//! - `headers2` - Headers2 compressed header state management
 
 // Legacy sync modules (moved to legacy/ subdirectory)
 pub mod legacy;
+
+mod block_headers;
+mod blocks;
+mod chainlock;
+pub(super) mod download_coordinator;
+mod events;
+mod filter_headers;
+mod filters;
+mod identifier;
+mod instantsend;
+mod masternodes;
+mod progress;
+mod sync_coordinator;
+mod sync_manager;
+
+pub use block_headers::{BlockHeadersManager, BlockHeadersProgress};
+pub use blocks::{BlocksManager, BlocksProgress};
+pub use chainlock::{ChainLockManager, ChainLockProgress};
+pub use filter_headers::{FilterHeadersManager, FilterHeadersProgress};
+pub use filters::{FiltersManager, FiltersProgress};
+pub use instantsend::{InstantSendManager, InstantSendProgress};
+pub use masternodes::{MasternodesManager, MasternodesProgress};
+
+pub use events::SyncEvent;
+pub use identifier::ManagerIdentifier;
+pub use progress::{SyncProgress, SyncState};
+pub use sync_coordinator::{Managers, SyncCoordinator};
+pub use sync_manager::{SyncManager, SyncManagerProgress, SyncManagerTaskContext};
diff --git a/dash-spv/src/sync/progress.rs b/dash-spv/src/sync/progress.rs
new file mode 100644
index 000000000..1fb77715c
--- /dev/null
+++ b/dash-spv/src/sync/progress.rs
@@ -0,0 +1,234 @@
+use crate::error::{SyncError, SyncResult};
+use crate::sync::{
+    BlockHeadersProgress, BlocksProgress, ChainLockProgress, FilterHeadersProgress,
+    FiltersProgress, InstantSendProgress, MasternodesProgress,
+};
+use std::fmt;
+
+/// Overall state of the parallel sync system.
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
+pub enum SyncState {
+    #[default]
+    Initializing,
+    WaitingForConnections,
+    WaitForEvents,
+    Syncing,
+    Synced,
+    Error,
+}
+
+/// Aggregate progress for all managers.
+#[derive(Debug, Clone, Default, PartialEq)]
+pub struct SyncProgress {
+    /// Headers synchronization progress.
+    headers: Option<BlockHeadersProgress>,
+    /// Filter headers synchronization progress.
+    filter_headers: Option<FilterHeadersProgress>,
+    /// Filters synchronization progress.
+    filters: Option<FiltersProgress>,
+    /// Blocks synchronization progress.
+    blocks: Option<BlocksProgress>,
+    /// Masternodes synchronization progress.
+    masternodes: Option<MasternodesProgress>,
+    /// ChainLock synchronization progress.
+    chainlocks: Option<ChainLockProgress>,
+    /// InstantSend synchronization progress.
+    instantsend: Option<InstantSendProgress>,
+}
+
+impl SyncProgress {
+    /// Get the overall sync state.
+    ///
+    /// Error takes precedence, then active states (Syncing, WaitForEvents,
+    /// WaitingForConnections); Synced is returned only when every manager
+    /// reports Synced, and Initializing when no managers have started.
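The aggregation rule documented above and implemented in `state()` just below is priority-based rather than a vote: any `Error` dominates, any active state keeps the aggregate active, and `Synced` requires unanimity. A condensed, runnable restatement of the same rule (enum duplicated locally for illustration):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum SyncState {
    Initializing,
    WaitingForConnections,
    WaitForEvents,
    Syncing,
    Synced,
    Error,
}

/// Priority aggregation: Error > Syncing > WaitForEvents >
/// WaitingForConnections > Synced (unanimous) > Initializing.
fn aggregate(states: &[SyncState]) -> SyncState {
    use SyncState::*;
    if states.is_empty() {
        return Initializing;
    }
    for s in [Error, Syncing, WaitForEvents, WaitingForConnections] {
        if states.contains(&s) {
            return s;
        }
    }
    if states.iter().all(|s| *s == Synced) {
        return Synced;
    }
    Initializing
}

fn main() {
    use SyncState::*;
    assert_eq!(aggregate(&[Synced, Syncing, Synced]), Syncing); // one busy manager
    assert_eq!(aggregate(&[Synced, Synced]), Synced); // unanimity required
    assert_eq!(aggregate(&[Error, Syncing]), Error); // errors dominate
    assert_eq!(aggregate(&[]), Initializing);
}
```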
+ pub fn state(&self) -> SyncState { + let states: Vec = [ + self.headers.as_ref().map(|h| h.state()), + self.filter_headers.as_ref().map(|f| f.state()), + self.filters.as_ref().map(|f| f.state()), + self.blocks.as_ref().map(|b| b.state()), + self.masternodes.as_ref().map(|m| m.state()), + ] + .into_iter() + .flatten() + .collect(); + + if states.is_empty() { + return SyncState::Initializing; + } + + // Return the "most progressed" state + // Priority: Error > Syncing > WaitForEvents > WaitingForConnections > Synced > Initializing + if states.contains(&SyncState::Error) { + return SyncState::Error; + } + if states.contains(&SyncState::Syncing) { + return SyncState::Syncing; + } + if states.contains(&SyncState::WaitForEvents) { + return SyncState::WaitForEvents; + } + if states.contains(&SyncState::WaitingForConnections) { + return SyncState::WaitingForConnections; + } + if states.iter().all(|s| *s == SyncState::Synced) { + return SyncState::Synced; + } + SyncState::Initializing + } + + /// Check if all managers are idle (sync complete). + pub fn is_synced(&self) -> bool { + let states: Vec = [ + self.headers.as_ref().map(|h| h.state()), + self.filter_headers.as_ref().map(|f| f.state()), + self.filters.as_ref().map(|f| f.state()), + self.blocks.as_ref().map(|b| b.state()), + self.masternodes.as_ref().map(|m| m.state()), + ] + .into_iter() + .flatten() + .collect(); + + // Not synced if no managers have reported yet + if states.is_empty() { + return false; + } + + states.iter().all(|state| *state == SyncState::Synced) + } + + /// Get overall completion percentage (0.0 to 1.0). + pub fn percentage(&self) -> f64 { + let percentages = [ + self.headers.as_ref().map(|h| h.percentage()).unwrap_or(1.0), + self.filter_headers.as_ref().map(|f| f.percentage()).unwrap_or(1.0), + self.filters.as_ref().map(|f| f.percentage()).unwrap_or(1.0), + ]; + percentages.iter().sum::() / percentages.len() as f64 + } + + pub fn headers(&self) -> SyncResult<&BlockHeadersProgress> { + self.headers + .as_ref() + .ok_or_else(|| SyncError::InvalidState("BlockHeadersManager not started".into())) + } + + pub fn filter_headers(&self) -> SyncResult<&FilterHeadersProgress> { + self.filter_headers + .as_ref() + .ok_or_else(|| SyncError::InvalidState("FilterHeadersManager not started".into())) + } + + pub fn filters(&self) -> SyncResult<&FiltersProgress> { + self.filters + .as_ref() + .ok_or_else(|| SyncError::InvalidState("FiltersManager not started".into())) + } + + pub fn blocks(&self) -> SyncResult<&BlocksProgress> { + self.blocks + .as_ref() + .ok_or_else(|| SyncError::InvalidState("BlocksManager not started".into())) + } + + pub fn masternodes(&self) -> SyncResult<&MasternodesProgress> { + self.masternodes + .as_ref() + .ok_or_else(|| SyncError::InvalidState("MasternodeListManager not started".into())) + } + + pub fn chainlocks(&self) -> SyncResult<&ChainLockProgress> { + self.chainlocks + .as_ref() + .ok_or_else(|| SyncError::InvalidState("ChainLocksManager not started".into())) + } + + pub fn instantsend(&self) -> SyncResult<&InstantSendProgress> { + self.instantsend + .as_ref() + .ok_or_else(|| SyncError::InvalidState("InstantSendManager not started".into())) + } + + pub fn update_headers(&mut self, progress: BlockHeadersProgress) { + let updated_headers = Some(progress); + if self.headers != updated_headers { + self.headers = updated_headers; + } + } + + pub fn update_filter_headers(&mut self, progress: FilterHeadersProgress) { + let updated_filter_headers = Some(progress); + if self.filter_headers != 
updated_filter_headers { + self.filter_headers = updated_filter_headers; + } + } + + /// Update filters progress. + pub fn update_filters(&mut self, progress: FiltersProgress) { + let updated_filters = Some(progress); + if self.filters != updated_filters { + self.filters = updated_filters; + } + } + + /// Update blocks progress. + pub fn update_blocks(&mut self, progress: BlocksProgress) { + let updated_blocks = Some(progress); + if self.blocks != updated_blocks { + self.blocks = updated_blocks; + } + } + + /// Update masternodes progress. + pub fn update_masternodes(&mut self, progress: MasternodesProgress) { + let updated_masternodes = Some(progress); + if self.masternodes != updated_masternodes { + self.masternodes = updated_masternodes; + } + } + + /// Update chainlock progress. + pub fn update_chainlocks(&mut self, progress: ChainLockProgress) { + let updated_chainlocks = Some(progress); + if self.chainlocks != updated_chainlocks { + self.chainlocks = updated_chainlocks; + } + } + + /// Update instantsend progress. + pub fn update_instantsend(&mut self, progress: InstantSendProgress) { + let updated_instantsend = Some(progress); + if self.instantsend != updated_instantsend { + self.instantsend = updated_instantsend; + } + } +} + +impl fmt::Display for SyncProgress { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + writeln!(f)?; + if let Some(h) = &self.headers { + writeln!(f, " Headers: {}", h)?; + } + if let Some(fh) = &self.filter_headers { + writeln!(f, " Filter Headers: {}", fh)?; + } + if let Some(fl) = &self.filters { + writeln!(f, " Filters: {}", fl)?; + } + if let Some(b) = &self.blocks { + writeln!(f, " Blocks: {}", b)?; + } + if let Some(m) = &self.masternodes { + writeln!(f, " Masternodes: {}", m)?; + } + if let Some(c) = &self.chainlocks { + writeln!(f, " ChainLocks: {}", c)?; + } + if let Some(i) = &self.instantsend { + writeln!(f, " InstantSend: {}", i)?; + } + Ok(()) + } +} diff --git a/dash-spv/src/sync/sync_coordinator.rs b/dash-spv/src/sync/sync_coordinator.rs new file mode 100644 index 000000000..1999505b8 --- /dev/null +++ b/dash-spv/src/sync/sync_coordinator.rs @@ -0,0 +1,419 @@ +//! Parallel sync coordinator. +//! +//! The coordinator orchestrates all sync managers, spawning each in its own +//! tokio task for true parallel processing. It tracks aggregate progress and +//! coordinates graceful shutdown. + +use std::time::{Duration, Instant}; + +use futures::stream::{select_all, StreamExt}; +use tokio::sync::{broadcast, watch}; +use tokio::task::JoinSet; +use tokio_stream::wrappers::WatchStream; +use tokio_util::sync::CancellationToken; + +use crate::error::SyncResult; +use crate::network::NetworkManager; +use crate::storage::{BlockHeaderStorage, BlockStorage, FilterHeaderStorage, FilterStorage}; +use crate::sync::{ + BlockHeadersManager, BlocksManager, ChainLockManager, FilterHeadersManager, FiltersManager, + InstantSendManager, ManagerIdentifier, MasternodesManager, SyncEvent, SyncManager, + SyncManagerProgress, SyncManagerTaskContext, SyncProgress, +}; +use crate::SyncError; +use key_wallet_manager::wallet_interface::WalletInterface; + +const TASK_JOIN_TIMEOUT: Duration = Duration::from_secs(5); +const DEFAULT_SYNC_EVENT_CAPACITY: usize = 10000; + +/// Macro to spawn a manager if present. +macro_rules! 
spawn_manager {
+    ($self:expr, $field:ident, $network:expr) => {
+        if let Some(manager) = $self.managers.$field.take() {
+            let identifier = manager.identifier();
+            let wanted_message_types = manager.wanted_message_types();
+            let requests = $network.request_sender();
+            let message_receiver = $network.message_receiver(wanted_message_types).await;
+            let network_event_rx = $network.subscribe_network_events();
+            let (progress_sender, progress_receiver) = watch::channel(manager.progress());
+
+            tracing::info!(
+                "Spawning {} task, receiving message types: {:?}",
+                identifier,
+                wanted_message_types
+            );
+
+            let context = SyncManagerTaskContext {
+                message_receiver,
+                sync_event_sender: $self.sync_event_sender.clone(),
+                network_event_receiver: network_event_rx,
+                requests,
+                shutdown: $self.shutdown.clone(),
+                progress_sender,
+            };
+
+            $self.tasks.spawn(manager.run(context));
+            $self.progress_receivers.push(progress_receiver);
+        }
+    };
+}
+
+/// Container for all manager instances.
+pub struct Managers<H, FH, F, B, W>
+where
+    H: BlockHeaderStorage,
+    FH: FilterHeaderStorage,
+    F: FilterStorage,
+    B: BlockStorage,
+    W: WalletInterface + 'static,
+{
+    pub block_headers: Option<BlockHeadersManager<H>>,
+    pub filter_headers: Option<FilterHeadersManager<FH>>,
+    pub filters: Option<FiltersManager<F>>,
+    pub blocks: Option<BlocksManager<B, W>>,
+    pub masternode: Option<MasternodesManager<H>>,
+    pub chainlock: Option<ChainLockManager<H>>,
+    pub instantsend: Option<InstantSendManager>,
+}
+
+impl<H, FH, F, B, W> Default for Managers<H, FH, F, B, W>
+where
+    H: BlockHeaderStorage,
+    FH: FilterHeaderStorage,
+    F: FilterStorage,
+    B: BlockStorage,
+    W: WalletInterface + 'static,
+{
+    fn default() -> Self {
+        Self {
+            block_headers: None,
+            filter_headers: None,
+            filters: None,
+            blocks: None,
+            masternode: None,
+            chainlock: None,
+            instantsend: None,
+        }
+    }
+}
+
+/// Sync coordinator handling the separate sync managers.
+///
+/// - Spawns each manager in its own tokio task
+/// - Tracks and aggregates progress via watch channels
+/// - Coordinates graceful shutdown
+pub struct SyncCoordinator<H, FH, F, B, W>
+where
+    H: BlockHeaderStorage,
+    FH: FilterHeaderStorage,
+    F: FilterStorage,
+    B: BlockStorage,
+    W: WalletInterface + 'static,
+{
+    /// Manager instances provided on construction and consumed when `start()` spawns the tasks.
+    managers: Managers<H, FH, F, B, W>,
+    /// Progress receivers from spawned manager tasks.
+    progress_receivers: Vec<watch::Receiver<SyncManagerProgress>>,
+    /// JoinSet for managing spawned tasks.
+    tasks: JoinSet<SyncResult<ManagerIdentifier>>,
+    /// Event bus for inter-manager communication.
+    sync_event_sender: broadcast::Sender<SyncEvent>,
+    /// Watch channel sender for progress updates.
+    progress_sender: watch::Sender<SyncProgress>,
+    /// Watch channel receiver for progress updates.
+    progress_receiver: watch::Receiver<SyncProgress>,
+    /// Time when sync started (for duration logging).
+    sync_start_time: Option<Instant>,
+    /// Shutdown token for all tasks.
+    shutdown: CancellationToken,
+    /// Handle for the progress aggregation task.
+    progress_task: Option<tokio::task::JoinHandle<()>>,
+}
+
+impl<H, FH, F, B, W> SyncCoordinator<H, FH, F, B, W>
+where
+    H: BlockHeaderStorage,
+    FH: FilterHeaderStorage,
+    F: FilterStorage,
+    B: BlockStorage,
+    W: WalletInterface + 'static,
+{
+    /// Create a new coordinator that takes ownership of the given managers.
+    ///
+    /// The managers are consumed and spawned into their own tasks when
+    /// `start()` is called.
+    pub fn new(managers: Managers<H, FH, F, B, W>) -> Self {
+        let (progress_sender, progress_receiver) = watch::channel(SyncProgress::default());
+        Self {
+            managers,
+            progress_receivers: Vec::new(),
+            tasks: JoinSet::new(),
+            sync_event_sender: broadcast::Sender::new(DEFAULT_SYNC_EVENT_CAPACITY),
+            progress_sender,
+            progress_receiver,
+            sync_start_time: None,
+            shutdown: CancellationToken::new(),
+            progress_task: None,
+        }
+    }
+
+    /// Subscribe to progress updates.
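Each `spawn_manager!` expansion above follows one shape: create a per-manager watch channel, hand the sender to the spawned task, keep the receiver for aggregation, and park the task in a `JoinSet`. A stripped-down, runnable sketch of that pattern, with a toy worker in place of the real `SyncManager::run`:

```rust
use tokio::sync::watch;
use tokio::task::JoinSet;

#[tokio::main]
async fn main() {
    let mut tasks: JoinSet<&'static str> = JoinSet::new();
    let mut receivers: Vec<watch::Receiver<u32>> = Vec::new();

    for id in ["headers", "filters", "blocks"] {
        // One watch channel per manager; the task owns the sender,
        // the coordinator keeps the receiver for aggregation.
        let (tx, rx) = watch::channel(0u32);
        receivers.push(rx);

        tasks.spawn(async move {
            for progress in 1..=3 {
                let _ = tx.send(progress); // publish a progress snapshot
            }
            id // returned like the manager's identifier
        });
    }

    while let Some(done) = tasks.join_next().await {
        println!("{} task finished", done.unwrap());
    }
    // Each watch channel retains the latest value its task sent.
    assert!(receivers.iter().all(|rx| *rx.borrow() == 3));
}
```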
+ pub fn subscribe_progress(&self) -> watch::Receiver { + self.progress_sender.subscribe() + } + + /// Subscribe to sync events. + pub fn subscribe_events(&self) -> broadcast::Receiver { + self.sync_event_sender.subscribe() + } + + /// Start all managers by spawning each in its own task. + /// + /// Each manager receives: + /// - A message stream filtered by its subscribed types + /// - An event bus subscription for inter-manager events + /// - A request sender for outgoing network messages + /// - A shutdown token for graceful termination + pub async fn start(&mut self, network: &mut N) -> SyncResult<()> + where + N: NetworkManager, + { + if !self.tasks.is_empty() { + return Err(SyncError::SyncInProgress); + } + + tracing::info!("Starting sync managers in separate tasks"); + + // Record sync start time + let sync_start_time = Instant::now(); + self.sync_start_time = Some(sync_start_time); + + // Spawn each manager using the macro + spawn_manager!(self, block_headers, network); + spawn_manager!(self, filter_headers, network); + spawn_manager!(self, filters, network); + spawn_manager!(self, blocks, network); + spawn_manager!(self, masternode, network); + spawn_manager!(self, chainlock, network); + spawn_manager!(self, instantsend, network); + + // Clone receivers for progress task + let receivers = self.progress_receivers.clone(); + + // Spawn progress aggregation task + let progress_sender = self.progress_sender.clone(); + let sync_event_sender = self.sync_event_sender.clone(); + let shutdown = self.shutdown.clone(); + + self.progress_task = Some(tokio::spawn(run_progress_task( + receivers, + progress_sender, + sync_event_sender, + shutdown, + sync_start_time, + ))); + + tracing::info!("All {} manager tasks spawned", self.progress_receivers.len()); + + Ok(()) + } + + /// Run periodic tick to check for task completion errors. + /// + /// Progress aggregation is handled reactively by the dedicated progress task. + /// This method only checks for completed manager tasks (errors or early exits). + pub async fn tick(&mut self) -> SyncResult<()> { + while let Some(result) = self.tasks.try_join_next() { + match result { + Ok(Ok(identifier)) => { + tracing::debug!("{} task completed successfully", identifier); + } + Ok(Err(e)) => { + tracing::error!("Manager task failed: {}", e); + } + Err(e) => { + tracing::error!("Manager task panicked: {}", e); + } + } + } + + Ok(()) + } + + /// Gracefully shutdown all manager tasks. 
+ pub async fn shutdown(&mut self) -> SyncResult<()> { + tracing::info!("Shutting down SyncCoordinator"); + + // Signal all tasks to shutdown + self.shutdown.cancel(); + + // Wait for all manager tasks to complete with timeout + let drain_tasks = async { + while let Some(result) = self.tasks.join_next().await { + match result { + Ok(Ok(identifier)) => { + tracing::debug!("{} task completed during shutdown", identifier); + } + Ok(Err(e)) => { + tracing::warn!("Manager task error during shutdown: {}", e); + } + Err(e) => { + tracing::error!("Manager task panic during shutdown: {}", e); + } + } + } + }; + + if tokio::time::timeout(TASK_JOIN_TIMEOUT, drain_tasks).await.is_err() { + tracing::warn!( + "Shutdown timeout after {:?}, {} tasks may not have completed cleanly", + TASK_JOIN_TIMEOUT, + self.tasks.len() + ); + } + + // Wait for progress task to complete with timeout + if let Some(handle) = self.progress_task.take() { + if tokio::time::timeout(Duration::from_secs(1), handle).await.is_err() { + tracing::warn!("Progress task did not complete within timeout"); + } + } + + tracing::info!("Shutdown complete"); + + Ok(()) + } + + /// Get current progress. + pub fn progress(&self) -> SyncProgress { + self.progress_receiver.borrow().clone() + } + + /// Check if all managers are idle (sync complete). + pub fn is_synced(&self) -> bool { + self.progress_receiver.borrow().is_synced() + } + + /// Get the duration since sync started. + pub fn sync_duration(&self) -> Option { + self.sync_start_time.map(|start| start.elapsed()) + } +} + +impl std::fmt::Debug for SyncCoordinator +where + H: BlockHeaderStorage, + FH: FilterHeaderStorage, + F: FilterStorage, + B: BlockStorage, + W: WalletInterface + 'static, +{ + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + f.debug_struct("SyncCoordinator") + .field("manager_count", &self.tasks.len()) + .field("progress", &*self.progress_receiver.borrow()) + .finish() + } +} + +/// Reactive progress aggregation task. +/// +/// Listens to all manager progress receivers and emits consolidated updates +/// immediately when any manager's progress changes. +async fn run_progress_task( + receivers: Vec>, + progress_sender: watch::Sender, + sync_event_sender: broadcast::Sender, + shutdown: CancellationToken, + sync_start_time: Instant, +) { + let streams: Vec<_> = + receivers.into_iter().map(|rx| WatchStream::new(rx).map(move |p| p)).collect(); + + let mut merged = select_all(streams); + let mut progress = SyncProgress::default(); + let mut sync_complete_emitted = false; + + loop { + tokio::select! { + _ = shutdown.cancelled() => break, + Some(manager_progress) = merged.next() => { + update_progress_from_manager(&mut progress, manager_progress); + + let _ = progress_sender.send(progress.clone()); + + if progress.is_synced() && !sync_complete_emitted { + let duration = sync_start_time.elapsed(); + tracing::info!("Initial sync complete in {:.2}s", duration.as_secs_f64()); + + let header_tip = progress.headers().ok().map(|h| h.current_height()).unwrap_or(0); + let _ = sync_event_sender.send(SyncEvent::SyncComplete { header_tip }); + sync_complete_emitted = true; + } + } + } + } +} + +/// Update aggregate progress from a single manager's progress update. 
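`run_progress_task` above fans every manager's watch channel into a single stream, so an update from any manager wakes the aggregator once. A compact sketch of that fan-in using the same `tokio-stream` and `futures` primitives, with integers standing in for progress values:

```rust
use futures::stream::{select_all, StreamExt};
use tokio::sync::watch;
use tokio_stream::wrappers::WatchStream;

#[tokio::main]
async fn main() {
    let (tx_a, rx_a) = watch::channel(0u32);
    let (tx_b, rx_b) = watch::channel(100u32);

    // Wrap each receiver as a stream and merge them into one.
    let mut merged = select_all(vec![WatchStream::new(rx_a), WatchStream::new(rx_b)]);

    let _ = tx_a.send(1);
    let _ = tx_b.send(101);
    drop(tx_a); // closing a sender ends its stream
    drop(tx_b);

    let mut seen = Vec::new();
    while let Some(value) = merged.next().await {
        seen.push(value);
    }
    // Watch channels coalesce intermediate values; only the latest value
    // per stream is guaranteed to be observed here.
    assert!(seen.contains(&1) && seen.contains(&101));
}
```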
+/// Update aggregate progress from a single manager's progress update.
+fn update_progress_from_manager(
+    progress: &mut SyncProgress,
+    manager_progress: SyncManagerProgress,
+) {
+    match manager_progress {
+        SyncManagerProgress::BlockHeaders(h) => progress.update_headers(h),
+        SyncManagerProgress::FilterHeaders(fh) => progress.update_filter_headers(fh),
+        SyncManagerProgress::Filters(f) => progress.update_filters(f),
+        SyncManagerProgress::Blocks(b) => progress.update_blocks(b),
+        SyncManagerProgress::Masternodes(m) => progress.update_masternodes(m),
+        SyncManagerProgress::ChainLock(c) => progress.update_chainlocks(c),
+        SyncManagerProgress::InstantSend(i) => progress.update_instantsend(i),
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use crate::sync::{BlockHeadersProgress, FiltersProgress, SyncState};
+
+    #[test]
+    fn test_sync_progress_default() {
+        let progress = SyncProgress::default();
+        assert_eq!(progress.state(), SyncState::Initializing);
+        assert!(!progress.is_synced());
+        // Fields are None by default - getters return errors
+        assert!(progress.headers().is_err());
+        assert!(progress.filters().is_err());
+        assert!(progress.blocks().is_err());
+    }
+
+    #[test]
+    fn test_sync_percentage_empty() {
+        let progress = SyncProgress::default();
+        // Both headers and filters are None, so percentage defaults to 1.0
+        assert_eq!(progress.percentage(), 1.0);
+    }
+
+    #[test]
+    fn test_sync_percentage() {
+        let mut progress = SyncProgress::default();
+
+        // Create headers progress at 50%
+        let mut headers_progress = BlockHeadersProgress::default();
+        headers_progress.set_state(SyncState::Syncing);
+        headers_progress.update_current_height(500);
+        headers_progress.update_target_height(1000);
+        headers_progress.add_processed(500);
+        progress.update_headers(headers_progress);
+
+        // Create filters progress at 25%
+        let mut filters_progress = FiltersProgress::default();
+        filters_progress.set_state(SyncState::Syncing);
+        filters_progress.update_current_height(250);
+        filters_progress.update_target_height(1000);
+        filters_progress.add_downloaded(250);
+        progress.update_filters(filters_progress);
+
+        // (0.5 + 1.0 + 0.25) / 3 = ~0.583 (filter_headers defaults to 1.0)
+        assert!((progress.percentage() - 0.583).abs() < 0.01);
+    }
+}
diff --git a/dash-spv/src/sync/sync_manager.rs b/dash-spv/src/sync/sync_manager.rs
new file mode 100644
index 000000000..7954a1d4f
--- /dev/null
+++ b/dash-spv/src/sync/sync_manager.rs
@@ -0,0 +1,450 @@
+//! Contains a trait for event-driven sync managers.
+//!
+//! Each manager is responsible for a specific sync task (headers, filters, blocks, etc.)
+//! and communicates with other managers via events. Managers progress independently and
+//! catch up to each other as events flow between them.
+
+use crate::error::SyncResult;
+use crate::network::{Message, MessageType, NetworkEvent, RequestSender};
+use crate::sync::{
+    BlockHeadersProgress, BlocksProgress, ChainLockProgress, FilterHeadersProgress,
+    FiltersProgress, InstantSendProgress, ManagerIdentifier, MasternodesProgress, SyncEvent,
+    SyncState,
+};
+use async_trait::async_trait;
+use std::time::Duration;
+use tokio::sync::broadcast;
+use tokio::sync::mpsc::UnboundedReceiver;
+use tokio::sync::watch;
+use tokio::time::interval;
+use tokio_util::sync::CancellationToken;
+
+#[derive(Debug, Clone, PartialEq)]
+pub enum SyncManagerProgress {
+    BlockHeaders(BlockHeadersProgress),
+    FilterHeaders(FilterHeadersProgress),
+    Filters(FiltersProgress),
+    Blocks(BlocksProgress),
+    Masternodes(MasternodesProgress),
+    ChainLock(ChainLockProgress),
+    InstantSend(InstantSendProgress),
+}
+
+impl SyncManagerProgress {
+    pub fn state(&self) -> SyncState {
+        match self {
+            SyncManagerProgress::BlockHeaders(progress) => progress.state(),
+            SyncManagerProgress::FilterHeaders(progress) => progress.state(),
+            SyncManagerProgress::Filters(progress) => progress.state(),
+            SyncManagerProgress::Blocks(progress) => progress.state(),
+            SyncManagerProgress::Masternodes(progress) => progress.state(),
+            SyncManagerProgress::ChainLock(progress) => progress.state(),
+            SyncManagerProgress::InstantSend(progress) => progress.state(),
+        }
+    }
+}
+
+pub struct SyncManagerTaskContext {
+    pub(super) message_receiver: UnboundedReceiver<Message>,
+    pub(super) sync_event_sender: broadcast::Sender<SyncEvent>,
+    pub(super) network_event_receiver: broadcast::Receiver<NetworkEvent>,
+    pub(super) requests: RequestSender,
+    pub(super) shutdown: CancellationToken,
+    pub(super) progress_sender: watch::Sender<SyncManagerProgress>,
+}
+
+impl SyncManagerTaskContext {
+    pub(super) fn emit_sync_event(&self, event: SyncEvent) {
+        let _ = self.sync_event_sender.send(event);
+    }
+
+    pub(super) fn emit_sync_events(&self, events: impl IntoIterator<Item = SyncEvent>) {
+        for event in events {
+            self.emit_sync_event(event);
+        }
+    }
+}
+
+#[async_trait]
+pub trait SyncManager: Send + Sync + std::fmt::Debug {
+    /// Get the unique identifier for this manager.
+    fn identifier(&self) -> ManagerIdentifier;
+
+    /// Get the manager's sync state.
+    fn state(&self) -> SyncState;
+
+    /// Update the manager's sync state.
+    fn set_state(&mut self, state: SyncState);
+
+    /// Update the target height for this manager.
+    fn update_target_height(&mut self, _height: u32) {}
+
+    /// Message types this manager subscribes to for topic-based routing.
+    ///
+    /// The network manager uses this to route only relevant messages to each
+    /// manager's task via topic-based filtering.
+    fn wanted_message_types(&self) -> &'static [MessageType];
+
+    /// Initialize the manager.
+    ///
+    /// Called once at startup before the main loop. Loads persisted state
+    /// from internal storage and sets initial target heights.
+    async fn initialize(&mut self) -> SyncResult<()> {
+        self.set_state(SyncState::WaitingForConnections);
+        tracing::info!("{} initialized", self.identifier());
+        Ok(())
+    }
+
+    /// Start the sync process.
+    ///
+    /// Called after initialization to trigger the initial sync requests.
+    /// For example, BlockHeadersManager sends its first getheaders request here.
+    /// The default implementation is for reactive managers that just wait for events.
+    async fn start_sync(&mut self, _requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+        if !matches!(self.state(), SyncState::WaitingForConnections | SyncState::WaitForEvents) {
+            tracing::warn!("{} sync already started.", self.identifier());
+            return Ok(vec![]);
+        }
+
+        self.set_state(SyncState::WaitForEvents);
+        Ok(vec![SyncEvent::SyncStart {
+            identifier: self.identifier(),
+        }])
+    }
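A proactive manager would typically override this default to fire its first request before waiting for events. A hedged sketch (the actual `RequestSender` call is elided because its methods are not part of this excerpt):

```rust
// Hypothetical override for a proactive manager such as a headers syncer.
async fn start_sync(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
    self.set_state(SyncState::Syncing);
    // ... enqueue the initial getheaders request via `requests` here ...
    let _ = requests; // request plumbing intentionally omitted in this sketch
    Ok(vec![SyncEvent::SyncStart {
        identifier: self.identifier(),
    }])
}
```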
+    /// Stop the internal processing.
+    /// Called when the network manager loses its peers.
+    fn stop_sync(&mut self) {
+        self.set_state(SyncState::WaitingForConnections);
+    }
+
+    /// Handle an incoming network message.
+    ///
+    /// Returns events to emit to other managers.
+    async fn handle_message(
+        &mut self,
+        msg: Message,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>>;
+
+    /// Handle a sync event from another manager.
+    ///
+    /// This is how managers learn about progress from other managers.
+    /// For example, `FilterHeadersManager` subscribes to `BlockHeadersStored`
+    /// events to know when new headers are available.
+    async fn handle_sync_event(
+        &mut self,
+        event: &SyncEvent,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>>;
+
+    /// Periodic tick for timeouts, retries, and proactive work.
+    ///
+    /// Called regularly by the coordinator (e.g., every 100ms).
+    /// Use this for:
+    /// - Timeout detection and retry logic
+    /// - Proactive request sending
+    /// - State cleanup
+    async fn tick(&mut self, requests: &RequestSender) -> SyncResult<Vec<SyncEvent>>;
+
+    /// Handle a network event (peer connection changes).
+    ///
+    /// Default implementation handles state transitions for WaitingForConnections.
+    /// Managers can override to customize behavior.
+    async fn handle_network_event(
+        &mut self,
+        event: &NetworkEvent,
+        requests: &RequestSender,
+    ) -> SyncResult<Vec<SyncEvent>> {
+        // Default: transition from WaitingForConnections to Syncing when peers connect
+        if let NetworkEvent::PeersUpdated {
+            connected_count,
+            best_height,
+            ..
+        } = event
+        {
+            if let Some(best_height) = best_height {
+                self.update_target_height(*best_height);
+            }
+            if *connected_count == 0 {
+                tracing::info!("{} - no peers available, stopping sync", self.identifier());
+                self.stop_sync();
+            } else if *connected_count > 0 && self.state() == SyncState::WaitingForConnections {
+                tracing::info!(
+                    "{} - peers available ({}), starting sync",
+                    self.identifier(),
+                    connected_count
+                );
+                return self.start_sync(requests).await;
+            }
+        }
+        Ok(vec![])
+    }
+
+    /// Retrieves the current progress of the manager.
+    fn progress(&self) -> SyncManagerProgress;
+
+    fn try_emit_progress(
+        &self,
+        progress_before: SyncManagerProgress,
+        progress_sender: &watch::Sender<SyncManagerProgress>,
+    ) {
+        let progress_now = self.progress();
+        if progress_now != progress_before {
+            let _ = progress_sender.send(progress_now);
+        }
+    }
+
+    /// Run the manager task, processing messages, events, and periodic ticks.
+    ///
+    /// This consumes the manager and runs until shutdown is signaled.
+    async fn run(mut self, mut context: SyncManagerTaskContext) -> SyncResult<ManagerIdentifier>
+    where
+        Self: Sized,
+    {
+        let identifier = self.identifier();
+        tracing::info!("{} task starting", identifier);
+
+        let mut sync_event_receiver = context.sync_event_sender.subscribe();
+
+        // Initialize the manager
+        self.initialize().await?;
+
+        // Tick interval for periodic housekeeping
+        let mut tick_interval = interval(Duration::from_millis(100));
+
+        tracing::info!("{} task entering main loop", identifier);
+
+        loop {
+            tokio::select!
{ + _ = context.shutdown.cancelled() => { + tracing::info!("{} task received shutdown signal", identifier); + break; + } + // Process incoming network messages + Some(message) = context.message_receiver.recv() => { + tracing::trace!("{} received message: {}", identifier, message.cmd()); + let progress_before = self.progress(); + match self.handle_message(message, &context.requests).await { + Ok(events) => { + if !events.is_empty() { + for event in &events { + tracing::debug!("{} emitting: {}", identifier, event.description()); + } + context.emit_sync_events(events); + } + self.try_emit_progress(progress_before, &context.progress_sender); + } + Err(e) => { + tracing::error!("{} error handling message: {}", identifier, e); + let error_event = SyncEvent::ManagerError { + manager: identifier, + error: e.to_string(), + }; + context.emit_sync_event(error_event); + } + } + } + // Process events from other managers + result = sync_event_receiver.recv() => { + match result { + Ok(event) => { + tracing::trace!("{} received event: {}", identifier, event.description()); + let progress_before = self.progress(); + match self.handle_sync_event(&event, &context.requests).await { + Ok(events) => { + if !events.is_empty() { + for e in &events { + tracing::trace!("{} emitting: {}", identifier, e.description()); + } + context.emit_sync_events(events); + } + self.try_emit_progress(progress_before, &context.progress_sender); + } + Err(e) => { + tracing::error!("{} error handling event: {}", identifier, e); + } + } + } + Err(error) => { + tracing::error!("{} sync event error: {}", identifier, error); + break; + } + } + } + // Process network events + result = context.network_event_receiver.recv() => { + match result { + Ok(event) => { + tracing::debug!("{} received network event: {}", identifier, event.description()); + let progress_before = self.progress(); + match self.handle_network_event(&event, &context.requests).await { + Ok(events) => { + if !events.is_empty() { + for e in &events { + tracing::debug!("{} emitting: {}", identifier, e.description()); + } + context.emit_sync_events(events); + } + self.try_emit_progress(progress_before, &context.progress_sender); + } + Err(e) => { + tracing::error!("{} error handling network event: {}", identifier, e); + } + } + } + Err(error) => { + tracing::error!("{} network event error: {}", identifier, error); + break; + } + } + } + // Periodic tick for timeouts and housekeeping + _ = tick_interval.tick() => { + let progress_before = self.progress(); + match self.tick(&context.requests).await { + Ok(events) => { + if !events.is_empty() { + context.emit_sync_events(events); + } + self.try_emit_progress(progress_before, &context.progress_sender); + } + Err(e) => { + tracing::error!("{} tick error: {}", identifier, e); + } + } + } + } + } + + tracing::info!("{} task exiting", identifier); + Ok(identifier) + } +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::network::NetworkRequest; + use crate::sync::BlockHeadersProgress; + use crate::sync::SyncState; + use async_trait::async_trait; + use std::sync::atomic::{AtomicU32, Ordering}; + use std::sync::Arc; + use tokio::sync::{broadcast, mpsc}; + + /// Mock manager for testing the task runner. 
+    struct MockManager {
+        identifier: ManagerIdentifier,
+        state: SyncState,
+        message_count: Arc<AtomicU32>,
+        event_count: Arc<AtomicU32>,
+        tick_count: Arc<AtomicU32>,
+    }
+
+    impl std::fmt::Debug for MockManager {
+        fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
+            f.debug_struct("MockManager").field("identifier", &self.identifier).finish()
+        }
+    }
+
+    #[async_trait]
+    impl SyncManager for MockManager {
+        fn identifier(&self) -> ManagerIdentifier {
+            self.identifier
+        }
+
+        fn state(&self) -> SyncState {
+            self.state
+        }
+
+        fn set_state(&mut self, state: SyncState) {
+            self.state = state;
+        }
+
+        fn wanted_message_types(&self) -> &'static [MessageType] {
+            &[]
+        }
+
+        async fn handle_message(
+            &mut self,
+            _msg: Message,
+            _requests: &RequestSender,
+        ) -> SyncResult<Vec<SyncEvent>> {
+            self.message_count.fetch_add(1, Ordering::Relaxed);
+            Ok(vec![])
+        }
+
+        async fn handle_sync_event(
+            &mut self,
+            _event: &SyncEvent,
+            _requests: &RequestSender,
+        ) -> SyncResult<Vec<SyncEvent>> {
+            self.event_count.fetch_add(1, Ordering::Relaxed);
+            Ok(vec![])
+        }
+
+        async fn tick(&mut self, _requests: &RequestSender) -> SyncResult<Vec<SyncEvent>> {
+            self.tick_count.fetch_add(1, Ordering::Relaxed);
+            Ok(vec![])
+        }
+
+        fn progress(&self) -> SyncManagerProgress {
+            let mut progress = BlockHeadersProgress::default();
+            progress.set_state(self.state);
+            SyncManagerProgress::BlockHeaders(progress)
+        }
+    }
+
+    #[tokio::test]
+    async fn test_manager_task_shutdown() {
+        let message_count = Arc::new(AtomicU32::new(0));
+        let event_count = Arc::new(AtomicU32::new(0));
+        let tick_count = Arc::new(AtomicU32::new(0));
+
+        let manager = MockManager {
+            identifier: ManagerIdentifier::BlockHeader,
+            state: SyncState::Initializing,
+            message_count: message_count.clone(),
+            event_count: event_count.clone(),
+            tick_count: tick_count.clone(),
+        };
+
+        // Create channels
+        let (_, message_receiver) = mpsc::unbounded_channel();
+        let sync_event_sender = broadcast::Sender::<SyncEvent>::new(100);
+        let network_event_sender = broadcast::Sender::<NetworkEvent>::new(100);
+        let (req_tx, _req_rx) = mpsc::unbounded_channel::<NetworkRequest>();
+        let requests = RequestSender::new(req_tx);
+        let shutdown = CancellationToken::new();
+        let (progress_sender, _progress_rx) = watch::channel(manager.progress());
+
+        let context = SyncManagerTaskContext {
+            message_receiver,
+            sync_event_sender,
+            network_event_receiver: network_event_sender.subscribe(),
+            requests,
+            shutdown: shutdown.clone(),
+            progress_sender,
+        };
+
+        // Spawn the task using the trait's run method
+        let handle = tokio::spawn(async move { manager.run(context).await });
+
+        // Let it run for a bit
+        tokio::time::sleep(Duration::from_millis(250)).await;
+
+        // Signal shutdown
+        shutdown.cancel();
+
+        // Wait for task to complete
+        let result = handle.await.unwrap();
+        assert!(result.is_ok());
+
+        // Verify the returned identifier matches
+        assert_eq!(result.unwrap(), ManagerIdentifier::BlockHeader);
+
+        // Verify tick was called multiple times
+        assert!(tick_count.load(Ordering::Relaxed) > 0);
+    }
+}
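One operational note on the event bus: `run` breaks out of its loop on any `recv()` error, and tokio's broadcast channel reports overflow as an error (`RecvError::Lagged`), so channel capacity directly affects how long a slow manager survives. A standalone illustration of the lag semantics, with toy capacities:

```rust
use tokio::sync::broadcast;

#[tokio::main]
async fn main() {
    // Capacity 2: sending four values overruns slow receivers by two.
    let tx = broadcast::Sender::<u32>::new(2);
    let mut rx = tx.subscribe();
    for i in 0..4 {
        let _ = tx.send(i);
    }
    // The lagged receiver sees Err(Lagged(2)) once, then resumes at the
    // oldest value still buffered.
    assert!(matches!(
        rx.recv().await,
        Err(broadcast::error::RecvError::Lagged(2))
    ));
    assert_eq!(rx.recv().await.unwrap(), 2);
}
```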
diff --git a/dash-spv/src/test_utils/network.rs b/dash-spv/src/test_utils/network.rs
index b237baab1..dbce759c6 100644
--- a/dash-spv/src/test_utils/network.rs
+++ b/dash-spv/src/test_utils/network.rs
@@ -1,15 +1,20 @@
 use crate::error::{NetworkError, NetworkResult};
-use crate::network::{Message, MessageDispatcher, MessageType, NetworkManager};
+use crate::network::{
+    Message, MessageDispatcher, MessageType, NetworkEvent, NetworkManager, NetworkRequest,
+    RequestSender,
+};
 use async_trait::async_trait;
+use dashcore::network::constants::ServiceFlags;
 use dashcore::prelude::CoreBlockHeight;
 use dashcore::{
-    block::Header as BlockHeader, network::constants::ServiceFlags,
-    network::message::NetworkMessage, network::message_blockdata::GetHeadersMessage, BlockHash,
+    block::Header as BlockHeader, network::message::NetworkMessage,
+    network::message_blockdata::GetHeadersMessage, BlockHash,
 };
 use dashcore_hashes::Hash;
 use std::any::Any;
 use std::net::SocketAddr;
-use tokio::sync::mpsc::UnboundedReceiver;
+use tokio::sync::broadcast;
+use tokio::sync::mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender};
 
 pub fn test_socket_address(id: u8) -> SocketAddr {
     SocketAddr::from(([127, 0, 0, id], id as u16))
@@ -23,11 +28,18 @@ pub struct MockNetworkManager {
     peer_best_height: Option<CoreBlockHeight>,
     message_dispatcher: MessageDispatcher,
     sent_messages: Vec<NetworkMessage>,
+    /// Request sender for outgoing messages.
+    request_tx: UnboundedSender<NetworkRequest>,
+    /// Receiver generated in the constructor. Can be taken out of the struct for testing.
+    request_rx: Option<UnboundedReceiver<NetworkRequest>>,
+    /// Event bus for network events.
+    network_event_sender: broadcast::Sender<NetworkEvent>,
 }
 
 impl MockNetworkManager {
     /// Create a new mock network manager
     pub fn new() -> Self {
+        let (request_tx, request_rx) = unbounded_channel();
         Self {
             connected: true,
             connected_peer: SocketAddr::new(std::net::Ipv4Addr::LOCALHOST.into(), 9999),
@@ -35,9 +47,16 @@ impl MockNetworkManager {
             peer_best_height: None,
             message_dispatcher: MessageDispatcher::default(),
             sent_messages: Vec::new(),
+            request_tx,
+            request_rx: Some(request_rx),
+            network_event_sender: broadcast::Sender::new(100),
        }
     }
 
+    pub fn take_receiver(&mut self) -> Option<UnboundedReceiver<NetworkRequest>> {
+        self.request_rx.take()
+    }
+
     /// Add a chain of headers for testing
     pub fn add_headers_chain(&mut self, genesis_hash: BlockHash, count: usize) {
         let mut headers = Vec::new();
@@ -114,6 +133,10 @@ impl NetworkManager for MockNetworkManager {
         self.message_dispatcher.message_receiver(types)
     }
 
+    fn request_sender(&self) -> RequestSender {
+        RequestSender::new(self.request_tx.clone())
+    }
+
     async fn connect(&mut self) -> NetworkResult<()> {
         self.connected = true;
         Ok(())
@@ -162,4 +185,8 @@ impl NetworkManager for MockNetworkManager {
     async fn has_peer_with_service(&self, _service_flags: ServiceFlags) -> bool {
         self.connected
     }
+
+    fn subscribe_network_events(&self) -> broadcast::Receiver<NetworkEvent> {
+        self.network_event_sender.subscribe()
+    }
 }
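A hedged sketch of how a test can use the new mock plumbing; the assertions are illustrative, and the actual request-sending API is left out since it is not shown in this patch:

```rust
// Hypothetical test wiring for MockNetworkManager's request channel.
let mut mock = MockNetworkManager::new();
let request_rx = mock.take_receiver().expect("receiver present until taken");
assert!(mock.take_receiver().is_none()); // the receiver can only be taken once
let requests = mock.request_sender(); // cloneable sending half for a manager task
// A manager under test sends via `requests`; assertions then read `request_rx`.
let _ = (requests, request_rx);
```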
diff --git a/dash-spv/src/types.rs b/dash-spv/src/types.rs
index ddf44a498..b86ef227c 100644
--- a/dash-spv/src/types.rs
+++ b/dash-spv/src/types.rs
@@ -21,7 +21,7 @@ use dashcore::{
     hash_types::FilterHeader,
     network::constants::NetworkExt,
     sml::masternode_list_engine::MasternodeListEngine,
-    Amount, Block, BlockHash, Network, Transaction, Txid,
+    Amount, Block, BlockHash, ChainLock, Network, Transaction, Txid,
 };
 use serde::{Deserialize, Serialize};
@@ -669,18 +669,18 @@ pub enum SpvEvent {
     /// ChainLock received and validated.
     ChainLockReceived {
-        /// Block height of the ChainLock.
-        height: u32,
-        /// Block hash of the ChainLock.
-        hash: dashcore::BlockHash,
+        /// The complete ChainLock data.
+        chain_lock: ChainLock,
+        /// Whether the BLS signature was validated.
+        validated: bool,
     },
 
     /// InstantLock received and validated.
     InstantLockReceived {
-        /// Transaction ID locked by this InstantLock.
-        txid: Txid,
-        /// Transaction inputs locked by this InstantLock.
-        inputs: Vec<OutPoint>,
+        /// The complete InstantLock data.
+        instant_lock: dashcore::ephemerealdata::instant_lock::InstantLock,
+        /// Whether the BLS signature was validated.
+        validated: bool,
     },
 
     /// Unconfirmed transaction added to mempool.
diff --git a/dash-spv/src/validation/filter.rs b/dash-spv/src/validation/filter.rs
new file mode 100644
index 000000000..2947e22fa
--- /dev/null
+++ b/dash-spv/src/validation/filter.rs
@@ -0,0 +1,390 @@
+//! Filter validation functionality.
+//!
+//! Provides verification of compact block filters against their
+//! corresponding filter headers.
+
+use std::collections::HashMap;
+
+use dashcore::bip158::BlockFilter;
+use dashcore::hash_types::FilterHeader;
+use key_wallet_manager::wallet_manager::FilterMatchKey;
+use rayon::prelude::*;
+
+use crate::error::{ValidationError, ValidationResult};
+use crate::validation::Validator;
+
+/// Input data for filter validation.
+pub struct FilterValidationInput<'a> {
+    /// The filters to validate, keyed by (height, block_hash).
+    pub filters: &'a HashMap<FilterMatchKey, BlockFilter>,
+    /// Expected filter headers indexed by height.
+    pub expected_headers: &'a HashMap<u32, FilterHeader>,
+    /// Filter header at (batch_start - 1) for chaining verification.
+    pub prev_filter_header: FilterHeader,
+}
+
+/// Validates compact block filters against their expected headers.
+///
+/// Each filter's header is computed by chaining from the previous filter header,
+/// then compared against the expected header from storage. Uses rayon for
+/// parallel verification.
+#[derive(Default)]
+pub struct FilterValidator;
+
+impl FilterValidator {
+    pub fn new() -> Self {
+        Self
+    }
+}
+
+impl Validator<FilterValidationInput<'_>> for FilterValidator {
+    fn validate(&self, input: FilterValidationInput<'_>) -> ValidationResult<()> {
+        if input.filters.is_empty() {
+            return Ok(());
+        }
+
+        // Build the prev_header chain for verification.
+        // Each filter at height H needs prev_header at H-1.
+        // We start with prev_filter_header and chain forward using expected headers.
+        let mut prev_headers: HashMap<u32, FilterHeader> = HashMap::new();
+
+        // Sort expected header heights to build chain correctly
+        let mut heights: Vec<u32> = input.expected_headers.keys().copied().collect();
+        heights.sort();
+
+        // Reject non-contiguous heights since the chain cannot be verified with gaps
+        for window in heights.windows(2) {
+            if window[1] != window[0] + 1 {
+                return Err(ValidationError::InvalidFilterHeaderChain(format!(
+                    "Non-contiguous filter header heights: gap between {} and {}",
+                    window[0], window[1]
+                )));
+            }
+        }
+
+        // Build prev_header map by chaining from prev_filter_header through expected headers
+        let mut prev = input.prev_filter_header;
+        for &height in &heights {
+            prev_headers.insert(height, prev);
+            prev = input.expected_headers[&height];
+        }
+
+        // Verify all filters in parallel
+        let failures: Vec<(u32, String)> = input
+            .filters
+            .par_iter()
+            .filter_map(|(key, filter)| {
+                let height = key.height();
+
+                // Get prev_header for this filter
+                let Some(prev_header) = prev_headers.get(&height) else {
+                    return Some((height, "Missing prev header".to_string()));
+                };
+
+                // Get expected header for this filter
+                let Some(expected_header) = input.expected_headers.get(&height) else {
+                    return Some((height, "Missing expected header".to_string()));
+                };
+
+                // Compute header from filter and compare
+                let computed = filter.filter_header(prev_header);
+                if computed != *expected_header {
+                    return Some((
+                        height,
+                        format!(
+                            "Header mismatch: computed {:?} != expected {:?}",
+                            computed, expected_header
+                        ),
+                    ));
+                }
+
+                None // Verification passed
+            })
+            .collect();
+
+        if !failures.is_empty() {
+            let details: Vec<String> = failures
+                .iter()
+                .take(5) // Limit to first 5 failures for the error message
+                .map(|(h, msg)| format!("height {}: {}", h, msg))
+                .collect();
+
+            tracing::error!(
+                "Filter verification failed for {} filters: {:?}",
+                failures.len(),
+                details
+            );
+
+            return Err(ValidationError::InvalidFilterHeaderChain(format!(
+                "Filter verification failed for {} filters.
First failure: {}", + failures.len(), + details.first().unwrap_or(&"unknown".to_string()) + ))); + } + + tracing::debug!("Verified {} filters successfully", input.filters.len()); + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use dashcore::bip158::BlockFilter; + use dashcore::BlockHash; + use dashcore_hashes::Hash; + use key_wallet_manager::wallet_manager::FilterMatchKey; + + use super::*; + + fn test_hash(n: u8) -> BlockHash { + BlockHash::from_byte_array([n; 32]) + } + + fn zero_filter_header() -> FilterHeader { + FilterHeader::all_zeros() + } + + #[test] + fn test_verify_empty_batch() { + let validator = FilterValidator::new(); + let filters = HashMap::new(); + let headers = HashMap::new(); + let prev = zero_filter_header(); + + let input = FilterValidationInput { + filters: &filters, + expected_headers: &headers, + prev_filter_header: prev, + }; + + let result = validator.validate(input); + assert!(result.is_ok()); + } + + #[test] + fn test_verify_single_filter_success() { + let validator = FilterValidator::new(); + + // Create a filter + let filter_data = vec![0u8; 10]; + let filter = BlockFilter::new(&filter_data); + let prev_header = zero_filter_header(); + + // Compute what the expected header should be + let expected_header = filter.filter_header(&prev_header); + + // Build inputs + let mut filters = HashMap::new(); + let key = FilterMatchKey::new(1, test_hash(1)); + filters.insert(key, filter); + + let mut expected_headers = HashMap::new(); + expected_headers.insert(1, expected_header); + + let input = FilterValidationInput { + filters: &filters, + expected_headers: &expected_headers, + prev_filter_header: prev_header, + }; + + // Verify should pass + let result = validator.validate(input); + assert!(result.is_ok()); + } + + #[test] + fn test_verify_single_filter_failure() { + let validator = FilterValidator::new(); + + // Create a filter + let filter_data = vec![0u8; 10]; + let filter = BlockFilter::new(&filter_data); + let prev_header = zero_filter_header(); + + // Use a WRONG expected header + let wrong_expected = FilterHeader::from_byte_array([0xFF; 32]); + + // Build inputs + let mut filters = HashMap::new(); + let key = FilterMatchKey::new(1, test_hash(1)); + filters.insert(key, filter); + + let mut expected_headers = HashMap::new(); + expected_headers.insert(1, wrong_expected); + + let input = FilterValidationInput { + filters: &filters, + expected_headers: &expected_headers, + prev_filter_header: prev_header, + }; + + // Verify should fail + let result = validator.validate(input); + assert!(result.is_err()); + assert!(matches!(result.unwrap_err(), ValidationError::InvalidFilterHeaderChain(_))); + } + + #[test] + fn test_verify_multiple_filters_success() { + let validator = FilterValidator::new(); + let prev_header = zero_filter_header(); + + // Create filters and compute expected headers in chain + let filter_data_1 = vec![1u8; 10]; + let filter_1 = BlockFilter::new(&filter_data_1); + let expected_1 = filter_1.filter_header(&prev_header); + + let filter_data_2 = vec![2u8; 10]; + let filter_2 = BlockFilter::new(&filter_data_2); + let expected_2 = filter_2.filter_header(&expected_1); + + let filter_data_3 = vec![3u8; 10]; + let filter_3 = BlockFilter::new(&filter_data_3); + let expected_3 = filter_3.filter_header(&expected_2); + + // Build inputs + let mut filters = HashMap::new(); + filters.insert(FilterMatchKey::new(1, test_hash(1)), filter_1); + filters.insert(FilterMatchKey::new(2, test_hash(2)), filter_2); + filters.insert(FilterMatchKey::new(3, test_hash(3)), 
filter_3);
+
+        let mut expected_headers = HashMap::new();
+        expected_headers.insert(1, expected_1);
+        expected_headers.insert(2, expected_2);
+        expected_headers.insert(3, expected_3);
+
+        let input = FilterValidationInput {
+            filters: &filters,
+            expected_headers: &expected_headers,
+            prev_filter_header: prev_header,
+        };
+
+        // Verify should pass
+        let result = validator.validate(input);
+        assert!(result.is_ok());
+    }
+
+    #[test]
+    fn test_verify_missing_expected_header() {
+        let validator = FilterValidator::new();
+
+        let filter_data = vec![0u8; 10];
+        let filter = BlockFilter::new(&filter_data);
+        let prev_header = zero_filter_header();
+
+        // Build inputs with NO expected header
+        let mut filters = HashMap::new();
+        let key = FilterMatchKey::new(1, test_hash(1));
+        filters.insert(key, filter);
+
+        let expected_headers = HashMap::new(); // Empty!
+
+        let input = FilterValidationInput {
+            filters: &filters,
+            expected_headers: &expected_headers,
+            prev_filter_header: prev_header,
+        };
+
+        // Verify should fail
+        let result = validator.validate(input);
+        assert!(result.is_err());
+    }
+
+    #[test]
+    fn test_verify_large_batch_parallel() {
+        let validator = FilterValidator::new();
+
+        // Create 150 filters to exercise rayon parallel verification
+        let prev_header = zero_filter_header();
+        let mut filters = HashMap::new();
+        let mut expected_headers = HashMap::new();
+
+        let mut prev = prev_header;
+        for i in 1..=150u32 {
+            let filter_data: Vec<u8> = (0..20).map(|j| ((i + j) % 256) as u8).collect();
+            let filter = BlockFilter::new(&filter_data);
+            let expected = filter.filter_header(&prev);
+            expected_headers.insert(i, expected);
+            filters.insert(FilterMatchKey::new(i, test_hash(i as u8)), filter);
+            prev = expected;
+        }
+
+        let input = FilterValidationInput {
+            filters: &filters,
+            expected_headers: &expected_headers,
+            prev_filter_header: prev_header,
+        };
+
+        let result = validator.validate(input);
+        assert!(result.is_ok());
+    }
+
+    #[test]
+    fn test_verify_large_batch_with_failure() {
+        let validator = FilterValidator::new();
+
+        // Create batch where one filter fails verification
+        let prev_header = zero_filter_header();
+        let mut filters = HashMap::new();
+        let mut expected_headers = HashMap::new();
+
+        let mut prev = prev_header;
+        for i in 1..=100u32 {
+            let filter_data: Vec<u8> = (0..20).map(|j| ((i + j) % 256) as u8).collect();
+            let filter = BlockFilter::new(&filter_data);
+            let expected = filter.filter_header(&prev);
+
+            // Corrupt one expected header in the middle
+            if i == 50 {
+                expected_headers.insert(i, FilterHeader::from_byte_array([0xFF; 32]));
+            } else {
+                expected_headers.insert(i, expected);
+            }
+
+            filters.insert(FilterMatchKey::new(i, test_hash(i as u8)), filter);
+            prev = expected;
+        }
+
+        let input = FilterValidationInput {
+            filters: &filters,
+            expected_headers: &expected_headers,
+            prev_filter_header: prev_header,
+        };
+
+        let result = validator.validate(input);
+        assert!(result.is_err());
+        assert!(matches!(result.unwrap_err(), ValidationError::InvalidFilterHeaderChain(_)));
+    }
+
+    #[test]
+    fn test_verify_noncontiguous_heights_rejected() {
+        let validator = FilterValidator::new();
+
+        // Non-contiguous heights should be rejected since the chain cannot be verified
+        let prev_header = zero_filter_header();
+        let mut filters = HashMap::new();
+        let mut expected_headers = HashMap::new();
+
+        let heights = [10u32, 20, 30];
+        let mut prev = prev_header;
+
+        for &h in &heights {
+            let filter_data = vec![h as u8; 10];
+            let filter = BlockFilter::new(&filter_data);
+            let expected = filter.filter_header(&prev);
+            expected_headers.insert(h, expected);
+            filters.insert(FilterMatchKey::new(h, test_hash(h as u8)), filter);
+            prev = expected;
+        }
+
+        let input = FilterValidationInput {
+            filters: &filters,
+            expected_headers: &expected_headers,
+            prev_filter_header: prev_header,
+        };
+
+        let result = validator.validate(input);
+        assert!(result.is_err());
+        assert!(matches!(result.unwrap_err(), ValidationError::InvalidFilterHeaderChain(_)));
+    }
+}
diff --git a/dash-spv/src/validation/mod.rs b/dash-spv/src/validation/mod.rs
index 9ae730b37..0f4b428e9 100644
--- a/dash-spv/src/validation/mod.rs
+++ b/dash-spv/src/validation/mod.rs
@@ -1,8 +1,10 @@
 //! Validation functionality for the Dash SPV client.
 
+mod filter;
 mod header;
 mod instantlock;
 
+pub use filter::{FilterValidationInput, FilterValidator};
 pub use header::BlockHeaderValidator;
 pub use instantlock::InstantLockValidator;
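To make the chaining rule concrete: each filter header commits to the filter's hash and the previous header. A standalone sketch using only `dashcore` APIs that already appear above (the filter bytes are dummies):

```rust
use dashcore::bip158::BlockFilter;
use dashcore::hash_types::FilterHeader;
use dashcore_hashes::Hash;

fn main() {
    // BIP-158: header(H) = hash(filter_hash(H) || header(H-1)).
    let genesis_prev = FilterHeader::all_zeros();
    let f1 = BlockFilter::new(&[0x01; 8]);
    let h1 = f1.filter_header(&genesis_prev);
    let f2 = BlockFilter::new(&[0x02; 8]);
    let h2 = f2.filter_header(&h1);
    // Chaining from the wrong previous header yields a different result,
    // which is exactly what FilterValidator detects.
    assert_ne!(f2.filter_header(&genesis_prev), h2);
}
```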
diff --git a/dash-spv/tests/chainlock_simple_test.rs b/dash-spv/tests/chainlock_simple_test.rs
deleted file mode 100644
index c768d0a0d..000000000
--- a/dash-spv/tests/chainlock_simple_test.rs
+++ /dev/null
@@ -1,110 +0,0 @@
-//! Simple integration test for ChainLock validation flow
-
-use dash_spv::client::{ClientConfig, DashSpvClient};
-use dash_spv::network::PeerNetworkManager;
-use dash_spv::storage::DiskStorageManager;
-use dash_spv::types::ValidationMode;
-use dashcore::Network;
-use key_wallet::wallet::managed_wallet_info::ManagedWalletInfo;
-use key_wallet_manager::wallet_manager::WalletManager;
-use std::sync::Arc;
-use tempfile::TempDir;
-use tokio::sync::RwLock;
-use tracing::Level;
-
-fn init_logging() {
-    let _ = tracing_subscriber::fmt()
-        .with_max_level(Level::DEBUG)
-        .with_target(false)
-        .with_thread_ids(true)
-        .with_line_number(true)
-        .try_init();
-}
-
-#[tokio::test]
-async fn test_chainlock_validation_flow() {
-    init_logging();
-
-    // Create temp directory for storage
-    let temp_dir = TempDir::new().unwrap();
-
-    // Create client config with masternodes enabled
-    let network = Network::Dash;
-    let enable_masternodes = true;
-    let config = ClientConfig {
-        network,
-        enable_filters: false,
-        enable_masternodes,
-        validation_mode: ValidationMode::Basic,
-        storage_path: temp_dir.path().to_path_buf(),
-        peers: vec!["127.0.0.1:9999".parse().unwrap()], // Dummy peer to satisfy config
-        ..Default::default()
-    };
-
-    // Create network manager
-    let network_manager = PeerNetworkManager::new(&config).await.unwrap();
-
-    // Create storage manager
-    let storage_manager = DiskStorageManager::new(&config).await.unwrap();
-
-    // Create wallet manager
-    let wallet = Arc::new(RwLock::new(WalletManager::<ManagedWalletInfo>::new(config.network)));
-
-    // Create the SPV client
-    let client =
-        DashSpvClient::new(config, network_manager, storage_manager, wallet).await.unwrap();
-
-    // Test that update_chainlock_validation works
-    let updated = client.update_chainlock_validation().unwrap();
-
-    // The update may succeed if masternodes are enabled and terminal block data is available
-    // This is expected behavior - the client pre-loads terminal block data for mainnet
-    if enable_masternodes && network == Network::Dash {
-        // On mainnet with masternodes enabled, terminal block data is pre-loaded
-        assert!(updated, "Should have masternode engine with terminal block data");
-    } else {
-        // Otherwise should be false
-        assert!(!updated, "Should not have masternode engine before sync");
-    }
-
-    tracing::info!("✅ ChainLock validation flow test passed");
-}
-
-#[tokio::test]
-async fn test_chainlock_manager_initialization() {
-    init_logging();
-
-    // Create temp directory for storage
-    let temp_dir = TempDir::new().unwrap();
-
-    // Create client config
-    let config = ClientConfig {
-        network: Network::Dash,
-        enable_filters: false,
-        enable_masternodes: false,
-        validation_mode: ValidationMode::Basic,
-        storage_path: temp_dir.path().to_path_buf(),
-        peers: vec!["127.0.0.1:9999".parse().unwrap()], // Dummy peer to satisfy config
-        ..Default::default()
-    };
-
-    // Create network manager
-    let network_manager = PeerNetworkManager::new(&config).await.unwrap();
-
-    // Create storage manager
-    let storage_manager = DiskStorageManager::new(&config).await.unwrap();
-
-    // Create wallet manager
-    let wallet = Arc::new(RwLock::new(WalletManager::<ManagedWalletInfo>::new(config.network)));
-
-    // Create the SPV client
-    let client =
-        DashSpvClient::new(config, network_manager, storage_manager, wallet).await.unwrap();
-
-    // Verify chainlock manager is initialized
-    // We can't directly access it from tests, but we can verify the client works
-    let sync_progress = client.sync_progress().await.unwrap();
-    assert_eq!(sync_progress.header_height, 0);
-
-    tracing::info!("✅ ChainLock manager initialization test passed");
-}
diff --git a/dash-spv/tests/header_sync_test.rs b/dash-spv/tests/header_sync_test.rs
index 69ffd6272..dbe3c7c56 100644
--- a/dash-spv/tests/header_sync_test.rs
+++ b/dash-spv/tests/header_sync_test.rs
@@ -13,9 +13,11 @@ use key_wallet::wallet::managed_wallet_info::ManagedWalletInfo;
 use key_wallet_manager::wallet_manager::WalletManager;
 use log::info;
 use std::sync::Arc;
+use std::time::Duration;
 use tempfile::TempDir;
 use test_case::test_case;
 use tokio::sync::RwLock;
+use tokio::time::timeout;
 
 #[tokio::test]
 async fn test_header_sync_with_client_integration() {
@@ -40,14 +42,25 @@ async fn test_header_sync_with_client_integration() {
     let client = DashSpvClient::new(config, network_manager, storage_manager, wallet).await;
     assert!(client.is_ok(), "Client creation should succeed");
 
-    let client = client.unwrap();
+    let mut client = client.unwrap();
 
     // Verify client starts with empty state
-    let stats = client.sync_progress().await;
-    assert!(stats.is_ok());
-
-    let stats = stats.unwrap();
-    assert_eq!(stats.header_height, 0);
+    client.start().await.unwrap();
+
+    // Poll until the headers progress becomes available (async managers may not be ready immediately)
+    let result = timeout(Duration::from_secs(5), async {
+        loop {
+            let progress = client.sync_progress();
+            if let Ok(headers) = progress.headers() {
+                return headers.current_height();
+            }
+            tokio::time::sleep(Duration::from_millis(50)).await;
+        }
+    })
+    .await
+    .expect("Timed out waiting for headers progress to become available");
+
+    assert_eq!(result, 0);
 
     info!("Header sync client integration test completed");
 }
diff --git a/dash/src/sml/masternode_list_engine/mod.rs b/dash/src/sml/masternode_list_engine/mod.rs
index 8236868a8..fd82c9ca2 100644
--- a/dash/src/sml/masternode_list_engine/mod.rs
+++ b/dash/src/sml/masternode_list_engine/mod.rs
@@ -33,6 +33,10 @@ use hashes::Hash;
 #[cfg(feature = "serde")]
 use serde::{Deserialize, Serialize};
 
+/// Depth offset between cycle boundary and work block (matches Dash Core WORK_DIFF_DEPTH).
+/// The mnListDiffH in QRInfo is at (cycle_height - WORK_DIFF_DEPTH), not at the cycle boundary itself.
+pub const WORK_DIFF_DEPTH: u32 = 8;
+
 #[derive(Clone, Eq, PartialEq, Default)]
 #[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
 #[cfg_attr(feature = "bincode", derive(Encode, Decode))]
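A small worked example of the offset; the cycle height is hypothetical and the constant is copied locally so the snippet stands alone:

```rust
// Mirrors the WORK_DIFF_DEPTH constant introduced above.
const WORK_DIFF_DEPTH: u32 = 8;

fn main() {
    let cycle_height: u32 = 905_000; // hypothetical DKG cycle boundary
    // The mnListDiffH in a QRInfo targets the work block, eight blocks back.
    let work_block_height = cycle_height - WORK_DIFF_DEPTH;
    assert_eq!(work_block_height, 904_992);
}
```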
diff --git a/key-wallet-manager/src/events.rs b/key-wallet-manager/src/events.rs
new file mode 100644
index 000000000..47a67d6c4
--- /dev/null
+++ b/key-wallet-manager/src/events.rs
@@ -0,0 +1,69 @@
+//! Wallet events for notifying consumers of wallet state changes.
+//!
+//! These events are emitted by the WalletManager when significant wallet
+//! operations occur, allowing consumers to receive push-based notifications.
+
+use crate::wallet_manager::WalletId;
+use alloc::string::String;
+use alloc::vec::Vec;
+use dashcore::{Address, Txid};
+
+/// Events emitted by the wallet manager.
+///
+/// Each event represents a meaningful wallet state change that consumers
+/// may want to react to.
+#[derive(Debug, Clone)]
+pub enum WalletEvent {
+    /// A transaction relevant to the wallet was received.
+    TransactionReceived {
+        /// ID of the affected wallet.
+        wallet_id: WalletId,
+        /// Account index within the wallet.
+        account_index: u32,
+        /// Transaction ID.
+        txid: Txid,
+        /// Net amount change (positive for incoming, negative for outgoing).
+        amount: i64,
+        /// Addresses involved in the transaction.
+        addresses: Vec<Address>,
, + }, + /// The wallet balance has changed. + BalanceUpdated { + /// ID of the affected wallet. + wallet_id: WalletId, + /// New spendable balance in duffs (confirmed and mature). + spendable: u64, + /// New unconfirmed balance in duffs. + unconfirmed: u64, + /// New immature balance (coinbase UTXOs not yet mature). + immature: u64, + /// New locked balance (UTXOs reserved for specific purposes like CoinJoin) + locked: u64, + }, +} + +impl WalletEvent { + /// Get a short description of this event for logging. + pub fn description(&self) -> String { + match self { + WalletEvent::TransactionReceived { + txid, + amount, + .. + } => { + format!("TransactionReceived(txid={}, amount={})", txid, amount) + } + WalletEvent::BalanceUpdated { + spendable, + unconfirmed, + immature, + .. + } => { + format!( + "BalanceUpdated(spendable={}, unconfirmed={}, immature={})", + spendable, unconfirmed, immature + ) + } + } + } +} diff --git a/key-wallet-manager/src/lib.rs b/key-wallet-manager/src/lib.rs index 4a16b84f7..c923f747b 100644 --- a/key-wallet-manager/src/lib.rs +++ b/key-wallet-manager/src/lib.rs @@ -24,6 +24,7 @@ extern crate std; #[cfg(any(test, feature = "test-utils"))] pub mod test_utils; +pub mod events; pub mod wallet_interface; pub mod wallet_manager; @@ -38,6 +39,7 @@ pub use dashcore::blockdata::transaction::Transaction; pub use dashcore::{OutPoint, TxIn, TxOut}; // Export our high-level types +pub use events::WalletEvent; pub use key_wallet::wallet::managed_wallet_info::coin_selection::{ CoinSelector, SelectionResult, SelectionStrategy, }; diff --git a/key-wallet-manager/src/wallet_manager/mod.rs b/key-wallet-manager/src/wallet_manager/mod.rs index 6b02c7999..70254c476 100644 --- a/key-wallet-manager/src/wallet_manager/mod.rs +++ b/key-wallet-manager/src/wallet_manager/mod.rs @@ -27,6 +27,14 @@ use std::collections::BTreeSet; use std::str::FromStr; use zeroize::Zeroize; +use crate::WalletEvent; +#[cfg(feature = "std")] +use tokio::sync::broadcast; + +/// Default capacity for the wallet event bus. +#[cfg(feature = "std")] +const DEFAULT_WALLET_EVENT_CAPACITY: usize = 1000; + /// Unique identifier for a wallet (32-byte hash) pub type WalletId = [u8; 32]; @@ -76,6 +84,9 @@ pub struct WalletManager { wallets: BTreeMap, /// Mutable wallet info indexed by wallet ID wallet_infos: BTreeMap, + /// Event sender for wallet events + #[cfg(feature = "std")] + event_sender: broadcast::Sender, } impl WalletManager { @@ -86,9 +97,25 @@ impl WalletManager { synced_height: 0, wallets: BTreeMap::new(), wallet_infos: BTreeMap::new(), + #[cfg(feature = "std")] + event_sender: broadcast::Sender::new(DEFAULT_WALLET_EVENT_CAPACITY), } } + /// Subscribe to wallet events. + /// + /// Returns a receiver that will receive all wallet events emitted by this manager. + #[cfg(feature = "std")] + pub fn subscribe_events(&self) -> broadcast::Receiver { + self.event_sender.subscribe() + } + + /// Get a reference to the event sender for emitting events. 
+    /// Get a reference to the event sender for emitting events.
+    #[cfg(feature = "std")]
+    pub fn event_sender(&self) -> &broadcast::Sender<WalletEvent> {
+        &self.event_sender
+    }
+
     /// Create a new wallet from mnemonic and add it to the manager
     /// Returns the computed wallet ID
     pub fn create_wallet_from_mnemonic(
@@ -494,7 +521,31 @@ impl WalletManager {
             if check_result.is_new_transaction {
                 result.is_new_transaction = true;
             }
-            // Note: balance update is already handled in check_transaction
+
+            // Emit TransactionReceived events for each affected account
+            #[cfg(feature = "std")]
+            for account_match in &check_result.affected_accounts {
+                let Some(account_index) = account_match.account_type_match.account_index()
+                else {
+                    continue;
+                };
+                let amount = account_match.received as i64 - account_match.sent as i64;
+                let addresses: Vec<Address> = account_match
+                    .account_type_match
+                    .all_involved_addresses()
+                    .into_iter()
+                    .map(|info| info.address)
+                    .collect();
+
+                let event = WalletEvent::TransactionReceived {
+                    wallet_id,
+                    account_index,
+                    txid: tx.txid(),
+                    amount,
+                    addresses,
+                };
+                let _ = self.event_sender.send(event);
+            }
         }
 
         result.new_addresses.extend(check_result.new_addresses);
diff --git a/key-wallet-manager/src/wallet_manager/process_block.rs b/key-wallet-manager/src/wallet_manager/process_block.rs
index 515fc48bd..32b074977 100644
--- a/key-wallet-manager/src/wallet_manager/process_block.rs
+++ b/key-wallet-manager/src/wallet_manager/process_block.rs
@@ -111,8 +111,25 @@ impl WalletInterface for WalletM
     fn update_synced_height(&mut self, height: CoreBlockHeight) {
         self.synced_height = height;
-        for info in self.wallet_infos.values_mut() {
+
+        // Update each wallet and emit BalanceUpdated events if balance changed
+        for (wallet_id, info) in self.wallet_infos.iter_mut() {
+            let old_balance = info.balance();
             info.update_synced_height(height);
+            let new_balance = info.balance();
+
+            // Emit event if balance changed
+            #[cfg(feature = "std")]
+            if old_balance != new_balance {
+                let event = crate::WalletEvent::BalanceUpdated {
+                    wallet_id: *wallet_id,
+                    spendable: new_balance.spendable(),
+                    unconfirmed: new_balance.unconfirmed(),
+                    immature: new_balance.immature(),
+                    locked: new_balance.locked(),
+                };
+                let _ = self.event_sender.send(event);
+            }
         }
     }
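Consumers that only care about balance changes can filter on the variant; a sketch, with `rx` obtained from `subscribe_events()` as above:

```rust
// Hypothetical handler reacting to BalanceUpdated only.
while let Ok(event) = rx.recv().await {
    if let WalletEvent::BalanceUpdated {
        wallet_id,
        spendable,
        unconfirmed,
        ..
    } = event
    {
        // wallet_id is a raw [u8; 32]; hex-encode it however the app prefers.
        tracing::info!(?wallet_id, spendable, unconfirmed, "balance updated");
    }
}
```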
diff --git a/key-wallet/src/managed_account/mod.rs b/key-wallet/src/managed_account/mod.rs
index 30120b99f..340fb6b3a 100644
--- a/key-wallet/src/managed_account/mod.rs
+++ b/key-wallet/src/managed_account/mod.rs
@@ -27,7 +27,7 @@ use dashcore::{Transaction, Txid};
 use managed_account_type::ManagedAccountType;
 #[cfg(feature = "serde")]
 use serde::{Deserialize, Serialize};
-use std::collections::BTreeSet;
+use std::collections::{BTreeSet, HashSet};
 
 pub mod address_pool;
 pub mod managed_account_collection;
@@ -44,7 +44,7 @@ pub mod transaction_record;
 /// metadata, and balance information. It is managed separately from
 /// the immutable Account structure.
 #[derive(Debug, Clone)]
-#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
+#[cfg_attr(feature = "serde", derive(Serialize))]
 pub struct ManagedCoreAccount {
     /// Account type with embedded address pools and index
     pub account_type: ManagedAccountType,
@@ -60,6 +60,10 @@ pub struct ManagedCoreAccount {
     pub transactions: BTreeMap<Txid, TransactionRecord>,
     /// UTXO set for this account
     pub utxos: BTreeMap<OutPoint, Utxo>,
+    /// Outpoints spent by recorded transactions.
+    /// Rebuilt from `transactions` during deserialization.
+    #[cfg_attr(feature = "serde", serde(skip_serializing))]
+    spent_outpoints: HashSet<OutPoint>,
 }
 
 impl ManagedCoreAccount {
@@ -73,9 +77,15 @@ impl ManagedCoreAccount {
             balance: WalletCoreBalance::default(),
             transactions: BTreeMap::new(),
             utxos: BTreeMap::new(),
+            spent_outpoints: HashSet::new(),
         }
     }
 
+    /// Check if an outpoint was spent by a previously recorded transaction.
+    fn is_outpoint_spent(&self, outpoint: &OutPoint) -> bool {
+        self.spent_outpoints.contains(outpoint)
+    }
+
     /// Create a ManagedAccount from an Account
     pub fn from_account(account: &super::Account) -> Self {
         // Use the account's public key as the key source
@@ -306,6 +316,22 @@ impl ManagedCoreAccount {
                             txid,
                             vout: vout as u32,
                         };
+
+                        // Check if this outpoint was already spent by a transaction we've seen.
+                        // This handles out-of-order block processing during rescan where a
+                        // spending transaction at a higher height may be processed before
+                        // the transaction that created the UTXO.
+                        // TODO: This is mostly needed for wallet rescan from storage, where a
+                        // timing issue with event processing might lead to an invalid UTXO
+                        // set / balances. There might be a way around it.
+                        if self.is_outpoint_spent(&outpoint) {
+                            tracing::debug!(
+                                outpoint = %outpoint,
+                                "Skipping UTXO already spent by previously processed transaction"
+                            );
+                            continue;
+                        }
+
                         let txout = dashcore::TxOut {
                             value: output.value,
                             script_pubkey: output.script_pubkey.clone(),
@@ -323,9 +349,17 @@ impl ManagedCoreAccount {
                     }
                 }
 
-                // Remove UTXOs spent by this transaction
+                // Remove UTXOs spent by this transaction and track spent outpoints
                 for input in &tx.input {
-                    self.utxos.remove(&input.previous_output);
+                    self.spent_outpoints.insert(input.previous_output);
+
+                    if self.utxos.remove(&input.previous_output).is_some() {
+                        tracing::debug!(
+                            outpoint = %input.previous_output,
+                            txid = %tx.txid(),
+                            "Removed spent UTXO"
+                        );
+                    }
                 }
             }
             _ => {}
@@ -975,3 +1009,42 @@ impl ManagedAccountTrait for ManagedCoreAccount {
         &mut self.utxos
     }
 }
+
+#[cfg(feature = "serde")]
+impl<'de> Deserialize<'de> for ManagedCoreAccount {
+    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
+    where
+        D: serde::Deserializer<'de>,
+    {
+        #[derive(Deserialize)]
+        struct Helper {
+            account_type: ManagedAccountType,
+            network: Network,
+            metadata: AccountMetadata,
+            is_watch_only: bool,
+            balance: WalletCoreBalance,
+            transactions: BTreeMap<Txid, TransactionRecord>,
+            utxos: BTreeMap<OutPoint, Utxo>,
+        }
+
+        let helper = Helper::deserialize(deserializer)?;
+
+        let spent_outpoints = helper
+            .transactions
+            .values()
+            .flat_map(|record| &record.transaction.input)
+            .map(|input| input.previous_output)
+            .collect();
+
+        Ok(ManagedCoreAccount {
+            account_type: helper.account_type,
+            network: helper.network,
+            metadata: helper.metadata,
+            is_watch_only: helper.is_watch_only,
+            balance: helper.balance,
+            transactions: helper.transactions,
+            utxos: helper.utxos,
+            spent_outpoints,
+        })
+    }
+}
diff --git a/key-wallet/src/tests/mod.rs b/key-wallet/src/tests/mod.rs
index 339ff67db..811240960 100644
--- a/key-wallet/src/tests/mod.rs
+++ b/key-wallet/src/tests/mod.rs
@@ -24,4 +24,6 @@ mod special_transaction_tests;
 
 mod transaction_tests;
 
+mod spent_outpoints_tests;
+
 mod wallet_tests;
diff --git a/key-wallet/src/tests/spent_outpoints_tests.rs b/key-wallet/src/tests/spent_outpoints_tests.rs
new file mode 100644
index 000000000..a9e941b26
--- /dev/null
+++ b/key-wallet/src/tests/spent_outpoints_tests.rs
@@ -0,0 +1,135 @@
+//! Tests for spent_outpoints deserialization and tracking.
+
+use dashcore::blockdata::transaction::{OutPoint, Transaction};
+use dashcore::{TxIn, Txid};
+
+use crate::account::TransactionRecord;
+use crate::managed_account::ManagedCoreAccount;
+
+/// Create a transaction that spends the given outpoints.
+fn spending_tx(spent: &[OutPoint]) -> Transaction {
+    Transaction {
+        version: 1,
+        lock_time: 0,
+        input: spent
+            .iter()
+            .map(|op| TxIn {
+                previous_output: *op,
+                ..Default::default()
+            })
+            .collect(),
+        output: Vec::new(),
+        special_transaction_payload: None,
+    }
+}
+
+/// Create a receive-only transaction (no meaningful inputs).
+fn receive_only_tx() -> Transaction { + Transaction { + version: 1, + lock_time: 0, + input: vec![TxIn::default()], + output: Vec::new(), + special_transaction_payload: None, + } +} + +fn record_from_tx(tx: &Transaction) -> TransactionRecord { + TransactionRecord::new(tx.clone(), 0, 0, false) +} + +#[test] +fn fresh_account_has_empty_spent_outpoints() { + let account = ManagedCoreAccount::dummy_bip44(); + assert!(account.transactions.is_empty()); + + let probe = OutPoint::new(Txid::from([0xAA; 32]), 0); + // Accessing spent_outpoints on a fresh account should not panic or misbehave. + // We verify indirectly via serde round-trip (spent_outpoints is private). + let json = serde_json::to_string(&account).unwrap(); + let deserialized: ManagedCoreAccount = serde_json::from_str(&json).unwrap(); + // No transactions, so spent_outpoints stays empty after round-trip. + assert!(deserialized.transactions.is_empty()); + // Confirm the serialized form does not contain spent_outpoints. + assert!(!json.contains("spent_outpoints")); + let _ = probe; // used only for clarity of intent +} + +#[test] +fn serde_round_trip_rebuilds_spent_outpoints() { + let mut account = ManagedCoreAccount::dummy_bip44(); + + let outpoint_a = OutPoint::new(Txid::from([0x01; 32]), 0); + let outpoint_b = OutPoint::new(Txid::from([0x02; 32]), 1); + let tx = spending_tx(&[outpoint_a, outpoint_b]); + let txid = tx.txid(); + account.transactions.insert(txid, record_from_tx(&tx)); + + // Serialize (spent_outpoints is skipped) + let json = serde_json::to_string(&account).unwrap(); + assert!(!json.contains("spent_outpoints")); + + // Deserialize: spent_outpoints should be rebuilt from transactions + let deserialized: ManagedCoreAccount = serde_json::from_str(&json).unwrap(); + assert_eq!(deserialized.transactions.len(), 1); + + // Verify the rebuilt set by serializing again and comparing transactions + // (spent_outpoints is private, so we test behavior through a second round-trip + // to confirm stability) + let json2 = serde_json::to_string(&deserialized).unwrap(); + let deserialized2: ManagedCoreAccount = serde_json::from_str(&json2).unwrap(); + assert_eq!(deserialized2.transactions.len(), 1); +} + +#[test] +fn receive_only_account_round_trips_correctly() { + let mut account = ManagedCoreAccount::dummy_bip44(); + + // Add a receive-only transaction (coinbase-like, no real spent outpoints) + let tx = receive_only_tx(); + let txid = tx.txid(); + account.transactions.insert(txid, record_from_tx(&tx)); + + assert_eq!(account.transactions.len(), 1); + + // Round-trip should work without issues (no rebuild loop) + let json = serde_json::to_string(&account).unwrap(); + let deserialized: ManagedCoreAccount = serde_json::from_str(&json).unwrap(); + assert_eq!(deserialized.transactions.len(), 1); + + // A second round-trip should be stable + let json2 = serde_json::to_string(&deserialized).unwrap(); + let deserialized2: ManagedCoreAccount = serde_json::from_str(&json2).unwrap(); + assert_eq!(deserialized2.transactions.len(), 1); +} + +#[test] +fn multiple_transactions_all_inputs_tracked_after_round_trip() { + let mut account = ManagedCoreAccount::dummy_bip44(); + + let outpoint_1 = OutPoint::new(Txid::from([0x10; 32]), 0); + let outpoint_2 = OutPoint::new(Txid::from([0x20; 32]), 0); + let outpoint_3 = OutPoint::new(Txid::from([0x30; 32]), 2); + + let tx1 = spending_tx(&[outpoint_1]); + let tx2 = spending_tx(&[outpoint_2, outpoint_3]); + + account.transactions.insert(tx1.txid(), record_from_tx(&tx1)); + 
account.transactions.insert(tx2.txid(), record_from_tx(&tx2));
+
+    let json = serde_json::to_string(&account).unwrap();
+    let deserialized: ManagedCoreAccount = serde_json::from_str(&json).unwrap();
+
+    // All three outpoints should be in the rebuilt spent set.
+    // We verify by confirming the transaction inputs survived the round-trip.
+    let all_spent: Vec<OutPoint> = deserialized
+        .transactions
+        .values()
+        .flat_map(|r| &r.transaction.input)
+        .map(|inp| inp.previous_output)
+        .collect();
+    assert!(all_spent.contains(&outpoint_1));
+    assert!(all_spent.contains(&outpoint_2));
+    assert!(all_spent.contains(&outpoint_3));
+    assert_eq!(all_spent.len(), 3);
+}
diff --git a/key-wallet/src/transaction_checking/wallet_checker.rs b/key-wallet/src/transaction_checking/wallet_checker.rs
index a73964d21..8af75a217 100644
--- a/key-wallet/src/transaction_checking/wallet_checker.rs
+++ b/key-wallet/src/transaction_checking/wallet_checker.rs
@@ -809,4 +809,122 @@ mod tests {
             "total_transactions should not increase on rescan"
         );
     }
+
+    /// Test that UTXO is not created when a spending tx has already been stored
+    #[tokio::test]
+    async fn test_utxo_not_created_when_already_spent() {
+        let network = Network::Testnet;
+        let mut wallet = Wallet::new_random(network, WalletAccountCreationOptions::Default)
+            .expect("Should create wallet");
+
+        let mut managed_wallet =
+            ManagedWalletInfo::from_wallet_with_name(&wallet, "Test".to_string());
+
+        // Get wallet addresses (we need two - one for receive, one for change)
+        let account =
+            wallet.accounts.standard_bip44_accounts.get(&0).expect("Should have BIP44 account");
+        let xpub = account.account_xpub;
+
+        let receive_address = managed_wallet
+            .first_bip44_managed_account_mut()
+            .expect("Should have managed account")
+            .next_receive_address(Some(&xpub), true)
+            .expect("Should get address");
+
+        let change_address = managed_wallet
+            .first_bip44_managed_account_mut()
+            .expect("Should have managed account")
+            .next_change_address(Some(&xpub), true)
+            .expect("Should get change address");
+
+        // Create the funding transaction
+        let funding_tx = create_transaction_to_address(&receive_address, 100_000);
+
+        // Create a spending transaction that:
+        // 1. Spends the funding tx's output
+        // 2.
Sends change back to our wallet (so it WILL be detected as relevant) + let spend_tx = Transaction { + version: 2, + lock_time: 0, + input: vec![TxIn { + previous_output: OutPoint { + txid: funding_tx.txid(), + vout: 0, + }, + script_sig: ScriptBuf::new(), + sequence: 0xffffffff, + witness: dashcore::Witness::new(), + }], + output: vec![TxOut { + value: 50_000, // Change back to us + script_pubkey: change_address.script_pubkey(), + }], + special_transaction_payload: None, + }; + + // Process spending tx FIRST (out of order) + // This time it HAS an output to our wallet, so it should be stored + let spend_context = TransactionContext::InBlock { + height: 100, + block_hash: Some(BlockHash::from_slice(&[1u8; 32]).expect("Should create block hash")), + timestamp: Some(1234567890), + }; + + let spend_result = managed_wallet + .check_core_transaction(&spend_tx, spend_context, &mut wallet, true) + .await; + + // Spending tx should be detected because of the change output + assert!( + spend_result.is_relevant, + "Spending transaction should be detected (has change output to our wallet)" + ); + assert_eq!(spend_result.total_received, 50_000); + assert_eq!(spend_result.total_sent, 0); // Can't detect spend without UTXO + + // Verify the transaction was stored + let account = managed_wallet.first_bip44_managed_account().expect("Should have account"); + assert!( + account.transactions.contains_key(&spend_tx.txid()), + "Spending tx should be stored" + ); + + // One UTXO should exist (the change output from spend_tx) + assert_eq!(account.utxos.len(), 1, "Should have one UTXO (change output)"); + + // Now process the funding tx (which was spent by spend_tx that we already stored) + let fund_context = TransactionContext::InBlock { + height: 99, + block_hash: Some(BlockHash::from_slice(&[2u8; 32]).expect("Should create block hash")), + timestamp: Some(1234567880), + }; + + let fund_result = managed_wallet + .check_core_transaction(&funding_tx, fund_context, &mut wallet, true) + .await; + + // Funding tx should be detected + assert!(fund_result.is_relevant, "Funding transaction should be detected"); + assert_eq!(fund_result.total_received, 100_000); + + // Check UTXO state - the funding tx's UTXO should NOT have been added + // because the stored spend_tx spends it + let account = managed_wallet.first_bip44_managed_account().expect("Should have account"); + + // Should still only have one UTXO (the change from spend_tx) + assert_eq!( + account.utxos.len(), + 1, + "Should still have only one UTXO (change), funding UTXO should not be added" + ); + + // The one UTXO should be the change output, not the funding output + let utxo = account.utxos.values().next().expect("Should have UTXO"); + assert_eq!( + utxo.outpoint.txid, + spend_tx.txid(), + "UTXO should be from spend_tx (change), not funding_tx" + ); + assert_eq!(utxo.txout.value, 50_000, "UTXO value should be 50k (change amount)"); + } }