Aggregation broadcast & unskip interop tests #385
Draft
kamilsa wants to merge 7 commits into main from fix/consensus-broadcast-updates
Changes from all commits (7 commits):
359e438 Update consensus logic: get_attestation_target, aggregation broadcast… (kamilsa)
a72c8de fix: wire aggregated attestation broadcast through network pipeline (kamilsa)
9aa20e9 fix: update fill framework and tests for new on_tick tuple return type (kamilsa)
dc2c45b fix: unskip multi-node interop tests and fix store attribute names (kamilsa)
a89d33a fix: resolve lint, type, and test issues across tox suite (kamilsa)
26b9035 fix: use convergence-based polling in finalization tests (kamilsa)
5ace58d fix: increase timeouts for CI reliability in interop tests (kamilsa)
```diff
@@ -7,6 +7,7 @@
 __all__ = ["Store"]
 
 import copy
+import logging
 from collections import defaultdict
 
 from lean_spec.subspecs.chain.config import (
@@ -45,6 +46,8 @@
 )
 from lean_spec.types.container import Container
 
+logger = logging.getLogger(__name__)
+
 
 class Store(Container):
     """
```
@@ -488,7 +491,6 @@ def on_gossip_aggregated_attestation( | |||||||||
| new_attestation_data_by_root = dict(self.attestation_data_by_root) | ||||||||||
| new_attestation_data_by_root[data_root] = data | ||||||||||
|
|
||||||||||
| store = self | ||||||||||
| for vid in validator_ids: | ||||||||||
| # Update Proof Map | ||||||||||
| # | ||||||||||
|
|
@@ -497,7 +499,7 @@ def on_gossip_aggregated_attestation( | |||||||||
| new_aggregated_payloads.setdefault(key, []).append(proof) | ||||||||||
|
|
||||||||||
| # Return store with updated aggregated payloads and attestation data | ||||||||||
| return store.model_copy( | ||||||||||
| return self.model_copy( | ||||||||||
| update={ | ||||||||||
| "latest_new_aggregated_payloads": new_aggregated_payloads, | ||||||||||
| "attestation_data_by_root": new_attestation_data_by_root, | ||||||||||
|
|
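The handler above never mutates `self`; it copies the affected map and returns a fresh `Store`. A minimal sketch of the same copy-on-write pattern, using a frozen dataclass and `dataclasses.replace` in place of pydantic's `model_copy(update=...)` (the class and field here are simplified stand-ins, not the spec's real types):

```python
from dataclasses import dataclass, field, replace

@dataclass(frozen=True)
class Store:
    """Simplified stand-in for the spec's immutable Store container."""
    attestation_data_by_root: dict = field(default_factory=dict)

    def on_gossip_attestation_data(self, root: str, data: str) -> "Store":
        # Copy-on-write: build a new map rather than mutating self...
        updated = dict(self.attestation_data_by_root)
        updated[root] = data
        # ...then return an updated copy; replace() plays the role of model_copy(update=...)
        return replace(self, attestation_data_by_root=updated)

s0 = Store()
s1 = s0.on_gossip_attestation_data("0xabc", "some-data")
assert s0.attestation_data_by_root == {}  # original store unchanged
```

Dropping the intermediate `store = self` alias (as the diff does) makes it clearer that the only state transition is the single `model_copy` at the end.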
@@ -856,28 +858,7 @@ def update_head(self) -> "Store": | |||||||||
| ) | ||||||||||
|
|
||||||||||
| def accept_new_attestations(self) -> "Store": | ||||||||||
| """ | ||||||||||
| Process pending aggregated payloads and update forkchoice head. | ||||||||||
|
|
||||||||||
| Moves aggregated payloads from latest_new_aggregated_payloads to | ||||||||||
| latest_known_aggregated_payloads, making them eligible to contribute to | ||||||||||
| fork choice weights. This migration happens at specific interval ticks. | ||||||||||
|
|
||||||||||
| The Interval Tick System | ||||||||||
| ------------------------- | ||||||||||
| Aggregated payloads progress through intervals: | ||||||||||
| - Interval 0: Block proposal | ||||||||||
| - Interval 1: Validators cast attestations (enter "new") | ||||||||||
| - Interval 2: Aggregators create proofs & broadcast | ||||||||||
| - Interval 3: Safe target update | ||||||||||
| - Interval 4: Process accumulated attestations | ||||||||||
|
|
||||||||||
| This staged progression ensures proper timing and prevents premature | ||||||||||
| influence on fork choice decisions. | ||||||||||
|
|
||||||||||
| Returns: | ||||||||||
| New Store with migrated aggregated payloads and updated head. | ||||||||||
| """ | ||||||||||
| """Process pending aggregated payloads and update forkchoice head.""" | ||||||||||
| # Merge new aggregated payloads into known aggregated payloads | ||||||||||
| merged_aggregated_payloads = dict(self.latest_known_aggregated_payloads) | ||||||||||
| for sig_key, proofs in self.latest_new_aggregated_payloads.items(): | ||||||||||
|
|
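`accept_new_attestations` merges `latest_new_aggregated_payloads` into `latest_known_aggregated_payloads` list by list, making them eligible for fork-choice weights. A small standalone sketch of that merge (the helper name and plain-dict types are illustrative, not from the diff):

```python
def merge_aggregated_payloads(known: dict, new: dict) -> dict:
    """Merge new aggregated payloads into the known map, list by list."""
    # Copy each known proof list so the input maps stay untouched.
    merged = {key: list(proofs) for key, proofs in known.items()}
    for key, proofs in new.items():
        merged.setdefault(key, []).extend(proofs)
    return merged

known = {"sig_a": ["proof_1"]}
new = {"sig_a": ["proof_2"], "sig_b": ["proof_3"]}
merged = merge_aggregated_payloads(known, new)
```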
```diff
@@ -937,15 +918,15 @@ def update_safe_target(self) -> "Store":
 
         return self.model_copy(update={"safe_target": safe_target})
 
-    def aggregate_committee_signatures(self) -> "Store":
+    def aggregate_committee_signatures(self) -> tuple["Store", list[SignedAggregatedAttestation]]:
         """
         Aggregate committee signatures for attestations in committee_signatures.
 
         This method aggregates signatures from the gossip_signatures map.
         Attestations are reconstructed from gossip_signatures using attestation_data_by_root.
 
         Returns:
-            New Store with updated latest_new_aggregated_payloads.
+            Tuple of (new Store with updated payloads, list of new SignedAggregatedAttestation).
         """
         new_aggregated_payloads = dict(self.latest_new_aggregated_payloads)
 
```
```diff
@@ -970,13 +951,14 @@ def aggregate_committee_signatures(self) -> "Store":
             committee_signatures,
         )
 
-        # iterate to broadcast aggregated attestations
+        # Create list for broadcasting
+        new_aggregates: list[SignedAggregatedAttestation] = []
         for aggregated_attestation, aggregated_signature in aggregated_results:
-            _ = SignedAggregatedAttestation(
+            agg = SignedAggregatedAttestation(
                 data=aggregated_attestation.data,
                 proof=aggregated_signature,
             )
-            # Note: here we should broadcast the aggregated signature to committee_aggregators topic
+            new_aggregates.append(agg)
 
         # Compute new aggregated payloads
         new_gossip_sigs = dict(self.gossip_signatures)
```
```diff
@@ -998,9 +980,11 @@ def aggregate_committee_signatures(self) -> "Store":
                 "latest_new_aggregated_payloads": new_aggregated_payloads,
                 "gossip_signatures": new_gossip_sigs,
             }
-        )
+        ), new_aggregates
 
-    def tick_interval(self, has_proposal: bool, is_aggregator: bool = False) -> "Store":
+    def tick_interval(
+        self, has_proposal: bool, is_aggregator: bool = False
+    ) -> tuple["Store", list[SignedAggregatedAttestation]]:
         """
         Advance store time by one interval and perform interval-specific actions.
 
```
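Because `aggregate_committee_signatures` now returns a `(Store, aggregates)` tuple, call sites unpack it and hand the aggregates to the network layer instead of broadcasting from inside the store. A hypothetical call site, with a stub standing in for the real store (`handle_aggregation_tick` and the stub are illustrative, not code from this PR):

```python
from typing import Callable

class StubStore:
    """Stand-in that mimics the new tuple-returning aggregation API."""
    def aggregate_committee_signatures(self):
        # The real method returns (updated Store, list of SignedAggregatedAttestation).
        return self, ["agg_1", "agg_2"]

def handle_aggregation_tick(store, broadcast: Callable[[str, object], None]):
    # The state transition stays pure; network I/O happens at the call site.
    store, new_aggregates = store.aggregate_committee_signatures()
    for agg in new_aggregates:
        broadcast("committee_aggregators", agg)
    return store

sent: list = []
handle_aggregation_tick(StubStore(), lambda topic, msg: sent.append((topic, msg)))
```

This is the change the PR title refers to: the old inline "should broadcast" note is replaced by actually returning the aggregates so the network pipeline can publish them.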
```diff
@@ -1042,11 +1026,12 @@ def tick_interval(self, has_proposal: bool, is_aggregator: bool = False) -> "Sto
             is_aggregator: Whether the node is an aggregator.
 
         Returns:
-            New Store with advanced time and interval-specific updates applied.
+            Tuple of (new Store with advanced time, list of new SignedAggregatedAttestation).
         """
         # Advance time by one interval
         store = self.model_copy(update={"time": self.time + Uint64(1)})
         current_interval = store.time % INTERVALS_PER_SLOT
+        new_aggregates: list[SignedAggregatedAttestation] = []
 
         if current_interval == Uint64(0):
             # Start of slot - process attestations if proposal exists
```
```diff
@@ -1055,17 +1040,19 @@ def tick_interval(self, has_proposal: bool, is_aggregator: bool = False) -> "Sto
         elif current_interval == Uint64(2):
             # Aggregation interval - aggregators create proofs
             if is_aggregator:
-                store = store.aggregate_committee_signatures()
+                store, new_aggregates = store.aggregate_committee_signatures()
         elif current_interval == Uint64(3):
             # Fast confirm - update safe target based on received proofs
             store = store.update_safe_target()
         elif current_interval == Uint64(4):
             # End of slot - accept accumulated attestations
             store = store.accept_new_attestations()
 
-        return store
+        return store, new_aggregates
 
-    def on_tick(self, time: Uint64, has_proposal: bool, is_aggregator: bool = False) -> "Store":
+    def on_tick(
+        self, time: Uint64, has_proposal: bool, is_aggregator: bool = False
+    ) -> tuple["Store", list[SignedAggregatedAttestation]]:
         """
         Advance forkchoice store time to given timestamp.
 
```
```diff
@@ -1079,22 +1066,25 @@ def on_tick(self, time: Uint64, has_proposal: bool, is_aggregator: bool = False)
             is_aggregator: Whether the node is an aggregator.
 
         Returns:
-            New Store with time advanced and all interval actions performed.
+            Tuple of (new Store with time advanced,
+            list of all produced SignedAggregatedAttestation).
         """
         # Calculate target time in intervals
         time_delta_ms = (time - self.config.genesis_time) * Uint64(1000)
         tick_interval_time = time_delta_ms // MILLISECONDS_PER_INTERVAL
 
         # Tick forward one interval at a time
         store = self
+        all_new_aggregates: list[SignedAggregatedAttestation] = []
         while store.time < tick_interval_time:
             # Check if proposal should be signaled for next interval
             should_signal_proposal = has_proposal and (store.time + Uint64(1)) == tick_interval_time
 
             # Advance by one interval with appropriate signaling
-            store = store.tick_interval(should_signal_proposal, is_aggregator)
+            store, new_aggregates = store.tick_interval(should_signal_proposal, is_aggregator)
+            all_new_aggregates.extend(new_aggregates)
 
-        return store
+        return store, all_new_aggregates
 
     def get_proposal_head(self, slot: Slot) -> tuple["Store", Bytes32]:
         """
```
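`on_tick` first converts the wall-clock timestamp into an absolute interval count, then ticks one interval at a time until the store catches up. The conversion alone can be sketched like this (the 800 ms constant is a placeholder for illustration, not necessarily the spec's `MILLISECONDS_PER_INTERVAL`):

```python
MILLISECONDS_PER_INTERVAL = 800  # placeholder value for illustration

def target_interval_count(time_s: int, genesis_time_s: int) -> int:
    """Convert a wall-clock timestamp into an absolute interval count."""
    time_delta_ms = (time_s - genesis_time_s) * 1000
    # Floor division: a partially elapsed interval does not count yet.
    return time_delta_ms // MILLISECONDS_PER_INTERVAL

# 4 seconds after genesis at 800 ms per interval -> 5 whole intervals
assert target_interval_count(1_000_004, 1_000_000) == 5
```

Ticking interval by interval (rather than jumping straight to the target) is what lets each intermediate interval's action (proposal, aggregation, safe-target update, acceptance) run in order, and the accumulated `all_new_aggregates` list is how aggregates produced along the way reach the caller.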
```diff
@@ -1122,7 +1112,7 @@ def get_proposal_head(self, slot: Slot) -> tuple["Store", Bytes32]:
         slot_time = self.config.genesis_time + slot_duration_seconds
 
         # Advance time to current slot (ticking intervals)
-        store = self.on_tick(slot_time, True)
+        store, _ = self.on_tick(slot_time, True)
 
         # Process any pending attestations before proposal
         store = store.accept_new_attestations()
```
```diff
@@ -1168,8 +1158,11 @@ def get_attestation_target(self) -> Checkpoint:
         #
         # This ensures the target doesn't advance too far ahead of safe target,
         # providing a balance between liveness and safety.
+        #
+        # MODIFIED: We allow the target to be up to 1 slot ahead of safe_target
+        # to ensure the chain can actually start advancing from genesis.
         for _ in range(JUSTIFICATION_LOOKBACK_SLOTS):
-            if self.blocks[target_block_root].slot > self.blocks[self.safe_target].slot:
+            if self.blocks[target_block_root].slot > self.blocks[self.safe_target].slot + Slot(1):
                 target_block_root = self.blocks[target_block_root].parent_root
             else:
                 break
```
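The modified lookback lets the attestation target sit up to one slot ahead of `safe_target` before walking back to its parent. A toy reproduction of that walk over a plain-dict block tree (the constant's value and the field names are illustrative, not the spec's):

```python
JUSTIFICATION_LOOKBACK_SLOTS = 3  # illustrative bound

def walk_back_target(blocks: dict, target: str, safe: str) -> str:
    """Walk the target back until it is at most 1 slot ahead of the safe target."""
    for _ in range(JUSTIFICATION_LOOKBACK_SLOTS):
        if blocks[target]["slot"] > blocks[safe]["slot"] + 1:
            target = blocks[target]["parent"]
        else:
            break
    return target

chain = {
    "genesis": {"slot": 0, "parent": None},
    "a": {"slot": 1, "parent": "genesis"},
    "b": {"slot": 2, "parent": "a"},
    "c": {"slot": 3, "parent": "b"},
}
# With safe_target still at genesis, the target walks back only to slot 1,
# one slot ahead of safe -- under the old strict rule it would reach genesis.
assert walk_back_target(chain, "c", "genesis") == "a"
```

This illustrates the motivation stated in the MODIFIED comment: with the strict `>` check, a safe target stuck at genesis would pin attestation targets at genesis too, so the chain could never start advancing.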
```diff
@@ -1186,7 +1179,7 @@ def get_attestation_target(self) -> Checkpoint:
         # Create checkpoint from selected target block
         target_block = self.blocks[target_block_root]
 
-        return Checkpoint(root=hash_tree_root(target_block), slot=target_block.slot)
+        return Checkpoint(root=target_block_root, slot=target_block.slot)
 
     def produce_attestation_data(self, slot: Slot) -> AttestationData:
         """
```
```diff
@@ -1293,7 +1286,7 @@ def produce_block_with_signatures(
         #
         # The builder iteratively collects valid attestations.
         # It returns the final block, post-state, and signature proofs.
-        final_block, final_post_state, _, signatures = head_state.build_block(
+        final_block, final_post_state, collected_attestations, signatures = head_state.build_block(
             slot=slot,
             proposer_index=validator_index,
             parent_root=head_root,
```
Collaborator review comment (on the docstring suggestions): Just to avoid variable names in the documentation, because these names can change over time, rendering the documentation obsolete.
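One of the commits replaces fixed sleeps with convergence-based polling in the finalization tests. That test pattern can be sketched as a generic helper (the names are illustrative, not the repo's actual test utilities):

```python
import time

def wait_until(predicate, timeout_s: float = 5.0, poll_s: float = 0.05) -> bool:
    """Poll until the predicate holds or the timeout expires."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(poll_s)
    return predicate()  # one final check at the deadline

calls = {"n": 0}
def converged() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3  # "converges" on the third poll
```

Polling for convergence returns as soon as the nodes agree, while a generous timeout (see commit 5ace58d) only matters in the failure case, which is why it improves CI reliability without slowing the happy path.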