Reduced Data Temporary Softfork, implemented as a modified BIP9 temporary UASF #238
base: 29.x-knots
Conversation
Let me suggest adding a note that OP_RETURN is deprecated in help texts, please.
```cpp
if (!ContextualCheckBlockHeaderVolatile(block, state, m_chainman, pindex->pprev)) {
    LogError("%s: Consensus::ContextualCheckBlockHeaderVolatile: %s\n", __func__, state.ToString());
    return false;
}
```
Is there a reason for calling ContextualCheckBlockHeaderVolatile since it's already called in ContextualCheckBlockHeader?
This is the way mandatory signaling was implemented in the Taproot BIP8 client. It has been adapted to work with max_activation_height rather than the MUST_SIGNAL phase, which doesn't exist in BIP9.
The ContextualCheckBlockHeaderVolatile function is called in both places (AcceptBlockHeader and ConnectBlock) to make sure the signaling rules are enforced both during initial header acceptance and when connecting blocks to the chain.
I chose this approach over the original BIP148 mandatory signaling implementation since that implementation doesn't reject the block until the full block is downloaded, which is less efficient than rejecting the block when we just have the header.
The reason the mandatory signal is checked twice is because AcceptBlockHeader will skip calling ContextualCheckBlockHeader when the block is already in the index, so the re-check in ConnectBlock is necessary for situations like software updates, reorgs, and reindexes.
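For readers following along, here is a minimal self-contained sketch of the rule being described. The function name and the flattened integer parameters are hypothetical; the PR's actual check lives in ContextualCheckBlockHeaderVolatile and operates on chain state rather than bare integers.

```cpp
#include <cstdint>

// Sketch of mandatory signaling at header-acceptance time: inside the
// enforcement window, a header must signal the deployment bit (per BIP9
// semantics) or it is rejected before the full block is downloaded.
bool HeaderSignalsDeployment(int32_t nVersion, int height,
                             int max_activation_height, int bit, int nPeriod)
{
    const int enforcement_start = max_activation_height - 2 * nPeriod;
    const int enforcement_end   = max_activation_height - nPeriod;
    if (height < enforcement_start || height >= enforcement_end) return true;
    // BIP9 signaling: top three version bits are 001 and the deployment
    // bit is set.
    const bool top_bits_ok = (nVersion & 0xE0000000) == 0x20000000;
    const bool bit_set = (nVersion & (int32_t{1} << bit)) != 0;
    return top_bits_ok && bit_set;
}
```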
What's the timeline to get this merged, so the signaling can use this implementation?
There is no specific timeline for merging this into Knots, as it is not confirmed that it will be eligible for merging even when complete. However, I am aiming to have this draft ready for review in the next few days. Miner signaling can still use this deployment if the activation client is released after the start of the signaling period (which is today, so this will definitely happen).
All comments from #234 are now addressed. Undrafting since the code is relatively stable now. Still needs rebase.
OP_RETURN is not deprecated; it is merely limited to 83 bytes in consensus. |
…to reduce data push size limit to 256 bytes (except for P2SH redeemScript push)
…TA (still unused)
…DUCED_DATA (still unused)
…ATA is active (still never)
…o cap output scripts at 83 bytes
…l never), limit non-OP_RETURN scripts to 34 bytes
…to expected memory usage
…r than exceed MAX_OP_RETURN_RELAY
…T_REDUCED_DATA is active (never yet)
…BLE_WITNESS_PROGRAM,UPGRADABLE_TAPROOT_VERSION,OP_SUCCESS}
…AM,UPGRADABLE_TAPROOT_VERSION,OP_SUCCESS} on blocks when DEPLOYMENT_REDUCED_DATA is active (never yet)
…imit; add reduced_data deployment name to allow regtest RPC access for testing
…DUCED_DATA); adapt 6 tests to NODE_BIP148 service flag; add assert_equal_without_usage helper for testmempoolaccept results
…eviction threshold

The test was failing because commit 58a329b changed gen_return_txouts() from using 1 large OP_RETURN output to 734 small OP_RETURN outputs (to comply with the new MAX_OUTPUT_SCRIPT_SIZE=34 consensus rule in bip444). This change altered how fill_mempool() fills the mempool, raising the eviction threshold from ~0.68 sat/vB to ~1.10 sat/vB. The test's create_package_2p1c() was using hardcoded feerates (1.0 and 2.0 sat/vB), causing parent1 to fall below the new eviction threshold and get rejected.

Solution: calculate parent feerates dynamically based on the actual mempoolminfee after fill_mempool() runs. This makes the test robust to future changes in mempool dynamics.

- Store mempoolminfee in raise_network_minfee()
- Use 2x and 4x mempoolminfee for parent1 and parent2 feerates
- Add logging to show the calculated feerates

Test results with fix:

- mempoolminfee: 1.101 sat/vB
- parent1: 2.202 sat/vB (2x threshold) → accepted ✓
- parent2: 4.404 sat/vB (4x threshold) → accepted ✓
The test expected a services string matching "nwl2?" but now receives "nwl1", because NODE_BIP148 is advertised (the BIP148 service bit is represented as "1" in the services string). Updated the regex pattern from "nwl2?" to "nwl[12]?" to accept both the BIP148 service bit (1) and any other service bits that may be represented as (2).
…coding

The test expected addrv2 messages to be 187 bytes, but they're now 227 bytes due to the BIP148 service bit being added to P2P_SERVICES. P2P_SERVICES is now NODE_NETWORK | NODE_WITNESS | NODE_BIP148 = 0x08000009, which requires 5 bytes in CompactSize encoding (not 1 byte as before). Updated calc_addrv2_msg_size() to calculate the services field size properly using ser_compact_size() instead of assuming 1 byte.

Difference: 5 bytes - 1 byte = 4 bytes per address × 10 addresses = 40 bytes; 187 + 40 = 227 bytes ✓
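As a quick sanity check of that arithmetic, here is a small standalone sketch (not the test's Python helper) using the standard CompactSize thresholds:

```cpp
#include <cstdint>
#include <cstdio>

// CompactSize encoding width: 1 byte below 253, 3 bytes up to 0xFFFF,
// 5 bytes up to 0xFFFFFFFF, 9 bytes otherwise.
unsigned int CompactSizeBytes(uint64_t n)
{
    if (n < 253) return 1;
    if (n <= 0xFFFF) return 3;
    if (n <= 0xFFFFFFFF) return 5;
    return 9;
}

int main()
{
    const uint64_t old_services = 0x09;       // NODE_NETWORK | NODE_WITNESS
    const uint64_t new_services = 0x08000009; // plus the BIP148 service bit
    const int delta = int(CompactSizeBytes(new_services)) - int(CompactSizeBytes(old_services));
    // 4 extra bytes per address x 10 addresses = 40 bytes; 187 + 40 = 227.
    std::printf("per-address delta: %d bytes, message delta: %d bytes\n", delta, delta * 10);
}
```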
The addpeeraddress RPC was creating addresses with only NODE_NETWORK | NODE_WITNESS, but the node requires NODE_BIP148 for outbound connections (added in commit c684ff1 from 2017). ThreadOpenConnections filters addresses using HasAllDesirableServiceFlags, which requires NODE_NETWORK | NODE_WITNESS | NODE_BIP148. Addresses without NODE_BIP148 are skipped entirely, making addpeeraddress useless for its intended testing purpose.

This fix updates addpeeraddress to match production requirements, allowing test-added addresses to actually be used for outbound connections. Fixes the p2p_seednode.py test, which was failing because addresses added via addpeeraddress were being filtered out, preventing "trying v1 connection" log messages from appearing.
…TORY_VERIFY_FLAGS
…ation flags on a per-input basis
…gin from reduced_data script validation rules
Adapt unit tests to comply with REDUCED_DATA restrictions:

- Add REDUCED_DATA flag to mapFlagNames in transaction_tests
- Update witness test from 520-byte to 256-byte push limit
- Accept SCRIPT_ERR_PUSH_SIZE in miniscript satisfaction tests
- Update Taproot tree depth tests from 128 to 7 levels
- Fix descriptor error message to report correct nesting limit (7)

REDUCED_DATA enforces MAX_SCRIPT_ELEMENT_SIZE_REDUCED (256 bytes) and TAPROOT_CONTROL_MAX_NODE_COUNT_REDUCED (7 levels) at the policy level via STANDARD_SCRIPT_VERIFY_FLAGS.
Replace thresh(2,pk(...),s:pk(...),adv:older(42)) with and_v(v:pk(...),pk(...)) because thresh() uses OP_IF opcodes, which are completely forbidden in Tapscript when REDUCED_DATA is active (see src/script/interpreter.cpp:621-623). The and_v() construction provides equivalent 2-of-2 multisig functionality without conditional branching, making it compatible with REDUCED_DATA restrictions.

Also update the line 1010 test to expect the "tr() supports at most 7 nesting levels" error instead of the multi() error, as the test's 22 opening braces exceed REDUCED_DATA's 7-level limit before the parser can discover the multi() error.
Add NODE_BIP444 flag to GetDesirableServiceFlags assertions in peerman_tests and to service flags in denialofservice_tests and net_tests peer setup. NODE_BIP444 (bit 27) signals BIP444/REDUCED_DATA enforcement and is now included in desirable service flags alongside NODE_NETWORK and NODE_WITNESS for peer connections.
Force-pushed from df70e59 to d699af3.
Rebased on v29.2.knots20251110. Ready for review.
Concept NACK. There shouldn't be any emergency softfork to address spam without at least a sketched-out permanent solution.
@stackingsaunter Please keep conceptual discussion to the BIP PR. This PR is for code review only. |
```cpp
if (flags & SCRIPT_VERIFY_REDUCED_DATA) {
    return set_error(serror, SCRIPT_ERR_TAPSCRIPT_MINIMALIF);
}
```
The REDUCED_DATA check for OP_IF runs after the valid input check, meaning valid minimal OP_IF with true input passes the first check but then fails on the REDUCED_DATA check.
Instead of using SCRIPT_ERR_TAPSCRIPT_MINIMALIF, it might be worth defining a new error like SCRIPT_ERR_REDUCED_DATA_OPIF to distinguish "OP_IF forbidden" from "OP_IF argument not minimal".
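A minimal sketch of what that could look like. SCRIPT_ERR_REDUCED_DATA_OPIF is hypothetical, and the real error enum and string table live in script/script_error.{h,cpp}:

```cpp
// Hypothetical dedicated error code, so log output and test assertions
// can tell "OP_IF forbidden under REDUCED_DATA" apart from "OP_IF
// argument not minimal".
enum ScriptError_t {
    // ... existing codes ...
    SCRIPT_ERR_TAPSCRIPT_MINIMALIF,
    SCRIPT_ERR_REDUCED_DATA_OPIF,
};

const char* ScriptErrorString(ScriptError_t err)
{
    switch (err) {
    case SCRIPT_ERR_REDUCED_DATA_OPIF:
        return "OP_IF/OP_NOTIF are not allowed in Tapscript while REDUCED_DATA is active";
    default:
        return "unknown error";
    }
}
```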
```cpp
assert(flags_per_input.empty() || flags_per_input.size() == tx.vin.size());

for (unsigned int i = 0; i < tx.vin.size(); i++) {
    if (!flags_per_input.empty()) flags = flags_per_input[i];
```
Reassigning flags inside the loop is concerning because the script cache key is computed from the original flags parameter. If flags_per_input[i] has fewer restrictions than flags, a cached "pass" result from a more-restrictive check could incorrectly validate a less-restrictive input. The comment at line 2415 says flags_per_input must only relax restrictions, but this invariant isn't enforced.
May want to add an assertion:

```cpp
assert((flags_per_input[i] & ~flags) == 0 || ...);
```
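A self-contained sketch of that enforcement, assuming the documented invariant that per-input flags may only clear bits relative to the base flags used for the cache key (helper name is hypothetical):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Per-input flags must be a subset of the base flags: a bit set
// per-input but absent from the base would mean a stricter per-input
// check, which the script-cache key (computed from the base flags)
// cannot distinguish.
void AssertFlagsOnlyRelax(uint32_t base_flags,
                          const std::vector<uint32_t>& flags_per_input)
{
    for (const uint32_t f : flags_per_input) {
        assert((f & ~base_flags) == 0);
    }
}
```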
```cpp
// Drop annex (this is non-standard; see IsWitnessStandard)
const valtype& annex = SpanPopBack(stack);
if (flags & SCRIPT_VERIFY_REDUCED_DATA) {
    return set_error(serror, SCRIPT_ERR_PUSH_SIZE);
```
Would SCRIPT_ERR_ANNEX_NOT_ALLOWED be better suited for this instead of SCRIPT_ERR_PUSH_SIZE?
```diff
 const auto block_height_current = m_active_chainstate.m_chain.Height();
 const auto block_height_next = block_height_current + 1;
-if (!Consensus::CheckTxInputs(tx, state, m_view, block_height_next, ws.m_base_fees)) {
+if (!Consensus::CheckTxInputs(tx, state, m_view, block_height_next, ws.m_base_fees, CheckTxInputsRules::OutputSizeLimit)) {
```
OutputSizeLimit is enforced unconditionally in mempool acceptance, but in ConnectBlock it's conditional on DeploymentActiveAt. This means mempool will reject 35+ byte outputs before the soft fork activates. Is this intended as policy-before-consensus? If so, might want to add a comment clarifying this is intentional policy hardening.
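If the behavior is intentional, a clarifying comment at the call site might look like this (a sketch of wording, not the PR's text):

```cpp
// NOTE: OutputSizeLimit is applied unconditionally here as mempool
// *policy*, ahead of the REDUCED_DATA consensus rule, which ConnectBlock
// only enforces once DeploymentActiveAt reports the deployment active.
```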
```cpp
consensus.vDeployments[Consensus::DEPLOYMENT_REDUCED_DATA].bit = 4;
consensus.vDeployments[Consensus::DEPLOYMENT_REDUCED_DATA].nStartTime = Consensus::BIP9Deployment::NEVER_ACTIVE;
```
Should testnet4 activation parameters be added so the UASF mechanism can be tested?
```cpp
// So enforce signaling from (max_activation_height - 2*nPeriod) to (max_activation_height - nPeriod)
const int enforcement_start = deployment.max_activation_height - (2 * nPeriod);
const int enforcement_end = deployment.max_activation_height - nPeriod;
```
With max_activation_height=965664 and nPeriod=2016, enforcement runs from 961632 to 963648. However, versionbits forces LOCKED_IN at max_activation_height - nPeriod (line 79-82 in versionbits.cpp). This creates exactly one period (2016 blocks) of mandatory signaling before forced lock-in.
Please confirm this is sufficient since BIP148 had a longer window.
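For concreteness, the window arithmetic with the quoted mainnet parameters (a standalone check, not code from the PR):

```cpp
#include <cstdio>

int main()
{
    const int max_activation_height = 965664;
    const int nPeriod = 2016;
    const int enforcement_start = max_activation_height - 2 * nPeriod; // 961632
    const int enforcement_end   = max_activation_height - nPeriod;     // 963648
    // Exactly one retarget period of mandatory signaling before
    // versionbits forces LOCKED_IN at enforcement_end.
    std::printf("signal from %d to %d (%d blocks)\n",
                enforcement_start, enforcement_end,
                enforcement_end - enforcement_start);
}
```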
```cpp
if (ThresholdState::ACTIVE != versionbitscache.State(index.pprev, params, dep)) return false;

const auto& deployment = params.vDeployments[dep];
// Permanent deployment (never expires)
if (deployment.active_duration == std::numeric_limits<int>::max()) return true;

const int activation_height = versionbitscache.StateSinceHeight(index.pprev, params, dep);
return index.nHeight < activation_height + deployment.active_duration;
```
StateSinceHeight() is called even when State() returns ACTIVE. If the deployment transitions from ACTIVE back to some other state (due to code changes or reorg), StateSinceHeight may return unexpected values.
Should we be caching the state check result and passing it to avoid redundant cache lookups?
```cpp
if (vbparams.timeout != Consensus::BIP9Deployment::NO_TIMEOUT && vbparams.max_activation_height < std::numeric_limits<int>::max()) {
    throw std::runtime_error(strprintf("Cannot specify both timeout (%ld) and max_activation_height (%d) for deployment %s. Use timeout for BIP9 or max_activation_height for UASF, not both.", vbparams.timeout, vbparams.max_activation_height, vDeploymentParams[0]));
}
```
Good validation that timeout and max_activation_height are mutually exclusive, but this only applies to -vbparams. The hardcoded mainnet deployment in kernel/chainparams.cpp:123 uses NO_TIMEOUT with max_activation_height. Can you verify that all hardcoded deployments follow this invariant, or add compile-time/startup validation?
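A sketch of what such startup validation could look like, using a hypothetical stand-in struct rather than the real Consensus::BIP9Deployment:

```cpp
#include <cstdint>
#include <limits>
#include <stdexcept>

// Hypothetical mirror of the fields involved; the real deployment type
// is Consensus::BIP9Deployment.
struct DeploymentSketch {
    int64_t timeout;
    int max_activation_height;
    static constexpr int64_t NO_TIMEOUT = std::numeric_limits<int64_t>::max();
};

// Enforce the same invariant as the -vbparams check for hardcoded
// deployments: timeout (BIP9) and max_activation_height (UASF) are
// mutually exclusive.
void ValidateDeployment(const DeploymentSketch& d)
{
    if (d.timeout != DeploymentSketch::NO_TIMEOUT &&
        d.max_activation_height < std::numeric_limits<int>::max()) {
        throw std::runtime_error("deployment sets both timeout and max_activation_height");
    }
}
```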
luke-jr left a comment:
Review not complete yet
```cpp
const auto& deployment = params.vDeployments[dep];
// Permanent deployment (never expires)
if (deployment.active_duration == std::numeric_limits<int>::max()) return true;

const int activation_height = versionbitscache.StateSinceHeight(index.pprev, params, dep);
return index.nHeight < activation_height + deployment.active_duration;
```
DeploymentActiveAfter already checks this, so why is this change needed at all?
```cpp
{RPCResult::Type::NUM_TIME, "start_time", "the minimum median time past of a block at which the bit gains its meaning"},
{RPCResult::Type::NUM_TIME, "timeout", "the median time past of a block at which the deployment is considered failed if not yet locked in"},
{RPCResult::Type::NUM, "min_activation_height", "minimum height of blocks for which the rules may be enforced"},
{RPCResult::Type::NUM, "max_activation_height", "height at which the deployment will activate (2147483647 if not set)"},
```
Seems like it would be better to omit the field entirely, rather than return maxint when unused.
```cpp
{RPCResult::Type::NUM_TIME, "start_time", "the minimum median time past of a block at which the bit gains its meaning"},
{RPCResult::Type::NUM_TIME, "timeout", "the median time past of a block at which the deployment is considered failed if not yet locked in"},
{RPCResult::Type::NUM, "min_activation_height", "minimum height of blocks for which the rules may be enforced"},
{RPCResult::Type::NUM, "max_activation_height", "height at which the deployment will activate (2147483647 if not set)"},
```
nit: "not set" is phrased poorly IMO
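One way to implement the earlier "omit the field" suggestion, mirroring the pushKV call quoted below. This is a fragment reusing identifiers from the quoted diff (bip9, chainman, id), not self-contained code:

```cpp
// Only emit the field when the deployment actually sets a
// max_activation_height, instead of reporting maxint.
const int max_h = chainman.GetConsensus().vDeployments[id].max_activation_height;
if (max_h < std::numeric_limits<int>::max()) {
    bip9.pushKV("max_activation_height", max_h);
}
```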
```cpp
bip9.pushKV("max_activation_height", chainman.GetConsensus().vDeployments[id].max_activation_height);

// BIP9 status
bip9.pushKV("status", get_state_name(current_state));
```
I think "status_next" (below) won't work correctly for expiring softforks.
```cpp
bip9.pushKV("max_activation_height", chainman.GetConsensus().vDeployments[id].max_activation_height);

// BIP9 status
bip9.pushKV("status", get_state_name(current_state));
```
Suggest adding a "height_end" with the final enforcement block height, for temporary softforks.
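A sketch of that suggestion, assuming the active_duration scheme from this PR and hypothetical variable names for the RPC context:

```cpp
// For temporary softforks, report the final enforcement height once the
// deployment is ACTIVE; permanent deployments (maxint duration) omit it.
if (current_state == ThresholdState::ACTIVE &&
    deployment.active_duration != std::numeric_limits<int>::max()) {
    const int activation_height = versionbitscache.StateSinceHeight(pindex, params, id);
    bip9.pushKV("height_end", activation_height + deployment.active_duration);
}
```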
```cpp
    // Overrides timeout to guarantee activation
    stateNext = ThresholdState::LOCKED_IN;
} else if (pindexPrev->GetMedianTimePast() >= nTimeTimeout) {
    // Timeout without activation (only if max_activation_height not set)
```
This comment is wrong. If nTimeTimeout is set, it can still trigger a failure if it's reached before max_activation_height - nPeriod
```diff
 options.version_bits_parameters[Consensus::DeploymentPos(j)] = vbparams;
 found = true;
-LogPrintf("Setting version bits activation parameters for %s to start=%ld, timeout=%ld, min_activation_height=%d\n", vDeploymentParams[0], vbparams.start_time, vbparams.timeout, vbparams.min_activation_height);
+LogPrintf("Setting version bits activation parameters for %s to start=%ld, timeout=%ld, min_activation_height=%d, max_activation_height=%d\n", vDeploymentParams[0], vbparams.start_time, vbparams.timeout, vbparams.min_activation_height, vbparams.max_activation_height);
```
Should probably include duration?
This PR re-implements #234 as a UASF rather than an MASF. That is, it adds:

- `max_activation_height`, which is mutually exclusive with `timeout`, and

Commits prior to "Add DEPLOYMENT_REDUCED_DATA as temporary BIP9 UASF" do not compile; this is intentional, to preserve all REDUCED_DATA commits precisely after dropping the original BuriedDeployment commits. Commits prior to "Add mainnet configuration for REDUCED_DATA deployment" have failing unit tests. Functional tests are passing on all commits.

Draft until the following is complete:

- `if (!EvalScript(stack, scriptSig, flags & ~SCRIPT_VERIFY_REDUCED_DATA, checker, SigVersion::BASE, serror))`

Not eligible for merge until the following are complete: