Reduced Data Temporary Softfork, implemented as a modified BIP9 temporary UASF #238
dathonohm wants to merge 69 commits into bitcoinknots:29.x-knots
Conversation
Let me suggest adding a note that OP_RETURN is deprecated in help texts, please.

what's the timeline to get this merged so the signaling can use this implementation?

There is no specific timeline to get this merged into Knots, as it is not confirmed that it will be eligible for merging, even when complete. However, I am aiming to have this draft ready for review in the next few days. Miner signaling can still use this deployment if the activation client is released after the start of the signaling period (which is today, so this will definitely happen).

All comments from #234 are now addressed. Undrafting since the code is relatively stable now. Still needs rebase.

OP_RETURN is not deprecated; it is merely limited to 83 bytes in consensus.
Force-pushed from df70e59 to d699af3.
Rebased on v29.2.knots20251110. Ready for review.
Concept NACK. There shouldn't be any emergency softfork to address spam without at least a sketched-out permanent solution.
@stackingsaunter Please keep conceptual discussion to the BIP PR. This PR is for code review only. |
src/deploymentstatus.h (outdated)

```cpp
if (ThresholdState::ACTIVE != versionbitscache.State(index.pprev, params, dep)) return false;

const auto& deployment = params.vDeployments[dep];
// Permanent deployment (never expires)
if (deployment.active_duration == std::numeric_limits<int>::max()) return true;

const int activation_height = versionbitscache.StateSinceHeight(index.pprev, params, dep);
return index.nHeight < activation_height + deployment.active_duration;
```
StateSinceHeight() is called even when State() returns ACTIVE. If the deployment transitions from ACTIVE back to some other state (due to code changes or reorg), StateSinceHeight may return unexpected values.
Should we be caching the state check result and passing it to avoid redundant cache lookups?
Can you elaborate on your concern here? StateSinceHeight() is only called when State() returns ACTIVE. In the scenario where the state is ACTIVE then later not active, locks should guarantee that the versionbits cache doesn't change between these two calls.
Let me know if I'm misunderstanding your question.
luke-jr left a comment:

Review not complete yet
| bip9.pushKV("max_activation_height", chainman.GetConsensus().vDeployments[id].max_activation_height); | ||
|
|
||
| // BIP9 status | ||
| bip9.pushKV("status", get_state_name(current_state)); |
I think "status_next" (below) won't work correctly for expiring softforks.
Yes, currently the BIP9 state machine treats this deployment as permanently active, even though in practice the rules stop being enforced at the expiry height.
I can add an "EXPIRED" state to the next version of the activation code, if you think this is a good idea. It just seemed unnecessary for this version.
Yes, I think that would be good.
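A minimal sketch of what that could look like, assuming upstream's ThresholdState enum and this PR's active_duration field (not the author's actual patch):

```cpp
// versionbits.h (sketch): add a terminal state for temporary deployments
// whose enforcement window has elapsed.
enum class ThresholdState {
    DEFINED,   // First state that each softfork starts out as
    STARTED,   // For blocks past the start time
    LOCKED_IN, // For one retarget period after enough STARTED blocks signalled
    ACTIVE,    // For all blocks after the LOCKED_IN retarget period
    FAILED,    // Timeout reached without lock-in
    EXPIRED,   // NEW: at or beyond activation_height + active_duration
};
```

With such a state, "status" and "status_next" in getdeploymentinfo could report the last enforced block as ACTIVE with status_next EXPIRED instead of pretending the deployment stays active forever.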
```cpp
    // Overrides timeout to guarantee activation
    stateNext = ThresholdState::LOCKED_IN;
} else if (pindexPrev->GetMedianTimePast() >= nTimeTimeout) {
    // Timeout without activation (only if max_activation_height not set)
```
This comment is wrong. If nTimeTimeout is set, it can still trigger a failure if it's reached before max_activation_height - nPeriod
max_activation_height is intended to be mutually exclusive with timeout. Currently there is no validation code to check this for mainnet deployments, but I can add it if you think this is necessary.
See: #238 (comment)
At least correct the comment here :)
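For illustration, a sketch of the corrected comment together with the optional exclusivity check mentioned above (field and sentinel names are taken from the snippets in this thread; not the final patch):

```cpp
} else if (pindexPrev->GetMedianTimePast() >= nTimeTimeout) {
    // Timeout without activation. Note: this can trigger even when
    // max_activation_height is set, if the timeout is reached before
    // block max_activation_height - nPeriod forces lock-in.
    stateNext = ThresholdState::FAILED;
}

// Optional sanity check, if timeout and max_activation_height are meant to
// be mutually exclusive (max sentinel assumed from this PR's usage):
assert(deployment.nTimeout == Consensus::BIP9Deployment::NO_TIMEOUT ||
       deployment.max_activation_height == std::numeric_limits<int>::max());
```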
Force-pushed from 994b2fe to 5a54af7.
All review comments are now addressed. CI is now fully passing (except for the "test each commit" job). A tag has been created for the current version, RC2 (identical to this branch except for the last commit, which updates the UA string for BIP-110). Next I will make a release for RC2, and clean up the commit history here.
```cpp
assert(flags_per_input.empty() || flags_per_input.size() == tx.vin.size());

for (unsigned int i = 0; i < tx.vin.size(); i++) {
    if (!flags_per_input.empty()) flags = flags_per_input[i];
```
Can we add a comment confirming this is safe because flags_per_input[i] is always flags & ~REDUCED_DATA_MANDATORY_VERIFY_FLAGS (never more permissive than the base case).
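The requested comment might read roughly as follows (a sketch restating the invariant claimed above; whether it actually holds is for review to verify):

```cpp
for (unsigned int i = 0; i < tx.vin.size(); i++) {
    // When present, flags_per_input[i] is always
    // flags & ~REDUCED_DATA_MANDATORY_VERIFY_FLAGS: it only ever clears the
    // reduced-data bits for exempt inputs and never sets a flag that the
    // tx-wide `flags` does not already contain.
    if (!flags_per_input.empty()) flags = flags_per_input[i];
```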
| if (flags & SCRIPT_VERIFY_REDUCED_DATA) { | ||
| return set_error(serror, SCRIPT_ERR_TAPSCRIPT_MINIMALIF); | ||
| } |
If we don't want to add a new error code like SCRIPT_ERR_REDUCED_DATA_OPIF_BANNED for clarity, can we at least add a comment explaining this choice?
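If no new error code is added, the comment could look something like this (a sketch; the stated rationale is an assumption for the author to confirm or correct):

```cpp
if (flags & SCRIPT_VERIFY_REDUCED_DATA) {
    // Deliberately reuses SCRIPT_ERR_TAPSCRIPT_MINIMALIF instead of adding a
    // dedicated SCRIPT_ERR_REDUCED_DATA_* code, avoiding changes to the
    // script error enum for a temporary deployment.
    return set_error(serror, SCRIPT_ERR_TAPSCRIPT_MINIMALIF);
}
```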
```cpp
// Calculate enforcement window: 1 period before forced lock-in
// Lock-in happens at (max_activation_height - nPeriod)
// So enforce signaling from (max_activation_height - 2*nPeriod) to (max_activation_height - nPeriod)
const int enforcement_start = deployment.max_activation_height - (2 * nPeriod);
```
Can we add bounds checking:

```cpp
const int enforcement_start = std::max(0, deployment.max_activation_height - (2 * nPeriod));
```
```diff
  * desired service flags (compatible with our new flags).
  */
-constexpr ServiceFlags SeedsServiceFlags() { return ServiceFlags(NODE_NETWORK | NODE_WITNESS); }
+constexpr ServiceFlags SeedsServiceFlags() { return ServiceFlags(NODE_NETWORK | NODE_WITNESS | NODE_UASF_REDUCED_DATA); }
```
This requests NODE_UASF_REDUCED_DATA from DNS seeds for ALL networks, but testnet/signet seeds may not support this flag. This could cause seeding failures or reduced peer discovery.
It shouldn't cause failures. I'd leave it alone.
```diff
 CAmount txfee = 0;
 assert(!tx.IsCoinBase());
-assert(Consensus::CheckTxInputs(tx, dummy_state, mempoolDuplicate, spendheight, txfee));
+assert(Consensus::CheckTxInputs(tx, dummy_state, mempoolDuplicate, spendheight, txfee, CheckTxInputsRules::None));
```
The mempool consistency check (CTxMemPool::check) doesn't verify output size limits, while mempool acceptance does. This inconsistency is intentional (the check function verifies existing entries, not re-validates), but should be documented.
These tests hardcode service flag values in multiple places. Some use constants like NODE_NETWORK | NODE_WITNESS | NODE_UASF_REDUCED_DATA, while others use numeric literals. This is fragile if the flag values change.
Can we use the constant form consistently throughout?
src/validation.cpp (outdated)

```cpp
 * This involves ECDSA signature checks so can be computationally intensive. This function should
 * only be called after the cheap sanity checks in CheckTxInputs passed.
 *
 * WARNING: flags_per_input deviations from flags must be handled with care. Under no
```
Should clarify "with the same" to: "with the stricter" or "with the global"
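Applied to the warning quoted above, the reworded text would read (using the first suggestion):

```cpp
 * WARNING: flags_per_input deviations from flags must be handled with care. Under no
 * circumstances should they allow a script to pass that might not pass with the stricter
 * tx-wide `flags` parameter (which is used for the cache).
```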
Force-pushed from 5a54af7 to cc089c4.
src/consensus/tx_verify.cpp (outdated)

```cpp
if (rules.test(CheckTxInputsRules::OutputSizeLimit)) {
    for (const auto& txout : tx.vout) {
        if (txout.scriptPubKey.empty()) continue;
        if (txout.scriptPubKey.size() > ((txout.scriptPubKey[0] == OP_RETURN) ? MAX_OUTPUT_DATA_SIZE : MAX_OUTPUT_SCRIPT_SIZE)) {
```
Please add a comment above this describing what this is doing. E.g.

```cpp
// If a script pubkey is present, use the appropriate size as the limit for validation
```
```diff
 branches.push_back(false); // new left branch
-if (branches.size() > TAPROOT_CONTROL_MAX_NODE_COUNT) {
+if (branches.size() > TAPROOT_CONTROL_MAX_NODE_COUNT_REDUCED) {
     error = strprintf("tr() supports at most %i nesting levels", TAPROOT_CONTROL_MAX_NODE_COUNT);
```
Please add comment, e.g.

```cpp
// limit complexity of tapscript, now with a reduced node count
```
| bip9.pushKV("start_time", chainman.GetConsensus().vDeployments[id].nStartTime); | ||
| bip9.pushKV("timeout", chainman.GetConsensus().vDeployments[id].nTimeout); | ||
| bip9.pushKV("min_activation_height", chainman.GetConsensus().vDeployments[id].min_activation_height); | ||
| if (chainman.GetConsensus().vDeployments[id].max_activation_height < std::numeric_limits<int>::max()) { |
Add comment, e.g.

```cpp
// limit possible activation of the softfork beyond this block height
```
```cpp
execdata.m_codeseparator_pos = 0xFFFFFFFFUL;
execdata.m_codeseparator_pos_init = true;

const unsigned int max_element_size = (flags & SCRIPT_VERIFY_REDUCED_DATA) ? MAX_SCRIPT_ELEMENT_SIZE_REDUCED : MAX_SCRIPT_ELEMENT_SIZE;
```
Could you add a comment that explains why max_element_size is being set like this and what it is being used for? I had a difficult time deciphering this one. Might also be helpful to add a comment where it's used.
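One possible comment (a sketch of the requested explanation, not the author's wording):

```cpp
// Upper bound for a single stack element pushed during script execution.
// When the REDUCED_DATA rules apply to this input, the tighter
// MAX_SCRIPT_ELEMENT_SIZE_REDUCED cap is used instead of the usual
// MAX_SCRIPT_ELEMENT_SIZE; max_element_size is checked wherever data is
// pushed onto the stack below.
const unsigned int max_element_size = (flags & SCRIPT_VERIFY_REDUCED_DATA) ? MAX_SCRIPT_ELEMENT_SIZE_REDUCED : MAX_SCRIPT_ELEMENT_SIZE;
```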
datacarriersize is not deprecated, but OP_RETURN is. Documenting it is out of scope for this PR, though.
```python
# Use 20-byte program to avoid Taproot (32-byte) and stay under
# REDUCED_DATA's 34-byte output limit (33-byte program would be 35 bytes total)
script_pubkey = CScript([CScriptOp(version), witness_hash[:20]])
```
nit: Prefer to keep the original comment, and just change it to remove the last byte (so a 31-byte program) rather than adding one
```diff
 det = self.nodes[0].cli('-netinfo', '1').send_cli().splitlines()
 self.log.debug(f"Test -netinfo 1 header output: {det[0]}")
-assert re.match(rf"^{re.escape(self.config['environment']['CLIENT_NAME'])} client.+services nwl2?$", det[0])
+assert re.match(rf"^{re.escape(self.config['environment']['CLIENT_NAME'])} client.+services nwl[2]?4$", det[0])
```

Suggested change:

```diff
-assert re.match(rf"^{re.escape(self.config['environment']['CLIENT_NAME'])} client.+services nwl[2]?4$", det[0])
+assert re.match(rf"^{re.escape(self.config['environment']['CLIENT_NAME'])} client.+services nwl2?4$", det[0])
```
```python
from test_framework.messages import (
    CInv,
    MSG_BLOCK,
    NODE_UASF_REDUCED_DATA,
```
"UASF" is incorrect, this is a MASF
| {RPCResult::Type::NUM_TIME, "start_time", "the minimum median time past of a block at which the bit gains its meaning"}, | ||
| {RPCResult::Type::NUM_TIME, "timeout", "the median time past of a block at which the deployment is considered failed if not yet locked in"}, | ||
| {RPCResult::Type::NUM, "min_activation_height", "minimum height of blocks for which the rules may be enforced"}, | ||
| {RPCResult::Type::NUM, "max_activation_height", /*optional=*/true, "height at which the deployment will unconditionally activate (only for UASF deployments)"}, |
There was a problem hiding this comment.
| {RPCResult::Type::NUM, "max_activation_height", /*optional=*/true, "height at which the deployment will unconditionally activate (only for UASF deployments)"}, | |
| {RPCResult::Type::NUM, "max_activation_height", /*optional=*/true, "height at which the deployment will unconditionally activate (absent for miner-vetoable deployments)"}, |
```python
# NOTE: On 32-bit systems (i686), there's a race condition where concurrent transaction additions
# can cause the mempool to repeatedly exceed the limit, causing immediate eviction of low-fee
# transactions. We retry with exponential backoff to handle this scenario.
```
```diff
 // at which all the in-chain inputs of the tx were included in blocks.
 // Typical usage of GetPriority with chainActive.Height() will ensure this.
-int heightDiff = currentHeight - cachedHeight;
+int heightDiff = int(currentHeight) - int(cachedHeight);
```
```bash
export CI_IMAGE_NAME_TAG="quay.io/centos/centos:stream10"
export CI_BASE_PACKAGES="gcc-c++ glibc-devel libstdc++-devel ccache make git python3 python3-pip which patch xz procps-ng rsync coreutils bison e2fsprogs cmake dash libicns-utils librsvg2-tools ImageMagick"
export PIP_PACKAGES="pyzmq"
export DEP_OPTS="DEBUG=1" # Temporarily enable a DEBUG=1 build to check for GCC-bug-117966 regressions. This can be removed once the minimum GCC version is bumped to 12 in the previous releases task, see https://github.com/bitcoin/bitcoin/issues/31436#issuecomment-2530717875
```
This is hiding a deadlock? Let's just figure out the deadlock instead?
nah, this isn't Core, removing failing checks is acceptable here
@l0rinc If you read the commit message, I removed this check because that's exactly what Core did.
Core has lower standards than Knots, not higher
```diff
 consensus.CSVHeight = 419328; // 000000000000000004a1b34462cb8aeebd5799177f7a29cf28f2d1961716b5b5
 consensus.SegwitHeight = 481824; // 0000000000000000001c8018d9cb3b742ef25114f27563e3fc4a1902167f9893
-consensus.MinBIP9WarningHeight = 483840; // segwit activation height + miner confirmation window
+consensus.MinBIP9WarningHeight = 711648; // taproot activation height + miner confirmation window
```
You shouldn't need to mess with this...?
```cpp
 * Regular outputs must be <= MAX_OUTPUT_SCRIPT_SIZE (34 bytes).
 * OP_RETURN outputs must be <= MAX_OUTPUT_DATA_SIZE (83 bytes).
 */
[[nodiscard]] bool CheckOutputSizes(const CTransaction& tx, TxValidationState& state);
```
nodiscard is incorrect here. A caller could ignore the return value and check the state instead
DEBUG=1 in depends is already tested in the CI job "previous releases, depends DEBUG". Testing with DEBUG=1 is considered equivalent in these two CI jobs in Bitcoin Core PR bitcoin#32560, which thus does effectively the reverse of this commit.
Force-pushed from f4045f3 to 1d3cdac.
Force-pushed update to v0.2, which includes a rebase onto Knots 29.3 and some fixes. The old HEAD commit (dathonohm@f4045f3) for this PR has been archived at https://github.com/dathonohm/bitcoin/tree/bip110-v0.1, where it is the penultimate commit. I have now begun work on v0.3, which will include addressing the above review comments.
**Script-execution cache poisoning via activation-boundary reorg**

**Disclaimer**

It's no secret that I'm strongly opposed to this consensus change (trying to punish everyone just to send a signal to those we disagree with doesn't resonate with me). But regardless of my views on it, I think it's important to disclose a serious consensus-critical bug in the current BIP-110 implementation, since it could harm users who choose to run this software even more.

**Context**

The new restrictions here don't apply to UTXOs created before the activation height.

**Bug**

The tx-wide script-execution cache handling was modified in this PR, but the cache key is still computed from the strict global flags only, and the result is cached under that strict key.

**Code proof**

This behavior directly contradicts the warning comment (6c19d67#diff-97c3a52bc5fad452d82670a7fd291800bae20c7bc35bb82686c2c0a4ea7b5b98R2409-R2412):

```cpp
 * WARNING: flags_per_input deviations from flags must be handled with care. Under no
 * circumstances should they allow a script to pass that might not pass with the same
 * `flags` parameter (which is used for the cache).
```

The problem stems from the fact that the relaxed flags only apply to validation but not to caching (6c19d67#diff-97c3a52bc5fad452d82670a7fd291800bae20c7bc35bb82686c2c0a4ea7b5b98R2446):

```cpp
hasher.Write(UCharCast(tx.GetWitnessHash().begin()), 32).Write((unsigned char*)&flags, sizeof(flags)).Finalize(hashCacheEntry.begin());
```

comes before the per-input relaxation

```cpp
if (!flags_per_input.empty()) flags = flags_per_input[i];
```

and the cache ends up being written with the strict global flag key (6c19d67#diff-97c3a52bc5fad452d82670a7fd291800bae20c7bc35bb82686c2c0a4ea7b5b98R2495-R2499):

```cpp
if (cacheFullScriptStore && !pvChecks) {
    // We executed all of the provided scripts, and were told to
    // cache the result. Do so now.
    validation_cache.m_script_execution_cache.insert(hashCacheEntry);
}
```

**Reproducers**

The bug can be reproduced by a simple unit test:

```cpp
BOOST_FIXTURE_TEST_CASE(checkinputs_flags_per_input_cache_safety, Dersig100Setup)
{
// BIP110 cache-safety reproducer:
// A 300-byte witness push passes only when SCRIPT_VERIFY_REDUCED_DATA is relaxed.
const auto& coinbase_script{m_coinbase_txns[0]->vout[0].scriptPubKey};
const unsigned int strict_flags{SCRIPT_VERIFY_P2SH | SCRIPT_VERIFY_WITNESS | SCRIPT_VERIFY_REDUCED_DATA};
const unsigned int relaxed_flags{SCRIPT_VERIFY_P2SH | SCRIPT_VERIFY_WITNESS};
const CScript witness_script = CScript() << OP_DROP << OP_TRUE;
const std::vector<unsigned char> big_witness_elem(300, 0x42);
const CScript p2wsh_script = GetScriptForDestination(WitnessV0ScriptHash(witness_script));
const auto mine_funding_tx{[&]
{
CMutableTransaction tx;
tx.vin = {CTxIn{m_coinbase_txns[0]->GetHash(), 0}};
tx.vout = {CTxOut{11 * CENT, p2wsh_script}};
std::vector<unsigned char> vchSig;
const uint256 hash = SignatureHash(coinbase_script, tx, 0, SIGHASH_ALL, 0, SigVersion::BASE);
BOOST_CHECK(coinbaseKey.Sign(hash, vchSig));
vchSig.push_back(SIGHASH_ALL);
tx.vin[0].scriptSig << vchSig;
const CBlock block = CreateAndProcessBlock({tx}, coinbase_script);
LOCK(cs_main);
BOOST_CHECK(m_node.chainman->ActiveChain().Tip()->GetBlockHash() == block.GetHash());
return CTransaction{tx};
}};
const CTransaction funding_tx{mine_funding_tx()};
// Build spending transaction with witness stack [big_witness_elem, witness_script].
CMutableTransaction spend_tx;
spend_tx.vin = {CTxIn{funding_tx.GetHash(), 0}};
spend_tx.vout = {CTxOut{10 * CENT, GetScriptForDestination(PKHash(coinbaseKey.GetPubKey()))}};
spend_tx.vin[0].scriptWitness.stack = {big_witness_elem, {witness_script.begin(), witness_script.end()}};
const CTransaction spend{spend_tx};
BOOST_CHECK_EQUAL(spend.vin[0].scriptWitness.stack[0].size(), 300U);
LOCK(cs_main);
auto& coins_tip = m_node.chainman->ActiveChainstate().CoinsTip();
// Use a fresh cache to avoid unrelated pre-population or (very unlikely) false positives.
ValidationCache validation_cache{/*script_execution_cache_bytes=*/1 << 20, /*signature_cache_bytes=*/1 << 20};
const auto run_check{[&](const std::vector<unsigned int>& flags_per_input) EXCLUSIVE_LOCKS_REQUIRED(::cs_main) {
TxValidationState state;
PrecomputedTransactionData txdata;
return CheckInputScripts(spend, state, &coins_tip, strict_flags, /*cacheSigStore=*/true, /*cacheFullScriptStore=*/true, txdata, validation_cache, /*pvChecks=*/nullptr, flags_per_input);
}};
// Step 1: strict validation (BIP110 active) should fail.
BOOST_CHECK(!run_check({}));
// Step 2: relaxed per-input flags (no REDUCED_DATA) should pass.
BOOST_CHECK(run_check({relaxed_flags}));
// Step 3: strict validation must still fail.
// Before the cache fix, step 2 could poison the strict cache key and make this pass.
BOOST_CHECK(!run_check({}));
}
```

which fails with:
indicating that the cache was poisoned and the tx passed strict validation while it clearly failed at the same height before.

We can also reproduce it with a higher-level functional test:

```python
# ======================================================================
# Test 8: cache state must not survive activation-boundary reorg
# ======================================================================
self.log.info("Test 8: script-execution cache must not survive boundary-context flip")
def rewind_to(height):
# Height-based loop: invalidating one tip can switch to an alternate branch at same height.
while node.getblockcount() > height:
node.invalidateblock(node.getbestblockhash())
assert_equal(node.getblockcount(), height)
branch_point = ACTIVATION_HEIGHT - 2 # 430
rewind_to(branch_point)
# spend_tx has a 300-byte witness element: valid only with pre-activation exemption.
funding_tx, spend_tx = self.create_p2wsh_funding_and_spending_tx(wallet, node, VIOLATION_SIZE)
# Branch A: funding at 431 (exempt).
block = self.create_test_block([funding_tx], signal=False)
assert_equal(node.submitblock(block.serialize().hex()), None)
assert_equal(node.getblockcount(), ACTIVATION_HEIGHT - 1)
self.restart_node(0, extra_args=['-vbparams=reduced_data:0:999999999999:288:2147483647:2147483647', '-par=1']) # Use single-threaded validation to maximize chance of hitting cache-related issues.
# Validate-only block at height 432. This calls TestBlockValidity(fJustCheck=true),
# which populates the tx-wide script-execution cache under STRICT flags, even though
# the spend is only valid here due to the per-input "pre-activation UTXO" exemption.
self.generateblock(node, output=wallet.get_address(), transactions=[spend_tx.serialize().hex()], submit=False, sync_fun=self.no_op)
assert_equal(node.getblockcount(), ACTIVATION_HEIGHT - 1)
# Reorg to branch point; cache state is intentionally retained across reorg.
rewind_to(branch_point)
# Branch B: funding at 432 (non-exempt).
# Make this empty block unique to avoid duplicate-invalid when rebuilding branch B.
block = self.create_test_block([], signal=False)
block.nTime += 1
block.solve()
assert_equal(node.submitblock(block.serialize().hex()), None) # 431
block = self.create_test_block([funding_tx], signal=False)
assert_equal(node.submitblock(block.serialize().hex()), None) # 432
# Same spend is now non-exempt and must be rejected.
attack_block = self.create_test_block([spend_tx], signal=False) # 433
result = node.submitblock(attack_block.serialize().hex())
assert result is not None and 'Push value size limit exceeded' in result, \
f"Expected rejection after boundary-crossing reorg, got: {result}"which fails with:
And for those still in disbelief, the bug can be reproduced with a manual script:

```bash
#!/usr/bin/env bash
killall bitcoind >/dev/null 2>&1 || true
set -euo pipefail
# BIP110 reduced_data script-exec cache poisoning demo (PR #238):
#
# Rule: Inputs spending UTXOs created before activation (h<432) are exempt from
# the new reduced_data limits. Implementation uses per-input script flags to
# relax checks for those inputs.
#
# Bug: Script-exec cache entries are keyed by tx-wide (witness hash + STRICT
# flags), but the tx may have only passed because some inputs were checked with
# RELAXED per-input flags. This "harmless lie" becomes harmful if a reorg moves
# the *funding tx* across the activation boundary:
# * Chain A: fund at h=431 (<432, exempt) -> spend after activation is valid
# * Reorg: fund at h=432 (>=432, strict) -> same spend becomes invalid
# If the cache says "already validated", script checks can be skipped and the
# now-invalid spend can be accepted.
#
# Demo output: the same spend at the same height (433) is REJECTED (control),
# then after validate-only + reorg it's ACCEPTED (BUG).
# Comment out step [5/6] to see step [6/6] reject again.
BITCOIND=${BITCOIND:-build/bin/bitcoind}; CLI=${CLI:-build/bin/bitcoin-cli}; TX=${TX:-build/bin/bitcoin-tx}
DATADIR="$(mktemp -d "${TMPDIR:-/tmp}/bip110-cache.XXXXXX")"; WALLET=w
cli() { "$CLI" -regtest -datadir="$DATADIR" -rpcwait "$@"; }
w() { cli -rpcwallet="$WALLET" "$@"; }
step() { printf '%s\n' "$*"; }
log() { printf ' [h=%s] %s\n' "$(cli getblockcount 2>/dev/null || echo '?')" "$*"; }
logh() { local h="$1"; shift; printf ' [h=%s] %s\n' "$h" "$*"; }
RST=$'\033[0m'
tid() { # deterministically colorize 4-hex prefixes so different txs stand out
local p="$1" b c; b=$((16#${p:0:2}))
local colors=(31 32 33 34 35 36 91 92 93 94 95 96); c="${colors[$((b % ${#colors[@]}))]}"
printf '\033[%sm%s%s' "$c" "$p" "$RST"
}
ACC="ACCEPTED"; REJ="REJECTED"; OK="👍"; BUG="👎"
j() { sed -nE "s/.*\"$1\"[[:space:]]*:[[:space:]]*\"?([^\",}]*)\"?.*/\\1/p" | head -n1; }
revhex() { echo "$1" | sed -E 's/(..)/\1 /g' | awk '{for (i=NF;i>=1;i--) printf $i; print ""}'; }
le() { local w="$1" v="$2"; revhex "$(printf "%0${w}x" "$v")"; }
rej_reason() { tr '\n' ' ' | sed -E 's/.*TestBlockValidity failed: ([^,"]*).*/\1/'; }
cleanup() { cli stop >/dev/null 2>&1 || true; rm -rf "$DATADIR"; }
trap cleanup EXIT
step "[1/6] start bitcoind"
# -par=1 => no script-check worker threads (enables script-exec cache write in TestBlockValidity)
"$BITCOIND" -regtest -daemon -datadir="$DATADIR" -fallbackfee=0.0001 -par=1 \
-vbparams=reduced_data:0:999999999999:288:2147483647:2147483647 >/dev/null # reduced_data BIP9 params; activates at h=432 on regtest (144-block periods)
cli getblockcount >/dev/null
cli createwallet "$WALLET" >/dev/null; ADDR="$(w getnewaddress)"
MT=""; gb() { [[ -n "${MT:-}" ]] && MT=$((MT + 1)) && cli setmocktime "$MT" >/dev/null; cli generateblock "$ADDR" "$@"; }
step "[2/6] mine to height 430 (activation happens at 432)"
cli generatetoaddress 430 "$ADDR" >/dev/null
MT="$(cli getblockheader "$(cli getbestblockhash)" | j time)"; cli setmocktime "$MT" >/dev/null
REDEEM=7551; P2SH="$(cli decodescript "$REDEEM" | j p2sh)"
step "[3/6] create funding tx + violating spend tx"
FUND_TXID="$(w sendtoaddress "$P2SH" 1.0)"; FUND_TAG="${FUND_TXID:0:4}"
cli getmempoolentry "$FUND_TXID" >/dev/null 2>&1 \
&& log "funding tx $(tid "$FUND_TAG"): ${ACC} ${OK} (mempool)" \
|| log "funding tx $(tid "$FUND_TAG"): ${REJ} ${BUG} (mempool)"
TXINFO="$(w gettransaction "$FUND_TXID")"; FUND_HEX="$(printf '%s\n' "$TXINFO" | j hex)"
FUND_VOUT=0; cli gettxout "$FUND_TXID" 0 true | j address | grep -q "$P2SH" || FUND_VOUT=1
TXID_LE="$(revhex "$FUND_TXID")"; VOUT_LE="$(le 8 "$FUND_VOUT")"; OUT_LE="$(le 16 99990000)" # fee=10k sats
PUSH300="$(printf '42%.0s' {1..300})"; SCRIPTSIG="4d2c01${PUSH300}02${REDEEM}" # 300-byte push + redeemScript; invalid if reduced_data enforced
SPEND_HEX="0200000001${TXID_LE}${VOUT_LE}fd3201${SCRIPTSIG}ffffffff01${OUT_LE}015100000000"
SPEND_TAG="$("$TX" -txid "$SPEND_HEX")"; SPEND_TAG="${SPEND_TAG:0:4}"
log "spend tx $(tid "$SPEND_TAG"): crafted (will be put in blocks directly, never submitted to mempool)"
step "[4/6] control: fund at h=432 (post-activation, NOT exempt) -> spend at h=433 must be REJECTED"
gb "[]" >/dev/null; gb "[\"$FUND_HEX\"]" >/dev/null; log "funding tx $(tid "$FUND_TAG"): ${ACC} ${OK} (confirmed at h=432)"
TIP="$(cli getblockcount)"; TRY_H=$((TIP + 1))
if out="$(gb "[\"$SPEND_HEX\"]" 2>&1)"; then
logh "$TRY_H" "spend tx $(tid "$SPEND_TAG"): ${ACC} ${BUG} (BUG: should be rejected)"
else
logh "$TRY_H" "spend tx $(tid "$SPEND_TAG"): ${REJ} ${OK} :: $(printf '%s' "$out" | rej_reason)"
fi
step "[5/6] poison: reorg so fund is at h=431 (<432, exempt) + validate-only spend in block h=432 (cache insert happens here)"
for _ in 1 2; do cli invalidateblock "$(cli getbestblockhash)" >/dev/null; done
gb "[\"$FUND_HEX\"]" >/dev/null; log "funding tx $(tid "$FUND_TAG"): ${ACC} ${OK} (confirmed at h=431)"
TIP="$(cli getblockcount)"; TRY_H=$((TIP + 1))
# submit=false => TestBlockValidity-only (cache write happens here), no block connection
gb "[\"$SPEND_HEX\"]" false >/dev/null 2>&1 \
&& logh "$TRY_H" "spend tx $(tid "$SPEND_TAG"): ${ACC} ${OK} (validate-only; exempt)" \
|| logh "$TRY_H" "spend tx $(tid "$SPEND_TAG"): ${REJ} ${BUG} (unexpected; should be accepted/exempt here)"
step "[6/6] trigger: reorg back so fund is at h=432 (>=432, NOT exempt) -> spend at h=433 must be REJECTED (ACCEPTED means poisoned cache hit)"
cli invalidateblock "$(cli getbestblockhash)" >/dev/null # if you comment step [5/6], this step rejects (no poisoned cache entry)
gb "[]" >/dev/null; gb "[\"$FUND_HEX\"]" >/dev/null; log "funding tx $(tid "$FUND_TAG"): ${ACC} ${OK} (confirmed at h=432)"
TIP="$(cli getblockcount)"; TRY_H=$((TIP + 1))
if out="$(gb "[\"$SPEND_HEX\"]" 2>&1)"; then
logh "$TRY_H" "spend tx $(tid "$SPEND_TAG"): ${ACC} ${BUG} (BUG: cache poisoning; this same tx at this same height was REJECTED above)"
else
logh "$TRY_H" "spend tx $(tid "$SPEND_TAG"): ${REJ} ${OK} :: $(printf '%s' "$out" | rej_reason)"
fi
```

which demonstrated the flow as:

**Why it wasn't caught earlier**

The bug requires a reorg that moves a funding tx across the activation boundary - which is unlikely to occur just by chance. Note that while reorg behavior was lightly tested in 4c99d3b#diff-2a47a7847c78024eff4f7e6ee245aa1faa366720070ba0184436bb6858bee06dR187-R210, the mined blocks didn't contain any txs, and the reorgs weren't done across the activation height. Most blocks on mainnet do contain transactions, so it wasn't really testing anything useful.

Signed commits cloned to: https://github.com/l0rinc/bitcoin/commits/detached484

nit: the latest rebase done a few hours ago removed @luke-jr from the seeds in https://github.com/bitcoinknots/bitcoin/compare/f4045f37b6ba9780cb1f5d40295d5aa12192a29f..1d3cdac50c5616d7de5c433f3ff76ed7513f3a5a#diff-9468810859a6881caa4f5c4d3c806f494e8e078c4a4c9c53d8ed74a6d96d4973L20

It also changed the Knots release notes to Core release notes: https://github.com/bitcoinknots/bitcoin/compare/f4045f37b6ba9780cb1f5d40295d5aa12192a29f..1d3cdac50c5616d7de5c433f3ff76ed7513f3a5a#diff-474e3093b86659f3d23995cb2fbe8e84bcacf0e3b019442b26c729445c7f2a8eL2

Were these done intentionally or is it a rebase oversight?
@l0rinc Thanks for the thorough report, and for the responsible disclosure. It will be fixed as soon as possible.
These changes came from upstream Knots, not this PR
Only the documentation for how the fixed seeds are generated. This is a minor documentation bug in Core 29.3 (it still uses fixed seeds generated using my crawler) that didn't seem worth fixing for Knots 29.3.
This was a mistake corrected for the actual published release notes.
…lags_per_input is used (and avoid using it when unnecessary)
…ning via activation-boundary reorg Co-Authored-By: Lőrinc <pap.lorinc@gmail.com>
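For readers following along, one way such a fix can work (a sketch against the snippets quoted in the report, assuming upstream's ValidationCache API; the actual commit may differ): only consult and populate the cache when no per-input relaxation is in play, so a result proven under relaxed flags can never be replayed under the strict tx-wide key.

```cpp
// Sketch: bypass the script-execution cache entirely whenever
// flags_per_input is used, so relaxed-validation results are never stored
// under (or matched against) the strict tx-wide flag key.
const bool uniform_flags{flags_per_input.empty()};
uint256 hashCacheEntry;
if (uniform_flags) {
    CSHA256 hasher = validation_cache.ScriptExecutionCacheHasher();
    hasher.Write(UCharCast(tx.GetWitnessHash().begin()), 32)
          .Write((unsigned char*)&flags, sizeof(flags))
          .Finalize(hashCacheEntry.begin());
    if (validation_cache.m_script_execution_cache.contains(hashCacheEntry, /*erase=*/!cacheFullScriptStore)) {
        return true;
    }
}
// ... per-input script checks run here, possibly with relaxed flags ...
if (uniform_flags && cacheFullScriptStore && !pvChecks) {
    validation_cache.m_script_execution_cache.insert(hashCacheEntry);
}
```

Folding the per-input flags into the cache key would also work, but skipping the cache is the smaller change and only forgoes caching for the (temporary) exempt-input case.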
**Mempool unconditionally enforces OutputSizeLimit**

In 34ef77a, the ConnectBlock path correctly gates the output size check on REDUCED_DATA activation:

```cpp
const CheckTxInputsRules chk_input_rules{
    DeploymentActiveAt(*pindex, m_chainman, Consensus::DEPLOYMENT_REDUCED_DATA)
        ? CheckTxInputsRules::OutputSizeLimit : CheckTxInputsRules::None};
```

But the mempool path applies the rule unconditionally:

```cpp
if (!Consensus::CheckTxInputs(tx, state, m_view, block_height_next, ws.m_base_fees,
                              CheckTxInputsRules::OutputSizeLimit)) {
```

Before that commit, the mempool call used the old form without a rules argument. Practical impact is limited since standard outputs are ≤34 bytes, but it's the same kind of inconsistency between the mempool and consensus paths that led to the cache bug.
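If the mempool were meant to mirror consensus here (keeping policy stricter than consensus is of course also a defensible choice), the gating could look like this (a sketch reusing identifiers from the snippets above; the MemPoolAccept context and member names are assumptions):

```cpp
// Sketch: gate the mempool's output-size rule on deployment status, the
// same way ConnectBlock does for the next block to be connected.
const CheckTxInputsRules chk_input_rules{
    DeploymentActiveAfter(m_active_chainstate.m_chain.Tip(), m_active_chainstate.m_chainman,
                          Consensus::DEPLOYMENT_REDUCED_DATA)
        ? CheckTxInputsRules::OutputSizeLimit : CheckTxInputsRules::None};
if (!Consensus::CheckTxInputs(tx, state, m_view, block_height_next, ws.m_base_fees, chk_input_rules)) {
    return false; // failure reason is recorded in `state` by CheckTxInputs
}
```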
**setscriptthreadsenabled widens the cache poisoning surface**

The reproducers for the cache bug all use -par=1, since running without script-check worker threads is what enables the script-execution cache write in TestBlockValidity. But the setscriptthreadsenabled RPC can switch a node to single-threaded validation at runtime, so the vulnerable code path is not limited to nodes started with -par=1.

Not a separate bug, just an additional trigger for the existing one, but worth noting since the disclosure only mentions -par=1.
**DISCOURAGE flags elevated to consensus closes upgrade paths**

These are normally policy-only, since they prevent relay but don't invalidate at the consensus level. Making them consensus-mandatory during the active period means witness versions >1, non-0xc0 taproot leaf versions, and OP_SUCCESS opcodes all become consensus-invalid (not just non-standard). This closes every future soft-fork upgrade hook for the ~1-year duration. If Core or any other implementation ships a soft fork using one of these upgrade paths while BIP-110 is active, nodes running this code would reject those transactions at the consensus level, splitting the chain. That's a pretty different risk profile from just tightening push sizes. These flags are also permanently baked into …
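For context, a paraphrase of the upstream split (the flag names are real upstream identifiers; the exact list here is abridged):

```cpp
// Upstream keeps the DISCOURAGE_* flags out of consensus validation: they
// are applied only on top of the mandatory set for mempool/relay policy,
// so tripping one makes a transaction non-standard, not invalid in a block.
static constexpr unsigned int STANDARD_SCRIPT_VERIFY_FLAGS{
    MANDATORY_SCRIPT_VERIFY_FLAGS |
    SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_WITNESS_PROGRAM |
    SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_TAPROOT_VERSION |
    SCRIPT_VERIFY_DISCOURAGE_OP_SUCCESS /* | ... */};
// Moving any DISCOURAGE_* flag into the block-validation flag set turns
// these reserved upgrade hooks into consensus failures for the active window.
```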
It is normal to enforce rules in policy before they become consensus. I do not see a problem with this.
This is well-documented expected behaviour in the BIP. (Also, OP_NOP is not affected, leaving the door open to CTV)
Softforks, including RDTS, are not optional. Miners especially must enforce it to make valid blocks. Any software not enforcing RDTS is a hardfork.
This PR re-implements #234 as a UASF rather than an MASF. That is, it adds:

- max_activation_height, which is mutually exclusive with timeout, and

Commits prior to "Add DEPLOYMENT_REDUCED_DATA as temporary BIP9 UASF" do not compile; this is intentional to preserve all REDUCED_DATA commits precisely after dropping the original BuriedDeployment commits.

Commits prior to "Add mainnet configuration for REDUCED_DATA deployment" have failing unit tests.

Functional tests are passing on all commits.

Not eligible for merge until the following are complete: