BIP 110: Reduced Data Temporary Softfork (#2017)
Conversation
I suggest you add an FAQ item for “why block 987424”. If the intent is to have it be a year out, the height might actually move during discussion, and right now it's just a magic number in the document.
@rot13maxi see the deployment section
There is also an opportunity to discuss the effect on DoS blocks and the scope of legacy script as a DoS vector.
> OP_RETURN outputs are provably unspendable, and nodes do not need to store them in the UTXO set.
> Historically, up to 83 bytes have been tolerated only to avoid unprovably unspendable spam in other output scripts, and no legitimate uses have ever been found.
> With the advent of pay-to-contract and Taproot, it is now also possible to commit to external data in the Taptree, making even hypothetical use of OP_RETURN deprecated.
> However, to avoid breaking legacy protocols that still include such outputs, this proposal allows these outputs.
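The two properties quoted above, provable unspendability and the 83-byte tolerance, can be sketched as follows (illustrative Python, not Bitcoin Core code; the opcode values are the standard Bitcoin script encoding):

```python
# Illustrative sketch, not Bitcoin Core code. An output script that
# begins with OP_RETURN (opcode 0x6a) always fails evaluation, so the
# output can never be spent and nodes may omit it from the UTXO set.
OP_RETURN = 0x6a
OP_PUSHBYTES_32 = 0x20

def is_provably_unspendable(script_pubkey: bytes) -> bool:
    """True if no input could ever satisfy this output script."""
    return len(script_pubkey) > 0 and script_pubkey[0] == OP_RETURN

def is_tolerated_datacarrier(script_pubkey: bytes, limit: int = 83) -> bool:
    """Historical relay tolerance: OP_RETURN outputs up to 83 bytes,
    so data ends up provably unspendable rather than hidden in
    spendable-looking outputs."""
    return is_provably_unspendable(script_pubkey) and len(script_pubkey) <= limit

# A 32-byte commitment: OP_RETURN, push-32, then the hash (34 bytes total).
commitment = bytes([OP_RETURN, OP_PUSHBYTES_32]) + bytes(32)
assert is_provably_unspendable(commitment)
assert is_tolerated_datacarrier(commitment)
```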
Also I am raising objection to the fragment of the proposal. I think that the presumption of existence of "legacy protocols" is false. There isn't any BIP of such a protocol. Also, I haven't seen any implementation of a hypothetical undocumented one. Last, but not least - arbitrary data storage doesn't belong to Bitcoin and the "OP_RETURN" bug that is exploited by abusers must be fixed.
Will or can this softfork affect lightning or currently planned upgrades of it? btw, fwiw, there's also some discussion at https://stacker.news/items/1265553 (sorry for the shameless plug, I work at SN)
According to BIP-2:
When will this be posted to the mailing list as its own thread so it can get greater attention & review?
I reached out yesterday to suggest this and apparently the post is currently in the ML queue for acceptance/publication.
benthecarman
left a comment
why no limit on witness or tx size?
thewrlck
left a comment
I don't think it's a good idea to outright prevent content or actions that are not 100% certain to be spam.
Hi all, a mailing list post has been published by the BIP author at https://groups.google.com/g/bitcoindev/c/nOZim6FbuF8. Post conceptual feedback and meta-commentary there, and focus here on:
Please refrain from personal or heated commentary in both venues. I've attempted some minor moderation here above.
Assigned 110.
Note that assignment does not represent evaluation by the editors whether the proposal is likely to be adopted.
Please update the file names and BIP draft headers, including "Created: 2025-12-03" for the date of assignment, and add an entry to the README.
@jonatack Done.
cryptoquick
left a comment
I'm generally supportive of the changes in this BIP. Aside from minor nitpicks in language, I think the 34-byte scriptPubKey restriction will prove quite valuable in addressing the larger concern of DoS blocks / poison blocks, which impose such high computational costs on nodes that a single block could take 30 minutes to verify on decent hardware instead of about a second. This is an even larger threat to Bitcoin than either CSAM or quantum: CSAM has reportedly been present on Bitcoin for a very long time, and quantum computers aren't anywhere near good enough to be a threat, and may never be, whereas DoS blocks could be introduced at any time by miners who take direct submissions without sufficient checks.
It's been pointed out that disabling OP_SUCCESS in Tapscript would conflict with adding new signature verification opcodes in future BIPs that might use them to add quantum resistance, but I would point out that the semantics around existing opcodes could simply be altered to preserve compatibility with BIP 110. For example, instead of creating new sets of OP_CHECKSIG opcodes to support new signature schemes, the semantics of the existing OP_CHECKSIG opcode could simply be adjusted to accept inputs of varying lengths, a form of overloading, polymorphism, or duck typing. While it could be argued that a more declarative approach is superior in cryptographic contexts, I don't weigh that concern as heavily as the larger concern over DoS blocks, and as such, I'm supportive of this approach.
My only major objection is that this is temporary. I'm not very comfortable with either temporary soft forks or default node expiry, because they force users to act instead of delaying action, and I think delaying action is perfectly fine and reasonable as the protocol matures. It also reminds me too much of the "difficulty bomb" based monetary policy used to coerce Ethereum miners to adopt new code from the Ethereum foundation or else. That said, if BIP 110 were activated as is, I would still be supportive, and I would also support reactivating it in the future as a more permanent feature of Bitcoin.
At a high level, this proposal reminds me in spirit of early versions of my original P2QRH proposal. I just think it could use a little more polish, but I see it as being directionally correct.
bip-0110.mediawiki
Outdated
Actually, this is a common misconception about the UTXO set database. It is implemented as a Log-Structured Merge-tree (LSM tree) data structure, which allows for flexibility in where the data resides. It can be stored in various levels, including slower hard drives, which is why it's called LevelDB. Fortunately, it is a very performant and battle-tested embedded key-value store that can scale well even on commodity non-server-grade hardware. While the UTXO set needs to be checked with every transaction to prevent double spends, it does not need to be held entirely in memory.
The word "often" already accounts for the possibility of the UTXO set being stored partially on disk. What's important is that it be quick-access. A slow disk, for example, will still cause much slower validation than a fast disk.
Current: scriptPubKeys must be stored indefinitely in quick-access memory (often RAM) by all fully validating nodes.
Suggested: scriptPubKeys must be stored indefinitely in a database that balances data access between quick-access memory and slower non-volatile storage by all fully validating nodes.
Perhaps this is more accurate? The word "often" isn't quite accurate either. It's just how it works.
This is [slightly] more accurate, but it detracts from the point, which is that the UTXO set must be kept as small as possible. For that reason, I still like (often RAM) better here.
Good point. How about this:
Current: scriptPubKeys must be stored indefinitely in quick-access memory (often RAM) by all fully validating nodes.
Suggested: scriptPubKeys must be stored indefinitely in a database that balances data access between quick-access memory and slower non-volatile storage by all fully validating nodes. That said, LevelDB is still quite computationally intensive, and as the UTXO set size increases, this increases the burden on node runners.
The point is that UTXOs must be stored in fast storage, period, or validation speed is severely impacted. Looking up UTXOs stored on disk is much slower than looking up UTXOs stored in RAM. As the UTXO set grows in size, the more likely it is for UTXOs to be stored in slow-access storage (disk) rather than fast-access memory (RAM). That's why I think "often RAM" works fine here.
What would you think of something like this?
Current: scriptPubKeys must be stored indefinitely in quick-access memory (often RAM) by all fully validating nodes.
Suggested: scriptPubKeys must be stored indefinitely in storage that is as fast as possible. Fast storage (usually RAM) is much more costly per byte than slower, non-volatile storage, so as the UTXO set size increases, this increases the burden on node runners, harming decentralization.
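The RAM-versus-disk trade-off described here can be sketched as a two-tier lookup: a bounded in-memory cache in front of slower storage. This is illustrative Python only; the names are mine, not Bitcoin Core's, though Core similarly fronts its LevelDB chainstate with an in-memory coins cache.

```python
# Illustrative sketch of a two-tier UTXO lookup. A bounded in-memory
# cache fronts slower storage; once the UTXO set outgrows the cache,
# more lookups fall through to the slow tier.
class UtxoView:
    def __init__(self, slow_store: dict, cache_limit: int = 1000):
        self.slow = slow_store   # stands in for LevelDB on disk
        self.cache = {}          # hot entries kept in RAM
        self.cache_limit = cache_limit
        self.slow_reads = 0

    def get(self, outpoint):
        if outpoint in self.cache:      # fast path: RAM hit
            return self.cache[outpoint]
        self.slow_reads += 1            # slow path: disk lookup
        spk = self.slow.get(outpoint)
        if spk is not None and len(self.cache) < self.cache_limit:
            self.cache[outpoint] = spk
        return spk

disk = {("txid%d" % i, 0): b"spk" for i in range(3)}
view = UtxoView(disk)
view.get(("txid0", 0))
view.get(("txid0", 0))                  # served from the cache this time
assert view.slow_reads == 1
```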
That's a salient point, especially nowadays. I think that's a good clarification.
That’s an improvement over the previous text. I feel that it still exaggerates the importance of RAM, but it’s an improvement.
bip-0110.mediawiki
Outdated
I would object to the statement that no legitimate use for OP_RETURN has ever been found: timestamping of documents in the 2023 Guatemalan elections, using Bitcoin as a source of truth, helped prevent disputes over the outcome of that contentious election and deterred fraud.
https://bitcoinmagazine.com/business/how-bitcoin-can-protect-public-records-with-simple-proof-2
That said, timestamps can fit within the 83 byte OP_RETURN exception made in this BIP.
Also as pointed out here in October, OP_RETURN is mandated in Coinbase transactions for Segwit’s commitment structure: https://github.com/bitcoin/bips/pull/2017/files?diff=unified&w=0#r2463933146
I would object to the statement that "timestamping of documents" is a legitimate use; it does not prove anything about the validity of the statements. You can absolutely timestamp lies. It's also possible to timestamp conflicting predictions and reveal the one that turned out to be justified, making you look smarter than you are. Some people keep mistaking timestamping for "proof of publication", which is also wrong. Timestamps don't prove that the document was actually published at that time.
They only really prove one thing: someone had the preimage at an earlier time than present.
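That single property, possession of the preimage at an earlier time, can be sketched with a plain hash commitment (illustrative Python; real timestamping systems such as OpenTimestamps add Merkle aggregation and an attestation anchored in a block):

```python
import hashlib

def commit(document: bytes) -> bytes:
    """Timestamp step: publish only the digest; the document stays private."""
    return hashlib.sha256(document).digest()

def verify(document: bytes, published_digest: bytes) -> bool:
    """Reveal step: matching the earlier digest proves the prover had
    this exact preimage at commit time. It proves nothing about the
    truth of the document or whether it was published then."""
    return hashlib.sha256(document).digest() == published_digest

digest = commit(b"prediction: candidate X wins")
assert verify(b"prediction: candidate X wins", digest)
# A conflicting prediction committed separately would verify equally
# well, which is exactly the limitation described above.
assert not verify(b"prediction: candidate Y wins", digest)
```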
@moonsettler None of the points you made contradict the value Simple Proof and Open Timestamps had in the example I provided.
Heard that story, but still don't understand why people think timestamping proves anything about the authenticity of the data. It's a weird assumption that, if the integrity and source of the data can somehow be verified, it is also generally valuable to verify when it was created.
The point about compromised or backdated PGP keys is harder to dismiss. Feels like there might be some value there for a decentralized "unforgeable" timestamp server.
@cryptoquick See the next sentence: "With the advent of pay-to-contract and Taproot, it is now also possible to commit to external data in the Taptree, making even hypothetical use of OP_RETURN deprecated."
This explains that P2C obsoletes OP_RETURN for the "timestamping" use case. The point of this BIP is to reject the data storage use case entirely, but if you really need to put a 32-byte hash in the chain, OP_RETURN is not the best way.
@murchandamus The witness commitment seems more like a hack, and only usable for a very specific consensus-critical function and not for general use, but I will add an exception for the witness commitment if you think it's necessary.
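For illustration, the pay-to-contract commitment mentioned above is built from BIP 341's tagged hash. This sketch computes only the tweak bytes and omits the elliptic-curve step; the zeroed internal key is a placeholder of mine, not a real key.

```python
import hashlib

def tagged_hash(tag: str, msg: bytes) -> bytes:
    """BIP 340/341 tagged hash: SHA256(SHA256(tag) || SHA256(tag) || msg)."""
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()

internal_key = bytes(32)  # placeholder x-only pubkey, illustration only
external_data = hashlib.sha256(b"document to commit").digest()

# Tweak committing the internal key to the external data. The actual
# output key is internal_key + t*G; the EC point math is omitted here.
t = tagged_hash("TapTweak", internal_key + external_data)
assert len(t) == 32
```

The commitment lives entirely in the output key, so no extra output bytes are needed, which is the sense in which P2C obsoletes OP_RETURN for a 32-byte hash.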
I just think that writing “no legitimate uses have ever been found” seems wrong when you are aware that an OP_RETURN output is required by consensus in almost every block. It’s also used in merge mining, which you will probably also consider illegitimate, though. Anyway, write what you will, paper doesn’t blush.
Updated to:
Previous: Historically, up to 83 bytes have been tolerated only to avoid unprovably unspendable spam in other output scripts, and no legitimate uses have ever been found.
Updated: Historically, up to 83 bytes have been tolerated only to avoid unprovably unspendable spam in other output scripts, and with the possible exception of commitment schemes that use OP_RETURN in coinbase transaction outputs (notably Segwit), using OP_RETURN is not the optimal solution to any known use case.
Thanks for the input @cryptoquick and @murchandamus. Let me know if you think it still needs improvement.
On Fri, Jan 30, 2026 at 12:07:54PM -0800, moonsettler wrote:
> I would object to the statement that "timestamping of documents" is a legitimate use, it does not prove anything about the validity of the statements. You can absolutely timestamp lies. It's also possible to timestamp conflicting predictions and reveal the one that turned out to be justified, making you look smarter than you are. Some people keep mistaking timestamping with "proof of publication" which is also wrong. Timestamps don't prove that the document is actually published at that time.
There are plenty of examples where merely knowing that data existed in the past
is sufficient to prove something about the validity of that data. For example,
various Bitcoin Core contributors both PGP-sign and timestamp their git commits
and releases; e.g. tag v28.1 is PGP signed and timestamped by Ava Chow.
The value there is that if Ava Chow's PGP key is later compromised at a roughly
known time, we can still validate PGP signatures made well prior to the
compromise, via the assumption that the attackers do not have a time machine.
https://petertodd.org/2016/opentimestamps-git-integration
Anyway, if somehow OP_Return was made unusable, OpenTimestamps would simply
switch to another commitment mechanism like fake, unspendable, pubkeys. That
would bloat the UTXO set. But there's no good way to stop that short of
whitelisting.
This document has received a substantial amount of review and public discussion. The author appears to have made a reasonable effort to collect and respond to objections and alternative approaches.
As work on this proposal had slowed down for several weeks, and now an activation client is being advertised for mainnet adoption, the author appears to be satisfied with their proposal, even while reviewers continue to be in disagreement about whether objections have been adequately addressed.
While I still perceive this proposal’s Specification to be unsatisfactorily motivated and to exhibit an unusually rushed and careless approach to Bitcoin protocol development, proposals may be further refined after they are published as Draft. It would seem in the interest of the Bitcoin community that this proposal be published in Draft status once the remaining formatting issues are addressed to facilitate community-wide consideration.
A subsequent publication should neither be construed as an endorsement, nor as the BIP Editors expecting this proposal to be adopted.
@murchandamus I believe all comments are now addressed. I have also rebased my branch on current master. Let me know if there's anything else left.

@cryptoquick Thanks for the review. Am I understanding correctly that you would be in favor of extending all BIP-110 rules, including disabling upgrade hooks, permanently? That seems very disruptive.
murchandamus
left a comment
Thanks for the quick turnaround. I only reviewed the new changes. The description of the Deployment looks much better; I point out one ambiguity.
Presumably you meant to write UTXO instead of scriptpubkey in the paragraph about output script sizes, but as written I would consider some of the statements incorrect.
murchandamus
left a comment
Looks sufficient for Draft.

Mailing list thread at https://groups.google.com/g/bitcoindev/c/nOZim6FbuF8
Editor note: please post conceptual feedback and meta-commentary on the mailing list thread, and focus here on:
Please refrain from personal or heated commentary, trolling, pedantry, and repeating yourself. As this PR now has many comments, please only comment if you are adding new valuable information to the discussion.