Original author: SHINOBI
Original translation: Block unicorn
Why might Rusty Russell’s “Great Script Restoration” be the way forward for Bitcoin development, despite the proposal’s very broad scope?
Block unicorn Note: Rusty Russell is an active and highly respected developer in the Bitcoin community. He has done outstanding work on the Linux kernel and has contributed to many Bitcoin-related development projects.
Bitcoin was originally designed with a complete scripting language, intended to cover and support any potential security use case users might come up with in the future. As Satoshi Nakamoto put it before he disappeared:
“The nature of Bitcoin is such that once version 0.1 was released, the core design was set in stone for the rest of its lifetime. Because of that, I wanted to design it to support every possible transaction type I could think of. The problem was, each thing required special support code and data fields whether it was used or not, and only covered one special case at a time. It would have been an explosion of special cases. The solution was script, which generalizes the problem so transacting parties can describe their transaction as a predicate that the node network evaluates.” - Satoshi Nakamoto, June 17, 2010
Its entire purpose was to give users a language general enough that they could construct whatever transaction types they wished. That is, to give people room to design and experiment with how to program their own money.
Before he disappeared, Satoshi removed 15 of these opcodes, disabling them entirely, and added a hard limit on the script engine’s stack, capping the size of the data elements that could be operated on at 520 bytes. This was because he had, frankly, screwed up: complex scripts left numerous ways to DoS the entire network (flooding it with junk that is expensive to validate), create huge and costly transactions, and crash nodes.
These opcodes were not removed because Satoshi thought the functionality was dangerous or that people shouldn’t build things with them, but simply (at least ostensibly) because, without resource limits, they could impose worst-case validation costs on the whole network.
Every upgrade to Bitcoin since then has ultimately been an optimization of what remained, correcting other less severe flaws Satoshi left us and extending what the surviving subset of Script can do.
The Great Script Restoration
At the bitcoin++ conference in Austin in early May, Core Lightning developer Rusty Russell made a very ambitious proposal in the first talk of the conference: essentially, re-enabling most of the opcodes Satoshi Nakamoto disabled before he disappeared in 2010.
The development space has been somewhat aimless since Taproot activated in 2021. We all know Bitcoin cannot currently scale to provide truly self-sovereign services to any appreciable fraction of the world’s population, and it may not even be able to scale in a trust-minimized fashion, leaving users dependent on very large custodians and service providers that cannot really escape the long arm of governments.
That is a technical reality of Bitcoin as it exists today, and it is not up for debate. What is worth debating, and is a very contentious topic, is how to address this shortcoming. Since Taproot, everyone has been putting forward very narrowly scoped proposals, each aimed at enabling only specific use cases.
For example, ANYPREVOUT (APO) is a proposal that would allow a signature to remain valid when reused across different transactions, as long as the input script (and optionally the amount) is the same; it is tailored specifically to optimizing the Lightning Network and its multiparty variants. CHECKTEMPLATEVERIFY (CTV) is a proposal requiring that coins can only be spent by a transaction exactly matching a predefined template; it is designed to make chains of pre-signed transactions completely trustless. OP_VAULT is designed specifically for cold-storage timeouts, so that a user can interrupt a withdrawal from cold storage by sending the funds to an even colder multisig setup if their keys have been compromised.
There are many other proposals, but I think you get the gist. Every proposal over the past few years has aimed either at modestly improving scalability or at adding one small feature someone deemed desirable. This is the root of why these discussions have gone nowhere: no one is happy with anyone else’s proposal because it doesn’t enable the use cases they want to see.
No one, other than the proposal authors, believes any proposal is comprehensive enough to be considered a reasonable next step.
This is the logic behind the Great Script Restoration. By pushing for, and analyzing, a full restoration of Script as Satoshi originally designed it, we can actually try to explore the entire feature space we need, rather than bickering and infighting over which small, isolated feature extension is good enough for now.
The opcodes
OP_CAT: pops two elements from the stack and concatenates them into a single element.
OP_SUBSTR: takes a length argument (in bytes), pops an element from the stack, extracts that many bytes from it, and pushes the result back onto the stack.
OP_LEFT and OP_RIGHT: take a length argument, pop an element from the stack, and remove the specified number of bytes from one side or the other.
OP_INVERT, OP_AND, OP_OR, OP_XOR, OP_UPSHIFT, and OP_DOWNSHIFT: take stack elements and perform the corresponding bitwise operation on them.
OP_2MUL, OP_2DIV, OP_MUL, OP_DIV, and OP_MOD: mathematical operators for multiplication, division, and modulo (the remainder after division); the 2MUL/2DIV variants multiply or divide by two.
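To make the stack semantics concrete, here is a minimal sketch that simulates a few of these restored opcodes on a simple byte-string stack. The function names and behaviors are illustrative approximations, not a consensus-accurate reimplementation of Bitcoin Script; the 520-byte check mirrors the stack-element limit mentioned above.

```python
# Illustrative sketch only: a toy stack machine mimicking a few of the
# restored opcodes. Not consensus-accurate Bitcoin Script semantics.

MAX_ELEMENT_SIZE = 520  # the stack-element size limit Satoshi added


def op_cat(stack):
    """OP_CAT: pop two elements, push their concatenation."""
    b, a = stack.pop(), stack.pop()
    result = a + b
    if len(result) > MAX_ELEMENT_SIZE:
        raise ValueError("element exceeds 520-byte limit")
    stack.append(result)


def op_left(stack, n):
    """OP_LEFT: keep only the first n bytes of the top element."""
    stack.append(stack.pop()[:n])


def op_xor(stack):
    """OP_XOR: bitwise XOR of two equal-length elements."""
    b, a = stack.pop(), stack.pop()
    stack.append(bytes(x ^ y for x, y in zip(a, b)))


stack = [b"light", b"ning"]
op_cat(stack)  # stack is now [b"lightning"]
```

Even these trivial operations compose: OP_CAT plus hashing, for example, is enough to rebuild Merkle branches on the stack, which is one reason restoring it is so often requested.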
On top of restoring the opcodes above, Rusty Russell proposed three more intended to simplify how the others combine:
OP_CTV (or TXHASH, or an equivalent): allows fine-grained enforcement that specific parts of a transaction exactly match predefined content.
CSFS (CHECKSIGFROMSTACK): allows verifying a signature over arbitrary data on the stack, not just over the whole transaction, so a script can require that specific pieces of the script, or the data it uses, were signed before execution proceeds.
OP_TWEAKVERIFY: verifies Schnorr-based operations on public keys, such as adding or subtracting an individual public key from an aggregate one. This can be used to ensure that when one party unilaterally exits a shared unspent transaction output (UTXO), everyone else’s funds are sent to an aggregate public key that no longer requires the departing party’s signature for cooperative spending.
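The core idea behind OP_CTV is simple enough to sketch: a coin commits to a hash of predefined transaction fields, and at spend time the verifier recomputes that hash from the actual spending transaction and checks equality. The field selection and serialization below are simplified stand-ins, not BIP-119’s exact commitment structure.

```python
# Illustrative sketch of the template-hash idea behind OP_CTV.
# The serialization here is a made-up simplification, not BIP-119.
import hashlib


def template_hash(version: int, locktime: int,
                  outputs: list[tuple[int, bytes]]) -> bytes:
    """Hash a predefined set of transaction fields into one commitment."""
    h = hashlib.sha256()
    h.update(version.to_bytes(4, "little"))
    h.update(locktime.to_bytes(4, "little"))
    for amount, script in outputs:
        h.update(amount.to_bytes(8, "little"))
        h.update(len(script).to_bytes(1, "little"))
        h.update(script)
    return h.digest()


def op_ctv(committed: bytes, spending_tx: dict) -> bool:
    """Succeeds only if the spending tx matches the committed template."""
    return template_hash(spending_tx["version"],
                         spending_tx["locktime"],
                         spending_tx["outputs"]) == committed
```

Because the commitment covers the outputs, any deviation in amounts or destination scripts changes the hash and the spend fails, which is exactly what makes chains of predefined transactions trustless.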
Why do this
Second-layer networks are essentially extensions of Bitcoin’s base layer, and their functionality is constrained by the functionality of the base layer. The Lightning Network required three separate soft forks before it could actually be implemented: CHECKLOCKTIMEVERIFY (CLTV), CHECKSEQUENCEVERIFY (CSV), and Segregated Witness.
You can’t build a more flexible second layer without a more flexible base layer. The only shortcut is to trust a third party, plain and simple, and I would hope we all aspire to remove as much trust as possible from every aspect of interacting with Bitcoin at scale.
We need to be able to do things we currently cannot: safely aggregate more than two people into a single unspent transaction output (UTXO), trustlessly, on the base layer. Bitcoin Script is simply not flexible enough for that today. At the most basic level, we need covenants; we need scripts that can actually enforce the finer details of how a transaction executes, to ensure, for example, that one user safely withdrawing their own funds cannot put other users’ funds at risk.
At a high level, this is the functionality we need:
Introspection: We need to be able to actually inspect specific details of the spending transaction itself on the stack, e.g. “this amount of money goes to this output’s public key.” That would let me withdraw my own funds through my own specific Taproot branch while guaranteeing I cannot take anyone else’s: the executed script would ensure that everyone else’s funds are sent back to an address built from the remaining users’ public keys, so no participant can cause another to lose money.
Forward Data Carrying: Let’s say we take this concept a step further and have a single UTXO shared by a large number of people, with anyone able to enter and exit at will. We then need a way to track who owns how much, typically using a Merkle tree and its root. That means when someone leaves, we have to make sure the change UTXO holding everyone else’s funds records who is entitled to what. This is essentially a specialized use of introspection.
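The balance-tracking idea above can be sketched with a few lines of Python: each participant’s balance becomes a Merkle leaf, and the shared UTXO carries only the root; every exit produces a new root over the remaining balances. The leaf encoding here is hypothetical, chosen purely for illustration.

```python
# Sketch of "forward data carrying": a Merkle root committing to every
# participant's balance in a shared UTXO. Leaf format is hypothetical.
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def balance_leaf(pubkey: bytes, sats: int) -> bytes:
    """One leaf per participant: hash of (pubkey, balance in sats)."""
    return h(pubkey + sats.to_bytes(8, "big"))


def merkle_root(leaves: list[bytes]) -> bytes:
    """Pair-and-hash upward until a single 32-byte root remains."""
    if not leaves:
        return h(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last leaf on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
</```

Any change to any participant’s balance changes the root, so a script that enforces the correct new root in the change output (via introspection) prevents a departing user from misrepresenting everyone else’s entitlements.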
Public key modifications: We need to be able to verify modifications to an aggregate public key on the stack. In a UTXO-sharing scheme, the goal is to facilitate cooperative, efficient movement of funds through an aggregate public key that includes all participants. When someone unilaterally leaves the shared UTXO, we need to remove their individual public key from the aggregate. If all possible combinations have not been computed in advance, the only option is to verify that subtracting one public key from the aggregate produces a valid public key composed of the remaining individual keys.
How to ensure safety: varops
As I said above, the reason all of these opcodes were disabled in the first place was denial-of-service attacks: carefully crafted scripts could make validation expensive enough to crash the nodes that make up the network. One way to solve this is to cap the resources any of these opcodes can consume.
For signature verification, the most expensive operation in Bitcoin Script, we already have such a solution: the signature-operations (sigops) budget. Each use of a signature-checking opcode consumes part of an allowance of signature operations per block, putting a hard ceiling on the validation cost a transaction can impose on users verifying a block.
Taproot changed how this works: instead of a single global per-block limit, each transaction gets its own sigops limit, proportional to its size. This amounts to essentially the same global bound, but makes it easier to reason about how many sigops each individual transaction has available.
Taproot’s move to a per-transaction sigops limit opens the door to a generalized approach, which is exactly what Rusty Russell suggests with his varops limit. The idea is to assign each reactivated opcode a cost that accounts for the worst case it could create, i.e. the most expensive computational load it could impose during validation. Each opcode would then draw against its own sigops-like budget, limiting the resources it can consume during verification. That budget would scale with the size of the transaction using those opcodes, keeping per-transaction reasoning simple while still summing to an implicit global per-block limit.
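The accounting is easy to picture with a toy budget checker. The per-opcode costs and the per-byte budget below are invented for illustration; Rusty’s actual proposal would assign real worst-case costs, and a real validator would charge costs during script execution rather than over a flat opcode list.

```python
# Toy sketch of a varops-style budget. All numbers are hypothetical.

VAROPS_COST = {            # made-up worst-case validation costs per opcode
    "OP_CAT": 10,
    "OP_MUL": 50,
    "OP_HASH256": 100,
}

BUDGET_PER_BYTE = 5        # made-up budget granted per byte of transaction


def validate_script(opcodes: list[str], tx_size_bytes: int) -> bool:
    """Reject any script whose cumulative opcode cost exceeds the
    budget its transaction size grants it."""
    budget = tx_size_bytes * BUDGET_PER_BYTE
    for op in opcodes:
        budget -= VAROPS_COST.get(op, 1)   # unknown opcodes cost 1 here
        if budget < 0:
            return False   # over budget: worst-case cost is bounded
    return True
```

The key property is the same one sigops already provides: however adversarial the script, the validation work a block can demand is capped by its size.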
This would close off the denial-of-service attacks that caused Satoshi to disable all of these opcodes in the first place.
Forward momentum
I’m sure many of you are thinking, “that’s too big a change.” I understand that, but I think the important thing to grasp about this proposal is that we don’t have to do all of it. Its value is not necessarily in restoring every one of these features, but in stepping back, looking across a large suite of building blocks, and asking ourselves what we actually want in terms of functionality.
That would be a complete shift from the last three years of bickering and debate, in which we argued over tiny, narrow changes that each enabled only certain features. It would be a town square where everyone can come together and look at the road ahead as a whole. Maybe we eventually restore all of these features; maybe we end up activating only the few that consensus agrees need to be turned on.
Whatever the end result, this can be a change that positively impacts the entire conversation about our future direction. Instead of fumbling around debating which murky route to take next, we can actually map out and fully understand the situation.
This is by no means the path forward that we have to take, but I think this is our best opportunity to decide which route we want to take. It’s time to start working together again in a practical and productive way.