Starting with this issue, Conflux is launching a series of popular science articles on simple technical principles, to help you judge which concepts in project promotions are actually achievable, and which can only be realized with compromises.
In this first issue, we start from the blockchain's "impossible triangle" and discuss what must be sacrificed in the pursuit of maximum efficiency. A widely circulated concept in blockchain media is the impossible triangle: the claim that efficiency, security, and decentralization cannot all be achieved at once. Appearing almost as frequently is the claim that some public-chain project has "broken" the impossible triangle. Some media have used this phrase when promoting Conflux as well.
However, Conflux has never officially claimed to break the impossible triangle, and we do not consider it a rigorous concept. One can only say that when the concept was proposed, no one had yet done all three things well at the same time, and no one had rigorously proved that doing so is impossible.
Today we introduce another impossible triangle, one that no blockchain can bypass, regardless of whether it is a public chain or a consortium chain, whether it uses PoW or PoS, and whether it adopts Nakamoto consensus, BFT, or some other mechanism. This impossible triangle consists of three goals. (For ease of understanding, we avoid rigorous formal definitions and instead describe the ideas roughly.)
1. Synchronization and verification by all nodes
In a public chain network, correctness and security depend on the endorsement of certain nodes. For example, in Bitcoin or Ethereum, the protocol requires that when a miner mines a block, it must ensure that every transaction in the new block, and in every historical block it builds on, is correct. In other words, when a Bitcoin miner produces a block, it endorses the correctness of all preceding blocks. In EOS, the super nodes endorse the correctness of blocks with their signatures. Here we call such nodes the nodes participating in consensus.
Synchronization and verification by all nodes requires that every confirmed transaction has been synchronized and verified by every node participating in consensus (attackers excepted).
This goal is about security. Imagine that someone wants to forge invalid signatures, create illegal transactions, and steal your assets. If only a small number of the nodes participating in consensus synchronize and verify a transaction, and the judgment of that small group is accepted directly while the other nodes never see it, then the chance of an illegal transaction slipping into the transaction history is higher than when every node participating in consensus synchronizes and verifies it. The security of the two is not the same.
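To make the requirement concrete, here is a minimal sketch of what verification by a consensus node could look like: before endorsing a new block, the node checks every transaction in it and refuses to build on any history it has not already validated. The `verify_signature` stub and the dictionary-based block format are purely illustrative assumptions, not any specific chain's rules.

```python
def verify_signature(tx) -> bool:
    # Placeholder for a real cryptographic signature check (and for the other
    # validity rules a real chain enforces: balances, nonces, scripts, etc.).
    return tx.get("signature_valid", False)

def accept_block(block, validated_blocks) -> bool:
    """Accept a block only if its parent is already validated and every
    transaction in it passes verification."""
    if block["parent"] not in validated_blocks:
        return False  # never endorse unverified history
    if not all(verify_signature(tx) for tx in block["transactions"]):
        return False  # a single bad transaction rejects the whole block
    validated_blocks.add(block["hash"])
    return True

validated_blocks = {"genesis"}
block = {
    "hash": "block-1",
    "parent": "genesis",
    "transactions": [{"sender": "0xAlice", "signature_valid": True}],
}
print(accept_block(block, validated_blocks))  # True
```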
2. Ultra-high throughput
If the average throughput of finally confirmed transactions exceeds 11,000 TPS, we call it ultra-high throughput. (Each transaction is assumed to be 250 bytes.)
3. Low bandwidth requirements
For each node participating in consensus, the minimum requirement on network bandwidth is no higher than 20 Mbps (2.5 MB/s).
This goal is about decentralization. The lower the threshold for participation, the more people can join the consensus, which favors decentralization.
These are the three goals of this impossible triangle, and the reason they conflict is easy to see. A node with only 20 Mbps of bandwidth can download at most 2.5 MB of data per second, which is roughly 10,000 transactions. If the network's average throughput of finally confirmed transactions exceeds 11,000 TPS, a node with only 20 Mbps of bandwidth simply cannot synchronize and verify every transaction.
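For readers who want to check the arithmetic, here is a small back-of-the-envelope calculation. The 20 Mbps bandwidth and 250-byte transaction size are the figures used above; block headers and protocol overhead are ignored, so the real ceiling is even lower than this estimate.

```python
# Rough bandwidth ceiling on verifiable throughput, using the article's figures.
BANDWIDTH_MBPS = 20     # minimum bandwidth requirement (megabits per second)
TX_SIZE_BYTES = 250     # assumed average transaction size

bytes_per_second = BANDWIDTH_MBPS * 1_000_000 / 8   # 2.5 MB/s of download capacity
max_tps = bytes_per_second / TX_SIZE_BYTES

print(f"Download capacity: {bytes_per_second / 1_000_000} MB/s")
print(f"Upper bound on verifiable throughput: {max_tps:.0f} TPS")
# Prints 10000 TPS -- below the 11,000 TPS "ultra-high throughput" target,
# so a 20 Mbps node cannot keep up with every transaction.
```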
So, faced with this difficulty, what trade-offs can be made?
1. Give up synchronization and verification by all nodes
Among such solutions, sharding is the best known. The general idea of sharding is to logically divide the blockchain into several shards, place unrelated, non-conflicting transactions into different shards, and let each shard be synchronized and verified by only a subset of miners. Miners are not responsible for the correctness of transactions in other shards.
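As a rough illustration of this idea, the sketch below maps each transaction to a shard by hashing the sender's address, and a node only keeps (and verifies) the transactions of the shards assigned to it. The shard count, hashing rule, and assignment are assumptions made for illustration, not any particular project's design.

```python
import hashlib

NUM_SHARDS = 4

def shard_of(address: str) -> int:
    """Deterministically map an account address to a shard."""
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

def transactions_for_node(txs, my_shards):
    """Keep only the transactions this node is responsible for verifying."""
    return [tx for tx in txs if shard_of(tx["sender"]) in my_shards]

txs = [
    {"sender": "0xAlice", "receiver": "0xBob", "amount": 10},
    {"sender": "0xCarol", "receiver": "0xDave", "amount": 5},
]
# A node assigned only to shard 0 never downloads or checks transactions
# that fall into shards 1-3.
print(transactions_for_node(txs, my_shards={0}))
```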
Sharding is one way to raise throughput, but it sacrifices part of the security. After all, if someone wants to forge signatures and create illegal transactions to steal your assets, having every node in the network help block the illegal transaction is not the same level of security as having only a small number of nodes do so. That said, for an account address that only holds pocket money, users may be more sensitive to transaction costs than to security, so this direction is well worth exploring.
However, comparing TPS under a sharding scheme with TPS achieved by other projects under full synchronization and verification by all nodes is not a scientific comparison.
Another idea is to use cryptographic tools such as zero-knowledge proofs or verifiable computation, so that a node does not have to synchronize every transaction: it only needs to synchronize block headers and a few cryptographic elements, yet can still verify that a block's Merkle root is correct. Of course, there are many pitfalls along this path that remain to be solved; if we get the chance, we will discuss them in a future article.
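As a taste of what such lightweight verification looks like, the sketch below checks that a transaction is included under a block header's Merkle root using only a short proof path rather than the full transaction list. The hash choice and proof format are illustrative assumptions; note also that proving the root summarizes only valid transactions is exactly the harder problem alluded to above.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, proof, root: bytes) -> bool:
    """proof is a list of (sibling_hash, sibling_is_left) pairs from leaf to root."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Build a tiny 2-leaf tree so the example is self-contained.
tx0, tx1 = b"tx0", b"tx1"
root = h(h(tx0) + h(tx1))          # the Merkle root a block header would carry
proof_for_tx1 = [(h(tx0), True)]   # sibling h(tx0) sits on the left
print(verify_merkle_proof(tx1, proof_for_tx1, root))  # True
```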
2. Give up high TPS
Giving up high TPS here means giving up throughput above 10,000 TPS under existing network conditions. Conflux retains decentralization and security, so it keeps synchronization and verification by all nodes as well as low bandwidth requirements: you can still be a miner on a home network connection, and every transaction is verified by every miner. If you want to keep these two points, efficiency has a ceiling.
3. Give up low bandwidth requirements
In some consensus mechanisms, ordinary users do not participate in synchronizing and verifying transactions; instead, a small number of special nodes are selected, by some method, to carry out consensus. In this case we can assume that every candidate node has prepared ample computing resources, such as a better CPU, a larger hard disk, and more network bandwidth, and there is no need to set the minimum configuration requirements very low.
The next time you see a project claiming more than 10,000 TPS, or even unlimited scalability, look at which corner of this impossible triangle it gives up. Does it give up the first point or the third? If the first is given up, does the project adopt a sharding scheme, or some other modification? Will that modification create security problems, and how are they solved? If the third is given up, is the high TPS built on higher network bandwidth requirements, or is it "infinitely scalable" only under the assumption of unlimited network bandwidth?