On June 10th, Ruff founder Roy Li was invited to share his experiences at the Beijing stop of the BiBiNews Global Tour at Garage Café.
Roy Li is a well-known hacker and was the youngest R&D Director of Android and Symbian operating systems in North America. He is an IoT domain expert and graduate supervisor at Fudan University; in no way is he a “noob”. He is active on Zhihu, a Chinese social media website, where he participates in Q&A sessions with hundreds of thousands of subscribers, earning him the nickname “Zhihu Guru”. He is also active on other online communities, where he participates in discussions and resolves disputes.
What is the most valuable aspect of blockchain technology? Li does not believe cryptography is the most important part; rather, performance outweighs consensus algorithms. Engineering capability is therefore the most important aspect of blockchain, and the next few years will be critical in the competition for the best engineers.
The following are additional remarks from Roy Li:
1. Precondition — Blockchain with no economic drive is unfeasible
Ropsten, Kovan, and Rinkeby are three Ethereum testnets. You must choose a testnet before you deploy a smart contract.
Morden was the first Ethereum testnet. It had two mainstream clients, Geth and Parity, and syncing blocks on Morden was time-consuming due to compatibility issues between the two.
Ropsten is a more compatible testnet, based on Morden. After it had run for some time, however, another problem emerged: its Ether has no value, because in essence, Ropsten is a network with no economic drive.
Ethereum relies on PoW (Proof of Work) mining. Since Ropsten's Ether has no value, no miners will work on it. Consequently, the network hash rate becomes very low and it costs very little to attack the network. Normally, it takes 4–5 million gas to mine a block, but a malicious attack can push a block to 9 billion gas, at which point the testnet is completely clogged.
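The economics above can be put in back-of-the-envelope terms (the figures below are illustrative assumptions, not measurements): the cost of attacking a PoW chain scales with the honest hash rate, so when valueless coins drive miners away, attacks become almost free.

```python
def attack_cost_per_hour(network_hashrate_ghs: float,
                         cost_per_ghs_hour: float) -> float:
    """Rough cost to match 100% of the network's hash rate for one hour.

    On a testnet whose coins are worthless, honest miners leave,
    the network hash rate plummets, and this number becomes tiny.
    """
    return network_hashrate_ghs * cost_per_ghs_hour

# Made-up figures: a healthy mainnet vs. an abandoned testnet.
mainnet_cost = attack_cost_per_hour(300_000.0, 0.02)  # substantial
testnet_cost = attack_cost_per_hour(50.0, 0.02)       # trivially cheap
print(mainnet_cost, testnet_cost)
```

The point is not the exact numbers but the proportionality: with no economic drive there is no hash rate, and with no hash rate there is no security.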
Kovan was proposed by Parity, a client written in Rust by Gavin Wood, author of the Ethereum Yellow Paper. Performance-wise, Rust did not serve it as well as Go served Geth. The Parity founders gave the matter a lot of thought before deciding to discard PoW in favor of PoA (Proof of Authority). PoA works by choosing a few authoritative figures from the community to validate the testnet in a fair and impartial manner. It is a centralized approach, but the network runs smoothly provided people act honestly on Kovan.
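The PoA idea can be sketched in a few lines (a toy illustration, not Parity's actual Aura engine; the validator names are assumptions): a fixed whitelist of authorities takes turns sealing blocks, so no hashing work, and hence no economic incentive, is required.

```python
from itertools import cycle

# Assumed whitelist of trusted community validators.
AUTHORITIES = ["validator-a", "validator-b", "validator-c"]

def seal_blocks(transactions_per_block, authorities=AUTHORITIES):
    """Assign each block to the next authority in round-robin order."""
    chain = []
    turn = cycle(authorities)
    for height, txs in enumerate(transactions_per_block):
        chain.append({"height": height, "sealer": next(turn), "txs": txs})
    return chain

chain = seal_blocks([["tx1"], ["tx2", "tx3"], [], ["tx4"]])
```

The design trade-off is exactly the one Li describes: the scheme is centralized (the whitelist is fixed), but it keeps working even when the block reward is worthless.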
Although Parity proposed Kovan, it did so in its own interest: users could access Kovan only through Parity, not Geth. The Geth side therefore developed a cross-compatible version of its own, and this is how Rinkeby was created.
This story tells us that so-called community governance and decentralization would not be feasible if blockchain had no economic drive.
2. Consensus — There is no good or bad consensus; it depends only on our needs
People in various communities argue over whether the DPoS (Delegated Proof of Stake) consensus of EOS is reliable and democratic, given that, like PoW mining pools, it is centralized.
Realistically, we cannot answer whether any consensus mechanism is good or bad.
When we choose a consensus mechanism, we are inevitably caught in the CAP trilemma: consistency, availability, and partition tolerance cannot all be achieved at once. Therefore, we cannot pick an appropriate consensus mechanism in isolation from the application scenario.
Why did EOS select 21 nodes? Why can't there be more supernodes? Because it uses PDBFT (Practical and Delegated Byzantine Fault Tolerance) consensus.
The advantage of such a consensus is its fault tolerance: even if nodes are attacked, the network is not in danger as long as fewer than one third of the nodes are malicious. Unfortunately, however, performance falls off quadratically as the node count grows, since the resources consumed by communication grow with the square of the number of nodes. As a result, EOS would essentially stop working with 100 nodes.
In terms of performance, the fewer EOS nodes the better, because rapid block generation is then less likely to cause problems such as unintended forks. However, the low node count means poor scalability, unlike a chain such as Qtum, which can have thousands of nodes.
Recently, community groups have been arguing over random-number selection. In Bitcoin, the first node to solve a math problem and find the desired hash value keeps the ledger. EOS chooses certain nodes to keep the ledger, and in PoS, the nodes holding the most tokens keep the ledger. Random-number selection, in contrast, effectively rolls a die: a randomly generated number determines which node keeps the ledger.
Based on this mechanism, many projects use random numbers to pick nodes to fill blocks, keep the ledger, and broadcast transactions. In this situation, harming the network is cheap: since the ledger keeper is chosen at random, what stops it from abusing its power?
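The contrast between the two selection schemes can be sketched as follows (a toy illustration; the node names and stakes are made up). Pure random selection gives every node an equal shot regardless of what it has at stake, while PoS-style weighting makes the likely leaders the ones with the most to lose:

```python
import random

def random_leader(nodes, seed=None):
    """Pure random-number selection: every node is equally likely,
    regardless of stake or work, which is why misbehaving is cheap."""
    rng = random.Random(seed)
    return rng.choice(nodes)

def stake_weighted_leader(stakes, seed=None):
    """PoS-style selection: probability proportional to tokens held,
    so the likely leaders have the most to lose by attacking."""
    rng = random.Random(seed)
    nodes = list(stakes)
    weights = [stakes[n] for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]

stakes = {"alice": 900, "bob": 90, "carol": 10}
leader = stake_weighted_leader(stakes, seed=42)
```

With these stakes, alice wins roughly 90% of the rounds, which is exactly the alignment of incentives Li describes in the next paragraph.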
Under the PoW mechanism, a node must work hard to become the ledger keeper, which makes abusing the network very expensive. Under PoS, nodes themselves hold many tokens, so harming the network is against their own interest. The sunk cost under PoS is lower than the cost of mining machines and electricity under PoW, but it still makes attacking the network unreasonable, because the rewards for honest participation are already very high.
Ethereum can process only about 20 transactions per second under PoW. To reach Visa's throughput of 1,500 to 2,000 TPS, we can only use a relatively centralized mechanism such as PoS or DPoS.
As such, there is no good or bad consensus; it depends solely on the demands of the network. Everybody knows that PoW is the most secure consensus mechanism, so the question now is how to improve its performance and modify the existing network without compromising security.
3. Performance — Performance outweighs consensus algorithms
Is it possible for Ethereum to reach 200 or even 2,000 TPS, like Visa?
We have been thinking about this question. How do we improve performance without damaging the existing architecture, resorting to centralization, expanding desperately, or compromising security? The entire industry is searching for the answer.
I have always thought this question is more important than developing new complex consensus algorithms.
Sharding removes the need for the entire network to reach consensus on every transaction. For example, say I just sent 100 yuan. The nodes are divided into ten groups, each group validates a share of the transactions, and finally all groups hand in their validated transactions, which are bundled back together. This is sharding in a nutshell.
Zilliqa is widely discussed for its use of sharding. It made a good start, yet there is still a problem.
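The split/validate/recombine flow can be sketched in a few lines (a toy illustration with a deliberately trivial validity check, not any project's real protocol):

```python
def shard(txs, n_shards):
    """Deal transactions round-robin into n_shards groups."""
    groups = [[] for _ in range(n_shards)]
    for i, tx in enumerate(txs):
        groups[i % n_shards].append(tx)
    return groups

def validate_slice(txs):
    """Each shard checks only its own slice (here: amount must be positive)."""
    return [tx for tx in txs if tx["amount"] > 0]

def process_block(txs, n_shards=10):
    groups = shard(txs, n_shards)
    validated = [validate_slice(g) for g in groups]  # runs in parallel in a real system
    # Hand in every shard's result and bundle them back into one block.
    return [tx for g in validated for tx in g]

txs = [{"frm": "a", "to": "b", "amount": 100},
       {"frm": "a", "to": "c", "amount": -5}]
block = process_block(txs, n_shards=2)
```

The catch, as the following paragraphs explain, is that this only works cleanly when the shards' validity checks do not depend on each other's results.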
Transaction-based sharding is relatively straightforward under a UTXO (Unspent Transaction Output) model, as used by BTC, Qtum, and BCH. A UTXO model records all historical transaction outputs but does not track account balances. While several groups of nodes validate transactions in parallel, the last node only needs to check that the balance is sufficient; if it is, it does not matter who pays first.
Say, for instance, I sent you 100 yuan and then sent you another 200 yuan, only to find my balance could not cover the 200 yuan. This is the most common kind of conflict, and a UTXO model handles it well. If the same conflict occurs in Ethereum smart contracts, however, it becomes very tricky, because Ethereum executes transactions sequentially: you have to complete one transaction before making another. In that case, I cannot divide the work into ten shards and merge the results at the final stage. Thus, it is difficult to perform concurrent computations.
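Why the UTXO model handles this conflict gracefully can be shown with a toy sketch (my own simplification, not Bitcoin's actual validation rules): the ledger is a set of unspent outputs rather than account balances, so a transaction that tries to overspend names a specific output that is already gone, and the conflict is visible regardless of processing order.

```python
def apply_tx(utxos, tx):
    """utxos: dict mapping output_id -> amount.
    tx spends the outputs named in tx['inputs'] and creates tx['outputs'].
    Returns the new UTXO set, or None if the tx is invalid."""
    if not all(i in utxos for i in tx["inputs"]):
        return None  # spends a missing or already-spent output: reject
    spent = sum(utxos[i] for i in tx["inputs"])
    if spent < sum(tx["outputs"].values()):
        return None  # insufficient value in the consumed outputs
    remaining = {k: v for k, v in utxos.items() if k not in tx["inputs"]}
    remaining.update(tx["outputs"])
    return remaining

utxos = {"coinbase-1": 100}
ok = apply_tx(utxos, {"inputs": ["coinbase-1"], "outputs": {"pay-b": 100}})
# A second spend of the same output fails no matter which shard sees it first:
conflict = apply_tx(ok, {"inputs": ["coinbase-1"], "outputs": {"pay-c": 200}})
```

An account-balance model (as in Ethereum) has no such per-output bookkeeping, which is why ordering suddenly matters there.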
There are some infrastructure projects in the market striving to simplify the transaction process. For instance, I send you 100 yuan, and then you send 100 yuan to someone else. As long as the two transactions are packed as an atomic operation (i.e. both transactions are packed or neither is), they can be cleared together when packed. The whole process is packed simultaneously and recorded in sequence.
Dan Larimer has also considered how to improve the performance (the efficiency of packing blocks) of a single node or a group of nodes. There is a very interesting section in the EOS Technical White Paper: EOS miners are supernodes with strong computational power. Each time hundreds of transactions are broadcast to the network, they need to be packed into a block. Upon receiving the broadcast transactions, these supernodes rearrange the contracts, deciding how to order them so they can be executed as much in parallel as possible. Miners that arrange the contracts well receive higher rewards.
It is good that EOS made this decision. However, EOS supernodes do not actually know how to pack transactions for better concurrent execution. I have asked around in the industry, too. In short, the best way to pack transactions is to split them as finely as possible before combining them; if they cannot be combined, go back to the starting point and try again.
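One simple packing heuristic along these lines can be sketched as follows (my own illustration, not EOS's actual scheduler): transactions that touch disjoint accounts cannot conflict, so a packer can greedily group them into "lanes" that execute concurrently.

```python
def pack_parallel(txs):
    """Each tx is a (sender, receiver) pair. Greedily assign each tx to
    the first lane whose accounts don't overlap with it; all txs within
    a lane touch disjoint accounts, so the lane can execute in parallel."""
    lanes = []  # list of (accounts_used, txs_in_lane)
    for tx in txs:
        touched = {tx[0], tx[1]}
        for used, lane in lanes:
            if not (touched & used):    # no shared account: safe to add
                used |= touched
                lane.append(tx)
                break
        else:
            lanes.append((set(touched), [tx]))
    return [lane for _, lane in lanes]

lanes = pack_parallel([("a", "b"), ("c", "d"), ("a", "c"), ("e", "f")])
```

Here `("a", "c")` conflicts with the first lane (it shares accounts `a` and `c`), so it is pushed into a second lane, while the independent transfers all land in the first.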
Whether it is EOS or the decentralized (P2P) Ethereum network, only at some point close to centralization can substantial applications run on chain. Otherwise, applications such as CryptoKitties will clog the Ethereum network and nobody will be able to develop more applications on chain.
We all know that an atomic operation is indivisible, but is that the whole story? Take the 100-yuan transaction again: an account can send 100 yuan only after it has received 100 yuan. Could it work this way: when the network knows a 100-yuan transaction needs to be packed, it generates conditional data so that the received 100 yuan and the sent 100 yuan are packed simultaneously? Two successive transactions would then be performed together with atomicity, or neither would be performed.
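That conditional-atomicity idea can be sketched with a scratch-copy-and-commit pattern (my own illustration of the concept, not any chain's implementation): apply both dependent transfers to a copy of the state, and commit only if every intermediate balance stays non-negative.

```python
def apply_atomic_pair(balances, receive, send):
    """receive/send are (account_from, account_to, amount) transfers.
    Apply both on a scratch copy of the balances; commit only if no
    intermediate balance goes negative, otherwise roll back entirely."""
    scratch = dict(balances)
    for frm, to, amount in (receive, send):
        scratch[frm] = scratch.get(frm, 0) - amount
        scratch[to] = scratch.get(to, 0) + amount
        if scratch[frm] < 0:
            return balances  # roll back: neither transfer happens
    return scratch

start = {"a": 100, "b": 0, "c": 0}
# b can forward 100 to c only because the receive is packed with it:
ok = apply_atomic_pair(start, ("a", "b", 100), ("b", "c", 100))
# If the dependent send overdraws, the whole pair is rejected:
bad = apply_atomic_pair(start, ("a", "b", 100), ("b", "c", 200))
```

Packing the pair as one unit is what lets two logically sequential transactions land in the same block without breaking atomicity.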
Storage is also a major problem. The Ethereum blockchain has reached roughly 1 TB in size. Mainstream blockchains currently use LevelDB, which offers strong sequential read/write performance; since many blockchain operations are sequential reads and writes, they are fast.
Even with solid-state drives, each Ethereum node needs to sync from the first block to the last. If the drive's read/write speed is insufficient or poorly utilized, the node will struggle to catch up with the constantly generated blocks.
Do we know what to look for when reading a technical white paper? Check the engineering capability: see whether the project can make it faster to sync nodes, execute contracts, or pack transactions in order to maintain the ledger.
The problem we face is how to increase the current read/write speed further. When disk read/write speed is one thousandth of memory speed, how do we make reads and writes faster?
In the centralized world, Weibo and WeChat do not read data straight from server hard drives. If they did, there is no way the services would work! They use memory as a buffer before finally touching the server hard drives. Sina, for example, has seven cache layers, and Tencent has nine.
In blockchain, nodes do not need to maintain the same read/write capacity as Weibo or WeChat. Nor can I simply load all the data into memory to build a highly available architecture, because an unexpected blackout would wipe it out and my nodes could no longer be called robust. Fortunately, some excellent products available today can raise disk and memory read/write throughput via message queues.
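The buffering idea can be sketched as follows (a toy illustration: real deployments use message-queue systems and write-ahead logs to survive power loss, which this sketch deliberately omits): writes land in a fast in-memory queue and are flushed to disk in large sequential batches.

```python
from collections import deque

class BufferedLedgerWriter:
    """Buffer ledger records in memory and flush them to 'disk' in batches,
    turning many small random writes into one large sequential write."""

    def __init__(self, flush_every=3):
        self.queue = deque()   # fast in-memory buffer
        self.disk = []         # stands in for the slow hard drive
        self.flush_every = flush_every

    def write(self, record):
        self.queue.append(record)
        if len(self.queue) >= self.flush_every:
            self.flush()

    def flush(self):
        self.disk.extend(self.queue)  # one sequential batch write
        self.queue.clear()

w = BufferedLedgerWriter(flush_every=3)
for i in range(7):
    w.write({"block": i})
```

After seven writes, six records have been flushed in two batches and one still sits in memory; the durability gap for that last record is precisely the blackout risk Li mentions.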
Many blockchain practices have empowered previously marginal technologies, which become genuinely useful in blockchain. For example, a payment gateway is a small plug-in in a centralized system, but it is a popular component in blockchain. Infrastructure products become valuable when they have a large impact on the industry.
Contracts depend on storage. What is amazing about Bitmain is that there is no algorithm it cannot handle. Zcash's zero-knowledge-proof-based algorithm is over 50 times more complex than Ethereum's, yet Bitmain was able to crack it. You can see how incredible Bitmain is in this domain!
What can Bitmain not handle? Bitmain has long been exploring how to execute contracts quickly. I happen to be good at this and communicated with them a few days ago. With miners packing a growing amount of gas per block, Ethereum can achieve over 100 TPS without sharding, and over 5,000 TPS with sharding. Vitalik has been in a good mood recently, singing and dancing, because he has found a solid way forward for his public blockchain.
4. Developer-friendliness — The key to competing for application ecosystems
When we look at a public blockchain ecosystem or a blockchain infrastructure ecosystem, we should not be looking at how many investors or worshippers it has. To achieve success, EOS must compete against Ethereum for application ecosystems.
Why is Ethereum so successful and so popular? Because it is easy to use and developer-friendly. Even a typical college student can write an application on Ethereum after learning Solidity. Ethereum provides a Turing-complete development framework with which developers can rapidly build their own blockchain applications from templates such as ERC20 and ERC721. In general, Ethereum plays a major role in the development of this thriving industry.
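To show how little logic such a template actually demands, here is the accounting core of an ERC20-style token sketched in Python (the real standard is a Solidity interface defining `transfer`, `approve`, and `transferFrom`; this toy only mirrors its bookkeeping rules):

```python
class Token:
    """Minimal ERC20-like ledger: balances plus delegated allowances."""

    def __init__(self, supply, owner):
        self.balances = {owner: supply}
        self.allowances = {}  # (owner, spender) -> approved amount

    def transfer(self, frm, to, amount):
        if self.balances.get(frm, 0) < amount:
            return False  # insufficient balance: reject
        self.balances[frm] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount
        return True

    def approve(self, owner, spender, amount):
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender, frm, to, amount):
        if self.allowances.get((frm, spender), 0) < amount:
            return False  # spender not approved for that much
        if not self.transfer(frm, to, amount):
            return False
        self.allowances[(frm, spender)] -= amount
        return True

t = Token(1000, "alice")
t.transfer("alice", "bob", 100)
```

A few dozen lines of state transitions is the whole contract; everything else (consensus, gas, storage) is supplied by the platform, which is exactly what makes Ethereum so approachable for new developers.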
I wanted to develop a set of smart home applications when I started Ruff. I had worked in technology R&D for years and thought I was great at it. However, once I started, I found that embedded application development was a real challenge: too difficult and too expensive.
I wanted to write applications the way one writes a webpage. After a long period of exploration, I turned the code on the left into the much simplified version on the right (see image below). The language is highly abstract and high-level, implemented with interpreters and virtual machines: the bottom layer is Turing complete, while the top layer is Turing incomplete.
Classic code of an embedded program (L) | Ruff Chain code equivalent (R)
Two to three years ago, the blockchain industry was dominated by scientists. Professors in various disciplines, algorithm gurus, and cryptographers swept the industry. From my perspective, cryptography is not that important. The core of blockchain lies in engineering capability: what matters most is implementing applications efficiently and providing sound technical support for them. The upcoming two or three years are the most crucial in the competition for good engineers.
Ruff Chain is an underlying public blockchain which combines blockchain technology with the Internet of Things. We are dedicated to deeply integrating IoT with distributed ledger technology by providing a fast method to upload valuable data from conventional assets to the blockchain. This allows Ruff Chain to address issues affecting IoT data terminals such as security, siloed information, and consistency.
The Ruff Chain token (RUFF) is already live on six major exchanges, including Huobi, Gate.io, and OTCBTC. We are also pressing ahead into overseas markets, including Korea and the United States. There are over 30,000 members in the Ruff Chain official channels and 15,000 developers using the Ruff ecosystem.
The Garage Café is "the birthplace of China's bitcoin community". When established, it was designed to be the first coffee house dedicated to startups, an idea that immediately attracted a large group of innovative pioneers. The trendy atmosphere made it the ideal greenhouse for cultivating blockchain. Dong Zhao, a VIP member of the "Bitcoin Suckers' Club", was the CTO of Garage Café at the time. Thanks to this, the seed of blockchain was planted at Garage Café, where it has taken root and sprouted since.