
Technical Analysis by Artela: Why Is "Parallel EVM" the Last Gamble of the EVM Chain?

Recently, Paradigm made a big bet, leading a massive $225 million financing round for Monad, which drew strong market attention to "Parallel EVM". So, what problem does "Parallel EVM" solve? What are the bottlenecks and keys to its development? In my view, "Parallel EVM" is the EVM chain's last gamble against high-performance layer1 chains, and the survival of the Ethereum EVM ecosystem hinges on it. Why? Let me share my understanding:

Since the Ethereum EVM can only execute transactions serially, EVM-compatible layer1 chains and EVM-compatible layer2 chains inherit the same performance constraints, because they are built on essentially the same framework for handling state and transaction finality.

High-performance layer1 chains such as Solana, Sui, and Aptos, however, were built for parallel execution from the start. Against this backdrop, if chains with EVM genes want to meet the impact of high-performance layer1 public chains head-on, they must make up for their inherent lack of "parallel" capability. How? For the technical principles and details, I will take the new parallel EVM chain @Artela_Network as an example:

1) Enhanced EVM layer1 chains, represented by Monad, Artela, SEI, etc., greatly improve TPS while remaining highly compatible with the EVM, allowing transactions to run in parallel in a quasi-EVM environment. These independent parallel EVM layer1 chains have their own consensus mechanisms and technical characteristics, but they still aim to be compatible with and expand the EVM ecosystem. This amounts to rebuilding the EVM chain with a "blood transfusion" while still serving the EVM ecosystem.

2) Scalable EVM-compatible layer2 chains, represented by Eclipse and MegaETH, use the layer2 chain's independent consensus and transaction "pre-processing" capabilities to screen and process transaction state before large batches of transactions are posted to the main network, and can also choose the execution layer of any other chain to finalize transaction state. This amounts to abstracting the EVM into a pluggable execution module and selecting the best "execution layer" as needed, thereby achieving parallelism. This type of solution serves the EVM, but it steps outside the EVM framework itself.

3) Equivalent Alt-layer1 chains, represented by Polygon and BSC, have achieved a degree of parallel processing for the EVM, but they have only optimized the algorithm layer, not the deeper consensus and storage layers. Their parallel capability is therefore best regarded as a specific feature rather than a complete solution to the EVM's parallelism problem.

4) Differentiated non-EVM parallel chains, represented by Aptos, Sui, Fuel, etc., are not EVM chains, but they have inherently high concurrent execution capability and achieve compatibility with the EVM environment through middleware or bytecode translation. Starknet, Ethereum's layer2, is a similar case: with its Cairo language and account abstraction it also has parallel capability, but its EVM compatibility requires a dedicated pipeline. Connecting the parallel capability of these non-EVM chains to the EVM generally runs into this same problem.

The above four approaches have different focuses. The layer2 chains with parallel capability emphasize the flexibility of modularly combining "execution layer" chains; EVM-compatible chains highlight the customization of specific features; and the EVM compatibility of other non-EVM chains is aimed more at siphoning Ethereum's liquidity. Only the enhanced EVM layer1 track truly aims to consolidate the EVM ecosystem and rebuild its parallel capability from the bottom up.

So, what is the key to building an enhanced parallel EVM layer1 public chain? How do we rebuild the EVM chain while still serving the EVM ecosystem? There are two key points:

1) State I/O: the ability to read and write state from disk. Since reading and writing data takes time, merely sorting and scheduling transactions cannot fundamentally improve parallel processing. Caching, data sharding, and even distributed storage techniques must be introduced so that, at the level of state storage and retrieval itself, read speed is balanced against the risk of state conflicts (a minimal sketch of conflict detection follows after these two points).

2) Efficient network communication, data synchronization, algorithm optimization, virtual machine enhancement, and optimization of consensus-layer components such as separating compute from I/O tasks. These require comprehensive improvements to the underlying component architecture and its coordination flow, ultimately delivering parallel transaction processing with fast response times, controllable compute consumption, and high accuracy (see the pipeline sketch below).
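To make point 1) concrete, here is a minimal, self-contained Go sketch of read/write-set conflict detection, the core scheduling question behind any parallel EVM: two transactions can run concurrently only if their state accesses do not overlap. The `Tx` type, the up-front declared read/write sets, and the greedy batching are illustrative assumptions of mine, not Artela's (or any chain's) actual scheduler.

```go
package main

import "fmt"

// Tx is a hypothetical transaction with declared state-access sets. Real
// parallel EVM schedulers discover these sets at runtime; this sketch
// assumes they are known up front purely to illustrate conflict detection.
type Tx struct {
	ID       int
	ReadSet  map[string]bool // state keys the tx reads
	WriteSet map[string]bool // state keys the tx writes
}

// conflicts reports whether two transactions touch overlapping state:
// write/write or read/write overlap forces serial ordering.
func conflicts(a, b Tx) bool {
	for k := range a.WriteSet {
		if b.WriteSet[k] || b.ReadSet[k] {
			return true
		}
	}
	for k := range b.WriteSet {
		if a.ReadSet[k] {
			return true
		}
	}
	return false
}

// groupParallel greedily packs transactions into batches whose members are
// mutually conflict-free and can therefore execute concurrently.
func groupParallel(txs []Tx) [][]Tx {
	var batches [][]Tx
	for _, tx := range txs {
		placed := false
		for i := range batches {
			ok := true
			for _, other := range batches[i] {
				if conflicts(tx, other) {
					ok = false
					break
				}
			}
			if ok {
				batches[i] = append(batches[i], tx)
				placed = true
				break
			}
		}
		if !placed {
			batches = append(batches, []Tx{tx})
		}
	}
	return batches
}

func main() {
	txs := []Tx{
		{ID: 1, ReadSet: map[string]bool{"A": true}, WriteSet: map[string]bool{"B": true}},
		{ID: 2, ReadSet: map[string]bool{"C": true}, WriteSet: map[string]bool{"D": true}},
		{ID: 3, ReadSet: map[string]bool{"B": true}, WriteSet: map[string]bool{"E": true}}, // reads what tx1 writes
	}
	for i, batch := range groupParallel(txs) {
		fmt.Printf("batch %d:", i)
		for _, tx := range batch {
			fmt.Printf(" tx%d", tx.ID)
		}
		fmt.Println()
	}
}
```

In this toy run, tx1 and tx2 land in the same batch because they touch disjoint keys, while tx3 is pushed to a second batch because it reads a key that tx1 writes.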
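To illustrate the "separation of compute and I/O tasks" mentioned in point 2), the following Go sketch overlaps a background state-loading stage with an execution stage using a channel, so disk waits for the next transaction hide behind computation of the current one. The `loadState` and `execute` functions simulate latency with sleeps; they are assumptions for illustration, not any chain's actual pipeline.

```go
package main

import (
	"fmt"
	"time"
)

// loadState simulates a slow disk read of the state a transaction needs.
func loadState(txID int) string {
	time.Sleep(10 * time.Millisecond) // stand-in for disk I/O latency
	return fmt.Sprintf("state-for-tx%d", txID)
}

// execute simulates the pure-compute part of running a transaction.
func execute(txID int, state string) string {
	time.Sleep(5 * time.Millisecond) // stand-in for EVM execution cost
	return fmt.Sprintf("tx%d executed with %s", txID, state)
}

type loaded struct {
	txID  int
	state string
}

func main() {
	txIDs := []int{1, 2, 3, 4, 5}
	ch := make(chan loaded, len(txIDs))

	// I/O stage: prefetch state for upcoming transactions in the background.
	go func() {
		for _, id := range txIDs {
			ch <- loaded{txID: id, state: loadState(id)}
		}
		close(ch)
	}()

	// Compute stage: execute transactions as soon as their state arrives,
	// so the disk wait for tx N+1 overlaps with the execution of tx N.
	start := time.Now()
	for l := range ch {
		fmt.Println(execute(l.txID, l.state))
	}
	fmt.Println("total:", time.Since(start))
}
```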

Specifically for the parallel EVM layer1 chain project itself, what technical innovations and framework optimizations need to be made to realize "parallel EVM"?

To achieve "parallel EVM" with resource coordination and optimization starting from the underlying architecture, Artela introduced elastic computing and elastic block space. How to understand this? With elastic computing, the network dynamically allocates and adjusts computing resources according to demand and load; with elastic block space, the block size is dynamically adjusted according to the number of transactions and the data volume in the network. The whole elastic design works like a shopping mall escalator that senses foot traffic and adjusts automatically; a simplified sketch follows.
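As a rough illustration of how elastic block space might behave, the Go snippet below grows the next block's capacity when the previous block was nearly full and shrinks it when demand falls. The thresholds, step sizes, and caps are hypothetical parameters I chose for the sketch, not Artela's actual protocol rules.

```go
package main

import "fmt"

// Hypothetical tuning parameters, for illustration only; real elastic
// block-space rules (thresholds, step sizes, caps) are protocol-specific.
const (
	baseBlockSize = 1_000 // baseline tx capacity per block
	maxBlockSize  = 4_000 // hard upper bound on expansion
	expandAt      = 0.9   // expand when the block is more than 90% full
	shrinkAt      = 0.5   // shrink when the block is less than 50% full
	stepFactor    = 1.25  // multiplicative adjustment per block
)

// nextBlockSize adjusts capacity for the next block based on how full the
// previous block was, mimicking an escalator speeding up under load.
func nextBlockSize(current, usedTxs int) int {
	utilization := float64(usedTxs) / float64(current)
	switch {
	case utilization > expandAt && current < maxBlockSize:
		next := int(float64(current) * stepFactor)
		if next > maxBlockSize {
			next = maxBlockSize
		}
		return next
	case utilization < shrinkAt && current > baseBlockSize:
		next := int(float64(current) / stepFactor)
		if next < baseBlockSize {
			next = baseBlockSize
		}
		return next
	default:
		return current
	}
}

func main() {
	size := baseBlockSize
	for _, pending := range []int{950, 1300, 1800, 400, 300} {
		used := pending
		if used > size {
			used = size // a block cannot include more txs than its capacity
		}
		next := nextBlockSize(size, used)
		fmt.Printf("capacity=%d used=%d -> next capacity=%d\n", size, used, next)
		size = next
	}
}
```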

As mentioned earlier, state I/O performance is critical to parallel EVM. The "parallel" capability that EVM-compatible chains such as Polygon and BSC achieve through algorithms can deliver a 2-4x efficiency improvement, but it is an optimization of the algorithm layer only; the consensus and storage layers are not deeply optimized. What does real deep optimization look like?

To address this, Artela borrowed from database technology and improved both state writing and state reading. For writes, it uses write-ahead logging (WAL): when a state change needs to be written, the change record is first appended to the log and committed to memory, at which point the "write" can be considered complete. This makes the operation asynchronous and avoids an immediate disk write on every state change, reducing disk I/O. Reads are likewise asynchronous: a preloading strategy uses a contract's historical execution records to predict which state the next call will touch and loads it into memory ahead of time, improving the efficiency of disk I/O requests. A minimal sketch of both ideas follows.
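Here is a compact Go sketch of both ideas: writes are acknowledged once recorded in an in-memory log (standing in for a persisted write-ahead log) and flushed to "disk" later, while reads are served from a prefetch cache warmed with keys predicted from prior access. It is a toy model of the mechanism described above, not Artela's storage engine.

```go
package main

import (
	"fmt"
	"sync"
)

// walEntry is one state change recorded before it reaches the state store,
// in the spirit of a write-ahead log (WAL).
type walEntry struct {
	key, value string
}

// StateDB is a toy state store: writes are acknowledged once they are in
// the log; a flusher later applies them to "disk" (a map here). Reads
// consult a prefetch cache first.
type StateDB struct {
	mu       sync.Mutex
	log      []walEntry        // in-memory WAL (a real one is persisted)
	disk     map[string]string // stand-in for slow on-disk state
	prefetch map[string]string // keys preloaded from predicted access
}

func NewStateDB() *StateDB {
	return &StateDB{disk: map[string]string{}, prefetch: map[string]string{}}
}

// Write appends the change to the log and returns immediately; the disk
// write is deferred, which keeps the critical path off disk I/O.
func (db *StateDB) Write(key, value string) {
	db.mu.Lock()
	defer db.mu.Unlock()
	db.log = append(db.log, walEntry{key, value})
}

// Flush applies logged changes to disk in the background or at commit.
func (db *StateDB) Flush() {
	db.mu.Lock()
	defer db.mu.Unlock()
	for _, e := range db.log {
		db.disk[e.key] = e.value
	}
	db.log = nil
}

// Preload warms the cache with keys a contract is predicted to touch,
// e.g. based on its historical execution records.
func (db *StateDB) Preload(keys []string) {
	db.mu.Lock()
	defer db.mu.Unlock()
	for _, k := range keys {
		if v, ok := db.disk[k]; ok {
			db.prefetch[k] = v
		}
	}
}

// Read prefers un-flushed log entries, then the prefetch cache, then disk.
func (db *StateDB) Read(key string) (string, bool) {
	db.mu.Lock()
	defer db.mu.Unlock()
	for i := len(db.log) - 1; i >= 0; i-- {
		if db.log[i].key == key {
			return db.log[i].value, true
		}
	}
	if v, ok := db.prefetch[key]; ok {
		return v, true
	}
	v, ok := db.disk[key]
	return v, ok
}

func main() {
	db := NewStateDB()
	db.Write("balance/alice", "100") // acknowledged before hitting disk
	db.Flush()                       // deferred "disk" write
	db.Preload([]string{"balance/alice"})
	v, _ := db.Read("balance/alice") // served from the prefetch cache
	fmt.Println(v)
}
```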

In short, this trades memory space for execution time, fundamentally improving the parallel processing capability of the EVM virtual machine and easing the state-conflict problem at its root.

In addition, Artela introduces Aspect modular programming to better manage complexity and improve development efficiency: WASM bytecode is introduced to enhance programming flexibility, and Aspects get access to underlying APIs while remaining securely isolated from the execution layer. This lets developers efficiently develop, debug, and deploy smart contracts in the Artela environment, activating the community's capacity for customized extensions. In particular, developers are also motivated to optimize their smart contract code for parallelism: to reduce the probability of state conflicts, each contract's calling logic and algorithms are especially critical. A hypothetical sketch of the hook-style extension idea appears below.
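For intuition only, here is a deliberately simplified Go sketch of the hook-style extension idea: extension code runs before and after a transaction and can only reach the node through a restricted host API. In Artela, such extensions (Aspects) are compiled to WASM; every interface and method name below is invented for illustration and does not reflect the actual Aspect SDK.

```go
package main

import (
	"fmt"
	"strconv"
)

// HostAPI is the restricted set of calls a module is allowed to make into
// the node; limiting modules to this surface is one way to keep extension
// code isolated from the execution layer. (Hypothetical interface.)
type HostAPI interface {
	ReadState(key string) string
	Log(msg string)
}

// TxHook is extension code that runs before and after a transaction. In a
// real system the module would be WASM bytecode loaded at runtime; here it
// is a plain Go interface to keep the sketch short.
type TxHook interface {
	PreTx(host HostAPI, txID int) error
	PostTx(host HostAPI, txID int)
}

// nodeHost is a toy host implementation backed by an in-memory state map.
type nodeHost struct{ state map[string]string }

func (h nodeHost) ReadState(key string) string { return h.state[key] }
func (h nodeHost) Log(msg string)              { fmt.Println("[hook]", msg) }

// rateLimitHook is an example extension: it rejects a transaction when a
// (hypothetical) per-account counter kept in state exceeds a limit.
type rateLimitHook struct{ limit int }

func (r rateLimitHook) PreTx(host HostAPI, txID int) error {
	count, _ := strconv.Atoi(host.ReadState("tx-count/alice"))
	if count >= r.limit {
		return fmt.Errorf("tx %d rejected: rate limit reached", txID)
	}
	return nil
}

func (r rateLimitHook) PostTx(host HostAPI, txID int) {
	host.Log(fmt.Sprintf("tx %d finished", txID))
}

func main() {
	host := nodeHost{state: map[string]string{"tx-count/alice": "3"}}
	var hook TxHook = rateLimitHook{limit: 5}

	for txID := 1; txID <= 2; txID++ {
		if err := hook.PreTx(host, txID); err != nil {
			fmt.Println(err)
			continue
		}
		// ... execute the transaction here ...
		hook.PostTx(host, txID)
	}
}
```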

That's all for the technical breakdown.

It is not hard to see that "parallel EVM" is essentially about optimizing how transaction state is executed. @monad_xyz claims it can reach 10,000 transactions per second; its technical core amounts to a dedicated database, developer-friendliness, deferred-execution consensus, superscalar pipelining, and so on to process transactions at scale in parallel, which is not fundamentally different from the logic behind Artela's elastic computing and asynchronous I/O.

What I really want to convey is that this type of high-performance parallel EVM chain is the result of absorbing web2 products and engineering strengths: it borrows, piece by piece, the proven techniques that mature web2 applications use to handle heavy traffic loads.

Looking toward the distant future of mass adoption, "Parallel EVM" is indeed the basic infrastructure the EVM ecosystem needs to face the broader web2 market, and its bullish reception in the capital markets is reasonable.
