AI, DePIN, and the Solana Ecosystem: A Triple Halo for IO.NET as Its Token Launch Approaches

Mint Ventures
5 months ago
This article is approximately 5,228 words; reading the entire article takes about 7 minutes.
A look at the product mechanics, background, and listing-valuation deduction of top-narrative project IO.NET.

Original author: Alex Xu, Research Partner at Mint Ventures

Introduction

In my previous article, I mentioned that compared with the previous two cycles, this crypto bull market lacks sufficiently influential new business and asset narratives. AI is one of the few new narratives in this round of the Web3 field. In this article, the author will try to sort out his thoughts on the following two issues, using this year's hot AI project IO.NET as the starting point:

  • The commercial necessity of AI+Web3

  • The necessity and challenges of distributed computing services

Secondly, the author will sort out the key information of the IO.NET project, a representative project of AI distributed computing power, including product logic, competing products and project background, and deduce the valuation of the project.

Part of this article's thinking on the combination of AI and Web3 was inspired by The Real Merge, written by Michael Rinko, a researcher at Delphi Digital. Some of the views in this article digest and quote that article, and readers are recommended to read the original.

This article reflects the author's thinking as of the time of publication. It may change in the future, and the views are highly subjective. There may also be errors in facts, data, and reasoning. Please do not use it as an investment reference. Comments and corrections from peers are welcome.

The following is the main text.

1. Business logic: the combination of AI and Web3

1.1 2023: The new “miracle year” created by AI

Looking back at the history of human development, once technology achieves a breakthrough, earth-shaking changes follow, from individuals' daily lives to industrial structures to human civilization as a whole.

There are two important years in human history, 1666 and 1905, which are now known as the two miracle years in the history of science and technology.

1666 is regarded as a year of miracles because Newton's scientific achievements were concentrated in that year. In that single year, he opened up the physical branch of optics, founded the mathematical branch of calculus, and derived the law of gravity, a basic law of modern natural science. Each of these became a foundational contribution to the development of human science over the following century, greatly accelerating the progress of science as a whole.

The second miracle year was 1905. Einstein, then only 26 years old, published four consecutive papers in the Annals of Physics, covering the photoelectric effect (laying the foundation for quantum mechanics), Brownian motion (which became an important reference for analyzing random processes), special relativity, and the mass-energy equation (the famous formula E=mc²). In the judgment of later generations, each of these four papers exceeded the average level of a Nobel Prize in Physics (Einstein himself won the Nobel Prize for the photoelectric-effect paper), and the historical progress of human civilization was once again advanced by several large steps.

The year 2023 that has just passed will most likely be called another “miracle year” because of ChatGPT.

We regard 2023 as a miracle year in the history of human science and technology not only because of GPT's huge progress in natural language understanding and generation, but also because, from GPT's evolution, humans have identified the scaling law governing large language model capabilities: by expanding model parameters and training data, the model's capabilities can be improved exponentially, and this process shows no short-term bottleneck (as long as computing power is sufficient).

This capability extends far beyond understanding language and generating dialogue; it can also be widely applied across scientific and technological fields. Take the application of large language models in biology as an example:

  • In 2018, Nobel laureate in Chemistry Frances Arnold said at the award ceremony: "Today we can read, write and edit any DNA sequence in practical applications, but we cannot yet compose it." Five years after her speech, in 2023, researchers from Stanford University and Silicon Valley AI startup Salesforce Research published a paper in Nature Biotechnology. Using a large language model fine-tuned from GPT, they generated 1 million new protein sequences and found two proteins with completely different structures that both have bactericidal capability, a potential solution for fighting bacteria beyond antibiotics. In other words: with the help of AI, the bottleneck of protein "creation" has been broken.

  • Earlier, the AlphaFold algorithm predicted the structures of almost all 214 million known proteins on Earth within 18 months, a result hundreds of times the cumulative output of all human structural biologists.

With various models based on AI, everything from hard technologies such as biotechnology, materials science, and drug research and development to humanities fields such as law and art will usher in earth-shaking changes, and 2023 is the first year of all this.

We all know that humankind’s ability to create wealth has grown exponentially in the past century, and the rapid maturity of AI technology will inevitably further accelerate this process.


Global GDP trend chart, data source: World Bank

1.2 The combination of AI and Crypto

To essentially understand the necessity of combining AI with Crypto, we can start from the complementary characteristics of the two.

Complementary features of AI and Crypto

AI has three attributes:

  • Randomness: AI is random; behind its content-production mechanism is a black box that is difficult to reproduce and inspect, so its outputs are also random.

  • Resource intensity: AI is a resource-intensive industry that requires a great deal of energy, chips, and computing power.

  • Human-like intelligence: AI will (soon) be able to pass the Turing test, and thereafter, humans will be indistinguishable from machines*

※ On October 30, 2023, a research team at the University of California, San Diego released Turing-test results for GPT-3.5 and GPT-4.0 (testing report). GPT-4.0 scored 41%, only 9 percentage points below the 50% pass line; humans in the same test scored 63%. The score here means the percentage of participants who judged their chat partner to be a real person. Above 50% means at least half of the testers believed they were talking to a human rather than a machine, which counts as passing the Turing test.

While AI creates new leap-forward productivity for mankind, its three attributes also bring huge challenges to human society, namely:

  • How to verify and control the randomness of AI so that randomness becomes an advantage rather than a flaw

  • How to meet the huge energy and computing power gap required by AI

  • How to tell the difference between humans and machines

The characteristics of Crypto and the blockchain economy may be the right medicine to solve the challenges brought by AI. The encryption economy has the following three characteristics:

  • Determinism: The business runs on blockchain, code, and smart contracts. The rules and boundaries are clear, and given inputs produce highly deterministic outputs.

  • Efficient resource allocation: The crypto economy has built a huge global free market in which resources are priced, raised, and circulated very quickly. Thanks to tokens, incentives can accelerate the matching of market supply and demand and speed up the arrival of the network's tipping point.

  • Trustlessness: The ledger is public and the code is open source, so everyone can easily verify it, yielding a trustless system; ZK technology enables verification while avoiding privacy exposure.

Next, three examples are used to illustrate the complementarity of AI and cryptoeconomics.

Example A: Solving randomness, cryptoeconomic-based AI agent

AI Agent is an artificial intelligence program responsible for performing work for humans based on human will (representative projects include Fetch.AI). Let’s say we want our AI agent to process a financial transaction, such as “Buy $1,000 in BTC.” AI agents may face two situations:

Scenario 1: It connects with traditional financial institutions (such as BlackRock) to purchase a BTC ETF. It faces a large number of adaptation problems between AI agents and centralized institutions, such as KYC, information review, login, and identity verification. It's still very troublesome at the moment.

Scenario 2: It runs on the native crypto economy, and the situation becomes much simpler. It directly uses your account to sign and place an order through Uniswap or an aggregated trading platform, receiving WBTC (or another wrapped form of BTC); the whole process is quick and easy. In fact, this is what various trading bots already do: they have effectively played the role of a junior AI agent, albeit focused on trading. In the future, as AI is integrated and evolves, trading bots will inevitably be able to execute more complex trading intentions. For example: track 100 smart-money addresses on chain, analyze their trading strategies and success rates, use 10% of the funds in my address to perform similar transactions within a week, stop if results are poor, and summarize the possible reasons for failure.

AI will run better in blockchain systems, essentially because of the clarity of cryptoeconomic rules and permissionless access. Performing tasks under well-defined rules also reduces the potential risks brought by AI's randomness. For example, AI's performance in chess, card games, and video games has surpassed humans because these are closed sandboxes with clear rules. Progress in autonomous driving is relatively slow because the open external environment poses greater challenges, and we are far less tolerant of randomness in how AI handles problems there.

Example B: Shaping resources and gathering resources through token incentives

The current total computing power of the global computing power network behind BTC (Hashrate: 576.70 EH/s) exceeds the comprehensive computing power of any country’s supercomputers. Its development motivation comes from simple and fair network incentives.


BTC network computing power trend, source: https://www.coinwarz.com/

In addition, DePIN projects, including Mobile, are also trying to use token incentives to shape a two-sided market on both sides of supply and demand to achieve network effects. IO.NET, which this article will focus on next, is a platform designed to gather AI computing power. It is hoped that through the token model, more AI computing power potential will be stimulated.

Example C: Open source code, introducing ZK, distinguishing humans and machines while protecting privacy

Worldcoin, a Web3 project co-founded by OpenAI founder Sam Altman, uses the hardware device Orb to generate exclusive, anonymous hash values from human iris biometrics with ZK technology, verifying identity and distinguishing humans from machines. In early March this year, the Web3 art project Drip began using Worldcoin IDs to verify real users and distribute rewards.


In addition, Worldcoin has also recently open sourced the program code of its iris hardware Orb to provide guarantees for the security and privacy of user biometrics.


Generally speaking, thanks to the determinism of code and cryptography, the resource circulation and fundraising advantages brought by permissionless access and token mechanisms, and the trustlessness grounded in open-source code and public ledgers, the crypto economy has become an important potential solution to the challenges AI poses to human society.

Among these challenges, the most urgent one with the strongest commercial demand is AI products' extreme hunger for computing resources: the huge demand for chips and computing power.

This is also the main reason why the growth of distributed computing power projects exceeds the overall AI track in this bull market cycle.

1.3 The Business Necessity of Distributed Computing Power

AI requires massive computing resources, both for training models and performing inference.

In the training practice of large language models, one fact has been confirmed: once the scale of data and parameters is large enough, large language models exhibit emergent capabilities that were not present before. Behind the exponential jump in capability from each generation of GPT to the next is an exponential increase in the amount of computation required for training.

Research by DeepMind and Stanford University shows that across different tasks (arithmetic, Persian question answering, natural language understanding, etc.), as large language models scale up their parameter count during training (and with it the training compute), task performance remains close to random-answer level until training compute reaches roughly 10^22 FLOPs (here, total floating-point operations used in training). Once the scale exceeds that critical threshold, task performance improves sharply, no matter which language model.


Source: Emergent Abilities of Large Language Models


Source: Emergent Abilities of Large Language Models

It is this validated "scale brings miracles" law of computing power that led OpenAI founder Sam Altman to propose raising US$7 trillion to build an advanced chip fab 10 times the current size of TSMC (that part is expected to cost US$1.5 trillion), with the remaining funds used for chip production and model training.

Beyond model training, the inference process itself also requires a lot of computing power (though less than training), so hunger for chips and computing power has become the norm for participants across the AI track.

Compared with centralized AI computing power providers such as Amazon Web Services, Google Cloud Platform, Microsoft Azure, etc., the main value propositions of distributed AI computing include:

  • Accessibility: Getting access to computing chips using cloud services like AWS, GCP, or Azure often takes weeks, and popular GPU models are often out of stock. In addition, in order to obtain computing power, consumers often need to sign long-term, inflexible contracts with these large companies. The distributed computing platform can provide flexible hardware selection and greater accessibility.

  • Low pricing: Due to the use of idle chips and the token subsidies provided by the network protocol party to the chip and computing power suppliers, the distributed computing power network may be able to provide cheaper computing power.

  • Censorship resistance: At present, cutting-edge computing chips and their supply are monopolized by large technology companies, and governments, represented by the United States, are increasing scrutiny of AI computing services. Access to AI computing power that is distributed, flexible, and freely obtainable is gradually becoming an explicit demand, and this is the core value proposition of Web3-based computing power platforms.

If fossil energy was the blood of the industrial age, computing power may be the blood of the new digital era opened by AI, and its supply will become the infrastructure of the AI era. Just as stablecoins have become a thriving side branch of fiat currency in the Web3 era, will the distributed computing power market become a side branch of the rapidly growing AI computing power market?

Since this is still a fairly early market, everything remains to be seen. However, the following factors may stimulate the narrative or market adoption of distributed computing power:

  • GPU supply and demand continue to be tight. The continued tight supply of GPUs may push some developers to try distributed computing platforms.

  • Regulatory expansion. If you want to obtain AI computing power services from a large cloud computing power platform, you must go through KYC and layers of reviews. This may instead promote the adoption of distributed computing platforms, especially in areas subject to restrictions and sanctions.

  • Token price stimulus. Rising token prices during the bull market cycle increase the platform's subsidy value to the GPU supply side, attracting more suppliers, increasing the market's size, and reducing the actual price consumers pay.

But at the same time, the challenges of distributed computing platforms are also quite obvious:

  • Technical and engineering challenges

  • Work verification problem: Due to the hierarchical structure of deep-learning computation, each layer's output serves as the next layer's input, so verifying the validity of a computation requires executing all prior work; it cannot be verified simply and efficiently. To solve this, distributed computing platforms need to develop new algorithms or use approximate verification techniques that provide probabilistic guarantees of result correctness rather than absolute certainty.

  • Parallelization problem: A distributed computing platform aggregates a long-tail supply of chips, which means the computing power any single device provides is relatively limited. A single chip supplier can hardly complete an AI model's training or inference task independently in a short time, so parallelization must be used to decompose and distribute tasks to shorten total completion time. Parallelization inevitably faces a series of problems: how to decompose tasks (especially complex deep-learning tasks), data dependencies, and the additional communication costs between devices.

  • Privacy protection issue: How to ensure that the purchaser’s data and models are not exposed to the recipient of the task?

  • Regulatory compliance challenges

  • The permissionless nature of the two-sided supply-and-procurement market can be a selling point that attracts some customers; on the other hand, as AI regulatory norms mature, it may become a target of government rectification. Some GPU suppliers also worry about whether the computing resources they rent out are being provided to sanctioned businesses or individuals.
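To make the parallelization challenge above concrete, here is a minimal, hypothetical sketch of just one sub-problem: splitting a batch of tasks across heterogeneous long-tail devices in proportion to their throughput. It deliberately ignores the harder issues the list names (decomposing deep-learning tasks, data dependencies, and inter-device communication costs); `split_batch` and the relative throughput figures are the author's illustrations, not part of any real platform's API.

```python
# Hypothetical sketch: proportional work splitting across heterogeneous devices.
# Real distributed-training schedulers must additionally handle task
# decomposition, data dependencies, and communication overhead.

def split_batch(batch_size: int, throughputs: list[float]) -> list[int]:
    """Assign each device a share proportional to its throughput.

    Any rounding remainder goes to the fastest device so the shares
    always sum to batch_size exactly.
    """
    total = sum(throughputs)
    shares = [int(batch_size * t / total) for t in throughputs]
    shares[throughputs.index(max(throughputs))] += batch_size - sum(shares)
    return shares

# Three devices with relative speeds 3 : 2 : 1 sharing 1,000 inference requests.
print(split_batch(1000, [3.0, 2.0, 1.0]))  # → [501, 333, 166]
```

Even this toy version shows why long-tail supply is awkward: the slowest device still gates the completion time of its shard, which is one reason real platforms cluster similar hardware together.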

In general, most consumers of distributed computing platforms are professional developers or small and medium-sized institutions. Unlike crypto investors buying cryptocurrencies and NFTs, these users demand high stability and sustainability from the services a protocol provides, and price is not necessarily the main factor in their decisions. At present, distributed computing platforms still have a long way to go to win recognition from such users.

Next, we sort out and analyze the project information of IO.NET, a new distributed computing project in this cycle, and, based on current AI projects and distributed computing projects in the same track, estimate its possible post-listing valuation range.

2. Distributed AI computing platform: IO.NET

2.1 Project positioning

IO.NET is a decentralized computing network that builds a two-sided market around chips. The supply side is chip computing power distributed around the world (mainly GPUs, but also CPUs, Apple's iGPUs, etc.); the demand side consists of AI engineers hoping to complete AI model training or inference tasks.

On the official website of IO.NET, it writes:

Our Mission

Putting together one million GPUs in a DePIN – decentralized physical infrastructure network.

That is, its mission is to integrate one million GPUs into its DePIN network.

Compared with existing cloud AI computing power service providers, its main selling points emphasized are:

  • Flexible combination: AI engineers can freely select and combine the chips they need to form a cluster to complete their own computing tasks

  • Rapid deployment: No need for weeks of approval and waiting (currently the situation with centralized vendors such as AWS), deployment can be completed and tasks can be started within tens of seconds

  • Low-priced services: The cost of services is 90% lower than that of mainstream manufacturers

In addition, IO.NET also plans to launch services such as AI model stores in the future.

2.2 Product mechanism and business data

  • Product mechanism and deployment experience

Like Amazon Cloud, Google Cloud, and Alibaba Cloud, the computing service provided by IO.NET is called IO Cloud. IO Cloud is a distributed, decentralized chip network capable of executing Python-based machine learning code and running AI and machine learning programs.

The basic business module of IO Cloud is called Clusters. Clusters are a group of GPUs that can self-coordinate to complete computing tasks. Artificial intelligence engineers can customize the desired cluster according to their own needs.

IO.NET's product interface is very user-friendly: to deploy your own chip cluster for AI computing tasks, go to its Clusters product page and configure the chip cluster you want as needed.


Page information: https://cloud.io.net/cloud/clusters/create-cluster, the same below

First, you need to choose your task scenario. There are currently three types to choose from:

  • General: Provides a more general environment, suitable for early project stages where specific resource requirements are uncertain.

  • Train: A cluster designed for training and fine-tuning of machine learning models. This option can provide more GPU resources, higher memory capacity, and/or faster network connections to handle these intensive computing tasks.

  • Inference: A cluster designed for low-latency inference and heavy-load workloads. In the context of machine learning, inference refers to using a trained model to make predictions or analyze new data and provide feedback. Therefore, this option will focus on optimizing latency and throughput to support real-time or near-real-time data processing needs.

Then, you need to choose the supplier of the chip cluster. IO.NET has reached cooperation with Render Network and Filecoin's miner network, so users can choose chips from IO.NET or from the other two networks as the supplier of their computing cluster; IO.NET effectively plays the role of an aggregator (though as of the time of writing, the Filecoin service is temporarily offline). It is worth noting that, per the page, the number of available GPUs on IO.NET is currently 200,000+, while Render Network offers 3,700+.

Next comes cluster hardware selection. Currently, IO.NET lists only GPUs as available hardware, excluding CPUs and Apple iGPUs (M1, M2, etc.), and the GPUs are mainly NVIDIA products.


Among the officially listed available GPU options, according to the author's test that day, the number of available GPUs on the IO.NET network was 206,001. The largest share is the GeForce RTX 4090 (45,250 units), followed by the GeForce RTX 3090 Ti (30,779 units).

In addition, the A100-SXM4-80GB chip (market price $15,000+), which is more efficient for AI workloads such as machine learning, deep learning, and scientific computing, had 7,965 units online.


Nvidia's H100 80GB HBM3 card (market price $40,000+), designed for AI from the hardware level up, delivers 3.3x the training performance and 4.5x the inference performance of the A100; 86 units were actually online.


After selecting the hardware type of the cluster, the user also needs to select parameters such as the cluster region, communication speed, number and time of rented GPUs.

Finally, IO.NET provides a bill based on your selections. Take the author's cluster configuration as an example:

  • General task scenario

  • 16 A100-SXM4-80GB chips

  • Maximum connection speed (Ultra High Speed)

  • Location United States

  • Rental period is 1 week

The total bill is $3,311.6, and the hourly rental price per card is $1.232.


The hourly per-card rental prices for the A100-SXM4-80GB on Amazon Cloud, Google Cloud, and Microsoft Azure are $5.12, $5.07, and $3.67 respectively (data source: https://cloud-gpus.com/; actual prices vary with contract details).

So on price alone, IO.NET's chip computing power is indeed much cheaper than mainstream vendors', its supply combinations and procurement are very flexible, and the product is easy to get started with.
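As a sanity check, the quoted bill is consistent with simple arithmetic. The sketch below uses only the figures quoted above (16 cards, one week, $1.232/card-hour on IO.NET vs. $5.12 on AWS) to reproduce the weekly total and the per-hour discount.

```python
# Reproduce the cluster bill quoted above (all figures from the article).
CARDS = 16            # A100-SXM4-80GB GPUs in the cluster
HOURS = 7 * 24        # 1-week rental
IO_NET_RATE = 1.232   # $/card-hour on IO.NET
AWS_RATE = 5.12       # $/card-hour on Amazon Cloud (cloud-gpus.com)

io_net_bill = CARDS * HOURS * IO_NET_RATE
aws_bill = CARDS * HOURS * AWS_RATE
savings = 1 - IO_NET_RATE / AWS_RATE

print(f"IO.NET weekly bill: ${io_net_bill:,.1f}")   # ≈ $3,311.6, matching the quote
print(f"AWS weekly bill:    ${aws_bill:,.1f}")
print(f"Per-hour discount vs AWS: {savings:.0%}")   # ≈ 76%
```

Note that the ~76% discount versus AWS on-demand pricing is below the "90% cheaper" headline claim; the gap presumably depends on chip model and contract terms.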

Business conditions

  • Supply side situation

As of April 4 this year, according to official data, IO.NET's total GPU supply was 371,027 and its CPU supply 42,321. In addition, its partner Render Network contributes 9,997 GPUs and 776 CPUs to the network's supply.


Data source: https://cloud.io.net/explorer/home, the same below

At the time of writing, 214,387 of the GPUs connected to IO.NET were online, an online rate of 57.8%; the online rate for GPUs from Render Network was 45.1%.

What does the above supply-side data mean?

For comparison, we introduce Akash Network, a longer-established distributed computing project.

Akash Network launched its mainnet as early as 2020, initially focusing on distributed services for CPU and storage. In June 2023, it launched the test network of GPU services, and launched the main network of GPU distributed computing power in September of the same year.


Data source: https://stats.akash.network/provider-graph/graphics-gpu

According to Akash’s official data, since the launch of its GPU network, although the supply side has continued to grow, the total number of GPU connections so far is only 365.

Judging from the supply of GPUs, IO.NET is several orders of magnitude higher than Akash Network, and is already the largest supply network in the distributed GPU computing power track.

  • Demand side situation


On the demand side, however, IO.NET is still in the early stage of market cultivation, and the actual volume of computing tasks executed on IO.NET is small. Most online GPUs have a task load of 0%; only four chip models (A100 PCIe 80GB K8S, RTX A6000 K8S, RTX A4000 K8S, and H100 80GB HBM3) are processing tasks, and except for the A100 PCIe 80GB K8S, the other three run at less than 20% load.

The officially disclosed network stress value that day was 0%, meaning most of the chip supply is on standby online.

In terms of network fees, IO.NET has generated $586,029 in cumulative service fees, with $3,200 in the past day.


Data source: https://cloud.io.net/explorer/clusters

The scale of these network settlement fees, in both cumulative total and daily volume, is in the same order of magnitude as Akash's. Most of Akash's network revenue, however, comes from its CPU side, where supply exceeds 20,000 units.


Data source: https://stats.akash.network/

IO.NET has also disclosed business data on the AI inference tasks processed by the network: so far it has processed and verified more than 230,000 inference tasks, though most of this volume comes from BC8.AI, a project sponsored by IO.NET.


Data source: https://cloud.io.net/explorer/inferences

Judging from current business data, IO.NET's supply side is expanding smoothly: stimulated by airdrop expectations and the community campaign codenamed Ignition, it has quickly gathered a large amount of AI chip computing power. The demand side, by contrast, is still early, and organic demand is currently insufficient. Whether the shortfall is because consumer-side expansion has not yet begun, or because the current service experience is not yet stable enough for large-scale adoption, remains to be evaluated.

However, considering that the gap in AI computing power will be hard to fill in the short term, many AI engineers and projects are looking for alternatives and may take an interest in decentralized providers. Moreover, IO.NET has not yet rolled out demand-side incentives; with activity-driven stimulus, gradual improvement of the product experience, and the subsequent matching of supply and demand, its prospects are still worth looking forward to.

2.3 Team background and financing situation

  • Team situation

The core team of IO.NET started out in quantitative trading. Before June 2022, they focused on developing institutional-grade quantitative trading systems for stocks and crypto assets. Driven by the system backend's demand for computing power, the team began exploring the possibilities of decentralized computing and eventually settled on the specific problem of reducing the cost of GPU computing services.

Founder & CEO: Ahmad Shadid

Ahmad Shadid worked in quantitative trading and financial engineering before IO.NET, and is also a volunteer at the Ethereum Foundation.

CMO & Chief Strategy Officer: Garrison Yang

Garrison Yang officially joined IO.NET in March of this year. He was previously the VP of strategy and growth at Avalanche and graduated from the University of California, Santa Barbara.

COO: Tory Green

Tory Green is the chief operating officer of io.net. He was previously the chief operating officer of Hum Capital and the director of corporate development and strategy of Fox Mobile Group. He graduated from Stanford.

Judging from IO.NET's LinkedIn information, the team is headquartered in New York, USA, with a branch in San Francisco; the current team size exceeds 50 people.

  • Financing situation

IO.NET has disclosed only one round of financing so far: the Series A completed in March this year at a US$1 billion valuation, raising a total of US$30 million, led by Hack VC, with participation from Multicoin Capital, Delphi Digital, Foresight Ventures, Animoca Brands, Continue Capital, Solana Ventures, Aptos, LongHash Ventures, OKX Ventures, Amber Group, SevenX Ventures, and ArkStream Capital, among others.

It is worth mentioning that, perhaps because of the investment from the Aptos Foundation, the BC8.AI project, which originally settled and kept its accounts on Solana, has migrated to Aptos, another high-performance L1.

2.4 Valuation calculation

According to earlier statements by founder and CEO Ahmad Shadid, IO.NET will launch its token at the end of April.

IO.NET has two comparable projects that can serve as valuation references: Render Network and Akash Network, both representative distributed computing projects.

We can deduce IO.NET's market-value range in two ways: (1) the price-to-sales ratio, i.e. market value / revenue; and (2) the ratio of market value to the number of chips in the network.

Let’s first look at the valuation deduction based on the price-to-sales ratio:


From the price-to-sales perspective, Akash can serve as the lower bound of IO.NET's valuation range and Render as the high-end reference, giving an FDV range of US$1.67 billion to US$5.93 billion.

However, considering that IO.NET is a newer project with a hotter narrative, a smaller initial circulating market cap, and a larger current supply-side scale, the possibility of its FDV exceeding Render's is not small.

Let's look at valuation from another angle: the "market-to-core" ratio.

In a market where demand for AI computing power exceeds supply, the most important element of a distributed AI computing network is the scale of its GPU supply side. We can therefore make a horizontal comparison using the market-to-core ratio, the ratio of a project's total market value to the number of chips in its network, to deduce IO.NET's possible valuation range for readers' reference.


Calculated by market-to-core ratio, using Render Network's ratio as the upper bound and Akash Network's as the lower bound, IO.NET's FDV range is US$20.6 billion to US$197.5 billion.
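The two methods reduce to simple formulas, sketched below. The reference multiples used in the example (a 2,000x P/S multiple and $100,000 of market value per chip) are hypothetical placeholders chosen by the editor for illustration; the article's actual bounds come from Render and Akash data not reproduced in the text.

```python
# Sketch of the two comparable-based valuation methods described above.
# Reference multiples here are HYPOTHETICAL, not Render's or Akash's actual figures.

def fdv_by_price_to_sales(annual_revenue_usd: float, reference_ps: float) -> float:
    """Method 1: FDV = annualized protocol revenue x a comparable's P/S multiple."""
    return annual_revenue_usd * reference_ps

def fdv_by_market_to_core(chip_count: int, value_per_chip_usd: float) -> float:
    """Method 2: FDV = network chip count x a comparable's market value per chip."""
    return chip_count * value_per_chip_usd

# Hypothetical example: annualize the ~$3,200/day fee figure quoted earlier,
# and apply an assumed 2,000x P/S multiple.
rev = 3200 * 365
print(f"P/S-based FDV at 2,000x: ${fdv_by_price_to_sales(rev, 2000) / 1e9:.2f}B")

# Market-to-core with the ~206,000 GPUs observed and an assumed $100k per chip.
print(f"Market-to-core FDV: ${fdv_by_market_to_core(206_000, 100_000) / 1e9:.1f}B")
```

The contrast between the two methods is the point: method 1 anchors on (currently tiny) revenue, method 2 on (currently large, incentive-driven) supply, which is why the two ranges in the article differ by an order of magnitude.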

I believe that readers who are optimistic about the IO.NET project will think that this is an extremely optimistic market value calculation.

We also need to take into account that the huge number of IO.NET chips currently online has been stimulated by airdrop expectations and incentive activities; the actual supply-side online count after the project officially launches still needs to be observed.

Therefore, in general, valuation calculations from the perspective of price-to-sales ratio may be more informative.

As a project wearing the triple halo of AI, DePIN, and the Solana ecosystem, we will wait and see how IO.NET's market value performs after launch.

3. Reference information

Delphi Digital: The Real Merge

Galaxy: Understanding the Intersection of Crypto and AI

