FMG Research Report: Three Problems with AI and DePIN's Solutions

Feb 02, 2024
In the AI era, product competition depends heavily on stable computing power and data. Combined with Web3, these resources can help small and medium-sized AI startups overtake the traditional giants. By integrating token economics with hardware devices, the DePIN ecosystem addresses AI products' need for computing power and user reach.

TL;DR

  • In the AI era, product competition cannot be separated from the resource side (computing power, data, etc.), and especially from stable resource support.
  • Model training and iteration also require a huge number of user endpoints (IPs) to help feed data and produce a qualitative change in model performance.
  • Integration with Web3 can help small and medium-sized AI startups surpass the traditional AI giants.
  • For the DePIN ecosystem, resources such as computing power and bandwidth determine a project's lower limit (pure aggregation of computing power alone does not create a moat); its upper limit is determined by dimensions such as the application and deep optimization of AI models (as with BitTensor), specialization (Render, Hivemapper), and effective utilization of data.
  • In the AI+DePIN context, model inference and fine-tuning, as well as the mobile AI model market, deserve attention.
AI market analysis & three questions

According to statistics, from September 2022, the eve of ChatGPT's birth, to August 2023, the world's top 50 AI products generated more than 24 billion visits, growing by an average of 236.3 million visits per month.

The prosperity of AI products is accompanied by an increasing reliance on computing power.

Source: “Language Models are Few-Shot Learners”

A paper from the University of Massachusetts Amherst states that "training an artificial intelligence model can emit as much carbon as five cars over its lifetime." However, this analysis covered only a single training run; as a model is improved through repeated training, energy usage grows substantially.

The latest language models contain billions or even trillions of weights. One popular model, GPT-3, has 175 billion machine learning parameters. Training it on NVIDIA A100s takes 1,024 GPUs, 34 days, and about 4.6 million dollars.
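A quick back-of-the-envelope check of those figures implies the per-GPU-hour rate they assume. This is illustrative arithmetic only, under the assumption of round-the-clock utilization of every GPU:

```python
# Back-of-the-envelope check of the GPT-3 training figures above
# (illustrative only; assumes 24/7 utilization of every A100).
gpus = 1024
days = 34
total_cost_usd = 4_600_000

gpu_hours = gpus * days * 24                    # total A100-hours consumed
cost_per_gpu_hour = total_cost_usd / gpu_hours  # ≈ $5.5 per GPU-hour

print(f"{gpu_hours:,} GPU-hours")
print(f"${cost_per_gpu_hour:.2f} per GPU-hour")
```

Roughly 835,000 GPU-hours at about $5.5 each, which is why even modest price differences between computing-power markets matter at training scale.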

In the AI era, competition among products has gradually evolved into a resource war, centered above all on computing power.

Source: AI is harming our planet: addressing AI's staggering energy cost

This raises three questions. First, does an AI product have sufficient resources (computing power, bandwidth, etc.), and in particular stable resource support? Achieving that reliability calls for decentralized computing power. In the traditional market, chip manufacturers hold a natural advantage and can raise prices sharply, thanks to the supply-demand gap for chips and the trade barriers built on policy and ideology. For instance, the price of the NVIDIA H100 rose from $36,000 in April 2023 to $50,000, further increasing costs for AI model training teams.

Second, meeting resource-side conditions solves an AI project's hardware requirements, but model training and iteration also need a large base of user endpoints (IPs) to help feed data. Once a model's scale exceeds a certain threshold, its performance shows breakthrough growth across different tasks.

Third, small and medium-sized AI startups find it difficult to overtake the incumbents. The monopoly on computing power in the traditional financial market also leads to monopolistic AI model solutions. Large AI model makers, represented by OpenAI and Google DeepMind, are deepening their moats, so smaller AI teams must seek more differentiated competition.

All three questions can find answers in Web3. In fact, the integration of AI and Web3 has a long history, and the ecosystem is relatively prosperous.

The following image shows some of the tracks and projects of the AI+Web3 ecosystem, compiled by Future Money Group.

AI+DePIN

1. DePIN’s solution

DePIN stands for Decentralized Physical Infrastructure Network; it is also a web of relationships between people and devices. By combining token economics with hardware such as computers and car cameras, it organically connects users with devices and keeps the economic model running in an orderly way.

Compared to the broader definition of Web3, DePIN has a natural advantage in attracting off-chain AI teams and related funding due to its deeper connection with hardware devices and traditional enterprises.

The DePIN ecosystem, which pursues distributed computing power and incentives for contributors, precisely addresses the needs of AI products for computing power and IP.

  • DePIN uses tokenomics to attract global computing power (both data centers and idle personal machines), reducing the risk of computing-power centralization while lowering the cost for AI teams to call on that power.
  • The DePIN ecosystem's large and diverse IP base gives AI models varied and objective data-acquisition channels, and a sufficient pool of data providers helps keep improving model performance.
  • The overlap between DePIN users and Web3 users can help onboarded AI projects develop more AI models with distinctly Web3 characteristics, forming differentiated competition unavailable in the traditional AI market.

In the Web2 era, AI training data usually comes from public datasets or is collected by the model maker itself, which is subject to cultural and geographical constraints and causes subjective "distortion" in AI model outputs. Traditional collection methods are also limited by efficiency and cost, making it hard to push models to larger scale (parameter count, training time, and data quality).

For AI models, the larger the scale of the model, the easier it is for the performance of the model to change qualitatively.

Source: Large Language Models' emergent abilities: How do they solve problems they were not trained to address?

DePIN happens to have a natural advantage here. Take Hivemapper as an example: it spans 1,920 regions around the world, and nearly 40,000 contributors provide data for Map AI, its map AI model.

The combination of AI and DePIN signifies a new level of integration between AI and Web3. Currently, AI projects in Web3 are predominantly focused on the application layer and have not been able to fully break away from direct reliance on Web2 infrastructure. These projects are still dependent on existing AI models hosted on traditional computing platforms, with limited exploration in creating new AI models.

Web3 elements have always been positioned at the lower end of the food chain, unable to achieve significant returns. The same applies to distributed computing platforms, as the combination of AI and computing power alone cannot fully unleash their potential. In this relationship, computing power providers are unable to gain additional profits, and the ecosystem architecture is too singular. As a result, the token economy cannot effectively drive the flywheel.

However, the AI+DePIN concept is breaking this inherent relationship and shifting Web3’s attention to broader AI models.

2. Conclusion of AI+DePIN projects

DePIN inherently possesses the necessary equipment (computing power, bandwidth, algorithms, data), users (model training data providers), and an incentive mechanism within its ecosystem (token economics) that AI urgently requires.

We can boldly define AI+DePIN as providing comprehensive objective conditions (computing power/bandwidth/data/IP), offering scenarios for AI models (training/inference/fine-tuning), and being endowed with token economics.

Future Money Group will list the following classic paradigms of AI+DePIN for clarification.

According to the type of resource provided, we divide them into four sectors, computing power, bandwidth, data, and algorithms, and attempt to sort the projects in each.

2.1 Computing power

The computing power side is the main component of the AI+DePIN sector and currently has the largest number of projects. Its main sources of computing power are the GPU (graphics processing unit), CPU (central processing unit), and TPU (tensor processing unit). TPUs, because of their manufacturing difficulty, are made mainly by Google and offered only through cloud leasing, so their market size is relatively small. The GPU, by contrast, is a specialized hardware component: compared to a general-purpose CPU, it efficiently handles complex mathematical operations that run in parallel. GPUs were initially built for graphics rendering in games and animation, but their applications now extend far beyond that, and they are currently the market's main source of computing power.

Therefore, many of the computing-power AI+DePIN projects we see specialize in graphics and video rendering, or in gaming; this follows from the characteristics of GPUs.

From a global perspective, the main providers of computing power for AI+DePIN products can be divided into three parts: traditional cloud computing service providers, idle personal computing power, and self-owned computing power. Among them, cloud computing service providers have a relatively large share, followed by idle personal computing power. This means that these types of products often act as intermediaries for computing power. On the demand side, there are various AI model development teams.

Currently, computing power in this category is almost never 100% utilized and often sits largely idle. Akash Network, for example, has around 35% of its computing power in use, with the rest idle; io.net is in a similar situation.

This likely reflects the currently small volume of AI model training demand, and it is also why AI+DePIN can offer cheap computing power. As the AI market expands, this situation should improve.

Akash Network: Decentralized peer-to-peer cloud service market

Akash Network is a decentralized peer-to-peer cloud service marketplace, often referred to as the Airbnb of cloud services. The Akash network allows users and companies of all sizes to use their services quickly, stably and affordably.

Similar to Render, Akash also provides users with services such as GPU deployment, leasing, and AI model training.

In August 2023, Akash launched Supercloud, allowing developers to set the price they are willing to pay to deploy their AI models, while providers with additional computing power host users’ models. The feature is very similar to Airbnb, allowing providers to rent out unused capacity.

Through open bidding, resource providers are encouraged to open up idle computing resources in their networks, and Akash Network achieves more efficient utilization of resources, thus providing more competitive prices for resource demanders.
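Akash's actual bidding engine is more involved, but the open-bidding idea above can be sketched as a reverse auction: the deployer sets a maximum price, providers bid, and the cheapest qualifying bid wins. Provider names and prices below are hypothetical:

```python
# Minimal sketch of an open (reverse) auction, as in the Akash-style
# marketplace described above. Provider names and prices are hypothetical.
def select_provider(bids, max_price):
    """bids: {provider_name: hourly_price_usd}. Returns the cheapest bid
    at or below the deployer's max_price as a (name, price) tuple,
    or None if no bid qualifies."""
    qualifying = {name: price for name, price in bids.items() if price <= max_price}
    if not qualifying:
        return None
    return min(qualifying.items(), key=lambda kv: kv[1])

bids = {"provider-a": 1.20, "provider-b": 0.85, "provider-c": 1.60}
winner = select_provider(bids, max_price=1.50)
print(winner)  # → ('provider-b', 0.85)
```

The competitive pressure comes from the demand side seeing all bids at once, which is what pushes idle resources toward the market-clearing price.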

At present the Akash ecosystem totals 176 GPUs, of which 62 are active, an activity level of about 35%, lower than the 50% level of September 2023. Estimated daily revenue is around US$5,000. The AKT token supports staking: users can help secure the network by staking tokens and earn an annualized return of about 13.15%.
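The quoted figures can be reproduced directly; the 1,000 AKT stake below is a hypothetical position used only to illustrate the quoted yield:

```python
# Reproducing the Akash figures quoted above (illustrative arithmetic).
total_gpus = 176
active_gpus = 62
utilization = active_gpus / total_gpus
print(f"GPU utilization: {utilization:.0%}")  # ≈ 35%

# Staking return at the quoted ~13.15% annualized rate,
# for a hypothetical 1,000 AKT stake.
stake_akt = 1_000
apr = 0.1315
print(f"Yearly reward on {stake_akt} AKT: {stake_akt * apr:.1f} AKT")
```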

Akash's current data performance in the AI+DePIN sector is relatively strong, and its US$700 million FDV leaves more room for growth than Render's or BitTensor's.

Akash has also connected to a BitTensor subnet to expand its development space. Overall, Akash is one of the higher-quality projects on the AI+DePIN track, with excellent fundamentals.

io.net: AI+DePIN with the largest number of GPUs

io.net is a decentralized computing network that enables ML (machine learning) applications to be developed, executed, and scaled on the Solana blockchain. Leveraging what it claims is the world's largest GPU cluster, it lets machine learning engineers rent distributed cloud computing power, equivalent to centralized services, at a fraction of the cost.

According to official data, io.net has more than 1 million GPUs on standby. In addition, io.net’s cooperation with Render also expands the GPU resources available for deployment.

The io.net ecosystem has many GPUs, but almost all come from partnerships with cloud computing vendors and from individual nodes, and the idle rate is high. Taking its most numerous model, the RTX A6000, as an example: of 8,426 GPUs, only 11% (927) are in use, and most other GPU models sit almost unused. One major current advantage of io.net, however, is cheap pricing: against Akash's GPU cost of US$1.5 per hour, the lowest cost on io.net can be between US$0.1 and 1.

In the future, io.net also plans to let GPU providers in the IO ecosystem stake native assets to increase their chance of being selected: the more assets staked, the greater the chance. AI engineers who stake native assets can likewise access high-performance GPUs.

In terms of GPU scale, io.net is the largest of the 10 projects listed in this article, and setting the idle rate aside, its number of GPUs in use also ranks first. In terms of token economics, its native protocol token IO will launch in the first quarter of 2024 with a maximum supply of 22,300,000. Users are charged a 5% fee for using the network, which is used to burn IO tokens or to fund incentives for new users on both the supply and demand sides. The token model has clear price-support characteristics, so although io.net has not yet issued its token, the market is very enthusiastic.
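The 5% network fee is stated in the source, but how it divides between burns and incentives is not; the `burn_share` parameter below is therefore a hypothetical assumption used purely to sketch the mechanism:

```python
# Sketch of the quoted 5% io.net network fee. The split between token
# burns and user incentives is NOT specified in the source; burn_share
# is a hypothetical parameter for illustration.
def settle_payment(amount_io, fee_rate=0.05, burn_share=0.5):
    fee = amount_io * fee_rate
    burned = fee * burn_share          # removed from supply
    incentives = fee - burned          # funds supply/demand-side rewards
    to_provider = amount_io - fee      # net payment to the GPU provider
    return to_provider, burned, incentives

provider, burned, incentives = settle_payment(100.0)
print(provider, burned, incentives)  # → 95.0 2.5 2.5
```

A fee that is partly burned is what gives the model its price-support ("deflationary") character: usage directly shrinks supply.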

Golem: CPU-based computing power market

Golem is a decentralized computing power marketplace that supports anyone to share and aggregate computing resources by creating a network of shared resources. Golem provides a scenario for users to lease computing power.

The Golem marketplace consists of three parties: computing power suppliers, computing power demanders, and software developers. Computing power demanders submit computing tasks, and the Golem network assigns the tasks to suitable computing power suppliers (who provide RAM, disk space, and CPU cores, etc.). After the computing tasks are completed, both parties settle the payment using tokens.
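The matching step described above can be sketched minimally: a demander's task states its resource requirements, and the network picks a capable supplier. Supplier specs, names, and prices below are hypothetical:

```python
# Minimal sketch of matching a compute task to a CPU supplier in a
# Golem-style marketplace. All supplier specs and the task itself
# are hypothetical.
def match_supplier(task, suppliers):
    """Return the cheapest supplier meeting the task's RAM, disk,
    and core requirements, or None if no supplier qualifies."""
    capable = [
        s for s in suppliers
        if s["ram_gb"] >= task["ram_gb"]
        and s["disk_gb"] >= task["disk_gb"]
        and s["cores"] >= task["cores"]
    ]
    return min(capable, key=lambda s: s["price"], default=None)

suppliers = [
    {"name": "node-1", "ram_gb": 16, "disk_gb": 100, "cores": 8,  "price": 0.4},
    {"name": "node-2", "ram_gb": 64, "disk_gb": 500, "cores": 32, "price": 0.9},
]
task = {"ram_gb": 32, "disk_gb": 200, "cores": 16}
print(match_supplier(task, suppliers)["name"])  # → node-2
```

Settlement in tokens then happens after the task completes, as the paragraph above describes.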

Golem mainly stacks CPU computing power. Although CPUs cost less than GPUs (an Intel i9-14900K is about $700, while an A100 runs $12,000-25,000), CPUs cannot run highly concurrent operations and consume more energy per unit of work. Renting out CPU power may therefore carry a slightly weaker narrative than a GPU project.

Magnet AI: Assetization of AI models

Magnet AI provides model training services to developers of different AI models by integrating GPU computing power providers. Unlike other AI+DePIN products, Magnet AI allows different AI teams to issue ERC-20 tokens based on their own models, and users can earn the various model tokens' airdrops and extra rewards by interacting with different models.

In Q2 2024, Magnet AI will launch on Polygon zkEVM & Arbitrum.

Like io.net, it focuses on integrating GPU computing power and provides model training services to AI teams.

The difference is that io.net focuses more on the integration of GPU resources, encouraging different GPU clusters, enterprises and individuals to contribute GPUs while receiving rewards, which is driven by computing power.

Magnet AI seems to focus more on the AI models themselves. With model tokens in play, it may attract and retain users around tokens and airdrops, and draw in AI developers by capitalizing AI models.

Brief summary: Magnet effectively builds a marketplace on top of GPUs. Any AI developer or model deployer can issue an ERC-20 token on it, and users can earn or actively hold the various tokens.

Render: Graphic rendering AI model professional player

Render Network is a decentralized GPU rendering solution provider aiming to connect creators and idle GPU resources through blockchain technology. Its goal is to eliminate hardware limitations, reduce time and costs, and provide digital rights management to further promote the development of the metaverse.

According to the Render whitepaper, artists, engineers, and developers can create a range of AI applications using Render, such as AI-assisted 3D content generation, AI-accelerated rendering, and training AI models using Render’s 3D scene graph data.

Render provides the Render Network SDK for AI developers, enabling them to leverage Render’s distributed GPUs to execute AI computing tasks ranging from NeRF (Neural Radiance Fields) and LightField rendering processes to generative AI tasks.

According to a report by Global Market Insights, the global 3D rendering market is expected to reach $6 billion. Render, with a valuation of $2.2 billion, still has room for development.

Specific data on Render's GPU usage is not currently available. However, Render is closely tied to OTOY, which has demonstrated connections with Apple multiple times and has a wide range of businesses, including OctaneRender, its renowned renderer, which supports leading 3D toolsets across VFX, gaming, motion design, architectural visualization, and simulation, with native support for Unity3D and Unreal Engine.

Additionally, Google and Microsoft have joined the RNDR network. Render processed nearly 250,000 rendering requests in 2021, and artists in its ecosystem generated around $5 billion in sales through NFTs.

Therefore, Render's valuation should be weighed against the potential of the broader rendering market (approximately $30 billion). With its BME (Burn and Mint Equilibrium) economic model, Render still has some upward potential in token price and FDV (fully diluted valuation).

Clore.ai: Video Rendering

Clore.ai is a platform that provides GPU power rental services based on Proof of Work (PoW). Users can rent out their own GPUs for tasks such as AI training, video rendering, and cryptocurrency mining, allowing others to access this capability at a low cost.

Clore.ai's business covers AI training, movie rendering, VPN services, and cryptocurrency mining. When there is specific demand for computing power, the network allocates tasks; when there is none, it identifies the cryptocurrency with the highest mining profitability at that moment and mines it.

In the past six months, Clore.ai's GPU count has grown from 2,000 to about 9,000; in number of GPUs integrated, it surpasses Akash. Yet its secondary-market FDV is only about 20% of Akash's.

In terms of token model, CLORE uses PoW mining with no pre-mine and no ICO. Of each block, 50% goes to miners, 40% to lessors, and 10% to the team.

The total token supply is 1.3 billion. Mining began in June 2022 and will reach essentially full circulation by 2042. Current circulation is approximately 220 million; circulating supply at the end of 2023 was approximately 250 million, about 20% of total supply. The actual current FDV is therefore US$31 million, and in theory Clore.ai is seriously undervalued. However, because its tokenomics allocate 50% of emissions to miners, mine-and-sell pressure is high, so the token price faces strong upward resistance.
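The emission split and the supply figures above can be checked with a few lines; the 100-token block reward below is a hypothetical number used only to show the 50/40/10 proportions:

```python
# The 50/40/10 block-reward split quoted above, applied to a
# hypothetical block reward of 100 CLORE.
def split_block_reward(reward):
    return {
        "miners": reward * 0.50,
        "lessors": reward * 0.40,
        "team": reward * 0.10,
    }

print(split_block_reward(100))  # → {'miners': 50.0, 'lessors': 40.0, 'team': 10.0}

# Sanity check on the circulating-supply share quoted above
# (the article rounds this to 20%).
total_supply = 1_300_000_000
circulating_end_2023 = 250_000_000
print(f"{circulating_end_2023 / total_supply:.0%} of max supply")
```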

Livepeer: video rendering, inference

Livepeer is a decentralized video protocol based on Ethereum that issues rewards to parties that securely process video content at a reasonable price.

According to officials, Livepeer has thousands of GPU resources transcoding millions of minutes of video every week.

Livepeer may adopt a "mainnet + subnet" approach, where different node operators spin up subnets and perform tasks by settling payments on the Livepeer mainnet. For example, an AI video subnet could be introduced specifically for AI model training in the field of video rendering.

Livepeer later plans to expand its AI-related work from simple model training to inference and fine-tuning.

Aethir: Focus on cloud gaming and AI

Aethir is a cloud gaming platform with decentralized cloud infrastructure (DCI) built specifically for gaming and artificial intelligence companies. It helps deliver heavy GPU computing loads on behalf of players, ensuring gamers get an ultra-low latency experience anywhere and on any device.

Aethir also provides deployment services covering GPU, CPU, disk, and other elements. On September 27, 2023, Aethir began offering commercial cloud gaming and AI computing services to global customers, integrating decentralized computing power to support games and AI models on its own platform.

By transferring the computing power requirements for computing rendering to the cloud, cloud gaming eliminates the limitations of the terminal device’s hardware and operating system, and significantly expands the potential player base.

2.2 Bandwidth

Bandwidth is one of the resources DePIN provides to AI. The global bandwidth market exceeded US$50 billion in 2021 and is predicted to exceed US$100 billion by 2027.

As AI models grow more numerous and complex, training typically adopts several parallel computing strategies, such as data parallelism, pipeline parallelism, and tensor parallelism. Under these modes, collective communication among multiple computing devices becomes increasingly important, so when building large-scale training clusters for large AI models, network bandwidth plays a prominent role.
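To make concrete why collective communication is bandwidth-hungry, here is a toy sketch of the all-reduce step in data parallelism, using plain Python lists rather than real networking: each worker computes gradients on its own data shard, and the averaged gradients must cross the network every training step, so traffic scales with model size.

```python
# Toy illustration of data parallelism: each worker computes gradients
# on its own data shard, then an all-reduce averages them across
# workers. In a real cluster this averaging step is pure network
# traffic, repeated every step, which is why bandwidth dominates
# large-scale training.
def all_reduce_mean(worker_grads):
    """worker_grads: one gradient list per worker, all the same length.
    Returns the element-wise mean across workers."""
    n_workers = len(worker_grads)
    n_params = len(worker_grads[0])
    return [
        sum(g[i] for g in worker_grads) / n_workers
        for i in range(n_params)
    ]

# Three workers, each holding gradients for the same 4 parameters.
grads = [
    [0.1, 0.2, 0.3, 0.4],
    [0.3, 0.2, 0.1, 0.0],
    [0.2, 0.2, 0.2, 0.2],
]
print([round(v, 6) for v in all_reduce_mean(grads)])  # → [0.2, 0.2, 0.2, 0.2]
```

With billions of parameters instead of four, every step moves gigabytes between nodes, which is the bandwidth demand DePIN networks aim to serve.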

More importantly, sufficiently stable and reliable bandwidth ensures that different nodes can respond simultaneously, technically avoiding any single point of control (for example, Falcon uses a low-latency, high-bandwidth relay network model to balance latency and bandwidth needs), ultimately keeping the whole network trustworthy and censorship-resistant.

Grass: Mobile-compatible bandwidth mining product

Grass is the flagship product of Wynd Network, which focuses on open network data and raised US$1 million in financing in 2023. Grass allows users to earn passive income from their Internet connections by selling unused network resources.

Users can sell Internet bandwidth on Grass, provide bandwidth services to AI development teams in need, help AI model training, and obtain token returns.

Grass is currently about to launch a mobile version. Because the mobile and PC versions use different IP addresses, Grass users will supply more IP addresses to the platform simultaneously, letting Grass gather more IPs and deliver better data efficiency for AI model training.

Grass currently offers two ways to contribute IP addresses: a browser extension on PC, and a mobile app. (The PC and mobile devices must be on different networks.)

As of November 29, 2023, the Grass platform has 103,000 downloads and 1,450,000 unique IP addresses.

Mobile terminals and PC terminals have different demands for AI, so the applicable AI model training categories are different.

For example, mobile devices carry large amounts of data on image optimization, face recognition, real-time translation, voice assistants, and device performance optimization, which are difficult to obtain on the PC side.

Currently, Grass is in a relatively pioneer position in mobile AI model training. Considering the huge potential of the mobile market on a global scale, the prospects of Grass are worthy of attention.

However, Grass has not yet disclosed much concrete information about its AI models; in the early stage, token mining may simply be its main mode of operation.

Meson Network: Layer 2 compatible with mobile devices

Meson Network is a next-generation storage acceleration network based on blockchain Layer 2. It aggregates idle servers through mining, schedules bandwidth resources, and serves the file and streaming acceleration market, including traditional websites, videos, live broadcasts, and blockchain storage solutions.

We can understand Meson Network as a bandwidth resource pool, with the two sides of the pool representing the supply and demand parties. The former contributes bandwidth, while the latter utilizes bandwidth.

In Meson's product structure, two products (GatewayX and GaGaNode) receive bandwidth contributions from nodes worldwide, and one product (IPCola) monetizes the aggregated bandwidth resources.

GatewayX focuses on integrating commercial idle bandwidth, mainly targeting IDC centers.

According to Meson's data dashboard, over 20,000 IDC-connected nodes worldwide currently form a data transmission capacity of 12.5 TiB/s.

GaGaNode primarily integrates residential and personal device idle bandwidth, providing edge computing assistance.

IPCola is the monetization channel for Meson, performing tasks such as IP and bandwidth allocation.

Meson recently revealed that its half-year revenue is over one million US dollars. According to official website statistics, Meson has 27,116 IDC nodes and an IDC capacity of 17.7TB/s.

Meson plans to release its token in March-April 2024 and has already disclosed its tokenomics.

Token name: MSN, with an initial supply of 100 million. The mining inflation rate is 5% in the first year, decreasing by 0.5 percentage points each year.
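That schedule can be projected forward. The source does not say whether each year's inflation applies to the initial supply or compounds on the prior year's supply; the sketch below assumes compounding, which is an assumption, not a confirmed detail of MSN's design:

```python
# Sketch of the MSN emission schedule quoted above: 100M initial
# supply, 5% inflation in year one, decreasing by 0.5 percentage
# points per year. Compounding on prior-year supply is an ASSUMPTION;
# the source does not specify the base.
def project_supply(initial=100_000_000, first_rate=0.05, step=0.005, years=5):
    supply, rate = float(initial), first_rate
    history = []
    for _ in range(years):
        supply *= 1 + max(rate, 0.0)   # inflation never goes negative
        history.append(round(supply))
        rate -= step
    return history

for year, s in enumerate(project_supply(), start=1):
    print(f"year {year}: {s:,} MSN")
```

Under this reading, supply reaches 105 million after year one and inflation decays to zero within a decade.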

Network3: Integrated with the Sei network

Network3 is an AI company that has built a dedicated AI Layer 2 and integrated it with Sei. Through AI model algorithm optimization and compression, edge computing, and privacy computing, it serves AI developers worldwide, helping them train and verify models quickly, conveniently, and efficiently at scale.

According to official website data, Network3 has more than 58,000 active nodes providing 2 PB of bandwidth, and has partnered with 10 blockchain ecosystems, including Alchemy Pay, ETHSign, and IoTeX.

2.3 Data

Unlike computing power and bandwidth, data supply is currently a relatively niche market with a distinctly specialized character: the demand side is usually the project itself or AI model development teams in related categories, as with Hivemapper.

There is no logical difficulty in this paradigm of training one's own map model by feeding it one's own data, so we can broaden our horizons to DePIN projects similar to Hivemapper, such as DIMO, Natix, and FrodoBots.

Hivemapper: Focus on empowering its own Map AI products

Hivemapper is one of the top DePIN concepts on Solana and is committed to creating a decentralized "Google Maps". Users earn HONEY tokens by purchasing Hivemapper's dashcam, using it, and sharing real-time imagery with Hivemapper.

Future Money Group described Hivemapper in detail in the article "FMG Research Report: Understanding the Automotive DePIN Business Model Represented by Hivemapper, with a 19-fold Increase in 30 Days", so we will not elaborate here. Hivemapper is included in the AI+DePIN section because it has launched Map AI, an AI map-creation engine that generates high-quality map data from the imagery collected by its dashcams.

Map AI introduces a new role, the AI trainer, which encompasses both the earlier dashcam data contributors and Map AI model trainers.

Hivemapper does not impose specialized requirements on its AI model trainers. Instead, it uses low-threshold, game-like activities such as remote tasks and guessing geographic locations to bring in more IP addresses: the richer a DePIN project's IP resources, the more efficiently its AI acquires data. Users who participate in AI training also receive HONEY token rewards.

AI's application scenarios in Hivemapper are relatively niche, and Hivemapper does not support third-party model training; Map AI exists to optimize its own map product. The investment logic for Hivemapper therefore does not change.

Potential projects

DIMO: Collecting data inside the car

DIMO is an automotive IoT platform built on Polygon that enables drivers to collect and share their vehicle data, including vehicle kilometers traveled, driving speed, location tracking, tire pressure, battery/engine health, and more.

By analyzing vehicle data, the DIMO platform can predict when maintenance is needed and alert users in a timely manner. Drivers not only gain insights into their vehicles, but they can also contribute data to DIMO’s ecosystem and receive DIMO tokens as rewards. Data consumers can extract data from the protocol to understand the performance of components such as batteries, autonomous driving systems and controls.

Natix: Privacy-enabled map data collection

Natix is a decentralized network built on AI privacy patents. It aims to combine the world's camera devices (smartphones, drones, cars) into a privacy-preserving camera network, collecting data under privacy compliance and using it to populate a decentralized dynamic map (DDMap).

Users who contribute data receive token and NFT incentives.

FrodoBots: Decentralized network application based on robots

FrodoBots is a DePIN game that uses mobile robots as its carrier, collecting image data through their cameras, and carries certain social attributes.

Users join the game by purchasing robots and interacting with players around the world, while the robots' cameras collect and aggregate road and map data.

The above three projects all have two elements: data collection and IP provision. Although they have not yet conducted relevant AI model training, they all provide necessary conditions for the introduction of AI models. These projects, including Hivemapper, require cameras to collect data and form a complete map. Therefore, the adapted AI models are also limited to areas focusing on map construction. The empowerment of AI models will help projects build a higher moat.

Note that camera collection often runs into regulatory issues, such as two-way privacy concerns: the portrait rights of passers-by captured by outward-facing cameras, and users' concern for their own privacy. Natix, for example, applies AI for privacy protection.

2.4 Algorithm

What distinguishes algorithms from computing power, bandwidth, and data is that the latter focus on resource provision, while algorithms focus on the AI models themselves. BitTensor, for example, contributes neither data nor computing power directly; instead, it uses a blockchain network and incentive mechanisms to schedule and select among different algorithms, creating a model marketplace for free competition and knowledge sharing in AI.

Similar to OpenAI, the goal of BitTensor is to maintain the decentralization of models while achieving inference performance comparable to traditional model giants.

The algorithm track is relatively forward-looking, and comparable projects are still rare. As AI models emerge, especially models born within Web3, competition between models will become the norm.

At the same time, competition between models will also raise the importance of downstream stages of the AI model industry, such as inference and fine-tuning. Model training is only the upstream of the AI industry: a model must first be trained to acquire initial intelligence, and then undergo more careful inference and fine-tuning (which can be understood as optimization) before it can be deployed at the edge as a finished product. These stages require a more complex ecosystem architecture and greater computing power support, which also points to enormous room for development.

BitTensor: AI model oracle

BitTensor is a decentralized machine learning ecosystem with an architecture similar to Polkadot mainnet + subnet.

Working logic: subnets transmit activity information to the Bittensor API (which plays a role similar to an oracle); the API then passes the useful information to the main network, and the main network distributes rewards.

BitTensor's 32 subnets

Roles within the BitTensor ecosystem:

Miner: Can be understood as the provider of various AI algorithms and models worldwide. They host AI models and provide them to the Bittensor network. Different types of models form different subnets.

Validator: Evaluator within the Bittensor network. Validators assess the quality and effectiveness of AI models, rank them based on performance on specific tasks, and help consumers find the best solutions.

User: The end user of the AI models provided by Bittensor. This can be an individual, or developers seeking to use AI models in applications.

Nominator: Delegates tokens to a specific validator to show support, and can also spread delegations across multiple validators.

Open AI supply and demand chain: Some people provide different models, some evaluate different models, and some use the results provided by the best model.

Unlike Akash and Render, which are similar to “computing intermediaries,” BitTensor is more like a “labor market,” using existing models to absorb more data in order to make the models more accurate. Miners and validators play the roles of “construction workers” and “supervisors.” Users pose questions, miners provide answers, validators evaluate the quality of the answers, and ultimately return them to the users.
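The miner–validator–user loop described above can be sketched in a few lines of Python. This is an illustrative toy, not BitTensor's actual protocol: the stub answers, the random scoring, and the score-proportional reward split are all simplified assumptions made for the sketch.

```python
import random

def miner_answer(miner_id: int, question: str) -> str:
    # Each miner hosts its own model; here a stub returns a tagged answer.
    return f"answer from miner {miner_id} to: {question}"

def validator_score(answer: str) -> float:
    # A real validator evaluates answer quality; a random stand-in is used here.
    return random.random()

def run_round(question: str, miners: list[int], reward_pool: float) -> dict[int, float]:
    """One query round: miners answer, a validator scores the answers,
    and rewards are split in proportion to score (a simplifying assumption)."""
    scores = {m: validator_score(miner_answer(m, question)) for m in miners}
    total = sum(scores.values())
    return {m: reward_pool * s / total for m, s in scores.items()}

rewards = run_round("What is DePIN?", miners=[1, 2, 3], reward_pool=100.0)
print(rewards)  # rewards sum to the pool; better-scored miners earn more
```

The point of the sketch is the division of labor: miners compete on answer quality, validators act as the "supervisors" that convert quality into rankings, and the reward distribution closes the incentive loop.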

BitTensor's token is called TAO. TAO's market value is currently second only to RNDR, but due to its long-term release mechanism of halving every four years, its ratio of market value to fully diluted value is actually among the lowest of these projects. This means TAO's circulating supply is relatively low at the moment while its unit price is high, so the token's actual value is arguably undervalued.

Currently, it is difficult to find suitable valuation targets. If we consider architectural similarity, Polkadot (with a market cap of around $12 billion) can be used as a reference, indicating that TAO has nearly 8 times the potential for growth.

If we consider the “oracle” attribute, Chainlink (with a market cap of $14 billion) can be used as a reference, indicating that TAO has nearly 9 times the potential for growth.

If we consider business similarity, OpenAI (valued at approximately $30 billion around the time of Microsoft's investment) can be used as a reference, suggesting that TAO's upper limit for growth may be around 20 times.

Conclusion

Overall, AI+DePIN has brought about a paradigm shift in the AI track within the context of Web3. It has shifted the market’s focus from merely contemplating “what can AI do in Web3?” to a broader question of “what can AI and Web3 contribute to the world?”

If NVIDIA CEO Jensen Huang calls the release of large generative models the "iPhone moment" of AI, then the combination of AI and DePIN means Web3 will truly usher in its own "iPhone moment."

As the easiest and most mature use case of Web3 in the real world, DePIN is making Web3 more widely accepted.

Because IP nodes in AI+DePIN projects partially overlap with Web3 players, the combination of the two is also helping the industry produce Web3's own models and AI products. This benefits the overall development of the Web3 industry and opens up new tracks, such as the inference and fine-tuning of AI models and the development of mobile AI models.

An interesting point is that the AI+DePIN products listed in the article seem to be able to embed the development path of the public chain. In previous cycles, various new public chains emerged, using their own TPS and governance methods to attract various developers.

The same is true of current AI+DePIN products, which attract AI model developers on the strength of their own computing power, bandwidth, data, and IP advantages. As a result, AI+DePIN products currently tend toward homogeneous competition.

The key issue is not the amount of computing power (although it is an important prerequisite), but how to utilize this computing power. The AI+DePIN track is still in its early stages of “wild growth,” so we can have high expectations for its future pattern and product forms.

Disclaimer:

  1. This article is reprinted from [FutureMoney]. All copyrights belong to the original author [FMGResearch]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.