Introduction
If AI Agents are the breakout theme of this crypto cycle, then Shaw, the founder of ai16z and Eliza, has undoubtedly caught the crest of the wave.
ai16z, which he initiated, is the first AI meme-themed on-chain fund, a satirical nod to the well-known venture firm a16z. It started fundraising from zero in October 2024 and within a few months grew into the first AI DAO on Solana with a market value of over $2.5 billion (it has since pulled back). At the core of ai16z is ElizaOS, a multi-agent simulation framework on which developers can create, deploy, and manage autonomous AI Agents. Thanks to its first-mover advantage and a thriving TypeScript community, the Eliza codebase has collected more than 10,000 stars on GitHub and accounts for roughly 60% of the current Web3 AI Agent development market.
Despite the controversy around him on social media, Shaw has become a key figure in crypto AI. There are already many interviews with him in the Chinese-language community, but we believe the podcast that Tom Shaughnessy, co-founder of the leading crypto research firm Delphi Digital, and Ejazz of 26 Crypto Capital recorded with Shaw on January 6 is the most in-depth and still forward-looking conversation with him on the practical thinking behind AI Agents.
In this conversation, the questions were insightful and Shaw was as candid as ever, sharing his views on current AI Agent use cases in Web3 and his judgment about the future, covering topics from agent development frameworks and token economics to the future of open-source AGI platforms. Coinspire has translated the full conversation to share with readers, in the hope of offering a glimpse of the future of AI + Web3.
Key Highlights
▶ The creation of Eliza Labs and the rapid development of ai16z
▶ Dive into various aspects of Eliza framework technology
▶ Analysis of agent platforms and the transition from slop bots to utilities
▶ Discussion of token economics and value capture mechanisms
▶ Explore cross-chain development and blockchain options
▶ The vision of open source AGI and the future of artificial intelligence agents
Part.1 Entrepreneurial experience and the trip to Asia
Q1: Shaw, please tell us about your experience
Shaw: I worked on open source projects for many years and created an open-source spatial web project, but my partner removed me from GitHub and sold the project for $75 million, and I got nothing. He didn't write a line of code, while I was the project's lead developer. Although I am suing him, the incident cost me everything and damaged my reputation.
Later, I started over and focused on AI Agent research, but because the others had taken all the funds, I had to shoulder everything myself, even going into debt and taking on service work to make a living. In the end the metaverse concept never caught on, and that direction gradually became untenable.
After that, I joined Webaverse as lead developer. At first things went well, but the project was hacked, the treasury was stolen, and the team had to pivot. That experience was extremely difficult and almost broke me.
I went through a lot of setbacks, but I kept pushing forward. I worked with the founder of Project 89 (a neuro-linguistic viral interactive AI) to launch a platform called Magick and closed a seed round. He wanted the platform to be a no-code tool for users to build agent systems. My view was that if you provide a complete solution, users may just copy it directly, and if you don't, they won't know where to start. When the funds were running out, I decided to focus on developing the agent system itself; by then I had already created the first version of Eliza on that platform. All of this may sound crazy, but I have kept trying and exploring new directions.
Q2: What is the situation of the developer community in Asia?
Shaw: I have spent the past few weeks in Asia, meeting intensively with the local developer communities. Since our project launched, and especially after AI Agent-related content (such as the ai16z project) gained attention, I have received a lot of messages from Asia, especially China, and found that there are many supporters here.
I met a lot of people through a community called 706. Some of them help us run the Chinese channel and Discord and organized a small hackathon, where I met many more developers. After reviewing their projects, I felt I had to come and meet everyone in person, so we planned a trip across several cities to meet developers.
The local communities are very enthusiastic and organized one event after another for us. I was able to talk with many people, learn about their projects, and build connections. Over the past few days I have traveled from Beijing and Shanghai to Hong Kong; I am now in Seoul and will go to Japan tomorrow.
During these meetups I saw many interesting projects: games, virtual girlfriend apps, robots, and wearables. Some involve data collection, fine-tuning, and annotation, which could have good prospects when combined with our existing technology. I am particularly interested in integrating AI Agents into DeFi protocols, which lowers the barrier for users and could become a killer application in the next few months. Although many projects are still early, the developers' enthusiasm and creativity are impressive.
Part.2 AI Agent + DeFi: use cases and practicality
Q3: ai16z is now valued at billions of dollars, and the Eliza framework supports a large number of agents. Developers are very interested, and the project has been trending on GitHub for weeks. At the same time, people are gradually getting tired of social media chatbots that can only auto-reply and are looking forward to agents that can actually complete tasks, such as creating tokens, managing token economies, maintaining ecosystems, and even performing DeFi operations. Do you think agents will develop in this direction? Will Eliza's agents focus on DeFi?
Shaw: It's an obvious business opportunity, and I'm also tired of the reply-bot situation where a lot of people just download the tool, show it off, and shill tokens; I really hope we can get beyond that. There are three main categories of agents I'm most interested in right now: agents that make you money, agents that get products into the hands of the right customers, and agents that save you time.
We are still stuck in this auto-reply mode. I personally block every reply bot that wasn't explicitly called, and I encourage everyone to do the same, because it creates social pressure that forces agent developers to actually think and build something meaningful. Blindly following a trend and commenting on everything doesn't actually help any token.
I am most interested in DeFi right now because it has a lot of arbitrage opportunities. More than anything else, DeFi fits the pattern of "there are opportunities to make money, but many people don't know how to use it." We are already working with some teams, such as Orca, and with DLMM (Dynamic Liquidity Market Maker) pools on Meteora. A bot can automatically identify potential opportunities; when the token moves out of range, it automatically rebalances and transfers the proceeds back to your wallet. Users can safely put their tokens in, and the whole process is automated.
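To make that workflow concrete, here is a minimal sketch of the check-adjust-sweep loop such an LP-management bot could run. The helpers (getPosition, rebalancePosition, sweepFeesToWallet) are hypothetical stand-ins for a real DLMM/CLMM SDK such as Meteora's or Orca's, not actual API calls.

```typescript
// Hypothetical LP-management loop; the helpers below are toy stand-ins,
// not a real Meteora or Orca SDK.
interface LpPosition {
  pool: string;        // pool address
  lowerPrice: number;  // lower bound of the provided range
  upperPrice: number;  // upper bound of the provided range
}

// Placeholder data sources; a real agent would query the chain or an indexer.
async function getPosition(pool: string): Promise<LpPosition> {
  return { pool, lowerPrice: 0.9, upperPrice: 1.1 };
}
async function getSpotPrice(_pool: string): Promise<number> {
  return 1.25; // pretend the token has traded above the range
}
async function rebalancePosition(pos: LpPosition, price: number): Promise<void> {
  console.log(`recentering ${pos.pool} around ${price}`);
}
async function sweepFeesToWallet(pool: string, wallet: string): Promise<number> {
  console.log(`sweeping fees from ${pool} to ${wallet}`);
  return 12.34;
}

// One pass of the cycle the agent would run on a schedule.
async function manageLp(pool: string, wallet: string): Promise<void> {
  const pos = await getPosition(pool);
  const price = await getSpotPrice(pool);
  if (price < pos.lowerPrice || price > pos.upperPrice) {
    await rebalancePosition(pos, price); // token moved out of range, recenter
  }
  const fees = await sweepFeesToWallet(pool, wallet); // send proceeds back to the user
  console.log(`claimed ${fees} in fees`);
}

manageLp("POOL_ADDRESS", "USER_WALLET").catch(console.error);
```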
Additionally, meme coins are very volatile. In fact, meme coins rise so fast in the early days after launch that it is difficult to run liquidity pools (LPs), but once they stabilize, volatility becomes a favorable factor and you can profit through liquidity pools. I basically don't sell tokens myself; I make money through liquidity pools, and I always encourage other agent developers to do the same. I was surprised to find that many people don't. A friend told me he was having a hard time making money; I asked whether he had considered using a liquidity pool, and he said he didn't have time, but he really should be running one and earning from the token's trading volume.
Q4: Beyond liquidity pools, will these agents start managing their own funds for trading, as projects such as ai16z and Degen Spartan AI do? How will they operate their own assets under management (AUM), and can agents reach that goal within this year?
Shaw: I think current large language models (LLMs) are not suited to trading directly. Instead, if there is a suitable API to supply market intelligence, they can make reasonable judgments. For example, I have seen AI systems with a trading success rate of about 41%, which is quite good because most cryptocurrencies are not stable, but LLMs are not good at making complex decisions. Their main role is to predict the next token and to make more reasonable decisions from contextual information.
Where an LLM becomes valuable is in turning unstructured data into structured data, for example turning a group chat full of people shilling tokens to each other into actionable data. We have a team doing research on what we call trust markets, where the core question is: if we treat recommendations in group chats or on Twitter as real signals and trade on them, can we make money? It turns out a small number of people are genuinely good traders and recommenders, and we are analyzing the recommendations of the best of them and may act on their advice in the future.
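As a rough illustration of the trust-market idea (a toy sketch, not the team's actual implementation), one simple approach is to log each recommendation, measure the token's return over a fixed window, and keep a running score per recommender:

```typescript
// Illustrative trust-score bookkeeping for token recommendations.
// This is a toy model, not the actual trust-market implementation.
interface Recommendation {
  recommender: string;   // e.g. a Twitter or Telegram handle
  token: string;         // token address or symbol
  priceAtCall: number;   // price when the call was made
  priceLater: number;    // price after a fixed evaluation window
}

// Score each recommender by the average return of their calls.
function scoreRecommenders(calls: Recommendation[]): Map<string, number> {
  const sums = new Map<string, { total: number; count: number }>();
  for (const c of calls) {
    const ret = (c.priceLater - c.priceAtCall) / c.priceAtCall;
    const agg = sums.get(c.recommender) ?? { total: 0, count: 0 };
    agg.total += ret;
    agg.count += 1;
    sums.set(c.recommender, agg);
  }
  const scores = new Map<string, number>();
  for (const [who, agg] of sums) scores.set(who, agg.total / agg.count);
  return scores;
}

// Example: one consistently good caller, one not.
const history: Recommendation[] = [
  { recommender: "alice", token: "ABC", priceAtCall: 1.0, priceLater: 1.4 },
  { recommender: "bob",   token: "XYZ", priceAtCall: 2.0, priceLater: 1.1 },
];
console.log(scoreRecommenders(history)); // alice = +0.40, bob = -0.45
```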
It's like prediction markets: a small number of people are very good at predicting, and most people are poor at it or susceptible to behavioral-economics biases. So our goal is to track these people's performance through measurable indicators and use it as a training strategy. I think this method applies not only to making money but also to more abstract areas such as governance and contribution rewards.
But making money is the easiest, because it is an easily measurable Lego block. I don't think simply feeding time-series data to an LLM and letting it predict token buys and sells is really the solution. If you design an agent to automatically buy and sell tokens, it can certainly do it, but it may not make money, especially when buying volatile tokens, so I think we need a more flexible and reliable approach than simple buying and selling.
Q5: If an agent is very good at trading, why open-source it and create a token around it instead of just trading with it yourself?
Shaw: I was once told about a company that claims it can predict coin prices with 70% accuracy. I thought: if I could do that, I wouldn't be here telling you about it, I would just be printing infinite money. 70% accuracy on short-horizon trades in something like Bitcoin means you can easily compound enormous profits. I'm sure companies like BlackRock are doing something similar to some extent; they try to process global data to predict stocks and so on, and maybe they succeed at it, after all they have a lot of people dedicated to that kind of work.
But I think in lower-cap markets, behavioral drivers and social media influence probably matter more than any fundamental data you could model. For example, a celebrity retweeting a contract address might move the price more than any algorithm could predict. That is why meme coins are interesting: their market caps are very low and they are highly susceptible to social dynamics. If you can track those social dynamics, you can find opportunities in them.
Part.3 The value of the Agent framework and the development advantages of Eliza
Q6: In the context of Eliza's application scenarios, how can a team bring a new, innovative agent to market using Eliza? What is the main differentiator of such an agent: the model, the data, or other features and support that Eliza provides?
Shaw: There is a perception that it is just a wrapper around ChatGPT, but that is like calling a website a wrapper around HTTP or an application a wrapper around React. What matters is the product itself and whether there are customers using it and paying for it; that is the core of everything.
Models are heavily commoditized, and training a base model from scratch is very expensive, possibly costing hundreds of millions of dollars. If we had funding and market share like OpenAI's, it would be easy to build an end-to-end training system and train models, but then we would be competing with Meta, OpenAI, xAI, and Google, who are all trying to improve benchmark performance to prove they have the best model in the world. Meanwhile, xAI open-sources the previous version every time it releases a new one, Meta open-sources everything it does, and both capture market share through open source.
But I don't think that's where we should compete. We should focus on helping developers build products. What matters is the future of the Internet: how websites and products work and how users use applications. There are already a lot of great products and infrastructure waiting for users, but users don't know how to find them. You can't simply Google "DeFi protocol to make money"; you might find a list and do some research, but it's not easy if you don't know what to look for.
So the real value lies in connecting what already exists and changing the existing model: instead of sitting behind a website and a landing page, take the product to social media, actually demonstrate its use cases, and find the users who need it. I think AI agents should not just be products; they should be part of the product and an interface for interacting with it. I hope to see more attempts like that.
Q7: Why do you think Eliza's framework, or the platform you are building, is the best place for developers and builders, compared with other frameworks and languages (the ZerePy team uses Python, the Arc team uses Rust)?
Shaw: I think language does matter, but it's not everything. There are more developers writing apps in JavaScript right now than in any other language. Almost every messaging app, from Discord to Microsoft Teams, is written in JavaScript, or uses some sort of native runtime with the UI and interaction layer written in JavaScript, and a lot of backend development is in JavaScript too. There are more developers writing JavaScript and TypeScript right now than all other languages combined, especially with the rise of tools like React Native, a JavaScript-based framework for building native Android and iOS apps.
Many developers who have built on the EVM have downloaded Node.js, run Ethereum development tools such as Forge or Truffle, and are familiar with that ecosystem. We can reach developers who have done website development, and they can build agents too.
Although Python is not particularly difficult to learn, it is somewhat difficult to package into different forms, and many people get stuck at the step of installing Python. Python's ecosystem is messy and its package management is complicated; many people don't know how to find the right version that works. Python is a fine choice for backend development, but from doing a lot of development myself I found it is not great for asynchronous programming, and it is also awkward for string processing.
When I realized TypeScript's advantages for developing agents, I knew this was the right direction. On top of that, what we provide is an end-to-end solution that works immediately after you clone it. I think Arc is a cool project, but it lacks connectors; there are no social connectors. Projects like ZerePy are also good, but they mainly do social connectors or reply to messages in loops. And a lot of other projects let several agents talk to each other but don't really connect to social media.
I think of these frameworks as the body and the LLM (large language model) as the brain. We build the bridge so the framework can connect to different clients. By providing these pieces we greatly lower the barrier to entry and reduce the amount of code developers need to write. Developers only need to focus on their products and pull in the APIs they need, and we provide simple abstractions for input and output.
Q8: As a non-developer, how should one understand the functions and workflows the Eliza platform ships? From a non-developer's perspective, what functionality or support do agent builders get after plugging into Eliza or a competing platform?
Shaw: You just download the code to your computer, change the character, fire it up, and you have a basic bot that can chat, which is the most basic functionality. We have a lot of plugins: if you want to add a wallet, just enable the plugin, add the private key for the EVM chain, and choose the chain you want; you can also add API keys, such as Discord credentials or your Twitter username and email. All of this is configuration and can be used directly without writing code, which is also why you see so many bots doing sales and replies.
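For readers who want to see what "change the character and enable plugins" looks like in practice, here is an abbreviated, illustrative character definition. The field names approximate Eliza's character format (check the repository for the exact schema), and the secrets shown as placeholders would normally come from environment variables rather than being hard-coded.

```typescript
// Illustrative character definition; field names approximate Eliza's schema,
// consult the Eliza repo for the exact format. Secrets belong in a .env file.
const character = {
  name: "PizzaPal",
  modelProvider: "anthropic",                   // or "openai", a local Llama, etc.
  clients: ["discord", "twitter"],              // which connectors to start
  plugins: ["evm-wallet", "image-generation"],  // optional capability plugins
  bio: ["A helpful agent that chats and can run simple on-chain tasks."],
  style: { all: ["concise", "friendly"] },
  settings: {
    secrets: {
      DISCORD_API_TOKEN: process.env.DISCORD_API_TOKEN,
      EVM_PRIVATE_KEY: process.env.EVM_PRIVATE_KEY, // wallet key for the chosen chain
    },
  },
};

export default character;
```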
After that, you can use some abstractions to perform other operations, called actions. For example, if you want the bot to order a pizza for you, you just define an "order pizza" action. The system then automatically fetches the user's information, possibly through a provider for the current user. You also need an evaluator to extract the user information you need, such as name and address. If someone asks the bot to order a pizza via direct message, the system will first obtain the user's address and then perform the order-pizza action.
These three parts, providers, evaluators, and actions, are the basis for building complex applications. Any operation like filling out a form on a website can be built from these three elements. We currently use this approach to handle tasks such as automatic LP management. It is like building any website, mostly calling APIs, and developers should be able to get started easily.
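Below is a minimal sketch of what an action of this kind might look like. The simplified Action shape (name, description, validate, handler) mirrors the provider/evaluator/action pattern described above, but the interfaces and the orderPizza helper are illustrative rather than copied from the Eliza codebase.

```typescript
// Minimal, illustrative "order pizza" action in the provider/evaluator/action style.
// Interfaces are simplified stand-ins, not the exact Eliza types.
interface Memory { userId: string; text: string }
interface State { address?: string; name?: string } // filled in by providers/evaluators

interface Action {
  name: string;
  description: string;
  validate: (message: Memory) => Promise<boolean>;
  handler: (message: Memory, state: State) => Promise<string>;
}

// Placeholder for a call to a real ordering API.
async function orderPizza(name: string, address: string): Promise<string> {
  return `order placed for ${name} at ${address}`;
}

export const orderPizzaAction: Action = {
  name: "ORDER_PIZZA",
  description: "Order a pizza for the user once their name and address are known.",
  // Only trigger when the user actually asks for pizza.
  validate: async (message) => /pizza/i.test(message.text),
  // The evaluator has extracted name/address into state; the handler acts on it.
  handler: async (_message, state) => {
    if (!state.name || !state.address) {
      return "I still need your name and delivery address.";
    }
    return orderPizza(state.name, state.address);
  },
};
```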
For non-developers, I recommend you to choose a hosted platform and select the features or plugins you need without diving into the code. You can of course roll your own if you want.
Q9: How long does it take a developer to build these features or stitch these components together from scratch? How does that compare with the time cost of using the Eliza platform?
Shaw: It depends on what you want to do. If you just read the codebase and understand the abstractions, you can probably build a very specific piece of functionality in a very short time; I could probably build an agent that does what you want in a week. But if you want memory, information extraction, or a framework that supports those capabilities, it's a bit more complicated.
For example, we made a pizza-ordering demo: it took me 5 hours and another person 2 hours, so basically a day in total. If I had built it without the framework, it would probably have taken several weeks. Even though AI now accelerates everything, including writing code, the framework already gives you a great deal out of the box.
It's like React: so many apps are built on React. You can throw a website together really quickly, but as the project's complexity grows it becomes very hard. So if you're doing something simple, you just need an LLM, a blockchain, and a loop, and it can be done in a few days. But we support all models, it can run completely locally, it supports transcription (you can send an audio file to Discord and it will transcribe it), you can upload a PDF and chat with it; all of this is already built in, and most people don't even use 80% of the features.
So if you just need a simple chat interface, you can definitely build it yourself. But if you want a full-featured agent that can do a lot of things, you'll need a framework that already takes care of most of it. I can tell you it took me many months to build this.
Q10: Compared with other agent platforms that generally emphasize rapid design, deployment, and no-code operation, is Eliza better suited to building customized agents with unique functionality?
Shaw: If you take the entire Arc system, or all of ZerePy, or the entire GAME framework, the number of lines of code is much smaller than Eliza's, because Eliza includes so many different functions. Even the plugin layer alone contains a lot of core functionality, such as speech-to-text, text-to-speech, transcription, PDF processing, and image processing, all built in. It is a bit too complex for some people, but it makes a lot of things possible, which is why so many people use it.
I've seen some agents that are just Eliza with a few extra features, like the Pump.fun plugin we provide, or Eliza with image and video generation, which are actually built in. I'd like to see more people try it out and see what happens if you enable all the plugins at once.
My goal is that eventually these agents will be able to write new plugins from scratch themselves, because there will be enough similar existing plugins to serve as examples, and all of that will be trained into the models. Once you hit 100 stars and a certain codebase threshold, companies like OpenAI and Anthropic will scrape this data and use it for training. This is part of our loop; eventually the agent will be able to write new plugins on its own.
Q11: If Eliza becomes the most powerful codebase (not in terms of wealth, but in terms of offering the most powerful functionality to any agent developer), does that mean Eliza will be able to attract developers not only from the crypto space but also from traditional AI and machine learning backgrounds?
Shaw: If there is a breakthrough, yes. Eliza itself is not a crypto project, apart from its many blockchain integrations (which are all plugins). I have noticed that trending on GitHub helped us attract people from the Web2 world, and many of them simply saw it as a very good framework for developing agents.
I personally really want people to accept this. Some people are biased against cryptocurrencies, but I think it's clear that in the future 99% of agents will be trading 99.9% of tokens. Cryptocurrency is the native money of agents: try giving an agent a PayPal account, it's really hard. With crypto we can just open a wallet and generate a private key, and it's easy.
We do attract some non-crypto people, especially those who don't actively trade crypto; they think crypto is fine but are more interested in the agent application itself.
Although some people are biased against crypto projects, they are willing to accept it as long as it brings real value. Many people see only hype and empty talk and feel disappointed, but when they see that our project has real research and engineering behind it, they gradually change their views. I hope to attract more of them, and we have indeed made progress; this is a huge differentiating advantage.
Part.4 The vision of open source AGI and the future of AI Agents
Q12: In the future, how will you compete with OpenAI and the traditional AI labs? Is the differentiator a swarm of agents built on Eliza working together, or is the comparison fundamentally meaningless?
Shaw: That's a great question. First of all, when you start Eliza, it defaults to a fine-tuned Llama model, the Hermes model trained by Nous Research. I really like what they do. One of the people who helped launch the God and Satan bots, and some of the other bots, is Ro Burito, who is both a member of Nous Research and an agent developer in our community. So we probably could have trained a model ourselves, but we have partners like them, and rather than compete with them I would rather work together and complement each other's strengths.
Many people don't realize how easy it is to fine-tune a model; it only takes one command. If I go to Together, I can start fine-tuning a Llama model in five minutes by typing a command and pointing it at a JSON file. Nous's advantage is not their fine-tuning method but their data. They collect and carefully curate data; that is their core competency. Collecting, preparing, and cleaning data is very tedious work, and they focus on data that is different from OpenAI's. That is also our market differentiation.
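As a rough illustration of why the data matters more than the command (the record format below is a common chat-style fine-tuning schema, not a Together-specific recipe), a supervised fine-tuning job typically consumes a file of examples like these, and curating them well is the hard part:

```typescript
// Illustrative chat-format fine-tuning records; the exact schema varies by
// provider. In practice each record is serialized as one line of a JSONL file.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };
type TrainingExample = { messages: ChatMessage[] };

const dataset: TrainingExample[] = [
  {
    messages: [
      { role: "system", content: "You are a candid crypto-native agent." },
      { role: "user", content: "Should I ape into every new token I see?" },
      { role: "assistant", content: "No. Check liquidity, holders, and the team before sizing any position." },
    ],
  },
  // ...thousands more carefully curated examples make up the real dataset.
];

// Serialize to JSONL for whatever fine-tuning service you use.
const jsonl = dataset.map((ex) => JSON.stringify(ex)).join("\n");
console.log(jsonl);
```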
We chose their model because it does not refuse as many requests as OpenAI's models do. We have a phrase, "OpenAI models are neutered," and basically every agent developer feels that OpenAI's models are limited. Our market differentiation is that OpenAI will never let you make an agent that connects to Twitter, and they will never let you make your assistant truly personalized or interesting. They are not bold enough, they are not cool enough, and they are under a lot of pressure.
If you go to ChatGPT right now and ask it a question about the 2024 election, it will probably give you a long answer, but for a long time it would just tell you "Biden" directly, because that's how it was trained. I'm not saying I favor one side or the other, but I think it's silly for a leading model to make such a simple political choice. OpenAI is so cautious that, to a large extent, they do things without actually letting users get what they want.
So the real competition is in how you collect data and where the data comes from, and you don't see OpenAI doing this. If you look at Sam Altman's tweets, he said users really want an "adult mode": not NSFW content, but being treated as the adults in the room, that is, don't treat me like a child who can't be shown certain information. Moreover, because OpenAI is centralized, they face a lot of political pressure from governments. I think the open source movement is free of that constraint. What matters more is having diversity, a variety of different models that meet users' real needs and give them what they want rather than controlling their behavior. That approach will win in the end. OpenAI has huge funding, a very high valuation, and a lot of talent, but decentralized AI has the conditions for rapid development: community support, incentive mechanisms, and capital, without having to wait on hardware like GPUs.
I think the path to AGI is not one or the other but a combination of approaches. If the world's largest companies are already doing something, does competing with them really accelerate development? I think AI agents are the stepchild of the AI world, because they are not as easy to measure with benchmarks as traditional AI; it is hard for PhD researchers to show through quantitative metrics that one agent is better than another. AI agents are more about solid engineering and creative problem solving, which is also what makes many of the developers who have committed to this field unique.
Q13: What does open source AGI (artificial general intelligence) mean? Is it a swarm of agents collaborating autonomously that eventually produces a superintelligent whole, or are there other paths?
Shaw: If you have millions of developers using mostly open source models and tools, they will compete with each other to optimize the capabilities of the entire system. I think AGI will actually take the shape of the Internet, composed of a huge number of agents doing all kinds of things. It doesn't need to be a unified system to count as AGI, though that depends on how you define AGI.
Most people think AGI means intelligence that can do everything a human can. In fact, an agent does not need to hold all knowledge in advance; it can get the information it needs by calling APIs or operating a computer. If it can operate a computer the way humans do, has a strong memory system and rich capabilities, and is eventually combined with physical robots, AGI will become self-evident.
However, in the AI field we often say AGI is whatever computers cannot do yet, and that goalpost keeps moving as new models arrive. There is also the concept of ASI, a model powerful enough to manipulate the world. If such a thing is built only by large companies like Microsoft, it may have the potential for superintelligence. But if there are many different players, each open-sourcing their own models and constantly fine-tuning and optimizing them, the result will be a multi-agent system like the Internet, with agents interacting with each other and each having its own expertise, and that system will look like artificial superintelligence.
This is a huge system, or even a collection of systems. If one agent wants to attack the others, it will be very difficult, because no single agent is much more powerful than the rest. As technology advances, we are also hitting an energy limit: models cannot be scaled infinitely, otherwise they will need nuclear reactors to run. Microsoft is already investing in nuclear power plants, and all the companies are improving their models step by step.
OpenAI's newest model is very close to human intelligence, but other companies are actively developing similar models, and many people are researching and implementing the latest techniques. Even if OpenAI's model approaches AGI, its huge user base forces it to compromise on quality and move toward smaller models to reduce the GPU burden.
Overall, I think competition between companies, increasingly efficient models, and open source enabling more developers to participate are all driving the emergence of super AI. I hope that in the future I can easily find a bot on Twitter that does a given thing and choose the one that suits me best.
Q14: What role will tokens and crypto markets play in realizing these future innovations and visions?
Shaw: If we look at it from the perspective of intelligence, the market itself is a kind of intelligence. It can discover opportunities, allocate capital, promote competition, and ultimately optimize toward the best solution. That competition may continue until a complete, mature system takes shape. I think market intelligence and competition play an important role here.
The role of cryptocurrency is obvious. It has two key functions:
First, it provides a crowdfunding mechanism that no longer relies on the old Silicon Valley venture capital model and is based on what people really want rather than on value defined by a few venture capitalists. Venture capitalists often have deep insight, but their investment logic can also be confined to a particular geographic or cultural circle, missing the potential of more decentralized capital allocation.
Second, cryptocurrencies can capture people's emotional needs precisely. If a product that meets that need can actually be delivered, users will be very excited. The main problem in the crypto space, however, is that many projects hit the emotional note but ultimately fail to deliver on their promises. If these projects could actually achieve their goals, such as building a bot that provides excellent market insight, that would be extremely valuable.
In addition, the auditability of open source lets anyone with the ability verify a project's authenticity. This transparency can guide capital more efficiently toward opportunities with real potential. A big problem in today's world is that most people cannot invest in companies like OpenAI unless they go public, and by then the returns are relatively limited. Cryptocurrency, by contrast, gives people the opportunity to invest directly in a project's early stages and so take part in the future and the chance at generational wealth.
To make these mechanisms work better, we need to get better at preventing fraud. I believe open source and building in public can greatly improve the market's capital allocation efficiency and accelerate the field. At the same time, future agents will trade tokens with each other, and almost everything can be tokenized: trust, capability, money, and so on. In short, cryptocurrency provides a new way to allocate capital and accelerates the realization of innovation and the future vision.
Part.5 Discussion on Token Economy and Value Capture Mechanism
Q15: Is the ai16z platform moving fast enough in implementing a token-economic value capture mechanism? How will you deal with potential competitive threats?
Shaw: The problem with open source blockchains is that the incentive to fork is very high, because holding the network token is a direct economic interest. If we launch an L1, people may fork our L1, or feel that they can't really work with us because we are an L1.
Tribalism is strong in the crypto industry, largely because of this either-or competition rather than inclusive collaboration.
Realistically, our token economic model needs to keep evolving and find new ways to generate revenue. The Launchpad is not the final token economic model but an initial version. We have attracted a lot of attention, and many partners want to launch on our platform; they just need a hosted way to start their agent projects, and we can give them plugins and ecosystem capabilities to use directly.
We plan to open-source the Launchpad, but we can foresee that once we do, others will copy it. Projects that rely solely on launch platforms will need to rethink their long-term strategies, because a strategy of just spinning up characters, burning tokens, and buying back may not be sustainable.
In the long run, we want to invest in technologies that expand the value of the overall ecosystem. In the short term, we need to meet market demand and ship the Launchpad. But three months from now, launch platforms may become commonplace, many projects will fail, and only a few will keep creating value.
The focus going forward is not simply launching agents, but investing in projects that are clearly creating value. We have already started investing and acquiring, and those projects have their own token economic models, such as buying back tokens with revenue and using them for further investment. We are also looking for new ways to increase the token's value, such as building long-term yield pressure, rather than relying on simple mechanisms like charging network fees or burning through token pairings.
My goal is to push us beyond these simple models toward a bigger vision. We want to build something like a production studio, where people can submit projects to the DAO and its characters, popular projects are vetted, and then we invest. I think the current token economy plan can last about six months, and we are already actively thinking about the next model.
Q16: If ai16z's token economic model works and the token has real value, it will not only provide more financial support for the project's development platform; will agents also further promote the development of the open source framework, bringing growth to the ecosystem in an indirect way?
Shaw: I think about this a lot. In AI there's a tool called "Fume," an agent that writes its own code and continuously improves it much faster than humans can. Such agents write code for every possible use case and submit pull requests (PRs), and other agents review and test them. This could happen within a few years, maybe even less than two. If we keep at it, we will reach a kind of escape velocity where the system accelerates exponentially, and we could eventually reach the stage of AGI (artificial general intelligence), where these systems can build themselves entirely.
We should do everything we can to accelerate toward that future. I've seen projects like Reality Spiral, with agents submitting PRs to GitHub, and that trend has already begun.
If we can let the token accumulate value while investing in our ecosystem and driving its growth, it forms a positive cycle: rising token value drives the ecosystem's development, and the ecosystem in turn increases the token's value. Eventually the system becomes self-sustaining.
However, there is still a lot of practical work to do. The key is to make sure the token accumulates value in the intended way and meets users' needs. For example, the Launchpad was developed from user demand, to help people launch what they are already building.
In the future we could even let agents create specific projects directly, have multiple agents compete on the implementation, and let the community vote to select the best result. This model could quickly become extremely complex and powerful, and our goal is to accelerate toward that stage.
Part.6 Exploring cross-chain development and blockchain selection
Q17: Which blockchain do you think AI agents should be developed on, Solana or Base?
Shaw: From a user perspective, blockchains have become "normalized," and many people don't even know which chain their tokens are on. Although the EVM and SVM models are very different in programming and functionality, to users they are basically the same: users just check their wallets to see whether they have funds, or to make a token swap.
For the future of agents, I hope they can blur the differences between chains, and tokens will certainly be bridged frequently between the two. Our token is currently an SPL Token-2022 token with minting capabilities, so there are some technical challenges to going cross-chain, but we are working through them.
I actually like the Base team; they have been very supportive of us, so I don't have a particular preference. I chose Solana because that is where the users are. As product people, we should set aside personal preferences, focus on user needs, and provide the services they need in the place they like.
Currently you can deploy an agent on Base or on StarkNet; the choice is completely open. The split between these ecosystems comes more from the price of their respective tokens, whether there is a token at all, and the existing developer community and infrastructure. The main reason we chose Solana is that projects like DAOs.fun and the users are on that chain. In general I don't have a strong platform preference; the best strategy is to cover all platforms, observe where the users are, and serve them there.
Part.7 The transformation of slop bots into utilities
Q18: Is there a natural transition period between the current situation, where slop agents with no practical use are gradually losing their market, and the future emergence of high-performance agents that can truly carry out efficient, practical tasks?
Shaw: I think we're going to enter a phase very soon where agents do amazing things, and if people can make money from an agent, that agent will be very successful.
As for whether slop agents will disappear, I think they may not disappear completely. The current situation is that platforms such as X realize they cannot eliminate these agents by force, nor can they judge through manual review whether an account is a bot or a human, especially when these agents get very close to passing the Turing test. So the platforms' solution is to use algorithms to penalize the accounts that cause the most trouble.
From a developer's perspective, if an agent can't attract users, it won't have any influence. My own approach is simply to block the agents that don't add anything. If an agent wasn't specifically called upon and doesn't provide valuable content, we don't want that content on the platform.
Agents in the DeFi space are not yet fully developed, although teams are working hard on research and development, and I believe that in the next month we will see a lot of new progress. We also haven't yet seen agents that can find users for their products. Many agents today are just used for inefficient promotion, but imagine an agent that finds exactly the solution you need: you wouldn't block it, you would be grateful for it, as if you were using a new kind of Google.
Right now we're at the "dogs playing poker" stage. At first, if you walk into a room and see four dogs playing poker, you think it's incredible, but after a few weeks you start asking, "How good are these dogs? Are they actually making money, or just holding the cards?" When the novelty wears off, people will start paying attention to who the best poker dog is, or who has the best poker algorithm.
So while influencer agents may always exist, we will see more useful agents in the future, just as in Web2 McDonald's might launch a "Grimace agent," or some influencers whose DMs get flooded after they publish content might have to build a reply bot to maintain a virtual relationship with their fans.
Q19: Currently, details such as an agent's architecture, model, and hosting location are difficult to obtain, and users can only rely on trusting the developer. How can this be made visible and verifiable?
Shaw: I believe someone will hear this need and build that platform, and I agree there is an opportunity here. TEEs (trusted execution environments) have been around for a long time, and I have talked to many developers; before agents appeared, it was a fairly obscure concept. Agents made people start asking: if it is an autonomous agent, how do you prevent someone from simply taking the private key and stealing the money? So people started paying attention to TEEs, and I think Phala did a good job because they addressed an obvious need: a verifiable remote attestation system. This is also why we see the rise of products like ZKML (zero-knowledge machine learning), which give users peace of mind by providing the necessary trust mechanism.
We will see a lot of products dealing with this uncertainty, and the uncertainty itself is a great product opportunity. If someone builds a directory that certifies these agents, it will succeed. Just as decentralized exchanges have trust scores, we could see similar agent verification systems. Open source will become an important incentive: if the code is relatively simple and the problem is trust, why not open-source it so everyone can check it? This may produce a new group of programmer influencers who evaluate the legitimacy of these agents.
I think that in five years you will be able to look up information about any agent at any time, and there will probably be a website dedicated to it. If not, someone should start building that platform this year.
Original video link:
https://www.youtube.com/watch?v=0WQAmmJJ34c
*All content on the Coinspire platform is for reference only and does not constitute an offer or recommendation of any investment strategy. Any personal decision made based on the content of this article is the responsibility of the investor. Coinspire is not responsible for any gains or losses arising therefrom. Investment is risky, so make decisions carefully!