Original author: Mohamed ElSeidy
Original translation: TechFlow
Introduction
Yesterday, Solana’s AI-related token $Dark was launched on Binance Alpha, and its market capitalization has since reached roughly 40 million US dollars.
In the latest crypto AI narrative, $Dark is closely tied to MCP (Model Context Protocol), an area that Web2 technology companies such as Google have also been exploring recently.
At present, however, few articles clearly explain the concept of MCP and its impact on the narrative.
The following is an in-depth article about the MCP protocol by Alliance DAO researcher Mohamed ElSeidy. It explains the principles and positioning of MCP in accessible language and may help us quickly understand the latest narrative.
TechFlow compiled the full article.
In my years at Alliance, I have witnessed countless founders build their own proprietary tools and data integrations that are embedded into their AI agents and workflows. However, these algorithms, formalizations, and unique datasets are locked behind custom integrations and rarely used.
This is changing rapidly with the advent of the Model Context Protocol (MCP). MCP is an open protocol that standardizes how applications communicate with and provide context to large language models (LLMs). One of my favorite metaphors: “MCP is to AI applications what USB-C is to hardware”; it is standardized, plug-and-play, versatile, and transformative.
Why choose MCP?
Large language models (such as Claude, GPT, LLaMA, etc.) are very powerful, but they are limited by the information they can access at any given moment. This means they often have knowledge cutoffs, cannot browse the web independently, and cannot directly access your personal files or specialized tools without some form of integration.
In particular, developers have previously faced three main challenges when connecting LLMs to external data and tools:
Integration complexity: Building separate integrations for each platform (like Claude, ChatGPT, etc.) requires duplication of effort and maintaining multiple code bases.
Tool fragmentation: Each tool functionality (e.g., file access, API connection, etc.) requires its own dedicated integration code and permissions model.
Limited distribution: Proprietary tools are restricted to specific platforms, limiting their reach and impact.
MCP solves these problems by providing a standardized way for any LLM to securely access external tools and data sources through a common protocol. Now that we understand what MCP does, let’s look at what people are building with it.
What are people building with MCP?
The MCP ecosystem is currently experiencing a burst of innovation. Here are some recent examples I found on Twitter of developers showcasing their work:
AI-driven storyboards: An MCP integration that enables Claude to control GPT-4o to automatically generate complete storyboards in the Ghibli style without any human intervention.
ElevenLabs Voice Integration: An MCP server that gives Claude and Cursor access to the entire AI audio platform through simple text prompts. The integration is powerful enough to create a voice agent that can make outbound calls. This shows how MCP can extend current AI tools to the audio space.
Browser Automation with Playwright: An MCP server that enables AI agents to control web browsers without the need for screenshots or visual models. This creates new possibilities for web automation by enabling LLMs to directly control browser interactions in a standardized way.
Personal WhatsApp Integration: A server that connects to a personal WhatsApp account, allowing Claude to search messages and contacts, and send new messages.
Airbnb Search Tool: An Airbnb apartment search tool that demonstrates the simplicity of MCP and the ability to create useful applications that interact with web services.
Robot Control System: An MCP controller for a robot. This example bridges the gap between LLMs and physical hardware, demonstrating the potential of MCP in IoT and robotics applications.
Google Maps and local search: Connect Claude to Google Maps data to create a system that can find and recommend local businesses, such as coffee shops. This extension enables the AI assistant to provide location-based services.
Blockchain Integration: The Lyra MCP project brings MCP functionality to StoryProtocol and other web3 platforms. This allows interaction with blockchain data and smart contracts, opening up new possibilities for decentralized applications enhanced by AI.
What’s particularly striking about these examples is their diversity. In the short time since MCP launched, developers have created integrations spanning creative media production, communication platforms, hardware control, location services, and blockchain technology. These diverse applications follow the same standardized protocol, demonstrating the versatility of MCP and its potential to become a universal standard for AI tool integration.
If you want to see a comprehensive collection of MCP servers, you can visit the official MCP server repository on GitHub. Before using any MCP server, read the disclaimer carefully and be cautious about what you run and what permissions you grant.
Promise and Hype
As with any new technology, it’s worth asking: Is MCP truly transformative, or just another over-hyped tool that will eventually fade?
After watching many startups, I believe MCP represents a true turning point in the evolution of AI. Unlike many trends that promise revolution but only deliver incremental change, MCP is a productivity improvement that solves infrastructure problems that have held back the entire ecosystem.
What’s special about it is that it doesn’t try to replace or compete with existing AI models, but rather make them more useful by connecting them to the external tools and data they need.
Still, legitimate concerns about security and standardization remain. As with any protocol in its infancy, we may see growing pains as the community explores best practices around auditing, permissions, authentication, and server validation. Developers need to be able to verify the functionality of these MCP servers rather than trusting them blindly, especially as they proliferate. This article discusses some recent vulnerabilities exposed by blindly using an MCP server that has not been carefully audited, even when running it locally.
The future of AI is contextualization
The most powerful AI applications will no longer be standalone models, but ecosystems of specialized capabilities connected through standardized protocols like MCP. For startups, MCP represents an opportunity to build specialized components that fit into these growing ecosystems. It’s a chance to leverage your unique knowledge and capabilities while benefiting from the significant investment in the underlying models.
Looking ahead, we can expect MCP to become a fundamental component of AI infrastructure, just as HTTP is to the web. As the protocol matures and adoption grows, we will likely see a market for specialized MCP servers emerge, enabling AI systems to tap into nearly any capability or data source imaginable.
Has your startup tried implementing MCP? I’d love to hear about your experience in the comments. If you’ve built something interesting in this space, please reach out to us at @alliancedao and apply.
Appendix
For those interested in understanding how MCP actually works, the following appendix provides a technical breakdown of its architecture, workflow, and implementation.
Behind the scenes of MCP
Similar to how HTTP standardized the way the web accesses external data sources and information, MCP does the same for AI frameworks, creating a common language that enables different AI systems to communicate seamlessly. Let’s explore how it does this.
MCP Architecture and Process
The main architecture follows a client-server model with four key components working together:
MCP host: includes desktop AI applications such as Claude or ChatGPT, IDEs such as Cursor or VS Code, or other AI tools that need to access external data and functionality.
MCP Client: A protocol handler embedded in the host application that maintains a one-to-one connection with an MCP server.
MCP Server: A lightweight program that exposes specific functionality through a standardized protocol.
Data sources: These include files, databases, APIs, and services that can be securely accessed by the MCP server.
Now that we’ve discussed these components, let’s look at how they interact in a typical workflow:
User Interaction: The user asks questions or makes requests in the MCP host (for example, Claude Desktop).
LLM Analysis: The LLM analyzes the request and determines that external information or tools are needed to provide a complete response.
Tool Discovery: The MCP Client queries the connected MCP Server to discover available tools.
Tool Selection: The LLM decides which tools to use based on the request and the available functionality.
Permission Request: The host requests permission from the user to execute the selected tool to ensure transparency and security.
Tool Execution: After approval, the MCP client sends the request to the appropriate MCP server, which uses its specialized access to the data source to perform the operation.
Result Processing: The server returns the results to the client, which formats them for the LLM.
Response Generation: The LLM integrates the external information into a comprehensive response.
User presentation: Finally, the response is presented to the end user.
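Under the hood, the tool-discovery and tool-execution steps above travel as JSON-RPC 2.0 messages between the MCP client and server. The following stdlib-only sketch shows roughly what those payloads look like; the method names follow the MCP specification’s framing, while the tool name and result text are hypothetical:

```python
import json

# Step 3 (tool discovery): the MCP client asks the server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Steps 5-6 (tool execution): after user approval, the client invokes a tool.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "find_nearby_places",  # hypothetical tool name
        "arguments": {"query": "coffee shops near Central Park"},
    },
}

# The server's reply carries the result back for the LLM to incorporate.
call_response = {
    "jsonrpc": "2.0",
    "id": 2,  # matches the request id, per JSON-RPC 2.0
    "result": {"content": [{"type": "text", "text": "Joe's Coffee, 123 W 86th St"}]},
}

# What actually crosses the transport (stdio or HTTP) is the serialized form.
wire = json.dumps(call_request)
```

The key design point is that the host never calls the tool directly: every invocation is a message through the client, which is what lets one server serve any MCP-capable host.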
The power of this architecture is that each MCP server focuses on a specific domain but uses standardized communication protocols. This way, developers do not need to rebuild integrations for each platform, but only need to develop tools once to serve the entire AI ecosystem.
How to build your first MCP server
Now let’s see how to implement a simple MCP server in a few lines of code using the MCP SDK.
In this simple example, we want to extend Claude Desktop’s capabilities so it can answer questions like “What coffee shops are near Central Park?” using information from Google Maps. You could easily extend this functionality to fetch reviews or ratings as well. But for now, we’ll focus on the MCP tool find_nearby_places, which will allow Claude to get this information directly from Google Maps and present the results conversationally.
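A stdlib-only sketch of what the core of such a tool might look like is shown below. The endpoint URL follows the Google Places Text Search API; in a real MCP server this function would be registered as a tool via the official MCP SDK, and the injectable fetch function is an assumption made here so the sketch can run without network access:

```python
import json
import urllib.parse
import urllib.request

# Google Places Text Search endpoint (requires an API key).
PLACES_URL = "https://maps.googleapis.com/maps/api/place/textsearch/json"

def find_nearby_places(query: str, api_key: str, fetch=None) -> str:
    """Search Google Maps for places matching `query` and return a compact
    text summary that the LLM can reason over and present conversationally.

    `fetch` is injectable for testing; by default it performs a real HTTP GET.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)

    url = PLACES_URL + "?" + urllib.parse.urlencode({"query": query, "key": api_key})
    data = fetch(url)

    lines = []
    for place in data.get("results", [])[:5]:  # keep only the top five results
        name = place.get("name", "unknown")
        addr = place.get("formatted_address", "no address")
        rating = place.get("rating", "n/a")
        lines.append(f"{name} ({addr}), rating: {rating}")
    return "\n".join(lines) if lines else "No places found."
```

In a real server, this function would be exposed to the host with the SDK’s tool-registration mechanism so that Claude can discover and invoke it through the standard protocol.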
As you can see, the code is very simple. It converts the query into a Google Maps API search and returns the top results in a structured format, so the information can be passed back to the LLM for further decision-making.
Now we need to make this tool known to Claude Desktop, so we register it in its configuration file as follows:
macOS path: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows path: %APPDATA%\Claude\claude_desktop_config.json
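A minimal entry in that file might look like the following; the server name and script path are placeholders, and the mcpServers key follows the format documented for Claude Desktop:

```json
{
  "mcpServers": {
    "google-maps-search": {
      "command": "python",
      "args": ["/path/to/maps_server.py"],
      "env": {
        "GOOGLE_MAPS_API_KEY": "YOUR_API_KEY"
      }
    }
  }
}
```

On restart, Claude Desktop launches the listed command as an MCP server over stdio and makes its tools available in conversation.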
That’s it, you’re done! You have now successfully extended Claude’s functionality to look up locations on Google Maps in real time.