Why MCP Is Mostly Bullshit

March 23, 2025

If you follow the AI space closely, you’ve surely noticed the surge of interest in MCP (Model Context Protocol). Almost all of my Twitter feed is now filled with people talking about MCP, wondering what MCP is, or asking why MCP is trending. That alone should give you pause. It’s classic hype: people are talking about MCP because people are talking about MCP. And now I have to write a blog post about it. Smh.

Anyway, what the hell is MCP?

To put it simply, it’s a standard way to build tools that provide additional context to AI assistants. Anthropic launched it back in November 2024, so it’s not even new by AI-industry standards.

In a sense, it’s a replacement for libraries like LangChain and LlamaIndex, which were built to make it easy to integrate LLMs with vector stores, filesystems, documents, and other tools.

What Anthropic was basically saying is that instead of relying on LangChain and LlamaIndex, you can simply write your own specific integration and make it accessible to your AI assistant as a tool.

Anthropic essentially wants everything that can provide more context to LLM assistants to be just another API the LLM can query. This is a result of the generalization of tool use. Your vector database can be a tool. Access to files can be a tool. Tool use is the underlying abstraction that unifies everything when it comes to building semi-autonomous AI assistants (what people often call AI agents).
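The idea can be sketched in a few lines of plain Python. This is illustrative only, not the MCP API: the point is that a vector-store lookup and filesystem access are both just named functions the host dispatches to with JSON-style arguments.

```python
# Illustrative sketch of tool use as the unifying abstraction (not the MCP API).
# A vector-store lookup and filesystem access are both just named functions
# the model can ask the host to call with keyword arguments.

from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Register a function as a tool the assistant can call."""
    def decorator(fn: Callable[..., str]) -> Callable[..., str]:
        TOOLS[name] = fn
        return fn
    return decorator

@tool("vector-search")
def vector_search(query: str) -> str:
    # In a real app this would query Qdrant or another vector store.
    return f"top results for {query!r}"

@tool("read-file")
def read_file(path: str) -> str:
    # In a real app this would read from disk (with access controls).
    return f"contents of {path}"

def dispatch(tool_name: str, arguments: dict) -> str:
    """The host loop: the model picks a tool name and arguments; we run it."""
    return TOOLS[tool_name](**arguments)

# Both tools are invoked the same way, which is the whole point:
print(dispatch("vector-search", {"query": "MCP"}))
print(dispatch("read-file", {"path": "notes.txt"}))
```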

Here, for example, is the MCP server for the Qdrant vector database:

import json
import logging
from contextlib import asynccontextmanager
from typing import AsyncIterator, List

from mcp.server import Server
from mcp.server.fastmcp import Context, FastMCP

from mcp_server_qdrant.embeddings.factory import create_embedding_provider
from mcp_server_qdrant.qdrant import Entry, Metadata, QdrantConnector
from mcp_server_qdrant.settings import (
    EmbeddingProviderSettings,
    QdrantSettings,
    ToolSettings,
)

logger = logging.getLogger(__name__)

@asynccontextmanager
async def server_lifespan(server: Server) -> AsyncIterator[dict]:  # noqa
    try:
        embedding_provider_settings = EmbeddingProviderSettings()
        embedding_provider = create_embedding_provider(embedding_provider_settings)
        logger.info(
            f"Using embedding provider {embedding_provider_settings.provider_type} with "
            f"model {embedding_provider_settings.model_name}"
        )

        qdrant_configuration = QdrantSettings()
        qdrant_connector = QdrantConnector(
            qdrant_configuration.location,
            qdrant_configuration.api_key,
            qdrant_configuration.collection_name,
            embedding_provider,
            qdrant_configuration.local_path,
        )
        logger.info(
            f"Connecting to Qdrant at {qdrant_configuration.get_qdrant_location()}"
        )

        yield {
            "embedding_provider": embedding_provider,
            "qdrant_connector": qdrant_connector,
        }
    except Exception as e:
        logger.error(e)
        raise e

mcp = FastMCP("mcp-server-qdrant", lifespan=server_lifespan)

tool_settings = ToolSettings()

@mcp.tool(name="qdrant-store", description=tool_settings.tool_store_description)
async def store(ctx: Context, information: str, metadata: Metadata = None) -> str:
    await ctx.debug(f"Storing information {information} in Qdrant")
    qdrant_connector: QdrantConnector = ctx.request_context.lifespan_context["qdrant_connector"]
    entry = Entry(content=information, metadata=metadata)
    await qdrant_connector.store(entry)
    return f"Remembered: {information}"

@mcp.tool(name="qdrant-find", description=tool_settings.tool_find_description)
async def find(ctx: Context, query: str) -> List[str]:
    await ctx.debug(f"Finding results for query {query}")
    qdrant_connector: QdrantConnector = ctx.request_context.lifespan_context["qdrant_connector"]
    entries = await qdrant_connector.search(query)
    if not entries:
        return [f"No information found for the query '{query}'"]
    content = [f"Results for the query '{query}'"]
    for entry in entries:
        entry_metadata = json.dumps(entry.metadata) if entry.metadata else ""
        content.append(f"<entry><content>{entry.content}</content><metadata>{entry_metadata}</metadata></entry>")
    return content

This is nice in a way. For starters, not having to rely on LangChain is a good thing. LangChain is a bloated hellscape. I can’t believe people are still using it and even listing it as a requirement for some jobs. Really tells you that a lot of recruiters don’t know what the fuck they are doing, and the world is running on nonsense.

I have to remind myself of this famous quote more often:

"Do you not know, my son, with how little wisdom the world is governed?" — Axel Oxenstierna

What’s interesting about MCP is that it provides a way to decouple your tools from the core of your LLM-based application.

With MCP, it’s easy to just add more tools and, in turn, potentially more interesting context to your LLM-based applications.
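To see why a protocol boundary decouples things, here’s a toy sketch of a message-based tool server in plain Python. MCP itself speaks JSON-RPC with methods like tools/list and tools/call; this sketch skips the real envelope, error handling, and transport, and reuses the Qdrant tool names from above purely as labels.

```python
# Toy sketch of decoupling via a message boundary (NOT the real MCP wire
# format). The host only ever exchanges JSON strings, so the tools can live
# in another process, be swapped out, or be shared between assistants.

import json

def handle_message(raw: str) -> str:
    """Tool-server side: parse a request, run the named tool, return JSON."""
    request = json.loads(raw)
    if request["method"] == "tools/list":
        result = ["qdrant-store", "qdrant-find"]
    elif request["method"] == "tools/call":
        result = f"ran {request['params']['name']}"
    else:
        result = "unknown method"
    return json.dumps({"id": request["id"], "result": result})

# Host side: discovering tools requires no import of the tool's code.
response = json.loads(handle_message(json.dumps({"id": 1, "method": "tools/list"})))
print(response["result"])
```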

This doesn’t mean the agent will perform any better, though. You still need to make sure your agent chooses the correct tool each time, and you still have to handle hallucinations.

Is MCP the only way to decouple AI agents from their tools? Is a client-server architecture absolutely necessary? No. Plenty of libraries handle tool creation and tool use differently (LangChain, LangGraph, SmolAgents, LlamaIndex…).
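For contrast, here’s a sketch of the in-process approach those libraries broadly take: hand the agent plain Python functions and derive the tool descriptions from their signatures. The Agent class and all names here are illustrative, not any particular library’s real API.

```python
# Hypothetical in-process alternative: bind plain Python functions directly,
# no client-server hop involved. Names are illustrative, not a real library API.

import inspect
from typing import Callable

class Agent:
    def __init__(self, tools: list[Callable]):
        # The tool "schema" is derived from the function itself: no protocol needed.
        self.tools = {fn.__name__: fn for fn in tools}

    def describe_tools(self) -> dict[str, str]:
        """What the LLM would see in its prompt: tool name -> signature."""
        return {name: str(inspect.signature(fn)) for name, fn in self.tools.items()}

    def call(self, name: str, **kwargs) -> str:
        return self.tools[name](**kwargs)

def search_docs(query: str) -> str:
    return f"docs matching {query!r}"

agent = Agent(tools=[search_docs])
print(agent.describe_tools())
print(agent.call("search_docs", query="tool use"))
```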

So why are people so excited about MCP, and so suddenly? I think you know the answer by now: hype. Mostly hype, with a dash of software engineers’ classic fascination with architecture. You just say Model Context Protocol, server, client, and software engineers get excited because it’s a new approach: it sounds fancy, it sounds serious.

The software engineering world is full of people who just follow trends or are obsessed with architecture over solving problems. Joel Spolsky famously called them “architecture astronauts.”

I do agree with Andrej Karpathy here.

I am not even saying MCP is bad tech or useless. It’s just one way among others to provide context to AI assistants/agents.

So what is it with MCP? It’s just the latest hype-infused hysteria of the architecture astronauts. And please, “don’t let architecture astronauts scare you.”