Agentic AI Tech Stack · March 30, 2026 · Aparna Sinha

MCP is the Fastest-Growing Protocol in AI History

Despite reports of its untimely death, MCP has grown 5,000% in 16 months with over 100 million monthly SDK downloads. Here's why the protocol is thriving and what it means for enterprise AI.



In February, Eric Holmes published “MCP is Dead. Long Live the CLI.” and the tech community on Twitter piled on. OpenClaw didn’t use MCP. CLIs and Skills, praised for being context-efficient, composable, and debuggable, were supposed to replace it.

But as of April 2026, MCP is looking very much alive. It’s the fastest-growing open-source AI project in history, up 5,000% in 16 months. Here’s the current state:

  • Over 100 million monthly SDK downloads and over 10,000 active servers
  • First-class client support across ChatGPT, Claude, Gemini, Cursor, Microsoft Copilot, and VS Code
  • Anthropic now has 75+ connectors powered by MCP and launched Tool Search for production-scale deployments
  • OpenAI shipped dynamic tool search for MCP in GPT-5.4, reducing token overhead by 47%
  • Google announced fully managed, remote MCP servers across Google Cloud services
  • The Agentic AI Foundation (co-founded by Anthropic, OpenAI, and Block, with platinum members AWS, Google, Microsoft, Bloomberg, and Cloudflare) now stewards MCP under the Linux Foundation
  • The MCP Dev Summit in NYC (April 2-3), featuring 95+ sessions from Anthropic, Datadog, Hugging Face, Microsoft, and others, is oversubscribed

So what is going on here?

CLIs Are Great — Especially for Autonomous Agents

I love CLIs. Most developers do. LLMs, trained on CLI docs, are expert at using ‘grep’, ‘awk’, and other Bash commands. Chaining tools together is a requirement for AI workflows, and with CLIs an LLM can write a long string of commands that executes in a single leap. It’s elegant. CLIs are also self-documenting, so LLMs can discover features on the fly (using help/man). And CLIs can be more context-efficient than MCPs, describing tools in fewer tokens than large MCP schemas.

But what is elegant for individual developers is not always best for teams, enterprises, and the broad ecosystem of builders who are increasingly non-developers.

Agent Autonomy Needs Robust Controls

The “MCP is dead” thesis resonates when comparing CLIs to local MCPs, which are clunky and limited. But compared to remote or gateway-managed MCPs, the security, observability, and ease-of-use benefits are significant.

MintMCP founder Jiquan Ngiam uses CLIs for side projects, but MCPs at work to run background agents, each with scoped access. The tradeoffs he noted:

  • Access control: CLIs are hard to scope per session. MCPs have built-in allowedTools per session.
  • Auth: CLIs require on-device tokens and terminal commands. MCPs offer OAuth with single-click UIs.
  • Credential UX: CLIs are painful for non-developers. MCPs connect in one click.
  • Observability: CLIs produce ad-hoc output. MCP servers enable standardized OpenTelemetry metrics across teams.
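The access-control tradeoff above can be sketched in a few lines. This is an illustrative Python sketch, not the MCP SDK: a per-session allowlist mirroring the allowedTools idea, with tool names invented for the example.

```python
# Illustrative sketch (not the real MCP SDK): per-session tool scoping.
# Each agent session gets an explicit allowlist; anything else is rejected.

class ToolScopeError(Exception):
    pass

class ScopedSession:
    def __init__(self, session_id, allowed_tools, tools):
        self.session_id = session_id
        self.allowed_tools = set(allowed_tools)
        self._tools = tools  # name -> callable

    def call_tool(self, name, **kwargs):
        if name not in self.allowed_tools:
            raise ToolScopeError(
                f"session {self.session_id}: tool '{name}' not in scope")
        return self._tools[name](**kwargs)

# Hypothetical tools for the example
tools = {
    "read_transcripts": lambda call_id: f"transcript for {call_id}",
    "send_email": lambda to, body: f"sent to {to}",
}

# A background agent scoped to read-only work: it can read transcripts,
# but an attempt to send email fails at the session boundary.
session = ScopedSession("bg-agent-1", allowed_tools=["read_transcripts"], tools=tools)
print(session.call_tool("read_transcripts", call_id="c-42"))
try:
    session.call_tool("send_email", to="x@example.com", body="hi")
except ToolScopeError as e:
    print(e)
```

With a CLI and shell access, the equivalent restriction has to be bolted on per tool; here it is a property of the session itself.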

Requirements CLIs Alone Don’t Solve

Charles Chen’s response, “MCP is Dead; Long Live MCP!”, was a balanced, widely read take that ranked highly on Hacker News. His core argument is that MCP isn’t replacing CLIs for senior engineers; it’s enabling organizational-scale agentic engineering with security, observability, and governance built in.

1. Security

In an enterprise, you need OAuth-based authentication with secrets managed server-side. You need to revoke access when someone leaves without worrying about API keys sitting in dotfiles. You need to know for sure that an agent running a background task can read call transcripts but can’t send emails. Remote MCPs accessed via gateways are a common pattern enterprises use to ensure secure agent tool use.

OpenClaw illustrates what happens when security is an afterthought. Researchers found over 135,000 OpenClaw instances exposed to the public internet, with 15,000 vulnerable to remote code execution. The root cause? A dangerous default: binding to 0.0.0.0:18789 on all network interfaces rather than localhost.
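The difference is a single bind address. A minimal Python sketch (using an ephemeral OS-assigned port rather than OpenClaw’s 18789) shows the two defaults side by side:

```python
# Sketch: the difference one bind address makes. Binding to 0.0.0.0 accepts
# connections on every network interface; 127.0.0.1 is reachable only from
# the same machine.
import socket

def make_listener(host, port=0):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, port))  # port=0: let the OS pick a free port
    s.listen()
    return s

exposed = make_listener("0.0.0.0")   # reachable from the network
local = make_listener("127.0.0.1")   # loopback only: the safer default

print(exposed.getsockname()[0])  # '0.0.0.0'
print(local.getsockname()[0])    # '127.0.0.1'
exposed.close()
local.close()
```

One character string in a default config is the gap between a local tool and 135,000 publicly scannable instances.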

Would MCP have prevented this? MCP’s remote server model means there is no listening port on the user’s machine to expose. The 135,000+ exposed instances are architecturally impossible with remote MCP. MCP’s OAuth 2.1 model keeps credentials server-side with scoped, short-lived tokens, not stored in local dotfiles waiting to be exfiltrated. MCP is not a silver bullet. It had 30+ CVEs of its own in early 2026, but its design makes secure deployment the path of least resistance.
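The scoped, short-lived token model is worth making concrete. A toy sketch, with hypothetical scope names; real deployments would use OAuth 2.1 access tokens, but the expiry-plus-scope check is the core idea:

```python
# Sketch: why a short-lived, scoped token limits blast radius compared to a
# long-lived API key in a dotfile. Scope names here are hypothetical.
import time

class ScopedToken:
    def __init__(self, scopes, ttl_seconds):
        self.scopes = set(scopes)
        self.expires_at = time.time() + ttl_seconds

    def permits(self, scope):
        # Both conditions must hold: the scope was granted AND the token
        # has not yet expired.
        return scope in self.scopes and time.time() < self.expires_at

token = ScopedToken(scopes=["transcripts:read"], ttl_seconds=900)  # 15 minutes
print(token.permits("transcripts:read"))  # True
print(token.permits("email:send"))        # False: never granted
```

A stolen dotfile key works everywhere, forever, until someone notices; a stolen token like this is useless outside its scope and dead within minutes.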

This isn’t just an enterprise problem. Anyone running AI agents needs security. The individual developer vibing in their terminal is one malicious dependency away from a compromised environment. The “agent autonomy at all costs” mindset can ironically slow AI adoption.

2. Observability

When an agent uses a CLI, what gets logged? Whatever the agent decides to capture. When an agent uses an MCP server, you get structured, standardized telemetry: what was requested, what was executed, what was returned, and how long it took.
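A rough sketch of that difference: a gateway-style wrapper that emits one uniform, structured record for every tool call, whatever the tool does. Field names are illustrative; a real deployment would emit OpenTelemetry spans rather than printed JSON.

```python
# Sketch: structured telemetry around every tool call. The same record shape
# applies to every tool, which is what makes the logs aggregatable.
import json
import time

def instrumented_call(tool_name, fn, **kwargs):
    start = time.monotonic()
    result = fn(**kwargs)
    record = {
        "tool": tool_name,
        "arguments": kwargs,
        "result_chars": len(str(result)),
        "duration_ms": round((time.monotonic() - start) * 1000, 2),
    }
    print(json.dumps(record))  # in practice: ship to an OpenTelemetry pipeline
    return result

instrumented_call("read_transcripts", lambda call_id: "transcript text", call_id="c-42")
```

The agent can't opt out of this logging: the wrapper sits between the model and the tool, so every call leaves the same audit trail.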

The 2026 MCP roadmap makes this explicit: end-to-end audit trails enterprises can feed into existing logging and compliance pipelines. No parallel observability stack required. For regulated industries, and many of the large US banks and financial institutions I speak to, this is table stakes.

3. Governance

Who can use which tools? What versions are deployed? How do you roll out an upgrade across 200 agents? How do you ensure consistent behavior?

Charles Chen jokes that the CLI-only argument is “cowboy vibe-coding culture.” Every developer installs their own version, configures their own flags, manages their own upgrades. As Ngiam points out, there’s no “consistent interface across CLIs: credential management, flag conventions, error handling all work differently.”

MCP servers provide centralized configuration, version control, and dynamic content delivery: what Chen calls “server-delivered SKILL.md” that auto-updates across all tools without manual synchronization. This is well-managed autonomy. It means your OpenClaw agent isn’t auto-installing skills and CLIs from the web on its own. The tradeoff of agent autonomy for security and governance is clear here.

Lastly, there is the practical consideration that many enterprise backends don’t have CLIs. They may have APIs (often REST, sometimes GraphQL), but wrapping them in a CLI means custom tooling for each system. MCP provides a standard protocol for wrapping these APIs once and making them available to any MCP-compatible client.
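A minimal sketch of the wrap-once pattern, with a hypothetical CRM endpoint: the REST surface is described declaratively, and one generic handler turns any such description into an agent-callable tool.

```python
# Sketch: wrap a REST API once as a declarative tool table. One generic
# handler serves every endpoint; adding a tool means adding a dict entry,
# not writing a new CLI. Endpoint names and fields are hypothetical.

REST_TOOLS = {
    "get_customer": {
        "method": "GET",
        "path": "/customers/{customer_id}",
        "description": "Fetch a customer record from the CRM",
    },
}

def build_request(tool_name, **params):
    spec = REST_TOOLS[tool_name]
    # Substitute path parameters into the template; a real wrapper would
    # also validate params and attach auth headers server-side.
    return {"method": spec["method"], "path": spec["path"].format(**params)}

print(build_request("get_customer", customer_id="c-42"))
# {'method': 'GET', 'path': '/customers/c-42'}
```

The same table can back every MCP-compatible client, which is the point: the wrapping work happens once, server-side, instead of once per system per team.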

What About Skills and Code Mode?

The MCP vs. CLI framing might be too narrow. In practice, there are many ways an AI agent can interact with external systems, and each has a distinct role:

Skills are process documentation for agents, encoding domain-specific know-how, team conventions, and multi-step workflows. They’re context-efficient and useful, but they have a real problem: agents often don’t pick them up. Vercel’s evaluation of their Next.js agent found that in the majority of eval cases, the agent never invoked the available skill at all.

Code Mode gives the agent an API spec and lets it write custom code in a sandbox for each request, which is maximally flexible but hard to audit, govern, or make repeatable.

The Path to Agent Autonomy

There’s a deeper tension beneath the MCP vs. CLI debate that’s worth naming: how much autonomy should we give agents?

The CLI-maximalist position implicitly argues for more autonomy. Give the agent shell access, let it figure out which commands to run, let it chain tools together however it sees fit. This is appealing as models get smarter. And they are getting smarter, fast.

But autonomy without guardrails produced the OpenClaw security crisis. The answer is to build the infrastructure that lets autonomy scale safely. MCP’s gateway pattern, scoped tokens, and structured telemetry are steps in that direction. As models improve and earn more trust, the guardrails can loosen. In a notable turn, OpenClaw’s founder Peter Steinberger shared that the next version will adopt MCP, replacing its proprietary messaging channel with the standardized protocol.

The enterprises I speak to (large US banks, financial institutions, technology companies) are already using MCP or actively adopting it. The ecosystem has grown to over 5,800 community and enterprise servers spanning databases, CRMs, cloud providers, and developer tools. I expect the future to look like smarter models that use MCP, Skills, CLIs, and Code Mode fluidly, picking the right approach for each sub-task, with the infrastructure to make that safe. We’re not there yet, but the trajectory is clear.

One hundred million monthly downloads make that clear.


Aparna Sinha is the host of the EnterpriseAligned AI podcast, where she speaks with enterprise leaders about AI adoption in practice. Upcoming episodes feature conversations with RBC and Coursera about their own MCP adoption journeys.

Also published on Substack