FEATURED

Model Context Protocol for Enterprise: Building Secure, Scalable MCP Infrastructure

Mubbashir Mustafa

15 min read

The Model Context Protocol has gone from an Anthropic side project to the most-adopted standard for connecting AI agents to enterprise tools in under 18 months. Search interest has grown 900% year over year. Over 5,800 official MCP servers are registered, with another 17,000 unofficial implementations in the wild. The Python and TypeScript SDKs have crossed 97 million downloads combined. And 28% of Fortune 500 companies have deployed MCP servers into their environments.

That last number is the one that should get your attention. When nearly a third of the largest enterprises in the world adopt a protocol, it's no longer a developer curiosity. It's enterprise infrastructure. And enterprise infrastructure requires enterprise-grade security, governance, and observability that the protocol itself was never designed to provide.

This is the gap that determines whether MCP becomes a transformative standard for enterprise AI or a compliance nightmare waiting to unfold.

What MCP Actually Is (And What It Is Not)

MCP is an open standard, originally developed by Anthropic and now backed by the Linux Foundation's Agentic AI Foundation alongside co-founders Block and OpenAI. It standardizes how AI agents connect to external tools and data sources. Before MCP, every agent-to-tool integration was a custom implementation. Each agent framework had its own way of defining tool schemas, handling authentication, and managing context. MCP replaces that fragmentation with a consistent client-server protocol.

An MCP server exposes capabilities: tools (functions the agent can call), resources (data the agent can read), and prompts (templates for common interactions). An MCP client, typically embedded in an AI agent or LLM application, discovers these capabilities and uses them through a standardized interface. The architecture is deliberately simple. A server runs alongside or within an enterprise application, exposes a set of capabilities, and an agent connects and uses them.

What MCP is not, at least in its current form, is an enterprise platform. The protocol handles connectivity. It does not handle authentication at enterprise scale. It does not enforce governance policies. It does not provide audit trails that satisfy SOC 2 or HIPAA requirements. It does not detect when an MCP server is exposing data it shouldn't, or when a developer has deployed an unauthorized server that bypasses security review.

These gaps are not design flaws. MCP was built to be lightweight and developer-friendly. Enterprise requirements are deliberately left to the infrastructure layer above the protocol. The problem is that most enterprises haven't built that layer yet.

The analogy is HTTP. HTTP provides transport. It says nothing about authentication, rate limiting, or data governance. Enterprises don't use raw HTTP for production APIs. They layer authentication (OAuth, JWT), authorization (RBAC, ABAC), observability (distributed tracing, structured logging), and governance (API gateways, management platforms) on top of it. MCP needs the same treatment. The protocol is the transport layer. Everything else is your responsibility.

The MCP Architecture in Detail

Understanding MCP's architecture is essential before discussing what enterprises need to build on top of it.

The protocol operates on a client-server model with three capability types. Tools are functions that agents can call. A Jira MCP server might expose tools like "search issues," "create issue," "update issue," and "add comment." Each tool has a defined schema (input parameters, output format, error handling) that agents discover at runtime. Resources are read-only data sources that agents can query. A documentation MCP server might expose company wikis, knowledge bases, or policy documents as resources. Prompts are templates that define common interaction patterns, guiding agents toward the intended usage of a server's capabilities.
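A tool is ultimately a name, a description, and a JSON Schema that agents discover at runtime. The sketch below shows what a tool definition for the hypothetical Jira server above might look like; the `search_issues_tool` structure and `validate_input` helper are illustrative, not part of any MCP SDK.

```python
# A hypothetical tool definition for the Jira-style MCP server described
# above: a name, a description, and a JSON Schema for its inputs.
search_issues_tool = {
    "name": "search_issues",
    "description": "Search Jira issues matching a JQL query.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "jql": {"type": "string", "description": "JQL query string"},
            "max_results": {"type": "integer", "default": 25},
        },
        "required": ["jql"],
    },
}

def validate_input(tool: dict, args: dict) -> bool:
    """Minimal check that required parameters are present.

    A real server would validate against the full JSON Schema.
    """
    required = tool["inputSchema"].get("required", [])
    return all(key in args for key in required)
```

Because the schema travels with the tool, any agent can construct valid requests without a custom integration.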

The communication protocol supports two transport modes. Stdio transport runs the MCP server as a local process, communicating through standard input and output. This is the original MCP transport mode, designed for developer tooling where the server runs on the same machine as the client. HTTP/SSE transport (Server-Sent Events over HTTP) supports remote servers, which is the mode relevant for enterprise deployments. This transport mode enables centralized MCP servers that multiple agents across the organization can connect to.

The lifecycle of an MCP connection follows a predictable pattern. The client initiates a connection and negotiates capabilities. The server responds with its available tools, resources, and prompts. The client selects a tool based on its current task, constructs a request using the tool's schema, and sends it. The server processes the request, interacts with the backend system, and returns the result. The client incorporates the result into its reasoning and decides whether to make additional calls.
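On the wire, this lifecycle is a sequence of JSON-RPC 2.0 messages. The sketch below is simplified (fields are trimmed, and the version string and parameters are illustrative), but the method names follow the published MCP specification:

```python
import json

# Simplified request shapes for the MCP connection lifecycle described
# above. MCP uses JSON-RPC 2.0; real messages carry additional fields.

initialize = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {"protocolVersion": "2025-03-26",
               "clientInfo": {"name": "example-agent", "version": "0.1"}},
}

list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

call_tool = {
    "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": {"name": "search_issues",
               "arguments": {"jql": "assignee = currentUser()"}},
}

# Over stdio these are newline-delimited; over the HTTP transport each
# message is the body of a request.
wire = "\n".join(json.dumps(m) for m in (initialize, list_tools, call_tool))
```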

This lifecycle is where enterprise security and governance requirements intersect with the protocol. Each stage (connection, capability discovery, tool selection, request construction, backend interaction, and response handling) is a potential control point where enterprises can enforce security policies, log audit events, and validate behavior.

Why Enterprises Are Adopting MCP Now

Three forces are driving enterprise MCP adoption simultaneously, and a fourth is accelerating them all.

The first is AI agent proliferation. As enterprises deploy more agents (80% of Fortune 500 now run active agents, per Microsoft), each agent needs access to enterprise tools and data. Building custom integrations for each agent-tool pair is unsustainable. If you have 20 agents and 30 tools, that's 600 potential integration points. MCP reduces this to 30 server implementations that any agent can consume.

The second is ecosystem momentum. Gartner predicts that 75% of API gateway vendors will support MCP by the end of 2026. GitHub, Slack, Jira, Salesforce, Google Drive, and dozens of other enterprise tools already have official or community MCP servers. When the tools your enterprise already uses offer MCP interfaces, the adoption decision shifts from "should we evaluate MCP?" to "how do we govern the MCP servers already appearing in our environment?"

The third is developer velocity. MCP servers are straightforward to build. A competent developer can create a functional MCP server for an internal tool in a few hours. This low barrier to creation is both MCP's greatest strength and its most significant enterprise risk, because developers are building and deploying MCP servers faster than security teams can review them.

A fourth, often overlooked, force is protocol convergence. Before MCP, the question of how agents connect to tools had no clear answer. Every framework implemented its own approach. Now, with OpenAI joining the MCP steering committee alongside Anthropic and Block under the Linux Foundation's umbrella, the industry is converging. Google's Agent2Agent (A2A) protocol addresses agent-to-agent communication rather than agent-to-tool connectivity, making it complementary to MCP rather than competitive. For enterprises evaluating protocol bets, the convergence signal reduces the risk of adopting MCP and increases the urgency of building the infrastructure to support it.

The Enterprise MCP Security Gap

The security statistics for MCP deployments are sobering. A 2025 audit found that 53% of MCP servers use hard-coded credentials. Only 8.5% implement OAuth-based authentication. CVE-2025-6514, scored at CVSS 9.6, compromised over 437,000 developer environments through a vulnerability in MCP server implementations. The Asana incident in June 2025 demonstrated customer data bleeding across MCP instances due to insufficient tenant isolation.

These are not theoretical risks. They're documented incidents in a protocol that's barely two years old.

The root causes are predictable. MCP's specification prioritizes simplicity. The basic security model assumes HTTPS for transport encryption and leaves authentication, authorization, and audit logging to the implementation. In practice, this means each MCP server implements its own security model (or doesn't implement one at all). When a developer needs to ship an MCP server quickly to unblock an agent deployment, security shortcuts are the first casualty.

Enterprise MCP security requires infrastructure above the protocol: mutual TLS for transport security, standardized authentication through OAuth or workload identity, request-level authorization that evaluates permissions per tool call, data classification enforcement that prevents MCP servers from exposing sensitive data to unauthorized agents, and comprehensive audit logging that captures every interaction for compliance.

The McKinsey playbook on deploying agentic AI with safety and security identifies this exact gap: organizations adopt the protocol quickly but build the security infrastructure slowly, creating a window of exposure that grows with adoption. The organizations that close this window fastest are those that treat MCP as enterprise infrastructure from day one, not as a developer tool that eventually needs hardening.

The Three Pillars of Enterprise MCP Infrastructure

Enterprise MCP infrastructure rests on three pillars: security, governance, and observability. Each pillar addresses a different class of enterprise requirement.

Security: Beyond HTTPS

MCP's transport layer defaults to HTTPS, which provides encryption in transit. Enterprise security requires significantly more.

Authentication needs to move beyond hard-coded tokens. The 53% of servers using static credentials represent 53% of servers where a leaked token grants unlimited access with no expiration and no revocation capability. Enterprise MCP should use OAuth 2.0 with scoped tokens that expire, rotate automatically, and can be revoked instantly when a compromise is detected. For environments with stricter requirements, mutual TLS (mTLS) provides certificate-based authentication that verifies both the client and server identity.
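The difference between a static credential and a scoped, expiring, revocable token can be captured in a few lines. This is a minimal in-memory sketch of the pattern (the `issue_token` and `authorize` names and the 15-minute default TTL are illustrative; a real deployment would use an OAuth 2.0 authorization server):

```python
import time
import secrets

# Illustrative sketch of scoped, expiring, revocable tokens, in contrast
# to the hard-coded credentials the audit found in 53% of servers.
REVOKED: set = set()

def issue_token(scopes, ttl_seconds=900):
    """Mint a short-lived token bound to explicit scopes."""
    return {
        "token": secrets.token_urlsafe(32),
        "scopes": set(scopes),
        "expires_at": time.time() + ttl_seconds,
    }

def authorize(token, required_scope):
    """Reject revoked, expired, or under-scoped tokens."""
    if token["token"] in REVOKED:
        return False
    if time.time() >= token["expires_at"]:
        return False
    return required_scope in token["scopes"]
```

A leaked token under this model is useful for minutes, not forever, and can be killed instantly by adding it to the revocation set.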

Authorization needs to operate at the request level, not the connection level. A connection-level authorization model says "this agent is allowed to use this MCP server." A request-level model says "this agent is allowed to call this specific tool, with these specific parameters, on this specific data." The granularity matters. An agent that can read customer records should not automatically be able to modify them. An agent authorized to access one department's data should not see another department's data through the same MCP server.
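The contrast between the two models is easiest to see in code. Below, the policy is keyed on the agent and the specific tool, and each rule can inspect the call's parameters; the agent names, tools, and policy table are all hypothetical:

```python
# Request-level authorization sketch: the decision considers the agent,
# the specific tool, AND the call's parameters, not just the connection.
# Agent names, tools, and rules are illustrative.
POLICY = {
    ("support-agent", "read_customer"):   lambda p: True,
    ("support-agent", "update_customer"): lambda p: False,  # read-only agent
    ("billing-agent", "read_customer"):   lambda p: p.get("department") == "billing",
}

def authorize_request(agent, tool, params):
    """Deny by default; allow only what the policy explicitly permits."""
    rule = POLICY.get((agent, tool))
    return bool(rule and rule(params))
```

A connection-level model would have granted the support agent every tool on the server; here, the write path is denied even though its read path is allowed.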

Credential management for MCP requires integration with enterprise secret management platforms: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault. Credentials should be generated per session, rotated on a schedule, stored in secure vaults (never in code or environment variables), and audited whenever they're accessed.

The gap between where most enterprises are today (hard-coded tokens, connection-level auth) and where they need to be (OAuth with scoped tokens, request-level authorization, vault-integrated credential management) is significant. But the path is well-understood because enterprises have walked it before with REST APIs. The same security engineering patterns that secured your API infrastructure apply to MCP. The implementation details differ, but the architecture is familiar.

Governance: Scaling Approval Systems

As MCP adoption grows within an enterprise, governance becomes the difference between controlled adoption and chaos.

The first governance requirement is an MCP registry: a central inventory of every MCP server deployed in the environment, including who owns it, what data it accesses, what approval it received, and what version is running. Without a registry, security teams can't assess risk because they don't know what MCP servers exist. The registry should capture server metadata (name, description, owner, team), capability inventory (tools, resources, prompts exposed), data classification (what sensitivity levels the server touches), approval status (pending, approved, deprecated, revoked), and dependency mapping (which agents consume which servers).
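A registry record covering those dimensions can be as simple as a dataclass plus the one query security teams need most. Field names and statuses below are illustrative:

```python
from dataclasses import dataclass, field

# Sketch of a registry record capturing the metadata listed above.
@dataclass
class McpServerRecord:
    name: str
    owner: str
    tools: list                       # capability inventory
    data_classification: str          # public | internal | confidential | restricted
    approval_status: str = "pending"  # pending | approved | deprecated | revoked
    version: str = "0.1.0"
    consumers: list = field(default_factory=list)  # dependent agents

registry = {}

def register(record):
    registry[record.name] = record

def unapproved_servers():
    """The view a security team needs first: what runs without approval?"""
    return [r.name for r in registry.values() if r.approval_status != "approved"]
```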

Approval workflows determine how new MCP servers enter the environment. The right model depends on organizational maturity. Centralized approval, where a security or architecture board reviews every server, provides maximum control but creates bottlenecks that drive shadow deployments. Federated approval, where team leads approve servers within their domain, balances speed and control. Risk-based fast-track, where low-risk servers (read-only, non-sensitive data) auto-approve while high-risk servers require full review, matches governance effort to actual risk.
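The risk-based fast-track model reduces to a small routing decision. The thresholds below (read-only plus non-sensitive data auto-approves) are one reasonable choice, not a standard:

```python
# Risk-based approval routing sketch: low-risk servers auto-approve,
# everything else goes to full review. Thresholds are illustrative.
def route_approval(read_only, data_classification):
    low_risk = read_only and data_classification in ("public", "internal")
    return "auto-approved" if low_risk else "full-review"
```

A read-only server over internal wiki pages ships the same day; anything that writes, or touches confidential data, waits for human review.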

Version management is the governance dimension most enterprises overlook. MCP servers evolve. New tools are added, permissions change, data access patterns shift. Without version pinning and upgrade review processes, a server that was approved six months ago might expose capabilities that were never reviewed. Enterprises need deprecation policies, rollback capabilities, and automated checks that flag when a server's capabilities diverge from its approved configuration.
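The automated divergence check is a set comparison between what was approved and what the server exposes now. A minimal sketch:

```python
# Capability drift check: compare a server's live tool list against the
# set that was approved at review time. Names are illustrative.
def capability_drift(approved, current):
    return {
        "unreviewed": current - approved,  # added since approval; needs review
        "removed": approved - current,     # may break dependent agents
    }
```

Run against the registry on every deployment, this turns "a server quietly grew a new tool" from an audit finding into a pipeline failure.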

Lifecycle management ties the governance system together. Every MCP server should have a defined lifecycle: proposal, review, approved, active, deprecated, decommissioned. Each stage transition triggers specific actions. Moving from "review" to "approved" requires security sign-off. Moving from "active" to "deprecated" triggers notifications to dependent agents. Moving from "deprecated" to "decommissioned" revokes credentials and removes the server from the registry. Without lifecycle management, decommissioned servers linger with active credentials, deprecated servers continue receiving traffic, and the registry becomes stale.
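The lifecycle above is a small state machine, and each permitted transition is the hook where sign-offs, notifications, or credential revocation run. A sketch with the stages named in the text:

```python
# Lifecycle state machine for MCP servers, per the stages listed above.
# Each transition is where governance actions (sign-off, notification,
# credential revocation) would be triggered.
TRANSITIONS = {
    "proposal": {"review"},
    "review": {"approved", "proposal"},   # can be sent back for rework
    "approved": {"active"},
    "active": {"deprecated"},
    "deprecated": {"decommissioned"},
    "decommissioned": set(),              # terminal: credentials revoked
}

def advance(state, target):
    """Move to the target stage, rejecting illegal transitions."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```

Encoding the transitions means a decommissioned server cannot be quietly reactivated, and nothing reaches "active" without passing through "approved".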

Data classification governance adds a critical constraint. Each MCP server should declare the maximum data sensitivity level it handles: public, internal, confidential, or restricted. The governance system should enforce that only agents with appropriate clearance levels connect to servers handling sensitive data. A server connected to the HR database (containing restricted employee data) should not accept connections from a general-purpose productivity agent, even if the agent's authentication credentials are valid. Classification-based access control is the governance mechanism that prevents sensitive data from being exposed through legitimate but inappropriate agent connections.
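Classification-based access control is an ordering check over sensitivity levels: the agent's clearance must be at least the server's declared level. A minimal sketch using the four levels named above:

```python
# Classification-based access control: an agent may connect only if its
# clearance meets or exceeds the server's declared sensitivity level.
LEVELS = ["public", "internal", "confidential", "restricted"]

def may_connect(agent_clearance, server_classification):
    return LEVELS.index(agent_clearance) >= LEVELS.index(server_classification)
```

Under this check, the general-purpose productivity agent (cleared for "internal") is refused by the HR server (declared "restricted") even with valid credentials.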

Observability: Seeing What Agents Actually Do

MCP observability is the third pillar, and the one most enterprises build last (if they build it at all). This is a mistake. Observability should be foundational, not an afterthought, because you can't govern what you can't see.

Enterprise MCP observability covers four signal types: metrics, logs, traces, and cost.

Metrics track the health and performance of MCP servers: request rates, error rates, latency percentiles, and availability. These are standard infrastructure metrics, and teams with existing APM infrastructure (Datadog, New Relic, Grafana) can extend their monitoring to MCP servers without starting from scratch. The MCP-specific addition is utilization tracking. Research suggests that 95% of deployed MCP servers are unused or severely underutilized. That's not just waste; it's a governance blind spot. Unused servers that maintain active credentials and data access are attack surface without value. Regular utilization reviews should identify servers that can be decommissioned, reducing the security footprint without affecting agent functionality.
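A utilization review can start as a single query over last-call timestamps: any server idle past a threshold is a decommissioning candidate. The 90-day window below is an illustrative choice:

```python
# Utilization review sketch: flag servers with no calls in the last
# 90 days as decommissioning candidates. Window is illustrative.
def stale_servers(last_seen, now, max_idle_days=90):
    """last_seen maps server name -> Unix timestamp of its last call."""
    cutoff = now - max_idle_days * 86400
    return [name for name, ts in last_seen.items() if ts < cutoff]
```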

Logs capture what happened: which agent called which tool with which parameters and received which response. Structured logging with consistent schemas across all MCP servers is essential for compliance (SOC 2 audit trails, HIPAA access records) and for incident investigation.
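The value comes from every server emitting the same shape. The record below is an illustrative schema, not a standard, but it carries the fields named above (which agent, which tool, which parameters, what outcome):

```python
import json
import datetime

# Illustrative structured audit record: one consistent schema across all
# MCP servers, machine-parseable for SOC 2 trails and investigations.
def audit_record(agent, server, tool, params, status):
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "server": server,
        "tool": tool,
        "params": params,
        "status": status,
    }, sort_keys=True)
```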

Traces follow a request through the complete chain: from the user's prompt, through the agent's reasoning, to the MCP tool call, through the backend system, and back. Distributed tracing is the only way to debug complex agent behaviors that span multiple MCP servers and tool calls.

Cost tracking attributes model spend, compute costs, and data transfer to specific MCP servers, agents, and teams. Without cost attribution, enterprises discover their MCP infrastructure costs only when the monthly cloud bill arrives, and by then it's too late to understand what drove the spike. Cost attribution also enables chargeback models where teams pay for the MCP infrastructure they consume, which creates natural incentives to decommission unused servers and optimize inefficient tool calls.
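At its core, chargeback is a roll-up of per-call costs to the team that owns the calling agent. The ownership map and figures below are illustrative:

```python
from collections import defaultdict

# Cost attribution sketch: roll per-call costs up to the team that owns
# the calling agent, enabling the chargeback model described above.
AGENT_OWNER = {"support-agent": "support", "report-agent": "analytics"}

def attribute_costs(calls):
    """calls: list of {'agent': str, 'cost_usd': float} records."""
    totals = defaultdict(float)
    for call in calls:
        team = AGENT_OWNER.get(call["agent"], "unattributed")
        totals[team] += call["cost_usd"]
    return dict(totals)
```

The "unattributed" bucket is itself a governance signal: spend from agents nobody owns is exactly where shadow MCP tends to surface.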

Deployment Patterns for Enterprise Scale

Enterprise MCP deployments follow one of several architectural patterns, each with distinct trade-offs.

The hub-and-spoke pattern routes all MCP traffic through a central gateway. Every agent connects to the gateway, and the gateway routes requests to the appropriate MCP server. This pattern provides a single point for security enforcement, audit logging, and traffic management. The trade-off is that the gateway becomes a bottleneck and a single point of failure at high scale.

The federated pattern lets teams operate their own MCP servers with decentralized management. Each team owns its servers, and a lightweight mesh layer handles discovery and routing. This pattern provides high developer autonomy and eliminates central bottlenecks. The trade-off is governance complexity: ensuring consistent security and compliance across independently managed servers requires strong policy enforcement infrastructure.

The cloud-native pattern deploys MCP servers as containerized workloads in Kubernetes, using service mesh capabilities for traffic management, security (mTLS), and observability. This pattern aligns well with enterprises that already have mature Kubernetes infrastructure and want to treat MCP servers like any other microservice.

The BYOC (Bring Your Own Cloud) pattern deploys MCP infrastructure in the customer's own cloud environment, providing full data sovereignty and infrastructure control. This matters for regulated industries where data cannot leave the customer's environment, and for enterprises that require infrastructure ownership rather than SaaS dependency.

The choice between these patterns often depends on the organization's existing infrastructure investments. Enterprises with mature Kubernetes deployments naturally gravitate toward the cloud-native pattern. Enterprises with strong API management platforms favor the hub-and-spoke pattern. The wrong choice is trying to force a pattern that conflicts with your existing architecture. The right choice is the pattern that integrates most naturally with what you've already built and the security model you've already established.

Many enterprises will evolve through these patterns as their MCP deployments mature. A common trajectory starts with hub-and-spoke (simple, centralized control for early adoption), moves to cloud-native as the Kubernetes team standardizes the deployment model, and eventually adopts a federated approach as adoption scales across business units with different infrastructure stacks. The key is designing the security and governance layers to be pattern-agnostic so they don't need to be rebuilt with each architectural evolution.

Common Failure Modes

Three failure modes account for the majority of enterprise MCP problems, and understanding them upfront helps organizations avoid the most expensive mistakes.

The first is shadow MCP: unauthorized MCP servers deployed by developers without security review. Shadow MCP is the agentic equivalent of shadow IT. It happens when governance processes are too slow or too cumbersome, and developers bypass them to ship agent integrations quickly. The fix is governance that's fast enough to keep pace with development velocity, not more restrictive policies that drive developers underground.

The second is credential leakage. The 53% of servers using hard-coded credentials are 53% of servers where a compromised developer machine or a leaked Git commit grants full access to enterprise systems. Secret scanning, credential rotation, and vault integration should be prerequisites for any MCP server deployment, not optional hardening steps.

The third is insufficient tenant isolation. MCP servers that handle requests from multiple agents or multiple organizational units need strict isolation between tenants. The Asana incident demonstrated what happens when isolation fails: customer data bleeds across boundaries. Enterprise MCP servers should implement request-level tenant isolation, separate credential stores per tenant, and strict data partitioning.
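The core discipline is that the tenant identifier travels with every request and is applied at the data layer, so no code path can return cross-tenant rows. A deliberately tiny sketch (the records and tenant names are illustrative):

```python
# Request-level tenant isolation sketch: every request carries a tenant
# id, and the data layer filters on it unconditionally. In production
# this is a mandatory WHERE clause or row-level security policy, plus
# separate credential stores per tenant.
RECORDS = [
    {"tenant": "acme", "doc": "q3-roadmap"},
    {"tenant": "globex", "doc": "pricing"},
]

def fetch_docs(request_tenant):
    """Partition at query time; no path returns another tenant's rows."""
    return [r["doc"] for r in RECORDS if r["tenant"] == request_tenant]
```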

Building Your Enterprise MCP Program

Enterprises that successfully scale MCP follow a consistent pattern. They start with a small pilot (three to five MCP servers for a single team), establish governance and security standards during the pilot, and then scale the approved patterns across the organization.

The pilot phase (typically 60-90 days) should focus on establishing the MCP registry, implementing authentication standards (OAuth, not hard-coded tokens), deploying observability infrastructure, and documenting approval workflows. The goal is not to deploy as many servers as possible. It's to prove that your governance infrastructure can handle MCP adoption at scale. Pilot teams should include representatives from security, compliance, and infrastructure, not just the development team building the servers. Embedding these stakeholders early prevents the governance retrofit that slows down scaling later.

The pilot should also establish baseline metrics: how long does it take to deploy a governed MCP server? What's the approval turnaround time? How quickly can security respond to a detected shadow server? These metrics become the benchmarks that guide infrastructure investment during the scaling phase. If deployment takes three weeks during the pilot, the scaling goal should be three days. If shadow detection takes a week, the goal should be hours.

The scaling phase applies the patterns established during the pilot to broader adoption. Each new team that deploys MCP servers uses the same registry, the same authentication framework, the same observability pipeline, and the same approval process. The infrastructure team's job shifts from building individual servers to maintaining the platform that makes secure, governed MCP deployment fast and repeatable. Success in the scaling phase is measured not by the number of servers deployed but by the percentage of deployments that follow the governed path. If teams are bypassing the platform and deploying shadow MCP servers, the platform isn't meeting their needs. If adoption flows through the platform naturally, the governance infrastructure is working.

Gartner's prediction that 75% of API gateway vendors will support MCP by the end of 2026 signals that MCP infrastructure is converging with existing enterprise infrastructure. Enterprises that build their MCP governance and security patterns now will be positioned to adopt these integrated offerings as they mature. Enterprises that defer governance will face the same retrofit costs they incurred when they tried to add security to their cloud deployments after the fact.

MCP as Enterprise Infrastructure

MCP's trajectory mirrors that of REST APIs in the late 2000s. A lightweight, developer-friendly standard gains rapid adoption. Enterprises adopt it because the ecosystem demands it. Then the enterprise requirements (security, governance, observability, compliance) catch up. The organizations that build those requirements into their infrastructure early capture the benefits of the standard without the risks of ungoverned adoption.

The protocol itself is sound. The enterprise infrastructure around it is what most organizations are missing. Security that goes beyond HTTPS. Governance that scales with adoption. Observability that gives enterprises visibility into what their agents are actually doing through MCP. These aren't nice-to-have additions. They're the difference between MCP as a strategic capability and MCP as an audit finding.

Twenty-eight percent of Fortune 500 companies have MCP servers. The percentage that have enterprise-grade infrastructure around those servers is considerably lower. Closing that gap is the defining infrastructure challenge for enterprise AI in 2026.

The organizations that build MCP infrastructure now, while the protocol is still maturing and the vendor landscape is still forming, will shape how enterprise MCP evolves. The organizations that wait will adopt whatever the market provides, on whatever timeline the market dictates. In enterprise infrastructure, the early builders always have the advantage.

MCP is becoming the standard for agent-to-tool connectivity. Rebase provides the enterprise infrastructure (security, governance, and observability) to adopt it safely. See how it works: rebase.run/demo.

Related reading:

  • Agentic AI Infrastructure: The Complete Stack

  • Enterprise AI Infrastructure: The Complete Guide

  • Shadow MCP: The Unauthorized AI Agent Risk

  • MCP Security Architecture: Moving Beyond HTTPS

  • BYOC: Why Your AI Should Run in Your Cloud

  • AI Agent Orchestration: The Enterprise Guide

Ready to see how Rebase works? Book a demo or explore the platform.

WHITE PAPER

The AI Infrastructure Gap

Why scaling AI requires a new foundation and the nine components every enterprise ends up needing.