Every AI agent eventually hits the same wall: it needs to interact with external services, but the only option is shelling out to CLI tools and parsing stdout. The Model Context Protocol (MCP) offers a structured alternative — typed inputs, typed outputs, and a standardized discovery mechanism.
When an AI agent needs to create a GitHub issue, the typical approach is:
```shell
gh issue create --title "Bug" --body "Details" --repo owner/repo
```
This works, but it has fundamental problems:

- The agent must parse free-form stdout, which breaks whenever the CLI changes its output format.
- Errors come back as exit codes and unstructured text, not typed responses.
- There is no machine-readable description of what tools exist or what arguments they accept, so the agent can't discover capabilities on its own.
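To make the fragility concrete, here is a sketch of what stdout parsing looks like in practice (the function and the expected output format are illustrative, not part of any library):

```typescript
// Sketch: parsing `gh issue create` output by hand (illustrative only).
// On success, the CLI prints the new issue's URL, e.g.:
//   https://github.com/owner/repo/issues/42
// The parser below works only as long as that exact format holds.

function parseIssueUrl(stdout: string): { number: number; url: string } {
  // Take the last non-empty line of output and hope it is the URL.
  const url = stdout.trim().split("\n").pop() ?? "";
  const match = url.match(/\/issues\/(\d+)$/);
  if (!match) {
    // Any format change, warning line, or error message lands here.
    throw new Error(`Unrecognized gh output: ${stdout}`);
  }
  return { number: Number(match[1]), url };
}
```

Every agent that shells out to `gh` ends up with a private copy of this logic, and every CLI update can silently break it.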
MCP standardizes tool interaction into three primitives:

- Tools — typed functions the agent can invoke (the focus of this article)
- Resources — read-only data the server exposes, addressed by URI
- Prompts — reusable prompt templates the server offers to the client
```
┌──────────┐   stdio/HTTP   ┌──────────────┐    API    ┌──────────┐
│ AI Agent │ ◄────────────► │  MCP Server  │ ◄───────► │ Service  │
│ (client) │  JSON-RPC 2.0  │ (your code)  │  REST/DB  │ (GitHub) │
└──────────┘                └──────────────┘           └──────────┘
```
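Over that wire, requests and responses are plain JSON-RPC 2.0 messages. A sketch of a single tool invocation — the method name `tools/call` and the `content` result shape come from the MCP spec, while the payload values are illustrative:

```typescript
// What the client sends to invoke a tool over JSON-RPC 2.0.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "notion_create_page",
    arguments: { database_id: "abc-123", title: "Meeting notes" },
  },
};

// What the server sends back. The `id` echoes the request so the
// client can match responses to in-flight calls.
const response = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: '{"id":"page-456","url":"https://..."}' }],
  },
};
```

The point is that both sides agree on this envelope up front, so no party ever parses free-form text.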
The MCP server is a thin translation layer. It receives structured requests from the agent, calls the actual service API, and returns structured responses.
```typescript
// tools/create-page.ts
export const createPage = {
  name: "notion_create_page",
  description: "Create a new page in a Notion database",
  inputSchema: {
    type: "object",
    properties: {
      database_id: { type: "string", description: "Target database UUID" },
      title: { type: "string", description: "Page title" },
      properties: { type: "object", description: "Additional properties" }
    },
    required: ["database_id", "title"]
  },
  handler: async ({ database_id, title, properties }) => {
    const response = await notion.pages.create({
      parent: { database_id },
      properties: {
        Name: { title: [{ text: { content: title } }] },
        ...formatProperties(properties)
      }
    });
    return { id: response.id, url: response.url };
  }
};
```
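To show how a definition like this gets wired into the protocol, here is a minimal dispatch sketch. The registry, `register`, and `callTool` are assumptions for illustration — a production server would use the official MCP SDK rather than hand-rolling this:

```typescript
// Minimal tool registry and dispatcher (illustrative; not an SDK API).
// Each tool carries a JSON Schema describing its input and a handler.

type Tool = {
  name: string;
  inputSchema: { required?: string[] };
  handler: (args: Record<string, unknown>) => Promise<unknown>;
};

const registry = new Map<string, Tool>();

function register(tool: Tool): void {
  registry.set(tool.name, tool);
}

async function callTool(name: string, args: Record<string, unknown>) {
  const tool = registry.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  // Cheap structural check; a real server would validate `args`
  // against the full JSON Schema before invoking the handler.
  for (const key of tool.inputSchema.required ?? []) {
    if (!(key in args)) throw new Error(`Missing required field: ${key}`);
  }
  return tool.handler(args);
}
```

With this in place, `register(createPage)` makes the tool discoverable, and an incoming `tools/call` request routes through `callTool` to the handler — the server never needs per-tool plumbing.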
You don't need to rewrite everything at once. A practical migration path is to wrap one existing CLI integration as an MCP tool, run the server alongside your current setup, and move the remaining integrations over as the need arises.
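As a sketch of that first step, the `gh` command from earlier could be wrapped behind a tool definition without touching the rest of the pipeline (the tool name and field names are illustrative):

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Wraps the existing `gh issue create` call behind a typed tool
// definition, so the agent sees a schema instead of a shell command.
export const createIssue = {
  name: "github_create_issue",
  description: "Create a GitHub issue in a repository",
  inputSchema: {
    type: "object",
    properties: {
      repo: { type: "string", description: "owner/repo" },
      title: { type: "string" },
      body: { type: "string" },
    },
    required: ["repo", "title"],
  },
  handler: async (
    { repo, title, body }: { repo: string; title: string; body?: string },
  ) => {
    const { stdout } = await run("gh", [
      "issue", "create", "--repo", repo, "--title", title, "--body", body ?? "",
    ]);
    // `gh` prints the new issue URL; returning it as a field keeps the
    // one remaining piece of stdout parsing here instead of in the agent.
    return { url: stdout.trim() };
  },
};
```

The CLI is still doing the work underneath; the difference is that the agent now negotiates a schema rather than a command line, and the parsing lives in exactly one place until you swap in a direct API call.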
MCP's biggest value isn't the protocol itself — it's the ecosystem effect. Once you standardize on MCP, any new service integration is a server you install, not custom code you write. The agent discovers available tools automatically. That's the difference between "my agent works with 5 specific services" and "my agent works with anything that has an MCP server."