Navigating Change
The way we interact with software is shifting. Instead of clicking through web interfaces and learning a different UI for each system, some people are moving toward natural language as the interface - telling an AI what you want done and letting it complete the task for you.
This shift is happening now, but there's been a missing piece: how does the AI actually connect to and control those systems?
MCP Defined
MCP (Model Context Protocol) is an open protocol that gives AI models a standardised interface to external tools and data sources. Think of it as a USB-C for AI - before USB-C, every device needed its own connector. MCP does the same for AI-system integration.
Instead of building custom integrations for every combination of AI model and external service, you implement MCP once and your AI can talk to everything it needs to. Rather than context-switching between Slack, JIRA, GitHub, and dashboards, for example, we can describe what we need and let an AI assistant handle the interactions.
Or as Anthropic defines it:
"MCP (Model Context Protocol) is an open-source standard for connecting AI applications to external systems."
Current State: A Nascent Protocol
Yes, MCP has gaps. Here are a few:
- It's relatively new (launched November 2024) and the ecosystem is still figuring things out.
- Authentication and authorisation have a standard, but implementations are patchy.
- Discovery is a work in progress, with a real need for global registry mechanics.
- Security is largely left to implementers - no built-in sandboxing, no secret management standard, no consistent audit trails.
The community is actively working on all of these, and things are moving very fast.
But these are solvable problems, and much of the churn is happening on the client side, so building out servers feels relatively stable. These gaps shouldn't prevent you from getting value today - they just mean you'll need to handle some plumbing yourself and exercise an abundance of caution if you're handling sensitive information.
I haven't deployed MCP in production yet - the security gaps (or perhaps my gaps in comprehension) are a bit too concerning for me at the moment, and they'll need to be resolved before my confidence improves. But I can see why it could be useful once these issues are addressed. Even during the course of writing this post there have been numerous changes and updates, and I'm hopeful the community will solve the remaining issues soon. Right now I see a lot of value in running MCP locally, and I'm having fun building out some small servers in order to gain competency.
If you're building AI applications, you'll likely need something like MCP eventually. With Anthropic, OpenAI, and others converging on MCP as the standard, now is a good time to understand it, test it in sandboxed environments, and prepare for when the security model matures.
The UI Paradigm Shift
One way to think about MCP is as a new kind of UI. Instead of:
Open browser → Navigate → Click through menus → Fill forms → Submit
You get:
Tell the AI what you want → It handles the system interactions
Think of it like giving your AI agent hands to carry out tasks. Instead of clicking through forms you can just say: "Add a new user with admin privileges".
Example below.
// The agent remembers it has a tool specifically
// for this use case, and calls it directly
const result = await mcpClient.callTool('create_user', {
  role: 'admin',
  permissions: ['read', 'write', 'delete']
});
So the conversation becomes the interface, and MCP servers are the connectors that make it functional.
Getting Started
- Install the SDK: npm install @modelcontextprotocol/sdk
- Pick one system to expose (start read-only)
- Build a basic server
- Test with Claude Desktop or another MCP client
- Iterate and expand
The best way to understand MCP is to build something with it.
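For step 4, pointing Claude Desktop at your server is a small config change to `claude_desktop_config.json`. A sketch of what that could look like - the server name and build path here are placeholders for your own:

```json
{
  "mcpServers": {
    "my-service": {
      "command": "node",
      "args": ["/path/to/my-service/build/index.js"]
    }
  }
}
```

Claude Desktop launches the server as a subprocess and talks to it over stdio, so restart the app after editing the config.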
How It Works
Two of the most important capabilities MCP servers expose are:
- Resources - Read-only data sources (databases, files, APIs)
- Tools - Actions the AI can perform (write files, send emails, execute queries)
Here's a minimal MCP server:
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';

const server = new Server(
  { name: 'my-service', version: '1.0.0' },
  { capabilities: { resources: {}, tools: {} } }
);

// Expose a resource
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  return {
    resources: [{
      uri: 'db://users',
      name: 'User Database',
      mimeType: 'application/json',
    }],
  };
});

// Response when the AI queries resources:
// {
//   resources: [{
//     uri: 'db://users',
//     name: 'User Database',
//     mimeType: 'application/json'
//   }]
// }

// Handle resource reads
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  if (request.params.uri !== 'db://users') {
    throw new Error(`Unknown resource: ${request.params.uri}`);
  }
  // Fetch and return actual data
  // (database is a placeholder for your own data layer)
  const users = await database.getUsers();
  return {
    contents: [{
      uri: 'db://users',
      mimeType: 'application/json',
      text: JSON.stringify(users),
    }],
  };
});

// When the AI calls resources/read with uri='db://users':
// {
//   contents: [{
//     uri: 'db://users',
//     mimeType: 'application/json',
//     text: '[{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]'
//   }]
// }

// Expose a tool
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [{
      name: 'send_notification',
      description: 'Send a notification to a user',
      inputSchema: {
        type: 'object',
        properties: {
          userId: { type: 'string' },
          message: { type: 'string' },
        },
        required: ['userId', 'message'],
      },
    }],
  };
});

// Response when the AI queries tools:
// {
//   tools: [{
//     name: 'send_notification',
//     description: 'Send a notification to a user',
//     inputSchema: {
//       type: 'object',
//       properties: {
//         userId: { type: 'string' },
//         message: { type: 'string' }
//       }
//     }
//   }]
// }

// Handle tool execution
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  if (name !== 'send_notification') {
    throw new Error(`Unknown tool: ${name}`);
  }
  // Actually send the notification
  // (notificationService is a placeholder for your own service)
  await notificationService.send(args.userId, args.message);
  return {
    content: [{
      type: 'text',
      text: `Notification sent to user ${args.userId}`,
    }],
  };
});

// When the AI calls tools/call with name='send_notification'
// and arguments={userId: '123', message: 'Hello'}:
// {
//   content: [{
//     type: 'text',
//     text: 'Notification sent to user 123'
//   }]
// }
Security Considerations
The security gaps are worth understanding in detail. At a high level, and at the time of writing this article, these are:
- No sandboxing - MCP servers run with whatever permissions you give them.
- No secret management - Credentials often end up in configs or AI context.
- No rate limiting standard - A confused AI could hammer your systems.
- No audit standard - Tracking what happened when is on you.
- Context confusion - Risk of data leaking between requests.
// Even "safe" servers have risks
const safeServer = {
  tools: [{
    name: 'query_users',
    permissions: ['read_only'],
    allowedOperations: ['SELECT'],
    // But what about SELECT queries that cause performance issues?
    // What about querying tables you didn't consider?
  }]
};
Start with read-only access, build your security model deliberately, and make sure you implement auth.
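Since there is no rate-limiting or audit standard yet, that plumbing is yours to build. Here's a minimal sketch of what wrapping a tool handler with both could look like - `withGuards`, the in-memory log, and the handler are hypothetical names of my own, not part of the MCP spec:

```typescript
// Hypothetical plumbing around a tool handler: an in-memory audit trail
// and a simple sliding-window rate limit. In production you'd persist
// the audit log and enforce limits per caller.
type ToolHandler = (args: Record<string, unknown>) => Promise<string>;

const auditLog: { tool: string; at: number }[] = [];
const callTimes = new Map<string, number[]>();

function withGuards(name: string, handler: ToolHandler, maxPerMinute = 30): ToolHandler {
  return async (args) => {
    const now = Date.now();
    // Keep only calls from the last 60 seconds
    const recent = (callTimes.get(name) ?? []).filter((t) => now - t < 60_000);
    if (recent.length >= maxPerMinute) {
      throw new Error(`Rate limit exceeded for ${name}`);
    }
    recent.push(now);
    callTimes.set(name, recent);
    auditLog.push({ tool: name, at: now }); // the audit trail is on you
    return handler(args);
  };
}

// Example: guard a read-only query tool
const queryUsers = withGuards('query_users', async (args) => {
  return `queried with ${JSON.stringify(args)}`;
});
```

You would then register `queryUsers` as your `tools/call` handler so every invocation is logged and throttled before it touches your systems.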
Efficiency at Scale
Anthropic itself has flagged some issues with MCP. As you connect more systems, token consumption becomes a problem - a major criticism of MCP's first iteration. Anthropic has proposed a solution (following Cloudflare's suggestion): an approach called code execution, which it reports can reduce token usage by 98.7%.
Instead of loading thousands of tool definitions upfront, agents explore them on-demand. Instead of passing data through the model repeatedly, they process it in an execution environment.
// Traditional: 10,000 rows flow through context
TOOL CALL: gdrive.getSheet(sheetId: 'abc123')
→ returns all rows to model
// Code execution: filter in environment
const allRows = await gdrive.getSheet({ sheetId: 'abc123' });
const pending = allRows.filter(row => row.status === 'pending');
console.log(`Found ${pending.length} pending orders`);
Advanced Patterns Emerging
Anthropic suggests following certain patterns for handling tools at scale, which provide some insight into where MCP is headed next. In this article they shared that when you have hundreds of MCP servers available, the AI needs tool retrieval - searching for relevant tools rather than loading them all.
Well-written tool descriptions become critical — the difference between "update_record" and detailed descriptions with examples and warnings. Complex tasks get broken into sequences across multiple MCP servers. Error handling needs to be AI-friendly, with clear messages the model can understand and recover from.
The discovery problem I mentioned earlier? It's not just about finding servers. If a user asks "send a summary of this week's PRs to the team Slack," the agent needs to retrieve the GitHub tools and the Slack tools on demand, rather than having all 2,000 tool definitions pre-loaded. This is about performing intelligent retrieval based on the task at hand - the tool descriptions are essentially what gets searched.
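A toy sketch of what that retrieval could look like - the tool names and keyword-overlap scoring here are illustrative inventions of mine, not part of any MCP spec. Rank tool descriptions against the task and surface only the top matches:

```typescript
// Toy tool retrieval: score each tool by how many task keywords
// appear in its name or description, then keep the best matches.
interface ToolDef { name: string; description: string; }

const allTools: ToolDef[] = [
  { name: 'github_list_prs', description: 'List pull requests in a GitHub repository' },
  { name: 'slack_post_message', description: 'Post a message to a Slack channel' },
  { name: 'jira_create_issue', description: 'Create an issue in a JIRA project' },
];

function retrieveTools(task: string, tools: ToolDef[], limit = 2): ToolDef[] {
  // Split the task into keywords, ignoring very short words
  const words = task.toLowerCase().split(/\W+/).filter((w) => w.length > 2);
  return tools
    .map((tool) => {
      const haystack = `${tool.name} ${tool.description}`.toLowerCase();
      return { tool, score: words.filter((w) => haystack.includes(w)).length };
    })
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((s) => s.tool);
}

const relevant = retrieveTools("send a summary of this week's PRs to the team Slack", allTools);
// Only the GitHub and Slack tools are surfaced; JIRA stays out of context.
```

A real implementation would likely use embeddings rather than keyword overlap, but the shape is the same: the descriptions are the search index, which is why writing them well matters so much.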
What's Next
MCP has moved quickly from experiment to something that looks a lot like an emerging standard. The foundation is there, the major players are aligned, and the hard problems are being worked on in the open — that's a reasonable basis for cautious optimism.
It’s great to see that the issues are being tackled seriously and that there is an active community working on them. And recently MCP was donated by Anthropic to the Agentic AI Foundation, a directed fund under the Linux Foundation. This move ensures MCP remains vendor-neutral and community-governed which I’m very hopeful will make it even more robust and allow for some great improvements.
We're watching the early days of something that will fundamentally change how we interact with computers. And despite the rough edges, that's pretty exciting.
Want to talk to us about how to get the most out of MCP? Please reach out directly.