The Gold Rush for AI Assistants

There’s a particular feeling I get when the tech world starts buzzing like everyone just heard the same secret at once. Right now, that secret is: AI assistants are the next big platform.

But beneath the noise, I’ve been wrestling with a quieter question:

If everyone’s racing to build the next AI assistant, where does someone like me, a developer who’s not at OpenAI or Google, actually fit in?

This is me talking to my past self, the one who felt both inspired and slightly paralyzed by this wave. I’m writing this because I think the most valuable seat at this table might not be the one building the assistants, but the one forging the tools they use.

Assistants Need Tools, Not Just Intelligence

Every assistant, no matter how fancy the model, becomes useless the moment it hits a wall. Ask it to book a flight? It needs a travel API. Want it to manage your files? It needs a storage service. Assistants are only as capable as the tools they can call.

And here’s the thing: most developers are focusing on the assistant layer: the chat UIs, the voice apps, the agent runners. But there’s a quieter, more foundational layer underneath all of that: the tools. The APIs. The MCP servers.

What is an MCP Server?

MCP stands for Model Context Protocol. It’s a standardized way for AI assistants to interact with software. Think of it as a shared grammar that allows an AI to command a wide range of apps without custom code for each.

An MCP server acts as a translator sitting beside a particular application. It knows how to take a natural-language request and turn it into something the app understands, whether that’s a Python call in Blender or a GitHub API query.

It handles:

  • Tool discovery: What can this app do?
  • Command parsing: What is the AI asking for, and how do we do that here?
  • Response formatting: How do we return something meaningful?
  • Error handling: What happens when something goes wrong?

On the other end, the MCP client lives with the AI assistant. It talks to the MCP server, relays messages, and feeds responses back into the AI. All of this happens via a protocol that’s transport-agnostic (HTTP, WebSocket, even stdin/stdout), and uses JSON Schema to describe capabilities.
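
To make that concrete, here’s roughly what one entry in a tool listing looks like, sketched as a Python dict for readability. The field names (name, description, inputSchema) follow the MCP spec; the upload tool itself is a hypothetical example:

    # One tool description, as an MCP server might advertise it.
    # Field names follow the MCP spec; the tool itself is made up.
    tool_description = {
        "name": "upload_file",
        "description": "Upload a local file in resumable chunks",
        "inputSchema": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Local file path"},
                "chunk_size": {"type": "integer", "default": 1048576},
            },
            "required": ["path"],
        },
    }

The inputSchema is plain JSON Schema, which is what lets a client validate arguments before it ever calls the tool.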

This architecture allows an assistant to use any MCP-compliant tool the same way, whether it’s a GitHub server, a database query engine, or a local file browser. It’s the glue that binds natural language to real action.

What That Looks Like for Me

I’m starting to shift my mindset from “app developer” to “toolmaker for assistants.”

That means I’m asking:

  • Can I build an MCP server around something I already use or maintain?
  • Can I turn some internal service, like a chunked file uploader or a tagging system, into a callable tool for AI?
  • Can I write my tools in a way that feels native to this protocol-first world?

We’re not just building web apps anymore. We’re building AI-accessible utilities.

Where I’m Investing My Time

Here’s the roadmap I’m following, loosely but intentionally:

1. Master MCP-friendly APIs

Whether it’s OpenAPI specs or JSON Schema definitions, I want my tools to be discoverable and callable without extra glue code.
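
“Callable without extra glue code” means the assistant can invoke a tool purely from its advertised name and schema. MCP messages ride on JSON-RPC 2.0, so a call is just a message like the one below, reusing the hypothetical upload_file tool from earlier:

    # A tools/call request as an MCP client would send it (JSON-RPC 2.0).
    # The arguments must validate against the tool's advertised inputSchema.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "upload_file",
            "arguments": {"path": "report.pdf"},
        },
    }

If the schema is good, no human ever has to write adapter code for this exchange; the client generates it.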

2. Wrap Existing Services in MCP Servers

That chunked upload system I’ve been building? I’m wrapping it in an MCP server so any AI assistant can manage file uploads through it.
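
As a sketch of what that wrapper might look like, here’s a stripped-down toy version using the FastMCP helper from the official MCP Python SDK (the mcp package). The uploader logic is an in-memory stand-in, not my real service:

    # A toy MCP server wrapping a chunked uploader.
    # The real service would persist chunks to storage, not a dict.
    import base64
    import uuid

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("chunked-uploader")

    # session_id -> list of received chunks (in-memory for this sketch)
    sessions: dict[str, list[bytes]] = {}

    @mcp.tool()
    def start_upload(filename: str) -> str:
        """Open an upload session for a file and return its session ID."""
        session_id = str(uuid.uuid4())
        sessions[session_id] = []
        return session_id

    @mcp.tool()
    def upload_chunk(session_id: str, chunk_base64: str) -> int:
        """Append one base64-encoded chunk; returns chunks received so far."""
        sessions[session_id].append(base64.b64decode(chunk_base64))
        return len(sessions[session_id])

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default

The nice part: FastMCP derives each tool’s JSON Schema from the function signature and docstring, which is exactly the “no extra glue code” property from step 1.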

3. Understand Orchestration

I’m learning how AI assistants string tools together using orchestration layers like LangChain and AutoGen. This helps me make my tools composable.
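
Under the hood, most of these frameworks reduce to a loop: the model picks a tool, the runtime calls it, and the result goes back into context. Here’s a hand-rolled sketch of that loop, not LangChain’s or AutoGen’s actual API; model.decide is a hypothetical stand-in for the model’s tool-selection step:

    # A bare-bones orchestration loop. `model.decide` is hypothetical;
    # real frameworks add planning, retries, and memory around this core.
    def run_agent(model, tools: dict, goal: str, max_steps: int = 5):
        context = [{"role": "user", "content": goal}]
        for _ in range(max_steps):
            action = model.decide(context, list(tools))  # pick a tool or finish
            if action.name == "finish":
                return action.arguments["answer"]
            result = tools[action.name](**action.arguments)  # call the tool
            context.append({"role": "tool", "content": str(result)})
        return None  # step budget exhausted

Seeing tools from this side taught me to keep each one small and single-purpose, so the loop can compose them freely.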

4. Go Deep in One Domain

My focus is on developer tools and healthcare. I want to create AI-usable services that are actually helpful in these spaces.

5. Keep an Eye on Standards

The MCP spec is evolving. I’m watching how it handles things like authentication, permissions, and plugin discovery so I can build forward-compatible tools.

A Safe Prediction and a Big Shift

Here’s the shift I can’t stop thinking about:

In the near future, apps won’t be things people install. They’ll be services behind an assistant that knows enough tools to handle the day-to-day.

Instead of getting a user to download your app, onboard, understand your UI, sign in, and learn your flow, you’ll just have an MCP-compliant server. The user tells their assistant (GPT, Gemini, whoever), “I need this done,” and your tool quietly powers the response.

There’s no app store. No tutorials. No language barrier or digital literacy hurdle. Whether the user is tech-savvy or not, the assistant becomes the interface and your MCP server becomes the service.

This changes everything. Customer support, ecommerce, onboarding, payment flows: all the traditional barriers of software UX fade into the background. It’s not about building the prettiest UI anymore. It’s about being callable.

The moment that clicked for me, I realized: building tools for this world isn’t just a new opportunity. It’s a whole new lens.

Why This Matters (Zooming Out)

There’s a deeper shift happening here. AI assistants aren’t just a trend; they’re a new way of interacting with software. But assistants can’t do anything alone. They need ecosystems. They need you.

If you’re a developer wondering how to catch this wave, you don’t need to reinvent intelligence. Just build something useful and make it callable.

I think the next generation of successful devs won’t be the ones who built the assistants. It’ll be the ones who quietly empowered them.

And if you’re unsure where to start, look at the tools you’ve already built. Chances are, one of them just needs a spec and a little polish to become someone else’s missing piece.

Did I make a mistake? Please consider sending me an email.