#53 - Building an MCP Server
The unexpected challenges of building, testing, and deploying an MCP server—and what I learned along the way.
Introduction
Over the past month, I’ve come across the term MCP multiple times—first in internal discussions with my team, then on LinkedIn, and finally in recent product release notes. Since my team and I are actively working on building AI agents, I figured it was worth exploring to see what the hype is all about. I decided to share my experience with adding, using, and eventually building an MCP server.
What is an MCP?
A quick Google or ChatGPT search would probably provide an answer, though I have to admit I was surprised that Claude—Anthropic’s own AI, the company that actually introduced MCP—didn’t recognize the term.
In short, MCP (Model Context Protocol) is a protocol that enables AI agents to interact with third-party tools. For example, an AI assistant could fetch data from Jira to answer questions about open issues or even send a Slack message on your behalf. It acts as a bridge, allowing AI to extend its capabilities beyond just text-based responses and custom-built tools.
What Is It Good For?
The first thing I did was look for existing MCP servers I could connect to and see how they might be useful. The easiest place to start was the Claude desktop app, as it's the most straightforward client to connect an MCP server to. I began by setting up a weather server using the sample instructions. This allowed me to ask Claude for a weather forecast, and it would retrieve and provide the data. Nice, but not groundbreaking—any LLM with web search capabilities could do the same.
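The sample instructions boil down to pointing Claude's desktop app at the server in its configuration file. Assuming the quickstart's layout (a `uv`-run Python script), the entry looks roughly like this; the directory path is a placeholder you'd replace with your own:

```json
{
  "mcpServers": {
    "weather": {
      "command": "uv",
      "args": ["--directory", "/ABSOLUTE/PATH/TO/weather", "run", "weather.py"]
    }
  }
}
```

Claude launches the command listed here as a subprocess and speaks the protocol with it over stdio, which is why a wrong path or command fails silently at startup.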
Next, I moved on to installing the Slack MCP server. To test its usefulness, I gave Claude the following prompt:
"I used to check the Product Team channel daily to see if there were any updates I needed to address. Could you do that for me and summarize any pending action items?"
After a few searches and attempts—done entirely on its own—Claude provided a summary. It informed me that I had missed five messages and that there was one key action item requiring my attention. Now that was starting to feel genuinely useful.
Taking it a step further, I then asked Claude:
"Could you reply in the thread to the message that requires my attention with my response?"
I provided my response, and Claude handled the rest. In just about three minutes, I reviewed my morning updates in a Slack channel, identified key action items, and even responded—something Claude also helped me formulate. Now this was productivity.
Exploring MCP Servers and Their Potential
Browsing through different MCP server directories, you’ll find a variety of integrations—GitHub, Linear, AWS, Notion, and more. Think of it as a library of APIs, but one that doesn’t require you to write code or fully understand API contracts. Instead, your AI assistant can integrate with these services, fetch data, and take action on your behalf.
Consider some practical use cases:
"I have this PRD in Notion—here’s the link. Can you review it and suggest improvements?"
"This is an EPIC in JIRA, and we're building a test plan for it. Can you suggest five test cases for each user story?"
MCP removes the need to navigate APIs manually: you simply issue natural language commands. It makes AI-driven workflows significantly more accessible and efficient.
Building My First MCP Server
When speaking with key customers about interacting with AI agents, a common theme kept emerging: meet developers where they work. We repeatedly heard requests for VS Code extensions, plugins, or even Slack apps—tools that seamlessly integrate into existing workflows.
The MCP server presented itself as a potential quick win, allowing us to connect Cursor and support GitHub Copilot while integrating our platform into these developer-centric tools. Imagine interacting with our platform through simple queries like:
"What was the latest PR merged in the payment repository?"
"Why is this service failing to meet the desired security standards? Suggest action items."
That alone is useful, but doing it directly from your IDE while connecting to other servers? Now that’s a real productivity boost.
Figuring Out Where to Start
Anthropic provides a quick start guide—which, to be honest, isn’t all that quick to start with. It took me some time to go through example repositories, review open-source code, and have multiple conversations with both Claude and ChatGPT before I finally settled on an approach.
I decided to use two example repositories as a foundation and worked with Claude to generate the initial boilerplate code. From there, I ensured my server's structure followed best practices, particularly for a Python-based MCP server.
Writing the core logic for the MCP server, given the boilerplate, wasn’t particularly complex. I even leveraged the recently published PyPort SDK, which made the process fairly straightforward.
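The core logic really is mostly plumbing: expose a handful of functions as named tools that the client can discover and call. Since the server code itself isn't shown here, this is a stripped-down, dependency-free sketch of that registration-and-dispatch shape; a real server would use the official `mcp` Python package and its stdio transport, and the tool name and canned response below are hypothetical.

```python
from typing import Callable, Dict

# Minimal stand-in for the decorator-based tool registration used by
# MCP server SDKs. This is a sketch, not the official `mcp` package.
TOOLS: Dict[str, Callable] = {}

def tool(name: str):
    """Register a function as a callable tool under the given name."""
    def register(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return register

@tool("get_latest_pr")
def get_latest_pr(repository: str) -> dict:
    # Stubbed response; a real implementation would query the platform
    # (e.g. through an SDK such as PyPort) instead of returning canned data.
    return {"repository": repository, "title": "Fix checkout flow", "number": 128}

def call_tool(name: str, **kwargs) -> dict:
    """Dispatch a tool call the way an incoming MCP client request would."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```

The client never imports your functions; it only sees the registered names and their schemas, which is what lets an IDE like Cursor call the same tools your script defines.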
The only real challenge came from the fact that the AI agent now operates asynchronously. Instead of a single synchronous POST request to a REST API, the flow involves triggering the agent and then waiting for its result. Even with this added complexity, the implementation itself was still manageable and took no more than 20 minutes to complete.
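That trigger-then-poll pattern can be simulated in a few lines. Everything here is illustrative: the run id, the delay, and the canned answer stand in for whatever the real agent backend returns.

```python
import asyncio

_RESULTS: dict = {}

async def trigger_agent() -> str:
    """Simulate kicking off an asynchronous agent run; returns a run id."""
    await asyncio.sleep(0)  # stand-in for the POST that starts the run
    return "run-42"

async def fake_agent(run_id: str) -> None:
    """Simulated agent that finishes after a short delay."""
    await asyncio.sleep(0.05)
    _RESULTS[run_id] = "3 open PRs in payments"

async def poll_result(run_id: str, interval: float = 0.01, timeout: float = 1.0) -> str:
    """Poll until the agent's result is ready or the timeout expires."""
    elapsed = 0.0
    while elapsed < timeout:
        if run_id in _RESULTS:
            return _RESULTS[run_id]
        await asyncio.sleep(interval)
        elapsed += interval
    raise TimeoutError(f"agent run {run_id} did not finish in {timeout}s")

async def main() -> str:
    run_id = await trigger_agent()
    asyncio.create_task(fake_agent(run_id))  # agent runs in the background
    return await poll_result(run_id)

answer = asyncio.run(main())
```

The timeout matters in practice: an MCP tool call that never returns leaves the client hanging, so bounding the wait and surfacing an error is the safer default.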
The real challenge? Testing and deploying it.
Testing and Debugging
Following the MCP server documentation, I configured the server to run and connected it to either Claude or Cursor. However, when I first attempted this, I kept running into errors—the server simply wouldn't start.
One key difference between the two tools is Claude’s built-in logging interface, which stores logs from the initialization process. Cursor doesn’t provide the same level of visibility. This meant I spent about an hour manually copying logs from Claude, pasting them into Cursor, and debugging the issues.
Ultimately, the problem boiled down to how the MCP server handles authentication arguments and environment variables. I later discovered that many others struggle with this as well, and some directories are even trying to bypass the issue by offering managed MCP servers that take care of authentication and configuration for you—like Composio.
How did I finally figure out the solution? Using OpenAI's deep research feature, which identified a very niche suggestion in a Reddit thread.
Asking Claude or Cursor a question and seeing it successfully trigger the AI agent we built was an enormous feeling: a mix of excitement and relief. It was clear validation that everything was finally coming together.
With the server working, I was ready to publish it. However, it turned out to be more time-consuming than expected—taking even longer than the actual building and testing process.
Publishing the MCP Server
When looking at how other MCP servers were built, I decided to package mine using uv, a fast Python package manager. I already knew the installation command, but I relied on Claude to walk me through the steps to properly package and publish it.
The process was relatively straightforward:
Create a PyPI account.
Register the package.
Bundle it with a uv.lock file.
Run a series of packaging and publishing commands.
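In terms of commands, the sequence looked roughly like this. Exact flags vary by uv version, and publishing requires a PyPI API token (uv reads it from `UV_PUBLISH_TOKEN`, as far as I can tell), so treat this as a sketch rather than a copy-paste recipe:

```shell
# Lock dependencies so installs are reproducible (produces uv.lock)
uv lock

# Build the source distribution and wheel into dist/
uv build

# Upload to PyPI (expects an API token, e.g. via UV_PUBLISH_TOKEN)
uv publish
```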
That was it—or so I thought.
Similar to the testing challenges, when running commands via Claude and Cursor, the server failed to initialize due to authentication and environment variable issues. After hours of debugging logs and iterating, I decided to switch from environment variables to passing arguments directly. This was mainly due to Cursor's limitations in handling environment variables during installation (which Composio, for example, solves).
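The switch itself is small: accept the credential as a command-line argument first, and fall back to the environment variable for clients that do forward it. The flag and variable names below are illustrative, not the server's real interface.

```python
import argparse
import os

def parse_credentials(argv=None) -> str:
    """Resolve the API token: CLI argument first, environment variable second.

    Passing the token as an explicit argument sidesteps clients that don't
    forward environment variables to the spawned server process.
    """
    parser = argparse.ArgumentParser(description="hypothetical MCP server")
    parser.add_argument("--api-token", help="platform API token")
    args = parser.parse_args(argv)

    token = args.api_token or os.environ.get("API_TOKEN")
    if not token:
        raise SystemExit("missing credentials: pass --api-token or set API_TOKEN")
    return token
```

The downside is that arguments can leak into process listings and client config files, so for anything sensitive the environment-variable path is still worth keeping as the preferred option where the client supports it.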
Finally, after several iterations, it worked! The MCP server was now published, and others could install it.
But publishing it for myself was one thing—getting others to successfully use it was another challenge entirely.
To ensure it was production-ready, I worked with two colleagues to test the installation process using the same commands I had documented.
First test: A small typo in the installation instructions caused an issue, but after fixing it, everything worked smoothly.
Second test: This one was more challenging. The machine already had other package managers and multiple Python versions installed, which led to compatibility issues.
To resolve this, I turned to Claude once again and created a shell script that automates the setup, making it easier for others to install and configure the server without running into environment conflicts.
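The actual script isn't reproduced here, but the useful idea generalizes: fail fast on interpreter problems before installing anything. A minimal sketch of that check, with an assumed version floor of 3.9 and the install step left as a comment:

```shell
#!/bin/sh
# Hypothetical setup helper: verify the interpreter before installing,
# so version and package-manager conflicts surface early.
set -e

# Find a usable Python
PY="$(command -v python3 || true)"
[ -n "$PY" ] || { echo "error: python3 not found" >&2; exit 1; }

# Enforce a minimum version (3.9 is an assumed floor here; use the
# package's real requirement)
if "$PY" -c 'import sys; sys.exit(0 if sys.version_info >= (3, 9) else 1)'; then
  STATUS=ok
else
  echo "error: Python 3.9+ required" >&2
  exit 1
fi

# A real script would continue by creating an isolated virtual
# environment and installing the package into it, e.g.:
#   "$PY" -m venv .venv && . .venv/bin/activate && pip install <package>
echo "environment check passed"
```

Creating a dedicated virtual environment is what actually prevents the conflicts described above, since the server's dependencies never touch the system Python.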
With that, I could finally consider my work done—our MCP server was built, tested, published, and ready for others to use.
Lessons learned
This experience reinforced the power of AI-assisted development and how much ownership product managers can take over technical solutions. My main takeaways include:
AI is not a magic bullet: Complex problems still require deep research, trial and error, and critical thinking.
Testing and deployment were the hardest parts: Cursor offered little support, requiring multiple iterations before success.
Making it work for others is a different challenge: Enterprise users expect stability, and ensuring compatibility across environments is not trivial.
Despite the challenges, this was a fun and rewarding experience, proving that even new, uncharted technologies can be tackled and brought to life.