#52 - Wearing a Developer’s Hat for a Few Weeks
Lessons from developing an internal tool to integrate systems
Introduction
Some time ago, I wrote about why product managers need to code. However, it wasn’t a deep dive; it was more of a tip or a suggestion. Recently, I took on a task for an internal tool improvement and built it from scratch. From this experience, I learned a lot about what it means to develop something, and how to better understand developers. In this post, I will share a deep dive into the process I went through, as well as the lessons I learned.
Task background
To keep things simple, the task was about integrating data between two third-party services. In one system, our product team manages customers’ feature requests and general feedback. The other is our company’s CRM that houses all our customer data. The goal was to create better views and a smoother management experience for feature requests in the context of customers.
For example, being able to see, for each company, which feature requests they submitted, as well as identifying the top feature requests by total company votes and ARR (annual recurring revenue). This context cannot be achieved easily—if at all—in the existing product where we manage our feature requests. The task was about integrating the data so we could create these views.
Lessons learned
As the post will explain, this project took longer than I originally planned (and so did writing about it). For that reason, I’m including the lessons I learned upfront:
Development is Iterative - Each cycle revealed more about the scope and limitations, and allowed me to refine my estimates. What I imagined before starting was vastly different from what it became by the end.
Estimation vs. Reality - Estimating is easy; being accurate is almost impossible. Focusing on the scope, goals, and desired state, and making tradeoffs along the way, was the best way to influence the timeline.
Balancing Perfection and Delivery - Owning a project makes you want to perfect it, but it’s a challenge to balance releasing something fast that works—even with potential flaws—against the desire for perfection.
POC
The POC stage was about validating what was possible and what it would require from a technical point of view. Since I needed to work with two third parties, the first step was to obtain access to each, understand their API documentation, and see if I could interact with their APIs. Luckily, for one of them, I was already familiar with its API, so that part was pretty straightforward. For the CRM, I had no prior experience. I needed to work with our CRM manager to get access and some basic knowledge, and then, using GPT and their API documentation, I was able to authenticate with their API.
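For illustration, validating API access at the POC stage can be as small as a single authenticated request. The sketch below shows the idea in Python; the base URL, endpoint, and environment variable name are placeholders rather than the actual CRM's API.

```python
import os
import requests

# Hypothetical base URL; the real CRM endpoint and auth scheme differ.
CRM_BASE_URL = "https://api.example-crm.com/v1"
API_TOKEN = os.environ["CRM_API_TOKEN"]  # kept in an environment variable, never in code


def check_authentication() -> bool:
    """Call a lightweight endpoint to confirm the token is valid."""
    response = requests.get(
        f"{CRM_BASE_URL}/companies",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"limit": 1},  # fetch a single record just to prove access
        timeout=10,
    )
    return response.status_code == 200


if __name__ == "__main__":
    print("Authenticated!" if check_authentication() else "Authentication failed.")
```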
The next step was to identify what I actually needed to do: what data to map, which objects to handle, how the data related to one another, and how to address duplication issues, among other things. At this point, I didn’t know all the open issues or questions, but that was acceptable for a proof of concept, where the main goal was simply to confirm that I could integrate with the two APIs and move data from A to B.
I also looked for an SDK for each API and successfully found one for the CRM. That came in handy when I needed to create a more complex integration. I finished this step with a better understanding of feasibility and started to get a sense of the scope of the task.
Technical design
At the POC stage, everything was local in my own repository and didn’t include any integration with existing infrastructure. In reality, the integration needed to sit somewhere, which meant a service had to host it and run it on the schedule we decided. The goal of the technical design phase was to determine the best way to integrate the two systems within our current infrastructure, now that I had confirmed they could be integrated.
I started by getting access to our development repository and discussing the best flow with the DevOps team. They suggested creating a specific folder for the code and using a GitHub workflow. This setup also dictated how the code should be built and what it should consume and output, and gave me insight into the troubleshooting and optimization I might need to add down the line to detect issues.
This step was relatively short—taking a little more than a day. Looking back, I believe I was only about 10–15% of the way through completing the entire task at this point.
The first development iteration
Now the fun begins. I got access to the repository, created my own branch, and started developing the integration. I won’t go over every step in detail, but I will note that I relied heavily on Cursor—the Chat, the Composer, and the Agent—which was probably the main enabler for completing this task in a reasonable timeframe.
The scope of the first iteration was the “happy flow.” I didn’t focus on performance, optimization, error handling, or duplication yet. Instead, I just wanted to validate that what worked in the POC would also work in the real environment. Concretely, I took around 10 feature requests from the feature request system and synced them one by one to the CRM without any optimization, mostly relying on debug logs.
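A happy-flow sync of this kind boils down to a simple loop: fetch a small batch from the feature request system, push each item to the CRM one by one, and log as you go. The sketch below illustrates that shape; the client objects, methods, and field names are hypothetical stand-ins for the real SDKs.

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("sync")


def sync_happy_flow(feature_request_client, crm_client, limit: int = 10) -> None:
    """Pull a small batch of feature requests and push each one to the CRM, one by one."""
    batch = feature_request_client.list_feature_requests(limit=limit)
    for fr in batch:
        logger.debug("Syncing feature request %s: %s", fr["id"], fr["title"])
        crm_client.create_feature_request(
            title=fr["title"],
            description=fr.get("description", ""),
            votes=fr.get("votes", 0),
        )
        logger.debug("Created CRM record for %s", fr["id"])
```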
Looking back, this step was one of the hardest because it established the infrastructure and assumptions for future use cases. The decisions I made here shaped later iterations; I could have refactored further down the line, but that would have been harder as the code grew more complicated than I expected.
Identifying missing use cases
Once the integration was up and running, I started to notice various missing use cases. A feature request might already exist in the CRM, and I might need to update it rather than just create a new entry. Simply creating a feature request wasn’t enough—I also needed to associate it with the company and the user who submitted it. Sometimes, when pulling a feature request from the third party, certain fields came back null, which caused errors.
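To show how these use cases change the code, here is a rough upsert sketch: check whether the record already exists, normalize null fields before sending them, and associate the result with a company and contact. The client methods and field names are hypothetical, not the real CRM SDK.

```python
def upsert_feature_request(crm_client, fr: dict) -> str:
    """Create the feature request in the CRM if it is new, otherwise update it."""
    # Fields can come back as None from the source system, so normalize them first.
    payload = {
        "title": fr.get("title") or "(untitled)",
        "description": fr.get("description") or "",
        "votes": fr.get("votes") or 0,
    }

    existing = crm_client.find_feature_request(external_id=fr["id"])
    if existing:
        crm_client.update_feature_request(existing["id"], payload)
        action = "updated"
    else:
        existing = crm_client.create_feature_request(external_id=fr["id"], **payload)
        action = "created"

    # Associate the record with the submitting company and user when we can resolve them.
    submitter_email = fr.get("submitter_email") or ""
    if "@" in submitter_email:
        crm_client.associate_with_company(existing["id"], domain=submitter_email.split("@")[-1])
        crm_client.associate_with_contact(existing["id"], email=submitter_email)

    return action
```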
Moreover, some parts of the code were not optimized. Running on 10 feature requests took about half a minute, which could translate to hours for an entire database. This was just the tip of the iceberg, and I didn’t have any organized requirements document. Instead, I shared daily updates via a Slack channel, always including the next steps or tasks I identified. The more I worked, the more additional scope I found.
The first iteration was successful in that it worked, proving I had the skills to move forward. However, it was clear that I needed to continue implementing these missing use cases.
The second development iteration
The second iteration focused on implementing the missing use cases and preparing the integration for production readiness. This involved optimizing network requests to avoid rate limits, properly logging errors, and adding visibility into the process. If I’m integrating two third-party tools daily without knowing what actually happens—even if everything appears successful—it’s bound to become a problem later.
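One common way to stay under rate limits is to wrap every outgoing request in retry-and-backoff logic, which captures the spirit of the optimization work in this phase. The helper below is a generic sketch rather than the exact implementation.

```python
import logging
import time

import requests

logger = logging.getLogger("sync")


def request_with_backoff(method: str, url: str, *, max_retries: int = 5, **kwargs) -> requests.Response:
    """Send an HTTP request, backing off and retrying when the API rate-limits us."""
    kwargs.setdefault("timeout", 30)
    for attempt in range(max_retries):
        response = requests.request(method, url, **kwargs)
        if response.status_code != 429:  # not rate-limited: fail loudly or return as-is
            response.raise_for_status()
            return response
        wait = float(response.headers.get("Retry-After", 2 ** attempt))
        logger.warning("Rate limited on %s, retrying in %.1fs (attempt %d)", url, wait, attempt + 1)
        time.sleep(wait)
    raise RuntimeError(f"Gave up on {url} after {max_retries} rate-limited attempts")
```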
During this phase, while I still relied heavily on Cursor, I noticed its limitations as the codebase grew larger and more complex, containing multiple files and modules. For example, when I wanted to implement a new flow, Cursor often rushed into suggestions that didn’t align with my technical vision. To address this, I had to revisit my prompting techniques. I made sure to ask it to thoroughly analyze the code, understand the business logic, and follow the flow as I intended it. When needed, I corrected it and refined the prompts until we converged on an implementation.
As the code and business logic became more intricate, I had to get more involved in verifying everything worked as expected. This included regularly running end-to-end tests and executing the code from scratch to ensure nothing broke. By the time I completed this step, I believe I was 70–80% done. The integration was running, optimized, and included monitoring and logging for processing larger batches of feature requests end-to-end.
Unit tests
Unit tests are designed to test individual modules or functions in isolation, without running the full flow or interacting with third-party services. For example, I had to write business logic to extract the domains of users who voted on a feature request, filter for unique domains, and sync those to the CRM. This is a self-contained module that could be tested independently of the rest of the code.
For such cases, I built unit tests. I had to consider various scenarios: multiple domains, a single domain, missing email fields, and more. This ensured the code was resilient and handled errors while adhering to the business logic. While I manually tested these cases earlier, unit tests served as a fail-safe, catching unexpected issues that could arise later.
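Here is a simplified stand-in for that module, along with pytest-style tests covering the scenarios above (multiple domains, deduplication, and missing email fields). The function name and voter fields are illustrative.

```python
def extract_unique_domains(voters: list[dict]) -> set[str]:
    """Return the unique email domains of the users who voted on a feature request."""
    domains = set()
    for voter in voters:
        email = voter.get("email")
        if not email or "@" not in email:
            continue  # skip voters with missing or malformed emails
        domains.add(email.split("@")[-1].lower())
    return domains


# Unit tests (pytest style) for the scenarios described above.
def test_multiple_domains():
    voters = [{"email": "a@acme.com"}, {"email": "b@globex.com"}]
    assert extract_unique_domains(voters) == {"acme.com", "globex.com"}


def test_single_domain_is_deduplicated():
    voters = [{"email": "a@acme.com"}, {"email": "b@acme.com"}]
    assert extract_unique_domains(voters) == {"acme.com"}


def test_missing_email_fields_are_skipped():
    voters = [{"email": None}, {}, {"email": "c@acme.com"}]
    assert extract_unique_domains(voters) == {"acme.com"}
```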
Although I wrote unit tests throughout the process, at this point, I paused to ensure that all internal functions and modules were properly tested. I also added some basic testing for external interactions by mocking third-party services. While mocking is inherently limited, it allowed me to have at least some safeguards in place. This became particularly important as the codebase grew more complex, and I wanted to ensure I wasn’t overlooking anything.
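A minimal example of that mocking approach, reusing the hypothetical upsert function from the earlier sketch: the CRM client is replaced with a MagicMock, so the test verifies the logic without ever touching a real API.

```python
from unittest.mock import MagicMock

# Assumes the upsert sketch from earlier lives in a module named sync.py (hypothetical).
from sync import upsert_feature_request


def test_creates_record_when_none_exists_in_crm():
    crm_client = MagicMock()
    crm_client.find_feature_request.return_value = None  # pretend the CRM has no match

    action = upsert_feature_request(crm_client, {"id": "FR-1", "title": "Dark mode"})

    crm_client.create_feature_request.assert_called_once()
    assert action == "created"
```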
Finalizing the desired scope
The final step was about clarifying the deliverables based on everything completed so far. This meant solidifying decisions such as running the integration in a GitHub Workflow, printing a summary of how long the process took, breaking down the actions performed, and sending this summary to a Slack channel. It also included defining the scope we could handle, understanding the system’s limits, specifying which fields we would sync, identifying the environments we would work on, and so on.
While most of this had already been implemented, this step was about scoping and finalizing what would eventually reach production. It gave me clarity and confirmed that I was around 80–90% ready with the code.
The third development iteration
The third development iteration was simpler and shorter, with what I can only describe as a miracle. When I ran the dry run for the first time, it encountered a bug or two, which I managed to solve within two or three hours. And that was it—it worked. I was quite amazed at how smoothly it went.
I repeated the process several more times, using different variations, and it continued to perform as expected. At this point, it was time to finalize everything and align it with the infrastructure. This included building the GitHub workflows, ensuring everything worked locally, and beginning preparations for production deployment.
End-to-end testing
At this stage, I wanted to test the integration from a local instance using the production environment. My assumption was that if it worked in the sandbox, it should work in production as well. Unsurprisingly, it failed. There were notable differences between the sandbox and production environments, such as discrepancies in IDs, schema variations, and differences in how the environments were structured.
To address these issues, I worked closely with our CRM manager to resolve the discrepancies. In some cases, I had to adjust the code to account for differences between the two environments. Each fix was accompanied by linter checks, unit tests, and end-to-end testing on both environments.
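One straightforward way to keep sandbox and production differences out of the business logic is to isolate them in per-environment configuration, along these lines (the IDs, field names, and environment variable are invented for illustration):

```python
import os

# Hypothetical per-environment settings; the real IDs and field names differ.
ENV_CONFIG = {
    "sandbox": {
        "base_url": "https://sandbox.example-crm.com/v1",
        "feature_request_object_id": "2-1111",
        "votes_field": "total_votes_sandbox",
    },
    "production": {
        "base_url": "https://api.example-crm.com/v1",
        "feature_request_object_id": "2-2222",
        "votes_field": "total_votes",
    },
}


def load_config() -> dict:
    """Pick the environment from an environment variable, defaulting to sandbox."""
    env = os.environ.get("SYNC_ENV", "sandbox")
    return ENV_CONFIG[env]
```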
While this process took longer than expected, by the end of it, the main code was stable and ready for the next step.
Bug fixes
At this point, I faced a dilemma—a trade-off between perfection and speed. On one hand, I wanted the code to be perfect: well-structured, clear, simple, and thoroughly tested. On the other hand, there was a push for a shorter time to market. I realized this is a common challenge for engineers. Once a feature is released, attention typically shifts to the next task, and unless there’s a critical bug, the released code might not get revisited.
Despite the pressure, I chose to invest time in improving the code during this phase. I focused on making the code more readable, ensuring there were enough logs to identify and fix potential bugs, and testing thoroughly. This phase also revealed bugs that weren’t apparent earlier. For example, just a day before the code was set to reach production, I discovered a critical issue: when updating a specific object, it didn’t work as expected and caused undesired behavior in the CRM.
Through this experience, I learned that finding bugs early is often more difficult than fixing them. This phase was dedicated to properly testing, fixing bugs, and covering as much ground as possible to ensure the code was production-ready.
Monitoring and visibility
Monitoring—particularly visibility into what the code is doing—was a concern throughout the process, but at this point I adopted a more structured approach to ensure the application’s actions were well documented and communicated. Ultimately, this meant refactoring parts of the code so the statistics were accurate, and building tracking and reporting of the integration’s operations into the system.
I implemented a feature to send detailed summaries as Slack messages. These messages included key metrics such as the time taken for the process, the number of API requests, and the objects identified, created, or updated. Without this step, no one would have known what the application was doing, leading to potential assumptions that it wasn’t functioning properly.
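Sending that kind of summary can be as simple as posting to a Slack incoming webhook at the end of a run. The sketch below assumes a webhook URL stored as a secret; the stat names are illustrative.

```python
import os

import requests


def send_sync_summary(stats: dict, duration_seconds: float) -> None:
    """Post a run summary to a Slack channel via an incoming webhook."""
    webhook_url = os.environ["SLACK_WEBHOOK_URL"]  # configured as a repository secret
    message = (
        f"Feature request sync finished in {duration_seconds:.0f}s\n"
        f"• API requests: {stats.get('api_requests', 0)}\n"
        f"• Records created: {stats.get('created', 0)}\n"
        f"• Records updated: {stats.get('updated', 0)}\n"
        f"• Errors: {stats.get('errors', 0)}"
    )
    requests.post(webhook_url, json={"text": message}, timeout=10)
```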
Adding visibility not only provided transparency but also gave me valuable insights into the system’s behavior—such as update frequency, latency, and the overall performance of the code.
Preparation for production
In preparation for production, I performed a final full sync between the systems using the production environment and shared the results with key stakeholders, including the CRM manager, product team, and GTM teams, for review. While the code wasn’t technically on production yet, the data was live. I ran the next couple of syncs locally from my computer to catch errors early before fully transitioning to production.
This phase highlighted a familiar challenge for me: navigating the process of integrating code into the main branch (where the production code lives), the Pull Request process (requesting approval for the code integration), and understanding where everything fits. I had to ask a team leader where to place the code, realize I lacked the necessary permissions for the repository, request those permissions, and deal with a new repository structure that differed from what I was used to.
Once the code was in the correct location, I built the GitHub workflow file (a format to automate tasks within the repository). Here, I encountered another challenge—there’s no effective way to test a GitHub workflow file until it’s already on the main branch. This meant going through the pull request review process. Since this was for an internal tool that didn’t impact customers, the review process was relatively quick and lean. However, I know many developers face lengthy review cycles with numerous comments, sometimes taking days or even weeks. I got lucky this time—my code was approved and merged into production.
Even though the code was on production, it didn’t work perfectly yet. I spent a few more hours iterating, ensuring everything functioned as intended, including secrets management, Slack notifications, and the complete working process.
Launch
Launching the new process and integration was straightforward. An automated process now runs every few hours, syncing data between the two third-party systems. It also sends a clear message summarizing its actions, making monitoring effortless. Internal teams quickly noticed the improvement and provided very positive feedback, along with suggestions for further enhancements. It was a gratifying moment to see the results recognized and appreciated.
Monitoring for performance and stability
The maintenance phase has just begun, but for this type of internal tool, I’m confident we have the basics in place to monitor performance and stability. I’ve also started maintaining a to-do list of improvements I’d like to make in the code. Whether I’ll get to those items remains uncertain—probably about as likely as any other developer addressing their personal backlog of desired code refinements.
That said, this project has been a great experience. Not only did it allow me to champion and deliver a successful side project, but it also had a meaningful impact on the broader teams.