Building a SaaS with Claude Code
Coding with AI reminded me what good Engineering Management actually looks like.
👋 Hey, it’s Stephane. I share lessons and stories from my journey to help you lead with confidence as an Engineering Manager. To accelerate your growth, see: 50 Notion Templates | The EM’s Field Guide | CodeCrafters | Get Hired as an EM | 1:1 Coaching
Paid subscribers get 50 Notion Templates, The EM’s Field Guide, and access to the complete archive. Subscribe now.
Last week I had an idea for an app, as I do from time to time. This time I think it’s well worth building, so I am doing it with AI! How exciting!! During this process I am having so much fun, and I’ve realised that it kinda feels like managing a team, just a lot more hands-on.
Let me explain.
I made some initial technical decisions, picking a tech stack that I am familiar with and enjoy using. Other than that, I have created agents to help me with research, getting up-to-date documentation on the libraries I use, creating technical plans, and implementing them.
So… my job on this has shifted: it now depends very much on me communicating clearly what I want to see, reviewing code, and making small tweaks.
How?
My tech stack
Auth: Clerk
Backend: Convex (API, Database, File Storage, Cronjobs)
Email: Resend
Analytics: PostHog
Hosting: Vercel
Bot Protection: Vercel BotID
These choices came very naturally to me as they are tools that are easy to set up, offer exactly what I need and prioritise developer velocity.
Convex
The only one that I haven’t used in the past is Convex. And I decided to use it for pretty much my whole backend. API layer, database, file storage, background jobs… it’s all Convex.
Most engineers would maybe object and say: "What about separation of concerns? What about vendor lock-in? What about best-of-breed solutions?"
Here's my thinking: those concerns are very valid for teams, but for my side project, where the goal is maximally efficient AI-assisted development, they're mostly irrelevant. When I need to implement a new feature, I don't want to coordinate between four different services, each with their own authentication model, deployment pipeline, and error handling patterns. I want to write a function that works immediately.
Convex gives me that. Need real-time data? It's built in. Need file uploads? One function call. Need a cron job? Register it in a crons file. The entire backend surface area fits in my head, and more importantly, in an AI agent's context window.
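Here's a rough sketch of what that looks like in practice. The table and function names are made up for illustration, not taken from my actual app, but the shape is real Convex code: queries are live by default, file storage is one call, and cron jobs live in a single file.

```ts
// convex/bookings.ts (illustrative names, not my actual app)
import { query, mutation } from "./_generated/server";
import { v } from "convex/values";

// Real-time data: clients subscribed to this query re-render automatically
// whenever the underlying table changes.
export const listUpcoming = query({
  args: { after: v.number() },
  handler: async (ctx, { after }) => {
    return await ctx.db
      .query("bookings")
      .filter((q) => q.gt(q.field("startsAt"), after))
      .collect();
  },
});

// File uploads: ask Convex for a signed upload URL in one call.
export const generateUploadUrl = mutation({
  args: {},
  handler: async (ctx) => {
    return await ctx.storage.generateUploadUrl();
  },
});
```

```ts
// convex/crons.ts: a daily background job. This assumes an internal
// cleanup function exists at internal.bookings.cleanupExpired.
import { cronJobs } from "convex/server";
import { internal } from "./_generated/api";

const crons = cronJobs();
crons.daily(
  "clean up expired bookings",
  { hourUTC: 3, minuteUTC: 0 },
  internal.bookings.cleanupExpired
);
export default crons;
```

That's the whole backend story: a couple of TypeScript files, no extra services to wire together.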
Within a team or a company there are obviously a lot more considerations to think about.
How do I use AI?
Here's where it gets interesting. I'm not just using AI to generate code; I'm experimenting with a completely new way of working.
I am using Claude Code with subagents, which are awesome!
That’s my system for now and it works pretty amazingly.
I have created 4 Specialised Experts:
Product Expert: Gets a feature request, analyses it and breaks it down into user stories and requirements in a feature.md file
Frontend Expert: Takes a feature.md file and creates a detailed UI/UX implementation plan in a feature-tech-implementation.md file
Backend Expert: Takes the feature.md file, designs the backend implementation for it, and updates the feature-tech-implementation.md file
Copywriter Expert: Reviews all text and suggests improvements
Notice that none of the experts actually write any code.
The only thing I want from them is high-quality documentation on what needs to be done to implement a feature.
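If you're wondering what these experts actually are under the hood: a Claude Code subagent is just a markdown file with a bit of YAML frontmatter, living in .claude/agents/. Here's a simplified sketch of my Product Expert. It's trimmed and paraphrased for illustration, not my exact prompt:

```markdown
---
name: product-expert
description: Breaks a feature request into user stories and requirements. Use when planning a new feature.
tools: Read, Write, Grep
---

You are a senior product manager. When given a feature request:

1. Restate the problem and the target user in one paragraph.
2. Break the request into user stories with acceptance criteria.
3. List open questions and assumptions explicitly.
4. Write everything to feature.md. Do not write any application code.
```

The frontmatter tells Claude Code when to delegate to the agent; the body is the agent's system prompt. The instruction to only produce documentation is what keeps the experts from wandering off and writing code themselves.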
And this is where MCP servers come in.
MCP is an open protocol that standardizes how applications provide context to large language models (LLMs).
I mostly rely on Context7, which provides up-to-date documentation for LLMs and AI code editors. That means that for pretty much any library you can think of, you can ask Claude Code (or a subagent) to use this MCP server to fetch up-to-date documentation for it.
And this is the key detail in my implementation. I get this information and stick it in the feature-tech-implementation.md files.
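If you want to try this yourself, wiring Context7 into Claude Code is roughly a one-liner. This is the stdio setup from the Context7 README at the time of writing; double-check their docs in case the command has changed:

```bash
claude mcp add context7 -- npx -y @upstash/context7-mcp
```

Once it's registered, any agent in the project can call its tools to look up a library and pull current docs instead of relying on whatever version was in the model's training data.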
Once this file has input from the frontend and backend subagent, it’s time to review the implementation plan.
The review process
The most surprising part of this workflow is how much time I spend reviewing documentation rather than writing code. Each subagent produces detailed specs:
Features
Database schema
API endpoint definitions
UI component architecture plans
I read through these specs like I'm reviewing an important design document. Is the data model normalized correctly? Are we handling edge cases? Does this API design make sense for future features?
Only after I approve the specs do I feed them to Claude Code for implementation. And even then, I implement in phases, reviewing the output at each step.
Putting it into practice
Let me give you a concrete example. When I needed to add calendar integration to my app, I did the following:
Product Expert Subagent mapped out user stories: "As a user, I want to see my availability in real-time so I don't double-book meetings."
Backend Expert Subagent designed the data models: webhook handlers for calendar events, conflict detection algorithms, timezone normalization strategies.
Frontend Expert Subagent specified the UI components: availability grids, booking flows, confirmation states.
Copywriter Expert Subagent handled all the microcopy: error messages, confirmation emails, loading states.
The entire spec was 15 pages of detailed technical documentation. I reviewed it like I would any other design proposal, suggested changes, and got back revised specs. Only then did the implementation begin.
The result actually blew my mind. I had a calendar integration that worked correctly across timezones and handled edge cases gracefully, all implemented in a day instead of weeks.
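To give a flavour of the kind of logic the spec had to pin down, here's a simplified, illustrative version of a conflict check. This is not the real code from my app; it just shows one sane way to handle the timezone problem: normalise everything to UTC milliseconds before comparing, so local timezones can never produce a false "no conflict".

```ts
// Illustrative only, not the actual implementation.
type Slot = { startUtcMs: number; endUtcMs: number };

// Two half-open intervals conflict if each starts before the other ends.
function overlaps(a: Slot, b: Slot): boolean {
  return a.startUtcMs < b.endUtcMs && b.startUtcMs < a.endUtcMs;
}

function findConflicts(candidate: Slot, existing: Slot[]): Slot[] {
  return existing.filter((slot) => overlaps(candidate, slot));
}

// A proposed 10:00-10:30 UTC booking against an existing 10:15-11:00 UTC event.
const proposed: Slot = {
  startUtcMs: Date.UTC(2025, 0, 6, 10, 0),
  endUtcMs: Date.UTC(2025, 0, 6, 10, 30),
};
const busy: Slot[] = [
  { startUtcMs: Date.UTC(2025, 0, 6, 10, 15), endUtcMs: Date.UTC(2025, 0, 6, 11, 0) },
];
console.log(findConflicts(proposed, busy).length > 0); // true, this would double-book
```

The interesting part is that this kind of decision (half-open intervals, UTC everywhere) was made and reviewed in the spec, before any code existed.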
Building this way has oddly highlighted a few things that relate to team leadership:
Documentation is everything. When your "developers" are AI agents, you realize how many software problems come from unclear requirements. Agents force you to be precise about what you want.
Review cycles matter more than implementation speed. I catch architectural problems in the spec phase instead of during code review.
Context sharing is critical. All agents work from the same shared understanding of the project.
What I’ve learnt so far from this experiment
Invest heavily in documentation standards. Not because AI needs it (though it helps), but because clear specs prevent 90% of implementation problems before they start.
Create explicit review stages. Don't let implementation begin until architectural decisions are documented and approved. This isn't micromanagement; it's preventing expensive mistakes.
Invest in documentation quality. If you can't explain a feature clearly enough for an AI agent to implement it correctly, your human developers probably won't understand it either.
AI didn't just change how I build software. It reminded me of some crucial capabilities of great engineering management.
I would love to hear from you: what are you working on at the moment?
That’s all, folks!
See you in the next one,
~ Stephane
PS. Let me know if you’re interested in getting the subagents I have created so far.
Sometimes the hardest part isn’t coding; it’s defining the “what” and “why.”