How I Wrote a Million Lines of Code Without Being a Developer
The FeedSquad origin story: building a full SaaS product using AI coding assistants without a traditional engineering background.
FeedSquad's codebase crossed a million lines. The person who built it — me — isn't a developer. Not by training, not by career, not by any traditional definition.
I'm a marketer who learned to direct AI coding assistants well enough to build a production SaaS product that real customers use daily.
This isn't a flex. It's a data point about where we are in the AI development timeline, and what it means for founders who have ideas but don't have engineering teams.
What "Not a Developer" Actually Means
Before starting FeedSquad: I could write basic HTML and CSS. I understood how APIs worked conceptually. I'd used WordPress and no-code tools for marketing sites. I had zero experience with React, TypeScript, databases, authentication systems, payment processing, or any of the infrastructure that a real SaaS product requires.
I wasn't starting from absolute zero; I understood how software worked at a surface level. But the gap between "can customize a WordPress theme" and "can build a multi-tenant SaaS application" is enormous.
AI coding assistants bridged that gap. Not by magic, and not without pain, but they bridged it.
I'm not the only one leaning on them. The 2025 Stack Overflow Developer Survey found 84% of developers are using or planning to use AI tools, up from 76% the year before. JetBrains' State of Developer Ecosystem 2025 puts regular use at 85%. The tools that made my build possible are now standard issue for actual engineers.
The First Version Was Terrible
Let's not romanticize this. The first version of FeedSquad was a mess.
The AI assistant generated code that worked — in the sense that buttons did things when you clicked them — but the architecture was a disaster. No consistent patterns. Duplicated logic everywhere. Security practices that would make an actual developer cringe.
It took three rewrites over six months to get to something resembling professional-quality code. Each rewrite happened because I learned enough to recognize what was wrong with the previous version.
This is the key insight: AI coding assistants didn't skip the learning curve. They compressed it. Instead of spending years learning fundamentals before building anything useful, I learned by building something useful and fixing it iteratively.
How AI-Assisted Development Actually Works
The day-to-day process looks nothing like what most people imagine. It's not "describe your app and get code." It's closer to being an extremely junior developer with a very patient, very fast senior engineer sitting next to you.
A typical session:
- I describe what I want in plain language: "I need an API endpoint that takes a post ID and reschedules it to a new time."
- The AI generates code, usually 80% correct on the first try.
- I run it. Something breaks. I paste the error back. It fixes it. Repeat anywhere from once to twenty times.
- The feature works but the code is messy, so I ask for a refactor: improve the structure, add error handling, follow the patterns used elsewhere in the codebase.
- I review the refactored code, now understanding it better than before.
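For concreteness, the rescheduling example might boil down to logic like this. This is a hedged sketch in TypeScript, not FeedSquad's actual code: `reschedulePost`, the in-memory store, and the validation rules are all invented for illustration. It also shows the kind of input validation an assistant often omits unless you ask for it.

```typescript
// Illustrative sketch only: a real endpoint would sit behind a web
// framework and a database. Names and shapes here are invented.

type Post = { scheduledAt: string };
type Store = Record<string, Post>;

type Result =
  | { ok: true; post: Post }
  | { ok: false; error: string };

// Reschedule a post to a new time, validating the ID and timestamp
// before touching anything.
function reschedulePost(store: Store, postId: string, newTime: string): Result {
  const post = store[postId];
  if (!post) {
    return { ok: false, error: `unknown post id: ${postId}` };
  }
  const parsed = Date.parse(newTime);
  if (Number.isNaN(parsed)) {
    return { ok: false, error: `invalid timestamp: ${newTime}` };
  }
  if (parsed < Date.now()) {
    return { ok: false, error: "cannot schedule in the past" };
  }
  post.scheduledAt = new Date(parsed).toISOString();
  return { ok: true, post };
}
```

The first AI-generated draft of something like this typically skips the two validation branches; recognizing that they're missing is exactly the review skill the loop above teaches.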
That's one feature. Multiply by hundreds, and you have a product.
Worth noting: the productivity story isn't one-way. A randomized trial from METR in 2025 found that experienced open-source developers working in codebases they already knew were actually 19% slower with AI, even though they believed they were 20% faster. The gap between perceived and measured productivity is real. My honest read of my own work: AI dramatically expanded what I could start, while the senior-engineer verification step — the thing I didn't have — is still where most of the risk lives.
What I Actually Had to Learn
AI assistants handle syntax and boilerplate, but there's a layer of understanding you can't skip.
Architecture decisions. Should this be a server component or a client component? Where does the state live? How do you structure your database tables? AI can suggest options, but you need enough understanding to evaluate them.
Security. Authentication, authorization, input validation, CSRF protection, rate limiting. AI assistants will generate insecure code if you don't know to ask for secure code. I spent weeks just learning what security concerns exist for web applications.
Performance. The difference between code that works and code that works at scale. Database queries that load fine with 100 rows but choke on 10,000. Components that re-render unnecessarily and make the UI sluggish.
Debugging intuition. When something breaks, knowing where to look. Is it a frontend issue, a backend issue, a database issue, or an infrastructure issue?
System design. How different parts of the application connect. How data flows from the user interface through the API to the database and back. This mental model is essential, and it comes from building, breaking, and rebuilding.
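The performance point can be made concrete. The fix for a query that loads fine with 100 rows but chokes on 10,000 is usually to process in pages rather than loading everything at once. A minimal sketch, assuming a `fetchPage` function standing in for a real `LIMIT`/`OFFSET` (or cursor-based) database query; everything here is illustrative:

```typescript
type Row = { id: number };

// Stand-in for a real paginated database query.
function makeFetchPage(allRows: Row[]) {
  return (offset: number, limit: number): Row[] =>
    allRows.slice(offset, offset + limit);
}

// Walk the table one page at a time instead of loading every row
// into memory. Returns how many rows were processed.
function processAllRows(
  fetchPage: (offset: number, limit: number) => Row[],
  pageSize = 500
): number {
  let offset = 0;
  let processed = 0;
  while (true) {
    const page = fetchPage(offset, pageSize);
    if (page.length === 0) break; // no more rows
    processed += page.length;     // real work would happen here
    offset += pageSize;
  }
  return processed;
}
```

The unpaginated version looks identical in a demo with 100 rows, which is exactly why this class of bug is invisible until a real customer hits it.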
I estimate I spent 40% of my time writing code (via AI) and 60% learning the concepts needed to direct the AI effectively.
The Tools That Made It Possible
For writing code, the key is an assistant that understands your full codebase, not just the file you're working on. Context is everything. An assistant that can see your database schema, your existing patterns, and your type definitions produces dramatically better code than one working with a single file.
For debugging, the same assistants, but the key is feeding them complete error messages, relevant code context, and clear descriptions of expected vs. actual behavior.
For learning, asking the AI to explain what the code does and why, not just to write it. Every feature became a learning opportunity.
For code review, having the AI check its own output against a set of standards: "Check this code for security issues, performance problems, and violations of our project conventions." This catches a surprising number of issues.
The Numbers
Here's the honest accounting:
- Time to first working version: 3 months of full-time work
- Time to production-ready product: 9 months total
- Total rewrites: 3 major, dozens of minor
- Hours spent: Roughly 2,500 over the first year
- AI tool costs: €200–€400/month
Would a skilled developer have built it faster? Absolutely. Would the code have been better from day one? Without question.
But I couldn't have afforded a developer for 9 months. More importantly, building it myself, even mediated by AI, gave me a deep understanding of my own product that I wouldn't have if I'd handed specs to an engineering team.
What's Actually Possible Today
Let me be concrete about what a non-developer can realistically build with AI coding assistants in 2026.
Definitely buildable: Full-stack web applications with authentication. Payment processing and subscription management. Content management systems. API integrations with third-party services. Real-time dashboards and analytics. Multi-platform content scheduling tools.
Buildable but challenging: Real-time collaborative features. Complex data pipelines. Mobile applications. Products requiring heavy computation. Anything with strict compliance requirements.
Still too hard without real engineering skills: Low-latency systems. Custom ML model training and deployment. Infrastructure at true scale (millions of users). Hardware integration. Embedded systems.
The sweet spot for AI-assisted development is web applications with moderate complexity. That happens to cover a massive portion of the SaaS market.
The Mindset Shift
The hardest part wasn't technical. It was accepting that my role is director, not developer.
I don't write code. I direct an AI that writes code. The skill isn't programming — it's specification. Can you describe what you want precisely enough that an AI can build it? Can you evaluate the output and identify what's wrong? Can you break a complex system into small enough pieces that each piece is within the AI's capability?
Those are product skills, communication skills, and systems thinking. They're learnable by anyone who can think clearly about problems.
What This Means for Founders
If you have an idea for a software product and you're waiting until you can afford a developer or find a technical co-founder, reconsider.
The bar for building functional software has dropped. Not to zero — you'll still spend months learning and building. The capital requirement has gone from "raise money to hire engineers" to "dedicate time and a few hundred euros a month in tools."
That changes who can be a founder. It changes what ideas get built. And it changes the entire calculus of starting a company.
The million lines of code aren't the point. The point is that the barrier between "having an idea" and "building the thing" has never been lower.
If you've built a product this way and now need distribution, FeedSquad is the content layer I built for founders in exactly that spot.
Sources:
- Stack Overflow — 2025 Developer Survey: AI
- JetBrains — The State of Developer Ecosystem 2025
- METR — Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
Ready to create content that sounds like you?
Get started with FeedSquad — 5 free posts, no credit card required.