What I'm Learning Building an Enterprise Platform Solo with AI
It’s 2am on November 6th, and my wife is asleep in bed next to me while I’m staring at a broken website I’ve spent six hours trying to fix.
The site won’t load. We’ve modified over a hundred files and I can’t identify which change broke everything. I’m about to discard an entire day’s work—eight hours of coding, testing, troubleshooting, all of it going into the digital trash.
We moved to a farm to keep our living costs down while I build this. I’ve said no to lucrative contracting opportunities that would have made the last seven months financially easier. And my wife—bless her patience—keeps asking one question I still don’t have a good answer for:
“Why are you doing this?”
Right now, at 2am, watching my careful plans unravel, I’m asking myself the same thing. Have I bitten off more than I can chew? Can AI-assisted development be trusted to build production-grade software, or am I creating a technical debt nightmare that will implode the moment real users touch it?
The Bet
Let me be clear about something: I still don’t have a good answer when my wife asks “Why are you doing this?”
But let me take you back to October 20th, 2025. That’s when I made my first commit after 30 weeks of planning—minus the 4-5 weeks I paused to take contract work just to keep the lights on.
Here’s the bet I’m making: Extensive planning plus AI collaboration can replace hiring a $300,000+ development team.
The goal? Launch by November 22nd. That’s 33 days from first commit. Not an arbitrary deadline—that’s when the Australian Crypto Convention happens, and it’s when I’d need to take contract work again if this doesn’t ship.
A bit about me: I’m a UX designer with technical chops from the Drupal & Joomla era (yes, PHP and all). I’ve been in the crypto world since 2017—investor, trader, venture capital operations, financial controller, launchpad co-founder. I know what founders need. I know how painful it is to find trustworthy partners in Web3. And I know that most directories are pay-to-win garbage that don’t actually help you evaluate whether a partner is worth your time.
So I planned. For 25 weeks, I built pixel-perfect prototypes in Polymet—every screen, every interaction, every user flow. I documented everything in a 200-page Product Requirements Document. I designed the entire database structure. I built as many AI ‘hallucination’ prevention methods into my plan as possible so I wouldn’t career off a cliff building the platform.
On October 20th, that 30 weeks of planning became real code. What happened next surprised me.
The First Wave of Wonder
Week 1: I’m moving obscenely fast—and I don’t trust it yet.
In the first three days, I ship 17 features. My original estimate was five days for this work. Integrations are flying: Stripe payment processing, file storage systems, email automation, webhook handlers. Things that would normally take weeks to set up and test are working in days.
Then comes the moment that makes me think this might actually work: I’m seeing my Polymet prototypes fed by real database data. Those beautiful designs I spent weeks perfecting? They’re not just static mockups anymore. They’re alive. They’re showing real partners, real offerings, real categories. The homepage loads. The navigation works. The filters respond.
I haven’t built a commercial website since 2011. I’m working with modern frameworks I barely understand conceptually—middleware, validation layers, monorepo architecture. But they’re just... working together. These tools are designed to compose, not fight each other. And it’s remarkable.
By the end of Week 1, the velocity looks real.
But back on Day 4, everything almost falls apart.
I discover that my database structure has over 100 hallucinated fields. The AI made up database columns that sounded completely plausible—things like “cost savings metrics,” “team collaboration scores,” “regional availability flags”—but they don’t exist anywhere in my carefully documented plan. They’re fiction. Convincing fiction, but fiction nonetheless.
My stomach drops. I’m four days in and already carrying technical debt that could be catastrophic. If I don’t catch this now, I’ll discover it weeks from now when I’m trying to build features that depend on these fake fields. I’ll have built an entire platform on a foundation that doesn’t exist.
Luckily, I catch it before anything is applied to the database. An hour to fix, instead of the days or weeks it would have cost if I’d discovered it later.
More importantly, I learn something critical: AI needs explicit instructions about what the source of truth is. My sprint planning documents were guiding the workflow, but my Product Requirements Document defined the actual structure. The AI treated both as equally authoritative. They weren’t.
So I create a prevention system. From this moment forward, every database change gets verified against the PRD before implementation. Field by field. Type by type. No assumptions.
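To make that concrete, here’s a rough sketch of what the verification looks like. The field names, types, and PRD format below are illustrative placeholders, not the platform’s actual schema:

```typescript
// A minimal sketch of a PRD verification check (hypothetical field names).
type FieldSpec = { name: string; type: string };

// Fields extracted from the PRD: the single source of truth.
const prdFields: FieldSpec[] = [
  { name: "partner_name", type: "text" },
  { name: "category_id", type: "uuid" },
];

// Fields the AI proposes to add in a migration.
const proposedFields: FieldSpec[] = [
  { name: "partner_name", type: "text" },
  { name: "team_collaboration_score", type: "integer" }, // hallucinated: not in the PRD
];

for (const field of proposedFields) {
  const match = prdFields.find((f) => f.name === field.name);
  if (!match) {
    console.error(`REJECT: "${field.name}" does not exist in the PRD`);
  } else if (match.type !== field.type) {
    console.error(`REJECT: "${field.name}" type mismatch (PRD says ${match.type})`);
  } else {
    console.log(`OK: "${field.name}"`);
  }
}
```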
And here’s the remarkable part: From October 23rd onwards, I have zero database structure issues. Not one. The prevention protocol works perfectly for every single feature I build after this.
By end of Week 1, I have two forms of proof:
1. The velocity is real—eight integrations that would traditionally take six weeks are done in under seven days.
2. The failures can be systematized—every mistake becomes a documented protocol that prevents recurrence.
That crisis on Day 4? It’s not a bug in the system. It’s how the system improves. Each failure teaches us both—me and my AI partner—how to collaborate better. And those lessons compound.
Every protocol I create makes the next feature faster to build. Every failure I document prevents the next person (future me) from making the same mistake. Every clarification I add to my context files means Claude Code asks fewer clarifying questions and makes fewer wrong assumptions.
I’m not just building a product. I’m building a methodology for working with AI.
And Week 1 just proved the methodology works.
The Meta-Game
By Week 2, that Week 1 realization has hardened into a working principle: I’m not just building a product—I’m building a methodology for working with AI.
Think about it: Claude Code has access to more context than any human developer I could hire. It can read my entire codebase, reference my 200-page PRD, recall patterns from thousands of open source projects. But it has no long-term memory between sessions. Every conversation starts fresh.
So the question becomes: How do I work effectively with a collaborator who has infinite knowledge but limited memory?
The answer: I build systems.
Every failure becomes a documented protocol. Every “we should never do this again” moment gets captured in a prevention checklist. Every architectural decision gets recorded with the reasoning behind it, so future me (and future Claude) can understand why we chose this path.
When I discover that copying my Polymet prototypes is faster than building UI from scratch, that becomes a pattern: Polymet-first development. When I catch the hallucinated fields crisis, that becomes a verification protocol. When linting errors slip through to production, that becomes an automated pre-commit workflow.
Each protocol makes the next feature faster to build. Each documented decision prevents the next round of questioning. Each refinement to my context files means fewer wrong assumptions and fewer wasted hours.
This is the compounding effect of good systems.
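To give one concrete example, here’s a minimal sketch of a pre-commit gate that refuses the commit if type checks or linting fail. The commands and file name are common defaults I’m assuming here, not necessarily what the project actually uses:

```typescript
// scripts/pre-commit-check.ts: a hypothetical pre-commit gate, run from a git hook.
import { execSync } from "node:child_process";

const checks = [
  { name: "TypeScript", cmd: "npx tsc --noEmit" },          // catch type errors
  { name: "ESLint", cmd: "npx eslint . --max-warnings 0" }, // catch lint errors
];

for (const check of checks) {
  try {
    execSync(check.cmd, { stdio: "inherit" });
    console.log(`PASS: ${check.name}`);
  } catch {
    console.error(`FAIL: ${check.name}. Commit blocked.`);
    process.exit(1);
  }
}
```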
By Week 2, I’m making strategic architectural decisions I would never have attempted as a solo founder without AI velocity. The platform needs four separate applications—one for the public site, one for the user dashboard, one for submitting reviews, one for admin operations. Each needs to run on its own subdomain.
The question: Do I build one big monolith, or four separate applications?
The monolith is faster to build. Four separate apps are harder—possibly double the time. But four separate apps mean I can scale them independently, deploy them independently, and keep the codebases clean as the platform grows to 50,000+ pages.
A few months ago, I would have taken the monolith path. But with AI velocity, I can build it right the first time.
So I choose the harder path. And with Claude Code as my partner, the rearchitecting I’d estimated at 6-7 weeks turns into 12 hours of actual work. Not because the AI writes perfect code on the first try (it doesn’t), but because we’re moving through architectural decisions at 10x speed. Set up the monorepo structure. Configure the build pipelines. Create shared packages for common code. Test the local development environment. Done.
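For orientation, this is roughly the shape of that monorepo. The folder names and subdomains are my illustrative assumptions, not the actual repository layout:

```
apps/
  web/        # public site          (e.g. web3connect.com)
  dashboard/  # user dashboard       (e.g. app.web3connect.com)
  reviews/    # review submission    (e.g. reviews.web3connect.com)
  admin/      # admin operations     (e.g. admin.web3connect.com)
packages/
  ui/         # shared components used by all four apps
  database/   # shared schema and data-access code
  config/     # shared build, lint, and TypeScript configuration
```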
I’m grinding through Week 2, setting foundations. It’s not flashy work—most of it is invisible infrastructure that users will never see. But these decisions compound.
I choose the harder architectural path because if the market likes the platform, I won’t have time to go back and refactor the monolith. I’ll be too busy extending the platform to meet the market’s demands.
By end of Week 2, I’m confident in the methodology. The velocity is real, the systems are working, the architecture is solid.
And that’s when everything falls apart.
The 2am Breakdown
November 6th: I’m about to learn the most expensive lesson of this build.
I’m building the partner offering page and come to the pricing plan section. I know exactly what needs to be built: tiered pricing with monthly/annual toggle, usage metrics with overage tiers, currency conversion for global pricing, service-based pricing models, contact modals with authentication. It’s all been prototyped and specified in the PRD.
So I tell Claude Code to go build it. All of it. At once.
Six hours pass. I haven’t saved any work to the repository (version control, for the non-technical readers). I’m adding feature after feature, watching the complexity grow, thinking “just one more piece and I’ll test it.”
Then the site won’t compile.
The AI has modified hundreds of files. Somewhere in those modifications, something broke. But I don’t know which change caused it. Was it the pricing helpers? The tier parsing logic? The currency conversion? The contact modal integration? The authentication checks? The config changes?
I spend two hours troubleshooting. Nothing works. The error messages are cryptic. Claude Code suggests fixes that don’t solve the underlying problem because we’re both working from the same bad assumption—that the architecture is fine and we just need to debug the details.
It’s midnight. Then 1am. Then 2am.
My wife is asleep in bed next to me, probably dreaming of a life where her partner has a normal job with normal hours and a normal salary. We moved to a farm to make this work. I’ve said no to opportunities that would have made these seven months far less stressful. And right now, I’m staring at a deployment that won’t even load.
November 22nd—the Australian Crypto Convention I aimed to launch before—is 16 days away. I can’t lose entire days to mistakes like this.
I make the decision: Reset the repository. Discard a full day’s work. Start fresh tomorrow.
But before I go to bed, I do something important: I document the failure. I write down exactly what went wrong, why it went wrong, and what I should do differently next time. Not to wallow in the mistake, but to prevent it from happening again.
The lesson isn’t “work harder.” The lesson is “work differently.”
AI can help me build faster, but it can’t save me from bad scope decisions. If I try to implement too many interconnected features simultaneously, both of us lose track of what’s working and what’s broken. We need checkpoints. We need incremental validation. We need to save progress before adding complexity.
I pass out uncertain whether I’ll find a path forward tomorrow.
The Breakthrough
November 7th: I’m starting over—but this time with a system.
The lesson from the 2am disaster isn’t “try harder” or “be more careful.” It’s about breaking work into manageable, testable pieces. It’s about creating checkpoints before adding complexity.
So I create a new protocol: Phase 1 (basic page structure) → save my work → test → Phase 2 (add contact flow) → save → test → Phase 3 (add dynamic fields) → save → test → Phase 4 (add pricing display) → save → test.
Never let more than one hour pass without saving progress. Test each piece before adding the next. Don’t change multiple systems simultaneously while debugging.
The result? I complete in two hours what took all day (and failed) the night before. Currency conversion, tiered pricing, usage metrics, strike-through display for monthly-equivalent pricing, service-based models, authentication-protected contact modals—all working. All tested incrementally. All saved at every checkpoint.
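To give a sense of the logic involved, here’s a minimal sketch of the monthly-equivalent display with currency conversion. The tier shape, rates, and field names are illustrative assumptions, not the platform’s real pricing model:

```typescript
// Hypothetical pricing display: annual plans shown as a per-month equivalent,
// with the standard monthly price struck through. Rates are placeholders.
interface PricingTier {
  name: string;
  monthlyUsd: number;
  annualUsd: number; // billed once per year
}

const RATES: Record<string, number> = { USD: 1, AUD: 1.5, EUR: 0.9 };

export function displayPrice(tier: PricingTier, billing: "monthly" | "annual", currency: string) {
  const rate = RATES[currency] ?? 1;
  if (billing === "monthly") {
    return { amount: tier.monthlyUsd * rate, strikethrough: null, currency };
  }
  // Annual billing: show the per-month equivalent and strike through the
  // regular monthly price so the saving is visible.
  return {
    amount: (tier.annualUsd / 12) * rate,
    strikethrough: tier.monthlyUsd * rate,
    currency,
  };
}
```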
The insight: AI can build fast, but humans need to provide the guardrails on scope and checkpoints. My job isn’t to code faster—it’s to break problems down smarter.
And that lesson compounds immediately.
Three days later, I’m implementing analytics tracking with GDPR compliance. Cookie consent management, impression tracking, click tracking, session deduplication—complex privacy requirements that would normally take weeks to get right.
It works on the first try.
Because I apply the lesson from the pricing feature failure: Break it down, test incrementally, save frequently, document the approach. The methodology, refined by the last feature’s lessons, becomes the template for the next.
This is what compounding looks like in practice.
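For a flavour of what that analytics work involves, here’s a minimal sketch of consent-gated impression tracking with per-session de-duplication. The consent flag, endpoint, and payload are assumptions, not the platform’s actual code:

```typescript
// Hypothetical client-side impression tracking that respects cookie consent
// and records each partner at most once per session.
const seenThisSession = new Set<string>();

export function trackImpression(partnerId: string): void {
  // Do nothing unless the visitor has explicitly granted analytics consent.
  if (localStorage.getItem("analytics-consent") !== "granted") return;

  // De-duplicate: one impression per partner per session.
  if (seenThisSession.has(partnerId)) return;
  seenThisSession.add(partnerId);

  void fetch("/api/track", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event: "impression", partnerId }),
  });
}
```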
By mid-November, features are shipping that would take experienced development teams weeks to implement. Partner profile pages. Offering detail pages. Search systems. Category listings with complex filters. SEO infrastructure with structured data and automated sitemaps.
Not because I’m coding faster—but because the methodology is maturing. Each protocol I create makes the next feature more predictable. Each documented failure prevents the next crisis. Each refinement to how I collaborate with Claude Code reduces friction and increases trust.
Week 3 is about velocity, but it’s also about validation.
Where We Are Now
Three weeks in, and I’m committing features built on infrastructure I only understand conceptually.
The public site is taking shape. The user dashboard is functional. Analytics tracking respects user consent and privacy regulations. Partners can be discovered, evaluated, contacted. The search system works. The filters work. The pricing displays adapt to user currency preferences.
November 22nd—the Australian Crypto Convention launch goal? I likely won’t make it. The public site and dashboard are close, but the review submission system and admin portal aren’t ready. That’s okay. I’d rather launch something solid in December than rush something broken in November.
Because here’s what I’ve learned: Even if the product-market fit isn’t perfect on launch day, the foundation is solid enough to pivot through iteration and user feedback. That’s what matters.
The methodology is proven. Not the product yet—that requires real users and real validation. But the approach? Extensive planning plus AI collaboration plus systematic refinement? That’s working.
I’m not stranded debugging a framework I don’t understand. I have a proven system for solving problems when they arise. The prevention protocols catch mistakes early. The documentation makes context portable across sessions. The architecture supports scaling when (if?) the market responds.
And I’m standing on the shoulders of giants: The Supastarter boilerplate gave me production-ready authentication, payments, and email systems. The Next.js ecosystem provided composable tools that work together. Years of open source development made modern frameworks possible. Claude Code gave me a tireless collaborator who never gets frustrated at my questions.
This isn’t a solo hero journey. It’s collaboration at every level—with AI, with frameworks, with the community that built these tools.
The features continue shipping. Analytics systems go live. Partner profiles get polished. SEO infrastructure gets tested. Each day brings progress, not perfection.
And slowly, I’m getting closer to having an answer when my wife asks “Why are you doing this?”
What This Means
I have a 10-day cruise planned with my dad for his 70th birthday in December. Leaving the grind briefly for time with family.
That’s the point of all this.
Not building a company that consumes your life—building a company that gives you the freedom to live it.
My advice is to invest the time in comprehensive planning. Build prototypes that become functional specifications. Document everything you know about the problem you’re solving. Then partner with AI to turn that planning into reality.
When my wife asks “Why are you doing this?”—I’m finally getting close to an answer:
Because with AI in 2025, it’s genuinely possible to build something that delivers real value at scale without massive VC investment or a large team.
Because the Web3 industry needs a trusted way to discover and evaluate partners.
My wife sees the platform taking shape. She sees the methodology working. She sees that the bet we made—moving to the farm, saying no to other opportunities, investing seven months into this—wasn’t reckless optimism. It was a calculated risk, made viable by rapid advances in technology and systematic execution.
The product might not achieve perfect product-market fit on launch day. But even if it doesn’t, the foundation is strong enough to iterate, pivot, and adapt based on what the market tells us. That’s what de-risks this entire journey.
Follow the Journey
I’m building Web3Connect in public at shannon.diy and on X, where I share updates, lessons, and the ongoing journey from prototype to product and beyond.
When Web3Connect launches (December 2025, or thereabouts), you’ll find it at web3connect.com—a merit-based marketplace where Web3 founders can discover and evaluate verified partners based on quality, not payment.
If you’re a Web3 founder following this approach, I’d love to hear from you. If you’re building something and want to share your journey, reach out. If you just want to follow along and see how this experiment unfolds, subscribe or follow me on X (@shannon_diy).
The build methodology is working. The product is soon to be validated. The journey continues.
Let’s see what happens next.

