The Deepdive
Join Allen and Ida as they dive deep into the world of tech, unpacking the latest trends, innovations, and disruptions in an engaging, thought-provoking conversation. Whether you’re a tech enthusiast or just curious about how technology shapes our world, The Deepdive is your go-to podcast for insightful analysis and passionate discussion.
Tune in for fresh perspectives, dynamic debates, and the tech talk you didn’t know you needed!
I Vibe‑Coded a Chrome Extension With Two AIs: 163 Versions, 12 Architecture Decisions, Zero Regrets?
You know that late-night feeling when you’re scared to close a tab because the web will move on without you? We chase that exact anxiety into a deceptively simple idea: a temporal bookmark that captures a webpage’s clean URL and a full page visual snapshot at the same time, so your “proof” never becomes an orphaned screenshot or a broken link. What sounds like a small Chrome extension quickly becomes a case study in AI-assisted software development, where speed is the superpower and judgment is the missing ingredient.
We break down the split-brain build setup: Claude plays product manager and architect, drafting roadmaps and architecture docs, while OpenAI Codex plays the relentless builder, writing JavaScript and keeping continuous integration green. That momentum creates new problems fast, from AI amnesia solved with a session.md handoff ritual to a comical 163 version bumps in nine days. Then the real satire kicks in: enterprise-grade governance for a one-user tool, including ADRs, AST-based privacy enforcement that blocks any network calls, and even scripts that fail the build if documentation gets ahead of the code.
The story goes beyond laughs. We dig into training-data bias that nudges agents toward freemium “capability tiers,” the human decision to mandate “always free forever,” and the most mundane blocker that stops everything: a Figma permission seat that no amount of agentic coding can bypass. We end by asking the question that matters for every builder using AI coding tools: are you solving the core problem, or automating an invisible bureaucracy around yourself?
If this sparked ideas or discomfort, subscribe, share the episode with a builder friend, and leave a review. What rule or guardrail would you add to keep AI speed from turning into AI bloat?
Leave your thoughts in the comments and subscribe for more tech updates and reviews.
The 17-Tab Problem
Allan: Picture this. It's 11 p.m. You are deep, and I mean like really deep, in a research rabbit hole.
Ida: Oh yeah. We have all been there.
Allan: Right. You've got 17 tabs open. Your screen is just glowing with half-baked ideas. And you are honestly terrified to close your browser. You desperately want to just, you know, freeze the internet.
Ida: Which is, technically speaking, a fundamental impossibility, considering the web is inherently designed to be ephemeral.
Allan: But you still want it. You want to save exactly what a page looks like right now, URL attached, before it gets A-B tested into oblivion.
Ida: Or the layout breaks.
Allan: Yeah, or the prices on that flight tracker magically change by tomorrow morning. I mean, it's a completely relatable, highly specific, and honestly pretty simple personal problem.
Ida: Right. The kind of problem that usually results in an afternoon of light tinkering. Like a hobbyist might throw together a 200-line browser extension, use it for a week, and then completely forget about it.
Allan: But today we are taking a deep dive into a stack of developer notes, a public GitHub repository, and a rather hilarious post-mortem article about a custom Chrome extension called The Collector. And, uh, the capital T in The is entirely intentional there. Very intentional. Our mission today is to explore exactly what happens when you try to solve that simple personal problem by handing the steering wheel over to two artificial intelligences.
Ida: It is a phenomenal case study, truly fascinating.
Allan: It's a Rube Goldberg machine of baroque complexity. I mean, it's like asking a friend to help you paint a spare bedroom, and they show up with commercial scaffolding, a city permit, a team of union contractors, and a dedicated safety inspector.
Temporal Bookmarks Explained
Ida: That is honestly, that's a surprisingly accurate summary of the architecture. But what's really fascinating here is the core concept the developer started with, which is the temporal bookmark.
Allan: Right, because the modern web has zero memory. If you just take a screenshot, it's an orphaned artifact. You have a picture of a flight price, but no idea what dates you searched or what site you were even on.
Split-Brain AI Build System
Ida: Exactly. And conversely, if you just save a URL, it's basically an empty promise. It might not be kept by the time you click it again. The page changes. So the temporal bookmark captures the clean URL and the full-page screenshot simultaneously, locking them together in an evidence locker. But to build this, the developer didn't just open a code editor. They used a split-brain architecture.
Allan: A split brain.
Ida: Yeah. Employing two different AI agents, each strictly confined to a specific role.
Allan: Okay, let's unpack this for a second. We have Claude, and we have OpenAI Codex.
Ida: Yes. Claude acted as the product manager. Claude handled all the conceptual thinking, it structured the roadmap, drafted the formal architecture documents, and, this is wild, actively pushed back on the developer's design decisions.
Allan: Right. It argued with the human.
Ida: Oh, absolutely. It was generating documentation with titles like PM Findings, URL features architecture.
Allan: Okay, so Claude is the architect. It's drawing the blueprints.
Ida: Yeah, precisely. And the other half of the brain, OpenAI Codex, acted as the developer, the foreman, basically. Codex was handed the specs from Claude and relentlessly wrote the JavaScript, maintained the testing suites, and kept the continuous integration pipeline green.
Allan: Let's pause on that just for our listeners who might not manage code repositories. Keeping the continuous integration, or CI, pipeline green essentially means Codex is constantly running automated tests on every new piece of code.
Ida: Right, to ensure it hasn't accidentally broken yesterday's code.
Allan: Exactly. It's a relentless feedback loop of writing and checking.
Ida: Which is a very standard practice for large engineering teams, but wildly aggressive for a solo weekend project.
Allan: Okay, but here's the thing: if Claude is the architect drawing the blueprints and Codex is the foreman relentlessly swinging the hammer, who actually keeps track of the project state? Because if there's one thing I know about working with generative AI, it's that they have total amnesia. If the foreman goes to sleep, he wakes up the next day completely forgetting what house he's building.
Ida: You've hit on the exact context bottleneck that initially paralyzed this entire project. AI amnesia is a massive hurdle in multi-agent systems.
Allan: Because they just forget everything.
Ida: Right. Before the developer solved this, the AIs would constantly resume work but implement features completely out of order. They would try to build the roof before pouring the foundation. Oh wow. The code execution was flawless, but the directional logic was entirely unmoored from reality.
session.md As Shared Memory
Allan: So how do you fix a highly capable worker who resets their memory every single night?
Ida: The developer invented a strict handoff protocol. They created a single text file in the root directory called session.md. Okay. It acted as a shared short-term memory, like a relay baton. It tracked exactly three variables: what was just completed, what specifically needs to happen next, and any active blockers preventing progress.
Allan: That is actually genius. It completely reminds me of that movie Memento.
Ida: Oh, right. The Christopher Nolan film.
Allan: Yeah, the protagonist has zero short-term memory. So he literally tattoos his mission on his own body and has to read his arms every single morning just to know who he is and what his job is for the day.
Ida: That's the perfect functional analogy. At the start of every single coding session, the developer would feed Codex a very specific, ritualistic prompt.
Allan: What was it?
Ida: It was literally, "Read agents.md and session.md, then continue from the last session." Wow. Yeah. The AI would read its own digital tattoos, instantly regain full context of the entire project history, and just resume coding.
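As an aside for readers following along in code: the episode never shows what session.md actually looks like, but the three variables it tracks suggest a shape like the sketch below. The section headings and parsing logic here are illustrative assumptions, not the project's real format.

```javascript
// Hypothetical sketch of a session.md handoff file and a parser for it.
// The real file's layout isn't shown in the source; this assumes three
// markdown sections matching the three tracked variables.
function parseSession(markdown) {
  const sections = { completed: [], next: [], blockers: [] };
  let current = null;
  for (const line of markdown.split("\n")) {
    const heading = line.match(/^##\s+(Completed|Next|Blockers)/i);
    if (heading) {
      current = heading[1].toLowerCase();       // switch active section
    } else if (current && line.startsWith("- ")) {
      sections[current].push(line.slice(2).trim()); // collect list items
    }
  }
  return sections;
}

const example = [
  "## Completed",
  "- Full-page capture pipeline",
  "## Next",
  "- PDF page splitting",
  "## Blockers",
  "- Figma seat is view-only",
].join("\n");

console.log(parseSession(example));
```

An agent reading this at session start would know exactly where the baton was dropped, which is the whole point of the ritual.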
Allan: And with that perfect memory established, they were able to move at a blistering speed, right? I mean, they produced a production-quality Chrome extension with advanced multi-context capture pipelines in just nine days.
Ida: Nine days from a blank slate to a fully functioning complex tool.
Allan: But that extreme velocity created an entirely new, ridiculous problem. Because Codex, the relentless AI foreman, followed a literal, unyielding loop prompt: execute, test, bump the version number, and commit to the repository.
Ida: The discipline was mechanically impeccable.
Allan: It was too impeccable. I mean, this loop resulted in a jaw-dropping 163 version bumps in those nine days.
Ida: They were shipping up to 17 versions a day. On one particular day, they shipped nine distinct versions before lunch.
Allan: This is simultaneously impressive and completely ridiculous. A version bump is supposed to mean something to a human, right? A new feature, a major bug fix.
Ida: Yes.
Allan: But to the AI, it's just a completed checklist item. It's like giving a participation trophy to every single line of code.
Ida: It really is.
Allan: I was reading the notes from the external reviewer because, yes, the developer actually had a staff engineer review this solo weekend hobby project.
Ida: Naturally, as one does.
Allan: Right. The reviewer looked at the repository and dryly noted that the high-frequency version bumping made the release tags, quote, lose navigational value.
Ida: Which is a very polite engineering way of saying you have flooded the system with so much microdata that the numbers are now entirely meaningless. And that observation points to a much deeper behavioral quirk of the AIs. Because they lacked any sort of human common sense or proportion, they didn't just over-engineer the version history.
Allan: Oh no.
Ida: No, they enthusiastically over-engineered the actual rules and governance of the project itself.
Automated Governance Guardrails
Allan: I love this part. This is where the whole thing goes off the rails into complete corporate satire.
Ida: Welcome to the Iron Cage of Automated Governance. We have to remember how these models work, right? AI fills in the gaps of its knowledge with the standard conventions it was trained on. Okay. And its training corpus is dominated by massive enterprise-level corporate repositories.
Allan: Ah, so when you ask it to build a structured software project, it just assumes you want to build Microsoft Office.
Ida: It builds an enterprise-grade command center for a tool with exactly one user. The repository for this simple Chrome extension contains 12 formal architecture decision records, or ADRs.
Allan: For you listening, an ADR is a highly formal document used by large corporate engineering teams. It essentially codifies a major technical choice, like what database to use, so that months later, when the original engineers leave, nobody argues about why a decision was made. This solo developer, working alone in their bedroom, has 12 of them.
Ida: And those documents have teeth. The AI built incredibly strict automated guardrails to enforce the ADRs. For example, there is an AST-based network sync analysis script.
Allan: Ah, an AST script. So instead of just running the code to see if it accidentally leaks data, it's preemptively reading the abstract syntax tree, the actual grammatical structure of the code itself, to ensure outbound network calls physically can't exist. It's like a grammar checker, but instead of looking for passive voice, it's looking for the verb fetch or sendBeacon, and it arrests the code before it can even compile.
Ida: The pipeline immediately fails the build. It refuses to let the code ship. It is an airtight enforcement of the strict, local-only privacy rule they established in one of the ADRs.
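For readers who want to see the shape of this kind of guardrail: the project's real script reportedly walks the abstract syntax tree, but a much cruder sketch, one that just scans source text for forbidden call names, shows the same enforcement idea. The identifier list below is an assumption.

```javascript
// Simplified sketch of a "no network calls allowed" check. The real
// script is described as AST-based; this version only pattern-matches
// call sites in raw source text, which is cruder but easier to follow.
const FORBIDDEN = ["fetch", "XMLHttpRequest", "sendBeacon", "WebSocket"];

function findNetworkCalls(source) {
  const hits = [];
  for (const name of FORBIDDEN) {
    // Match the identifier immediately followed by an argument list.
    const pattern = new RegExp(`\\b${name}\\s*\\(`, "g");
    let match;
    while ((match = pattern.exec(source)) !== null) {
      hits.push({ name, index: match.index });
    }
  }
  return hits; // a CI wrapper would fail the build if hits.length > 0
}

const clean = "const data = JSON.parse(localStorage.getItem('captures'));";
const leaky = "fetch('https://example.com/track', { method: 'POST' });";

console.log(findNetworkCalls(clean).length); // 0
console.log(findNetworkCalls(leaky)[0].name); // "fetch"
```

A true AST walk (for example with a parser library) avoids false positives like the word fetch appearing in a string or comment, which is presumably why the project went that route.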
Allan: Which is incredibly impressive from a security standpoint.
Ida: It is. But then the AI instituted checkdocpolicy.mjs.
Allan: Oh, please explain this one, because I honestly thought I misread the source material.
Ida: You didn't. This script fails the entire continuous integration build if the user help documentation mentions a feature that hasn't actually shipped in the main code base yet.
Allan: You cannot be serious. It is a robot that polices the freshness of its own help docs.
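The contents of checkdocpolicy.mjs aren't shown in the source, but the described behavior reduces to a set difference: any feature named in the docs but absent from the shipped code fails the build. This sketch assumes both sides expose a flat list of feature identifiers, which is an illustrative simplification.

```javascript
// Hedged sketch of a doc-freshness gate: docs may only mention
// features that have actually shipped. Feature lists are assumed
// inputs; the real script's extraction logic isn't shown.
function checkDocPolicy(documentedFeatures, shippedFeatures) {
  const shipped = new Set(shippedFeatures);
  const premature = documentedFeatures.filter((f) => !shipped.has(f));
  return { ok: premature.length === 0, premature };
}

const result = checkDocPolicy(
  ["capture", "bulk-export", "smart-profiles"], // mentioned in help docs
  ["capture", "bulk-export"]                    // actually in the code base
);
console.log(result); // { ok: false, premature: [ 'smart-profiles' ] }
// In CI, a wrapper would then do: if (!result.ok) process.exit(1);
```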
Ida: It absolutely is. And the crowning achievement is the recursive testing in roadmap-guardrails.js.
Allan: Oh boy.
Ida: The AI literally wrote automated tests to ensure that the testing safeguards couldn't be bypassed. It's a recursive loop of process enforcing process.
Allan: I love that this exists, but also, why? I mean, wait, the whole promise of AI coding was that we wouldn't have to write boring boilerplate tests or rigid documentation policies anymore.
Ida: That was the dream, yes.
Allan: We're supposed to liberate the solo maker. Why would you use the ultimate freedom machine to build a digital DMV? This feels like a massive waste of AI's potential. Are they just using the AI to procrastinate?
Ida: It's a very fair critique, and on the surface, it looks like a parody of corporate bloat. But the developer actually articulates a brilliant defense for this, which is the necessity of self-imposed friction.
Allan: Friction. Usually we want to eliminate friction in software design.
Ida: Usually, yes. But when you are a solo developer moving at the literal speed of thought with a dual-AI system, things get messy incredibly fast.
Allan: I can imagine.
Ida: If you can code at a hundred times normal speed, a bad architectural decision propagates at a hundred times normal speed.
Allan: Oh, that makes sense.
Ida: By codifying these rules, like the absolute privacy constraints, you force yourself to slow down. If the human developer wakes up one day and decides, actually, I do want to send analytics to the cloud, the AI won't let them. The build will fail.
Allan: Because the AST script catches it.
Ida: Right. To change the philosophy of the app, the human can't just do it on a whim. They have to go through the formal process of writing a new ADR to formally overturn the old ADR, which updates the guardrails. It forces intentionality.
The Always Free Fake Paywall
Allan: Okay, I see the logic. It keeps you honest when the tools make it way too easy to be reckless. But the most fascinating example of these self-imposed rules wasn't even about the code base itself.
Ida: No, it wasn't.
Allan: It was about the business model.
Ida: Ah, yes. The anti-condescending paywall.
Allan: If you've ever tried using an AI to write a simple casual email to a friend, and it spits out a five-paragraph corporate memo starting with, I hope this email finds you well, you know exactly what's happening here.
Ida: Right. The AI defaults to corporate speak.
Allan: And in this case, it defaulted to a corporate business model. Let's talk about the capability tiers. Because The Collector, this free weekend project, has a basic tier, a pro tier, and an ultra tier.
Ida: And if you analyze the code base, it is littered with these canUseFeature checks.
Allan: Oh, maybe.
Ida: They gate off all the complex advanced functionalities, things like smart save profiles, where you can categorize captures, or bulk actions, where you can export massive lists of links at once.
Allan: It looks, walks, and acts exactly like a freemium software paywall.
Ida: It does. But, and this is the funny part, if you read ADR0001, it defines these tiers not as pricing plans, but strictly as a UX complexity preference.
Allan: Wait, what?
Ida: Yeah. The AI built a paywall solely to prevent the user from getting overwhelmed by too many buttons on the interface.
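To make that concrete: a tier system that gates interface complexity rather than payment can be a plain lookup. The tier contents and feature names below are illustrative assumptions; the source only names the three tiers and a couple of gated features.

```javascript
// Sketch of "capability tiers as UX preference": the tier controls
// which features are surfaced, never whether the user paid.
// Feature names and tier contents are assumptions for illustration.
const TIERS = {
  basic: ["capture", "list"],
  pro:   ["capture", "list", "smart-profiles"],
  ultra: ["capture", "list", "smart-profiles", "bulk-actions"],
};

// Per the always-free mandate described in the episode, switching
// tiers is just a settings toggle, so there is no billing check here.
function canUseFeature(tier, feature) {
  return (TIERS[tier] ?? []).includes(feature);
}

console.log(canUseFeature("basic", "bulk-actions")); // false: hidden, not paywalled
console.log(canUseFeature("ultra", "bulk-actions")); // true
```

The punchline of the episode is that the surrounding check machinery looks identical whether the gate is money or button count; only the documentation says which one it is.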
Allan: That is hilarious. And then the absolute crown jewel of the documentation is ADR0009. This document mandates that all tiers, basic, pro, and ultra, are, quote, always free forever. It explicitly supersedes earlier architecture documents where the AI had clearly entertained freemium monetization ideas.
Ida: The human literally had to step in and officially kill the paywall.
Allan: It perfectly illustrates the training-data bias we discussed earlier, doesn't it?
Ida: It does. The AI assumed a tiered software product inherently requires a paywall architecture, because that's what 99% of its GitHub training data looks like. Startups build tiers to charge money. Right. So the developer had to actively fight the machine's deep-seated instinct to monetize.
Allan: What does this say about us as a society? I mean, really. We are so conditioned to late-stage subscription creep that we accidentally train our machines to build complex toll booths for things we fully intend to give away for free.
Ida: We really do.
Allan: We build the toll booth, and then we have to write a formal legal document promising ourselves we won't charge a toll. It is glorious absurdity.
Ida: It is entirely absurd. But as unstoppable as this dual-AI, enterprise-grade, hyper-versioned workflow seemed...
Allan: Right. So the AI has successfully built an impenetrable fortress of rules and mock paywalls inside the code. It is an absolute god in the terminal. It is running recursive tests, policing its own documentation, stopping network leaks before they even happen.
The Figma Permission Wall
Ida: But the moment it stepped outside that sandbox, the entire multi-agent system came crashing down.
Allan: Brought to a complete standstill by the most hilariously mundane real-world human problem imaginable: the Figma blocker.
Ida: Yes. This is where the AI finally hits a wall it simply cannot code its way out of. The project was migrating to Design System 2.0. The human developer wanted this sleek, modern look. The roadmap describes it as a macOS Sequoia dark mode aesthetic. Oh, fancy. Very fancy. We're talking frosted glass sidebars, translucent panels, hyper-precise typography.
Allan: Very premium. And to build that, you need the exact design tokens, like the exact color codes and spacing measurements, from a design tool like Figma.
Ida: Right. So to execute this, the AI needed to migrate those visual design tokens from the Figma file directly into the code base. The AI authenticated into Figma, accessed the file, prepared to read the metadata and extract the CSS variables, and it was immediately paralyzed.
Allan: Wait, why? It can write 163 versions of software in nine days. Why can't it read a color code?
Ida: Because the AI only had a view-only seat on the human's starter plan.
Allan: No way.
Ida: It didn't have edit permissions. No matter how conceptually brilliant Claude was, no matter how many recursive CI pipelines Codex had engineered, the multi-agent system could not bypass a standard software subscription permission barrier.
Allan: Seriously, all this enterprise-grade AI power, 12 ADRs, continuous integration pipelines, completely defeated by a $15-a-month software subscription limit. Of course it was.
Ida: The developer's notes actually quote the session log.
Allan: It's so incredibly sad. It's like the AI showed up to its first day of work in a tailored three-piece suit, carrying a briefcase full of architecture documents, and the lobby security guard just says, sorry, buddy, your name's not on the list, and turns it away.
Ida: It really highlights a crucial limitation of current multi-agent systems. AI can generate infinite complexity within the digital sandbox you provide, but the moment it needs to interact with the external world, the physical world of human permissions, credit card billing cycles, and software seats, it is completely helpless.
Allan: And speaking of the external world, the sheer speed of this workflow created an entirely new phenomenon regarding the public repository itself.
Ida: Yes, the radical transparency.
Allan: Right. Because they were moving so incredibly fast trying to bypass blockers and ship code, they accidentally committed the actual instruction files, agents.md and claude.md, directly into the public repository alongside the JavaScript.
Ida: Which means anyone looking at the source code doesn't just see the app. They see the brain of the app.
Allan: Exactly. They see the exact prompts, the pre-commit checklists, and the internal tooling details that the human developer used to control the AIs.
Ida: The external reviewer flagged this, noting that it, quote, conflates two concerns by exposing the internal workflow to the end user.
Allan: But the developer actually pushes back and embraces it. They argue it's a new paradigm for open source. You aren't just sharing the software you build, you're sharing the exact psychological parameters you gave to the machine to build it.
Ida: It's letting everyone see the sausage getting made, alongside the recipe for the sausage and the text messages you sent to the butcher.
Allan: I honestly kind of love it. If we are entering an era of AI development, seeing the prompts is almost more educational than seeing the code itself.
Ida: It's a profound shift in how we document software.
Does The Collector Actually Work?
Allan: But okay, through all this comedy of errors, the 163 versions, the recursive tests, the fake paywalls, the Figma lockout, we should probably ask the most important question.
Ida: Does the extension actually work?
Allan: Exactly.
Ida: That is perhaps the most surprising part of this whole deep dive. Despite the glorious over-engineering, The Collector achieved its original goal beautifully.
Allan: So it actually freezes the internet for you at 11 p.m. It cures the 17-tab anxiety.
Ida: It does. It operates as a flawless local-first evidence locker. And local-first is the absolute keyword here. Everything is stored entirely within the browser's extension local storage using IndexedDB.
Allan: Let's clarify IndexedDB for a second. This isn't just a cookie that gets wiped when you clear your browser history. And it's not a cloud database like AWS.
Ida: Right.
Allan: It's a robust, structured database that lives physically on your hard drive, inside the browser's designated folder.
Ida: Meaning there's no cloud backend where your weird late-night research rabbit holes are being uploaded, analyzed, or sold to advertisers.
Allan: Zero cloud sync.
Ida: None. There isn't even a server upload path in the code base. The AST script physically prevents it, remember?
Allan: Oh, right.
Ida: And it goes further to protect your privacy. When you save a URL, the extension automatically scrubs it of tracking parameters. It actively removes UTM codes, gclid, fbclid, all those invisible 50-character tags that marketers staple to the end of links to follow you across the web.
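That scrubbing step is easy to picture with the standard URL API. The episode names utm_* prefixes, gclid, and fbclid; the exact parameter list the extension strips isn't given, so treat this as a sketch.

```javascript
// Sketch of tracking-parameter scrubbing using the standard URL API.
// Only the parameters named in the episode are stripped here; the
// real extension's blocklist may be longer.
function scrubUrl(raw) {
  const url = new URL(raw);
  const dropExact = new Set(["gclid", "fbclid"]);
  // Copy the keys first so deleting while iterating is safe.
  for (const key of [...url.searchParams.keys()]) {
    if (dropExact.has(key) || key.startsWith("utm_")) {
      url.searchParams.delete(key);
    }
  }
  return url.toString();
}

const dirty =
  "https://example.com/flights?dest=LIS&utm_source=news&utm_campaign=x&gclid=abc123";
console.log(scrubUrl(dirty)); // "https://example.com/flights?dest=LIS"
```

Note that legitimate query parameters, like the destination in a flight search, survive intact, which is exactly what a temporal bookmark needs.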
Allan: Oh, that is incredibly satisfying. Just a perfectly clean, pristine URL. But what about the screenshots? Some of these web pages are massive. I've tried to full-page screenshot an endless-scrolling site before. My browser usually just gives up, renders a black box, and crashes.
Ida: The AI agents handled that brilliantly. The Collector has specific algorithmic fallbacks for oversized page captures. If a canvas is too large for the browser's memory, it uses smart PDF page splitting.
Allan: Wait, how does it know where to split it without cutting a sentence in half?
Ida: It analyzes the document object model, the DOM, to find logical heuristic points. Okay. Basically, it looks for the white space between paragraphs or images and cuts the page there, automatically breaking it down so it fits perfectly on standard A4 or letter sizes.
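Stripped of the DOM plumbing, that heuristic reduces to choosing cut lines that never fall inside a content block. This DOM-free sketch takes the pixel offsets where blocks end and greedily picks the last block boundary that still fits on each page; the greedy strategy and the numbers are illustrative assumptions, not the extension's actual algorithm.

```javascript
// Given the pixel offsets where blocks (paragraphs, images) end,
// choose page-split positions that land in the whitespace between
// blocks rather than mid-paragraph. Falls back to a hard cut at the
// page limit if no block boundary fits.
function chooseSplits(blockBottoms, pageHeight, totalHeight) {
  const splits = [];
  let pageTop = 0;
  while (totalHeight - pageTop > pageHeight) {
    const limit = pageTop + pageHeight;
    // Last block boundary that still fits on the current page.
    const candidates = blockBottoms.filter((b) => b > pageTop && b <= limit);
    const cut = candidates.length ? Math.max(...candidates) : limit;
    splits.push(cut);
    pageTop = cut;
  }
  return splits;
}

// Blocks end at these pixel offsets; pages are 1000px tall.
console.log(chooseSplits([300, 800, 1200, 1900, 2500], 1000, 2600));
// → [ 800, 1200, 1900 ]
```

In the real extension the block boundaries would come from measuring element positions in the DOM; here they are just an input array so the logic is testable in isolation.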
Allan: That's actually really smart.
Ida: And if a page is just impossibly heavy to render, the extension provides these little why-slow diagnostic hints on the screen, calmly explaining to the user that it had to use an oversized auto-scale fallback to get the job done.
Allan: So what this all means is that you essentially have a completely isolated, highly intelligent vault on your own machine.
Ida: Yes.
Allan: It strips out the corporate trackers, captures the visual evidence perfectly without cutting your text in half, and organizes it behind a fake capability paywall that will never actually charge you a dime.
Ida: It's the ultimate digital hoarder's dream, but organized by a robot with an enterprise-grade compliance complex.
The Human Job AI Cannot Do
Allan: That is a remarkably accurate summary. It works exactly as intended, but the path to get there was wildly, comically overcomplicated by the very tools meant to make it simple.
Ida: Which brings us to the big takeaway here. Because this deep dive is about so much more than a cool Chrome extension.
Allan: Absolutely. AI is a tireless, literal collaborator. It will amplify your productivity without a doubt. You can write flawless code, draft perfect architecture documents, and keep your test pipelines green.
Ida: But it amplifies your complexity too, because it completely lacks human judgment. It cannot look at a one-person project and say, hey, maybe we don't need 12 formal governance records and a recursive AST script for a tool only you are going to use.
Allan: The judgment call remains stubbornly human. So for you listening, whether you are building software tools, managing a team at the office, or just trying to organize your personal life with AI assistance, ask yourself: are you actually using AI to solve your core problem, or are you accidentally letting it build a massive invisible bureaucracy around you?
Ida: It's a vital question. Are you building the solution, or are you just building the scaffolding?
Allan: Exactly. Which leaves us with a final, slightly provocative thought to chew on. We've always been sold this utopian vision that AI was going to liberate us. It was going to democratize coding, free us from corporate drudgery, and let solo makers build incredible things at the speed of thought.
Ida: And as this project proves, it does do that. The developer built a complex tool in nine days.
Allan: It does. But if these AI assistants are entirely trained on enterprise data, and they enthusiastically emulate the rigid, bloated corporate structures they learn from, like shipping 163 versions, enforcing doc policies, and writing 12 formal ADRs for a solo dev in their bedroom, will the future of independent software actually look exactly like the slow, bureaucratic enterprise jobs we thought AI was supposed to rescue us from?
Ida: It raises a profound question about the underlying culture of code we are teaching these models. We are inadvertently digitizing our own corporate bloat.
Allan: Are we just using our ultimate freedom machines to automate the red tape? Will our weekend side projects start to feel like clocking into a nine-to-five because the AI built a virtual HR department for our hobbies? It's entirely possible. Something to think about tonight, right around 11 p.m., when you've got 17 tabs open and you're staring down another research rabbit hole. Keep it simple out there.