The Deepdive
Join Allen and Ida as they dive deep into the world of tech, unpacking the latest trends, innovations, and disruptions in an engaging, thought-provoking conversation. Whether you’re a tech enthusiast or just curious about how technology shapes our world, The Deepdive is your go-to podcast for insightful analysis and passionate discussion.
Tune in for fresh perspectives, dynamic debates, and the tech talk you didn’t know you needed!
Read the companion article on https://medium.com/@allanandida
The Superintelligence That Can’t Handle Tuesday Traffic
A system smart enough to generate thousands of lines of code can still collapse into silence when too many people ask it to summarize a PDF. That’s the central absurdity of the AI boom, and we lean into it: the promise of near-AGI colliding with the messy reality of inference bottlenecks, overloaded memory, and very expensive servers that still need “unscheduled naps.”
We dig into the boldest claims shaping the conversation right now, including aggressive AGI timelines, the idea of an AI-powered billion-dollar solopreneur, and the provocative “AI writing AI” loop where coding tools generate huge chunks of the software stack. Then we contrast that hype with what real-world usage data suggests: AI can accelerate work, but the last mile is where correctness, security, and accountability live. The key concept is feedback loops, because tasks with fast verification (like code you can test immediately) automate far more safely than long-loop domains like law, consulting, or strategy where mistakes can surface years later as billion-dollar problems.
From there we follow the infrastructure story behind the headlines: why Claude outages happen, what a “frozen state” means mechanically, and how companies use load balancing to prioritize the Claude API for enterprise clients while consumer traffic gets throttled. We also connect the traffic surge to geopolitics and corporate strategy: backlash to military deployment deals, user migration, revenue growth, and the incentives wrapped up in “AI safety” lobbying and chip policy.
We end with a calm, actionable approach for knowledge workers: hedged preparation. Use AI to kill the drudgery, keep humans responsible for judgment, and maintain enough offline resilience to function when the digital genius buffers. If you found this useful, subscribe, share it with a friend who’s doomscrolling about AI job automation, and leave a review with your take: which part of your work has the longest feedback loop?
Leave your thoughts in the comments and subscribe for more tech updates and reviews.
The Modern AI Paradox
Allan: I want you to imagine, just for a second, the glorious absurdity of the current technological moment we're living in right now.
Ida: Oh, it's completely absurd.
Allan: Right. So picture this. You possess an omnipotent digital assistant: a system so profoundly advanced it can sit down and build a working C compiler in Rust, completely unassisted, in about two weeks.
Ida: Which, by the way, if you don't write code, that is a breathtaking feat of complex engineering.
Allan: Exactly. It's like asking someone to build a functioning combustion engine out of scrap metal while blindfolded. But then you ask this exact same digital god a basic question on a random Tuesday morning, and it essentially has a panic attack.
Ida: Yep. Completely freezes up.
Allan: Gives you the spinning wheel of death.
Ida: It's the ultimate modern paradox. We have somehow managed to build a digital Einstein, but it apparently requires a mandatory, unscheduled nap the moment too many people try to talk to it at once.
Big AGI Promises And Timelines
Allan: And that gap, that hilarious and really profound chasm between Silicon Valley's utopian promises and the very human reality of buffering servers, is exactly what we are exploring today.
Ida: We have a lot to get through.
Allan: We really do. We're taking a deep dive into a massive stack of recent sources from the spring of 2026. We've got server outage logs, in-depth tech analyses, economic impact reports, and the sweeping 20,000-word essays from Anthropic CEO Dario Amodei. 20,000 words! Who has the time, right? But our mission for this conversation is to unpack what happens when the unstoppable force of artificial general intelligence meets the immovable object of global web traffic.
Ida: And the timing on this deep dive couldn't be better for you listening right now. If you're a knowledge worker, or honestly just anyone with a Wi-Fi connection, you are caught right in the middle of this tension. You're being told simultaneously that your job is about to disappear in months and that the software meant to replace you is currently experiencing a quote-unquote major outage.
Allan: Right, right. So to really understand this, I feel like we have to start with the baseline of what we're expecting this technology to do, or at least what the people building it are promising it will do.
Ida: Right, the hype cycle.
Allan: Exactly. So Dario Amodei recently published this sprawling essay called The Adolescence of Technology. And the predictions in there are, well, terrifying if you enjoy being employed and receiving a paycheck.
Ida: The timelines he puts out are incredibly aggressive. He's predicting that we could reach artificial general intelligence, AGI, in just one to three years.
Allan: One to three years.
Ida: Yeah. He talks about AI matching a country of geniuses by 2035 and superhuman AI arriving by 2027.
Allan: And he gets super specific about the economics of it, too. He's forecasting the first billion-dollar solopreneur by 2026.
Ida: That concept is wild.
Allan: Just try to visualize it: one person sitting in a room managing a billion-dollar empire entirely powered by AI. The AI does the coding, the finance, the legal work. He even stated that up to 50% of entry-level white-collar jobs could simply vanish in the next one to five years.
Ida: And the underlying mechanism driving this confidence, at least according to Amodei, is what they call the AI-writing-AI loop.
Allan: Okay, explain that.
Ida: Well, Anthropic's chief product officer recently revealed that effectively 100% of the code for their flagship model, Claude, is now written by their internal tool, Claude Code.
Allan: Wait, wait. I want to make sure I understand the mechanics of that. Are we talking about a literal Escher painting here?
Ida: Like the drawing of a hand drawing itself.
Allan: Yeah. Is the AI actually just building itself?
Ida: That is a fantastic way to visualize it. So in software development, there's something called a pull request. Basically, an engineer writes a chunk of new code and submits a request to pull it into the main project. Historically, a human writes it and another human reviews it line by line to make sure it won't break anything. But the sources indicate Anthropic engineers are now regularly shipping two- to three-thousand-line pull requests that were generated entirely by the AI.
Allan: Two to three thousand lines? That's a massive amount of code.
AI Writing AI And Code Reality
Ida: It is. The human engineers aren't really writing anymore; they're acting as exhausted editors for an extremely prolific machine. And because of this, Amodei claims they are maybe six to twelve months away from AI doing end-to-end software engineering without any human intervention at all.
Allan: Okay, let's unpack this. Because if you look closely at the history of these tech promises, something really funny starts to happen.
Ida: Oh, I know where you're going with this.
Allan: I look at these Silicon Valley CEOs making these grand predictions, and honestly, they remind me of an end-of-the-world cult. You know the ones? The leader predicts the apocalypse is coming on a Tuesday, and then Wednesday rolls around and the world is still here, so they quietly run to the printer to make new flyers with an updated date for the doom.
Ida: I completely see the parallel you're drawing. It's the shifting goalposts.
Allan: Exactly that. Because in late 2024, Amodei predicted transformative AI could arrive as early as 2026.
Ida: And we are currently in 2026.
Allan: Right. But in these more recent essays we're looking at, the timeline has quietly shifted backward to one to two years, or before 2030. The impending doom just keeps getting rescheduled. I feel like we need to look at what's actually happening on the ground, not just the marketing pitch.
Ida: Well, the empirical data actually backs up your skepticism. The ZMS 2026 Economic Index analyzed two million real-world AI interactions.
Allan: Okay, so actual usage data.
Ida: Exactly. And the reality on the ground contradicts the most alarming hype. Amodei says his engineers are handing over almost all of their code to AI, but everyday developers report it's actually closer to 80 or 90 percent, not 100%.
Allan: Which, I mean, that sounds like a small difference, 10 or 20 percent. But that missing 10 to 20 percent is the entire ballgame, isn't it?
Ida: It absolutely is.
Allan: Having an AI draft a quick Python script to scrape a website is completely different from asking an AI to architect a secure, scalable software system from scratch.
Ida: Right. And it comes down to the fundamental difference between automating a task and automating a whole job. The ZMS report introduces a crucial concept here: the feedback loop.
Allan: Oh, I love this part of the sources. Let me see if I can translate it for everyone. A feedback loop is basically how long it takes to find out if you messed up.
Ida: Yep, perfectly said.
Allan: So coding has a very short feedback loop. If the AI writes a piece of code, you compile it, run a test, and you know within seconds whether it works or the whole program crashes.
Ida: The failure is immediate.
Allan: Right. So it's relatively safe to hand that off to a machine, because you can catch the mistake instantly.
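The short loop the hosts describe is literally just code plus an instant check. A minimal illustration in Python; the function is a made-up stand-in for AI-drafted code, not taken from any source:

```python
# A "short feedback loop": AI-drafted code can be verified within
# seconds by an automated test. The function below stands in for
# machine-generated code; the assertions are the instant feedback.

def parse_price(text: str) -> float:
    """Hypothetical AI-generated helper: extract a dollar amount."""
    return float(text.strip().lstrip("$").replace(",", ""))

# Immediate verification: a failure here surfaces in milliseconds,
# not three years into a lawsuit.
assert parse_price("$1,234.50") == 1234.50
assert parse_price("  $99 ") == 99.0
print("all checks passed")
```

Contrast that with the legal-contract example coming up: there is no `assert` you can run today that catches a liability clause that detonates in three years.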
Ida: You nailed it. Now contrast that short loop with fields like law, consulting, or corporate strategy.
Allan: Oh, wow. Yeah, that's completely different.
Ida: Right. If an AI drafts a complex legal contract for a corporate merger, you might not know it hallucinated a terrible liability clause until three years later, when the company gets sued for a billion dollars.
Allan: And you can't just hit undo on a billion-dollar lawsuit.
Ida: Exactly. That long feedback loop is the ultimate bottleneck for automation. You simply cannot afford to automate high-stakes jobs with long feedback loops without intense, high-level human oversight.
Allan: Because the liability is just too massive.
Feedback Loops Decide What Automates
Ida: Precisely. This is why the ZMS data shows that AI is primarily augmenting jobs, handling the routine short-loop tasks, rather than completely displacing workers and creating the permanent underclass the essays keep warning us about.
Allan: Okay. So we've established that this AI is supposedly smart enough to do 90% of our coding and is allegedly a year away from replacing the white-collar workforce, but it still requires humans to check its homework so we don't get sued. But surely, surely, this divine superintelligence can at least manage its own web traffic.
Ida: Well, the spring of 2026 would strongly suggest otherwise.
Allan: This might be my favorite part of the entire stack of sources: the chronic infrastructure exhaustion. Anthropic's platform has essentially been buckling under its own weight for months.
Ida: It's been a rough season for them.
Allan: We saw major disruptions in March, then back-to-back crashes on April 7th and 8th, then a massive outage on April 28th that logged 12,000 complaints on Downdetector. And then, just weeks later, on May 8th, Claude went down globally for nearly two hours.
Ida: And that April 8th incident was especially revealing from a mechanical standpoint. The supposedly omnipotent Sonnet 4.6 model entered a literal frozen state.
Allan: Okay, wait. What does a frozen state actually mean, technically? Did it just get confused by a prompt?
Ida: No, no, it's not a cognitive failure. It's a hardware traffic jam. When you send a prompt to an AI, it requires a massive amount of instantaneous computational math, what they call inference.
Allan: Inference, right.
Ida: So if too many prompts hit the servers at the exact same millisecond, the system's memory gets completely overwhelmed. It literally can't process the math fast enough, the queues back up, and the server essentially drops connections to protect itself from physically overheating.
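The mechanics Ida describes can be modeled as a bounded queue that sheds load when demand outruns serving capacity. A toy simulation in Python; every number here is invented for illustration, not a real server spec:

```python
from collections import deque

# Toy model of inference overload: a server with a fixed-capacity
# queue. When prompts arrive faster than inference can drain them,
# new connections are shed (the "high error rates or total halt").

QUEUE_CAPACITY = 4  # made-up queue depth

def simulate(arrivals_per_tick, served_per_tick, ticks):
    """Return (served, dropped) after running the queue for `ticks` steps."""
    queue = deque()
    served = dropped = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(queue) < QUEUE_CAPACITY:
                queue.append("prompt")
            else:
                dropped += 1  # connection shed to protect the server
        for _ in range(min(served_per_tick, len(queue))):
            queue.popleft()
            served += 1
    return served, dropped

# Demand of 3 prompts/tick against capacity to serve 2/tick:
# once the queue fills, drops become inevitable.
print(simulate(arrivals_per_tick=3, served_per_tick=2, ticks=10))
```

With arrivals below capacity, the same loop drops nothing; the failure mode only appears when sustained demand exceeds the serving rate, which is the "success disaster" in miniature.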
Allan: So you get high error rates or just a total halt in output. People on social media were joking that when Claude goes down, the entire global economy instantly reverts to the Stone Age. I just love the visual of a sci-fi super brain essentially putting up an out-to-lunch sign.
Ida: It's pretty funny.
Allan: Like, are we really ready to hand over the global economy to a system that requires a mandatory, unscheduled nap when too many college students log on to summarize PDFs?
Ida: Well, in the industry, they're actually calling this a success disaster.
Allan: A success disaster. That is a fantastic piece of corporate spin.
Ida: It is, but honestly, it's technically accurate. The infrastructure is crashing because the computational demands of generative AI are astronomical, and Anthropic is facing completely unprecedented demand.
Allan: But there is a vital technical nuance we have to highlight here regarding how traffic is routed.
Ida: Oh, right. Because if you were at work on May 8th, you might have noticed something strange.
Allan: Right. This is the difference between the API and the consumer site. Break that down for us.
Ida: An API, or application programming interface, is essentially the digital plumbing that allows two different software programs to talk to each other without a human clicking buttons. Enterprise businesses use the API to bake Claude's brain directly into their own internal company tools.
Allan: So, to use an analogy, the API is like the massive industrial kitchen in the back of a restaurant, fulfilling huge catering orders.
Ida: Yes.
Allan: While Claude.ai, the website you and I log into on our laptops, is just the walk-in dining room in the front.
Ida: That's a perfect analogy. And when Anthropic runs out of computing power, they don't shut down the industrial kitchen.
Allan: They lock the front doors to the dining room.
Ida: Exactly. For the vast majority of these outages, the underlying API actually remains stable. So while the consumer-facing site was giving everyday individuals a blank screen or a spinning wheel, the core technology powering major enterprise clients, who pay a massive premium, by the way, was largely humming along.
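The kitchen-versus-dining-room routing can be sketched as a priority admission policy: reserve most slots for API traffic and let web traffic fight over the remainder. This is purely illustrative, not Anthropic's actual load balancer; all capacities are invented:

```python
# Hedged sketch of priority load balancing: when compute runs short,
# enterprise API traffic keeps its reserved capacity while consumer
# web traffic is throttled first. All numbers are made up.

CAPACITY = 10      # total concurrent request slots
API_RESERVED = 7   # slots guaranteed to paying API clients

def admit(requests):
    """requests: list of 'api'/'web' tags; returns (accepted, rejected)."""
    in_flight = {"api": 0, "web": 0}
    accepted, rejected = [], []
    web_ceiling = CAPACITY - API_RESERVED  # web may only use unreserved slots
    for tag in requests:
        used = in_flight["api"] + in_flight["web"]
        ok = used < CAPACITY and (tag == "api" or in_flight["web"] < web_ceiling)
        if ok:
            in_flight[tag] += 1
            accepted.append(tag)
        else:
            rejected.append(tag)  # the "locked dining room door"
    return accepted, rejected

# Six consumers and six enterprise calls arrive during a crunch:
acc, rej = admit(["web"] * 6 + ["api"] * 6)
print(len(acc), len(rej))
```

Under this policy every API request gets through while half the web requests are turned away, which matches the blank-screen-for-consumers, humming-along-for-enterprise pattern the hosts describe.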
Allan: Because they use load balancing to prioritize the VIP traffic. Okay, so the servers are melting because of the success disaster, and they're rationing compute power like water in a drought. But what exactly is driving this massive, unprecedented surge in traffic right now? Why are millions of people suddenly flocking to Claude, to the point of literally breaking it?
Ida: Well, the sources point to a really wild mix of geopolitical controversy and some incredibly cutthroat corporate strategy.
Why Claude Keeps Going Down
Allan: Yeah, this part was fascinating. It all started when OpenAI, Anthropic's biggest rival, signed a highly controversial deal to deploy their models within the U.S. Defense Department's classified network.
Ida: And a lot of users immediately pushed back on that.
Allan: Big time. You had privacy advocates and tech workers worrying about domestic mass surveillance or the integration of AI into autonomous weapons systems. So Anthropic took a stand: they publicly refused to do the same kind of deal with the military.
Ida: Which led the Pentagon to label Anthropic a supply chain risk.
Allan: Wow.
Ida: Yeah, it was an unprecedented move against an American company. The government essentially said, if you won't build inside our classified walls, we can't trust you.
Allan: But the blowback actually worked in Anthropic's favor, didn't it? Silicon Valley completely rallied behind them.
Ida: They really did. The sources show a mass migration of users deleting ChatGPT and moving to Claude. Claude saw a 60% spike in free users, and their paid subscribers doubled in a very short window, which is nuts. And their projected 2026 revenue is hitting a staggering $15 billion.
Allan: Fifteen billion.
Ida: That's 10x year-over-year growth. Analysts actually think they could overtake OpenAI in revenue by mid-2026.
Allan: And this ties directly back to Dario Amodei's behavior on the world stage, doesn't it? Because he is out there loudly warning about the most dangerous window in AI history, painting pictures of totalitarian nightmares to lobby the U.S. government to block the sale of advanced computer chips to China. But I have to stop you there and play devil's advocate for a second, because the sources suggest this might be a brilliant, cynical business move.
Ida: It's definitely a theory.
API Vs Website Who Gets Priority
Allan: They claim Anthropic is essentially weaponizing safety to crush international competition. Lobbying to block chips to China in the name of global safety also perfectly prevents Chinese AI labs like Alibaba, which makes the highly efficient Qwen models, from getting the hardware they need to create a model that is 97% as good as Claude for a fraction of the price.
Ida: It does happen to work out perfectly for them.
Allan: Right. But isn't it also possible Amodei genuinely believes the existential risk is real? I mean, given that he just refused a lucrative Pentagon contract, maybe he actually is terrified of autonomous weapons.
Ida: Oh, it's entirely possible he believes every word he's saying about the existential risks. We can't know his internal motives, and as we analyze this, we're not taking a side on his personal beliefs.
Allan: Right, of course.
Ida: But if we look purely at the strategic outcome of his lobbying, regardless of his intent, the effect is undeniable. By pushing for policies that slow down foreign AI development under the banner of safety, Anthropic protects its own rapid $15 billion growth from cheaper international alternatives.
Allan: It's masterful. Whether you mean to or not, you throttle your cheapest market competitors, sustain 10x revenue growth, and manage to sound like the savior of humanity while doing it.
Ida: Exactly. And the apocalyptic predictions sustain the massive hype needed to attract venture capital, which they desperately need to fix those crashing servers we talked about.
Allan: But here is where the narrative hits a brick wall. Because even with $15 billion in revenue and brilliant geopolitical maneuvering, Silicon Valley cannot escape the physical limits of the universe.
Ida: Nope. Physics always wins.
Allan: Always. Despite all the sci-fi promises of digital gods writing their own code in Escher-like loops, there are hard, unyielding bottlenecks. And Dario Amodei admits this in his essay. The AI might be able to design software at lightning speed, but it cannot instantly mine silicon out of the earth.
Ida: Right.
Allan: It cannot magically speed up the manufacturing of physical computer chips in a factory in Taiwan.
Ida: Let's actually look at the mechanism of that bottleneck. When we talk about training a massive new AI model, it requires literal compute time.
Allan: Meaning just time on the clock.
Ida: Exactly. The chips process data, but there's a physical limit to how fast data can travel from the memory banks into the processing cores. It's called the von Neumann bottleneck.
Allan: The von Neumann bottleneck.
Ida: Yeah. You can have the smartest algorithms in the world, but if they're waiting on electrons to physically move across a microscopic piece of silicon, you just have to wait. You cannot fast-forward physics.
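One common way to quantify Ida's point is the roofline model: attainable speed is the smaller of raw compute throughput and memory bandwidth times arithmetic intensity. A back-of-envelope sketch; the hardware numbers are hypothetical, not any real chip's spec sheet:

```python
# Back-of-envelope sketch of why moving data, not doing math, can be
# the ceiling (the von Neumann bottleneck). Illustrative numbers only.

peak_flops = 100e12    # hypothetical accelerator: 100 TFLOP/s of raw math
mem_bandwidth = 2e12   # hypothetical: 2 TB/s from memory to the cores

def attainable_flops(flops_per_byte):
    """Roofline model: performance is capped by compute OR memory traffic."""
    return min(peak_flops, mem_bandwidth * flops_per_byte)

# A vector add does roughly 1 flop per 12 bytes moved (two fp32 reads,
# one write), so it is starved by memory no matter how fast the cores are:
low = attainable_flops(1 / 12)
# A large matrix multiply reuses each byte many times, say 50 flops/byte,
# so it can actually keep the math units busy:
high = attainable_flops(50)

print(f"memory-bound op reaches {low / peak_flops:.2%} of peak")
print(f"compute-bound op reaches {high / peak_flops:.2%} of peak")
```

Under these made-up numbers the memory-bound operation touches well under 1% of the chip's theoretical math throughput: the electrons-in-transit problem in arithmetic form.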
Allan: And the physical constraints aren't just hardware, either. It's the humans.
Ida: Oh, the OpenClaw story.
Allan: Yes. We have to talk about OpenClaw, because this story perfectly encapsulates the fragility of this entire ecosystem.
Safety Politics And A User Migration
Ida: So OpenClaw was meant to be the agile, open-source alternative to the corporate AI giants. It was a wildly popular project that allowed developers to run powerful AI tools locally.
Allan: But in late April 2026, it suffered a catastrophic meltdown. Users were reporting that their installations were hopelessly trapped in plugin dependency repair loops. The entire system basically choked.
Ida: It did.
Allan: So what actually is a dependency loop, mechanically speaking? Why did it break?
Ida: Think of modern software like a massive digital house of cards. A dependency graph means program A relies on a specific piece of code in program B to function, and program B relies on program C. So if someone updates a single comma in program C, it can cause a cascade of failures all the way up to program A. Managing those dependencies requires meticulous oversight.
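Ida's house-of-cards picture maps directly onto cycle detection in a dependency graph: a "repair loop" is what you get when the chain of requirements curls back on itself. A minimal sketch in Python; the package names are invented, and this is not OpenClaw's actual resolver:

```python
# Toy dependency graph in the shape Ida describes: A needs B, B needs C.
# A cycle introduced at the bottom cascades all the way up, and an
# automated "repair" can then chase its own tail forever.

def find_cycle(deps):
    """Return a package caught in a dependency cycle, or None."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / resolving / resolved
    state = {pkg: WHITE for pkg in deps}

    def visit(pkg):
        state[pkg] = GRAY              # currently being resolved
        for needed in deps.get(pkg, []):
            if state.get(needed) == GRAY:
                return needed          # waiting on something that waits on us
            if state.get(needed) == WHITE and (hit := visit(needed)):
                return hit
        state[pkg] = BLACK
        return None

    for pkg in deps:
        if state[pkg] == WHITE and (hit := visit(pkg)):
            return hit
    return None

healthy = {"A": ["B"], "B": ["C"], "C": []}
broken  = {"A": ["B"], "B": ["C"], "C": ["A"]}  # an update made C need A

print(find_cycle(healthy))  # no loop to report
print(find_cycle(broken))   # names the package stuck in the loop
```

The point of the sketch: detecting the loop is mechanical, but deciding which edge to break is a judgment call, which is exactly the kind of oversight that fell on one maintainer.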
Allan: And the sources reveal that the entire OpenClaw project heavily relied on one human being to do that oversight: Peter Steinberger, the founder.
Ida: He was the linchpin. He was manually reviewing the code, handling the packaging, resolving those dependency conflicts.
Allan: Wait, so this massive global alternative to Claude and ChatGPT relied on one guy? What happened when he quit?
Ida: Well, he transitioned to a job at OpenAI. And without him actively maintaining that house of cards, the sheer weight of the complex dependency graphs simply crushed the system. The automated plugins tried to fix themselves, got caught in conflicting loops, and completely melted down.
Allan: I love that this exists, but also, why? We're trying to build this sci-fi digital god, but underneath it all, it's held together by duct tape and exhausted humans frantically trying to patch leaky code.
Ida: Literally duct tape.
Allan: If the guy who updates the spreadsheet takes a different job, the open-source revolution just grinds to a halt.
Ida: It really makes you ask what this says about us as a society right now. If we connect all these dots, the apocalyptic predictions, the server crashes, the single points of human failure, the real danger highlighted in these sources isn't just the AI itself. It's the human panic surrounding it.
Allan: How so? What's the actual danger there?
Ida: The tragedy is that the hype, the claim that 50% of white-collar jobs are vanishing in months, is causing real harm today. You have young people dropping out of college to chase AI startups, or making fear-based career choices, because they genuinely believe they have almost no time before they become part of a permanent underclass.
Allan: Wow.
Ida: They're reacting to the aggressive marketing, not the reality of the buffering servers and the Jenga-tower dependency loops.
Allan: That is such a vital takeaway. Which brings us to the ultimate question for you listening right now. You're probably sitting at your desk wondering, what does this actually mean for my job? Am I going to be replaced by Claude 4.6 while I'm out getting a sandwich?
Hedged Preparation And Offline Resilience
Ida: And the data strongly suggests no. The transition we're seeing is about augmentation, not total replacement. The future belongs to people who practice what some analysts in the sources are calling hedged preparation.
Allan: Hedged preparation. I really like that terminology. It means you don't panic, but you don't stick your head in the sand either.
Ida: Exactly. You learn to work alongside these tools, whether it's Claude or ChatGPT or an aggregator like Zemeth, where you can access 25 different models at once. You build a daily habit of using them to handle the drudgery, the short-feedback-loop tasks we talked about earlier.
Allan: You let the AI do the execution, but you provide the judgment.
Ida: Yes. You focus your career on the skills that require deep human context, empathy, and navigating those long feedback loops where a mistake isn't caught for years. Because while AI might be getting brilliant at how to execute a task, it's still fundamentally terrible at knowing what task actually needs executing in the real world.
Allan: It's the centaur phase of work. The human steers the machine.
Ida: Exactly.
Allan: And here's a final thought I want to leave you with, something to mull over the next time you find yourself staring at an AI error message.
Ida: Oh, it'll happen. Undoubtedly.
Allan: If our entire global economy is rushing headlong to rely on a handful of incredibly complex AI models, models that require $15 billion of infrastructure and mind-bending algorithms, but can still freeze up and enter a panic state on a random Tuesday morning because too many people logged on...
Ida: Right.
Allan: ...then perhaps the ultimate superpower in the workforce of the future won't actually be prompting AI at all. Maybe, just maybe, the most valuable skill you can possess will simply be retaining the resilience, the mechanical knowledge, and the ability to just do things the old-fashioned, offline way when the digital god is buffering.