The Deepdive

The Superintelligence That Can’t Handle Tuesday Traffic

Allen & Ida Season 3 Episode 57


A system smart enough to generate thousands of lines of code can still collapse into silence when too many people ask it to summarize a PDF. That’s the central absurdity of the AI boom, and we lean into it: the promise of near-AGI colliding with the messy reality of inference bottlenecks, overloaded memory, and very expensive servers that still need “unscheduled naps.”

We dig into the boldest claims shaping the conversation right now, including aggressive AGI timelines, the idea of an AI-powered billion-dollar solopreneur, and the provocative “AI writing AI” loop where coding tools generate huge chunks of the software stack. Then we contrast that hype with what real-world usage data suggests: AI can accelerate work, but the last mile is where correctness, security, and accountability live. The key concept is feedback loops, because tasks with fast verification (like code you can test immediately) automate far more safely than long-loop domains like law, consulting, or strategy where mistakes can surface years later as billion-dollar problems.

From there we follow the infrastructure story behind the headlines: why Claude outages happen, what a “frozen state” means mechanically, and how companies use load balancing to prioritize the Claude API for enterprise clients while consumer traffic gets throttled. We also connect the traffic surge to geopolitics and corporate strategy: backlash to military deployment deals, user migration, revenue growth, and the incentives wrapped up in “AI safety” lobbying and chip policy.

We end with a calm, actionable approach for knowledge workers: hedged preparation. Use AI to kill the drudgery, keep humans responsible for judgment, and maintain enough offline resilience to function when the digital genius buffers. If you found this useful, subscribe, share it with a friend who’s doomscrolling about AI job automation, and leave a review with your take: which part of your work has the longest feedback loop?


The Modern AI Paradox

Allan

I want you to imagine, just for a second, the glorious absurdity of the current technological moment we're living in right now.

Ida

Oh, it's completely absurd.

Allan

Right. So picture this. You possess an omnipotent digital assistant. A system that is so profoundly advanced, it can just sit down and build a working C compiler in Rust, completely unassisted in like two weeks.

Ida

Which, by the way, if you don't write code, that is a breathtaking feat of complex engineering.

Allan

Exactly. It's like I don't know, it's like asking someone to build a functioning combustion engine out of scrap metal while blindfolded. But then you ask this exact same digital god a basic question on a random Tuesday morning, and it essentially has a panic attack.

Ida

Yep. Completely freezes up.

Allan

Gives you the spinning wheel of death.

Ida

It's the ultimate modern paradox. We have somehow managed to build a digital Einstein, but it apparently requires a mandatory, unscheduled nap the moment too many people try to talk to it at once.

Big AGI Promises And Timelines

Allan

And that gap, you know, that hilarious and really profound chasm between Silicon Valley's utopian promises and the very human reality of buffering servers, that is exactly what we are exploring today.

Ida

We have a lot to get through.

Allan

We really do. We're taking a deep dive into a massive stack of recent sources from the spring of 2026. We've got server outage logs, in-depth tech analyses, economic impact reports, and the sweeping 20,000-word essays from Anthropic CEO Dario Amodei. 20,000 words. I know. Who has the time, right? But our mission for this conversation is to unpack what happens when the unstoppable force of artificial general intelligence meets the immovable object of global web traffic.

Ida

And the timing on this deep dive couldn't be better for you listening right now. If you're a knowledge worker or honestly just anyone with a Wi-Fi connection, you are caught right in the middle of this tension. Oh, absolutely. You're being told simultaneously that your job is about to disappear in months and that the software meant to replace you is currently experiencing a quote unquote major outage.

Allan

Right, right. So to really understand this, I feel like we have to start with the baseline of what we're expecting this technology to do, or at least what the people building it are promising us it will do.

Ida

Right, the hype cycle.

Allan

Exactly. So Dario Amodei recently published this sprawling essay called The Adolescence of Technology. And the predictions in there are, well, I mean, they're terrifying if you enjoy being employed and receiving a paycheck.

Ida

The timelines he puts out are incredibly aggressive. He's predicting that we could reach artificial general intelligence (AGI) in just one to three years.

Allan

One to three years.

Ida

Yeah. He talks about AI matching a country of geniuses by 2035 and superhuman AI arriving by 2027.

Allan

And he gets super specific about the economics of it, too. He's forecasting the first billion-dollar solopreneur by 2026.

Ida

That concept is wild.

Allan

Trying to visualize that. One person sitting in a room managing a billion-dollar empire entirely powered by AI. The AI does the coding, the finance, the legal work. He even stated that up to 50% of entry-level white-collar jobs could simply vanish in the next one to five years.

Ida

And the underlying mechanism driving this confidence, at least according to Amodei, is what they call the AI writing AI loop.

Allan

Okay, explain that.

Ida

Well, Anthropic's chief product officer recently revealed that effectively 100% of the code for their flagship model, Claude, is now written by their internal tool, Claude Code.

Allan

Wait, wait. I want to make sure I understand the mechanics of that. Are we talking about a literal Escher painting here?

Ida

Like the drawing of a hand drawing itself.

Allan

Yeah. Is the AI actually just building itself?

Ida

That is a fantastic way to visualize it. Yeah. So in software development, there's something called a pull request. Basically, an engineer writes a chunk of new code and submits a request to pull it into the main project. Okay. Historically, a human writes it and another human reviews it line by line to make sure it won't break anything. But the sources indicate Anthropic engineers are now regularly shipping two-to-three-thousand-line pull requests that were generated entirely by the AI.

Allan

Two to three thousand lines? That's a massive amount of code.

AI Writing AI And Code Reality

Ida

It is. The human engineers aren't really writing anymore. They're just acting as exhausted editors for an extremely prolific machine. And because of this, Amodei claims they are maybe six to twelve months away from AI doing end-to-end software engineering without any human intervention at all.

Allan

Okay, let's unpack this. Because if you look closely at the history of these tech promises, something really funny starts to happen.

Ida

Oh, I know where you're going with this.

Allan

I look at these Silicon Valley CEOs making these grand predictions, and honestly, they remind me of an end-of-the-world cult. Yes. You know the ones? Yeah. The leader predicts the apocalypse is coming on a Tuesday, and then Wednesday rolls around and the world is still here, so they quietly run to the printer to make new flyers with an updated date for the doom.

Ida

I completely see the parallel you're drawing. It's the shifting goalposts.

Allan

Exactly that. Because in late 2024, Amodei predicted transformative AI could arrive as early as 2026.

Ida

And we are currently in 2026.

Allan

Right. But in these more recent essays we're looking at, the timeline has quietly slipped to one to two years out, or before 2030. The impending doom just keeps getting rescheduled. I feel like we need to look at what's actually happening on the ground, not just the marketing pitch.

Ida

Well, the empirical data actually backs up your skepticism. ZMS 2026 Economic Index analyzed two million real-world AI interactions.

Allan

Okay, so actual usage data.

Ida

Exactly. And the reality on the ground contradicts the most alarming hype. Amodei says his engineers are handing over almost all of their code to AI, but everyday developers report it's actually closer to 80 or 90 percent, not 100%.

Allan

Which, I mean, that sounds like a small difference, 10 or 20 percent. But that missing 10 to 20 percent is the entire ballgame, isn't it?

Ida

It absolutely is.

Allan

Having an AI draft a quick Python script to scrape a website is completely different from asking an AI to architect a secure scalable software system from scratch.

Ida

Right. And it comes down to the fundamental difference between automating a task and automating a whole job. The ZMS report introduces a crucial concept here: the feedback loop.

Allan

Oh, I love this part of the sources. Let me see if I can translate this for everyone. A feedback loop is basically how long it takes to find out if you messed up.

Ida

Yep, perfectly said.

Allan

So coding has a very short feedback loop. If the AI writes a piece of code, you compile it, run a test, and you know within seconds if it works or if the whole program crashes.

Ida

The failure is immediate.

Allan

Right. So it's relatively safe to hand that off to a machine because you can catch the mistake instantly.
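To make the short-loop idea concrete, here's a minimal Python sketch (the function and its checks are hypothetical, invented just for illustration): an AI-drafted function gets verified seconds after it's written, so a bug surfaces immediately instead of years later.

```python
# Imagine this function was drafted by an AI assistant.
def slugify(title: str) -> str:
    """Turn a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

# Short feedback loop: run the checks immediately after generation.
# A wrong implementation fails right here, not in production years later.
assert slugify("The Deepdive") == "the-deepdive"
assert slugify("  Hello   World ") == "hello-world"
print("all checks passed")
```

A legal contract has no equivalent of that assert line; its "test suite" is a lawsuit years down the road.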

Ida

You nailed it. Now contrast that short loop with fields like law, consulting, or corporate strategy.

Allan

Oh wow. Yeah, that's completely different.

Ida

Right. If an AI drafts a complex legal contract for a corporate merger, you might not know that it hallucinated a terrible liability clause until three years later when the company gets sued for a billion dollars.

Allan

And you can't just hit undo on a billion-dollar lawsuit.

Ida

Exactly. That long feedback loop is the ultimate bottleneck for automation. You simply cannot afford to automate high-stakes jobs with long feedback loops without intense high-level human oversight.

Allan

Because the liability is just too massive.

Feedback Loops Decide What Automates

Ida

Precisely. This is why the ZMS data shows that AI is primarily augmenting jobs, handling the routine short loop tasks rather than completely displacing workers and creating this permanent underclass that the essays keep warning us about.

Allan

Okay. So we've established that this AI is supposedly smart enough to do 90% of our coding and is allegedly a year away from replacing the white-collar workforce, but it still requires humans to check its homework so we don't get sued. Right. But surely, surely this divine superintelligence can at least manage its own web traffic.

Ida

Well, the spring of 2026 would strongly suggest otherwise.

Allan

This might be my favorite part of the entire stack of sources. The chronic infrastructure exhaustion. I mean, Anthropic's platform has essentially been buckling under its own weight for months.

Ida

It's been a rough season for them.

Allan

We saw major disruptions in March. Then back-to-back crashes on April 7th and 8th, then a massive outage on April 28th that logged 12,000 complaints on Downdetector. And then just weeks later, on May 8th, Claude went down globally for nearly two hours.

Ida

And that April 8th incident was especially revealing from a mechanical standpoint. The supposedly omnipotent Sonnet 4.6 model entered a literal frozen state.

Allan

Okay, wait. What does a frozen state actually mean technically? Like, did it just get confused by a prompt?

Ida

No, no, it's not a cognitive failure. It's a hardware traffic jam. When you send a prompt to an AI, it requires a massive amount of instantaneous computational math, what they call inference.

Allan

Inference, right.

Ida

So if too many prompts hit the servers at the exact same millisecond, the system's memory gets completely overwhelmed. It literally can't process the math fast enough, the queues back up, and the server essentially drops the connection to protect itself from physically overheating.

Allan

So you get high error rates or just a total halt in output. People on social media were joking that Claude goes down and the entire global economy instantly reverts to the Stone Age. I just love the visual of a sci-fi super brain essentially putting up an out-to-lunch sign.
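Ida's traffic-jam explanation can be sketched as a toy queueing model, nothing like Anthropic's actual serving stack, with made-up numbers: requests arrive faster than inference can drain them, the bounded queue fills, and new connections get shed.

```python
from collections import deque

QUEUE_LIMIT = 3          # how many requests the server will hold (illustrative)
SERVED_PER_TICK = 1      # inference throughput per time step (illustrative)

queue = deque()
dropped = 0
served = 0

# Each tick, 2 requests arrive but only 1 can be served: demand > capacity.
for tick in range(5):
    for _ in range(2):               # arrivals
        if len(queue) < QUEUE_LIMIT:
            queue.append(f"req-{tick}")
        else:
            dropped += 1             # connection shed: the "frozen state"
    for _ in range(SERVED_PER_TICK): # inference drains the queue
        if queue:
            queue.popleft()
            served += 1

print(f"served={served} dropped={dropped} still_queued={len(queue)}")
# → served=5 dropped=3 still_queued=2
```

The point of the sketch: nothing "breaks" cognitively. The model is fine; arithmetic about arrival rate versus service rate does the rest.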

Ida

It's pretty funny.

Allan

Like, are we really ready to hand over the global economy to a system that requires a mandatory, unscheduled nap when too many college students log on to summarize PDFs?

Ida

Well, in the industry, they are actually calling this a success disaster.

Allan

A success disaster. That is a fantastic piece of corporate spin.

Ida

It is, but honestly, it's technically accurate. The infrastructure is crashing because the computational demands of generative AI are astronomical, and Anthropic is facing completely unprecedented demand.

Allan

But there is a vital technical nuance we have to highlight here regarding how traffic is routed.

Ida

Oh, right. Because if you were at work on May 8th, you might have noticed something strange.

Allan

Right. This is the difference between the API and the consumer site.

Ida

Sure. An API, or application programming interface, is essentially the digital plumbing that allows two different software programs to talk to each other without a human clicking buttons. Enterprise businesses use the API to bake Claude's brain directly into their own internal company tools.

Allan

So to use an analogy, the API is like the massive industrial kitchen in the back of a restaurant fulfilling huge catering orders.

Ida

Yes.

Allan

While Claude.ai, the website you and I log into on our laptops, is just the walk-in dining room in the front.

Ida

That's a perfect analogy. And when Anthropic runs out of computing power, they don't shut down the industrial kitchen.

Allan

They lock the front doors to the dining room.

Ida

Exactly. For the vast majority of these outages, the underlying API actually remains stable. So while the consumer-facing site was giving everyday individuals a blank screen or a spinning wheel, the core technology powering major enterprise clients, who pay a massive premium, by the way, was largely humming along.
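The prioritization Ida describes can be sketched as priority-based admission control. This is a guess at the shape of the mechanism, not Anthropic's real load balancer, and the tier names and capacity are invented: when compute runs short, enterprise API traffic is admitted first and consumer requests get throttled.

```python
def admit(requests, capacity):
    """Admit up to `capacity` requests, highest-priority tier first."""
    priority = {"enterprise_api": 0, "consumer_web": 1}  # lower = served first
    ordered = sorted(requests, key=lambda r: priority[r["tier"]])
    return ordered[:capacity], ordered[capacity:]

requests = [
    {"id": 1, "tier": "consumer_web"},
    {"id": 2, "tier": "enterprise_api"},
    {"id": 3, "tier": "consumer_web"},
    {"id": 4, "tier": "enterprise_api"},
]

# Only 2 slots of compute for 4 requests: the kitchen stays open,
# the dining room gets the spinning wheel.
admitted, throttled = admit(requests, capacity=2)
print([r["tier"] for r in admitted])   # both surviving slots are enterprise
print([r["tier"] for r in throttled])  # both consumers are shed
```

Because Python's sort is stable, requests within a tier also keep their arrival order, which is roughly the fairness you'd want inside each class.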

Allan

Because they use load balancing to prioritize the VIP traffic. Okay, so the servers are melting because of the success disaster, and they are rationing compute power like water in a drought. But what exactly is driving this massive, unprecedented surge in traffic right now? Why are millions of people suddenly flocking to Claude to the point of literally breaking it?

Ida

Well, the sources point to a really wild mix of geopolitical controversy and some incredibly cutthroat corporate strategy.

Why Claude Keeps Going Down

Allan

Yeah, this part was fascinating. It all started when OpenAI, Anthropic's biggest rival, signed a highly controversial deal to deploy their models within the U.S. Defense Department's classified network.

Ida

And a lot of users immediately pushed back on that.

Allan

Big time. You had privacy advocates and tech workers worrying about domestic mass surveillance or the integration of AI into autonomous weapons systems. So Anthropic took a stand. They publicly refused to do the same kind of deal with the military.

Ida

Which led the Pentagon to label Anthropic a supply chain risk.

Allan

Wow.

Ida

Yeah, it was an unprecedented move against an American company. The government essentially said, if you won't build inside our classified walls, we can't trust you.

Allan

But the blowback actually worked in Anthropic's favor, didn't it? Silicon Valley completely rallied behind them.

Ida

They really did. The sources show a mass migration of users deleting ChatGPT and moving to Claude. Claude saw a 60% spike in free users, and their paid subscribers doubled in a very short window. That's nuts. And their projected 2026 revenue is hitting a staggering $15 billion.

Allan

15 billion.

Ida

That is a 10x year-over-year growth. Analysts actually think they could overtake OpenAI in revenue by mid-2026.

Allan

And this ties directly back to Dario Amodei's behavior on the world stage, doesn't it? Because he is out there loudly warning about the most dangerous window in AI history. He's painting pictures of totalitarian nightmares to lobby the U.S. government to block the sale of advanced computer chips to China. Right. But I have to stop you there and play devil's advocate for a second. Because the sources suggest this might be a brilliant, cynical business model.

Ida

It's definitely a theory.

API Vs Website Who Gets Priority

Allan

They claim Anthropic is essentially weaponizing safety to crush international competition. Because lobbying to block chips to China in the name of global safety also perfectly prevents Chinese AI labs like Alibaba, maker of the highly efficient Qwen models, from getting the hardware they need to create a model that is 97% as good as Claude, but for a fraction of the price.

Ida

It does happen to work out perfectly for them.

Allan

Right. But isn't it also possible Amodei genuinely believes the existential risk is real? I mean, given that he just refused a lucrative Pentagon contract, maybe he actually is terrified of autonomous weapons.

Ida

No, it's entirely possible he believes every word he is saying about the existential risks. We can't know his internal motives. And as we're analyzing this, we're not taking a side on his personal beliefs.

Allan

Right, of course.

Ida

But if we look purely at the strategic outcome of his lobbying, regardless of his intent, the effect is undeniable. By pushing for policies that slow down foreign AI development under the banner of safety, Anthropic protects its own rapid $15 billion growth from cheaper international alternatives.

Allan

It's masterful. Whether you mean to or not, you throttle your cheapest market competitors, sustain a 10x revenue growth, and manage to sound like the savior of humanity while doing it.

Ida

Exactly. And the apocalyptic predictions sustain the massive hype needed to attract venture capital, which they desperately need to fix those crashing servers we talked about.

Allan

But here is where the narrative hits a brick wall. Because even with $15 billion in revenue and brilliant geopolitical maneuvering, Silicon Valley cannot escape the physical limits of the universe.

Ida

Physics always wins.

Allan

Always. Despite all the sci-fi promises of digital gods writing their own code in Escher-like loops, there are hard, unyielding bottlenecks. And Dario Amodei admits this in his essay. The AI might be able to design software at lightning speed, but it cannot instantly mine silicon out of the earth.

Ida

Right.

Allan

It cannot magically speed up the manufacturing of physical computer chips in a factory in Taiwan.

Ida

Let's actually look at the mechanism of that bottleneck. When we talk about training a massive new AI model, it requires literal temporal compute time.

Allan

Meaning just time on the clock.

Ida

Exactly. The chips process data, but there is a physical limit to how fast data can travel from the memory banks into the processing cores. It's called the von Neumann bottleneck.

Allan

The von Neumann bottleneck.

Ida

Yeah. You can have the smartest algorithms in the world, but if they are waiting on electrons to physically move across a microscopic piece of silicon, you just have to wait. You cannot fast forward physics.
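Ida's waiting-on-electrons point can be put in numbers with a back-of-the-envelope roofline estimate. The bandwidth and FLOP figures below are illustrative round numbers, not any real chip's specs: if an operation does too little math per byte moved from memory, the memory bus, not the processor, sets the speed limit.

```python
# Illustrative hardware numbers, not a real chip's datasheet.
PEAK_FLOPS = 100e12        # 100 TFLOP/s of raw compute
MEM_BANDWIDTH = 2e12       # 2 TB/s from memory banks to the cores

def attainable_flops(flops_per_byte):
    """Roofline model: throughput is the lesser of the compute ceiling
    and bandwidth times arithmetic intensity (FLOPs per byte moved)."""
    return min(PEAK_FLOPS, MEM_BANDWIDTH * flops_per_byte)

# Token-by-token inference streams huge weight matrices through memory
# and does relatively little math per byte: low arithmetic intensity.
low = attainable_flops(2)     # 2 FLOPs per byte  → bandwidth-bound
high = attainable_flops(200)  # 200 FLOPs per byte → compute-bound

print(f"low intensity:  {low / 1e12:.0f} TFLOP/s of the 100 available")
print(f"high intensity: {high / 1e12:.0f} TFLOP/s of the 100 available")
```

Under these toy numbers, the low-intensity workload only ever reaches 4 of the 100 available teraflops, no matter how smart the algorithm is. The cores spend the rest of the time waiting on memory.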

Allan

And the physical constraints aren't just hardware either. It's the humans.

Ida

Oh, the OpenClaw story.

Allan

Yes. We have to talk about OpenClaw because this story perfectly encapsulates the fragility of this entire ecosystem.

Safety Politics And A User Migration

Ida

So OpenClaw was meant to be the agile, open source alternative to the corporate AI giants. It was a wildly popular project that allowed developers to run powerful AI tools locally.

Allan

But in late April 2026, it suffered a catastrophic meltdown. Users were reporting that their installations were hopelessly trapped in plug-in dependency repair loops. The entire system basically choked.

Ida

It did.

Allan

So what actually is a dependency loop, mechanically speaking? Like why did it break?

Ida

Think of modern software like a massive digital house of cards. A dependency graph means program A relies on a specific piece of code in program B to function, and program B relies on program C. Okay, makes sense. So if someone updates a single comma in program C, it can cause a cascade of failures all the way up to program A. Managing those dependencies requires meticulous oversight.
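Ida's house-of-cards picture can be sketched as a tiny dependency graph (the program names A, B, and C are her hypotheticals): change C, and everything that transitively depends on it must be re-checked.

```python
# Each program lists what it depends on directly: A needs B, B needs C.
depends_on = {
    "A": ["B"],
    "B": ["C"],
    "C": [],
}

def affected_by(changed, graph):
    """Everything that transitively depends on `changed`
    and therefore has to be re-verified after the change."""
    hit = set()
    frontier = [changed]
    while frontier:
        current = frontier.pop()
        for program, deps in graph.items():
            if current in deps and program not in hit:
                hit.add(program)
                frontier.append(program)
    return hit

# One "comma" changes in C, and the failure cascades all the way up to A.
print(sorted(affected_by("C", depends_on)))  # → ['A', 'B']
```

In a three-node graph this is trivial; in a real plugin ecosystem with thousands of nodes, keeping this cascade under control is exactly the meticulous oversight Ida describes.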

Allan

And the sources reveal that the entire OpenClaw project heavily relied on one human being to do that oversight. Peter Steinberger, the founder.

Ida

He was the linchpin. He was manually reviewing the code, handling the packaging, resolving those dependency conflicts.

Allan

Wait, so this massive global alternative to Claude and ChatGPT relied on one guy. What happened when he quit?

Ida

Well, he transitioned to a job at OpenAI. And without him actively maintaining that house of cards, the sheer weight of the complex dependency graphs simply crushed the system. The automated plugins tried to fix themselves, got caught in conflicting loops, and completely melted down.

Allan

I love that this exists, but also why? We are trying to build this sci-fi digital god, but underneath it all, it's held together by duct tape and exhausted humans frantically trying to patch leaky code.

Ida

Literally duct tape.

Allan

If the guy who updates the spreadsheet takes a different job, the open source revolution just grinds to a halt.

Ida

It really makes you ask what does this say about us as a society right now? If we connect all these dots, the apocalyptic predictions, the server crashes, the single points of human failure, the real danger highlighted in these sources isn't just the AI itself. Right. It's the human panic surrounding it.

Allan

How so? Like what's the actual danger there?

Ida

The tragedy here is that the hype, the claims that 50% of white-collar jobs will vanish within months, is causing real harm today. You have young people dropping out of college to chase AI startups or making fear-based career choices because they genuinely believe they have almost no time before they become part of a permanent underclass.

Allan

Wow.

Ida

They are reacting to the aggressive marketing, not the reality of the buffering servers and the Jenga Tower dependency loops.

Allan

That is such a vital takeaway. Which brings us to the ultimate question for you, listening to this right now. You're probably sitting at your desk wondering what does this actually mean for my job? Right. Am I going to be replaced by Claude 4.6 while I'm out getting a sandwich?

Hedged Preparation And Offline Resilience

Ida

And the data strongly suggests no. The transition we are seeing is about augmentation, not total replacement. The future belongs to people who practice what some analysts and the sources are calling hedged preparation.

Allan

Hedged preparation. I really like that terminology. It means you don't panic, but you don't stick your head in the sand either.

Ida

Exactly. You learn to work alongside these tools, whether it's Claude or ChatGPT or an aggregator like Zemeth, where you can access 25 different models at once. You build a daily habit of using them to handle the drudgery, the short feedback loop tasks we talked about earlier.

Allan

You let the AI do the execution, but you provide the judgment.

Ida

Yes. You focus your career on the skills that require deep human context, empathy, and navigating those long feedback loops where a mistake isn't caught for years. Because while AI might be getting brilliant at how to execute a task, it is still fundamentally terrible at knowing what task actually needs executing in the real world.

Allan

It's the centaur phase of work. The human steers the machine.

Ida

Exactly.

Allan

And here's a final thought I want to leave you with something to mull over the next time you find yourself staring at an AI error message.

Ida

Oh, I'm sure it'll happen. Undoubtedly.

Allan

If our entire global economy is rushing headlong to rely on a handful of incredibly complex AI models, models that require $15 billion of infrastructure and mind-bending algorithms, but can still freeze up and enter a panic state on a random Tuesday morning because too many people logged on.

Ida

Right.

Allan

Perhaps the ultimate superpower in the workforce of the future won't actually be prompting AI at all. That's a great point. Maybe. Just maybe. The most valuable skill you can possess will simply be retaining the resilience, the mechanical knowledge, and the ability to just do things the old fashioned offline way when the digital god is buffering.