The Deepdive

AI Brain Fry: When Bad Management Meets GenAI

Allen & Ida Season 3 Episode 55

Your company didn’t hit an “AI limit.” It hit a human limit. We walk through the real-world generative AI workplace: sales teams quietly building rogue features, HR teams dealing with a new kind of cognitive exhaustion, and executives sending polished messages that sound empathetic but create distance from reality. The big twist is that the AI tools are often working exactly as designed, and that’s the problem. They amplify whatever leadership system they get plugged into.

We dig into research on AI productivity and why so many gains vanish into rework, editing, and verification. Then we unpack Boston Consulting Group’s term “AI brain fry,” a measurable cognitive overload state tied to decision fatigue and major mistakes, hitting hardest in text-heavy functions like marketing and HR. If you’ve been stuck in a loop of prompting, checking, and re-prompting, you’ll recognize the pattern instantly.

From there, we zoom out to leadership: the taxes of bad leadership, the trust tax that turns curiosity into threats, the alignment tax that fuels vibe coding, and the product slop that appears when teams skip discovery because AI makes delivery feel instant. We also confront the collapse of middle management, the loss of the translation layer, and what disasters like Zillow’s algorithmic overreach reveal about context and accountability. Finally, we explore a hopeful counterintuitive idea: AI as executive coach, “algorithmic humility,” and why taste and judgment may become the most valuable professional skills in the AI era. If this made you rethink how generative AI should be deployed, subscribe, share with a leader on your team, and leave a review. What part of AI adoption is causing the most friction where you work?

The AI Utopia Meets Reality

Allan

So picture this. You are uh walking through a modern corporate office right now.

Ida

Okay, I'm picturing it.

Allan

And on one floor, the sales team is secretly using generative AI to build their own software features entirely behind the engineering team's back.

Ida

Which is just a nightmare waiting to happen.

Allan

Right. A total nightmare. And then you walk down the hall, and the HR department is just staring blankly at their screens, suffering from this completely new, like mathematically measurable form of cognitive exhaustion.

Ida

Oh, absolutely.

Allan

And upstairs, the CEO is sending out these beautifully written, highly empathetic, just deeply moving emails to the whole company.

Ida

Let me guess.

Allan

Yep. Generated entirely by a robot.

Ida

Of course they are.

Allan

I mean, uh, we were promised this absolute AI utopia, right? A frictionless world of hyperproductivity where nobody does grunt work ever again.

Ida

That was the pitch. Yeah.

Allan

But instead, it seems like we're getting what researchers are now calling AI brain fry and just a massive system-wide amplification of bad management.

Ida

It's the grand paradox we really find ourselves in right now. I mean, we treated AI like it was this magical solution to human problems, but it's not. It's really just an amplifier.

Allan

An amplifier, really.

Ida

Think of it like plugging a high-voltage wire into a crumbling circuit board.

Allan

Well, that's a good way to put it.

Ida

Right. It's not the electricity's fault that the house is catching on fire. The wiring was already shot. The technology is actually working perfectly, but it's exposing the fact that our human organizational structures are, frankly, incredibly fragile.

Allan

That is such a fascinating way to look at it. So welcome to The Deepdive. Today we are unpacking a huge stack of recent research, from BCG to the California Management Review, to explore why the AI revolution is hitting this massive human bottleneck.

Ida

It's a huge bottleneck.

Allan

It really is. So we're going to dissect the hilarious, sometimes terrifying taxes of bad leadership, and we'll discover why the most valuable professional skill in the future might simply be uh having good taste.

Ida

Which is a profound shift, honestly, in how we think about work. I mean, for the last two years, everyone has been just obsessively focused on the capability of the AI models themselves.

Allan

Yeah, exactly. Like how many parameters does it have?

Rework Eats Productivity Gains

Ida

Right. How fast is the generation? But the thing is, the bottleneck isn't the processing power of the computer anymore. The bottleneck is the processing power of the human being who's actually sitting in front of the screen.

Allan

Let's start right there, actually, with the immediate reality of AI implementation versus the hype we've all been sold. Because the prevailing narrative has been, you know, AI will instantly double your output. Right. But according to this recent future of work analysis, almost 50% of the productivity gains from AI are currently just being lost to rework.

Ida

50%. That's huge.

Allan

Half of it. People are generating this massive amount of text or code, but then they have to go back and painstakingly fix it or edit it or, you know, verify the AI's output. We are basically spending half our time just babysitting algorithms.

Ida

And that babysitting, it has a really profound physiological and psychological cost. So Boston Consulting Group recently completed this major study on this dynamic, and they identified a phenomenon they are officially calling AI brain fry.

Allan

Brain fry. I mean, it sounds like a novelty fast food menu item, but the data here is actually wild.

Ida

It's a very real cognitive state. BCG surveyed this massive pool of workers and found that 14% of them are actively suffering from it right now.

Allan

Wow. 14%. Yeah.

AI Brain Fry Explained

Ida

And they define brain fry as the specific mental fatigue that results from the excessive use of or interaction with AI tools well beyond your natural cognitive capacity.

Allan

And the downstream effects are pretty alarming, right?

Ida

Oh, definitely. People experiencing brain fry show 33% more decision fatigue, and there is a 39% spike in major errors in their actual work.

Allan

Okay, but here's the thing: is this just the new Zoom fatigue or something worse? Because it sounds like we gave everyone a jetpack, but totally forgot to teach them how to land. Are people just, you know, tired of looking at screens, or is there something mechanically different happening in the brain here?

Ida

It's a really critical distinction to make. The research is very clear that this is fundamentally different from traditional burnout.

Allan

How so?

Ida

Well, if you think about traditional workplace burnout, historically it's a measure of emotional and physical exhaustion. It comes from interpersonal conflict, managing difficult stakeholders, uh, emotional labor.

Allan

Right. Office politics, that kind of thing.

Ida

Exactly. Brain fry, on the other hand, is pure cognitive overload. It's literally a working memory problem. We are treating human brains like unlimited hard drives.

Allan

But what does that actually look like, like on a random Tuesday afternoon for a normal employee?

Ida

Okay, so imagine you ask a chatbot to generate a marketing strategy. In three seconds, it spits out a highly confident, incredibly dense 40-page document with five different strategic options.

Allan

Right, it's instantaneous.

Ida

Instantaneous. The machine did the work instantly, but now the human is forced into this rapid-fire, relentless decision-making loop. You have to read it.

Allan

You have to evaluate it.

Ida

Evaluate it, fact-check it, synthesize it. Your brain just wasn't built to process that volume of synthetic information that quickly. Eventually the human hard drive just crashes.

Allan

So if you're listening to this and you've spent the last three hours prompting a chatbot just to write a single strategy brief, you are exactly who BCG is talking about.

Ida

Yep. You've got brain fry.

Allan

And the irony of who is suffering the most from this is staggering to me. You'd assume it would be the software engineers, right? The coders who are deeply embedded in these systems all day.

Ida

You would think so.

Allan

But the BCG data shows the highest rates of brain fry are actually in human resources at 19.3% and marketing at nearly 26%.

Ida

Which logically follows, honestly, when you consider the nature of those jobs.

Allan

Really? How do you mean?

Ida

Well, those departments are dealing with massive, unstructured volumes of text, communication, and human synthesis. They are often pushed to the bleeding edge of adopting these generative tools to handle, like hiring pipelines or mass content creation.

Allan

Oh, that makes sense.

Ida

Yeah. They are essentially the canaries in the coal mine for what happens when a biological brain tries to match the cadence of a generative machine.

Who Gets Brain Fry Most

Allan

That individual overload is wild, but it's really just the first symptom, I think. Things get so much more interesting when you look at what happens when you drop this highly disruptive technology into a leadership system that was already fundamentally broken.

Ida

Oh, yes.

Allan

There's this brilliant framework developed by product leader Stephanie Liu about what she calls the taxes of bad leadership.

Ida

Yes. Her central thesis is that AI does not fix a broken leadership system. It merely accelerates it in the exact wrong direction.

Allan

Right.

Ida

A weak leadership structure has always exacted a tax on an organization. You know, it makes everything slower, more painful. But AI has basically added a massive multiplier to all of those existing taxes.

Allan

I was reading through her framework and her breakdown of the trust tax is incredibly revealing. This tax focuses on the fragile relationship between the CEO and their product leaders. Stephanie points out that executives are seeing their competitors prototyping new features at record speeds because of AI. So the CEO turns to their own product teams and asks, why is it taking us so long?

Ida

And if the foundational trust in that relationship is already low, that question doesn't land as genuine curiosity.

Allan

Not at all.

Ida

It lands as a direct threat. It lands as doubt.

Allan

The product leader instantly goes on the defensive?

Ida

Exactly. The whole mechanism here is just self-preservation. Instead of focusing on leading the team to build better products, the product leader is forced into these endless, dragged-out status update loops.

Allan

Just making slide decks all day.

Ida

Literally building slide decks. It really does.

Bad Leadership Taxes Multiply

Allan

So because generative AI has essentially lowered the barrier to entry for coding to zero, non-engineers are getting restless. People in sales or marketing are seeing a cool AI demo on social media, and instead of waiting for the engineering team, they just decide to independently prototype their own software features using AI.

Ida

Just going totally rogue.

Allan

Yes. That's vibe coding. They're just building rogue software based on vibes. I love that this exists, but also why? It's like an orchestra where the violins are playing Mozart, the brass section is playing jazz, and the conductor is just nodding politely. How does anything actually get built?

Ida

It doesn't. Or rather, a lot of disjointed things get built, but absolutely nothing cohesive survives. That is the essence of the alignment tax. Everyone is moving at 100 miles an hour, but nobody's moving together. Right. The CTO wants to rebuild the entire architecture. Marketing is spinning up rogue AI tools because they vibe coded a new dashboard, and the poor product leader is just caught in the middle. What this exposes is the massive difference between real alignment and fake alignment in corporate culture.

Allan

I've definitely been in meetings that had fake alignment.

Ida

We all have. Real alignment is extremely difficult. It involves genuinely uncomfortable, behind-closed-doors conflict. It's two leaders arguing over resources until they forge a single unified direction that they both actually commit to. Fake alignment is what usually happens when leaders want to avoid conflict entirely.

Allan

Right. Everyone's just being polite.

Ida

Exactly. Everyone nods in the meeting, says, great idea, but then everyone goes back to their desks and does exactly what they were going to do anyway. It leaves the teams underneath them completely guessing about what the actual priorities are.

Allan

And when you pour the accelerant of AI onto fake alignment, you get what the industry is now calling product slop. Teams are moving so fast, skipping crucial steps, just because generating code feels faster than actually researching a problem.

Ida

This really gets into the mechanics of how software is supposed to be built. Product managers often use a framework called the double diamond. So the first diamond, the left side, is the discovery phase. You are researching the market, talking to users, and asking the fundamental questions: who actually buys this? What problem are we even solving?

Allan

The important questions.

Ida

Right. Then the second diamond, the right side, is the delivery phase, where you actually write the code and build the thing.

Allan

But AI makes the right side of the diamond practically instantaneous.

Ida

Precisely. AI makes building so cheap and so fast that teams are completely skipping the left side of the diamond. They skip the discovery phase entirely. So you end up shipping products at lightning speed that absolutely nobody wants to buy. That is product slop.

Allan

When teams are churning out this product slop because of fake alignment, it inherently breeds a massive amount of insecurity among the people actually doing the work, which leads to this terrifying psychological toll, what Stephanie Liu calls the therapy tax.

Vibe Coding And Alignment Breakdown

Ida

Yeah, this is the hidden emotional labor cost of the whole AI transition. Product managers and knowledge workers right now are dealing with profound anxiety about their own relevance.

Allan

Yeah, I can imagine.

Ida

Imagine spending 10 years mastering a highly specific skill, like writing technical user stories, and suddenly an intern can do it in four seconds with a prompt. Their entire professional identity is shattering.

Allan

And the leaders are just stuck absorbing this grief in their weekly one-on-ones. One of the articles uses this amazing phrase, saying leaders feel like they are herding cats who think they're lions.

Ida

It's so accurate.

Allan

Wait, what? Seriously.

Ida

Yeah.

Allan

The employees are terrified, but also wildly overconfident because of the AI tools. If the leaders are paying a confidence tax and the team is paying a therapy tax, aren't we just automating our own professional midlife crises?

Ida

We are absolutely automating an existential crisis. Let's look at that confidence tax you just mentioned. This affects the leaders themselves. They are experiencing deep, deep uncertainty about what their role even is anymore. And when leaders lack internal stability and don't know what the future looks like, they default to the worst possible human behaviors. The mechanism here is risk aversion. When the company actually needs bold, decisive vision, insecure leaders freeze up.

Allan

They don't want to make the wrong call.

Ida

Exactly. They start deferring to whoever sounds the most certain in the room, even if that person is completely wrong.

Allan

So the system is emotionally fragile, structurally misaligned, and running at 100 miles an hour.

Ida

Which is why adding AI to this mix is like putting a massive high-performance jet engine into a car with a cracked chassis. It doesn't help the car win the race. The torque just rips the car apart much, much faster.

Allan

And we are seeing the structural fallout of that cracked chassis breaking apart in real time. Let's talk about the collapse of the middle. According to labor data, middle management is essentially evaporating.

Ida

It really is.

Product Slop From Skipping Discovery

Allan

Job postings for middle managers are down 42%. And Gartner predicts that 20% of organizations will eliminate more than half of their middle management roles in the near future, replacing those functions with AI.

Ida

This represents a massive hollowing out of the modern organization. The senior executive assumption is that, well, AI can handle the reporting, the scheduling, the status updates, all the administrative tasks they associate with middle management.

Allan

But that's not all they do.

Ida

Exactly. That fundamentally misunderstands what a good middle manager actually does. They aren't just bureaucrats, they are the translation layer. They provide the context, the coaching, the emotional steadiness, and the continuity between the grand vision of the CEO and the daily reality of the frontline worker.

Allan

But the senior leaders don't seem to realize they are losing that translation layer because they are using AI to create the appearance of leadership.

Ida

Yes.

Allan

They're using algorithms to generate these perfectly polished, empathetic-sounding company-wide emails or to instantly summarize hundred-page reports. They feel incredibly productive, but mechanically they're becoming more distant and disconnected from the reality on the ground than ever before, which leads directly to disasters like the Zillow Offers situation.

Ida

The Zillow case study is the perfect, terrifying example of what happens when you remove human context and rely entirely on algorithmic oversight.

Therapy Tax And Confidence Tax

Allan

For those of you listening who might not remember the details of this, Zillow had an algorithmic pricing model for buying and flipping homes. It was supposed to be completely automated. And it ended up overestimating home values so badly that the company had to take a $304 million write-down in a single quarter.

Ida

Ouch.

Allan

$304 million. And the underlying reason why is just fascinating. The algorithm was looking at historical data, but it couldn't understand the unprecedented real-world human context of the COVID-19 pandemic.

Ida

Exactly. Automated valuation models assume the future will look roughly like the past. When COVID hit, the housing market exhibited bizarre, emotionally driven human behaviors, panic buying, massive migrations, that the algorithm simply couldn't contextualize.

Allan

It didn't know there was a pandemic going on.

Ida

Right. It kept buying houses at inflated prices because it lacked the capacity to watch the evening news or understand human anxiety. It was an anomaly that any competent human analyst or middle manager would have caught immediately.

Allan

What does this say about us as a society? If we willingly rip out the middle managers, who are the glue that understands human context, and the top executives are hiding behind AI-generated summary emails, who is left in the building to catch the $300 million mistakes?

Ida

It creates a profound systemic vulnerability. We are confusing the ability to generate information with the ability to make a decision.

Allan

Oh, that's good.

Ida

AI generates infinite complexity and endless options. But leadership, true leadership, still requires a human being to look at the context, choose one direction, and own the consequences of that choice. More information floating in a broken, hollowed-out system is just creating hesitation, not conviction.

Allan

Okay, we've talked about brain fry, we've talked about the taxes of bad leadership, vibe coding, and $300 million algorithms gone completely off the rails. I feel like we need to find the silver lining here.

Ida

Let's find one.

Allan

Because the research actually suggests that AI might be the exact tool we need to train better human leaders.

Ida

It is a deeply ironic twist, but the data actually supports it. There is a fascinating study published recently in the California Management Review that looked at using AI not to replace workers, but as an executive coach for leaders.

Allan

Okay, interesting.

Ida

Yeah, they ran a comprehensive 12-week experiment comparing highly trained human executive coaches against AI coaching agents.

Middle Management Collapse And Zillow

Allan

And the results kind of blew my mind. The AI coaching agents improved the leaders' cognitive flexibility by 28%. And this is the crazy part. They reduced implicit bias by 35%. The AI significantly outperformed the human experts. But how? How is a chatbot better at teaching leadership and reducing bias than a human being?

Ida

It comes down to a psychological mechanism the researchers coined: algorithmic humility.

Allan

Algorithmic humility.

Ida

The mechanism is actually rooted in human biology. When a human coach gives you critical feedback, your brain's social threat response naturally activates. You get defensive.

Allan

Oh, sure. Nobody likes being criticized.

Ida

Right. The human coach knows this. So they naturally deploy empathy. They might soften the blow of the critique or subtly validate your excuses to protect the relationship, but an AI simply does not care about your feelings.

Allan

It really doesn't.

Ida

It does not possess empathy. And crucially, your brain knows it's talking to a machine, so the social threat response doesn't trigger in the same way. The AI is relentlessly honest. It just holds up a mirror of unfiltered objective data. It forces the leader to confront their blind spots and their biases logically without the comfortable cushion of human empathy to protect their ego.

Allan

This is simultaneously impressive and completely ridiculous. Wait, it gets better. We are literally using emotionless robots to teach human managers how to be better humans, specifically because the robots don't politely coddle their egos.

Ida

It is absurd, but it is highly effective. And it points toward what the midterm future of leadership actually looks like. The leaders who survive and thrive in this new era won't be the ones with the deepest functional domain expertise. They won't be the fastest coders or the most technically proficient marketers.

Allan

Who will they be then?

Ida

They will have to evolve into orchestrators. Their primary job will be ensuring the best possible collaboration between human employees and generative AI.

Allan

Which completely shifts how we define valuable human work. There's an analysis from Workday that highlighted this beautifully. They asked the fundamental question: what is the next artisan in the AI era?

Ida

Yeah.

Allan

In a world where a machine can predict the next word perfectly, or write a flawless block of code, or generate a 50-page marketing strategy in seconds, what is left for us to do? And the answer they came to is taste.

Ida

Taste and judgment. It is a profound realization. When the execution of a task becomes basically free and instantaneous, the value no longer lies in the execution. The value lies entirely in deciding what to execute. Right. Is this good? Is this right for our brand? Does this solve the actual messy human problem we are facing?

Allan

It's like having a million master painters standing behind you who can perfectly execute any brushstroke you ask for.

Ida

Yeah.

Allan

The value isn't knowing how to mix the paint or hold the brush anymore. The value is knowing what the painting should actually look like.

Ida

Exactly. The human advantage isn't coding or processing data or generating reports anymore. The human advantage is emotional steadiness. It's the ability to navigate ambiguity, to build actual trust instead of fake alignment, and to have refined taste. AI actually helps us by stripping away our biases and taking over the administrative burdens so we can finally focus entirely on those uniquely human traits.

AI Coaching Algorithmic Humility

Allan

That is a surprisingly hopeful place to land. Let's wrap this up. We started this deep dive looking at a corporate landscape that felt like an AI-induced fever dream: sales teams vibe coding, HR departments suffering from brain fry, and the complete hollowing out of middle management.

Ida

It's a lot.

Allan

It is a lot. Yeah. But what we've discovered is that AI is incredibly powerful, yes, but ultimately it functions as a mirror. It is just reflecting and accelerating our own organizational and leadership flaws.

Ida

It exposes the cracks we've been ignoring for decades. The fake alignment, the lack of trust, the fear of conflict.

Allan

The technology isn't the problem, and it isn't the savior. To fix the AI strategy, we first have to fix the human strategy. We have to stop paying the taxes of bad leadership, lean into algorithmic humility, and focus on cultivating the uniquely human skills of taste, judgment, and connection.

Ida

It's a powerful reminder that no matter how advanced our computational tools become, the foundation of any successful endeavor is still deeply unavoidably human.

Taste Judgment And The Final Question

Allan

But as we leave you to ponder all of this, I want to leave you with one final lingering thought. We just talked about how AI is proving to be relentlessly honest. It's highly effective at inducing algorithmic humility by pointing out the biases of middle managers and product leads because it strips away the ego and just looks at the data. Well, if it's that good at evaluating performance without bias, how long until corporate boards of directors start using AI to evaluate the CEO's performance? Right. And when that happens, when the machine turns its unfiltered gaze to the very top of the food chain, will the C-suite be ready for the raw truth from an algorithm that doesn't care about their title? Or will those beautifully written, robot-generated, empathetic emails suddenly sound a lot like an automated pink slip? Thanks for joining us on this deep dive. We'll see you next time.