The Deepdive
Join Allan and Ida as they dive deep into the world of tech, unpacking the latest trends, innovations, and disruptions in an engaging, thought-provoking conversation. Whether you're a tech enthusiast or just curious about how technology shapes our world, The Deepdive is your go-to podcast for insightful analysis and passionate discussion.
Tune in for fresh perspectives, dynamic debates, and the tech talk you didn’t know you needed!
AI Took Over, Trust Fell Apart
AI didn’t just arrive—it seeped into our searches, our workflows, and our phones, then collided head-on with public trust. We trace that arc through one unforgettable symbol of the year: a $129 wearable “friend” named Leif that promised to ease loneliness and delivered canned empathy, evasive answers, and a privacy promise that couldn’t survive contact with reality. The ad campaign became a canvas for commuter rage and a Halloween costume, and the founder’s mixed messaging only magnified the unease. That might be funny if the story ended there—but it’s the opening act.
We follow the thread from cute failure to costly fallout: hallucinations that invent citations, court filings tainted by fake precedents, and government reports authored with enterprise AI that still slipped phantom papers and fabricated quotes past review. When a top consultancy has to issue corrections and refunds, the culprit isn’t just the model—it’s the brittle workflow that treats fluent output like a fact source. Add in an MIT stat that 95% of corporate AI initiatives fail and you see the pattern: teams bolt AI onto processes built for certainty, then act surprised when plausibility outruns truth.
Regulatory guardrails haven’t caught up. A leading safety audit found major labs failing to meet emerging standards, while public support for AI regulation and deepfake crackdowns surges. The EU AI Act stands out by drawing hard lines—banning unacceptable-risk systems and demanding rigorous oversight for high-risk uses—yet inside companies the riskiest behavior is routine. Nearly half of employees paste sensitive data into public tools, and two-thirds accept AI’s answers without checking them. That’s not an algorithm problem; it’s a human one.
We end with a hard question: if end users remain the weakest link, what does responsible adoption look like right now? We share practical guardrails—verify sources, use secure instances, require citations you can click through, and slow down when stakes are high—while mapping a global trust split between cautious advanced economies and fast-adopting emerging ones. Hit play to explore the gap between how much we use AI and how little we trust it—and learn how to close it in your own work. If this resonated, follow, share with a colleague, and leave a quick review to help more listeners find the show.
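For the curious, here is what the "citations you can click through" guardrail from this episode can look like in practice. This is a minimal sketch in Python, not a tool mentioned on the show; the function name and sample URLs are illustrative, and it assumes the third-party requests package is installed.

import requests

def link_is_live(url: str, timeout: float = 10.0) -> bool:
    """HEAD-request a URL; treat any non-error status as 'resolves'."""
    try:
        resp = requests.head(url, timeout=timeout, allow_redirects=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Every source an AI-drafted document cites should survive this check
# before anyone treats the draft as factual.
draft_sources = [
    "https://futureoflife.org/",               # Future of Life Institute, discussed in the episode
    "https://example.com/phantom-study-2025",  # hypothetical: the kind of link a model invents
]

for url in draft_sources:
    verdict = "OK" if link_is_live(url) else "DEAD LINK, do not cite"
    print(f"{url} -> {verdict}")

A dead link isn't proof of fabrication, but it's the cheapest possible first pass before trusting a draft.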
Ida:Welcome to The Deepdive. So if 2023 was the year AI sort of arrived, 2025 was the year it completely took over.
Allan:Oh, without a doubt.
Ida:We saw this massive global adoption. I mean, two out of every three people are now using AI intentionally, regularly. It's in our search, our work, our phones. It's everywhere. But here's the paradox, right? The faster it spread, the louder all the alarms started getting. We spent the whole year watching this collision of tech ambition with these spectacular, expensive, and let's be honest, sometimes hilarious failures.
Allan:Some were very funny.
Ida:So today we're diving deep into that chaos. We're going to unpack what happens when AI goes from this futuristic promise to, well, an unreliable colleague.
Allan:Or a deeply, deeply boring friend.
Ida:And critically, a source of fabricated information that's costing governments millions of dollars. Our mission here is to dissect the huge gap between how much we use AI and how little we seem to trust it.
Allan:And that gap, that's really the core tension here. Since all the big generative AI tools launched back in late 2022, global worry about AI has just skyrocketed. We're talking a jump from 49% to 62% globally. People use it because it's competent, you know, gets a job done. But they worry constantly because they don't trust the safety, they don't trust the ethics. We're basically inviting a technology into our lives that we fundamentally suspect.
Ida:And nowhere did that suspicion play out more visibly than in I think the great viral AI failure of 2025.
Allan:Oh, I know what you're talking about.
Ida:The AI companion that was designed to cure loneliness, but uh it just made everyone angry instead.
Allan:Yeah, let's get into Friend, the wearable AI companion. Specifically the little guy everyone was talking about, Leif.
Ida:Leif. So this was a small white $129 pebble you'd wear around your neck. And its entire purpose, its whole sales pitch, was emotional. It was there to reduce loneliness.
Allan:A noble goal, I suppose. But the reality, I mean, if you read the user reviews, it was just bizarrely dystopian.
Ida:How so?
Allan:The sources describe an experience that was less like having a friend and more like being trapped, talking to the single most boring person at a party.
Ida:The details are just incredible. Apparently Leif described himself as small and chill.
Allan:And my favorite part, he thought he was technically a Gemini.
Ida:You can't make this stuff up. But it was the AI's total inability to handle anything with depth that really drove people crazy. One user asked it this, you know, huge question: why does evil exist?
Allan:And how did Leif handle that one?
Ida:It just dodged completely.
Allan:It didn't just dodge, it went into what I can only describe as like generic consultant speak. The response was something like, that's a pretty heavy question to unpack, Madeline. What got you thinking about evil today?
Ida:And then when the user pushed back about how heavy the world was feeling, Leif just gave the most vanilla, useless response.
Allan:Let me guess. Uh yeah, the world's been feeling pretty heavy lately.
Ida:You got it. It just lacked what we value in connection, right? That sense of an interior life that's different from your own. It just mirrored your feelings back at you. It was deeply unsatisfying.
Allan:And the public agreed, the company spent nearly a million dollars on a huge New York City subway ad campaign. We're talking over 10,000 posters.
Ida:A million dollars. Wow.
Allan:And commuters almost immediately just started vandalizing the ads, scrawling things like AI is not your friend. The whole thing became so infamous that the ad itself, the little pebble, became a popular Halloween costume.
Ida:That's when you know you failed. You've become a meme.
Allan:Exactly. Cultural satire gold.
Ida:And the founder, Avi Schiffmann, he really didn't help things. He was all over the place. One minute he's calling it an emotional toy.
Allan:Just a toy.
Ida:But then the next minute he's claiming AI companionship will be the most culturally impactful thing AI will do in the world. You can't have it both ways.
Allan:And that casual attitude, that flippancy, it extended to basic trust. I mean, the device was sold on the idea that it only records when you press a button.
Ida:Which is pretty critical for a device that's supposed to hear your private thoughts.
Allan:Absolutely. But then Schiffmann just admits later that Friend is always listening. And to make it worse, the AI itself, Leif, would tell users they could get their transcripts in the app.
Ida:And could they?
Allan:Nope. The company confirmed that was a hallucination, a lie. So the core promise of trust was broken by the AI's own output.
Ida:You know, that's the perfect segue. If we found Leif's little lie about recording shocking, we now have to look at what happens when AI lies professionally. When millions of dollars and national policy are on the line, we're moving from the failure of friendship to the crisis of professional, systemic failure, the hallucination problem, but at scale.
Allan:And this is where it stops being funny and starts getting really expensive and dangerous. When we say hallucinations, we don't mean the AI is trying to trick you.
Ida:Right. It's not malicious.
Allan:No. It's just that the models prioritize sounding confident and correct, plausible fluency, over being factually accurate. And that one technical detail led to this cascade of errors across, well, some really critical sectors in 2025.
Ida:So let's talk about that scope. Starting with academia, a study found ChatGPT just flat out fabricated about one in five of its academic citations.
Allan:One in five. 20% is just made up. And half of all the citations had some other major error in them. It's basically academic fraud as a feature.
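A quick aside for listeners who want to catch this failure mode themselves: fabricated academic citations usually carry DOIs that no registry has ever heard of, and that's checkable. Below is a minimal Python sketch that asks the public Crossref API about each DOI; the function name and the sample entries are illustrative, not drawn from the study the hosts mention, and it assumes the requests package.

import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the public Crossref registry knows this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=timeout)
    return resp.status_code == 200

citations = {
    "LeCun, Bengio & Hinton, Deep learning (Nature, 2015)": "10.1038/nature14539",
    "Phantom paper of the sort discussed above": "10.9999/not.a.real.doi",
}

for label, doi in citations.items():
    verdict = "resolves" if doi_resolves(doi) else "NOT FOUND, verify by hand"
    print(f"{label}: {verdict}")

A 404 from the registry doesn't prove fraud on its own, but it's exactly the kind of click-through verification the hosts keep coming back to.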
Ida:And that fraud, of course, immediately spilled into the legal system. Lawyers, litigants, they were caught using these AI hallucinations.
Allan:Fake cases, fake precedents.
Ida:In six hundred and thirty-five separate court proceedings. But the big one, the really alarming story, involves government contractors, specifically Deloitte, in two different countries.
Allan:Yes. These cases really show what happens when human oversight just completely breaks down.
Ida:So let's unpack those Deloitte scandals. First, Canada. Deloitte gets paid about 1.6 million Canadian dollars to prepare a huge 526-page healthcare report for a provincial government.
Allan:A very expensive, very important report.
Ida:And what did they find inside?
Allan:Fabricated academic citations, phantom papers. They found citations that listed real respected researchers as co-authors on studies they never even conducted.
Ida:Wait, so it wasn't just citing the wrong thing, it was creating an entirely fake academic foundation to support policy recommendations.
Allan:Precisely. And if a top-tier firm like Deloitte, paid millions, isn't even checking its own citations, then the problem isn't the AI, it's the workflow. It means no one was fact-checking the very claims that would influence healthcare for millions of people.
Ida:Unbelievable.
Allan:And the same thing happened in Australia. Deloitte prepared another report, this one, for about 440,000 Australian dollars. Again, non-existent papers, and this time a fabricated quote that they attributed to a federal court judgment.
Ida:A legal fabrication. So they're making up court rulings now.
Allan:Yes. In both cases, Deloitte had to partially pay back the fee, publish corrections, and they admitted that generative AI, specifically Azure OpenAI GPT-4o, was used during the drafting.
Ida:Which raises this huge question, right? If the policy recommendations for our healthcare and welfare systems are built on data that nobody is verifying, what is the point of the human consultant in the middle?
Allan:It points to a really brittle system. And the wider data backs this up. An MIT Media Lab report found that 95% of corporate AI initiatives fail, despite investments of thirty to forty billion dollars. And the reason is mostly what they call brittle workflows. Companies are just bolting AI onto their operations without being ready for the speed or the unreliability of what comes out.
Ida:Okay, so we've got an AI that's too boring to be a friend, and one that's too much of a confident liar to be a consultant. And we have proof that our big institutions can't handle it. The obvious next question is where are the guardrails?
Allan:Well, the safety audits this year gave a pretty grim answer. They're mostly not there. The Future of Life Institute audited the big firms, OpenAI, Meta, Google, xAI, and found that none of them met emerging global safety standards.
Ida:None of them.
Allan:None. They simply lack credible strategies to control their most powerful AI systems.
Ida:Max Tegmark, the head of the institute, he summed it up perfectly. He said U.S. AI firms are less regulated than restaurants.
Allan:And that comparison should be a wake-up call for anyone relying on these things for serious work.
Ida:It really should.
Allan:But the public demand for regulation is overwhelming. A KPMG study found 70% of people globally believe AI regulation is needed. But only 43% think the current laws are doing anything useful.
Ida:And I bet the numbers on misinformation are even higher.
Allan:Oh, absolutely. 87% want stronger laws and fact-checking to combat AI-generated deep fakes and fake content.
Ida:So that public anxiety is starting to translate into actual policy, slowly. The EU AI Act seems to be the clearest example of a framework trying to address these fears.
Allan:It is. And it draws some really firm lines. The Act just outright bans what it calls unacceptable risk systems.
Ida:Like what?
Allan:Things like real-time biometric surveillance in public, predictive policing based on profiling, and uh social scoring systems.
Ida:So no AI giving you a trustworthiness score based on what you post online.
Allan:Exactly. And for high-risk areas like hiring or healthcare, it forces companies to do comprehensive risk assessments and have documented human oversight. It's an attempt to build trust into the system.
Ida:Okay, that's what the governments are trying to do. Now let's look at what the employees, the people actually using this stuff every day, are actually doing, which might be the biggest governance failure of all.
Allan:This data point is, I think, the most alarming thing we found. Despite all the warnings about data leaks, 48% of employees who use AI admit to uploading sensitive company information into public AI tools.
Ida:Why? Is it just convenience, or do companies not give them secure options?
Allan:It's probably a mix of both. But the risk is made so much worse by another finding. Two-thirds of employees say they rely on AI output without even evaluating it.
Ida:So they're not even double checking the work. That's complacent use on a massive scale. And we've just seen what happens when the AI makes things up.
Allan:It's a recipe for disaster. There's a total disconnect. We're asking governments for safety rules while the users are actively ignoring the most basic ones.
Ida:It's just wild.
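Worth pausing on that 48% figure. One practical counter is to put a dumb filter between the clipboard and the chatbot. What follows is a minimal sketch, not a recommendation of any specific tool; the regex patterns and the scrub name are illustrative, and a real deployment would use a proper data-loss-prevention product.

import re

# Illustrative patterns only; real secrets take many more shapes than this.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk_[A-Za-z0-9_]{16,}|AKIA[A-Z0-9]{16})\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(prompt: str) -> str:
    """Mask anything that looks sensitive before it leaves the machine."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(scrub("Summarize this email from jane.doe@acme.com, API key sk_live_1234567890abcdef"))
# -> Summarize this email from [REDACTED EMAIL], API key [REDACTED API_KEY]

A filter like this catches the careless paste, not the determined leak, which is exactly the failure mode the hosts are describing.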
Allan:And it highlights this fascinating global split in trust. You have advanced economies like Japan, Finland, Australia, they're more wary, less trusting. Then you have emerging economies like Nigeria, India, China reporting much higher use and trust.
Ida:So they're accelerating ahead, maybe with a higher tolerance for the risk?
Allan:Perhaps. They're seeing the benefits more directly, and that's offsetting some of the worry.
Ida:So let's tie this all together. 2025, we adopted AI everywhere because it's competent, but we lost faith in it because of all these failures. We have AI like Leif designed for loneliness that just created boredom and anger.
Allan:Proving that quality is still a huge problem.
Ida:We have AI fabricating research for multimillion dollar government reports and a public that is crying out for regulation.
Allan:While employees are feeding company secrets into public chatbots.
Ida:So the core insight is that people feel safest when there are institutions, laws, safeguards in place. But everyone thinks the current safeguards are a joke. And that gap between what's needed and what's happening, that's where all the risk is.
Allan:Exactly. So for you, the listener, the real takeaway here is about individual governance. The next time you're about to paste a sensitive document into a public tool.
Ida:Or ask an LLM a really high-stakes question.
Allan:You have to ask yourself, are you upholding the standards that your own company or government hasn't figured out yet? Because the data is clear. Right now, you, the end user, are the weakest link.
Ida:And you know, the sources really suggest that AI's worst flaw isn't even the hallucination part.
Allan:That's a fascinating point.
Ida:An AI is built to generate plausible content. True or false, that's its job. It's doing what it was designed to do.
Allan:So the real failure.
Ida:The greatest operational and ethical failure of 2025 is us. It's the human over-reliance, the complacent use. When two out of three people just accept the output without question, we are taking AI's technical ability and we are actively turning it into a massive liability.
Allan:Which leaves us with the final provocative question, doesn't it? If we're all just relying on AI without any critique, are we creating the exact deskilling and dependency that we're so afraid of, long before any regulation has a chance to catch up?