The Deepdive

The AI Art Dilemma: Who Owns What?

Allan & Ida Season 2 Episode 13


The meteoric rise of artificial intelligence isn't just transforming how we work and play—it's fundamentally reshaping our understanding of creativity, ownership, and value. With AI projected to add a staggering $13 trillion to the global economy by 2030, the question of who owns what in this new frontier has become increasingly urgent and complex.

Our deep dive explores the fascinating intersection where cutting-edge technology meets centuries-old legal principles. We unpack the current legal consensus that AI cannot be considered an "author" under copyright law, while examining the nuanced exceptions when humans guide AI creation. The monkey selfie case provides a surprising precedent for understanding these limitations, highlighting how our legal frameworks still center human creativity despite technological advancement.

Training data is perhaps the most contentious battleground in AI intellectual property. When AI systems learn from millions of copyrighted works without permission, do they infringe on creators' rights? Recent landmark cases suggest a significant shift in legal thinking, including Anthropic's unprecedented $1.5 billion settlement with authors, the largest publicly reported copyright recovery in history. We examine how courts are increasingly distinguishing between properly licensed materials and unauthorized "scraping" of content.

Beyond legal technicalities, we confront the real-world impact on creative professionals. New research reveals that when AI-generated content enters marketplaces, human participation drops significantly, raising profound questions about the future of creative careers. Meanwhile, regulatory frameworks like the EU AI Act are emerging to address these challenges through transparency requirements and content labeling.

Whether you're a creator concerned about your rights, a technology enthusiast excited about AI's potential, or simply curious about how these innovations will shape our cultural landscape, this exploration provides essential context for understanding one of the most significant intellectual property challenges of our time. Join us as we navigate this complex terrain and consider what it means for the future of human creativity in the age of artificial intelligence.

Leave your thoughts in the comments and subscribe for more tech updates and reviews.

Ida:

All right, let's unpack this. Artificial intelligence. It's not just a buzzword anymore, is it? It's everywhere.

Allan:

Everywhere.

Ida:

Making our lives easier, sometimes funnier, and well, increasingly it's making art.

Allan:

Right.

Ida:

But here's where it gets really interesting and, frankly, a bit bewildering. Who actually owns that art? And, you know, who owns the data it's trained on in the first place?

Allan:

That's the billion-dollar question. Or maybe trillion.

Ida:

Exactly. Today we're diving deep into the fascinating and often contentious world where AI meets intellectual property. You've sent us a stack of sources, and our mission, basically, is to cut through the jargon.

Allan:

Yeah, there's a lot of it.

Ida:

And navigate this legal and ethical tightrope walk that comes with this revolutionary tech.

Allan:

And what's fascinating here, I think, is that AI isn't just expanding, it's genuinely creating entirely new categories of property.

Ida:

Right.

Allan:

And challenging legal concepts that have been around for, well, centuries.

Ida:

Yeah.

Allan:

We're talking about an industry projected to add a staggering $13 trillion to the global economy by 2030. $13 trillion? Wow. Yeah. So naturally, when there's that much value at stake, the question of who gets what becomes absolutely critical and surprisingly complex.

Ida:

Absolutely, and it's not just for the tech giants or big corporations, right? This affects everyone. Totally. Independent artists, musicians, coders, designers, even you, the curious listener, as you navigate this new digital landscape. We're here to give you the core basics, what you need to understand about how creations are being protected, or maybe not protected, in the age of AI. Speaking of this revolutionary tech, the sheer scale of the AI boom is staggering, isn't it?

Allan:

It really is.

Ida:

Our sources show global investment in AI startups just exploded like from $1.3 billion in 2010.

Allan:

Peanuts almost compared to now.

Ida:

To over $40 billion in 2018. And the patents the number of AI-related patent applications.

Allan:

Oh yeah, the patent offices are busy.

Ida:

The US Patent and Trademark Office has published over 27,000 since 2017. And get this: 16,000 of those flooded in during just the last 18 months.

Allan:

Wow, that's acceleration.

Ida:

That's a lot of innovation.

Allan:

Indeed, and it's not just about AI as an invention itself. AI is also becoming an incredibly powerful tool for IP development.

Ida:

OK, explain that bit.

Allan:

A tool? Yeah, think about it. Pharmaceutical companies are using AI in drug discovery, speeding up processes that used to take, you know, decades.

Ida:

Right, finding new molecules and things.

Allan:

Exactly. Advertisers are leveraging AI to create content at lightning speed. So this means AI is generating valuable new outputs, drugs, campaigns, whatever, and even making incremental improvements to its own algorithms. All potentially valuable intellectual property.

Ida:

It sounds like the ultimate creative assistant then.

Allan:

In some ways yeah.

Ida:

But here's the rub: if AI is generating so much value, what does this mean for ownership, especially when, as we know, intangibles like IP represented, what, 84% of S&P 500 company value back in 2018?

Allan:

A huge chunk.

Ida:

It sounds like we're in a bit of a legal wild west out there.

Allan:

You've really hit on the core challenge there. How do we protect that value when the legal landscape is, well, still very much evolving? Still catching up. Definitely. Bodies like WIPO, that's the World Intellectual Property Organization, the European Patent Office, the EPO, the USPTO here in the US, the US Copyright Office, they're all actively examining these fundamental issues.

Ida:

Like what specifically?

Allan:

Well, questions of AI inventorship: can an AI invent something? What's actually eligible for a patent anymore? Data privacy issues, which are huge. And, of course, AI-related copyright. It's an area ripe for new strategies and, as we'll definitely see, a lot of legal debate.

Ida:

Okay, that discussion about protecting value raises, I think, a fascinating question about the source of that value itself. If AI is creating so much, can it truly be called an artist?

Allan:

The artist question.

Ida:

Yeah, and if so, who owns that creation? Because, let's be honest, we've all seen those incredible AI generated images, or maybe heard AI composed music. Who's the actual author?

Allan:

Well, the general consensus, particularly in the US, is a pretty firm no. No, really? Yeah. The US Copyright Office has explicitly stated that works not created by a human author are simply not eligible for copyright protection.

Ida:

Okay.

Allan:

They even point to cases like the now infamous monkey selfie.

Ida:

Oh, the monkey selfie.

Allan:

That photo taken by a monkey, to illustrate that non-humans cannot be copyright holders. The core principle is human authorship, based on original works of authorship. It needs a human.

Ida:

The monkey selfie. That's a classic. So okay, if a robot paints a masterpiece, it's just public domain, free for all.

Allan:

Pretty much yeah.

Ida:

No credit for the silicon Picasso. Doesn't that feel a bit old-fashioned, especially with how sophisticated AI is getting? Are we clinging to an outdated definition of creator?

Allan:

Well, that's the debate, isn't it? But legally, that's precisely the position, at least in the US right now. However, it gets nuanced very quickly.

Ida:

There's always nuance.

Allan:

Always. The US Copyright Office has clarified that if a human exercises sufficient originality in selecting the inputs or maybe editing the AI's output, then the human-driven creative expression in that final work can be copyrighted.

Ida:

So the human touch matters.

Allan:

It seems so. For example, there was a piece titled A Single Piece of American Cheese.

Ida:

Okay, catchy.

Allan:

It became the first visual artwork composed, they say, solely of AI outputs to actually receive a copyright. That was back in January 2025.

Ida:

How did that work?

Allan:

It was based on the human selection, arrangement, and coordination involved in the creative process. Not the AI's autonomous generation, but the human curation, if you like.

Ida:

So it's not the AI's creation itself, but the human's guidance of the AI that counts. Like the AI is just a really fancy paintbrush.

Allan:

That's one way to view it. Yeah, a very, very fancy paintbrush. But the Copyright Office still maintains that works where the expressive elements are determined by a machine remain uncopyrightable.

Ida:

Okay, so it's a fine line.

Allan:

A very fine line, and it mirrors the patent side too. The USPTO similarly codified restrictions in February 2024, confirming human inventors must always be named. That followed the Thaler rulings: Stephen Thaler's AI program, DABUS, was denied inventorship, and his bid for AI authorship was rejected in Thaler v. Perlmutter in August 2023.

Ida:

So AI as the inventor, that didn't happen.

Allan:

Didn't happen in the US. But, interestingly, other places are, well, painting in different shades of gray.

Ida:

Like where.

Allan:

The UK's Copyright, Designs and Patents Act from way back in 1988 says the author of a computer-generated work is the person making the arrangements necessary for the creation. A bit broader perhaps.

Ida:

Hmm, arrangements necessary.

Allan:

And China's Beijing Internet Court actually recognized copyright and AI-generated images back in November 2023.

Ida:

Really.

Allan:

So it's truly a global patchwork. Right now, everyone's figuring it out.

Ida:

Speaking of what AIs can create, that brings us to how they even learn to do that. It's not magic.

Allan:

Definitely not magic.

Ida:

They're fed massive amounts of data and a lot of that data. Well, it's copyrighted stuff, human created stuff.

Allan:

The training data issue.

Ida:

This is where the lawsuits really start flying right.

Allan:

Absolutely. This is a huge area of conflict. Deep learning models basically, they scrape enormous amounts of media from the internet.

Ida:

Scrape? Sounds illicit.

Allan:

Well, it means automatically collecting it. Think of it like a super-fast librarian scanning millions of books and images, but instead of reading, it's converting text and visuals into numbers, like a unique digital fingerprint for each piece, just to identify patterns.
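To make that text-to-numbers step concrete, here is a minimal, hypothetical Python sketch, not any particular company's pipeline: it turns a sample string, standing in for scraped content, into a hash "fingerprint" and a crude word-count vector. Real training systems use learned embeddings rather than word counts, but the basic move, copying content and converting it into numbers so patterns can be found, is the same.

```python
# Purely illustrative sketch of converting text into numbers.
# Real pipelines use learned embeddings; this only shows the principle.
import hashlib
import re
from collections import Counter

def fingerprint(text: str) -> str:
    """Return a short hex digest identifying this exact piece of text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:16]

def to_vector(text: str) -> Counter:
    """Convert text into a crude numeric representation (word counts)."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

sample = "The quick brown fox jumps over the lazy dog."  # stand-in for scraped content
print(fingerprint(sample))  # e.g. a 16-character hex string
print(to_vector(sample))    # e.g. Counter({'the': 2, 'quick': 1, ...})
```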

Ida:

Okay.

Allan:

But that process necessarily involves making copies of copyrighted works, millions, maybe billions of copies.

Ida:

Right. Copying is usually a no-no in copyright.

Allan:

Exactly. So it raises the fundamental legal question: does this infringe the copyright holder's exclusive right to reproduce their work, or does it fall under fair use exceptions?

Ida:

Fair use. Ah, that's the big legal defense we hear about, isn't it? Like if you use a tiny bit of something for a review, it's okay. Does that logic apply here?

Allan:

It's a lot more complex than that. It's a four-factor legal defense in the US.

Ida:

Yeah.

Allan:

Definitely not a simple loophole.

Ida:

Okay, four factors.

Allan:

Yeah, and traditionally AI developers argued that training AI models is fair use because it's transformative, meaning it changes the original work into something new rather than just copying it, and it's limited in how it uses the content.

Ida:

Makes sense kind of.

Allan:

Some compared it to cases like Google Books. Remember that? Yeah. Scanning copyrighted books was found to be fair use.

Ida:

I do remember that yeah.

Allan:

But critics are really pushing back now, saying hold on, this is fundamentally different. Judge Vince Chhabria, in a case involving Meta and OpenAI, put it quite bluntly. He basically said you have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products.

Ida:

Competing products yeah.

Allan:

You are dramatically changing the market for that person's work and you're saying that you don't even have to pay a license to that person. I just don't understand how that could be fair use.

Ida:

Wow, that's a powerful statement from a judge, kind of a mic drop moment in court.

Allan:

It really resonated.

Ida:

Does that sentiment reflect a broader legal shift we're seeing, because that sounds like a game changer and we have seen some big legal fireworks lately.

Allan:

It certainly seems to, yes. In February 2025, a federal court sided with Thomson Reuters against Ross Intelligence. The court ruled that Ross's AI, it wasn't even generative AI, its use of Westlaw headnotes was not fair use.

Ida:

Why? Because it was building a directly competing product?

Allan:

Directly competing. That seems key, very key. And even more significantly, in August 2025, the AI company Anthropic agreed to pay, wait for it, $1.5 billion to settle a class action lawsuit with authors. $1.5 billion? Billion, with a B. The largest publicly reported copyright recovery in history.

Ida:

Whoa. What was the core issue there?

Allan:

Well, the judge basically affirmed that using legally purchased books for training was fair use. Okay, fine. But using unlicensed works from data sets like the Pile, which is this huge, messy collection of internet text, much of it scraped without clear permission?

Ida:

The scraping again.

Allan:

That was not fair use and that led to that massive settlement.

Ida:

Wow. So buying books, getting licenses, fine. Scraping from shadow libraries or these massive unlicensed data sets, big, big no-no.

Allan:

Got it. That seems to be the emerging line.

Ida:

Yeah, this settlement, $1.5 billion, that's huge. You know, some people might look at that and say if every image, every text snippet had to be compensated, wouldn't that just bankrupt AI companies?

Allan:

This is the argument from the developers. Yeah.

Ida:

While others might argue the value of any single image in a giant data set is, like, fractions of a penny, negligible.

Allan:

Right the scale argument.

Ida:

But clearly that anthropic settlement indicates a significant shift, at least in that case. So what's the scene like outside the US? Are they all grappling with this too?

Allan:

Oh, absolutely. Different approaches, but definitely grappling. In the EU they have the 2019 Digital Single Market Directive. It provides text and data mining, TDM, exceptions to copyright infringement. A TDM exception? Yes. And the EU AI Act, adopted in 2024, clarified that this covers AI data collection, but copyright holders can opt out.

Ida:

They can say no, don't train on my stuff.

Allan:

Potentially, yes. And, crucially, providers of general purpose AI models will need to publish detailed summaries of their training content by August 2025. Transparency? Ah, transparency. That's interesting. The UK has proposed something similar. India, on the other hand, currently lacks specific provisions, which is leading to ongoing legal battles, like in the ANI v OpenAI case.

Ida:

Right, so different paths, same destination, maybe.

Allan:

Or maybe different destinations. It really does sound like everyone's trying to figure out how to dance this legal tango. It's complicated.

Ida:

Okay, so the input side, the training data, clearly contentious, huge legal battles. But what about the output?

Allan:

The stuff the AI makes.

Ida:

Yeah. If an AI is fed all this copyrighted material, what happens if it then generates something that looks, well, suspiciously familiar? Like, it basically spits out a near copy of something it saw during training.

Allan:

Yeah, that can happen. It's a phenomenon called memorization or sometimes overfitting.

Ida:

Memorization.

Allan:

Essentially, the AI has learned the training data too well, almost like a student who can perfectly recite a textbook page but doesn't really understand the concepts.

Ida:

Right. Rote learning.

Allan:

Exactly. While AI developers generally consider it an undesired outcome, they want generalization, not just copying, these deep learning models can replicate items pretty closely from their training set.

Ida:

And that's a legal risk.

Allan:

A significant copyright risk, yes. Because under US law, to prove infringement, a plaintiff needs to show their work was actually copied and that the AI output is substantially similar. Memorization could tick both boxes.
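As a rough, hypothetical illustration of how memorized output might be flagged, here is a small Python sketch that compares a generated string against known training snippets using a standard sequence-similarity measure. The snippet texts and the 0.8 threshold are made up for the example, and an actual finding of "substantial similarity" is a legal judgment, not a similarity score.

```python
# Toy heuristic for spotting near-copies of training text in generated output.
# This is not a legal test, just an illustration of the memorization idea.
from difflib import SequenceMatcher

TRAINING_SNIPPETS = [
    "It was the best of times, it was the worst of times.",
    "Call me Ishmael. Some years ago, never mind how long precisely.",
]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_memorization(generated: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return training snippets the generated text closely resembles."""
    hits = []
    for snippet in TRAINING_SNIPPETS:
        score = similarity(generated, snippet)
        if score >= threshold:
            hits.append((snippet, round(score, 2)))
    return hits

output = "It was the best of times, it was the worst of times!"
print(flag_memorization(output))  # e.g. [("It was the best of times, ...", 0.98)]
```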

Ida:

So if an AI generates an image that looks almost exactly like, say, a famous photograph or painting from its training data, that's a problem.

Allan:

Potentially, yes, a big problem, but it gets even more complex with something called style imitation.

Ida:

Style imitation.

Allan:

Yeah, generative models are quite adept at imitating the distinct style of particular artists. You know, our sources highlight how an AI could generate an astronaut riding a horse in the style of Picasso.

Ida:

Right, I've seen prompts like that.

Allan:

But here's the thing An artist's overall style, generally speaking, is not subject to copyright protection.

Ida:

Oh, really? Just the specific work?

Allan:

Generally, yes, the expression, not the underlying style or idea. This has led to really strong feelings among artists who feel like their entire creative identity, their style, is being sort of stolen or diluted.

Ida:

I can see why they'd feel that way.

Allan:

Absolutely. Though proponents argue that humans also learn from and are influenced by existing art styles; that's how art evolves. It's a very thorny debate.

Ida:

Oh, so if it copies the vibe, the style, but not the actual painting itself, legally that might be okay.

Allan:

Legally, it's much harder to challenge, yes.

Ida:

That feels like a very, very fine line for artists and AI users alike. It seems it's not just about what the law says, is it? It's about what society feels is right or fair. Exactly. It's a very human problem for an artificial intelligence. Are these ethical considerations actually starting to influence the technology itself?

Allan:

That's the current legal nuance. Yes, style versus expression, but you're right, there are ethical lines starting to be drawn, partly pushed by public feeling.

Ida:

Like what.

Allan:

Well, in March 2025, ChatGPT actually placed limits on users generating images in the style of living artists like Hayao Miyazaki.

Ida:

The Studio Ghibli director. Why?

Allan:

There was this whole Ghiblification trend online, making everything look like his style. It sparked controversy, partly because Miyazaki himself has been very publicly negative about AI art.

Ida:

Ah, okay. So the platform responded to the artist's feelings and the public discussion.

Allan:

It seems so. It shows a growing recognition that ethical considerations, public sentiment and the creative community's concerns are pushing beyond just strict legal interpretations. Companies are starting to listen, maybe.

Ida:

Interesting. This brings us, then, to the big picture. What actually happens when AI-generated content floods the market? Does it ultimately help us maybe more choice, lower prices or does it hurt the human creators it learned from by effectively competing with them?

Allan:

Well, there's a new study by Samuel Goldberg and H. Tai Lam that gives us some pretty striking insights into exactly that question. Oh good, some data. Yes, they examined an online marketplace, a real one, that, starting in December 2022, began allowing AI-generated images to compete directly alongside human-made ones. Crucially, they had to be labeled as AI-generated.

Ida:

Okay, labeled. And what did they find? Was it like a robot uprising for artists? Did the humans get pushed out?

Allan:

The results were pretty stark. The total number of images for sale on the platform skyrocketed, up 78% per month. Wow. And that increase was almost exclusively driven by generative AI.

Ida:

Okay, more stuff available, but what about the artists?

Allan:

And here's the kicker the number of non-AI artists, human artists, on the platform actually dropped by 23%.

Ida:

Ouch. Okay.

Allan:

And while total sales across the platform increased by 39%, more transactions overall, right, purchases of non-AI, human-made images actually dropped.

Ida:

Ah, so the AI stuff wasn't just adding to the pie, it was taking slices from the humans.

Allan:

That strongly suggests that AI-generated images are not just a supplement but direct substitutes for human-generated ones, at least in this marketplace context.

Ida:

So for consumers, maybe good news: more choice, maybe better quality, pushed by competition. Potentially, yes. But for human artists, yeah, not so great. Sounds like a difficult market shift.

Allan:

Exactly. The study concluded that generative AI is likely to crowd out non-gen AI firms and goods. Good news for buyers, perhaps, but tricky, as they put it, for producers.

Ida:

Tricky seems like an understatement.

Allan:

And they flagged a real policy concern about markets being completely dominated by AI, effectively squeezing humans out. Right. And this research, importantly, directly supports the argument that AI outputs are substitutes for human-created inputs. That's a central point in many ongoing lawsuits, including the big one, the New York Times case against OpenAI.

Ida:

Connecting the dots. So what's the solution here? I mean, do we try and ban AI art? That seems unlikely.

Allan:

The researchers don't recommend a ban, no, but they talk about ensuring equitable access to GenAI technologies for non-AI artists.

Ida:

Equitable access?

Allan:

How?

Ida:

It points to the need for new frameworks, new regulations. Creative Commons, for example, is actively exploring how its licenses can maybe support generative AI while still respecting human creators. Trying to find a balance? Exactly. And they acknowledge that ethical concerns go way beyond just copyright law.

Allan:

It touches on privacy, consent, bias, all those wider economic impacts we just talked about.

Ida:

It's a whole ecosystem of issues.

Allan:

It really is. And if we connect this to the bigger picture again, the EU AI Act, adopted in June 2024, is the world's first comprehensive AI law. A major step.

Ida:

What does it aim to do?

Allan:

It aims for safe, transparent, non-discriminatory AI. And, specifically on this point, it requires generative AI to disclose that content is AI-generated, like in the study.

Ida:

The labeling.

Allan:

And remember, publish summaries of copyrighted training data by August 2025.

Ida:

Those are crucial first steps towards greater transparency and maybe some accountability. It really is like a whole new economic and artistic paradigm shift unfolding right before our eyes.

Allan:

It feels that way, doesn't it?

Ida:

So, wrapping this up, what does this all mean for you, our listener? Whether you're a creator grappling with these questions, a consumer enjoying AI-generated content, or just someone fascinated by how quickly tech is changing everything, it's clear that this intersection of AI and intellectual property is well incredibly complex. It's constantly evolving and it has significant real world impacts already.

Allan:

Indeed. From that fundamental question of human authorship we started with, to the intricate dance of fair use and AI training, to these very real economic shifts we're seeing in creative markets, this deep dive really shows us that our legal and ethical frameworks are, frankly, struggling to keep pace. Playing catch-up. Definitely. The core challenge seems to be balancing this incredible pace of innovation with the crucial need for protection and fair compensation for human creativity.

Ida:

Yeah, it's not just about the law on paper, is it? It's about fostering a future where AI genuinely augments human potential, rather than maybe diminishing it or replacing it entirely. That's the hope. And the question of who gets compensated, and how? Well, it's far from settled, but cases like that massive Anthropic settlement are certainly setting new, impactful precedents. Things are moving.

Allan:

They really are. And maybe this raises an important question for all of us to think about as AI continues to evolve its creative capabilities, and it will, for sure: how will we collectively define and truly value originality and authorship in a world where machines can mimic, synthesize, and generate content at scales we've never seen before?

Ida:

A question to ponder indeed. Will future masterpieces come with a human signature, or maybe just a string of code? That's definitely some food for thought until our next deep dive.
