Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Hey, welcome to Science Stuff, a production of iHeartRadio. I'm
Jorge Cham, and today we're talking about artificial intelligence, or AI,
and specifically images and videos made by AI, which some
people call AI slop, and how it affects us, our brains,
our society, and our sense of reality. We're going to
be talking to an AI expert about this, a neuroscientist,
(00:24):
and somebody who's looked into the dangers of AI generated
pictures of cute, fuzzy animals. Yeah, you think they're perfectly safe,
but they're not. So ask ChatGPT to clear your schedule,
Claude your way to our show, and Midjourney with
us as we answer the question: what is AI slop
(00:47):
doing to us? Enjoy. Hey everyone. So you've probably noticed
that AI is everywhere, and you've probably seen and heard
images and videos made by AI programs in your social
media feed, online, and even on TV, sometimes without knowing
it was made by AI. And this trend has exploded
(01:10):
in just the last few years. So I was curious
to know two things. First, what happened a few years
ago that led to this sudden explosion of AI content
and apps and services? Did we suddenly discover a magical
algorithm or alien artifact that gives machines this much power?
And second, I wanted to know what effect all this
(01:31):
fake AI made content has on our brains and how
we see the world. So we'll answer both questions, and
we'll start with the first. What happened a few years
ago that started this recent explosion in AI? And is
there a limit to it? To answer these questions, I
reached out to this guy I met at lunch the
(01:52):
other day, who I'm pretty sure is human. Well, thank
you doctor Tam for joining us.
Speaker 2 (01:59):
Thank you, Jorge, for having me. So my name is
doctor ed Tam. I am a postdoctoral fellow at Stanford University.
A lot of my work concerns the uncertainty around AI,
what the limitations of AI are, and how we
can potentially overcome those.
Speaker 1 (02:15):
And how can I make sure you're real and not
an AI.
Speaker 2 (02:19):
Well, we've met in person, so you know that.
Speaker 1 (02:23):
I met someone who looks like you, for all I know,
although it was over at lunch.
Speaker 2 (02:30):
So yes, a robot that can eat and joke and drink.
Speaker 1 (02:36):
Okay, doctor ed Tam is going to tell me what
happened a few years ago that made AI suddenly explode
in what it's able to do. But to do that,
we have to go back to where it all started,
which is when scientists had the idea for AI in
the first place.
Speaker 2 (02:52):
It goes back to like the last century at least.
Speaker 3 (02:56):
So the idea is that the human brain is composed
of cells that are called neurons, and these neurons connect
with each other, and these different connectivity patterns in the
brain give us the rich functionality of the human brain.
So back in the nineteen hundreds, scientists were thinking, can
we somehow mimic the biological human neural network with a computer,
(03:19):
And so there are some early models that kind of
tried to do this. One of the earliest
and most famous ones is called the perceptron, and it's just
this kind of like three-layer neural network with some
nonlinear function encoded. So that was kind of like
the first model, but it was already kind of like
a mini breakthrough.
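(A quick aside for the technically curious: the perceptron idea doctor Tam describes, weigh each input, add them up, and "fire" if the total crosses a threshold, can be sketched in a few lines of Python. The weights and threshold below are made-up illustrative values, not taken from any real model.)

```python
# A minimal sketch of a perceptron-style artificial neuron: weigh the
# inputs, sum them with a bias, and fire (output 1) if the total is
# positive. The weights and bias are illustrative, hand-picked values.

def perceptron(inputs, weights, bias):
    """Return 1 if the weighted sum of inputs crosses zero, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Example: with these hand-picked weights, the neuron computes a
# logical AND of two inputs -- it only fires when both inputs are 1.
weights = [1.0, 1.0]
bias = -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], weights, bias))
```

Stacking many of these simple units in layers, as the episode goes on to explain, is what lets networks encode much richer patterns.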
Speaker 1 (03:35):
It sounds like a transformer like the toys.
Speaker 3 (03:38):
Oh yes, yes, Megatron.
Speaker 2 (03:43):
I don't know why they call it the perceptron.
Speaker 1 (03:46):
So, starting in the nineteen forties, scientists and engineers had
an idea of how the human brain worked. It was
a humongous network of neurons connected to each other, and
so the scientists and engineers wondered if we could make
an artificial brain. So they programmed on a computer how
a neuron reacts. It takes signals from other neurons, weighs
(04:07):
each input depending on some numbers or parameters, and then
if it all adds up to a certain amount, it
fires off a single signal to other neurons. And at
first these networks with layers of neurons were pretty simple,
but amazingly they could easily recognize patterns, meaning like recognize
(04:28):
numbers or images.
Speaker 3 (04:31):
That's exactly right, like telling apart handwritten digits,
telling apart a cat from a dog.
There are all these image benchmark data sets that people
work on, and then it turns out when you stack
them on top of each other, it can encode really
really rich patterns.
Speaker 1 (04:47):
So things progressed somewhat slowly. Over the next few decades,
people started to make bigger and bigger networks of these
artificial neurons. Instead of a few neurons, it started to
become dozens, and then hundreds, and then thousands of these
neurons and dozens of these layers, and it was pretty powerful.
You could give the machine a whole bunch of images
(05:08):
of dogs and it could learn to recognize dogs. You
could give it a whole bunch of human fingerprints or faces,
and it could recognize fingerprints and faces.
Speaker 3 (05:18):
So that was maybe in like the twenty tens or
something like that. When you go on Facebook, you know how,
if you post a picture on Facebook, it actually tags
your face and it tags the faces of your friends,
and I remember that being like magic.
Speaker 2 (05:32):
I was like, wow, like it recognizes me.
Speaker 3 (05:35):
And that's the type of AI back then ten years
ago that can do that.
Speaker 1 (05:41):
So ten years ago, AIs were really good at recognizing things,
but that's kind of all they could do. They couldn't
make anything up or talk to you or write anything
for you. They were smart, but they were passive. But
then in twenty seventeen, some engineers at Google discovered a
special way to make these networks of neurons that changed everything.
(06:05):
And it kind of started when they said, you know what,
maybe we should stop trying to copy the human brain.
Speaker 3 (06:12):
So at the very beginning, people tried to come up
with very simple computer models, like the perceptron, that tried
to mimic the human brain, and then gradually this neural
network or deep learning morphed into something completely different that
is no longer trying to mimic the human brain.
Speaker 1 (06:29):
Yeah, you can imagine that if the History of the
Human Race was a movie, this might be the part
of the movie that foreshadows that it's not gonna end well.
What the engineers at Google had invented was something called
the transformer. In this case, it's not related to the
(06:51):
robot toys.
Speaker 2 (06:51):
It is not related to the robot toys at all.
I guess I've been in the AI.
Speaker 3 (06:56):
world for so long that transformers used to mean
the movie and Megatron and all that stuff. Now it
means something completely different to me.
Speaker 1 (07:03):
They reprogrammed your brain.
Speaker 2 (07:05):
They reprogrammed my brain. Yeah, okay.
Speaker 1 (07:09):
What this transformer way of connecting neural networks that the
Google engineers invented in twenty seventeen does is that it
uses an old technique from language translation called attention. And
basically what it means is that when an AI is learning, say,
a piece of text from a book, it doesn't just
learn what each word means. It learns the context of
(07:31):
the word. For each word, it sort of learns what
word it usually has after it and before it, and
it also learns what word it usually has two words
after it and two words before it, and so on
and so on. Here's an example. So like the
word sit, or sitting, usually comes a few words
before chair or sofa.
Speaker 3 (07:52):
Right right right, So I sit on a sofa. I
sit on a sofa is very logical, very sensible. The
key word that is important in terms of me understanding
what sofa means is the I in the sentence. So
it's not a dog sitting on the sofa, it's not
cat sitting on the sofa, it's I that is sitting
on the sofa. And so the attention mechanism allows you
(08:13):
to kind of learn that type of correlation. And there
are technical details about position encodings, but the main thing
is just you want to learn the meaning of a
word in the context of the words that surround it.
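(For listeners following along at home: the attention idea doctor Tam describes, each word scoring how relevant every other word in the sentence is to it and then blending their meanings accordingly, can be sketched in plain Python. The vectors below are toy values standing in for learned word embeddings; real transformers also use learned projections and the position encodings he mentions, which this sketch omits.)

```python
import math

def softmax(scores):
    """Turn raw relevance scores into positive weights that sum to one."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(query, keys, values):
    """Scaled dot-product attention for one query vector: score each key
    against the query, softmax the scores, then blend the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: one word attending over three context words, whose
# key/value vectors here are hand-picked stand-ins for embeddings.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
print(attend([1.0, 1.0], keys, values))
```

The output is a new vector for the query word that mixes in the words most relevant to it, which is the "meaning in context" the episode is describing.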
Speaker 1 (08:27):
This sort of simple and kind of common sense sounding
trick is basically what revolutionized AI. You can apply it
to text, images, music, videos, and it's what makes ChatGPT, Gemini, Grok,
Midjourney, Claude, Nano Banana, Copilot and all those AI
agents out there work. And that's because using this transformer
(08:51):
architecture and paying attention to the context of the data
allowed AIs to teach themselves.
Speaker 2 (09:00):
Yeah, so it's called self supervised learning.
Speaker 1 (09:03):
Oh that sounds scary.
Speaker 2 (09:04):
Yeah. Yeah. So traditional machine learning or AI systems, they
rely on supervised learning, which means that a human has
to tell you what is right or what is wrong. Now,
while that's great, it's not very scalable because a human
can only tell you what's right or wrong for a
thousand things or two thousand things or whatnot. You can't
(09:25):
scale it to a billion things. Whereas with this self
supervised paradigm, you don't need human supervision. You just need
some large corpuses of unstructured data, and then the machine
teaches itself just by predicting what the next token is
and seeing whether that matches reality in terms of the
real data.
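(To make the self-supervised idea concrete: here's a toy sketch in Python. Real systems train a huge neural network to predict the next token; this stand-in just counts which word follows which in the raw text. The point is the same, though: the "label" for each word is simply the word that comes after it in the data, so no human has to mark anything right or wrong. The corpus below is obviously made up.)

```python
from collections import Counter, defaultdict

# Self-supervised learning in miniature: learn from raw, unlabeled text
# by using the next word itself as the training signal. This toy model
# just counts bigrams instead of training a neural network.

def train_bigram(text):
    """Count, for each word, how often every other word follows it."""
    words = text.split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Guess the most frequent follower of `word` in the training data."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Made-up corpus; no labels anywhere -- the text supervises itself.
corpus = "i sit on a sofa . the dog sits on the floor . i sit on a chair"
model = train_bigram(corpus)
print(predict_next(model, "on"))  # "a" follows "on" twice, "the" once
```

Scale the same predict-the-next-token recipe up to the whole Internet and billions of parameters, and you get the leap the episode describes next.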
Speaker 1 (09:45):
What doctor Tam is saying is that this new type of
AI could teach itself by predicting what the next word
or part of an image should be according to its
context and then checking with the real thing. And the
ability of AIs to learn context and teach themselves, combined
with big companies putting billions of dollars into huge computer
(10:09):
farms, meant that you could suddenly have AIs teach themselves
the entire Internet.
Speaker 3 (10:16):
So the idea is that very surprising things happen when
you scale something to really, really massive sizes. They perform
way way better in terms of producing realistic samples and texts.
So there is some type of phase transition in terms
of how intelligent or how realistic the samples that it
(10:37):
generates will look like.
Speaker 1 (10:39):
Click. It suddenly activated some potential in these systems that
experts were not expecting. Right. So, to recap: AIs were
originally meant to copy the human brain, and they were
really good at just recognizing things: patterns, faces, numbers. But
then some engineers at Google decided, you know what, let's
(11:02):
forget the human brain and just push this to the limit.
And what they came up with created AIs that could
understand context, and more importantly, they could teach themselves. And
once that happened, they scaled up, gobbled up the Internet
and made a leap that's making a lot of people,
uh concerned right now. So when we come back, we're
(11:25):
going to talk about how these new ais can affect
your brain, and how they could even make endangered species
of animals go extinct. Don't go anywhere, we'll be
right back. Hey, welcome back. We're talking about AI slop
(11:52):
or AI made images and videos that are swamping the
Internet right now, and what effect it's having on us.
So far, we've been talking about what happened in twenty seventeen that
suddenly made these AI systems be able to talk to
you and make the kind of realistic images everyone sees
in their social media feeds. Now we're gonna talk about
(12:12):
the effect it's having on our brains. Is it possible
for AI generated content to rewire how we think? To
tell us about that, I reached out to my friend,
neuroscientist Dwayne Godwin, a professor at Wake Forest University and
the co author of the book Out of Your Mind,
which he co wrote with me. Doctor Godwin recently wrote
(12:33):
an article about this topic online, so I thought he
was the perfect person to give us the neural perspective
on this topic. Here's my conversation with doctor Dwayne Godwin. Hey, Dwayne,
how are you?
Speaker 4 (12:46):
I'm doing well, Jorge, how are you?
Speaker 1 (12:48):
I'm good? Now? Just to make sure this is the
real you, right, I'm not talking to an AI version
of you.
Speaker 4 (12:53):
No, I am. I'm the real deal, man.
Speaker 1 (12:57):
And is your intelligence artificial or organic?
Speaker 4 (13:01):
It's augmented by caffeine.
Speaker 1 (13:05):
Augmented intelligence? There you go. Well, I wonder if there's
a way for people to know if this conversation is
generated by one of those, you know, ChatGPT things
that make podcasts.
Speaker 4 (13:15):
You know, if we were really sneaky, we might have
generated an AI podcast of the two of us going
back and forth and then tested our audience in their
ability to tell the difference. But we're not that sneaky.
Speaker 1 (13:28):
Of course, this is.
Speaker 4 (13:29):
The real thing.
Speaker 1 (13:30):
All right, let's get into it. I guess the first
question I had was how good are we at telling
is something is AI generated?
Speaker 4 (13:36):
We're not very good. People tend to overestimate their AI radar.
A lot of obvious tells, like you know, extra fingers,
robotic phrasing, those are getting all smoothed out. And we're
looking at these things often on very tiny screens, and
so what's plausible quickly becomes good enough or believable.
(13:58):
And the result of that is there's kind of a
confidence gap. We feel very certain that we can tell
fake stuff when we see it, but often we're not
doing much better than chance.
Speaker 1 (14:09):
Are there any studies that kind of tell us how
good we are at spotting fake AI?
Speaker 4 (14:14):
Yeah, there have been studies that have been done. There
was a study where people, in essentially two groups, were
presented with both real images and AI generated images, and
it turns out that they're about fifty percent, you know,
it's like fifty to fifty, that they can tell the difference.
Speaker 1 (14:30):
Basically, they can't tell the difference.
Speaker 4 (14:32):
Yeah, so they're really about at the level of chance.
Speaker 1 (14:35):
Yikes. Okay, and this was probably done a couple of years ago, right?
of years ago, right.
Speaker 4 (14:40):
Yeah, that's right, And so things are only better now.
So you can imagine that right now, with the quality
of generative AI, that this is even going to be
more true.
Speaker 1 (14:52):
Yes, we all like to think we're good at spotting
fake images or videos. I mean, we've all seen bad
computer graphic effects movies, so we think we know what
fake looks like. But actually the science says that we
are not much better than guessing randomly at whether something is
fake or not. And according to doctor Godwin, there are
three reasons our brains are kind of helpless against this
(15:16):
onslaught of fake content coming at us on a daily basis.
The first reason is that your brain sort of wants
to believe things are real. I guess, as a neuroscientist
and as a psychologist, why do you think our brains
are so easy to fool?
Speaker 4 (15:33):
Well, a couple of reasons. You know, our brains are
built for speed and survival and not for fact checking.
Speaker 1 (15:41):
Right, And I guess that means out in the wild,
if you think you see a bear or a tiger
coming at you, your brain is not going to want
to, like, fact-check that there really is a tiger
coming at you. You want your brain to just react
to the idea that maybe there's a tiger coming at you.
Speaker 4 (15:54):
Yes, that's adaptive. You're going to go with bear if
it looks anything like one.
Speaker 1 (16:00):
Good. Because I guess we don't have all the information all
the time, so our brains have sort of been trained to, like,
do as much as you can with as little information
as possible.
Speaker 4 (16:08):
Yeah, there's a concept in psychology called heuristics, which is
basically making decisions on the basis of very limited information,
and our brains are doing that constantly, so you know,
we have to make those very quick decisions, so we
can pass our genes onto the next generation.
Speaker 1 (16:24):
I see, and so you can exploit that, I guess,
because if we see something that sort of looks real, your
mind will basically conclude that it is real.
Speaker 4 (16:31):
Yeah, it will create the story that it is real.
You know, what is the purpose of our brains. Our
brains are prediction machines, and so we've learned certain facts
about the world, about how things normally go. So if
you're in the woods and you encounter something it seems
like a bear, your brain already knows that that's probably
not a great thing, and so you're going to extrapolate
(16:52):
that story and you're going to want to avoid that
bear as much as possible, right.
Speaker 1 (16:57):
I guess we kind of had a natural mechanism against that,
which was this idea of the Uncanny Valley.
Speaker 4 (17:04):
Yeah, I'm familiar with the Uncanny Valley. It's that sort
of eerie feeling that you get when you see something
that's trying to be real but it's not real. It
makes me uncomfortable because I know that it's not supposed
to look like that exactly.
Speaker 1 (17:16):
Yeah. So, like, we did have a brain mechanism to
defend against fake things, right? But it seems like AI
today has basically blown past that defense, like it's now
past the uncanny valley.
Speaker 4 (17:27):
Yeah, it has jumped across the uncanny valley for sure.
Speaker 1 (17:32):
In other words, our brain has a built-in BS
meter called the uncanny valley. But now AI content is
so good it's blown past that last line of defense
for your brain. Okay. The second reason our brains are
easy to fool by AI generated content is that it's
not built to withstand the constant stream of it we
(17:52):
get every day. The second thing you said was repetition.
Speaker 4 (17:57):
Yeah, that's right. So your resistance is being reduced in
two ways. So first is just the repetition of seeing
the same thing over and over.
Speaker 1 (18:05):
Meaning like, if I'm scrolling through these online feeds and
most of what I see has that sort of feeling
of an AI generated thing, then my brain is just
gonna normalize it. Is that kind of what you're saying?
Yeah, it is.
Speaker 4 (18:17):
There's another more insidious part of that, which is, let's
use, you know, the AI generated cute animal videos.
You'll see in the comments underneath those people
reacting to them as if they're real, and so part
of that is, Hey, everybody else thinks it's real, and
I'm a person too, shouldn't I be thinking that this
(18:40):
is real? And so there's this sort of desire on
the part of people to reach a consensus with your group,
and so if you start identifying with that group, then
your brain might be bent toward making a decision that, hey,
maybe this is real, even if you might have had
initial doubts. Yeah, maybe you start to fall into line.
Speaker 1 (19:01):
Like we all like to think we're independent thinkers and
critical thinkers, but part of our brain is sort of
wired to kind of basically follow the mob.
Speaker 4 (19:09):
Yeah, a little bit of mob mentality. Yeah, we like
to think of ourselves as logical beings, but the reality
is we're all challenged by biases, and those biases can
be very adaptive in the case of things like heuristics.
Speaker 1 (19:27):
And then the third reason our brains are easy targets
for AI slop is that a lot of that slop
is made to play with our emotions. And then you
said the third one was that we tend to believe
things that kind of affect our emotions.
Speaker 4 (19:43):
Yes, that's probably one of the most insidious things, is
that things that ring our emotional bell tend to make
it through without the kind of critical thought processing that
we would normally apply to something that's just factual.
So think about politics. Politics is very polarizing. So if
(20:06):
you see something in your social media feed and it
confirms what your emotions tell you should be true based
on your political leanings, then that we'll get through the
normal cognitive criticism that you would apply to that information.
Speaker 1 (20:23):
Yikes, Like, we let our emotions get in the way
of our critical thinking, so our brains are ill equipped
to deal with all this AI slop, because it tends
to believe what it sees. We're inundated with this content
and it's all meant to trigger our emotions and things
doctor Godwin says are just getting worse.
Speaker 4 (20:46):
They're getting worse because it's getting better. There's a lot
of money being put into making these things basically indistinguishable
from reality, and we were already suffering under the idea
that we could possibly tell the difference, but it's becoming
apparent that we cannot, and it's going to be
even harder.
Speaker 1 (21:06):
It kind of seems like it's also getting worse because
it seems like people are more willing to post fake
AI generated things, like just the other day, the White
House official account posted some AI altered image of some
activist protester, and they were totally unapologetic about it.
Speaker 4 (21:25):
Yeah, the tools are more readily available, and your ability
to manipulate those tools doesn't even require that you have
any sort of computer science degree.
Speaker 1 (21:34):
Yeah, it's just an app on your phone.
Speaker 4 (21:36):
Yeah. This stuff is scalable too, So it's not just
that a person can go in and do like a
one off. A single person could generate a lot of
this content and get it out there, and even use
AI to generate bots that would distribute that content for you.
Speaker 2 (21:53):
Yeah.
Speaker 4 (21:54):
So it's only going to get worse, I think.
Speaker 1 (21:57):
Yeah, it's not looking so good for us organic and
naturally made squishy intelligences out here. But according to
doctor Godwin, there are some things you can do to
protect your neurons. What can we do to protect our
brains from the effects of this AI slop?
Speaker 4 (22:15):
Well, it's hard. I wish I had an easy answer,
because what I've told you so far is pretty hopeless. Right,
it sounds hopeless.
Speaker 1 (22:24):
Maybe I should look it up on ChatGPT.
Speaker 4 (22:26):
But I think there are things that we can do,
and part of it is just slowing down. The thing
we can't do is there's no way we're going to
become a forensic AI detector. So what we have to
do is really add some friction, build some simple verification habits,
and try to shrink our exposure to this fire hose
(22:48):
of bad stuff. I think a good rule is if
it hits you emotionally in the first two seconds, then
just pause and then just ask the questions: okay, who made this,
where did it come from, and can I confirm that
this is real somewhere boring and reputable?
Speaker 1 (23:07):
I see, look for sources that you can trust, and
maybe shift your consumption from the wild west of social
media to some of these sources that you trust more
and that are known to give you objective truth.
Speaker 4 (23:21):
That's right. So if something seems to perfectly confirm your
own biases, that's the time to tell yourself, you know,
I need to dig a little deeper before I accept this.
Speaker 1 (23:32):
I see, check other websites, maybe even the ones that
you don't agree with.
Speaker 4 (23:36):
That's right. Part of it is if it's real, then
it should exist somewhere other than your social media feed.
And I think the other part of this is there
is an algorithm that is trying to cater information to you.
It's called a social media feed for a reason, because
it's being fed to you.
Speaker 1 (23:56):
Right, I see, yeah, realize that it's a feed.
Speaker 4 (24:01):
So you have a choice. On x.com, for example,
it'll have a curated feed, and then it'll have a
feed of people that you follow, and I always turn
off the curated feed and only go for the accounts
that I have actually chosen to follow.
Speaker 1 (24:17):
Yeah, well that's a good one. Well, hopefully people will listen.
But here's the twist ending, Dwayne: this was AI generated. Yes! No,
we're just kidding. No, we actually had this conversation. How
can we prove it? Dwayne, say something that an AI
would never say?
Speaker 4 (24:33):
What would an AI never say? Never listen to an AI.
Speaker 1 (24:38):
All right. When we come back, we're going to talk
about another way in which AI slop is affecting us,
one that is kind of unexpected. That is through the
insidious use of pictures of cute animals. You think they're harmless,
but actually they could lead to a lot of death
(24:59):
and destruction. So stay with us. We'll be right back.
Welcome back. I am not an AI. Or am I?
(25:22):
We're talking about the effects AI slop or AI generated
content has on us. And so far we learned how
AI slop works and why it's so easy for our
brains to believe it's real. Now we're going to explore
how it can affect our actions by focusing on one
specific example, which is pictures and videos of cute animals.
(25:46):
You've probably seen this in your social media feed. Some
cute little hummingbirds taking shelter inside a flower, or a
cute kitten plucking gummy bears off a gummy bear tree,
or a cow scratching itself with a broom. Actually one
of those is real, but the fact that you don't
know which of them it is is kind of the
(26:07):
whole problem, and, according to the next person I talked to,
can kind of be a serious problem with potentially dire consequences.
Katarina Zimmer is a science journalist who writes about biology
and the environment, and she recently published an article about
AI slop in Atmos magazine. Well, thank you, Katarina, for joining us.
Speaker 5 (26:29):
Thanks so much for having me.
Speaker 1 (26:30):
Now, how can I be sure that you're real and
not AI generated?
Speaker 5 (26:34):
That's a good question. I guess you'll have to trust
me on this one.
Speaker 1 (26:38):
Oh no, Well, you've written a lot about the impact
of AI and AI generated images on us, but from
a different angle, So tell us about the article that
you wrote.
Speaker 6 (26:48):
Yes, the story began sometime last year when I started
seeing this wave of AI generated images of animals on Instagram,
which is my poison of choice when it comes to social media.
Speaker 5 (27:01):
And so for context, I'm a big animal lover and.
Speaker 6 (27:04):
I write journalistic articles about nature and wildlife for a living,
So my algorithms know that and they show me a
lot of animal content.
Speaker 5 (27:12):
But these AI generated images started to.
Speaker 6 (27:16):
Worry me a bit because they were usually very captivating,
often very beautiful images, but they're not real and I
guess I could only tell that they were AI generated
because they were depicting animals doing things that they usually don't,
or moving not quite the way you'd expect, or sometimes
(27:37):
they had this like glossy photoshop sheen to them that's
characteristic of some AI imagery.
Speaker 5 (27:45):
And what alarmed me.
Speaker 6 (27:47):
Was that a lot of people, judging by the comments
sections on some of these posts, didn't seem to know
that they were fake because they look so realistic.
Speaker 1 (27:55):
Wow. What would they say?
Speaker 6 (27:57):
So one example is an AI generated video of a pair
of hummingbirds sheltering from the rain inside a rose. I
asked an ornithologist about this. Hummingbirds do not do that,
and people are still commenting things like oh, how cute lovebirds,
how adorable, and it's got three point one million likes.
Speaker 1 (28:20):
Wow, So millions of people were believing that these were
real hummingbirds.
Speaker 6 (28:25):
Yeah, And as a truth abiding journalist, I felt it
was important to call this out for what it is,
misinformation, and also explore what impact it has when
you have millions of people believing this kind of imagery
is real? How could this affect our perceptions of nature?
Speaker 1 (28:44):
Amazing. What are some other examples of AI generated images
of animals you've seen out there?
Speaker 6 (28:49):
Yeah, so there are different genres of this kind of imagery.
Speaker 5 (28:54):
So birds in beautiful settings.
Speaker 6 (28:56):
I've seen a lot of videos showing animals like bears
or sometimes seals being rescued from the ocean. There are also
AI concoctions of animals that don't exist, like there are
some like tadpole type creatures with big googly eyes that
I saw going viral as well. And there are also
(29:18):
a lot of videos showing like eagles or lions attacking
children or dogs, that kind of thing.
Speaker 1 (29:25):
Why do you think people make these images and videos?
Speaker 6 (29:29):
Yeah, I think there's a lot of motivations that feed
into this. The most obvious one is just wanting more likes, clicks, followers,
which is of course like the currency of status for
social media.
Speaker 1 (29:43):
People just want to get popular on social media.
Speaker 6 (29:46):
I think so. In some cases, there are also financial incentives,
so social media platforms like TikTok will actually pay creators
for videos that go viral. I had a conversation with
one creator who makes AI generated videos, and he had
a motivation that really surprised me. For him, part of
(30:07):
the fun is deliberately testing whether people can recognize these
as fake. So he'll often make his videos to look
like they were shot from an iPhone, like there is
one showing a giraffe walking through a mall in Dubai.
Speaker 1 (30:22):
Now you wrote a little bit about why we're vulnerable
to these images, like why are we prone to believe
these fake animals?
Speaker 6 (30:28):
Yeah, I think a lot of this boils down to
our growing lack of knowledge about animals among many societies today.
So a lot of us are living in cities, We're
being less exposed in nature. I think we're losing touch
on a large scale, and I mean a lot of us,
including me, I should say, struggle to identify species of
(30:51):
plants and animals around us. There is one study of
students at Oxford University that showed that half of them
couldn't identify five British bird species. If I recall correctly, those
were actually biology students.
Speaker 1 (31:06):
Wow.
Speaker 5 (31:07):
And I think we're.
Speaker 6 (31:08):
Especially susceptible to biology related misinformation in particular because, I mean,
scientists are always discovering new species and new behaviors. Just
the other week, there's that story about Veronica, this Austrian
cow who used a broom to scratch herself that was
not AI generated because that information came from reputable news outlets.
Speaker 1 (31:32):
For a second, I thought you were mentioning something that
was not real.
Speaker 5 (31:35):
No, No, I'm pretty sure that was real.
Speaker 1 (31:37):
This was a cow that grabbed a broom and scratched
itself with it.
Speaker 6 (31:40):
Yeah, she had figured out how to use a broom to
scratch herself in particular ways, and I think it was
described as the first case of tool use in cows
by scientists.
Speaker 1 (31:54):
Bonus points to you if you correctly guessed that the cow
scratching itself with a broom is the real image and
not AI generated. It's true, look it up on a
real news site. Okay, now let's get to how these
AI made images can affect us. And then in the
article you wrote about how this can affect how we
(32:15):
relate to animals. What did you learn?
Speaker 6 (32:17):
Yeah, so a lot of these impacts are just unfolding
at the moment, so we don't have concrete data on this.
But the experts I spoke to had a few concerns.
One of them is that as we see more images
of a certain animal, that can make us more likely
to believe that that animal is in fact more abundant
(32:39):
in the real world. So there's some interesting research showing
that seeing images of endangered lions or giraffes can cause
us to overestimate their abundance in the wild. And I
don't know the exact numbers, but I think there's only
like a little over one hundred and ten thousand wild
giraffes in the world, and only a little more than
(33:01):
twenty thousand lions. But a lot of people don't know
that because we see these animals all the time in
travel commercials, children's toys, and cartoons, and so some of
the AI generated videos of birds that I've seen featured
birds of paradise, many of which are threatened by habitat loss.
(33:22):
So I think seeing these images a lot can cause
us to believe they're actually quite abundant and make us
blind to the conservation crisis these creatures are in.
Speaker 2 (33:32):
I see.
Speaker 1 (33:35):
So that's one way AI slop can change our
view of animals. It can distort how abundant or how
safe from extinction they are. It can also distort our
view of how safe or dangerous some animals are.
Speaker 6 (33:50):
And there are some other concerns and more related to
specific kinds of videos. So one expert mentioned that seeing
human like depictions of animals like polar bears could lead
us to believe that they're cute, fluffy, and approachable creatures,
and that kind of information could lead people to seek
(34:14):
out encounters with those animals in the wild. There actually
are cases of people in the US going out to
try and pet mountain lions and bears, and unfortunately
they end up getting mauled.
Speaker 1 (34:29):
Are you trying to tell me that polar bears don't
really drink Coca Cola and wear scarves?
Speaker 5 (34:33):
I'm afraid not. Sorry to bust that myth.
Speaker 1 (34:38):
So I shouldn't go up to a polar bear with
a soda pop drink?
Speaker 5 (34:42):
Definitely not. You should never approach a polar bear.
Speaker 1 (34:46):
Yes, fake images can make some animals seem safer than
they really are, and they can also have the opposite effect.
They can make some animals seem more dangerous than they
really are.
Speaker 6 (34:59):
There are a lot of videos showing animals like eagles
or lions attacking humans or dogs, which could make people
more likely to support the culling of those animals or
less likely to support their conservation.
Speaker 1 (35:17):
Katerina says another big genre of AI generated images and
videos is that of exotic or endangered animals walking around
in people's houses.
Speaker 6 (35:28):
A lot of AI generated imagery also features exotic
animals being…
Speaker 5 (31:33):
…kept as pets, such as capybaras.
Speaker 6 (35:36):
Seeing that kind of imagery could make people more likely
to want to have them as pets and potentially even
fuel the capture of these animals in the wild.
Speaker 1 (35:48):
But one of the worst consequences of all this AI slop,
Katerina argues, is that it makes us doubt what's actually real.
Speaker 6 (35:58):
I want to add that the big impact that worries me
personally is that as people begin to cotton on to the fact that
a lot of this imagery isn't real, they might start
to dismiss wildlife imagery altogether. So you could have photographers
spending years trying to get a significant image of a
(36:19):
bird doing something interesting, only to have people on social
media say, oh, that's fake news. I'm not going to
care about that. And the same thing could happen with
scientists who discover new species or new behaviors. And I
think that's really what worries me the most is that
it could further alienate us from the natural…
Speaker 1 (36:41):
…world. Because we don't know what to believe anymore,
and so we will tend to not care about…
Speaker 5 (36:48):
…it, right, exactly.
Speaker 6 (36:50):
Yeah, And I think that's really sad because this is
a time when species really need our attention. So twenty
eight percent of plant and animal species that have been
studied are in some way threatened with extinction, so basically
a third. And to me, I think that's the saddest
thing about this is that not only are we not
(37:11):
paying attention to these real threats that animals are facing,
but that we're instead looking at these fake images of
them on our phones that don't reflect reality. It's like
the world's on fire and we're looking at these hummingbirds
and roses and going, oh, how cute. That's wonderful, and
(37:33):
there are actually some hummingbird species that are in need
of conservation attention.
Speaker 1 (37:38):
Meaning that it's sort of detaching us further from reality,
and the reality could be kind of bad out there. Yeah, exactly,
like the world's on fire and people are looking at
the pictures of the fire and going, oh, that's AI.
Speaker 6 (37:53):
Yeah, exactly, all right.
Speaker 1 (37:58):
Well, hopefully that gives you a pretty good sense of why
these AI generated images are so powerful all of a sudden,
why our brain is so easily fooled by them, and
how even pictures and videos of cute animals can impact
our sense of reality. Some of you might be wondering, well,
what can we do about this, and the answer is
not that complicated, according to our experts. You should, A,
(38:22):
not believe everything you see on the Internet or your
social media feed, and look for reliable and reputable sources. B,
you should keep pushing companies to do a better job
of labeling AI generated content. And C, lobby your elected
officials to put in place more rules and regulations around
this stuff. But most definitely you should listen to this
(38:45):
podcast every week. Wait did I say that or did
AI say that? Either way, if you're hearing this and
you're human, thanks for joining us. See you next time.
You've been listening to Science Stuff, a production of iHeartRadio,
(39:06):
written and produced by me, Jorge Cham, edited by
Rose Seguda, executive producer Jerry Rowland, and audio engineer and
mixer Kasey Pegram and you can follow me on social media.
Just search for PhD Comics and the name of your
favorite platform. Be sure to subscribe to Science Stuff on
the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts,
(39:27):
and please tell your friends we'll be back next Wednesday
with another episode.