
April 3, 2025 67 mins

For centuries, it was often understood that "seeing was believing" -- while people might embellish a story, and write whatever falsehoods they wished, visually witnessing an event was solid proof of what actually happened in any given situation. Yet this no longer holds true in the modern age. Photographs have been faked since, well, the invention of photography, and video followed shortly thereafter. However, new technology is enabling the creation of fake video with an unprecedented level of sophistication and believability. So what happens when we can no longer believe our own eyes? How will the world react to the rise of the Deep Fake?

They don't want you to read our book: https://static.macmillan.com/static/fib/stuff-you-should-read/

See omnystudio.com/listener for privacy information.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Friends and neighbors, fellow conspiracy realists, welcome back to Stuff
They Don't Want You to Know. We have a classic
episode for you today that has only aged better with time.
You may recall back in twenty nineteen, we started talking
about the idea of a deep fake. What is a

(00:24):
deep fake?

Speaker 2 (00:26):
Oh, it's a generated version of something that looks real.
And it's strange how even since twenty nineteen, the way
we talk about this concept and even the vocabulary that
we use surrounding it has changed completely. It's
been altered by artificial intelligence and the way we talk

(00:46):
about that thing. But yeah, it's just a fake thing, right? It looks like Matt, but it's not Matt.

Speaker 1 (00:55):
Yeah. Photographs have been faked since the invention of photography,
which we talked about there pretty often. And I just again,
you know, we get a lot of messages and people
ask me, hey, Ben, are you guys ever gonna successfully
predict a good or a positive thing? We hope so.

(01:18):
But we were right about a lot of evil things,
and deep fakes are one of them. It was really
interesting to listen back to this episode together and to
realize just despite your point about nomenclature to realize just
how disturbingly accurate a lot of this was.

Speaker 2 (01:40):
Yeah, so let's get into it. We assure you these are the real us having this conversation.

Speaker 3 (01:49):
From UFOs to psychic powers and government conspiracies, history is riddled with unexplained events. You can turn back now or learn the stuff they don't want you to know. A production of iHeartRadio's How Stuff Works.

Speaker 2 (02:13):
Welcome back to the show. My name's Matt, my name is Noel.

Speaker 1 (02:17):
They call me Ben. We are joined as always with our super producer Paul Mission Control Decant, and most importantly, you are you. You are here, and that makes this Stuff They Don't Want You to Know, as we are barreling toward the end of the year, if, that is, you believe in the current calendar.

Speaker 2 (02:36):
Yes, right, I mean.

Speaker 1 (02:39):
Is this the real life? Is this just fantasy?

Speaker 4 (02:44):
We're gonna do three part harmony.

Speaker 1 (02:45):
No, no, okay, here, let's start this way. What's the
what's the very last thing you watched on video before
we recorded today, like VHS, No, just video, just any
video on any screen.

Speaker 2 (03:00):
I can't really remember, but right now I'm feeling lost
in the woods all right.

Speaker 4 (03:06):
Is that another queen reference?

Speaker 2 (03:08):
That's a reference for all the parents out there who've watched.

Speaker 4 (03:11):
Frozen Two? Is that the song that actually held up?

Speaker 2 (03:15):
I liked.

Speaker 4 (03:15):
People were saying that the music was a little lackluster.

Speaker 2 (03:19):
I'm sorry. "When I get older, everything is going to make sense." That song, performed by Josh Gad, is one of the best songs written.

Speaker 1 (03:28):
Who's Josh Gad?

Speaker 2 (03:30):
He plays the Little Snowman.

Speaker 1 (03:32):
Yeah, okay, so so you saw that, which is entirely
a deep fake.

Speaker 4 (03:38):
Basically it's CGI. You know, there's nothing real going on there. I watched The Irishman, which is an interesting
thing to bring up when when we're talking about today's
topic because it uses anti-aging, de-aging technology, which is sort of a hybrid of, like, video and CGI to
make the actors look younger in the earlier parts of

(04:01):
the film where they're you know, their younger selves. The
movie takes place over decades, and it still has a
little bit of that Uncanny Valley kind of vibe. It
looks a little just too perfect, almost like a cut
scene from a final fantasy game or something like that.
But you eventually stopped noticing it. So that was the last thing I've been watching. I had to watch it in two sittings. It's quite long, and, of course, Scorsese

(04:22):
would not approve.

Speaker 1 (04:23):
I watched The Irishman on the recommendation of our own Paul Mission Control Decant, who has fantastic, impeccable taste in film and also has a film of his own out now on Amazon Prime. You okay with me plugging that, Paul?

Speaker 2 (04:37):
It's called Annie, Get Your Gun. There are some people
here at the apartment complex and they mean business.

Speaker 1 (04:44):
It does take place in the city, Annie in the City.
Do check it out. You astute listeners will notice some
cameos from some of our podcast.

Speaker 2 (04:57):
Cohort, especially Ben Well.

Speaker 1 (05:00):
I think my favorite turn is probably one of our
super producers, Chandler Mays. He's really the scene stealer for me.
So check it out, tell us what you think, and
we can assure you, to the best of our ability
that the people who appear to be on screen saying
and doing things are actually on screen saying and doing things.

(05:23):
This is not a revolutionary thought. For centuries, people used to say seeing is believing, and this meant that while people can make up anything in conversation or writing, actually witnessing something with your own eyes presented inarguable proof of an event. This began to change with the rise of photography.

(05:49):
Photographs could be faked. During the heyday of spiritualism in
the West, it was very common for mediums or people
purporting to be mediums to fake photographs.

Speaker 2 (06:00):
The old double exposure right right.

Speaker 1 (06:02):
Which was an unfamiliar alien technology to the casual observer.
We cannot blame people for believing that they saw it
with their own eyes. They were unaware of the trickery
that could or could not be involved. And then things
escalate further with the dawn of moving pictures. That's when directors, filmmakers,
and other industry professionals began working assiduously to make the

(06:24):
unreal seem real for fantasy's sake. For fantasy's sake, yes exactly, Matt,
to bring the fantasies of the human mind to life
of a sort on screen. Our species also quickly realized
the power inherent in film, and at times history has
hinged on specific images or even just bits of video.

(06:45):
So today's question, what happens in a world where seeing
is no longer believing? What happens when the line between
film fiction and film fact blurs? Let's start with some of the most immediately important video in the world: the news.
Here are the facts.

Speaker 4 (07:04):
So, according to a twenty eighteen survey from the Pew
Research Center, forty seven percent of Americans prefer watching the
news rather than reading or listening to it. Thirty four percent,
on the other hand, prefer to read the news and
nineteen percent to listen to it.

Speaker 2 (07:19):
And I know those people who prefer their news on video,
most of them still really want to watch it on
a big screen, on a maybe not a huge screen,
but a bigger screen like a television, rather than internet video,
you know, maybe on your phone or on your computer.
And that may be surprising to a lot of people listening.
I know it's a little surprising to me, right. I

(07:41):
think it's because, look, this is my understanding of it.
But I think I think a lot of the people
who end up responding to a Pew survey maybe are
leaning more towards people who would be watching their televisions,
or maybe a little skewing a little older maybe.

Speaker 1 (07:57):
Or the kind of people who say, sure, take a survey.

Speaker 2 (08:01):
Yeah, right, That's just my read of it.

Speaker 1 (08:04):
I think it's a good point.

Speaker 2 (08:05):
And then Pew found just over four in ten, so four out of ten United States adults prefer TV, compared with about a third who prefer the web, and then fourteen percent who prefer radio and seven, only seven percent, who actually want to read the words.

Speaker 1 (08:23):
Of the news right for one reason or another. That's
the other thing. We don't have a solid methodological grasp
of how these questions were asked, like how they were built, how they were framed, and there's so much that goes into that. But these numbers feel pretty solid, if surprising.

(08:44):
I think it's safe to say that the four of
us recording obviously longtime listeners. You know this. We don't
just go home and turn on CNN, Fox or MSNBC
or something. We we get to the edges of stuff.
We look into the weird things.

Speaker 2 (09:03):
Yeah, generally, I just really quickly and maybe we do
a quick poll here. Generally, if I'm consuming
the news, it is through a written article.

Speaker 4 (09:13):
What about you, guys, Yeah, I typically read stuff because
of the way we research for the show and other shows.
It tends to be a kind of our bread and butter.
Occasionally watch documentary, read parts of books. We don't always
have the luxury of reading, you know, an entire book
for one podcast episode, but definitely excerpts and chapters. And
then occasionally, you know, stuff like YouTube videos, which tend

(09:33):
to sort of show you where the kind of Internet
culture or the zeitgeist is, you know, coming down on
one side or the other of an issue.

Speaker 1 (09:41):
I think I would watch television more often if there
were more, if there was a larger degree of variance
in the narratives and stories presented in the West at least,
so people like me naturally end up watching things on
the Internet because that's the easiest way to get opposing viewpoints. Right,
you want to see Tinhua or Al Jazeera or RT,

(10:04):
all of which are imperfect, then you you probably are
not going to see that in your basic Comcast package, right,
So the majority of Americans don't prefer online video just yet,
even though you know, I think a lot of us
listening because we listen to podcasts, are already all up
on the internet for video content for news, right and.

Speaker 2 (10:26):
For honestly audio podcasts about the news, like The Daily
or The Daily Zeitgeist. There's all kinds of shows
out there that will give you what you need in
an audio way.

Speaker 1 (10:37):
The Economist, BBC Global News, et cetera. And this
makes sense for us. But even if the majority of
Americans or the largest swath of Americans don't prefer online
video just yet, at least for the news, there's no
arguing that the Internet has made a massive impact on

(10:57):
video technology and filming in general. And we are
in the midst of this evolution as we speak. It
is an evolution at a very fast cadence. The earlier,
the earliest, and the earlier film technology. And you guys
and Paul know this very very well. Like the earliest
stuff was super expensive, highly specialized, it was cantankerous, and

(11:22):
it broke a lot. Add to that, distribution channels for
anything filmed were owned by a fairly small number of
corporate or state interests, and this meant that whether your
film was a sci fi blockbuster or whether it was
just some shameless World War II propaganda, it would go
through predictable channels. The same people would shake the same

(11:43):
hands to get that to a theater or a screen
near you. But the technology continued to evolve. Soon people
were able to purchase televisions and instead of relying on radios,
they would be able to put an image with a sound.
This played a huge role in things like the first televised presidential debate.
You know about this one, right, Nixon and Kennedy.

Speaker 2 (12:05):
Oh yeah, just the difference in their appearances, and how
much of a difference that made outside of the words
they were saying.

Speaker 1 (12:12):
Exactly. So, people listening to the radio, regardless of political party,
thought that Nixon won the debate, and people watching television,
regardless of the political party, thought that Kennedy won the debate.
So we were able to bring a small version of
the big screen to living rooms around the world, and
happily ever after, right, yes, yeah, well, so then we
had the rise of home projectors vhs, like you had

(12:35):
mentioned earlier, and old DVDs and so on. These allowed
viewers more agency. You didn't just have to stick to
the programming dictated by your TV channels, your CBSes, your NBCs. Yeah.

Speaker 4 (12:45):
It's like a rudimentary early version of what we now
think of as on demand consuming of media, which has
really changed the game entirely.

Speaker 1 (12:53):
Right, right, And we're still segments of the industry are
still attempting to keep up with that change. As film
technology improved, costs for equipment began to crater. Nowadays, filmmakers
don't always have to go to a studio. You don't
have to shake the same hands at Warner or whatever, and.

Speaker 2 (13:13):
You could shoot a low budget movie with technology available
and you know, make VHS copies let's say, of it,
and distribute it locally or maybe two stores in a
local area. I'm just saying, like, in that time, even
before you know nowadays, it was possible to escape the studio.

Speaker 1 (13:33):
Right right, And this escape from the studio, as exodus
from the studio, has accelerated the rise of home and
internet video commingled and it led us to the current world,
a world in which anyone with a little bit of
scratch can purchase tools to make their own video. Right.

Speaker 4 (13:51):
And not only is it the democratization of creating, it's
the democratization of consuming, because you know, anyone can upload
a video to the internet for free on YouTube and
you can command your own audience depending on the quality
of your work, and quality is sort of a loose term there,
I guess, but at least in terms of how salacious

(14:11):
or how kind of hooky it is and how much
it grabs people's attention you can actually command an audience
of scale as a creator with very little overhead.

Speaker 2 (14:22):
Yeah, with worldwide, virtually instant distribution. Sick.

Speaker 1 (14:27):
Yes, So if these filmmakers have an Internet connection, they
can bypass those antiquated channels of distribution send their work
across the planet. Anyone else with web access can, in theory,
watch it to their hearts content as much or as
little as they wish. But don't feel bad for the
old guard. The democratization of av technology did not lead

(14:47):
them to extinction. They evolved as well.

Speaker 4 (14:50):
Yeah, they had to. I mean, a journalist in, say, Hong Kong now, during the protests they're experiencing, can immediately post video and help out those news providers, shedding light on events that might have otherwise been relegated to the shadows. Reporters in Belgium or Bolivia can record political announcements at the capitol live and stream them to millions

(15:13):
of viewers or followers.

Speaker 2 (15:15):
The president can, you know, before having important meetings with
world leaders decide to talk for forty five minutes because
the cameras are live.

Speaker 1 (15:24):
Mm, and other world leaders can be caught doing their own rendition of Mean Girls during significant geopolitical events.
All in all, this is impressive stuff. Right live video
has the potential to bring our incredibly litigious and bellicose
species a little bit closer together. If everyone can see

(15:46):
something happening plain as day at the same time while
it's happening, what on earth is there left to argue about?
Seeing is believing?

Speaker 2 (15:55):
Right?

Speaker 1 (15:57):
No? What?

Speaker 2 (15:59):
But we'll talk about that right after a word from our splatzer.

Speaker 4 (16:02):
Wow.

Speaker 1 (16:05):
This episode of stuff they don't want you to know
is brought to you by Express VPN.

Speaker 2 (16:09):
Recently, over one hundred million people had their personal information
stolen in a major data breach. We're talking social security numbers,
contact details, credit scores and more, all taken from Capital
One customers. Oh that's me, and.

Speaker 1 (16:23):
That means there's a good chance you were personally affected. Folks.
These kinds of attacks are getting more frequent and more severe.
It's not just Capital One. Equifax, Facebook, eBay, Uber, PlayStation,
and Yahoo. They have all leaked passwords, credit card info,
and bank numbers belonging to billions of users.

Speaker 2 (16:42):
That's why we use ExpressVPN to safeguard our personal data online.

Speaker 1 (16:47):
According to recent reports, hackers can make up to one
thousand dollars when they sell someone's personal information on the
dark web, making people like us easy lucrative targets.

Speaker 2 (16:57):
ExpressVPN is an app, and the app connects with just
one click. It's lightning fast, and the best part is
ExpressVPN costs less than seven bucks a month to use.

Speaker 1 (17:07):
And Listen, Honestly, if a breach can happen to Capital One,
it can easily happen to anyone else. So protect yourself
with ExpressVPN, the number one VPN rated by TechRadar, CNET, The Verge, and countless others.

Speaker 2 (17:21):
You can use our special link Express vpn dot com
slash conspiracy right now to arm yourself with an extra
three months of ExpressVPN for free.

Speaker 1 (17:31):
Support the show and keep your information safe. That's ExpressVPN
dot com slash conspiracy for an extra three months free.
Here's where it gets crazy.

Speaker 2 (17:49):
Hey, Remember when we said that technology, that stuff that
lets us do all the things that we like to do.
It's still evolving. Remember we talked about that. Uh, well,
we're on the precipice of this other thing, this this
new shift in video technology, and it is not I

(18:09):
don't see good things happening from it. It's gonna be
like amazing for those dank memes, but for everything else.

Speaker 1 (18:18):
Oh boy, Yeah, it's an inherently conspiratorial shift. Today we
are talking about the rise of the deep fake, which
sounds hyperbolic but very much is not. What is a
deep fake? Somebody might be saying, well, we're glad you asked.
Our story starts with a fellow named Ian J. Goodfellow.

(18:40):
I think that's really good. But while you may not
have heard of his name before, while you may be
unfamiliar with the concept of deep fakes, you have almost
certainly already encountered some version of Ian Goodfellow's work. He
works extensively in areas of what we call machine learning.
Anybody who remembers our earlier conversation about machine consciousness with

(19:04):
a friend of the show, Damian Patrick Williams is probably
familiar with these edges of science, the bleeding edge of
artificial intelligence. Here's what Ian did. Essentially, he taught algorithms
to play games with each other, specifically to kind of
play game theory, which is still incredibly strange and important.

Speaker 2 (19:22):
Well, yeah, let's talk about what deep learning is really,
because we're talking about machine learning just essentially teaching versions
of artificial intelligence how to learn. And in this case,
this is a sub field of machine learning we're going
to talk about called deep learning. And it's fascinating stuff.

(19:42):
It is the it's also the stuff of nightmares. It's
the stuff of our eventual future, and there's no way
around it. But it's the concept of focusing on algorithms that are inspired by the way the human brain functions. And they are called artificial neural networks.
And if you're watching the final season of Silicon Valley,

(20:05):
you're getting kind of a crash course in that right
now as the well, as we're recording this, the penultimate
episode just came out. But anyway, it's it's really fascinating stuff.
And Goodfellow, our friend Ian J. Goodfellow actually wrote a
book on this subject.

Speaker 1 (20:20):
Yeah, a book called... his book about deep learning is called Deep Learning.

Speaker 2 (20:24):
Hey come, yeah, I know it's a coining.

Speaker 1 (20:26):
And he's a busy guy. So he explains deep learning
this way. He thinks of it in terms of a
hierarchy of concepts, and he says having a hierarchy of
concepts allows a computer to learn these complicated concepts by
building them out of simpler ones, which is what we
have spent a lot of time doing at How Stuff Works.

Speaker 2 (20:46):
Right, with this episode even. We do.

Speaker 1 (20:48):
Yeah, that's how, because that's how our brains often approach things.
We build toward that gestalt. So he says, if we
draw a graph showing how these concepts are built upon
each other, we see that the graph is just visually,
it's deep. It has a ton of layers. And he says,
for this reason, we call this approach to artificial intelligence

(21:09):
deep learning. In plain English. What that means is that
we as a species have programs that can work more
and more like an organic brain, and artificial neural network
is meant to function more and more like a brain.
And he has one very well known invention. This is

(21:32):
the engine behind his work that you have already seen.
Even if you've never heard of a deep fake, you've
never heard of Ian Goodfellow, and you've never heard of
machine or deep learning.
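
For anyone who wants to see the shape of that "hierarchy of concepts" idea in code, here is a minimal sketch in Python with PyTorch. The layer sizes and the comments about what each layer might pick up are illustrative assumptions for the sake of the example, not anything from Goodfellow's book or the show.

    import torch.nn as nn

    # Each layer builds slightly more abstract concepts out of the simpler
    # ones beneath it, stacking up into a "deep" graph of concepts.
    deep_net = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),   # raw pixel values -> simple edges and blobs
        nn.Linear(256, 128), nn.ReLU(),   # edges -> small parts and textures
        nn.Linear(128, 64), nn.ReLU(),    # parts -> whole-object concepts
        nn.Linear(64, 10),                # concepts -> a final classification
    )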

Speaker 4 (21:41):
That's right, and it's his most well-known invention, innovation. It's something that's called a generative adversarial network, or GAN, and GANs enable algorithms to move beyond classifying data into actually generating or creating images.

Speaker 2 (22:04):
Oh yes. Now maybe the gears are turning in your mind. Maybe I have seen something like this, is it...

Speaker 4 (22:10):
Like the deep dream kind of stuff really Google's deep Dream, right, yeah,
where it would uh sort of take a like a
face or an image and then it would pull things
from elsewhere on the Internet that it's sort of matched
up to those textures or you know, spaces like to
fill with other images of say like dogs or slugs

(22:32):
or what have you. And then people started animating them
and they became these like hellscape kind of psychedelic nightmare images. Yeah,
very Dalí, very Escher-y, just, like, super trippy really, for
lack of a better term.

Speaker 1 (22:46):
Yeah, you're you're on the right track there, because they
are indeed related. In terms of the science, Deep Dream makes use of something called a convolutional neural network, or a ConvNet, or CNN, which could
be a little confusing. So they're very similar approaches.

Speaker 2 (23:05):
At basis.

Speaker 1 (23:06):
So these generative adversarial networks are trying to trick each other.
They can move beyond classifying data into generating or creating data,
generating or creating images. So these two networks, these two
generative adversarial networks they attempt to fool each other into

(23:27):
thinking that a given image is real. And using as little as one image, from that back and forth between what's called the generative and the discriminative sides of this thing, just using one image, they can create a video clip
of a person, so they can animate a picture, and
they also you can also take it a step further

(23:50):
and have that animation speaking and what sounds like that
person or that image's voice. Samsung's AI Center released a
report on the science behind GANs, and they said such
an approach is able to learn highly realistic and personalized
talking head models of new people and even portrait paintings.
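
For the curious, here is a minimal sketch of that back-and-forth in Python with PyTorch: a toy GAN training step in which the discriminative side learns to spot fakes and the generative side learns to fool it. The network shapes, the flattened 64x64 image size, and the learning rates are assumptions made purely for illustration, not Samsung's or Goodfellow's actual setup.

    import torch
    import torch.nn as nn

    latent_dim = 100

    # The generative side: turns random noise into a fake image
    # (flattened to a 64*64 vector to keep the toy example small).
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 64 * 64), nn.Tanh(),
    )

    # The discriminative side: outputs its guess that an image is real.
    discriminator = nn.Sequential(
        nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    bce = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_images):
        # real_images: a (batch, 64*64) tensor of real examples.
        batch = real_images.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1. The discriminator practices telling real images from generated ones.
        fakes = generator(torch.randn(batch, latent_dim)).detach()
        d_loss = bce(discriminator(real_images), real_labels) + \
                 bce(discriminator(fakes), fake_labels)
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # 2. The generator practices fooling the discriminator into labeling
        #    its fakes as real -- the two networks trying to trick each other.
        g_loss = bce(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
        return d_loss.item(), g_loss.item()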

Speaker 2 (24:10):
It's just people, new people.

Speaker 1 (24:12):
Created people, generated humans, and they look great.

Speaker 2 (24:16):
They really do.

Speaker 1 (24:17):
I'll say it, some of them are attractive. If you
didn't know, and you just saw a picture of one
of these generated images on your dating app of choice, yeah,
there are a couple you would probably swipe right on.

Speaker 2 (24:33):
I wouldn't know which way to swipe because I don't
understand those things, but I totally get what you're saying.

Speaker 1 (24:38):
So and it's startling because now even now you can
take tests where you attempt to identify a real person
from a generated talking head or image, and it's tough. Yeah,
we're getting closer and closer to traversing the Uncanny Valley.

Speaker 2 (24:53):
Oh yeah, well, and it really is frustrating that it's not easier, because for so long there... and I think you, Noel, you've referenced Final Fantasy cut scenes or something earlier on. And I remember when that Final Fantasy movie came out a long time ago. It was almost too good at that point. Well it was, yeah,
it was all computer generated and it looked fantastic. It

(25:15):
was a feature film. I remember thinking how incredible it
was seeing that, and then seeing something like Avatar, where
you've got these, I forget what they're called, the Na'vi, that don't look human necessarily, but they look
real enough, right, And then when you get to something
like this and you're looking at these just the portraits,

(25:37):
even it feels it feels pretty scary, not being able
to trust your eyes to know if something is real
or not, even though it's generated. Either way, if it
was an actual image of a person and it was
taken and converted into data, you know, ones and zeros, then
displayed on your screen, that's not a real person necessarily

(25:58):
you're looking at the representation, but just knowing that a
computer can fool you that hard is pretty.

Speaker 1 (26:05):
Cruel and very easily.

Speaker 4 (26:06):
Yeah, that's kind of why I wonder too, why you know,
to be fair, a lot of the de aging stuff
in movies is quite good, so much better than it's
ever been. But there were a couple spots in The
Irishman where I was like, really, like, I thought the
technology was better than this, and it is, But I
guess it's different when you're making a younger version that

(26:27):
has to then coincide with written lines and map up
to you know, an actor's face and you know, look
believable in terms of the way the mouth moves and
acting ticks and all of that stuff that's specific to
a scene rather than a pre existing video of some kind, right.

Speaker 1 (26:42):
And it also goes down to how much existing footage
they may be able to obtain of when the person
was that age. So that's one of the reasons why
it probably works best with celebrities and political figures because
there's just, you know, not everybody did Taxi Driver as a kid, right. So the stuff with De Niro specifically, I

(27:04):
would argue he just always looked vaguely in his late
thirties to early forties and somewhat perturbed, like, right, was
it a short or is he mad about a relationship?
You know what I mean? His face has a story.

Speaker 2 (27:18):
It tells. I want to bring something up here quickly. With the high-priced effects that were going on in a movie like The Irishman, with these aging effects, these are designed to be displayed in the highest resolution possible. So you're talking 4K, 1080p, something
like that. It's designed to do that, right, it's on

(27:40):
a streaming service. They don't know what you're going to
play it on what screen, but it's got to be
high res, right, And what we're saying is it's fairly
easy to discern that something is going on here at
that high resolution. But what if it's a much lower resolution,
more grainy, more degraded, like a GIF, like a small YouTube video that isn't maybe 1080p or something,

(28:03):
or a cell phone camera video that then gets uploaded and degraded.
It changes our ability to discern some of these things.

Speaker 4 (28:12):
And we'll get to this, but again, all of this
comes down to these algorithms, which, as it turns out,
require an insane amount of computing power. Yes, yeah, even
to do in these low res forms right for now,
at least exactly right. So think of it this way
now without too much of a hassle. As you are
listening to today's episode, you can get this technology online.

(28:38):
You can create videos that are nearly impossible to identify
as quote unquote fake. For a fun example, we just
want to keep it innocuous before we have to strap
in and go down this rabbit hole. For fun example,
Let's do two fun examples. Let's say you have a
friend who knows you love Marvel movies, and so for

(29:00):
you know, your birthday, your quinceañera or whatever, they
make a deep fake video where it looks like you're
in a Marvel film. The Avengers all assemble, and holy smokes,
there's Derek.

Speaker 2 (29:10):
That's fun, all right.

Speaker 1 (29:11):
That seems fun. What a thoughtful kind gift. Or to
make it a little more applicable to our show. One
thing that would be a fantastic deep fake present for
our very own super producer, Paul Mission Control Decant,
would be to take an Applebee's commercial and just put
him in it. Oh man, And so he's the person there,
you know, gesturing in amazement as the ribs and the

(29:36):
bottomless jalapeño poppers or whatever come out, and we even
have him say the tagline in his voice. That is
so much fun. But that is not the only use
of this technology. Make no mistake, the lid of Pandora's jar,
and it was a jar, is unscrewed. This technology is
no longer theoretical. It is very real, and it is

(29:57):
immensely dangerous. Why we'll tell you after a word from our.

Speaker 2 (30:01):
Sponsor, psych We're not gonna tell you.

Speaker 1 (30:11):
It's not even us.

Speaker 2 (30:14):
Oh what you thought this was conspiracy stuff. We're just
your car talking to you.

Speaker 1 (30:19):
This is my real voice, and... Robert De Niro. So, terrible De Niro impressions aside, there are applications of deep fakes which should trouble every single person or bot listening to this show.
I did not initially think of the first application, which
was dumb and naive of me. Apparently one of the

(30:41):
first things people tried to do when GAN technology got out of its research R and D hidey-hole was to apply it to pornography, and
pornography drives a lot of technology. I mean, arguably, there's
a very good case to be made that the reason
VHS won out over Beta Max was because the porn

(31:01):
studios went with VHS.

Speaker 2 (31:03):
Yeah. I don't want to be crass or get too
much away here, but I do remember far before this
technology was available, when photoshopping was really the only option,
there were sites, i want to say, early on in
the Internet where it was just sites dedicated to celebrity pornography,
where it was just photoshopped images on purpose. That would

(31:26):
be pretty crazy applying it to video with this new technology.

Speaker 1 (31:33):
Yikes. Black Mirror-esque, right? Like, imagine you are a
creep with a crush on someone. It could be a colleague,
a classmate, a celebrity, you know what, it could be
an historical figure. Maybe King Tut just really does it
for you for some reason.

Speaker 2 (31:49):
Actual King Tut? Yeah, the actual one.

Speaker 1 (31:51):
We know what the guy actually.

Speaker 2 (31:52):
Looks like now, and not Steve Martin.

Speaker 1 (31:54):
Not Steve Martin. He's one of the best banjo players
in the.

Speaker 2 (31:57):
World, and he really does do it for me at least,
but in this case, okay.

Speaker 1 (32:02):
So, the Steve Martin example then. Now, there's nothing wrong with having a crush. It becomes creepy, however, if you use deep fake technology and put Steve Martin's face on
the face of someone in a pornographic video, especially when
the video will genuinely look real and it will sound

(32:23):
like them. This is not science fiction. This is the
idea behind a website with the immensely creative name Deep Nudes.

Speaker 2 (32:31):
Now, boy, Deep Nudes.

Speaker 1 (32:33):
Did exactly what we just described here. Luckily, the founder
eventually canceled the site's launch and they had a public
statement about it.

Speaker 4 (32:44):
Yeah, and the founder actually eventually canceled launching the site,
noting that quote the probability that people will misuse it
is too high. Oh. I never would never would have
thought that.

Speaker 2 (32:57):
What somebody's gonna misuse this.

Speaker 1 (33:04):
There's a great Mitchell and Webb sketch about, like, an evil scientist, where it's like, I built the ultra-violence laser to save the world, not destroy it. Nice, I'll
send it to you guys. Maybe we could post it
on here's where it gets crazy. But that's just one use.
That's the immediate one. And again I don't know about you, guys,
but I felt naive for not immediately assuming that's what

(33:27):
would happen.

Speaker 2 (33:28):
I yeah, I think it's pretty obvious. It's just hey, cheers,
guys for not always thinking about pornography.

Speaker 1 (33:34):
I guess.

Speaker 2 (33:35):
So.

Speaker 1 (33:35):
Yeah, it just felt awkward when I was talking to
some contacts about this off air and they looked at
me like I was from you know, a different universe
or time period. When they said, they're like, yeah, porn, Ben,
it's that's why people have technology, is to get better porn,

(33:55):
which I don't know. I don't know whether that's completely true,
but it was a weird night. There's another use of
deep fakes that is more apparent and has the potential.
So like this fake pornography or this fake rendition of
people in these intimate times, it can ruin an individual's life.

(34:16):
It could be blackmail, and it could be blackmail. But
there's another version of a deep fake, a weaponized deep fake,
that could ruin the lives of hundreds of thousands or
millions of people.

Speaker 4 (34:28):
Yeah, because we can't forget that a lot of countries,
including the US, are very heavily entrenched in this notion of
asymmetrical warfare called cyber war.

Speaker 2 (34:41):
Awkwardly, yeah, yeah, a little bit.

Speaker 4 (34:43):
Many world leaders have extensive, like you said, Ben, video
footage out there of themselves at functions, giving speeches, events
and the like. There's more than enough for Gan to
work from here.

Speaker 1 (34:59):
Yeah, so let's say, who can we use as an example?
All right, Matt? He did Steve Berner earlier. Yeah, okay, okay.
So let's say, let's say, Matt, you and Noel are
the leaders of these different opposing countries. It's been a
lot of tension for a while. Okay, So if there

(35:20):
was someone either on one of your sides or a
third party, let's say I'm a country that just messes
with other countries for fun, right, I'm Russia, I'm the US,
I'm one of the hits, right, And I say, you
know what's going to be great. I can't take on
Nolandia or the Republic of Frederick with conventional military might.

(35:43):
So I'm going to turn them, turn them against each other.
I'm going to foment instability, or I'm going to mess
with their election by just going on Facebook. I don't
need to launch an ICBM. I'm just going to go
on Facebook and Twitter. I'm going to make fake videos
of leaders of both countries, Nolandia and the Republic of Frederick,
and I'm going to have him say things that they
would never actually say. This sounds like a we're building

(36:07):
toward a comedy bit, but there's a real, real fake video,
multiple levels of Nancy Pelosi, a politician here.

Speaker 2 (36:15):
In the US.

Speaker 1 (36:16):
And have you guys seen this video? I have not
seen this one, so it's it came out with a
nice side by side view, but the deep fake video
was also propagated on its own. It's very interesting to
watch the difference between the two. This gives us a
chance to rewrite history in a disturbingly Orwellian way.

(36:36):
What happens if, for example, let's see, Noel
makes an historic speech in Nolandia that triggers a new
golden age for the country, and someone destroys the original
copies of this profound speech and replaces it with a
deep fake. And then the last living generation, the last

(37:01):
people who were there when the original speech was propagated,
they can keep its memory alive, but when they die,
history has in a very real way changed. Now.

Speaker 4 (37:11):
I don't know if this counts or not, but uh,
there was a brief, brief period recently where there was
some of what I would consider deep fake GIFs that were
making the rounds. There's like Obama on a skateboard, there's
one of the Pope doing a magic trick where he
like pulls the the tablecloth out from under some kind
of votives like at a like on an event like

(37:32):
a live CNN stream.

Speaker 1 (37:33):
Which I choose to I know it's fake, but I'm
just gonna believe in it.

Speaker 2 (37:37):
But they're so good.

Speaker 4 (37:39):
Yeah, I wanted to believe in it as well, especially
the Obama on a skateboard one.

Speaker 2 (37:43):
But does that kind of fall, does that sort of fall under this category?

Speaker 1 (37:47):
Yeah, yeah, that's a that's a less dangerous version, you know,
because that's fun. See seeing the Pope pull off a
dope magic trick is not gonna like foment instability in
South America. But going back to our example, let's let's assume, uh,
the Republic of Frederick and Nolandia are kind of
beefed up to the level of like Pakistan and India

(38:09):
or Israel and Palestine, and all of a sudden, in
no traceable way, the public of both countries gets hit
with this barrage of videos that seem to, you know, have the benevolent dictator Matt Frederick and the Prime Minister of Nolandia, Noel Brown,

(38:31):
seems to.

Speaker 2 (38:31):
He's also a dictator by the way, just officially not in
his title.

Speaker 1 (38:35):
Okay, so the, uh, let's say these guys have these videos wherein each of them is announcing their intention to deploy nuclear weapons and mop the adjacent country clean, let no stone stand on another. How
would you know if you're the audience of this other country,
how would you know whether these were real or fake?

(38:58):
How would you react? How much time would you have
if you think that you literally saw the leader of
the country that you previously went to war with saying
that's it, we're launching in five? Well, it's...

Speaker 2 (39:09):
It's really scary because the way that would function, it
would be posted somewhere and it would become viral on
social media so greatly that's the only way that it
would propagate, But it would propagate likely. And I'm just
gonna say personally, what I would do in that situation
is go to the standby that we discussed at the top.

(39:31):
I would probably turn a TV on somewhere just to
check and see if somebody is talking about it seriously,
you know, on one of the major outlets. The problem is,
what if you fool them. This happens all the time,
even with quote unquote fake news.

Speaker 4 (39:47):
Everyone's so into getting the scoop, with the fast cycle, the turnaround of news, that everyone wants to be
the first, so they tend to not vet things like
they used to, and that's how you often get this
misreporting of election results, et cetera. You know, and this
is just perfect example. If someone's like, here's a good example.
There is a deep fake Donald Trump pee tape that's

(40:09):
out there. What if that had been pushed out, you know,
by, let's say, you know, some network opposing the Trump administration? That didn't happen. Now that we know
a little bit more about deep fakes being a thing,
maybe there's some caution. No one wants to be the
news agency that does that. But with an inflammatory enough thing,

(40:32):
maybe you don't have time to think about it.

Speaker 2 (40:35):
Well see, yeah, here's here's what I wanted to bring
up in the examples we're talking about with you know,
a speech or something that exists. So you so, in theory,
you would take the base level video of the speech
and maybe maybe that audio. Then you'd manipulate the audio
and then the face right or something to where the

(40:57):
the words being spoken are different on that video. I
think the scariest ones, the scariest versions of this are
where it's supposed to be a hidden camera or something,
where it's, like I was saying before, it's so degraded
it's difficult to truly make out what's going on. But
you can tell that, oh, that's definitely Billy Eichner. And

(41:19):
where you can, you know your brain is at least
telling you that, and it sounds like Billy Eichner's voice
saying things.

Speaker 1 (41:24):
Who's Billy Eichner?

Speaker 2 (41:26):
Billy on the street. Billy on the street, I mean,
and he.

Speaker 4 (41:29):
Was in Parks and Rack. He's the really like wound
up guy from Eagleton that ends up working in the
office when they combine.

Speaker 1 (41:37):
I'm so glad you guys are here. Like, I recognize
maybe three out of five Billy Eichners.

Speaker 4 (41:43):
He's the guy that accosts people on the street with a microphone and just yells celebrity names at them and stuff.

Speaker 1 (41:47):
I thought that was Will Ferrell doing Harry Caray. Yeah,
sort of, okay, but I'm sorry, I'm derailing this. Okay.

Speaker 2 (41:55):
The point is, there's a human being that you know, that is famous for one reason, and is possibly powerful, and their voice is being manipulated. But the
video itself isn't something that you can reference to like
a speech or to a movie or something that you
remember you can verify with. It looks like a brand
new video, but you can still tell that it's that person.

(42:18):
Their mouth is moving and the words coming out are
something awful.

Speaker 1 (42:23):
I feel like we're putting this in an accurate, humorous way.
We do have to emphasize this is a very scary thing. Matt.
You've built a beautiful example that leads us to another
nefarious use of deep fakes, which will happen. It absolutely
will happen, false accusations.

Speaker 2 (42:42):
Ah, Yes, exactly so.

Speaker 1 (42:44):
According to Andrea Hickerson, who is the director of the
School of Journalism and Mass Communications at the University of
South Carolina, this is a problem because at the most
basic level, deep fakes are lies disguised to look like truth.
And Hickerson says, if we take them as truth
or evidence, we can easily make false conclusions with potentially

(43:05):
disastrous consequences. So if you want to ruin someone's life,
you want to smear a political opponent, or you just
don't really like your neighbor, you don't like their their vibe.
It just irks you. Then you could with this, with
this capacity in the future, you would be able to

(43:28):
make something where it appears that they're saying something terrible,
where they're like, yeah, I don't know, kicking puppies, burning buildings, just whatever, to feel something, you know what I mean. Also, I rent scooters and I leave them in the middle of the sidewalk because I'm that guy. I'm
the one, and then it would look real. But if

(43:48):
we go back to Orwell, this becomes even more dangerous because we have to consider activism and heavy-handed state actors. It's already dangerous to be an activist. To Noel's example about Hong Kong, right, people are in danger, people are dying. They're fighting an authoritarian, massive state. So currently, state, corporate, and

(44:10):
criminal actors seeking to silence dissidents all use the ordinary
tool kit of suppression, all the time tested stuff, all
the smooth jazz, all the hits, threats, violence, kidnapping, smear campaigns, incarceration,
disappearing and of course assassination the breakout single of suppression. Right,

(44:30):
But soon even as we record this, state actors will
have a new and powerful tool. If, for instance, you
are protesting something and you are the leader, the face
of activism, and say Hong Kong or one of the
many other places around the world where protests are active,
the authorities or the opposition would be able to make
videos in which you are on camera disagreeing with the

(44:52):
status quo, saying, hey, they were right all along. I
had a change of heart, and I want to confess
that my motives were not pure. I was paid by
someone else to do this, so I apologize. I'm turning
myself in. And then the actual you finds out about
this when your confession is aired on the news and

(45:13):
people start contacting you.

Speaker 2 (45:15):
Yeah, that is an intense hypothetical situation currently.

Speaker 1 (45:21):
It's gonna happen.

Speaker 2 (45:22):
I know, I know right now. It's just it is possible.

Speaker 4 (45:26):
Well, that leads us to a lot of things that
need to happen or potentially are going to happen. And
I want to lead with this. It's really interesting in
the law now, and lawyers out there correct me if
I'm oversimplifying this, But video evidence isn't like the end
all be all already right, it has to fit a
couple of requirements in order to be admissible in the

(45:48):
first place, which is relevance and authenticity.

Speaker 2 (45:52):
Yeah, with that chain of custody.

Speaker 4 (45:54):
Chain of custody. There are a lot of things that go into that, and we're going to get into that in a minute, and also potentially what might have to happen as this technology gets better and better. But video evidence can be considered hearsay if there isn't someone to corroborate it. If it's the only evidence, that's not a great case without, you know, an eyewitness. You know, I

(46:14):
was just saying, in the good crime shows, an eyeball witness
is really your best bet to getting a conviction. Video
and video alone, if that's all you got, especially if
it's like grainy security cam footage, a lawyer could say
that is not a reasonable representation of the subject being
displayed or in question, et cetera. But what this comes
down to is like, when is the law going to

(46:35):
catch up to this new technology in terms of that
chain of custody, Because that's the thing. If we can't
believe our eyes, like we said at the top of
the show, you said, Ben seeing isn't really believing.

Speaker 2 (46:46):
How do we authenticate this stuff?

Speaker 4 (46:49):
It's going to have to be that chain of custody
that's kept under lock and key, so we know that
the footage we're seeing was captured and then disseminated with
nothing in between.

Speaker 1 (47:00):
Right. Just video is not enough. That's why Bigfoot will
never see a day in jail ever. And he needs
to be.

Speaker 2 (47:07):
There, he really does. But you know, here's the other thing.
We can't trust our memories either. We're talking about in court.
Video without corroboration from a witness doesn't work, and the
witness is the best way, the eyewitness. But the eyewitness
is very unreliable. It's perhaps the most unreliable thing that
exists as far as evidence goes. So what the heck

(47:27):
do we trust when it comes to evidence of something
that actually happened.

Speaker 1 (47:31):
The approach is that of accretion, of aggregation, right? You have to have multiple, like Noel said, you got to have multiple things so you can, you can triangulate a little bit. Say, yeah, memory's imperfect. We can't
trust video alone. But we have a video, and we
have a witness, and then maybe we have some other
forensic evidence, like a gun casing, scenes where, you know,

(47:54):
spent shells were found and they match the gun.

Speaker 2 (47:57):
I completely agree, and you're right. What worries me is
that what is stopping, let's say, world leaders and powerful people who do wrong things, where an actual video is taken, maybe a surveillance cam video is taken of some wrongdoing occurring. Let's say something as high-stakes as an assassination, something as dire as that, occurs on camera,

(48:21):
but there are no witnesses. But for sure, on camera
there is a world leader shooting somebody in the head.
What is stopping them from saying, oh, that's a deep
fake video?

Speaker 1 (48:33):
Obviously, I'm so glad you said that, because that's the
other side of the coin, right, If you are caught
on video doing something despicable, doing something illegal, whatever, you
can just say I have enemies, this was a deep fake.
You know, my ex hates me, or I am I
am doing work for I don't know whatever, a union

(48:54):
or something or an NGO. So there's not a very
good way to refute that other than having to resort
to as much other corroborating evidence as you can. And
then again you have to rely on, like you said,
those eyeball witnesses, whose memories at times can be very
financially motivated. Wow.

Speaker 2 (49:14):
So so in the end, we're relying on law, which
is such an old concept, right.

Speaker 1 (49:19):
I think this is, I think this is where we're going, Noel.
I know this is something we've talked about on this
show so often because it's so important. Technology will always
outpace legislation. If we finally get around to making a
law about something, the horse has already left the barn,
the you know, the detainee has already pulled the black

(49:40):
hood off and is well on their way to international waters.
In the summer of twenty nineteen, the US House of Representatives
Intelligence Committee sent a letter. They didn't pass anything. They
sent a letter to the big socials Twitter, Facebook, Google,
asking them how these sites planned to fight against deep fakes, particularly in the twenty twenty election. And this

(50:05):
all came about because remember we mentioned that deep fake
video of Nancy Pelosi, House speaker here in the US.
The current president, President Donald Trump, didn't just co sign
that deep fake video. He retweeted it.

Speaker 4 (50:20):
Wait a minute, when you say he wasn't just like saying, hey,
watch out for this fake video of my friend Nancy Pelosi. No, no, no,
he put it out there like he was implying that
it was real. Yeah, he's a savage on Twitter too,
as anybody knows.

Speaker 1 (50:34):
On Twitter. As a matter of fact, if you look
at his record, he disagrees with himself extensively.

Speaker 2 (50:38):
Well, it's just it's one of those things where it
really shows how convincing these things can be. I think
back to I think we've mentioned it on this show before,
but the Jordan Peele deep fake video of President Obama. Oh yeah, yeah, yeah, where it looks like President Barack Obama

(51:00):
sitting at a chair in the Oval office somewhere and
he's just saying weird things and it sounds pretty close
to him, but it's actually just Jordan Peele doing a
voice like a voiceover in the character that he's played
before on Key & Peele, and it is, it's pretty disturbing,
and I think it speaks to how well Jordan's impression is,

(51:25):
or how good his impression is, but it also speaks
to the ability, this ability of matching that face, turning
it into or or changing the way the president's mouth
is moving to match the words. And if you imagine
the technology that is coming out right now where you
can take five seconds of any voice recording and then

(51:47):
you can recreate that voice saying anything you want it to. Again,
this is a small research project coming out of a
couple universities, but you could if you combined that voice
changing technology with the face changing and manipulating technology, you
get to a point where it will truly be we

(52:08):
will be unable to tell.

Speaker 1 (52:11):
That's right, and we know that therefore there's a ticking
time bomb on this, right. So the presidential retweet of
that deep fake video is what inspired the House representatives
to send that letter. But this followed a request from
Congress that occurred earlier this year in January, where they

(52:33):
asked the DNI, the Director of National Intelligence, to
give a formal report on deep fake technology. It was basically,
explain it to us so that we sound like we
know what's going on with our constituents. And also, of
course there's more than a little self preservation involved because
these are members of Congress, right, so we know that

(52:54):
we have to have legislation involved. But we also know
there's a big chance that it's just not going to
be enough. It's too easy to do this, it's too convenient,
it's too powerful, right.

Speaker 4 (53:07):
Really quick, in our industry as podcasters, there's an audio equivalent of this that's kind of on the horizon.

Speaker 1 (53:15):
Frankenbiting's next evolution, very much so.

Speaker 4 (53:19):
I found out about this through a third party. I
can't name names. I don't think it's technology that's really
on the market yet. But literally, an algorithm that could
sweep through our catalog. Me you, Ben and Matt run
an algorithm on our catalog, and then you could feed
it lines and it'll it'll approximate our voices. And I've

(53:41):
heard it in action and it does a very convincing job.
And on the one hand, we could say, oh cool,
we don't have to read ads anymore. But on the
other hand, we could say, oh no, we don't have
jobs anymore, we don't need podcasting. I'm just saying, like
the the the implications of stuff like this, it's always
more far reaching than you would originally think.

Speaker 2 (54:01):
Well, you know, that's why Dan is ending Harmontown is
because he's just gonna take all the voices and bot them together and make a whole new podcast that he can
just write.

Speaker 1 (54:11):
Did I tell you I watched the last episode?

Speaker 2 (54:14):
I yeah, it was No, you didn't tell me, But
I'm glad that you.

Speaker 1 (54:18):
Did touch it too. That was cool. Yeah, So I
mean end of an era for sure. But then now
we have the scary proposition, and I think i'm I
think I'm familiar with what you're talking about, Noel. We
have the scary proposition where if someone has enough audio
footage to pull from, then Harmontown would never have to end.

(54:38):
It would just get really weird because the technology would still need to evolve.

Speaker 2 (54:42):
I'm telling you that's what he's doing, guys, That's what
I'm saying. Like, he's gonna make Rob Schrab say whatever
he wants. It's gonna be amazing.

Speaker 1 (54:49):
Well, I'll tune in. I'll tune in. If you guys
are listening, let us know we would. We would love
to hear it, and we applaud you for pioneering into
this brave, new strange world. Government institutions like our favorite
mad scientists at DARPA, and researchers at colleges like you had mentioned, Matt, Carnegie Mellon, the University of Washington, Stanford, and so on,

(55:10):
are also experimenting with deep fake technology. They're experimenting in
two different but very related paths. One they want to
figure out how to use it, how to make it better,
and two while doing that, they want to figure out
how to stop it. So you'd think these goals are kind
of contradictory unless they build in some kind of equivalent
of a governor switch, you know, like for anyone unfamiliar

(55:34):
with automobiles, some engines have a specific switch in there
that limits the performance.

Speaker 4 (55:41):
Of the engine. A governor, right. Like in long-distance trucks especially,
there's a company. I only know this because I knew
a guy that that was a long haul truck driver.
And there's a company called Swift that everyone jokes in
the industry stands for "Sure Wish I had a Fast Truck," because they are notorious for putting governors on there
that I guess it's a calculation you make as a

(56:02):
business in terms of the risk, you know, versus like
how fast can we get there versus how likely our
drivers are going to drive too fast and potentially put
themselves at risk and others, and you know, open up
for liability and lawsuits. But yeah, that's absolutely a thing.

Speaker 1 (56:15):
So Bluebird buses would be another great example, and they
have their own very interesting story.

Speaker 4 (56:21):
Bird scooters have digitally triggered governors, where like we have
an area in town called the BeltLine where it geo-targets that area, and when you ride them on our BeltLine, which is, like, almost like the High Line in New York, it's like a walking trail, but
you can ride the scooters. It does not allow you
to go past a certain speed that you could go
past elsewhere in the city.

Speaker 2 (56:39):
Man, I remember I had one on my ninety five
Dodge Caravan. Talked it out right around.

Speaker 1 (56:47):
So we're talking about the technological analog. That's why I'm
bringing up governor switches: because it seems like researching the
improvement of a given technology while also researching a way
to stymie that technology means that you would ultimately build
in something like that, and ideally you would want it
to be proprietary such that you would control it. So

(57:09):
in a way, these institutions are competing over who will control
the nature of truth and reality. That's it. That's
a big question, but it's, it's there.

Speaker 2 (57:19):
I think, I think we need some kind of tech
ethics board or, you know, an advisory committee. Like
tethics, like tech ethics. Yeah, tethics. We should do that.
We should get somebody to come through and, like, create
something that everybody has to sign.

Speaker 4 (57:34):
Doesn't that have to be self imposed by like these
tech companies, you know what I mean?

Speaker 2 (57:39):
Like, I don't know, if you get like a Gavin
Belson or somebody like that, you could probably get everybody
else on board, Like you know, a big name.

Speaker 1 (57:46):
You'd have to, uh, you'd have to ruin some people
as an example. You'd have to ruin some people in
the beginning, as an example.

Speaker 2 (57:53):
Probably, But but I think you could. You could push
it through.

Speaker 4 (57:56):
You just name dropped a fictional character. I just want
to point that out to... Wait, what? Yeah, just, huh,
just putting that out there. For any Silicon Valley fans
out there. I'm trying to put something in there, but
you literally made me do a double take. Wait a
minute, like, is he... who is he? How come I
haven't heard of him?

Speaker 1 (58:12):
Oh?

Speaker 4 (58:12):
Yeah, fiction.

Speaker 2 (58:13):
I'm trying to easter egg it.

Speaker 4 (58:14):
Sorry, dude, I shouldn't have said anything.

Speaker 2 (58:16):
He played along, but you played along perfectly. Oh thanks, man,
that was beautiful.

Speaker 4 (58:20):
I really screwed it all up. Sorry. Well, what, what's next?

Speaker 1 (58:23):
Really? I mean, I think, I think right now, regardless,
there are people who are pro deep fake technology. Uh,
there's, there's a very convincing argument, or I think
a very exciting argument, that this can fundamentally change what
we think of as film, as entertainment. Because imagine you
have the perfect role for an actor who has passed

(58:46):
and you want them to be in your film. With
this kind of technology, you can do it plausibly. And
that means that, coupled also with machine learning writing of fiction,
we could arrive at a time in our individual
or collective lives where a film is made without human

(59:08):
involvement on the creative end. How insane is that? Forward
to the future, I say. No, retreat.

Speaker 2 (59:15):
We know that other researchers are attempting to do that,
like combining different versions and types of neural networks
to write a screenplay, to do some voice-over work,
you know, to actually shoot video and edit video. It's
all happening. It's just a matter of time. I think
you're right on the money there, Ben. So...
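
As a loose illustration of the kind of model chaining Matt describes, the sketch below drafts a scene with an off-the-shelf text-generation model and would then hand the result to a speech step. Only the Hugging Face text-generation pipeline is a real library call; synthesize_speech is a hypothetical placeholder, and the prompt is made up for the example.

```python
# Sketch of chaining generative models: text model writes a scene,
# and a (placeholder) speech model would read it aloud.
from transformers import pipeline

# Real call: a small, freely available text-generation model.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "INT. PODCAST STUDIO - NIGHT\n\n"
    "MATT: So what happens when we can't trust video anymore?\n"
    "BEN:"
)
scene = generator(prompt, max_new_tokens=80, do_sample=True)[0]["generated_text"]
print(scene)


def synthesize_speech(text: str) -> bytes:
    # Hypothetical next link in the chain: a text-to-speech (or cloned-voice)
    # model would turn the generated script into audio. Not implemented here.
    raise NotImplementedError


# audio = synthesize_speech(scene)  # the voice-over step Matt alludes to
```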

Speaker 1 (59:36):
Where does this leave us for twenty twenty and beyond?
At a bit of a dilemma, because it's really a matter
of free expression versus true deception. According to Sharon Bradford Franklin,
the policy director for the Open Technology Institute out
of New America, deep fake videos threaten our civic

(59:56):
discourse and can cause serious reputational and psychic harm to
individuals, right. They also make it more challenging for platforms
to engage in responsible moderation of content, which is already
a huge problem, for anyone who's been paying attention to
the latest news about Facebook. While the public is understandably
calling for social media companies to develop techniques to detect

(01:00:19):
and prevent the spread of deep fakes, we must also
avoid establishing legal rules that push too far in the
opposite direction and pressure platforms to engage in censorship of
free expression online. So, to take your earlier argument, which
again, Matt, I love, where someone says, that's not me,
this is a deep fake: what happens if you were

(01:00:39):
posting something, let's say you're an activist or, yes, even
better, a whistleblower, and you propagate this film that is
indisputable proof of the shenanigans you said were occurring all
along, and then the people who have their hands on
the switches, their fingers on the faucet, they just say,
oh, that's a deep fake.

(01:01:00):
And they turn it off. We were always at war
with Eastasia. You know what I mean?

Speaker 2 (01:01:08):
It's harrowing when you put that at the end, Ben.
We always end with these things.

Speaker 1 (01:01:12):
Right. I mean, but where do you, where do you
think this is going? I would say, one person's opinion:
it's going to continue. It will not disappear.

Speaker 4 (01:01:21):
I will say, if you want to see a really
really jarringly good example of what this can look like,
it's a video of.

Speaker 2 (01:01:33):
Bill Hader.

Speaker 4 (01:01:34):
On... is it the Letterman Show? No, yeah, it was
an older show, an old clip. It was
an old clip. But it's Bill Hader. I believe it's
the Letterman Show, it might be Conan, and he's telling
the story about meeting Tom Cruise at a party of
some kind, it had to do with... no, that's not
what it was. He was in that movie Tropic Thunder
with Tom Cruise, and he sees, he

(01:01:55):
meets Tom Cruise at the premiere, and Bill Hader at this
point isn't, like, huge. He's on SNL. He's known for
doing impressions, et cetera. So of course when he's telling
the story, he's doing the Tom Cruise impression when he's
doing Tom Cruise's parts of the story, and he does a
great impression, he does great impressions, and in this
deepfake video, every time he starts to lapse
into the Tom Cruise voice, his face turns into Tom
(01:02:17):
Cruise, and it is, it is jarringly good, and it's
almost, it's borderline, like, it makes your brain kind of,
like, spasm

Speaker 2 (01:02:25):
A little bit because it's just so good.

Speaker 1 (01:02:27):
He also does a great impression of Arnold Schwarzenegger and
Al Pacino, I want to say, and you can see
the same technology at play. The Tom Cruise one, I
would say, is the best example because their faces are
already a little more similar.

Speaker 4 (01:02:40):
One, even though it goes a little further where he
starts to do a Seth Rogan impression later and then
his face becomes Seth Rogen. But the way it does
it morphs where it just like for a split second
it'll be Seth Rogen and then it's back to Bill Hayter.
But the Tom Cruise parts are shockingly good. It doesn't
look like mapping, it doesn't look like projection mapping, or

(01:03:00):
you know, it really very much is like he becomes
him, and everything, you can tell, it's like pulling
from something that Tom Cruise did where he was acting,
you know. But like you said, Ben, this video
that we're seeing doesn't really exist in the wild. It's
like a, you know, composite of all of this
stuff that's out there, right? Have you seen it, Matt? Oh, yeah,

(01:03:20):
I've seen it.
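
For context on how this kind of face morphing is typically built, the classic face-swap recipe popularized by early deepfake tools trains one shared encoder and one decoder per identity, then decodes person A's encoding with person B's decoder. The PyTorch sketch below shows that structure in miniature; the layer sizes, image resolution, and training step are illustrative assumptions, not the actual pipeline behind the Bill Hader clip.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder face-swap idea.
# Sizes are illustrative (64x64 face crops, small conv stacks).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)


encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training idea: reconstruct each identity's faces through the shared encoder.
face_a = torch.rand(1, 3, 64, 64)  # stand-in for a real aligned face crop
recon_a = decoder_a(encoder(face_a))
loss = nn.functional.mse_loss(recon_a, face_a)

# Swap at inference time: encode person A, decode with person B's decoder.
swapped = decoder_b(encoder(face_a))
```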

Speaker 2 (01:03:21):
And there's, there's one that's called the Deep Fake
Impressionist, or... there's another impersonator.

Speaker 4 (01:03:27):
It's an Instagram account that I follow as well that
does a lot of these.

Speaker 2 (01:03:31):
Oh okay, yeah, all I know is this one guy
that I've seen several videos of. I couldn't even tell
you his name right now, but he got together with somebody.
They made an entire video of nonstop versions of
this, and it really is convincing. Crazy.

Speaker 1 (01:03:45):
And this is just the beginning. People will look back
on this era as the halcyon days of discernible fakes,
at least if this continues. Yeah, just the time of
us knowing what was real and what was not, right?
Where do you see this technology going, folks? We want

(01:04:06):
to pass the torch to you. Let us know, and
on the way, send us your favorite deep fakes. You
can post them on our community page, Here's Where It
Gets Crazy, on Facebook. You can find us on Instagram,
you can find us on Twitter. You can also find
every episode we have ever done on our website. Stuff
they don't want you to know dot com And.

Speaker 2 (01:04:24):
We don't put this out there enough, but as the
holidays are coming around, remember we've got a TeePublic store.
So if you want to get some, some awesome Stuff
They Don't Want You to Know gifts, whatever it is,
mouse pads, do it. And, you know, look, candidly and up front,
we get a tiny, tiny, itty bitty percentage, but you
are supporting our show by doing this. The designs

(01:04:47):
are cool, they're really cool. I genuinely wear my new
red Stuff They Don't Want You to Know shirt all
the time.

Speaker 1 (01:04:54):
Ben Ben's old school.

Speaker 4 (01:04:56):
Yeah, that's awesome, the text version.

Speaker 1 (01:04:58):
The old Superman move, right?

Speaker 2 (01:05:00):
Look, we feel weird wearing our own things, so a
lot of times we do what Ben's doing. You put
something over it, but when you wear it, it's almost,
like, for me, like having the Superman thing on. You know,
I wear the Stuff They Don't Want You to Know
signature tighty-whities.

Speaker 1 (01:05:13):
What? I wear the Stuff They Don't Want You to Know Snuggie.
Do you?

Speaker 4 (01:05:17):
Actually have one custom made? Yes.

Speaker 2 (01:05:19):
Oh my god.

Speaker 4 (01:05:20):
I don't think TeePublic offers it, and I had
to outsource it.

Speaker 1 (01:05:23):
And more importantly, we have it on good authority. I don't
know if you guys get these texts too, but every
time somebody buys something from the store, I get a
text from Connell that just says, one more day.

Speaker 2 (01:05:36):
Yeah, I'm not on that chain, but I'd like to
get on there. That would be fun. Really, it would
really just fuel my neurosis.

Speaker 1 (01:05:51):
And that's our classic episode for this evening. We can't
wait to hear your thoughts.

Speaker 4 (01:05:56):
That's right, let us know what you think.

Speaker 2 (01:05:57):
You can reach us at the handle Conspiracy Stuff.

Speaker 4 (01:06:00):
We exist on Facebook, X, and YouTube. On Instagram and
TikTok, we're Conspiracy Stuff Show.

Speaker 2 (01:06:05):
If you want to call us dial one eight three
three STDWYTK. That's our voicemail system. You've got three minutes.
Give yourself a cool nickname and let us know if
we can use your name and message on the air.
If you got more to say than can fit in
that voicemail, why not instead send us a good old
fashioned email.

Speaker 1 (01:06:23):
We are the entities that read every single piece of
correspondence we receive. Be aware, yet not afraid. Sometimes the
void writes back. Conspiracy at iHeartRadio dot com.

Speaker 2 (01:06:53):
Stuff they don't want you to know is a production
of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app,
Apple Podcasts, or wherever you listen to your favorite shows.
