Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Sleepwalkers is a production of I Heart Radio and Unusual Productions. Hey,
how are you doing? Are you in a curious situation?
But not really? Where are you? Can you say where
you are? You sound like you can't talk. Yeah, okay,
(00:31):
I've been working on Sleepwalkers so much. Yeah, so I
know what you're thinking: Kara isn't normally that distracted. But
the truth is that wasn't her speaking. We're playing prerecorded
fakes of her voice to her cousin, created by AI.
Are you sleeping today? Wake you? I feel so tired.
(00:51):
I'm sorry. I wanted to talk to you about huh.
I stupidly left my wallet at home and I need
to order tickets to the screening before it sells out?
What's screening? I'm not? Are you you know? Could you
read me a card number real fast or text me
(01:13):
a pick up your card? I'll then you back. Are
you you talking to me? Your cousin Leslie? Right? Hello? Yeah?
I think we're crossing paths here. You're not answering me
in a weird You're answering me in a weird way.
(01:37):
So what was it like hearing Leslie respond to robot Kara? Well,
it reminded me that it's very easy to prank people
when they have no context for what you're doing. It
took her like a full minute to be like, okay, wait, that's not Kara. You know, it's like when I call my dad, and I'll say to him after a minute, Dad, are you playing internet chess? Well,
(01:59):
there's what they call tech brain, which is when someone's texting and talking to you, they're like... and that's sort of what it sounded like. She was like,
are you having another conversation? Has she forgiven you? She's
forgiven robo Kara. I'm still not off the hook. Sorry.
Fake audio and fake video can be a lot of
(02:19):
fun for pranks, and there are some life-changing, positive uses for synthetic media that we'll hear about later.
But just how much trouble could deep fakes get us into?
And as they get easier to make, how can we
keep them out of the hands of the wrong people.
I'm Oz Woloshyn. Welcome to Sleepwalkers. The plan originally was
(02:52):
to get cousin Leslie's credit card details. That failed. Yeah, Julian had the idea of having Kara AI ask for
credit card information basically to prove how easy it is
to get somebody's credit card information. You can imagine if
it was a little bit better and you were talking
to someone and they were like, oh my god, my grandchild,
you know, needs money. Oh my god, my grandchild is
(03:14):
in trouble, that they would say, Okay, hold on a minute,
I'll get you the credit card number, you know what
I mean. Yeah, And I think that's what's so frightening
about this technology. We're going to dive later into how
you synthesized your voice, but it's the same technological underpinning
of the video that many people have seen of Jordan
Peele basically speaking through Barack Obama's mouth. We're entering an
(03:35):
era in which our enemies can make it look like
anyone is saying anything at any point in time, even
if they would never say those things. For instance, they
could have me say things like President Trump is a
total and complete dipshit. So that was a computer neural
network faking Barack Obama's facial features and mouth movement to
(03:55):
literally look like he was speaking the words that Jordan
Peele said, and that actually makes it even more persuasive
than the fake audio of your voice we just heard, because when you see something, you tend to believe it. That's why the phrase is seeing is believing. Thanks. We're going to come back to deep fakes, but before we get there,
we're going to take a look at some other online
trickery because the scariest part is that fakes actually don't
(04:18):
have to be as sophisticated as your call to cousin Leslie to wreak havoc. This is particularly true on Facebook. So
we went to their headquarters in Palo Alto to meet
Nathaniel Gleicher. He's the head of cybersecurity policy at Facebook,
and he told me about an incident last summer that
created a true dilemma for him and his team. In July,
(04:41):
we conducted a takedown of a fairly small network of
pages that were operating in the US that showed links back
to Russian actors, and what they were doing was, among
other things, creating events where they were inviting Americans to
come to protests, and in particular this was around the
Unite the Right 2 movement, which happened in 2018. It was the anniversary of the bloody clashes in Charlottesville in 2017,
(05:04):
and the far right wanted to gather again. This time
Russia was watching, and there was an event that popped
up which was the No Unite the Right 2 movement.
This was a counter protest. There were authentic counter protests
being planned, but this one was being convened by a
group of inauthentic pages and accounts which were linked back
to Russia that were clearly attempting to sort of bring
(05:27):
Americans together in a space where they would go into
physical conflict. Immediately after creating the event, they then went
out and invited legitimate, unwitting activists to co host the
event with them. Let's pause for a moment. This is
Russia we're talking about, and they're creating a Facebook event
to appeal to liberal activists, designed to draw them into
(05:47):
physical conflict with the far right and create the kind
of scenes that tear at our social fabric. But the
people co-hosting it are not Russian agitators, they're US citizens acting in good faith. What we saw in
that case, and what we're increasingly seeing, is these actors
trying to blur their behavior with domestic actors to force
(06:08):
not just the platforms but all of us to ask,
how do you separate these? Ultimately, Facebook had to make
a decision. We removed that event from Facebook because it
was created by inauthentic actors. If someone else had created it,
that event would have been fine. So we removed the event.
But then we reached out to the co hosts, the
authentic hosts, and we explained to them what had happened,
(06:30):
and we made clear if you want to host your
own event, you should do that. We just want to
make sure that everyone understands what's happening. And what did they say? What was their reaction to realizing that their free will had been manipulated in that way?
If you look at the reactions, it's a range: from sort of disbelief, right, I don't think this was what you're saying it was, to I can't believe this happened, to okay,
(06:50):
that happened, but I strongly believe in this, and I'm
gonna go and I'm going to advocate for my issues
somewhere else. That spectrum of difficulty is exactly why
we see actors use these techniques, because there are no
easy answers here. My assumption going into this was that
detecting misinformation would be the biggest challenge for Facebook, but
(07:12):
that's the easy part. It's after you identify the fakes
that the really tough questions begin. We know that, particularly
the government actors in this space, part of their information
dominance strategy is to make themselves appear bigger and more
powerful than they are. They want to seem like they're everywhere,
and it's really easy to see foreign government manipulation under
(07:33):
every rock. I think it's really important not to play
into the hands of these actors and sort of overplay
their own influence. This is a tension we struggle with. Whenever we conduct a takedown for some of these operations, the
most attention it gets is when we take it down.
The entire situation puts Facebook in a catch twenty two.
(07:54):
If they leave the content up, they're helping to promote
a foreign government's nefarious agenda. If they take it down,
the foreign government gets all this attention for being more
powerful and cleverer than they actually are. These decisions are
incredibly hard. Think of Charlottesville. Think of Pizzagate. Think
of Lane Davis, who stabbed his own father after an
(08:15):
argument over the conspiracy theory about liberal pedophiles. Fakes can kill. And Facebook has recognized this. For a start, they hired Nathaniel, a former cybercrimes prosecutor at the US Department of Justice, and in March of this year, Mark
Zuckerberg announced a company wide pivot towards privacy and encrypted messaging,
including services like WhatsApp, which they own. But David Kirkpatrick,
(08:40):
founder of Techonomy, notes that the pivot carries its own problems.
If you look at South Asia where there's a lot
of ethnic discord and political violence, notably in India, Indonesia, Myanmar, Sri Lanka. One of the primary ways that
that spreads is in group messages in WhatsApp. People in
(09:03):
the US don't typically use WhatsApp for group messages, but
in places like India and Indonesia they do. And these
groups aren't five or six people, your parents and your brother and sister. These are like, you subscribe to a political leader or a religious zealot. So this is
more like the dear leader being piped into your home right.
So the problem has been almost more severe in those
(09:26):
systems than on Facebook itself, of fake news and ethnic hatred being disseminated, because WhatsApp is an encrypted service, so
the service itself can't even see what the messages are
that are being distributed. What's scary is it doesn't take
any technical sophistication or knowledge on the part of people
(09:48):
writing these messages and spreading this misinformation. They're just using WhatsApp. Yeah,
and these are just messaging apps and social media platforms.
But what they mean is that a single message can
spread like wildfire. And of course the history of new
communication technology tends to go hand in hand with violence.
When the printing press and books came to Europe, they
(10:09):
unleashed religious wars, but they also made the world literate.
And we've mentioned this before. Technology is usually dual use,
which relates back to deep fakes. Mostly when you read
about deep fakes, probably thanks in part to the fact
they're called deep fakes, the coverage is not very positive.
There have been more and more stories though about positive
uses for deep fakes. So when we come back, I'm
(10:29):
gonna tell you more about how I faked my own
voice and also some of the things that I learned
in the process. We started this conversation a few weeks ago,
and then you asked us to create this artificial voice based on your identity. That's Jose Sotelo, the
(10:52):
co-founder of Lyrebird. They're the company who made
robot Kara and helped me prank my cousin, and they've
published a version of their tools online at lyrebird dot ai.
Here's how it works. I know it might sound a
bit like magic, but in reality, the way that our
algorithms work is basically they are just pattern matching algorithms,
(11:12):
and so it's trying to figure out how to identify
the patterns in your voice by comparing it against thousands of other voices, I should say tens of thousands of
other voices, and trying to figure out what is it
that makes your voice unique. Once Jose's algorithms identified what
was unique about my voice, they obviously had the
(11:33):
building blocks they needed to make a fake. Then we
sent Jose a set of sentences we wanted robot care
to say, and he used another set of algorithms to
turn the text into what we heard. The way they
do this is they use what's called a generative adversarial network, a GAN, which is a system where one neural net tries to trick another one a thousand times per second.
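To make the adversarial idea concrete, here is a minimal, hypothetical sketch in Python using PyTorch. It is not Lyrebird's actual code; instead of voice audio it generates simple one-dimensional numbers, but the loop of a generator trying to fool a discriminator is the same basic shape.

```python
# A toy GAN (hypothetical sketch, not Lyrebird's code). A "generator" learns to
# produce numbers that look like they came from a target distribution, while a
# "discriminator" learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_samples(n):
    # Stand-in for "real voice data": numbers drawn from a normal distribution
    # centered at 3.0. A real system would use audio features instead.
    return 3.0 + 0.5 * torch.randn(n, 1)

for step in range(2000):
    # 1) Train the discriminator: real samples should score 1, fakes should score 0.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: try to make the discriminator score its fakes as 1.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, the generator's outputs should cluster near the real mean of 3.0.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

In a real voice-cloning system the generator would produce audio features conditioned on text and a speaker's recordings rather than single numbers, but the trick-and-learn dynamic is the same.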
(11:55):
So each time the second network detects a fake, the first one tries again. It basically learns from its mistakes,
and once it tricks its adversary, it's ready to show
its results. In our case, Lyrebird pits my fake voice against my real voice until it sounds like this: 'Sup dog, it's Kara. As this technology becomes more widely available,
(12:17):
so does the potential for abuse. And while Lyrebird develops
the technology, they don't take the ethics lightly. But Jose
has an entirely different fear. We believe that the biggest
risk of this kind of technology comes from the fact
that not a lot of people know about it. I
believe that society is not ready for what's going to
(12:38):
happen when this technology becomes widespread, and so I really
want to make my best effort in trying to showcase
it to the public so that they are at least
prepared for what's coming. When people know a scheme exists,
they're less likely to be tricked by it. But if
you don't know deep fakes are possible, you're much more
likely to fall for them. Leslie might have been better equipped
(13:00):
to call my bluff had she known it was even possible.
But here's the thing. While there are inevitable misuses of deep fakes, both behind us and on the horizon, there
are a number of extraordinary benefits of this technology, which
is why Jose is working on it. When people are
diagnosed with ALS, it's because they start to lose their movement skills in, let's say, their hands or their feet,
(13:21):
and so they go to the doctor and then the
doctor tells them like, you know what, this can be
ALS, and this gets progressively worse. This was the case
for Pat Quinn, the co founder of the Ice Bucket Challenge,
creating a real fight within the ALS community. This
is a public battle now. Pat was diagnosed with
(13:46):
ALS, and it ultimately took his ability to speak,
walk and use his hands. From the time they're
diagnosed until they lose their voice, they have some time,
and so the idea is that during this time they
will be able to record themselves, ideally in a really
high quality setting. Then based on these recordings, we will
be able to create an artificial copy of their voice
(14:09):
which they will be able to continue using for the
rest of their life. Lyrebird has partnered with the ALS Foundation to create Project Revoice. Just imagine
how it would feel for them, to, let's say, not
be able to tell their husband or their wife I
love you anymore, to tell this to their kids. And
so using this technology, they are able to keep this
(14:33):
really important part of their identities. Using the exact same
technology I used to create my deep fake, Lyrebird was able to give Pat the ability to preserve his
voice for the rest of his life. It's a strange
feeling saying the first words for a second time. It's like, you don't realize how powerful, how personal your
(14:55):
voice really is until it's taken from you. My voice is how I fight back against this disease. Take it from me: say something, listen to it. Know your voice.
Since revoicing Pat, Lyrebird has received a number of emails from ALS patients asking if it's possible for them to
(15:16):
do the same thing, preserve this part of themselves which
they know they're going to lose, and Jose has heard
from people who have lost family in other ways. For instance,
we have received, quite a lot actually, very emotional emails from people telling some variation of this: My wife
died three months ago, and I have two children, age
(15:38):
four and six, and I would really really love to
be able to tell them a good night story in
the voice of their mother, or to tell them that,
in the mother's voice, I love you, I am proud
of you, be happy. The tools on lyrebird dot ai are
intentionally less advanced and meant to just spread awareness, but
Lyrebird's more bespoke tools open amazing possibilities for changing how
(16:01):
we deal with loss and grief. I would like to
ask you just one question, which is like, how would
you feel, let's say, about recording the voice of your
parents and keeping them? What do you think, would you like to do this, or how do you feel about that? It was interesting when Jose asked me because
I had actually thought about it ever since I learned about Lyrebird. When I was fifteen, so fourteen years ago,
(16:26):
my dad died in a fatal car accident, and nobody
prepares for accidents, you know. One minute my dad walked
out the door, and forty five minutes later the police
showed up at the same door to tell us what happened,
and so I never got to see or speak to
my dad ever again. Sometimes my therapist will ask me
(16:49):
if I think about what I would talk about with
my dad if he was still alive, and I always
say that, you know, I don't. I don't think about
that too much because it's sad to think about, because he's not actually around, and because I know I can't talk to him. But it's also hard to
conceive of. You know, I can't recall off the top
of my head what he sounds like, and sometimes I'll
(17:12):
hear his voice when we watch home movies and it
always spooks me out. So the idea of having his
disembodied voice ask me things like how do you like
working on this podcast? Or what's the most amazing thing
you've learned, or even saying things like I'm so proud
of you. Do you know that? I'm not sure how
I'd react to his voice like that. Regardless, the thought
(17:35):
that it is something in the realm of possibility is
equal parts chilling and exciting. I actually think, given the chance,
I might do it. This is not a science fiction
thing or something that will exist years from now. It's
(17:55):
something that exists already that people can even go and try.
And as my cousin Leslie learned, these deep fakes are
already good enough to use on unassuming family members. 'Sup, this isn't Kara, this is artificial Kara. Oh my God, my voice right now, it's AI. This is awful. When
(18:26):
we started reporting on deep fakes, I never anticipated how
moving the technology could be. I was more focused on
the dangers, and they are worth considering too. One person
who is out in front bringing awareness to the potential
harms of fake media is Danielle Citron. She's a law
professor at the University of Maryland and the author of
Hate Crimes in Cyberspace. Machine learning technology and neural networks
(18:48):
can learn from your photo and voice that's taken from
recordings of your voice, can sufficiently learn enough about your
face and the way it moves and you or voice
so that it can create really incredibly difficult to debunk
videos of you doing and saying things you never did.
(19:10):
Now we all know how dangerous the simple written word
can be. Danielle got interested in how fake video could
increase the forces of hate exponentially. There was a whole
Reddit thread devoted to deep fake sex videos of celebrities,
female celebrities like Emma Watson and Anne Hathaway and others. If
you went through the thread, which I did, you can
(19:31):
see the conversation moving beyond Emma Watson to my bitch
girlfriend or that woman I hated in high school, and
it was it was all the conversation about women, you
know what I thought was like the evil Cyberus stocking
was all based on Crewe doctored photos of someone naked,
but if you worked at it, you could figure it out.
Now we can put people into pornography in ways that
(19:52):
devastate their careers. So, Kara, I do think it
says something that this new technology is being used to
target women. And a lot of these conversations are happening
on the same forums on Reddit where the incel
movement was born, right, So I think this is especially
important when we talk about famous women and their likeness.
A lot of men on the Internet want to see
(20:13):
their favorite actresses in positions that they wouldn't be able
to see those actresses in, and so with this technology,
it's quite easy to put someone's face on somebody else's
body without the consent of the actual actress. And actually SAG, the Screen Actors Guild, held a panel a few weeks ago to bring this up. That like, yes, we're
(20:34):
talking about this in terms of democracy and our political
system and the upcoming election, but we also have to
talk about this in terms of the livelihood of women
who make money on their likeness and whose likeness is
now being misappropriated. Yeah, because it can destroy their careers
and silence them. There's actually a case in India where
people attempted to use deep fake pornography to intimidate and
(20:57):
silence a journalist called Rana Ayyub, and I spoke about
that case with Danielle. The Indian journalist who had been
very critical of Hindu politics, nationalist politics, and a deep
fake sex video sort of was spread basically to discredit her, and it spread through texting networks and went viral,
(21:19):
and she basically was devastated and went offline, stopped writing
for like three weeks. She's a journalist, this is what
she does for a living, right, So imagine that kind
of granular individual harm, and compare it with harm to CEOs. The night before an IPO, a deep fake is released that shows this person taking a
bribe or doing drugs or whatever. I'm making it up,
(21:41):
but that tanks the IPO, right. This kind
of video manipulation used to be confined to places like Disney,
and the output was blockbuster movies that are fictional but
not fake. Now AI is being consumerized, and the tools
to create convincing video are spreading, and that means creating
the kind of chaos Danielle describes is also more and
(22:02):
more accessible. That threatens all of us. One person working
on the issue is Hany Farid of Dartmouth University,
who has been called the father of digital forensics. I'm
concerned that once we know you can create fake content,
there is nothing stopping anybody from saying that any video
is fake. Everybody has plausible deniability. So rewind two
(22:26):
years ago when the Access Hollywood tape came out of
President Trump saying what he does to women. The response
from the campaign was not this is fake. It was
we apologized, this was locker room talk. They found ways
of trying to excuse it. If that was today, guaranteed
he would have said it was fake. And in fact,
a year ago, after having apologized for the
(22:48):
audio recording, he said it was fake. And so now
politicians have plausible deniability, and at a time when our
US president is demonizing the press and telling everybody that
you can't believe anything, that credible deniability holds some weight.
And so I'm extremely concerned. Now, how do we distinguish
what's what, and that I think for a democracy is
(23:08):
going to be incredibly challenging. So when nothing is believable,
the mischief doer can say it's a lie. Do you
know what I'm saying? Like the person who commits the
crime or does something and says something incriminatory can say,
that's a fake. So the more you educate people about deep fakes, the evil doers can leverage that and say, well,
(23:29):
you can't believe anything, right. Danielle calls this the liar's dividend.
In a world where nothing can be trusted, everything can
be denied, and even documented bad deeds can be explained away.
This kind of thing is accelerated by deep fakes, though,
which is why I think there are some attempts to
correct it with law, with law like the Anti Deep
(23:50):
Fakes Law, very similar to the Malicious Deep Fake Prohibition Act of 2018, which was introduced by this Republican senator from Nebraska named Ben Sasse, and it basically aims to outlaw fraud in connection with audiovisual records. But I don't know if this law will pass. In any case, not
all deep fakes are malicious, and so we have to
(24:11):
be careful with laws which are too broad. As we
heard in your Lyrebird piece, there are some amazingly positive applications of deep fake technology. Here's Hany Farid talking
about deep fakes and the movie business. Can you imagine
a world where the actor can simply license their appearance
and they never have to show up on the set.
You say, look, here's a bunch of images of me.
(24:32):
Synthesize me doing whatever you want. I'm basically an animated
character for you, and then anybody can be in the movies.
You can imagine customized movies. Imagine I go to the
movie and say, look, I'd like to see this movie,
but with George Clooney and not Kevin Spacey in it.
Please synthesize that for me. Can we do that today
or tomorrow? No. But in theory, that is essentially where we're going. So if you haven't seen, some
(24:54):
of these people are creating all these deep fake videos
of Nic Cage inserted into all these different movies. And that's not the full-length movie, they're doing it in clips, but that's essentially the trend where
you can just put your favorite actor or actress into
whatever movie you want and just watch it. It's personalized movies.
I'm not gonna lie. I find it super weird that
Nicolas Cage has become the poster boy for having his
(25:15):
face deep faked into various movies. I wonder if you actually asked Internet nerds, why Nic Cage, what do you think they would say? I have no idea. Well, he's kind
of already a meme, right, he was, and he was
in Face/Off, where his face was switched with another person's face. So he's always sort of been the poster
child for face swapping, you know. I think actually one
thing that I thought about is this idea of representation.
(25:37):
You know, if there's a movie or movies or series
like James Bond where the lead character has been historically white,
and you want to show your African American son James Bond,
it would be kind of cool to make James Bond black, right,
because then your child could be watching a movie where
James Bond looks like your child. Absolutely, And I think
(25:59):
one of the big problems in the movie business
and the media business in general is representation. So more
people do have access to this technology now, but it
used to be that only a Hollywood special Effects company
would have access to this technology. When you remove the gatekeepers,
you get these incredible explosions of culture, but you also
(26:19):
get real threats to the social fabric. And so in
the case of deep fakes, they're all very well
when they're labeled as fake or when we know they're fake,
but when they're posing as real, that's when we start
to be really under threat, I think as a society.
But there are people working on this as ever. Cat
and Mouse. When we come back, we'll talk about some
(26:41):
of the ways they're fighting back. When it comes to
deep fakes. Pandora's box is open, and as Jose argues,
there's no turning back the clocks. The technology exists. So,
knowing deep fakes and fake news have become more sophisticated,
(27:01):
I wanted to find out how actual news organizations are
thinking about the problem. So I spoke with John mccathwaite,
editor in chief of Bloomberg News, and he actually started
by pointing out that fate news isn't new news. I
think that one crucial thing when you look at fake
news is tom it's always been there. You know. The
first bit of fate news was the trojan horse fake
(27:21):
news and propaganda have for ever been some of the
more exotic weapons in global conflict. John points to another
example involving the famous British spy and author of James Bond, Ian Fleming. Supposedly one of his great schemes was to drop lots of jumbo-sized condoms over Germany
(27:42):
and label them sort of British, small, on the outside and in the name, with the aim, no doubt wrongly,
of destabilizing German manhood. My point is that there are
many many ways in which you can do this. But
the most interesting thing to me about fate news is
that really in modern history it's tied very heavily to technology.
(28:04):
What tends to happen is a new technology comes along
which suddenly sets media free. If we look to history
we can understand this moment better. We mentioned the early
printing press before and how it enabled explosions of ideology
and led to religious conflicts. Well, when the printing press
was industrialized in the nineteenth century, there was another fake
(28:24):
news boom. Go back to the early nineteenth century.
You have the invention of the steam press in London,
and what that does is it enables people to multiply by ten the amount of paper that you can print. Suddenly, all the way across Europe, and then in America, cheap newspapers start springing up. Because you can distribute far more,
you can reach far more people, far more quickly. And
(28:47):
the most notorious of this was The New York Sun
at one time, I think, the world's biggest selling paper,
run by Benjamin Day, and he would run some
stories like the moon was populated by people who are
half human half bat. But what happened, and I think
this will happen again, is that consumers said, we don't
want to read that, we need facts. And so if
(29:09):
you look back at many of the big newspapers of
our time, the New York Times, The Economist, where I
used to work, many of these things came from that
particular period because people paid more to get things they trusted. Well,
that is definitely happening again. In other words, most of
the high quality press today, the New York Times, the Economist,
which John also edited, came from consumer demand for trustworthy information.
(29:33):
And that same consumer demand may help us out of
today's predicament. But there is one key difference. Now we
have deep fakes. It's worth a lot of money to
a lot of people to try to fool us. So
you look at things like Twitter handles that aren't quite
the same, some mixture between humans and computers. We're used to dealing with those. What is harder at the moment
(29:54):
is video. So to give you an example, I think
a year or so ago, there was an attack in a subway in New York. We could verify really quite
quickly that the subway attack had happened, but almost immediately
there was a picture on Twitter of one of the
alleged assailants lying in a pool of blood. Now, trying
to verify that that was true was much harder, and
(30:15):
it came down to things like working out whether that
was the correct subway floor. You can look at pixels,
you can look at all those different things, But yes,
verifying video is often harder than verifying facts. Do you
have any tools or technologies that you're licensing or spending
money on to do it. We spend a lot of
money on technology across all these fronts. With more and
more news coming directly from social media, large news organizations
(30:37):
like Bloomberg News need to be able to verify which
photos and videos are real and whether they actually relate
to the events they're investigating, which is why Hany Farid is in such high demand. Suddenly, the need to
authenticate content has really global implications. Everything from our courts, to our national security, to our democratic elections, to citizens' safety is starting to rely on our ability to tell
safety is starting to rely on our ability to tell
the real from the fake. And so I think this
field of forensics, this field of authentication, has never been
more important, and that's what Hany spends his days working
on at Dartmouth. He develops techniques to analyze and authenticate
digital media. Ahead of the elections, he's working on what
he calls a soft biometric tool to detect fake videos
(31:24):
of specific politicians such as Bernie Sanders, Elizabeth Warren, and Donald Trump. I would say the game is going
to be that we never eliminate the ability to create
fake content, but what we do is we raise the bar.
We take it out of the hands of the amateurs,
we take it out of the hands of the average
person downloading some code, and we make it more difficult,
(31:46):
more time consuming, and more risky. And this is the
same thing that we do with counterfeit currency. You can
still create counterfeit currency today, but it's really hard, still
a risk, but it's a more manageable risk. On the
subject of money, there are digital currencies which are much
more difficult to counterfeit than coins and banknotes. You've heard
of Bitcoin and Ethereum, which are enabled by blockchain, a
(32:07):
so called distributed ledger. Information about transactions is shared between
all the users of the currency, rather than authenticated and
guarded by a bank. Sharing this kind of information across
a crowd of people with multiple backup copies has a
range of uses. One thing Hany is looking at is
using blockchain to authenticate images and videos at source. We're gonna start seeing the use of a different type
gonna start seeing, um the use of a different type
of camera. So there are now companies out there that
create what are called secure imaging pipelines, and so when
you record an image or video, they extract a unique
signature from that content, they cryptographically sign it, and they
put that on the blockchain. So that's basically a distributed
(32:53):
ledger that's very very hard, if not impossible, to manipulate.
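As a rough illustration of the pipeline Hany describes, here is a hypothetical sketch in Python: fingerprint the captured file with a hash, sign that fingerprint with a key that would live on the camera, and publish the signed record somewhere append-only. It uses Python's hashlib plus the third-party cryptography package, simply prints the record rather than writing to an actual blockchain, and the file name photo.jpg is a placeholder.

```python
# Hypothetical sketch of a "secure imaging pipeline" step (not any company's real
# product): hash the captured file, sign the hash with a key that would live in
# the camera's secure hardware, and publish the signed record.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519  # pip install cryptography

def fingerprint(path: str) -> bytes:
    """Return a SHA-256 digest of the file's bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

camera_key = ed25519.Ed25519PrivateKey.generate()  # would be provisioned at manufacture time

digest = fingerprint("photo.jpg")        # "photo.jpg" is a placeholder file name
signature = camera_key.sign(digest)

record = {"sha256": digest.hex(), "signature": signature.hex()}
print(record)  # stand-in for appending the record to a distributed ledger

# Later, anyone can recompute the hash and check it against the camera's public key;
# verify() raises an exception if the file or the record has been tampered with.
camera_key.public_key().verify(signature, fingerprint("photo.jpg"))
```

The point of the design is that changing even one pixel of the file changes its hash, so the altered file no longer matches the signed record that was published at capture time.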
Perhaps staying ahead of the perpetrators and making fakes more
difficult is the best we can do. But
what about our usage? How much responsibility do we have
to navigate the web thoughtfully? And how much responsibility should
be on the platforms? We have Facebook, Twitter, we have Yelp because they're not responsible for user-generated content. What's
(33:17):
interesting is that, like Nathaniel at Facebook, Danielle also sees
risks in over zealous moderation. If you put too much
responsibility on the platform, you will likely incentivize over censorship.
So all the great things that we think about a
lot of these platforms, and especially the social media, the
Parkland survivors or Black Lives Matters, right, we don't want
(33:40):
to lose the facility and new enablements for organizing and speech.
So if you put too much liability on the platforms,
they're going to overreact to anything anyone complains about and
have very aggressive filters. So we might very well miss
Black Lives Matter, we might not have Parkland and never
see it because you're gonna have overly aggressive censorship.
(34:01):
Here's Nathaniel again. Whenever people come together in a new medium,
you're going to have people that try to manipulate and
try to take advantage. I think one of the things
that's really fundamentally true that we have done when we
think about the Internet generally social media as well, is
we've removed some of the traditional gatekeeping mechanisms that have
existed in the past, and that has meant that far
(34:22):
more people could engage, much more quickly and much more
vocally than ever before, and that has led to some
incredible things. If you think about the me too movement,
which really part of what drives it and enables it
is the ability to route around some of those gatekeepers, right,
But at the same time, you're also going to see
malicious actors try to misuse that. I think that is
a fundamental truth for any form of media. The question
(34:42):
is how do you enable authentic engagement while making the
types of manipulation that we see more difficult. If Facebook
and other platforms are too destructive of society, ultimately everyone loses,
even the technology companies and their shareholders. So how
do we move from understanding that to finding solutions. Here's
(35:06):
David Kirkpatrick again. If we are going to retain democracy,
we need technical systems, digital systems, technologies that more effectively
and persuasively and compellingly distribute knowledge so that we have
citizens that are capable of functioning in a democratic landscape
(35:30):
that is more complex, more rapidly changing, and ultimately more global.
And as far as Hany Farid is concerned, this has
become everyone's problem, so we all have a part to
play in solving it. I think two things are gonna
have to change. So one is the technology to authenticate
it is going to have to get better. So whether
that's authenticating at the source or the types of things
(35:50):
that I do with authenticating content and operating that at scale,
that's going to have to get better. But I think
what's also going to have to change is how we
as consumers of digital content think about what we see.
We are going to have to become more critical, more reasoned.
We have to get out of our echo chambers. We
have to stop allowing social media to manipulate us in
the way that they do. So I think the solution
(36:12):
is at least two-pronged and potentially three, with some legislative relief down the line to really force the companies
to do better than they have been over the last
few years. So does the good outweigh the bad? I
don't know. We have to have a hard conversation. People
who work in infectious disease and physicists who develop weaponry, they think about this all the time. We as technologists
(36:32):
have not quite thought about this as much in the
past because our field is so young. But I think
now you know it's time to wake up and start
asking those hard questions and having those conversations before it's
too late. Once again, we're being urged to wake up from our sleepwalk, and we do have some answers, at least when it comes to deep fakes: we
(36:53):
can make it akin to counterfeiting money. The people who
do it will get prosecuted, and programmers like Hany
will work on detection technology, but we still have to
hold the bills up to the light before we decide
whether to accept them. That's our job, that is, if
we're not too busy watching Nicolas Cage starring as Thelma
and Louise in Thelma and Louise. Even more complicated than
(37:16):
deep fakes is the concentration of power at companies like Facebook.
In the next episode, we visit a secret lab at
Google to understand what happens when technology companies start taking
on the role of the state, and we speak with
Lina Khan, who has proposed new regulation to balance the
power of big technology companies like Amazon. I'm Oz Woloshyn. See
(37:38):
you next time. Sleepwalkers is a production of I Heart
Radio and Unusual productions. For the latest AI news, live interviews,
and behind the scenes footage, find us on Instagram, at
(38:01):
Sleepwalker's podcast or at Sleepwalker's podcast dot com. Sleepwalkers is
hosted by me, Oz Woloshyn, and co-hosted by me, Kara Price.
We're produced by Julian Weller with help from Jacobo Penzo
and Taylor Chacogne. Mixing by Tristan McNeil and Julian Weller.
Recording assistance this episode from tofarrelf Our Story editor is
Matthew Riddle. Sleepwalkers is executive produced by me, Oz Woloshyn, and
(38:24):
Mangesh Hattikudur. For more podcasts from I Heart Radio, visit the I Heart Radio app, Apple Podcasts, or wherever you
listen to your favorite shows.