Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Thanks for tuning into Tech Stuff. If you don't recognize
my voice, my name is Oz Woloshyn, and I'm here
because the inimitable Jonathan Strickland has passed the baton to
Cara Price and myself to host Tech Stuff. The show
will remain your home for all things tech, and all
the old episodes will remain available in this feed. Welcome
(00:20):
to Tech Stuff. This is The Story. Every Wednesday, we
bring you an in depth interview with someone at the
forefront of technology or someone who can unlock a world
where tech is at its most fascinating. This week it's
Hany Farid. He's a professor of electrical engineering and
(00:40):
computer science at the University of California, Berkeley, with a
CSI-sounding specialization: digital forensics. His focus is on image
analysis and human perception, so he's the guy you call
when you need to know whether or not you're confronting
a deep fake, and many do. He's constantly talking to
(01:01):
journalists to help them determine what's real and what's fake online.
In his lab at UC Berkeley, he and his students
study the various ways misinformation is created and spread and
how it erodes trust in our institutions.
Speaker 2 (01:16):
And one more thing.
Speaker 1 (01:17):
Farid is the founder and chief science officer of
GetReal Labs, where he consults with businesses, news organizations,
and law enforcement to authenticate digital content. You might be
wondering how Farid got into this field. If so,
you're not alone.
Speaker 2 (01:33):
Somebody said to me the other day, Oh, you were
so prescient. I'm like, no, we weren't. We were just
screwing around.
Speaker 1 (01:38):
Farid first started pondering the implications of digital images back
in nineteen ninety seven.
Speaker 2 (01:44):
This is really pre-digital, almost. Film was still the dominant medium that we took photographs on. The Internet was nothing, right. There was no social media, and everything was very nascent. You could see the trends; you knew something was bubbling up with the Internet and with digital technology. Farid was a postdoc at the time.
I was at the library getting a book, which now
(02:05):
just seems quaint, and I was waiting in line, and
there was a return cart, and on the return cart
was a big book called the Federal Rules of Evidence.
I'm not a legal scholar, I'm not a lawyer, but
I was bored and I flipped it open to a
random page and it was titled introducing Photographs into Evidence
in a Court of Law. And I liked taking photographs.
I was working with digital images, but nothing to do
(02:28):
with this topic, and I thought, I wonder what the
rules are, and so I read it and there was
almost a footnote that said, there's this digital format and
we're going to treat digital the same way we treat analog.
And I just remember thinking, I don't know anything, but
that seems like a bad idea.
Speaker 1 (02:44):
This passage really stuck with him, and for years he
couldn't stop thinking about the implications of a digital world,
the fact that digital manipulation would change our perception of
what's real because the photographic medium had fundamentally shifted. What
surprised him was that few others were taking note.
Speaker 2 (03:04):
It's really unusual in an academic life where you start
thinking about a problem and you go into the academic
literature and there is nothing. It was just crickets, because
there was no reason to be thinking about the problem.
Speaker 1 (03:15):
Two years later, as a professor of computer science at Dartmouth,
he was playing around in Photoshop creating a comic image
of his friend when he had an epiphany.
Speaker 2 (03:26):
Mathematically, I just did something very interesting. I introduced pixels
that had been synthesized by Photoshop to make the image bigger, right,
because they didn't exist, and I remember thinking, oh, I
should be able to detect that.
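To make that intuition concrete, here is a minimal sketch, in Python, of the kind of check being described: pixels synthesized by an editor's resize are interpolated from their neighbors, so they are unusually easy to predict from surrounding pixels, and that predictability repeats with a regular period. The function name, the simple row-average predictor, and the scoring are illustrative assumptions, not Farid's published method.

```python
# A rough sketch of resampling detection: interpolated rows are unusually
# well predicted by their neighbors, and that pattern recurs periodically.
import numpy as np

def resampling_score(gray: np.ndarray) -> float:
    """Return a rough score for periodic interpolation artifacts.

    gray: 2-D float array of grayscale pixel intensities. Higher values
    suggest the image was upsampled; any threshold is illustrative only.
    """
    # Predict each interior row as the average of the rows above and below.
    predicted = 0.5 * (gray[:-2, :] + gray[2:, :])
    residual = np.abs(gray[1:-1, :] - predicted).mean(axis=1)

    # Interpolated rows leave near-zero residuals at a regular spacing;
    # a dominant peak in the residual's spectrum exposes that period.
    spectrum = np.abs(np.fft.rfft(residual - residual.mean()))
    return float(spectrum[1:].max() / (spectrum[1:].mean() + 1e-9))
```

On a camera-original image the score should sit near the noise floor; on the same image after being enlarged in an editor, the periodic residual pattern typically pushes it noticeably higher.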
Speaker 1 (03:40):
In that moment, he started writing code and actually developed
programs to detect digital manipulation. The world woke up to
the importance of this work, and he started getting asked
to chime in on serious cases for the Associated Press,
for law enforcement, for national security.
Speaker 2 (03:56):
And then twenty fifteen, sixteen, seventeen, AI hit and the
world exploded. But it exploded for a few reasons because one,
at least with Photoshop, there was a barrier to entry.
You had to actually know how to use Photoshop. But then when AI came around, you just go to ChatGPT and type, give me an image of X, right, and
give me an image of Y, give me a video
of this, give me an audio of this. And so
(04:17):
suddenly there's no barrier to entry. But more importantly, social
media dominates the landscape. We went from a few million
users to a few billion users, and so now not
only could people easily with no barrier to entry, create
fake content, they could distribute it to the masses and
(04:37):
it gets amplified because the algorithms amplify the most outrageous things.
People want things that conform to their worldview. We are hyperpartisan, both here and abroad, and that was the perfect storm: create, distribute, amplify,
rinse and repeat. And so now through the AI revolution,
it's bizarre what's happening.
Speaker 1 (04:59):
We'll dive into the work Farid does on deep fakes in a bit, but first I had to ask you about something seemingly completely unrelated: death bots. So you were quoted in this Atlantic article about death bots with the headline, No One Is Ready for Digital Immortality. So, you know, it'll be good to define our terms, like what do
(05:20):
we mean by this idea of digital immortality?
Speaker 2 (05:23):
Yeah, I don't know that it's a well established term.
But here's my definition. Is that your likeness, the way
you think, the way you talk, the way you look,
lives on for eternity in a digital form through a
version of AI that embodies how I write, how I think,
how I talk in order to interact with other people.
(05:45):
It's interactive, that's the key, and it's dynamic.
Speaker 1 (05:47):
What got you interested in this topic and why did
you agree to be a source in the story.
Speaker 2 (05:51):
So this is almost a philosophical and legal question, and
I'm neither of those things. But I got to say
I've been thinking a lot about it, technically, personally, philosophically.
Here's why I've been thinking about it. So one is,
I'm a professor. I've been a professor for twenty five years.
I love teaching. I love my students. I hate them
some days, but I usually love them. They're amazing and
(06:15):
weird and wonderful in many ways. So is there a
story here where I can keep teaching after I die?
Like there's something sort of magical about that. I think
about it for my parents. Both my parents are now
in their late eighties. One of them will die first,
almost certainly. And what does it mean for the one who's left? They've been together for fifty years. So there's parts
(06:35):
of it where I think this is wonderful, this idea that
one of my parents can wake up and open up
their iPad and have a conversation with the person that
they spent fifty years of their lives with. On the
other hand, if that happens early in life, is that
healthy for somebody? If a thirty year old loses their spouse,
is that good, that they never sort of physically move on?
I also think about it from a technical perspective, what
(06:55):
would that look like for somebody who's famous where there's
a big digital footprint. I think we have all the
pieces to do that. We have the large language models,
we have voice, we have likeness, we have video, and
you're already seeing people do this creating digital avatars of
both people who are with us and not with us,
so that you can interact with them. I can go
(07:16):
scrape every single piece of writing that Martin Luther King
Junior wrote. I can grab his speeches, I can grab
his likeness, I can grab his voice, and I could
create an avatar of him that I could interact with.
Speaker 1 (07:24):
Well, it reminds me of your work on deep fakes
in some sense, because, as you said, exactly, all the
pieces are there technically and otherwise. Yeah, but society's clearly
not ready.
Speaker 2 (07:34):
I don't think we're ready. But look, there are a lot of things, if you look at the last two, three, four, five decades of technology, that we weren't ready for, and we became ready for them.
Speaker 1 (07:43):
Right.
Speaker 2 (07:44):
Look, you can go back to in vitro fertilization. When it first started, people were freaked out by it. Completely normal now, right. And by the way, this could also
be generational. I can imagine some of my students here
at UC Berkeley think sure, who cares, right, And I'm
an older guy and I'm like, ah, that seems a
little weird. So this may just go away generationally, which
is usually how this happens.
Speaker 1 (08:04):
By the way, do you think we'll see a fundamental
shift in our society in this case in terms of
how we think about death.
Speaker 2 (08:12):
I think this idea of a digital immortality is really profound.
And look, I don't know where this AI revolution is
going right now. I don't think anybody really does. But
something is happening. There is something here that is quite dramatic.
I think it's going to reshape society. I think it's
going to reshape education. I think it's going to reshape
the workforce. I think it's going to reshape a lot
(08:34):
of things. And I do think your likeness or your
being or your essence or whatever you want to call
that can live on and you can interact with people.
You can continue to have a podcast after you die,
you can keep interviewing people.
Speaker 1 (08:49):
When we come back: how deep fakes impact everyone, even if you don't know it. There's an interesting point of intersection between death bots and your more core field of study, and that's this Indian politician, a parliamentary candidate,
(09:10):
who created a video of his deceased father endorsing him
as his rightful heir.
Speaker 2 (09:16):
Yeah.
Speaker 1 (09:17):
I mean this is kind of a world's collide moment
between misinformation, deep fakes, and digital immortality.
Speaker 2 (09:22):
Yeah. Yeah, So for people who didn't see it, India
had an election this year, a big one, and you know, a billion-plus people voting. It was chaotic, and a politician
did exactly this. His father was a well known politician,
and he created a digital recreation with his voice and
his likeness and he was talking and endorsing his son.
So I have a couple of thoughts on that right now,
(09:43):
in this particular moment, as we're still grappling, I think
there should be two rules, which are consent and disclosure.
And it's really simple, like, if you're going to use
somebody's likeness, you should have consent, and if you're going
to distribute it, you should have disclosure. Now, consent is
difficult when somebody is dead. But if I want to
get an endorsement from somebody who's living, I have to
(10:04):
get their consent yep. And if I distribute that, it
has to be very clearly labeled and disclosed as this
is AI generated. I'll give you a really nice example
of this where it was sort of cool. It was during the Olympics. One of the newscasters, well known, and I'm just blanking on his name right now, was creating
AI generated personalized summaries. So my wife was watching the
(10:26):
Olympics and she would get these personalized summaries from the broadcaster.
So the content was personalized to her based on what
she was watching. And then the voice being generated was his,
and the script was being AI generated. Everything was with
his permission, and it was disclosed to her that it
was AI generated and summarized. And I think that was
really well done in terms of the things that were
(10:48):
made clear of what you were getting and how it
was being delivered to you.
Speaker 1 (10:52):
That's sort of a high-water mark for how this stuff works when it works well. Yeah. Do you think as a
society we're more likely to move toward that high water
mark through collective demand or through regulation or through some
decision from the tech overlords like what gets us there?
Speaker 2 (11:08):
More broadly, yeah. I mean, there's nothing in the last
twenty years or twenty five years that gives me confidence
that our tech overlords are going to do the right thing.
They're going to do the thing that maximizes their profits.
And we know this. Let's stop pretending that Silicon Valley is anything other than it is. It's a modern
day Wall Street in some ways, by the way, even
more powerful, right, because they control information, not just money,
(11:31):
and that arguably is much more powerful. I don't think
this comes from consumers, because we're not customers, we're the product.
We as users, I should say, have almost no power
at all. And so the media, we tried, right, we tried criticizing and embarrassing, and we tried dragging them in front of Congress. Nothing effects change. So what does? Good regulation.
(11:54):
We've got to put guardrails on this. And look, there is nothing in our physical world that is not subject to regulation to make products safe and reasonable.
But somehow we've abandoned that for the last twenty five
years because it's the Internet. So I do think it's
going to have to come from regulation. I don't think it's going to come from the US. It is coming from the UK.
It is coming from the EU, it is coming from Australia,
(12:17):
and I think those are going to be the leaders
in this space. And you saw this with GDPR, with the privacy rules. In many ways, I don't think it solved the privacy problem around the world, but it
moved the needle on the problem. And the EU and
the UK have moved very aggressively on AI safety, on
digital safety, and on misuse of monopolies, and I think
it's going to have to come at that level.
Speaker 1 (12:38):
I want to talk about some of the more personal
ways in which we can experience deep fakes. I think
a lot of people think maybe it only touches politicians
or celebrities. But there was an NPR story about a
case you worked on, one that involved a Baltimore teacher. Can
you talk about what happened there?
Speaker 2 (12:55):
This case, I'm fascinated by it, and I still don't think we've gotten to the end of it. Let me tell you, first of all, for your listeners, what the case is. At a Baltimore public school, audio of the principal saying things that were racist was leaked, and it was leaked to some news outlet, and it was bad. If you listen to it, it's pretty bad. And the principal said, this
(13:17):
isn't me, this is AI generated. And we analyzed the audio. Several labs analyzed the audio. There is alteration to the audio. That is, we can hear and see that it's been spliced together from five or six segments, but when we analyzed
the individual segments, it is not one hundred percent clear
(13:37):
to us that it is AI generated. It could be
that he said these things, but they were sort of
stitched together in a way that put them out of context,
which would be deceptive. It could be that it's AI
generated and our tools simply didn't detect it. It could
be that this is a case of the liar's dividend,
where the principal really did say this, but he's claiming
he didn't say it.
Speaker 1 (13:55):
Hany, can you explain exactly what the liar's dividend is?
Speaker 2 (13:58):
The liar's dividend goes something like this. It says, when you live in a world where anything can be manipulated.
Any image can be fake, any audio can be fake,
any video can be fake. Nothing has to be real.
I get to use the fact that fake things exist
as an excuse for what I've done. But this case
is a really good example of how dangerous this technology
(14:22):
is for two reasons. One is, with twenty to thirty
seconds of your voice, I don't need hours. I can
clone your voice. I can upload it to an AI tool that I use, and then I can type and have you say anything I want. That means anybody with twenty seconds of their voice available has a vulnerability. So this is not just for movie stars and podcasters. This is everybody.
(14:45):
Number one. Number two is anybody who's caught saying or
doing something that they don't want to take ownership of
can say it's fake. Yep, the dog ate my homework,
all right, this is easy. And so both of those
are problematic because where's our shared sense of reality. It
used to be when you had images and video, despite
(15:07):
the fact that there was Photoshop, despite the fact that Hollywood could manipulate videos, we had a pretty reasonable
confidence in what we read and saw and heard. And
you can't say that anymore. This is why I spent
so much time talking to journalists and fact checkers and
lawyers and law enforcement. So on this particular case, it
really showed how this has trickled all the way down
(15:28):
to high school teachers.
Speaker 1 (15:30):
Zooming out from the individuals to the collective. One of
the interesting things that happens is whenever there's like a
world event that everyone's paying attention to, you get this
fire hose of fake images. I remember in the early
days of the conflict in Gaza, there was this aerial
image with what was supposed to be Palestinian tents spelling out
(15:52):
the words help us, you know. Or right after the LA fires began, there were these images of the Hollywood Sign on fire. I don't know how many people believed these images were actually real or, in some ways,
what the harm is if they did. But what's going
on here?
Speaker 2 (16:08):
So let's start with the LA fires. First of all,
many images coming out of those fires were fake. What's
the harm, Well, this one's easy. If people believe there's
fire in this neighborhood, that is very bad. Fire departments
are going to get distracted. First responders are going to
get distracted. People are scared that their neighborhood is on fire.
They're going to get distracted. So I do think there
is real harm. I think in the Gaza images also,
(16:30):
this is a complicated conflict, and we are all trying
to get our heads around this thing and figure it out,
and meanwhile people are fanning the flames, trying to push
a particular narrative on either side, and I don't think
that's healthy. Look, we can have serious discussions about how
to combat climate change, we can have serious discussions about
how to resolve the Israeli Palestinian conflict. We can have
(16:53):
serious discussions about a lot of things, but we've got
to start with a set of facts. And when you
pollute the entire information ecosystem, we are at a loss. You could say, okay, well, somebody believed the fake image of the tents. Okay, who cares? But here's why you care,
Because then when the real images come out showing human
rights violations, showing people being killed, people being bombed, how
(17:16):
do I believe it? When you pollute the information ecosystem,
everything is in doubt. And suddenly you have people who
are denying that anybody's died, You have people denying that
the fires exist, you have people denying that people are
dying from COVID. Because this is how untrusting we have become,
and that I have a real problem with, because look,
no matter what side of the political or ideological aisle
(17:39):
you are on, can we at least agree that if
we don't have a shared factual system, a shared sense
of reality, we do not have a society or democracy.
We can't be arguing about whether one plus one is two.
And I would argue that this problem started well before
deep fakes. Social media is the one that is amplifying
and encouraging this type of behavior because it engages users,
(18:04):
drives ads, drives attention, drives profits. The problem is not
just the creation side, it's the distribution side, and that,
I would argue, is the bigger problem here than the
deep fake.
Speaker 1 (18:16):
Coming up: Hany Farid on what it takes to identify a deep fake. Stay with us. When we first spoke,
it was just five years ago in twenty nineteen. The
big question at the time was is there going to
be a causal piece of fake media that measurably sways
(18:43):
the outcome of an election? And some people say the
answer to that is no. I mean, The New Yorker ran a piece in twenty twenty three saying, basically, you know, that deep fakes haven't had that cataclysmic effect that some people thought they would. The Atlantic ran a story recently under the headline AI's fingerprints were all over the election, but deep fakes and misinformation weren't the main issue, and
(19:05):
the point about both pieces was that what deep fakes are really being used for is to create memes and satire rather than to directly trick people. And the second point was, quote, to growing numbers of people, everything is fake now except what they already know or feel.
Speaker 2 (19:21):
Yeah.
Speaker 1 (19:22):
So has this been less explosively destructive than people thought
it would be? Or are The New Yorker and The Atlantic slightly missing the point, in your view?
Speaker 2 (19:32):
I agree and disagree with them. I agree that there
was no single atomic bomb that got dropped, where you can draw a line from A to B saying this changed an election. But nobody thought that was going
to be the case. So I think that's a little
bit of a straw man argument.
Speaker 1 (19:47):
Right.
Speaker 2 (19:47):
Okay, here's the other reason I disagree. Go talk to
the people in Slovakia, because what they will tell you
is that forty-eight hours before the election, there were two candidates, a pro-NATO and a pro-Putin candidate, and the pro-NATO candidate was up four points. A deep fake of the pro-NATO candidate was released saying, we're going to rig the election, and two days later the pro-Putin
(20:09):
candidate won by four points. There was an eight point
swing in the polls in forty eight hours. Now were
the polls wrong? Possibly? Did it have anything to do
with the deep fake? Don't know, but this could have
been the first example, just a couple of years ago,
of where a deep fake was a tipping point. So
I'm not sure i'd buy that story. I think this
(20:31):
is more about death by a thousand cuts than by
dropping an atomic bomb. I think that when you keep
polluting the information ecosystem, everybody loses trust because you don't
trust NPR, you don't trust The New York Times, you don't trust CNN. Who do you trust? Well, you trust
the guy who's yelling at you telling you what to believe, right,
because you've sort of given up. Yeah, And I would
(20:53):
say that, you know, fundamentally, is that a deep fake problem? No, I think that's a social media problem. I think that's a traditional media problem. I think that's a
polarization problem. I think it's the nature of politics today,
both here and abroad, because we have politicians who are
just outright lying to us now. So, can you point specifically to deep fakes? No, but
(21:14):
I do think it was an accelerant. I do think
it contributed to our general distrust and then our inability
to hear things that go against our worldview, and I
do think that that effected change. I do think you
can't look at the landscape of what Facebook and Twitter
and YouTube and TikTok, how they control the information ecosystem
(21:35):
for the vast majority of Americans, how they have promoted
false information, both traditional falsehoods and deep fake falsehoods. You
can't look at that and say that has had no
impact on the way we think. I think that's probably wrong.
Speaker 1 (21:47):
So you mentioned you've been at this for some time
since opening that legal textbook all those years ago. Could
you have imagined how much trust in society has eroded? And where did you see it kind of happening along the way? So the answer is no, I didn't see this coming. And in the early days, the liar's dividend,
(22:08):
it didn't exist. When there was film and audio of you saying and doing something, nobody said
it was fake. And by the way, here's how you
know I'm right. Go back to twenty sixteen. Then the
first candidate Trump got caught on the Access Hollywood tape
saying what that he grabs women in places that I
won't mention on this podcast. And when he got called
(22:28):
on it, he didn't say it was fake. He apologized. Three months later, when he was now in office, he
said it was fake. That was the moment when I
realized this was a real thing. So it was actually
fairly recently, because up until then the tech wasn't good enough,
and frankly, nobody had thought about it. But once Trump
normalized that if you don't like information, you call it fake news,
(22:49):
suddenly this became the mantra. AI was still pretty nascent,
but now it's actually plausible deniability. Now it's actually
not an unreasonable thing. And if you go back and
look at that Access Hollywood tape, you never see him talking.
Speaker 2 (23:01):
It's just audio. And so if that was released today, yeah,
we'd have to think pretty carefully about whether it was
real or not.
Speaker 1 (23:09):
Your vocation in some ways is to talk about this and
bring attention to it in the media. But your business
is also to bring some technological solutions to the detection problem.
Speaker 2 (23:19):
Is that right? Yeah? Yeah, So I will tell you
I say this only half jokingly. I started the company
just because I couldn't keep up with the demand. I
just needed people to help me do this.
Speaker 1 (23:28):
Because that's the best way to start a company, I think.
Speaker 2 (23:31):
Yeah, I'm like, guys, I used to get one call a week, and then it was one a day, and now it's ten a day, and pretty soon it's gonna be one hundred a day. I can't. I honestly can't keep up. But to be less snarky, if you will, like, you know, we
really need to get a handle on this problem. And
I think there's a couple of places we want to
help organizations get a handle on it. So clearly, media outlets,
clearly you have to help the big news wires and
(23:54):
the major news agencies when they are dealing with breaking
news of the LA fires and Gaza and the inauguration and whatever.
They've got to know what the hell's going on. We
have to help them. We clearly have to help law
enforcement and national security agencies reason about a very complicated world,
from evidence in a court of law to things with
geopolitical implications. We have to help organizations. We are seeing
(24:17):
massive frauds being perpetrated on Fortune five hundred companies. We
are seeing imposter hiring. We are seeing people attack companies
with fake audio and video of CEOs, to damage their
stock price. We want to help individuals, right, deal with this stuff when they are getting information: how do they trust it?
And so we are developing a suite of tools that
(24:38):
would authenticate content, images, audio, and video to help people
make decisions. And it's not a value judgment. We're not
saying this is good or bad or neither. In fact, we're not even saying if it's true or false. We are simply saying, is this an authentic photo, image, or video, or is it not? It's a pretty simple question
with a very very complicated and difficult answer. And by
(24:59):
the way, it's not an if, it's a when. It's when that happens that you have to start thinking about this, because it will happen, right, because anybody can create these fakes. Now, if somebody doesn't like their
seat on an airline, they're going to go off and
attack your company by creating a fake image or a
video or an audio and they're going to try to
hurt you. And it's frankly not that hard to do.
Speaker 1 (25:19):
And the product elements of what you're working on, what
is the technology that enables it.
Speaker 2 (25:24):
Yeah, I'm going to tell you a little bit about it,
but not all of it, because you know, in the
cybersecurity world you have to be a little careful. But
underneath it is I've been doing this for twenty five years.
We have developed a suite of different technologies that look
at content from many different perspectives. We think about the
entire content creation process. So let's take an image for
an example. What happens with an image? You start out
(25:46):
here in the physical three dimensional world. Light moves and
hits the front of a lens. It passes through an
optical train. It hits an electronic sensor, where it gets converted from light, from photons, from analog to digital. It goes through
a series of post processing steps. It gets compressed into
a file, It gets uploaded to social media, it gets
(26:06):
downloaded onto my desk, and then my job begins. And
what we do is we insert ourselves into every part
of that process, the physical world, the optics, the electronic sensor,
the post processing, the packaging, and we build mathematical models
so that we can say this is physically plausible, this is
physically implausible, this is consistent with a natural image, this
(26:27):
is consistent with an AI generated image. And we have
this suite of tools and then collectively, those come together
to tell a story about our belief that that piece
of content is authentic or not.
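As an illustration of that layered approach, here is a small Python sketch that runs a few independent checks along the pipeline described above and collects them into a report rather than a single verdict. The check names, the 64-pixel block size, and the crude noise proxy are assumptions for illustration only; they are not GetReal Labs' actual tools.

```python
# A simplified sketch of layered image checks: metadata/packaging,
# then a crude sensor-noise uniformity test, combined into one report.
from PIL import Image
import numpy as np

def authenticity_report(path: str) -> dict:
    img = Image.open(path)
    report = {}

    # Packaging layer: camera metadata and JPEG quantization tables often
    # disappear or change when an image is re-saved by editing software.
    report["has_camera_exif"] = len(img.getexif()) > 0
    report["num_jpeg_quant_tables"] = len(getattr(img, "quantization", {}) or {})

    # Sensor / post-processing layer: real sensor noise is roughly uniform
    # across the frame; locally "too clean" regions can hint at synthesis
    # or heavy retouching. Crude proxy: spread of per-block noise levels.
    gray = np.asarray(img.convert("L"), dtype=np.float64)
    residual = gray - 0.25 * (
        np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
        + np.roll(gray, 1, 1) + np.roll(gray, -1, 1)
    )
    h, w = residual.shape
    blocks = residual[: h - h % 64, : w - w % 64].reshape(h // 64, 64, w // 64, 64)
    block_noise = blocks.std(axis=(1, 3))
    report["noise_uniformity"] = float(block_noise.std() / (block_noise.mean() + 1e-9))

    return report
```

Each entry is one weak signal; in the spirit of the description above, the value comes from combining many such signals, not from trusting any single one.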
Speaker 1 (26:38):
What degree of conviction do you have on any given
piece of content that you can verify whether or not it is real?
Speaker 2 (26:44):
First of all, great question, and I don't think it's
going to surprise you that the answer is complicated. I mean,
I'd like to be able to tell you ninety nine
point seven percent. And by the way, anybody who tells
you ninety nine point seven doesn't know what they're talking about.
And here's why: it depends. So, for example, if you give me a high-resolution, twelve-megapixel image that's high quality, we can say a lot. If you give
(27:06):
me an image that's three hundred by three hundred pixels
and has gone through five levels of compression and resizing
and uploaded and downloaded, it's really really hard. So it
depends on the content. So there's a number of factors
that play in, but the obvious ones are this. If
you have a high quality, high resolution piece of content,
we're pretty good at this, and that level of confidence
(27:27):
and ability goes down as the quality of the content degrades. It's like a physical DNA sample. You find a pool of blood, your DNA sample is good. You find a tiny little half a drop of blood, not so good. Look,
anybody who knows anything about the space knows there are
days where you say I don't know. I would much
much rather say I don't know than get it wrong.
Speaker 1 (27:49):
So you told us about a regulation solution, and you're working on a product solution. What about the average person who is listening to this podcast? What is the way to protect themselves in this changing environment?
Speaker 2 (28:02):
This is easy. I really like this question because the
answer to everything is hard. This one's easy.
Get off of social media. Stop getting your news from
social media. That's it. You're not going to become an
armchair analyst. You're not going to become a digital forensic expert.
You're not going to become a misinformation expert. You can't
do that, you can't do it at scale. But here's
what you can do. Stop getting your goddamn news from
social media. Hany, thank you, great talking to you.
(28:26):
I can't believe it's been five years. Okay, let's do
this again in five years and see where
we are, and maybe it'll be my avatar that'll be
talking with you.
Speaker 1 (28:33):
That's it for this week in Tech Stuff. I'm Oz Woloshyn. This episode was produced by Eliza Dennis,
Victoria Dominguez, and Lizzie Jacobs. It was executive produced by me,
Cara Price, and Kate Osborn for Kaleidoscope and Katrina
Norvell for iHeart Podcasts. Jack Insley mixed this episode and
(28:54):
Kyle Murdoch wrote our theme song. Join us on Friday for a
special crossover episode with the podcast Part-Time Genius. We'll
be talking to Brian Merchant, author of Blood in the Machine,
about being a Luddite. Please rate, review, and reach
out to us at tech Stuff podcast at gmail dot com.
We're excited to hear from you.