Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Why are we humans so easy to deceive? What are
the tricks of the trade, and how can we train
ourselves to be more aware of these? And what does
any of this have to do with Theranos or forging
letters or the shell game? Welcome to Inner Cosmos with
(00:28):
me David Eagleman. I'm a neuroscientist and an author at
Stanford and in these episodes, I examine the intersection between
our brains and our lives, and today's episode is about deception.
(00:49):
You presumably wouldn't do something to cheat a stranger out
of twenty dollars, So why are there people who would
do that?
Speaker 2 (00:57):
And what can we do
Speaker 1 (00:59):
To be a little more thoughtful and aware and immune
against deception? So in a previous episode I talked about
a really impactful event when I was a neuroscience graduate
student getting my PhD. I was a second year student
in the department and this new young woman came in
as a first year student. We'll call her Tanya, and
(01:22):
everyone could see that Tanya was great. She had great grades,
top standardized test scores, terrific letters of recommendation, and in
the interviews she even won over my graduate advisor, who
was famously spiky towards people. And I tell her full
story in episode sixteen, but the short version is that
(01:43):
she faked everything on her graduate school application. She faked
the school transcript and the GRE scores and the letters
of recommendation, and she was only caught because an administrator
at the school was so impressed with her that she
decided to call the professors who had written the letters
(02:03):
of recommendation to ask how they'd produced a student like Tanya.
And that's how the whole house of cards came tumbling down. Now,
for those of you who listened to episode sixteen, you'll
remember that Tanya's story then got much weirder because she
went to Yale University and tried to pull exactly the
same trick, and when she was caught there, they put
(02:25):
her in jail. And then she and her mother got
caught doing a drug deal with two undercover agents. And
then Tanya decided to try murdering a girl who looked
vaguely like her to avoid going to prison. Now that
plot failed, but only barely. So that's the quick recap
of the story. But the part I want to concentrate
(02:46):
on today is why did none of us see this coming?
We all thought she was great. And this was a
neuroscience graduate program full of people who were aspiring learners
about the human brain and faculty who were presumably already
experts in the brain. And yet every single one of
(03:06):
us thought that Tanya was great. None of us even
had the briefest glimpse of doubt or suspicion when she
started school, And we were all maximally surprised when we
saw how completely we had been fooled.
Speaker 2 (03:23):
So why were we so blind?
Speaker 1 (03:26):
Well, first of all, none of us would have thought
about faking our transcripts and writing fake letters and so on.
Speaker 3 (03:33):
That kind of.
Speaker 1 (03:34):
Deception didn't exist in our mental models, and so it
was totally invisible to us when it was sitting there
right in front of us. And one of the themes
of this podcast and of my next book is that
we need to get better at seeing outside the garden
walls of our own internal models. This is really what
(03:56):
the passage into maturity is about, seeing the limitations of
our own thinking and realizing that what's going on in
someone else's head might be very different than what's going
on inside ours, even if we're not the kind of
person to do something, even if it seems absolutely unimaginable
(04:18):
to us, it doesn't mean that
Speaker 2 (04:20):
It seems that way to someone else.
Speaker 1 (04:22):
And if you heard episodes twenty and twenty one, you'll
know that we dove into some of the really awful
things that happened during wartime. And again, just because you
can't imagine hacking your neighbors to death with a machete
or shooting your neighbors or bayonetting them, it doesn't mean
that someone else can't imagine that and won't foment violence
(04:44):
without having much compunction about it. So an understanding of
history requires an expansion of our mental models, and that's
what's required for navigating day to day life as well,
because not everyone is just like you on the inside.
For example, psychopaths make up about one percent of the population,
(05:09):
and by the way, they make up about twenty to
thirty percent of the prison population. They don't care about you,
they don't simulate what it is like to be you,
and they can be violent towards you because they just
see you as an obstacle to
Speaker 2 (05:25):
Flow around to get what they want.
Speaker 1 (05:27):
And I'm going to do an episode on psychopathy soon,
but the point I want to make right now is
that if you are not a psychopath, it is very
difficult to imagine someone behaving that way. But you'll be
smarter in your daily life if you understand how other
people can be different from you. Now, sometimes people are
different in wonderful ways, like when you see some situation
(05:50):
in which someone is braver than you, or just more
charitable with a higher percentage of their money, or more
willing to do the right thing, like to climb the
side of a building to save the toddler hanging off
the balcony, even though you would be more scared. But
sometimes we see people different from us in the other direction,
(06:11):
people who cheat and lie and steal, and it's hard
to understand because we don't have a good model of that,
and so we're often caught completely by surprise. I'll give
you an example of this when I was young. When
I was sixteen years old, I was traveling with my
parents in Barcelona, and I was spending an afternoon walking
(06:31):
around by myself, and I saw a crowd of people
playing a shell game. You know, this is the game
where a person puts a small ball under one of
three cups and then rotates the cups around and around
and then you have to guess which cup the ball
is under. So I stopped to watch because there was
(06:52):
a small crowd and the dealer was moving the cups around,
and there was this pedestrian like me who had put
down some money. And the pedestrian watched the cups go around
and around, and when they stopped, he pointed to a
cup and it was
Speaker 2 (07:06):
The wrong cup.
Speaker 1 (07:07):
But I could see where the cups had moved, and
I knew it was the cup on the left, but
this pedestrian pointed to the middle. So the dealer uncovered
the middle cup, and the whole crowd made a whooping sound,
and so the pedestrian put down more money to play
another round, and the dealer shows the ball under the
left cup, and then he rotates the cups around faster
(07:27):
and faster, but I kept my eyes locked on the
correct cup, and again the pedestrian guessed wrong, but I
knew where the ball was. So this happens a few
more rounds, and the pedestrian gives up, and the dealer
looks at me and motions for me to put up
some money, so I did. So he shows me the
ball and rotates the cups around and around, and I
(07:48):
keep my eye on it, and when he stops, I
point to the correct cup and the cup was empty.
Speaker 2 (07:55):
I got it wrong, what was going on?
Speaker 1 (07:58):
So he motions for me to put down more money,
and I want to win my lost money back, so
I put down more and he runs the rotations again,
and I point to the cup where the ball should be,
and again it's empty. And before I know it, someone
in the crowd makes a whoop sound, and suddenly the
dealer folds up the board and the entire crowd disappears,
(08:18):
and I'm standing there all by myself in the street,
and I felt like such a fool because I had
just been deceived. Now this is embarrassing for me to
tell the story, and even all these years later, there
is some pain in the remembrance.
Speaker 3 (08:31):
But my hope in
Speaker 1 (08:32):
Relating the story is that at least one teenage listener
gets an expansion of their mental model from this and
doesn't have to play this game. Just in case you
don't know, there is no honest version of the shell game.
It's always performed by hucksters who use sleight of hand
to move the ball from one cup to another, and
the whole crowd is in on the deception. Now, when
(08:55):
I did some research on this, I found the shell
game is very old, so the game of people trying
to deceive other people is ancient.
Speaker 3 (09:04):
Now.
Speaker 1 (09:05):
Sometimes deception is planned in advance like this, and sometimes
people are just trying to get out of a bad situation
and they make it worse. Sometimes it's hard to tell. I mean,
look at the company Theranos, which you've probably heard of.
They were a health tech company founded by a young
woman named Elizabeth Holmes, and they were out to develop
(09:25):
a biological test that could measure a whole bunch of
things using just a drop or two of blood. So
they raised seven hundred and twenty four million dollars from investors,
including the media mogul Rupert Murdoch and former Secretary of
State Henry Kissinger, and all kinds of big players were
(09:45):
enthusiastic about this revolutionary technology. But this all came crashing
down in twenty fifteen when it surfaced that Theranos had
been lying about what their technology could do. Holmes was
charged with fraud and conspiracy, and she was found guilty
in twenty twenty two. Now you probably know this story,
(10:06):
but the interesting backside of this story is that so many
people here in Silicon Valley have said things to me
suggesting that they would have never fallen for Theranos, like, oh,
I would have known right away that couldn't work. But
that's silly, and I generally don't believe them because it's
easy to get duped. And if the other people who
(10:28):
believe in it and invest in it and sit on
the board are billionaires and big shots, what makes
you think that wouldn't have a great gravitational pull on you?
Because the truth is, when we look at things that
other people believe in, or even simply things that match
our expectations, we often don't do any further looking into it. Now,
(10:51):
this puts us in a tough spot because we have
to trust other people. We really have no choice, because
we can't disbelieve everything or have time to check
on everything. So how do we work around this bias?
How can we take some of the tools of science,
which are all about clear thinking and import these into
(11:13):
our daily lives. Well, I'm no expert on this. I
often err on the side of believing everyone, but I
knew who to call. My colleagues Dan Simons and Christopher
Chabris recently wrote a terrific book all about deception called
Nobody's Fool. So I rang them up.
Speaker 4 (11:31):
The most interesting thing we found about what all the
cons and deceptions have in common is that the con artists,
the scammers, the swindlers, whatever you want to call them,
all seem to be taking advantage of sort of the
same set of our cognitive proclivities and our attentional biases.
What we like to pay attention to, what attracts us,
(11:54):
and what mistakes we tend to make, and decision making
that we may not be aware of.
Speaker 1 (12:01):
That's Christopher Chabris, a professor of psychology and director of
Decision Sciences at Geisinger Research Institute.
Speaker 4 (12:08):
They may not be consciously aware of, but somehow they
have sort of learned and adapted their schemes to things
that tend to exploit these loopholes in our thinking. Not
loopholes that are sort of design flaws necessarily, they're actually usually,
you know, good things about how we think. But when
someone is really trying to take advantage of us, they
can cleverly exploit those and gain the advantage.
Speaker 1 (12:29):
So what are the important lessons for all of us
to think about, given that? I'd say there are a few.
That's Dan Simons, a professor of psychology at the
University of Illinois. One is that we have a tendency
to assume that only the most gullible or naive or
(12:52):
uneducated people fall for scams. And that's partly because we
generally only see the results of cons and scams after
they're over, right? So it's easy to see what the
red flags were when you knew it was a scam,
when you found out it was a scam, in the same
way that we can easily spot, you know, the obvious
ones in advance, things like the Nigerian prince email scam
(13:15):
that we know about. We can spot those red flags
because we've seen them before. But we tend to assume
that all scams prey on the people who are gullible.
And one of the key insights we've had across all of
the sorts of scams that we've encountered is that scams
can affect anybody. Cons can affect anybody if they're targeted
in the right way to our wants and desires and needs. Yeah,
(13:37):
you know, I thought about this a lot with Theranos.
Here in Silicon Valley, retrospectively, everyone acted like, I would
have never fallen for that. But it's obvious that a
lot of good and smart people got sucked up into that.
And so how do you, how do you interpret that?
Speaker 5 (13:55):
Well, I think it's the way we see something like Theranos
is in hindsight, after the fact, in the same way
that we might watch a heist movie or a whodunit
movie, where we know there's a heist, right,
we know there's a con artist, we know that. In
the context of the movie, we know to look for
those red flags. We're trying to figure it out, and
the characters in the movie aren't. But when it's viewed
(14:17):
from the outside, it's kind of obvious, right. So Theranos,
after the fact, Yeah, there are lots of red flags
along the way, and they've been reported thoroughly, and it's
great narrative. But when you're immersed in it and you're
trying to figure out what's the next best investment or
what do I want to get in on really quick?
If you're a venture capitalist trying to kind of get
in on the next big thing, spotting all those red
(14:37):
flags is more difficult because you're incentivized to act with
efficiency and to try and catch things before they take
off and before people know about them. So those are
the contexts in which they're the marks, rather than watching
some interesting, engaging movie.
Speaker 4 (14:51):
In the case of Theranos also, you know, there were
people who didn't invest and who didn't join the board
of directors and so on. They don't get as much
publicity as the unfortunate ones who did and look like
marks and suckers and so on in retrospect. But some
professional investors who specialized in biotech and healthcare investing, they
asked a lot more questions about the product, about the technology,
(15:13):
about clinical data, about all of that stuff, and then
they walked away. And I think one other important point
about Theranos is, I think, although I don't know,
because I'm not inside the heads of all those people,
but I think a lot of people didn't even consciously
consider the idea that there might have been a scam
or a fraud going on. Everything seemed good, everybody was optimistic,
(15:33):
there was a great vision. Little things that seem kind
of odd, maybe you can explain away: this is just
a quirky company. The CEO is a little odd, you know. Well,
they've got all these famous people on the board. That
must be a good thing. Simply considering the possibility in
a big decision making situation that you maybe are being
scammed or there's something going on that you're not aware of,
you know, could be the first step towards like starting
(15:55):
to see those red flags or look for those red flags,
and maybe you can actually find some of them if
you were even thinking about the possibility that they might
be out there somewhere. We require a lot of trust
just to get by in life. And so how do
you guys think about striking a balance of trust and
a little bit of suspicion?
Speaker 5 (16:15):
Well, trust is essential, right. In fact, we tend to
assume that when we hear something from somebody it's true,
until we take time to think about it otherwise.
And most of the time that's a great thing because
in most conversations, nobody's trying to lie to you. In
most interactions, nobody's trying to con you. I mean, the
odds of any of us being a victim of a
Bernie Madoff or a Theranos is pretty low.
(16:37):
The odds of any of us receiving fake information on
social media is pretty high. But we tend to be
trusting of the information we get, and it's a good
thing that we are.
Speaker 4 (16:47):
Right.
Speaker 5 (16:47):
If we were constantly skeptical of everything we encountered, we
could just never do anything. We could never have a conversation.
You can't check everything. You can't be a perpetual skeptic
or cynic about everything. You're not going to go check
in the grocery store if you buy an organic apple, right,
You're not going to go out to the farm and
make sure they didn't use any pesticides. Right, It's too much.
(17:09):
We can't really check that. We have to accept that
some of the time we're going to have to be trusting.
And the key is to kind of figure out when
are those times when we're at the greatest risk, When
are those times when the consequences could be bad enough
that we really would want to check and see if
we're being scammed.
Speaker 2 (17:25):
So before we go on to some other topics, just
can you give a few examples of hoaxes or swindles
or scams so that our listeners can understand what we're
talking about?
Speaker 4 (17:37):
I have a good one that I think a lot
of people probably haven't heard of, but they really should have,
which is sometimes called the president scam or the CEO scam,
and I didn't discover it. It was going
on for a while and it was documented elsewhere. But
I think it's a great example of some of some
of the key ideas. So this French Israeli fraudster named
(17:59):
Gilbert Chikli developed a scam in which he would call
up sort of mid level employees of French companies, pretending
to be the CEO of the company, reaching down through
the ranks and calling up some middle manager and giving
them a task to do directly for him. And the
task always wound up in money being transferred directly to
(18:21):
some bank account or person or something like that, where
of course it wound up with Shickley and whoever his
associates were in their bank account somehow. And I think
it's kind of an audacious con because it's one guy
with a telephone calling people up who he's never met
before and talking them into essentially giving him a lot
(18:43):
of money. But it does illustrate sort of the idea
of truth bias that Dan was just talking about, that
if you don't believe that the person on the phone
is the CEO calling you, the whole thing goes nowhere.
But once you believe that, then the scam has a
chance to get through. And it also illustrates sort of
some of a, sort of a selection bias we see
in cons. Like, we hear about the ones that worked,
(19:04):
but we don't know about all the people who just
like hung up the phone or deleted the email when
he tried to, you know, to start talking
them into it. Just like the millions and millions
of people who delete the Nigerian prince emails never get
mentioned anywhere. You know, it's just a few people
who actually wind up going through with it. That CEO
scam or President scam went along for quite a long
time and sort of morphed and changed into different versions
(19:26):
where eventually people were pretending to be the Defense Minister
of France calling, you know, contacting wealthy individuals, especially with
French ties, and saying that the government of France needed
their help getting hostages, secret hostages out of Syria and
Pharmisis and so on, and wound up taking i think
something like eighty million dollars or eighty million euros in
(19:47):
total from a number of you know, French companies and
wealthy individuals by sort of similar tactics.
Speaker 5 (19:53):
There's a new modern version of this which is much
dumber and much simpler. It doesn't require any sort of
sophisticated persuasion. People just send an email purportedly from the
boss of a company and saying, Hey, I'm in a
meeting right now, but I need to transfer these funds
right away or I need to close this sale. Can
you just go ahead and make this transfer for me?
And if the email happens to reach somebody who is
(20:15):
responsible for doing that, then they might go ahead and
do it without even double checking, when in reality the
money would just get paid to some account of
the scammers. And this is so pervasive that I have
a cousin who teaches tennis. She runs a tennis club,
and her underlings, the other people teaching there,
regularly get emails from her, not really from her, but
(20:37):
purportedly from her, saying hey, I'm in a meeting, can you
run this? Can you make this payment? And her employees
and coworkers know that that's not true. She's pretty much
never in meetings, and they're not the sorts of people
who make purchases. But again, if you send it out
to enough people, you're going to happen to hit some
who, in that moment, are busy doing things, are used
to getting emails from their boss asking them to do
(20:59):
something really quickly, and will go along with it without
questioning it. And this is a major source of business fraud.
Speaker 1 (21:06):
Do you guys have any other examples, just something,
some hoax or swindle or something that you came across
that you think is really illustrative?
Speaker 4 (21:15):
I can give another example from the world of chess.
Speaker 3 (21:19):
So I'm a chess player. Well, Dan is also a
chess player.
Speaker 4 (21:22):
I'm probably a more serious player than Dan is, and
he's a funnier player. Dan's a funny player, I'm a
serious player. I'll try to be funnier though. When you
play chess online, you don't see the person you're playing.
It's just a screen name and all you see is
the moves they play on an animated chess board, kind
(21:42):
of like you're playing a video game, right, You just
see the moves being made. And so I was playing
a game once. This has happened to me more than once.
But the occasion that I remember is I was playing
a game and it was a guy I had never
played before and the game started, and he had a
similar rating to mine, meaning, you know, we were both
pretty good players. I should be ready for a you know,
I should be ready for a good game. And every
(22:03):
move I made, it seemed like he always found like
a great response, and he never made a mistake. And
times when I thought I was winning, I really wasn't
winning because he found the escape. And moreover, he was
moving quite quickly, like he would make every move in
like five to ten seconds. And I was like, wow,
this guy's putting a lot of pressure on me. You know,
I'm thinking for a minute sometimes on these moves, and
he comes back in ten seconds all the time. And
(22:24):
in the end I got checkmated and I lost the game,
and I thought, wow, that guy like that guy played
a really good game against me. But then when I
looked at the game afterwards, chess dot com where we
played this game shows you exactly how much time both
players use in every move after the game, and I
noticed that he was making all of his moves within
that very tight band of five to ten seconds per move, basically,
(22:46):
never less than five seconds, never more than ten seconds.
Maybe it was twelve seconds, I don't remember, but if
you looked at my own graph, there were a couple
of moves that I took like one or two minutes
on and some moves I played almost instantly, like one second.
The consistency of his timing, and also the consistency of
the fact that he never made a mistake. All of
his moves were you know, almost the best move, if
not the best move according to computer analysis, really reveals
(23:10):
that all he was doing was being a conduit for
a computer, Like he was just typing my moves into
a computer and typing and putting back into chess dot
com the moves that the computer told him to play.
And here's an example where the behavior of a human
you know, really ought to be more noisy in some
fundamental way than the behavior of a computer. Humans never
play moves with robotic cadences. They never play the correct
(23:33):
move forty or fifty moves in a row, and there's
just much more variability in human decision making and almost
any human activity than there is in computer based activity.
So I filed a report. You know, you can report
any player, and sure enough, like a day or two later,
chess dot com came back and said, we're giving you
back the rating points you lost. This guy has violated,
you know, violated our fair play policy. And that kind
(23:54):
of thing happens all the time in online chess because
computers are so good that they can be used easily
to cheat. The real problem is whether people are using it
sort of over the board, in like serious
tournaments and matches. And that's a whole other controversy. But
certainly it was a case where, you know, I was essentially
the victim of a little minor scam. I happened to
figure it out, but it was a scam based on people
(24:14):
not noticing that, you know, not noticing the absence of variability,
which is a critical thing in a lot of
cons.
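Chris's point about move-time variability can be made concrete. Here is a minimal sketch, not from the episode, assuming you have each player's per-move thinking times in seconds; the numbers and the threshold below are invented, and a flag only means "look closer", never proof of cheating on its own.

```python
from statistics import stdev

def looks_suspiciously_consistent(move_times, min_spread=3.0):
    """Flag a player whose per-move thinking times barely vary.

    Humans mix near-instant recaptures with long thinks, so the spread of
    their move times is usually large. A near-constant cadence (say, every
    move in five to ten seconds) is a red flag worth a closer look.
    """
    if len(move_times) < 10:      # too few moves to judge
        return False
    return stdev(move_times) < min_spread

# Invented numbers in the spirit of the story:
human_times = [2, 75, 1, 40, 12, 3, 110, 8, 30, 5, 90, 4]
relay_times = [6, 7, 8, 6, 9, 7, 8, 6, 7, 9, 8, 7]

print(looks_suspiciously_consistent(human_times))  # False
print(looks_suspiciously_consistent(relay_times))  # True
```

In practice a site would combine something like this with move quality and other signals, but the core idea is exactly the one in the story: humans are noisy, relays of an engine are not.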
Speaker 5 (24:22):
I was gonna say one thing about that particular case. It's
a fairly minor scam, right, It's just one element of
the kinds of habits and hooks that we find really compelling,
the hook of consistency, right. Yeah, it's just
that we don't look for the noise when we should.
But if you look at bigger scams, things like Bernie
(24:43):
Madoff's Ponzi scheme or Theranos, they rely on a
whole bunch of our cognitive tendencies and they appeal to
a lot of kinds of information that we find really
valuable and that do help us most of the time,
but they take advantage of.
Speaker 3 (24:55):
Those to dupe us.
Speaker 1 (24:59):
So can you unpack that a little bit, about noise
in data and what we should be looking for?
Speaker 5 (25:05):
Well, I mean, really, in any human behavior, anything that's
governed by interaction of people, we don't expect people to
perform the same way every single time. We don't expect
a three hundred hitter, or a three thirty three baseball hitter,
to get a hit in exactly one out of every
three at bats every single time, right; they will on
average about one out of every three. But in any
(25:26):
game that doesn't guarantee they're going to get at least
one hit, right, we tend to confuse the sort of
on average performance with what happens every single time. So
take the case of Bernie Madoff's Ponzi scheme. Right, this
wasn't a sort of classic Ponzi scheme where he promised
fifty percent returns in six months like a, you know,
current crypto scam would. He returned eight to fourteen percent
(25:48):
or eight to twelve percent almost every year for the
entire life of the Ponzi scheme, with never a down
year and almost never a down month. It was like
a smooth, steady growth. And that's not what you expect
for something as complex as a financial system. You
expect ups and downs. Sometimes you'll be up twenty five
percent in a year, sometimes you'll be down ten percent
in a year, and the average might be eight to
(26:09):
twelve percent, and that'd be pretty good, But you don't
expect the average to be true of every single case.
And this plays out in many many contexts where usually
consistency is a sign that we have great understanding of
how things work. We can do it the same way
every single time. We want things to be reliable, but
the tendency to have things be too consistent can be
(26:30):
so appealing to us that we don't realize when noise
should be present. This is common in a lot of
science fraud as well, that you find results that are
just too consistent to be believable, but the people who
are making it up don't realize that they need to
make up noise too.
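Dan's point about returns that are too smooth can be turned into a back-of-the-envelope check. This is only a sketch with invented numbers and an arbitrary volatility threshold, not an actual fraud detector; a flag means "ask more questions", nothing more.

```python
from statistics import stdev

def smells_too_smooth(annual_returns, min_volatility=0.05):
    """Heuristic red flag for an implausibly steady return series.

    annual_returns: yearly returns as decimals (0.10 means +10%).
    Flags a long series with no down years and very low year-to-year
    volatility, which real markets almost never deliver.
    """
    if len(annual_returns) < 10:
        return False
    no_down_years = min(annual_returns) >= 0
    low_volatility = stdev(annual_returns) < min_volatility
    return no_down_years and low_volatility

market_like = [0.22, -0.07, 0.15, 0.31, -0.12, 0.09, 0.18, -0.03, 0.25, 0.06]
madoff_like = [0.10, 0.11, 0.09, 0.12, 0.10, 0.11, 0.10, 0.09, 0.12, 0.11]

print(smells_too_smooth(market_like))  # False: noisy, like a real market
print(smells_too_smooth(madoff_like))  # True: suspiciously smooth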
Speaker 1 (27:00):
So a lot of the reasons that we fall for
hoaxes and scams is because of cognitive shortcuts that we're
taking. So tell us about that and what we can
do about those shortcuts.
Speaker 4 (27:12):
Well, one of the most important shortcuts, I think is
it's not even so much a shortcut, it's just our
standard operating procedure. We are very good at paying
attention to things.
Speaker 3 (27:23):
You know.
Speaker 4 (27:23):
Attention is a wonderful thing. We can do things with
attention that we can't do without attention. Like we couldn't
even follow a soccer game or a football game or
a basketball game without attention, and otherwise would just be
a big blur of bodies moving around and the little
round thing like you know, flying back and forth. Occasionally
we'd have no hope of understanding it. But with attention
we can focus on selected aspects of it and sort
of put together the plot and the sequence of events
(27:44):
and what people are trying to do, and understand the
intentions behind it, all the way up to the strategies
and so on.
Speaker 3 (27:49):
It's great.
Speaker 4 (27:50):
However, the downside of attention is that when we're paying
attention to something, we may not notice other things that
we're not paying attention to. And a good fraudster knows that,
and they know that if they can get our attention
on one thing, kind of like a magician, then we
might not notice other important things that are happening. And
of course they're not doing magic, they're actually trying to
(28:12):
deceive us for profit.
Speaker 3 (28:14):
So many of.
Speaker 4 (28:15):
The basic sort of deceptions in areas like marketing, where
it's sort of not even deception in some cases, it's
just kind of like the way business is done. You
get the recipient to focus on what you're showing them,
and you can count on them usually to not ask
questions about what you're not showing them. So, for example,
like a product demo video. Like this startup company called Nikola,
(28:39):
which is still around, trying to build electric vehicles, trucks
in this case. They created a demo video of one
of their trucks tooling down a highway, looking like it was going
at a nice rate of speed, and adding impressive music behind
it and so on, counting on people not to realize,
not to think, well, wait a minute, what happened before
the demo started. What was the angle that the camera
(29:02):
was at. Actually the camera was tilted a little bit,
so what appeared to be rolling along
a flat surface was actually rolling down a hill. So the
thing actually had no functioning motor you know, and so on.
It just rolled down a hill slowly, and then the
positioning of the camera and the video, you know, cutting
did the rest of the work. And those are things
we just don't think about. Right, We're focusing on the truck.
It looks nice, it's moving, nice background, and so on,
(29:23):
and we don't ask what's missing, Like, what information are
we missing about what's here? What information are they not
providing to us? Are they telling us about all the
times they tried to make the vehicle work but it
just didn't work at all, and just showing us the
one time that it did. So attention focus is really useful,
but it creates, you know, it sort of creates a loophole.
It creates a way for other people to, you know,
(29:45):
to exploit that well.
Speaker 1 (29:46):
One cognitive shortcut that you mentioned in the book is prediction.
So how is it that we become victims of our
own life experience?
Speaker 5 (29:56):
Yeah, And I think that's a great way of phrasing it,
that it's our life experience. It makes sense for us
to have expectations based on our past experience and to
use those predictions. And the vast majority of the time
we can use our past behavior to predict what's going
to happen in the future, right, that that's a really
important thing to be able to do. The challenge comes
in that we don't tend to question enough the information
(30:18):
we get when it's perfectly consistent with what we predicted. So,
and this is something that I think is really interesting
in the context of scientific errors. So let's say you
run an experiment and you've got an experimental group and
a placebo group, and you want to see which one
does better. Right, and you're predicting your new experimental intervention
is going to do great. And let's say that you
find that the placebo condition actually does better than the
(30:41):
experimental condition. Well, you're going to really dig into those results.
You're going to dig into the data. You're going to
look at your code. You're going to make sure that
everything was coded correctly, that there weren't any data points
that didn't make sense. You're going to make sure you didn't
swap the names of the conditions so that you got
it wrong. You're going to look into it pretty carefully
because it didn't match what you were predicting.
Speaker 3 (31:01):
Had it come out
Speaker 5 (31:01):
Exactly the way you predicted, you might not dig as
closely and that's been something that's led to a lot
of errors. Right, So you have a spreadsheet that produces
the right results, and you don't double check to make
sure you didn't fill down the column incorrectly because it
matched what you were predicting. So that sort of error
is I think a really common one. We're really good
at applying our critical faculties when we see something we
(31:24):
don't like that we didn't expect that we didn't predict.
Somebody shares something on social media that was counter to
your views, you can rip into that, and we're all
pretty good at doing that, But when it perfectly matches,
we're much more likely to just quickly pass it along
and retweet it and not necessarily think through carefully is
it really true.
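One way to act on that advice is to make the checking unconditional: run the same sanity checks on the data before you ever look at whether the result matches your prediction. Below is a minimal sketch with hypothetical field names; the valid score range and the condition labels are placeholders for whatever your real study uses.

```python
def sanity_check(rows):
    """Run the same basic checks whether or not the result pleases you.

    rows: list of dicts like {"id": 1, "condition": "treatment", "score": 55.0}
    Returns a list of human-readable problems; empty means nothing obvious.
    """
    problems = []

    ids = [r["id"] for r in rows]
    if len(ids) != len(set(ids)):
        problems.append("duplicate participant ids")

    labels = {r["condition"] for r in rows}
    if labels != {"treatment", "placebo"}:      # guards against swapped or mistyped labels
        problems.append(f"unexpected condition labels: {sorted(labels)}")

    for r in rows:
        if not 0 <= r["score"] <= 100:          # assumed valid range for this sketch
            problems.append(f"impossible score {r['score']} for id {r['id']}")

    return problems

data = [
    {"id": 1, "condition": "treatment", "score": 55.0},
    {"id": 2, "condition": "placebo", "score": 48.0},
    {"id": 3, "condition": "plcebo", "score": 51.0},   # mistyped label
]
print(sanity_check(data))
```

The point is simply that the checks run every time, so a result that happens to match your prediction gets no free pass.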
Speaker 1 (31:43):
Indeed, when we see things that are familiar in matching
our expectation, we don't look further into it. So how
do we work around that bias?
Speaker 4 (31:51):
Well, I would say obviously the first thing is to
be aware that we're doing this. That's the first step.
Second step is to and again like not every moment
of every day, but when you're making a big decision,
or when you think that the stakes are high, or
when someone might be trying to deceive you, ask consciously,
explicitly whether you predicted what just happened, and if you
(32:16):
did predict it, then actually check it out as well.
I think a lot of times we don't even sort
of stop to wonder whether this is coming out exactly
the way I predict it, because you know, things rarely
happen exactly the way we predict, you know, especially in
an environment we don't have a lot of experience with before.
When we're doing a new experiment, testing a new theory,
should it really come out exactly like we predict? I
(32:37):
mean, maybe if we're the best scientist ever, you know,
but often it doesn't go that way. So we should
be vigilant at those points also to see like, is
our code right, did we you know, did we make
a mistake or something like that. The example that Dan
gave about switching the columns, you know, or switching the
variable names or something like that is actually exactly what
happened in a you know, a fairly recently uncovered case
(32:59):
where where the data totally did not support the claim
that was being made. This was a study of the
idea that signing a declaration at the top versus at
the bottom would make you more honest in what you
declared on that form. So, in this case it was
an automobile insurance company. They were asking people to report
how many miles they had driven their vehicles in the
(33:20):
previous year. And the test was sign at the top
saying you're going to be honest in your reporting, versus
sign at the bottom saying you've been honest in what
you reported. The idea was like, signing first would draw
your attention to honesty, and you'd you know, produce more
you know, more accurate, more honest results in that case.
Speaker 3 (33:36):
And when.
Speaker 4 (33:39):
When this experiment was done and the data file was
being looked at by some of the researchers, initially it
seemed like there was no effect at all, or the
effect was even the opposite of what they had expected.
But then one of them said, oh, we accidentally switched
the columns, you know. So once the columns were switched back,
then the effect, you know, turned out to be right
basically exactly as Dan said, you know, you switched sort
(34:01):
of the you know, the treatment and the placebo in
this case, the sign-first and sign-later columns. Well,
it turned out in retrospect that the entire data set
was fraudulent. But once they got the result that you know,
that fit the theory, or fit the prediction, or fit
the expectations, then apparently they stopped looking to see does
the rest of the data make sense? Are there any
obvious red flags in there?
Speaker 3 (34:22):
And so on.
Speaker 4 (34:23):
I think it's a perfect example of, at least, you know,
some authors of that paper being satisfied that the, you know,
the theory had been confirmed and not looking deeply enough.
Of course, you know, researchers are taught to look at
the distributions of their variables, look at all of this
kinds of stuff and so on, before getting too excited
about just you know, confirming their hypothesis. But sometimes that's
hard even for experienced scientists to do so. In our
(34:47):
own everyday life, we should be more aware of when
our expectations are being sort of exquisitely satisfied. That could
be someone deliberately designing something to, you know, to take
advantage of us.
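In that insurance-mileage case, the outside researchers who later examined the posted data reported, among other things, that the mileage numbers were spread far too evenly to look like real driving. As a generic illustration only, with simulated numbers rather than the actual dataset, a crude text histogram is often enough to make an implausible shape jump out at you.

```python
import random

def text_histogram(values, bins=10, width=50):
    """Print a crude histogram so a human can eyeball the shape of the data."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / bins or 1.0      # avoid a zero-width bin if all values are equal
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / step), bins - 1)
        counts[i] += 1
    top = max(counts)
    for i, c in enumerate(counts):
        bar = "#" * int(width * c / top)
        print(f"{lo + i * step:9.0f}-{lo + (i + 1) * step:9.0f} | {bar}")

random.seed(0)
# Roughly what real annual mileage might look like: bunched up, with a long tail.
plausible = [random.lognormvariate(9.2, 0.5) for _ in range(1000)]
# What fabricated numbers often look like: spread evenly across the whole range.
too_uniform = [random.uniform(0, 50000) for _ in range(1000)]

print("plausible-looking data:")
text_histogram(plausible)
print("\nsuspiciously flat data:")
text_histogram(too_uniform)
```

The specific tool matters less than the habit: look at the distribution before you celebrate the result.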
Speaker 5 (34:58):
Say, in a much more mundane case, you know,
before you repost something or share it on social media,
just ask yourself, is it really true?
Speaker 3 (35:07):
Right?
Speaker 5 (35:07):
And what would I need to know to be sure
that it was really true, And that's something you can do,
whether or not you agree with it, and it just
takes a second. But once you ask that question, you
might realize, I have no idea how I'd know if
that were actually true. You know, I'd have to do
a lot of digging. And then, you know, maybe just
don't reshare things that you haven't been able to verify.
That might actually help prevent the spread of misinformation.
Speaker 4 (35:28):
I think most people really do want to only share
true stuff. I don't think people deliberately want to spread
false information a lot of the time. I think they're just
not often thinking about whether it might be false;
they're being swept along by other cues.
Speaker 1 (35:38):
Besides that, I wanted to jump back to the science,
the practice of science, for just a second, which is,
I was just talking to some colleagues who got a big
data set and they wanted to prove something in particular
about it. They got this big thing of police records
and they had a particular thing that they wanted to demonstrate,
and they analyzed it and couldn't find evidence for their hypothesis,
(36:00):
figured out another way to look at it statistically, and
then another way, and they still couldn't find it, and finally
came up with some way, some statistical trick, and they
were very proud, they said, to have found finally this
evidence of this bias that they were looking for in
these police records. But it made me wonder about it,
because they clearly went in to find this thing. And
(36:21):
the question is did they do the right thing by
continuing to search and search and search with different statistical methods,
or is it purely that they were trying to make
the duck quack in a particular way? How do you
think about these issues?
Speaker 5 (36:37):
It's a really complicated problem because, of course you want
to be able to explore your data right. The problem
comes when you don't think about all of the alternative
paths you could take to get to the outcome that
you want to report, right. And Andrew Gelman refers to
this as the garden of forking paths. And I think
it doesn't imply any sort of malicious intent or intent
to deceive at all. But we make lots of choices
(36:58):
along the way that can influence the result, and sometimes
we don't even think about what those choices were. So
I think the problem comes not in exploring your data
really fully. It comes in only reporting the thing that worked,
the one example, the one analysis that was successful, And
what you really want to know is, hey, is this
hypothesis robust to a whole bunch of different ways of
(37:19):
testing it? And it sounds like in that particular case
it wasn't at all, right. All of the other ways
you look at it, you don't find anything. You only
find it if you look in this one particular way. Well,
that would be an important thing to know, right, That
would be important for the science in the field to
know that this only works in this one study if
you measure it this way, and if you fish around enough,
you'll find something that could be consistent, which means that
(37:42):
maybe we shouldn't trust that a whole lot until we
can replicate with that particular way of analyzing the data
and see if that holds up consistently. We should also
check to make sure that it holds up for real
reasons as opposed to just something odd about how you've
constructed the measure. Right, it might be that it's completely
reliable when you measure it that way, but it's sort
of an artifact of the structure of data of that sort.
(38:04):
So I think the most powerful way to do that
to say, hey, I want to make a claim that
we've discovered some relationship with some bias. Well, if you
want to claim that it's a general truth about how
the world works, you want to be able to show
that it works under a range of different ways of
measuring it and under a range of different conditions, not
just the one special one that you identified. And I
think this has been an issue in our field for
(38:25):
a long time, is that you know, there's obviously a
goal to try and support the theories that you're working under.
That's a natural thing to be doing, and it's not
necessarily a terrible
Speaker 3 (38:33):
Thing to be doing.
Speaker 5 (38:35):
But we haven't been completely straightforward as a field in
reporting all the things we've tried, and depending on the
kind of approach you're taking, that can be really misleading.
If you don't report everything the way it was done,
it's cherry picking in a sense. You're taking out the
results that you wanted and ignoring all the ones that
didn't work, and the reader is left with only
(38:57):
the paper that they read about it, focusing on
the information that you've told them. And it's just like
a magician who's directed you to the thing they want
you to see and hidden all of the secret methods.
Speaker 3 (39:06):
That get you there.
Speaker 4 (39:08):
Yeah, you, in a way, as the researcher who did that,
have inadvertently become the con artist, although you had no
intention to do it, and you're not actually trying to deceive,
but you're accidentally sort of using some of those very
same techniques that people could use, you know, to do
worse things than publish a paper that didn't you know,
that didn't actually have good evidence for its conclusions.
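A concrete way to act on that is to run every defensible version of the analysis and report all of them, not just the one that "worked". Here is a minimal sketch of that habit; the analysis choices and the data are invented placeholders for whatever the real decisions would be.

```python
import math
from itertools import product
from statistics import mean, stdev

def effect_estimate(values, drop_outliers, log_transform):
    """Placeholder for one fully specified version of the analysis."""
    if drop_outliers:
        cutoff = mean(values) + 2 * stdev(values)
        values = [v for v in values if v <= cutoff]
    if log_transform:
        values = [math.log(v + 1) for v in values]
    return mean(values)   # stand-in for whatever the real estimate would be

def specification_curve(values):
    """Run every combination of analysis choices and report all of them."""
    results = {}
    for drop_outliers, log_transform in product([False, True], repeat=2):
        label = f"drop_outliers={drop_outliers}, log_transform={log_transform}"
        results[label] = effect_estimate(list(values), drop_outliers, log_transform)
    return results

data = [3, 5, 4, 6, 2, 40, 5, 3, 4]
for spec, estimate in specification_curve(data).items():
    print(f"{spec}: {estimate:.2f}")
```

If the claimed effect only shows up under one of the combinations, that fact belongs in the report.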
Speaker 1 (39:28):
Yeah, it strikes me that one of the things as
I was reading this excellent book, a lot of this
just has to do with taking the tools of good
science to the way that we interpret the world around us.
So the things about asking more questions and digging deeper
and so on. But it's interesting that even scientists don't
always do good science.
Speaker 5 (39:50):
Yeah, we're all capable of being fooled, right. And scientists
and maybe journalists are trained to dig more and ask
more questions and to think critically about what they're hearing.
But you know, we're human. We have the same sorts
of habits and ways of thinking. We tend to like
results that support what we predicted, and we're drawn to
the same kinds of information, and if somebody's looking to
(40:11):
sort of hide what they're doing, they can fool scientists.
And there are plenty of fraudulent papers out there that
got through peer review even though there were red flags.
And just like that sort of heist movie that you
watch from the outside and you see all of the
red flags along the way that people are falling for,
but you're not falling for them because you're watching them.
In hindsight, they're all obvious, right, but in that moment,
(40:34):
you don't necessarily see them as red flags until you know, oh, wait,
that paper was fraudulent. I found out through other means.
Now I can see all the red flags that are there.
Speaker 1 (40:42):
Yes, So as I was reading the book, the way
I was thinking about it was, you know, the brain
is of course locked in silence and darkness inside the skull,
and it's just trying to make an internal model of
the world out there, a mental model, and we're always
we're always very limited in what our internal models can detect,
can see, And so one of the most important things
(41:05):
to expanding our model is to ask questions. And in
a sense this is the same as paying attention to something.
We ask a question that forces us to attend to
some aspect and then that updates our model a little bit.
So in chapter four, you guys had an example of
a chess grandmaster who asks his students to always
(41:29):
ask three questions when they're looking at the board.
Speaker 2 (41:31):
Can you tell us about that?
Speaker 4 (41:34):
I would love to tell you. I would love
to tell you about that. So I actually took, during
COVID, I took a summer chess camp on Zoom
with this guy and it was me and like twelve
people aged ten and under, which was fun, you know,
which was fun again, because who goes to summer
chess camps, right? It's like, you know, kids aged ten
(41:55):
and under. Very good players by the way. And you
know, one thing that the coach would
often do. His name is Jacob Aagaard. He's one of
the most famous chess coaches in the world, and I
was privileged to be able to sort of be in
Speaker 3 (42:07):
His camp for a few hours.
Speaker 4 (42:08):
He would constantly say, we need more, you know, you
need to think of more moves. Right in a chess position,
there's like thirty to forty moves you can typically play,
you know, with all your pieces, and people often
become too focused on one. So he would say, think
of more candidate moves. Think of more moves you might
want to play and analyze. And if you're having trouble
thinking of them, he has specific questions that you can
use to try to generate ideas, and one of them
(42:30):
is what's your worst placed piece? Maybe you should move
it if that's the one that's in the worst position.
Or what's the opponent's idea, Well, maybe you should come
up with a move that stops their idea. The third
one is what are the weaknesses? Maybe you should come
up with a move that attacks something that's weak. I
mean this is this will make sense to people who
play chess, but these are they're in almost all fields.
There are sort of general kinds of things you can
(42:50):
look at and principles you can use to generate, you know,
to generate more information and, as you say, I
like your way of putting it, to sort of improve
your mental model of what's really going on, because in
order to in order to play good chess moves, you
have to have a good internal model of what's going
on on the board someplace in your brain. You've got
to have it and that's a way of sort of
generating more ideas, more analysis that then updates that then
(43:13):
updates the model.
Speaker 5 (43:15):
And most of the time the models that we have
for how the world works are pretty
Speaker 3 (43:18):
Good, right.
Speaker 5 (43:18):
I mean, we're not, you know, constantly getting conned. We're
not you know, we don't have trouble getting around, we
don't have trouble communicating with other people. Most of the time,
our models of how the world is working are great.
They work very effectively. And it's only in those cases
where we need to dig a little more to update
our model for the possibility we're being cheated or deceived
(43:40):
that we need to ask a lot more questions. Right,
most of the time, we've built up these models from
a ton of experience, and they generally do okay, Yeah.
Speaker 1 (43:48):
And we have expectations about what we're looking for given
these models, such that most of the time we're filling
in the blanks. And that is at the heart of
all these hoaxes and scams that you talk about throughout
the book, is we're filling in the blanks, and the
things that aren't said, we assume we
Speaker 2 (44:06):
Know what they mean.
Speaker 1 (44:08):
So I'm curious what you guys think about the Turing
test. And for the listeners, in case someone doesn't know,
the Turing test was proposed by Alan Turing to figure
out when a machine has become as smart as a human.
The idea is that you are the evaluator and you're
talking to a machine, and you're talking to a human,
(44:29):
let's say, by text, and you don't know which is which,
and the question is can you tell the difference between
the human and the machine. And the interesting part is,
because we bring so much to the table in any
conversation and we fill in the blanks, what do you
guys think Is that a good, meaningful test or is
it flawed in that way?
Speaker 4 (44:46):
I think we have a lot of evidence that it's
not so good from ChatGPT and large language models,
which I don't think are actually intelligent in the way
that we should. I mean, I realized there might be
some dispute about this. Some people make some sort of
extravagant claims about signs of general intelligence, but I don't
think they're actually intelligent, or at least not in a
(45:07):
useful way. And yet they are extremely convincing. I mean,
I think they show that you can sort of dissociate,
you know, producing what humans expect to see next, which
is basically, you know, basically what large language models do
because they've been trained to sort of, you know, output
the most probable next token or word and so on.
You can dissociate that capability from having sort of a
(45:27):
true understanding of what's going on. For example, you could
ask an LLM to play chess with you, and it
wouldn't do very well. It would produce a lot of
stuff that sounds like chess and so on. But if
you really knew the game of chess, you would know
that this is sort of gibberish. If you don't know,
it sounds perfectly good, right, You just you just sort
of you fill it in with sort of the assumption
that this guy sounds like he knows what he's talking about,
(45:48):
you know, which is, which is not, you know,
not necessarily intelligence.
Speaker 5 (45:54):
Things like ChatGPT speak with absolute confidence and certainty, right,
and that's regardless of whether or not they're generating true content.
You know, they're the consummate bullshitter, right, in that they
are equally confident when they're completely wrong and when they're
completely right. Because there's no grounding to any sort of
reality in the world. All they're doing is predicting what
(46:16):
comes next.
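For listeners who want the mechanical picture behind "predicting what comes next": stripped down to a toy, it is just sampling a likely continuation from learned statistics. The table below is hand-written and only stands in for a trained model; real LLMs do this over tokens with vastly more context, but the point survives: fluent output needs no understanding behind it.

```python
import random

# Hand-written continuation probabilities, standing in for a trained model.
NEXT_WORD = {
    "the": {"cat": 0.4, "dog": 0.4, "theorem": 0.2},
    "cat": {"sat": 0.7, "slept": 0.3},
    "dog": {"barked": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "slept": {"soundly": 1.0},
    "barked": {"loudly": 1.0},
}

def generate(start, max_words=4, seed=0):
    """Repeatedly sample a plausible next word; no understanding required."""
    random.seed(seed)
    words = [start]
    for _ in range(max_words):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))   # e.g. "the dog barked loudly"
```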
Speaker 1 (46:18):
That's true, although I have to say one of the
things about ChatGPT that I've really come to appreciate
is that almost any question that you ask it, it'll say, look,
some people think this, some people think that, and in conclusion,
we need to balance these points of view. And at
first I found that really annoying, but I came to
understand and appreciate that it is trying, you know, because
(46:42):
of the reinforcement learning and so on, it's trying to
give different perspectives instead of sounding totally confident about just
a single answer.
Speaker 4 (46:52):
You kind of wish politicians would talk to you that
way sometimes exactly instead of the way they do it. Yeah,
well, it's, it's reasonable. I just don't think ChatGPT
believes that both of those things are equally, you know,
are equally likely to be true and so on in
any in any meaningful way. I think, you know, it's
the part of the danger I think of of models
(47:13):
like this is if you don't understand how they work,
and if you just see, this is AI, like
a lot of times you see these posts on Twitter
that say, an AI did this, that's just
designed to impress you, right, and to mislead you into
thinking that therefore the results must be created by genius
and totally accurate. But if you actually understand something about
how it works, then you will have the reaction you did.
Speaker 3 (47:34):
You'll say, well, this is this is kind.
Speaker 4 (47:36):
Of interesting that it's trying to give you know, sort
of two equally, like, probable, you know, schools of thought here,
or you know common you know, views on the topic
or something like that, but you understand something about how
they work, so it doesn't you know, they their output
sort of is more sensible to you than it might
be to people who don't know.
Speaker 1 (48:08):
I want to come back to just something you mentioned
about politicians. One of the things I was thinking about
as I was reading the book was politicians often get
points deducted if they change their mind on something.
Speaker 2 (48:19):
They're called flip floppers.
Speaker 1 (48:21):
And it's such a shame because we know that if
people are using scientific reasoning, they might reasonably change their
mind about some issue.
Speaker 2 (48:32):
At some point.
Speaker 1 (48:34):
The question is, how would you guys see a way
to change something about the way we educate the public
so that politicians who change their mind on a topic
are not considered flip floppers and deducted points.
Speaker 4 (48:48):
Now, I guess I would say that this refers back
to our taste for consistency. So there's something appealing in
consistency of a wide variety, and consistency in a person's
behavior is also appealing to us. And many times that's good.
Speaker 3 (49:04):
Right.
Speaker 4 (49:04):
You want someone who keeps their word. You want someone
who says what they are going to do and then
does what they said they would do. You want someone
who's always on time. Like those are generally positive things.
But when you're talking about complex subjects like what should
climate policy be? You know, what should tax policy be?
Any of these kinds of things and so on, facts
change over time, you know, and that alone, never mind
(49:25):
you know, using scientific reasoning, but just the changing of
facts might, you know, might change people's views. I think
teaching people about the trap of consistency, you know, may
start to help. I'm not sure there's like an easy
nudge that is going to make people suddenly prefer the
flip flopper, because, after all, sometimes the flip flopper is
just being expedient, right. It's not as simple as saying, like, well,
(49:47):
we should, anytime someone flip flops, just assume
that they have done some complex ratiocination and
integration of new information, and now they've changed their mind.
It could be they're just saying something to a different
audience because that's what is expedient for them at the moment.
It's a tough problem to sort out, but certainly I
don't think we should have the bias against changing one's
mind that really seems to be built into
the political system, because I do think it contributes to
(50:08):
some polarization also, Right, in order to be a consistent conservative,
you always have to do this, in order to be
a strong, you know. It definitely has its costs.
Speaker 5 (50:18):
Well, and you know, scientists also aren't necessarily the greatest
at updating their beliefs in light of evidence.
Speaker 2 (50:23):
Right.
Speaker 5 (50:24):
There are plenty of people who get evidence that should
contradict their claims, but they continue arguing for the old
position without fully updating their beliefs accordingly, Right, every time
somebody encounters a failure to replicate their own work, right,
what's their reaction? Well, I'm going to try and find
all of the things that were done differently in that
replication attempt that could maybe excuse why they didn't find
(50:45):
what I found, And what they really should do is say, Okay,
you know that may or may not be right. I
might disagree with it, but it should make me a
little less likely to believe in my original result. And
maybe there are alternatives and I need to go test those,
and if those alternatives don't work out, then I should
be changing my beliefs. But you don't see that all
that often. You don't see the person going back to
the study and saying, Okay, I'm going to prove that
(51:07):
it was this moderator that explained why they
didn't get it, but I did. More often than not,
you have a dismissal, right. Well, that's really not all
that different than what a politician does when they refuse
to change their views in light of new data.
Speaker 3 (51:22):
Yeah, you know.
Speaker 1 (51:23):
I think this goes back to this issue of complexity
in science, which essentially works by the
advocacy system: you're supposed to defend an
idea all the way down until you can't defend it
anymore, and then you give up on it. So
when someone says, oh, I didn't replicate your thing, you
might be the only one who says, hey, I'm willing
(51:43):
to defend this and really fight for the idea until
I reach some point. The question is, what is the
proper point? And as we said, scientists are humans too,
and so they really care about their reputation and their
previous publications.
Speaker 5 (51:59):
And that's fine. So it's: what's the point? But
also, what evidence do you use to defend it?
Speaker 3 (52:04):
Yeah? Right?
Speaker 5 (52:05):
So if the evidence you use to defend it is, hey,
we can still get our result and we can show
why you didn't get it, that's great, right? Then
you've bolstered your position; you've led everybody to update
their beliefs. If your defense is that they're incompetent,
that's not a great defense. It might be true, but
you'd have to show why, and then show that if
you do it in the incompetent way, you don't get
(52:26):
the result.
Speaker 4 (52:28):
Well, according to the adage that science progresses one funeral
at a time, that point is the point of death, right?
And so there is a generational aspect to some of
these things. There really is, right? Generations
often are associated with particular schools of thought
or views that sort of pass
Speaker 3 (52:45):
And evolve.
Speaker 4 (52:46):
It would be great if we could get there before then, right,
So we should get there before that point. But it's
I don't think I'm not so bleak. I'm not so
blik on this maybe as some people, because I think
a lot of times what happens is people just sort
of drop out of the debate. Right. They may not
have changed their mind, but they don't become an important
person in that in that area anymore, or at least
(53:06):
the next generation who's doing all the interesting work doesn't
take those people that seriously anymore. They're still alive where
you know, their funeral hasn't happened yet, but they maybe
moved on to another topic. They're doing something else in
science maybe, but they're not like exerting like an iron
you know, an iron fist, you know, rule over some area,
like when a science works that way, that's pathological.
Speaker 1 (53:26):
Right.
Speaker 4 (53:26):
But you know, when someone's views must be held no matter
what, then you're not talking
about science anymore, right. I think a lot of people
just sort of move on instead, before it's too late.
Speaker 1 (53:35):
Yeah. And that's the great part about science: it is
the only endeavor that's constantly knocking down
its own walls.
Speaker 2 (53:41):
So with enough time, the truth will out.
Speaker 1 (53:45):
Things have been changing really rapidly ever since you, you know,
finished writing the book and sent it off. And so
one of the things that I want to ask about
is what's happening with AI and deep fakes.
What are the new kinds of hoaxes that you see
coming down the line?
Speaker 5 (54:02):
Well, I'll raise one that's not new. And I think
it's important to point out that none of the hoaxes
that have happened over thousands of years are really
fundamentally all that different in the way that they take
advantage of our tendencies, right? And that's the thing that
we noticed across all of these different domains, from chess
to sports to art to science: they all take
advantage of the same sorts of tendencies. And new scams
(54:25):
are going to do that too; they just might do
it more effectively. And even the Nigerian prince email scam
was originally a Nigerian prince mail scam back when people
sent letters.
Speaker 3 (54:36):
It's more effective in.
Speaker 5 (54:38):
That they don't have to spend as much time and
effort finding potential victims. Right, some of these scams with
the advent of AI are going to become more effective.
Speaker 3 (54:48):
Right.
Speaker 5 (54:48):
So one that's common right now is either the
your-kid's-been-arrested scam, or they've been
in an accident, or they're being held hostage. It's a
horrible thing, preying on people's fears. They'll call up a
parent or a grandparent and say that the kid needs
to be bailed out immediately, right, and often they'll call up,
you know, a relative like a cousin or something. Now,
(55:10):
that scam's pretty effective because people want to quickly solve
this problem, right? They want to quickly fix what is wrong,
and often we don't have the preventive measures in place
to stop that. But imagine how much more powerful that
is if you're using AI voice synthesis to make the call,
and it actually sounds like it's coming from that person.
(55:31):
That's going to ramp that one up. Same principles; it's
just more potent.
Speaker 3 (55:36):
I think a.
Speaker 4 (55:38):
Whole area which is rife with scams, which is a
new area, but, you know, sort of rescamming based
on old principles, is cryptocurrency. There are thousands and
thousands and thousands of cryptocurrencies and coins being issued and
so on, and, as far as I can tell,
most of that is mostly fraud. But it relies
on all the same principles. You've got
(55:59):
familiar celebrities advertising these things. You've got time pressure:
there's a limited offering, you've got to make
a decision now. You've got this sort of fake consistency,
like people will claim that our crypto fund has,
you know, never had a down month in all of
its three months of existence or something like that,
consistently going up. All the same stuff is just
being applied to a
(56:21):
whole new thing. And I don't really see that, you know,
getting any better. As far as AI and deep
fakes and so on, I do have some optimism that
it's going to increase the value of truly trusted sources
who bother to check that stuff.
Speaker 3 (56:35):
Right, So I.
Speaker 4 (56:36):
Noticed, not long ago, this sort of pseudo
attempted coup, revolution, weird thing that
happened in Russia. You know, a sort of paramilitary group
kind of turned on the military and started marching to
Moscow and so on. And I was fascinated by this
and paying attention to Twitter, and there were all kinds
of reports on Twitter, people claiming to be eyewitnesses
(56:56):
to things and so on, and very little of that
made it to the mainstream media or to legitimate sources.
And I thought about it afterwards, and I thought, why is that?
Well, maybe some of it was true, but
probably most of it just couldn't be verified.
It was like one guy said he saw something, but they
couldn't find someone else who saw the same thing, and they
couldn't find the underlying, you know, whatever it was that
(57:17):
was supposedly the source of the evidence. But nonetheless the
story that emerged, although a little more vague and abstract,
with less detail, was probably much more likely to be
true, because it was filtered through agents that
bother to check and try to only pass on verifiable information.
And they are now faced with the problem of how
do you tell whether this video of Trump doing X
(57:37):
is actually Trump doing it or some fake that someone created, right?
But I don't know who else to put more trust in
for sorting that out than journalists,
and there are some organizations they work with who
are experts at detecting these kinds of things and so on.
So I think it might paradoxically increase the value of,
and the attention paid to, more legitimate sources, which I
(57:59):
think would probably be a good thing on balance.
Speaker 5 (58:01):
I mean, the pessimistic view is that these things
increase in scale, right? It makes it much easier to
scam at large scale and make it sound plausible.
But the optimistic take is exactly what Chris was saying:
once we realize that these things are possible at scale,
maybe we start being more skeptical of most of the
sort of rapid information that we get, and we withhold
(58:24):
judgment just a little bit longer until we can have
some verified sources. And the hope would be that we
could actually have verified sources again; we haven't had that
for a while, now that anybody can start up a
cable network and say whatever they want.
Speaker 1 (58:36):
Something I've been wondering about recently is, with all
of these things that we're very concerned about, like deep fakes,
will the younger generations be much less susceptible to them
because they're well aware that if you see a video
of something, it might be real or it might be fake?
As opposed to, you know, those of us who are older,
who are really concerned about it in a way that we
might not need to be.
Speaker 3 (58:57):
I think that's a really good question.
Speaker 4 (58:59):
I think the jury is still out on that, because I
think in some ways younger people are a bit more
naive about some things; they don't have certain experiences and
so on. On the other hand, as you say, they
may be more used to the idea that videos are
not proof, in a way that people who grew up in
an era of less video and less awareness of video editing
might not be. I'm reminded of the
(59:21):
sort of discussion that you heard, you know, fifteen to
twenty years ago about the so-called digital natives and
how, having grown up with technology, they were so smart
in using it and so on.
Speaker 3 (59:30):
And then when I became a college.
Speaker 4 (59:32):
Professor, I found out that students didn't really know how
to do a proper Google search, you know, and so on,
even though they were supposedly natives. It's like someone
born in America not being able to, you know, speak English correctly.
So that gives me less optimism. But I think, in general,
across generations, there's going to be a rise
in skepticism, maybe somewhat of a
(59:54):
decline of truth bias. Truth bias can't decline too far,
otherwise we just can't interact with anybody anymore. But maybe
a sort of decline or specialization of truth
bias, where you have a little bit more
truth bias in some areas, like when you're talking to
an actual human being standing in front of you, and
less when you're watching a video on TikTok. That
would be a nice balance to have, right? And not
(01:00:14):
to pick on TikTok, but there seems to be more
nonsense there than most other places, just from what I've noticed.
Speaker 1 (01:00:20):
Okay, so zooming out, give us some practical advice for people,
some tips they can take home. Well.
Speaker 5 (01:00:29):
I'd say one quick one is that whenever you're in
a situation where the consequences could be big, be willing
to ask more questions. And it can be socially awkward
to do that, right, to kind of press for more information,
but doing that is essential if the consequences of being deceived
are big. And sometimes you can get started
on asking questions without actually being hostile and aggressive,
(01:00:51):
like "can you tell me more?" is a way of
getting somebody to talk a little bit more, give you
a little more information, and that might actually make it more
comfortable to ask questions about the additional information they give you.
So these are the sorts of skills that many of us develop
in academia, where you're at a talk and you can
stand up and ask the hostile question, or you can
ask a question that reveals more information, and the goal
(01:01:13):
is to try and reveal more information and remain a
little uncertain until you have that information. One broader one
is: if somebody were trying to scam me in this situation...
Let's say you're investing in something. If somebody were trying
to scam me, how would I know?
Speaker 1 (01:01:26):
Right?
Speaker 5 (01:01:26):
So if I'm thinking about investing in crypto, I ask:
is that a scam? How would I know if that's
a scam? If you can't answer that question, then you
probably should walk away.
Speaker 3 (01:01:35):
So if you.
Speaker 5 (01:01:36):
Don't understand how blockchain works and how crypto coins work,
you probably shouldn't be investing in crypto, regardless of what
a celebrity tells you. If it were a scam, how
would you tell? Well, it would be really hard to tell
if you don't understand how it works intimately. I'll give
two practical ideas also. I think one is: don't make
really important decisions all by yourself. We came across many
(01:01:59):
examples where people were about to make big mistakes. For
example, one of those guys was about to wire money
to the fake French defense minister, and his friend walked
into the room where he was having this call, and
right away he said, this can't be real, this must
be a scam. And why was the friend able to notice,
but the intended victim wasn't? Well, probably the friend had not
(01:02:21):
been in on all the previous conversations, so there wasn't
that sunk-cost feeling, that idea of a relationship.
Speaker 3 (01:02:26):
And so on.
Speaker 4 (01:02:27):
And maybe it was just that he had a different mindset
that day. He had a different attitude, he was thinking
different things, and he never got sucked into the whole thing.
So ask a friend, get an outside view before you
make a big decision. Should I really send all my
life savings to this guy just because everybody
says he's the greatest thing, or is there any other
consideration I should be using when investing my money?
Speaker 3 (01:02:46):
So that's one.
Speaker 4 (01:02:47):
The second one is: do your work on deadlines,
but don't give away your money on a deadline. So
if anybody ever says you've got to
do this within a certain period of time, the police
are coming to your house if you don't pay
this bill right away, or this offer is exploding very quickly,
or there's only one of these things left,
or whatever, just be aware that that's a prime
(01:03:10):
environment to not have time to ask questions, not have
time to think about the information you're missing, not go
through any of this. And realize that you can
still buy that thing the next day if you really
want it; you can still invest your money next week
after you have checked the guy out.
Speaker 3 (01:03:23):
You're not going to lose much. I would go with
those two.
Speaker 1 (01:03:31):
So that was Dan Simons and Christopher Chabris. Now, what
we learn from them is that a lot of protecting
ourselves against deception is about taking the tools of science,
which are nothing but the tools of thinking clearly, and
applying those to our daily lives. So, for example, if
somebody says something is true, whether from their position of
(01:03:54):
authority or religious status, or with a trust-me-on-this-one
vibe, the key is to trust but verify.
The important thing to get in the habit of is
just asking the next question. And it's tough because life
doesn't allow questioning everything. Our schedules just don't allow that,
(01:04:15):
and we have to operate on trust for most of
what we do. And sometimes we find ourselves in a
situation where someone doesn't quite answer the question we've asked,
and it feels impolite to keep pressing on it. And
also what is life if we don't trust? But the
fact is we can always get a little smarter, a
little less gullible by knowing that reality can be different
(01:04:39):
in different heads. And whether we're talking about Tanya, my
fellow graduate student, or the dealer of the shell game,
or Elizabeth Holmes and Theranos, or whatever, it's incredibly useful
to stretch beyond the parochial limits of our mental models
of the world, because with more knowledge comes a
(01:05:02):
bit more immunity, and understanding the character of our brains
allows us to move through the world a little bit
more smoothly than we would without that knowledge. Go to
Eagleman dot com slash podcast for more information and to
(01:05:24):
find further reading. Send me an email at podcasts at
eagleman dot com with questions or discussion, and I'll address
those in a special episode. Until next time, I'm David Eagleman,
and this is Inner Cosmos.