
January 20, 2025 • 47 mins

Why is it so difficult to define intelligence? What does this have to do with being a fish in water trying to describe water? Might we humans possess one kind of intelligence in a constellation of many other types? And what does this have to do with empathy, AI, and our search for extraterrestrial life? Join Eagleman with guest Kevin Kelly as they dive into whether there might exist very different kinds of minds.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Why is it so hard to define intelligence? And what
does this have to do with being a fish in
water trying to describe water? Might we humans possess one
kind of intelligence in a constellation of many other types?
And what does this have to do with empathy or

(00:25):
artificial intelligence or our search for extraterrestrial life? Welcome to
Inner Cosmos with me David Eagleman. I'm a neuroscientist and
an author at Stanford and in these episodes we sail
deeply into our three pound universe to understand some of
the most surprising aspects of our lives. Today's episode is

(00:56):
about minds and whether there might exist very different kinds
of minds. So I'm going to boot this up with
a prediction that I made last year. I've always noticed
that some people feel annoyed when they ask ChatGPT
a question, let's say a political question like "was so
and so a good president," and ChatGPT answers with

(01:18):
something neutral like "some people feel this way, some people
feel that way; more discussion and debate is needed here."
And a lot of people who feel that they hold
very clear political stances criticize ChatGPT for this.
They say, look, it feels wishy-washy. It's not taking
a stand on anything. It is stuck in neutral. It's

(01:40):
not brave enough to take a position. But I feel
these sorts of answers from LLMs are the sign that
we are currently living in the golden era of AI.
And this golden era is sure to end, just like
the golden age of the Pax Romana. So the prediction
I made a year ago on this podcast is that people on

(02:03):
the far ends of the political spectrum will start getting frustrated:
how come this AI isn't telling me the true answer,
the answer that I can so clearly see, and anybody
who is sane would clearly agree with me? So my
prediction is that quite soon, as soon as the cost
of training these models drops, we're going to

(02:24):
see the far left progressives training their own model and
the far right conservatives training their own model, and both
sides will say, look, we don't want this garbage literature
to pollute the training data, so we're gonna leave all
this stuff out and just include the writing that is

(02:45):
consistent with the truth. And obviously anything we disagree with
comes from people who are just trolls or obstreperous or
at minimum are badly misinformed, and if they'll just come
to read the AI wisdom that we know to be true,
then they will see the light of our way and
the error of theirs. So this is my prediction about

(03:08):
the Balkanization of AI that will come about. But perhaps
there's a broader way to think about this. Perhaps there
are indeed different types of artificial intelligence, not just in
terms of how they're trained up, but more fundamentally about
their architecture. Perhaps there are very different ways to think.

(03:30):
In other words, what if there are different types of intelligence,
not just one artificial intelligence, and also many different types
of natural intelligence. Now on this topic, there's no one
better in the world to talk with than my friend
Kevin Kelly, who's been chewing on this for a while.
Kevin is one of the fathers of Silicon Valley, not

(03:52):
because he's a techie, but instead because he's one of
the deepest philosophers and connoisseurs of technology. He has a
long and storied history that I'll link in the show notes,
but I'll just mention for now that he's the founding
executive editor of Wired magazine and a former editor of
The Whole Earth Review. He's the founder of the Cool

(04:14):
Tools website, and he's one of the founders of the
Long Now Foundation. He's written multiple best selling books about
the future of technology, which I'll link. And one of
the many things that I love about Kevin is that
he's a radical optimist and an extremely creative thinker. So
I rang up Kevin to talk with him about his
view of intelligences. Does intelligence mean just one thing? Or

(04:39):
might there be lots of different ways it could manifest? So, Kevin,
we're all talking about AI, and people are talking about AGI,
and we're aiming towards these things. But you have a
very interesting view on it, which is that we can't
look at AI as one thing, but instead there are

(05:01):
multiple AIs we need to be thinking about. So tell
us about this.

Speaker 2 (05:03):
Yeah, yeah, I think we should force ourselves to use
the plural, AIs, whenever we're talking about this. It's,
to me, very similar to machines. We have a lot
of machines in our life, but we don't talk about
the machine. We don't have a single regulatory apparatus for
the machine. We don't have a single operating manual. We

(05:26):
have multiple machines in our lives, and we're going to
have multiple AIs in our lives, and those AIs individually
will have different characters, they'll have different needs, they'll have
different tasks, they'll have different abilities, they'll be different species.

Speaker 3 (05:46):
Almost.

Speaker 2 (05:46):
We can think of them as different species of mind.
Something that might be a slow, minimal kind, something that
might be fast and peculiar, one might be imaginative in
one dimension but not others. And so I think the
fallacy that we have is we imagine intelligence as a
single dimension, kind of like amplitude or sound decibels, it kind

(06:11):
of goes up and up, and there's like a ladder and
we're kind of climbing up, and we've got the rats
and you know, the chimp, and then the human, and
then we've got Ai above us, and that there's this
this one dimensional thing. But what we're going to discover,
and we have already with with what we've made so far,
is that it's multi dimensional. It's a very big space.

(06:31):
So the space of possible minds is vast, and with
our human kind of intelligence, we're at the edge. We're an edge species,
like we're at the edge of the galaxy. We have
a very peculiar mix of different kinds of primitive cognitions

(06:51):
and we have one species of mind, and the things
that we're inventing are going to be many other kinds
of thinking. And the reason why that's important is that
there may be certain things that we want to do
and understand that our own kind of intelligence can't by itself,

(07:14):
but we can understand it with the two step process
of inventing another kind of intelligence that can work with us,
so that we can understand or make something that we
can't do by ourselves.

Speaker 1 (07:31):
Yeah, you know, in neuroscience, this has been one of
the challenges for the last one hundred years, is even
defining intelligence for humans. So some people think, well, maybe
it's about being able to squelch distractors, or some people think, well,
maybe it's about being able to simulate possible futures, or
you know, there's twenty theories out there about what intelligence is,
but it's probably one of these words that has too

(07:53):
much semantic weight on it; it's trying to incorporate many things.
And so your take on this is that the way
to go about this is to think even more broadly
about what intelligence could be, what we might mean by it,
and all the ways that we're going to invent different machines,
forms of AI to get there. So you recently made

(08:15):
a list, well, actually in your book The Inevitable, right,
you made a list of possible minds. Yeah, let's start there.
Give us a sense of possible minds
to get us to expand our brains on this.

Speaker 2 (08:28):
Let me just say one of the things that's very
common when you talk to people about what they imagine
superintelligence is, something that's super, beyond human, and what
does that look like or feel like, or how do
we see it? And the most common response is sort

(08:50):
of like human thinking but faster. Yeah, a billion times faster.
And okay, that's one version. Anything else? So there's again
this idea of a singular thing. We have this
thing and we're gonna make it faster, and that's just
super and then that's it and I don't think so.

(09:13):
So part of the challenge of thinking
about possible minds is to think about things other than
just this time element, although that's one of them. And
so one of the first possible minds is something that
thinks really, really slow. You can imagine its kind of
mind waves being so slow that we can't even see it.

(09:37):
We don't even recognize them, they're kind of really really slow,
and so in a certain weird way, evolution is that
evolution is a kind of learning that happens on a
very large time scale, and we can't see it on
an every day perspective, but we can see it over time,
and so there could be slow kinds of intelligences that

(09:58):
may be more powerful in the sense that they're wider,
but not necessarily faster.

Speaker 3 (10:02):
So that's just kind of like the first one.

Speaker 2 (10:04):
One of the challenges again, as you mentioned, is that
we use the word intelligence, and I think it's a
form of ignorance. Well, I've been reading about the early days
of the discovery of electricity, or the invention of electricity, and it was

(10:26):
really interesting. And this is way before Tesla. This
is back with Faraday and Davy, where they're trying
to really understand what it was. And it was remarkable
because a lot of the smartest people in the world,
Newton and others, were completely wrong. They had these ideas
of phlogiston and the ether. They just had no idea

(10:48):
what it was, and they were really struggling. At the
same time they were trying to figure out what materials were,
what substances were before we had the idea of a
periodic table. Well, then we eventually understood
that there were elements and the elements were recombined to

(11:10):
form compounds, and that there was a fixed number of elements,
and then you could identify the elements and then they
were pretty distinct, and what they made up. Salt was
not an element; it was actually a compound. Water
was a compound. So you have all these compounds made
from the elements. And what we're trying to do right
now with intelligence is like, what are the elements that

(11:33):
make up this compound that we call intelligence? And so
there are things like maybe logic and deduction or reasoning,
which may be different, and then there's memory and short
term memory. There's a bunch of different elements that we
don't know about, that we haven't identified

(11:54):
and can't describe, that are probably fundamental to making this
thing we call intelligence. And so part of this little
challenge of imagining possible minds is to think about what
might those elements be and how might one rearrange them
to make a different compound.

Speaker 1 (12:15):
That's very good. So tell us, give us a sense
of some other possible minds. Sure, so we can start
thinking about what the periodic table will look like.

Speaker 2 (12:22):
So I've made a little list and I'm going to
just use it as a prompt.

Speaker 3 (12:26):
And so.

Speaker 2 (12:28):
One of the elements in our own intelligence is self awareness.
But it's possible that you could have degrees of intelligence
without self awareness. So we think somehow self awareness is
instrumental for high intelligence, but you may be able to
do a lot of things without any self awareness, without
the entity being self aware. That's what we have now, right? Right,

(12:52):
and vice versa. You might be able to have self
awareness with very limited intelligence, yeah, okay, and so you
could have kind of like this thing is really really
good at doing things, but it's not aware of itself.
And this thing isn't that capable in other words, but
it's very aware of itself.

Speaker 3 (13:10):
Okay.

Speaker 2 (13:10):
So self awareness is one of those elements that you
can kind of conjure with. Another one would be: one
of the things about human intelligence is that I think
we have, very deliberately through evolution, restricted our access and
ability to change ourselves, to have access to our operating system.

Speaker 1 (13:29):
Right, because it's really dangerous, that's right. We have almost
no access to what's happening under the hood.

Speaker 2 (13:35):
Yeah, and I think that there's a reason for that.
But you could have minds that had much more access
to that mutability, to that changeability, to
the reprogrammable nature of themselves. So there could be sophisticated
compounds that had a reflexive ability to mess with themselves. Okay,

(13:58):
so that's a different kind of mind. There's also the
question of what is the smallest possible mind that could
accomplish something, this idea of trying to minimize it.
We're trying to do that with life right now:
what's the smallest possible cell you could have, and how far could you go?

Speaker 1 (14:18):
I mean, with something like a transistor: if you
have current here and here, then it opens, and otherwise
it doesn't.

Speaker 2 (14:24):
And it's probably not complex enough, because I don't
think we would say that it was intelligent. So
once we understand what we need to do reasoning, it's like,
what's the smallest amount that we could do to get
some reasoning?

Speaker 1 (14:38):
But a transistor does do reasoning in the sense that
it says, hey, if I've got these two things, then
I go, and if I don't, then I don't go.

Speaker 2 (14:45):
We haven't yet figured out what reasoning is. I don't
think we have a good definition of reasoning. And so
right now we're kind of in this incredibly exciting period
where we say, some of these LLMs have some amount
of reasoning. Can we measure that? What's the
metric? We don't have that yet, but I
think we will come to understand that there's a difference

(15:07):
between say, learning and reasoning. And again, this is our idea,
like we're at the beginning of understanding the periodic table
of cognition, and so another one would be you could
have minds that forget things. Forgetting is actually very very

(15:27):
important in many kinds of cognition.

Speaker 1 (15:30):
As the writer Balzac said, memories beautify life, but
only forgetting makes it bearable.

Speaker 2 (15:36):
Right, you could have AIs that never forget anything, and
you could have AIs that have learned how to forget
certain things. Again, these are being engineered to do different tasks,
and so there will be certain kinds of tasks where we want them to forget.
So there'll be some you know, Nobel prize in the

(15:59):
future for some AI researcher who figures out how
do intentional forgetting.

Speaker 1 (16:05):
Yeah, it appears, by the way, that is an aspect
of our intelligence. We forget most of the things happening
in our lives. And one hypothesis that Francis Crick suggested
is that dreams are our way of taking out the
garbage at nighttime. Yeah.

Speaker 2 (16:22):
Yeah, so another aspect of different kinds of possible minds:
minds that would be easy to migrate versus
ones that would be difficult to migrate off of the substrate.
So I am a big believer in the Church-Turing
hypothesis about computation.

Speaker 1 (16:45):
Can you explain that? This is often misunderstood.

Speaker 2 (16:46):
So the Church-Turing hypothesis about computation says,
given enough storage and memory, all forms of computation are equivalent:
what one computer can do, another
computer can emulate, if you have enough storage and capacity.

(17:10):
And I think that's the key thing: in
real time there isn't equivalency. It actually makes a
difference what substrate you're running on.
They're not equivalent, because there's a matter of time. Yeah,
if you have infinite space, sure. But no computer has
infinite tape. You're always finite. That's reality: you're finite.

(17:33):
You have finite time, you have to make a decision.
If you emulate it, you're going slower. And so those
make the difference when we come to intelligence, and so
the kinds of AIs that we make on silicon
will always behave differently than ones running on wetware, on tissue. Okay,

(17:54):
So one of the other hypotheses about the possible mind
landscape is that a lot of those varieties come from
operating on different substrates, on different brains.
It's not like the materialist view, where it doesn't matter what
the brain is made out of. I'm saying it absolutely

(18:15):
does matter what the brain is made out of, because
you will get a different compound from that, because there
is a spatial element that makes a difference in the
actual output of this. And so one of the things
that the possible mind landscape would say is that all

(18:38):
the minds that we're making are alien.

Speaker 3 (18:42):
They're not human. They're not human like.

Speaker 2 (18:46):
The only way we can make human like intelligence is
to have a human like brain. We could make artificial
minds that are based on cells, and the more cellular
and gray-matter-like they were, the more
that compound intelligence would resemble ours. But when we're making

(19:06):
them on silicon, they are going to be alien intelligences,
which doesn't mean that they're stupid. They could be very,
very smart. It's just that they're different. They have a
different character, they have a different personality, they're different. They're
different in the way that Spock in Star Trek is
different than Kirk, and that difference.

Speaker 3 (19:32):
Is actually their benefit. It's the fact that they're not
thinking like we think.

Speaker 1 (19:39):
Right in the sense that a show like Westworld, there's
an amusement park that's made, and we build robots that
look just like humans so we can interact with them.
But I've often thought that we probably are never going
to build robots that look just like humans, because there's
no point. We already have humans. Humans are easy enough
to make, and what we do with our machines

(20:00):
in general is build things that do things that
are different. Yeah, exactly right. Right.

Speaker 2 (20:06):
I think we will make humanoid robots because that's the interface,
because we want them operating in our world, and we're
comfortable with that scale. We're comfortable with the emotional connection,
and so there are a lot of reasons to make
them humanoid, but there's no reason to make them look
exactly like us. Okay, they will be alien,

(20:27):
and that's good because we need the alien intelligences to
do the things that we can't do by ourselves. We
can make another human in nine months, so we want
to make other kinds of minds, and all those minds
are going to be aliens to us in the sense

(20:48):
that they're running on a different substrate. They can
fake a lot of human behavior, and they will, because
we're going to be comfortable with it, but they won't
be exactly like us, because they're running on a different substrate.

Speaker 1 (21:03):
Now, can you imagine a scenario where a system is
built such that it's, just call it, a
much bigger brain than we are, and it can emulate
our intelligence on part of its hardware?

Speaker 2 (21:14):
It will imitate a lot of it, but because it's
running on a different substrate, it cannot do exactly what
we're doing. So my hypothesis is that this brain makes
a difference to the mind. Okay, they're not equivalent

(21:37):
if you take into account the fact that
they're limited by time and space. Doing the computation that
we do in our brains will give a different
quality and maybe different answers than something that's done on silicon.

Speaker 1 (22:10):
Okay. And so, so that we can keep stretching our
minds here, give us some other possibilities.

Speaker 2 (22:16):
So one is I think we could imagine minds that
are very, very specialized in working with individual people. It's like,
not quite a clone of me, but a dedicated, very
personalized AI, very, very personalized to my specific needs and

(22:37):
personality and my own brain, that would be different from
other humans.

Speaker 3 (22:41):
And they're pairing.

Speaker 2 (22:43):
So it's sort of like a paired mind, a mind
that is built to be paired with my mind to
work with my mind, and that's its only job, and
it's really good at that. It's not good at anything else,
and it's not good with working with you. So this
idea of a paired intelligence.

Speaker 1 (23:01):
Oh, that's cool. So right now people are doing that,
of course, with AI that learns your stuff, maybe reads
all your emails and so on. But you're saying, instead of
a general AI that learns me, one that actually is.

Speaker 2 (23:13):
Built, right. It's constructed, it's programmed, it's the weights, it's
trained on me, so that it is unusable by somebody
else. That's how dedicated it is. It's paired in
that sense. Another one would be, so we can imagine,
of course, I had this kind of

(23:35):
a grid of the four possible directions of humanity,
with just two axes. One is, we can imagine a
future of humanity that has many species of humans and
many minds. Okay, so we speciate, yes, over time, we

(23:59):
could imagine. Another one is we have many species of
humans and one mind. Like, okay, where we have
telepathy, where we're connected so much, even though we're
different species, we're all in tune, and we create kind
of a single Borg intelligence on the planet.

Speaker 1 (24:22):
Then, before we move on, you can imagine: to what
degree do we have something moving in that direction with us already,
say the Internet? Yeah, yeah, right.

Speaker 2 (24:31):
So that's just to kind of indicate that there's a
whole class of minds, these kind of superminds, these
aggregated minds made up of many, many humans, and so
we have the mind of all humans working together.

Speaker 3 (24:48):
If we had some kind of

Speaker 2 (24:48):
technologies that would allow us to telepathically connect to each other,
like a Neuralink, right? So you had a
Neuralink that kind of works at the large scale, and
we're connected to others, and we're creating some
kind of superhuman mind at the scale of literally having humans,
billions of them, connected together. That would be a mind.

(25:10):
That's the possible mind that we don't know very much about.
So that's a possible mind. Then there's the possible mind
of all the little AIs linked up together. You have
a thousand different species of AIs and they are also connected,
forming another level of AI that's operating in a much different dimension.

Speaker 1 (25:33):
And so, in a sense, that is what everyone's going for
with agentic AI, where you have lots
and lots of agents going out and doing things, and
presumably you get an emergent property out of the top.

Speaker 2 (25:42):
So you have this emergent one of all the AIs,
and then you have this other one of the emergent
superhuman, and the emergent AI and the emergent superhuman together
form another huge thing of all the humans connected together and all
the AIs connected together, and that makes something else that's
another possible mind.

Speaker 1 (26:03):
So let me just double click on this because it
feels like there's a real sense in which many of
these are already happening, which is to say, we have
emergent properties. Right, Yeah, so for example, just having the
Internet and having agents on the Internet, we're already sort
of doing that where there's stuff happening at a different

(26:24):
level that maybe we can't even see.

Speaker 2 (26:27):
Yes, so we're talking about this like it's fantasy,
but it is actually already happening. Part of the problem,
the challenge, is we don't have the vocabulary. We're not calling
it that. We want to have a better vocabulary. We
want to have a better understanding of what these nuances
and differences are in types of cognition and types of

(26:49):
intelligence that we can actually map it and say this
kind of aggregate emergent intelligence does X and Y. But yes,
we are making these very large emergent things, and we
can actually describe the substrate. I mean, you know, I've
done these calculations. Imagine the Internet was one machine and

(27:11):
it was like at a refresh rate of, you know, twenty
exabits per second, and, you know, however many
floating-point operations it does per second as a whole.
It's just, like, insane. And so we can specify the
actual specifications of this as a machine, and
it's a very very large machine. We have a lot

(27:33):
more difficulty in talking about the actual intelligence, because
of what we said earlier: we don't have very
good concepts about what intelligence is, what self awareness is,
what consciousness is, even how we measure this, what
the elemental particles of it are. So we're really constrained there.

(27:58):
What I'm suggesting is we should work on that.

Speaker 1 (28:03):
You know, I remember about eight years ago you and
I had a long hiking conversation about the search for
Internet intelligence, analogous to the search for extraterrestrial intelligence, which
was just hypothesizing: what if this giant machine of the
Internet has developed its own kind of intelligence? How would
you take, as an example, the

(28:23):
tools of neuroscience, where we stick electrodes in and we
look at things; how could you do that on the
scale of the Internet?

Speaker 3 (28:29):
Right?

Speaker 2 (28:29):
And so when I was thinking about the analogy, the
first thing I did was go to the SETI search
and say, well, how do they recognize intelligence? Well, the
answer is that they have no idea. They aren't even
searching for intelligence. They're only searching for anomalous signals, little

(28:52):
signals that don't appear to map onto anything natural. That's all.

Speaker 2 (28:58):
They have no metric, they have no threshold, they have no criteria for intelligence.

Speaker 1 (29:05):
Although, wait, just to challenge that: it does seem
like that's a pretty good metric. Just as an analogy,
when people are searching for extraterrestrial life, the smartest way
to do that is just looking for things that wouldn't
happen by chance, molecular combinations that seem unlikely.

Speaker 2 (29:23):
So I went to see, and there have been a
couple of examples of things on the Internet that aren't
explained, that nobody can explain. There was a flash crash,
right, in some stock market some ten years ago. Nobody has
ever had any explanation for it. There were a couple

(29:43):
of other people who are looking at these anomalous signals
that don't have any source that they can find. And
so there are these signals that are hard to explain.
And so, is that enough for us to deduce that
it's intelligent?

Speaker 1 (30:03):
Oh, I see, right. But probably not, because simply because
we can't explain it doesn't necessitate some other thing. But
it's certainly something to sniff after, right? It's
the first step.

Speaker 2 (30:14):
So other kinds of possible minds: we could
imagine minds that are very, very smart, intelligent by
almost any measure that we have, but they're incapable of
making something smarter than themselves. In fact, we might even

(30:36):
if we know about it, we might even design some
AIs like that. Or of course we could make AIs
that had that ability, even beyond what we have. It
may be that our own minds are that kind of
mind, that our own minds are
not capable of making a mind smarter than themselves,

(31:01):
but we might be able to make a different kind
of mind that's not smarter than ourselves, but together
the two of us can make something that's smarter
than itself. And so there's a whole bunch of things
about this ability of kind of bootstrapping.
So there's a bunch of different
possible minds that would have capabilities of bootstrapping, and others
that don't have that, or others that require multiple kinds

(31:25):
of minds, like a complex ecosystem of minds, to produce it.
So that's another threshold. In fact, I actually think
there's a kind of mind that we could make that
could imagine a greater mind than itself without being
capable of making it, and we might have that kind
of mind.

Speaker 2 (31:49):
And then one of the primitives that I think we're
going to discover is the
primitives of emotion. I think the next big shock is
when we give emotions to these AIs, human emotions, real emotions.
I mean, again, we have real intelligence that's

(32:11):
synthetic; this will be real emotion that's synthetic.

Speaker 3 (32:12):
And so.

Speaker 2 (32:14):
We had this idea, when I was growing up,
that emotions were sort of something layered
on, that you got after you were intelligent. But emotions are
very, very primitive, very foundational, very fundamental, and

(32:34):
as we begin to employ them in the AIs, it
really changes the nature of those relationships and the power
of what we think. And so here's a couple of
things about emotions. First of all, they're
real emotions that we synthesize. Secondly, it's possible that we
could uncover new emotions, okay, devise new kinds of emotions

(32:58):
with words and names, as we try to give them
different kinds of emotions.

Speaker 1 (33:03):
Would it be purely academic for us, as in, it's
feeling this, or would we be able to learn how
to experience the emotion ourselves?

Speaker 3 (33:12):
That's a good question.

Speaker 2 (33:13):
There may be people who are able to mirror those
new kinds of emotions. And by the way, the same
thing's happening with AI and intelligences. Again,
AIs play chess differently than humans do,
and even though they can beat humans, world class chess

(33:35):
players are learning to play chess differently from watching how
the AIs play chess.

Speaker 1 (33:41):
That's exactly right.

Speaker 2 (33:42):
And so it's possible that you could have new kinds
of emotions and some people who are very sympathetic to
it could maybe begin to have a different kind of
emotion that normal humans don't have. That's very possible, it
seems right.

Speaker 1 (33:58):
And of course we use vocabulary words to distinguish emotions,
and as you have your passage into maturation,
you realize the different shades and subtleties of emotions, and
so maybe AI will be able to help us along
with that, and, as you're pointing out, move us to different
spaces where we hadn't even realized that that was the

(34:20):
thing that we feel sometimes. Exactly right.

Speaker 2 (34:22):
Yeah. So, as we imagine the possible minds, one of
their elements will be an emotional component, which will vary tremendously.
So we might want to have, we can say, we
can have some kind of AI that doesn't ever get depressed,
or ones that, you know, are constitutionally very optimistic,

(34:44):
or other things. And so again, this will be somewhere
we will be programming, for particular purposes, different elements of
the emotional spectrum onto these AIs to accomplish certain things,
and other kinds of emotions will be emergent from the
other components. But again, we will

(35:06):
take an engineering approach to it.

Speaker 1 (35:23):
How are we going to be smart enough to know what kind of combinations we want? Because, you know, you get certain kinds of thoughts and ideas out of a person with depression, and out of a person with mania, and so on.

Speaker 2 (35:36):
So here's the weird thing about AI as a research area: it is one of those things that we can only discover or learn by doing. We're at the point where we can't have advances just

(35:57):
by thinking about things anymore. We actually have to do all these things, try all these things, make the mistakes, and that's the only way we're going to learn about it. We are at the limits of how far we can get just by thinking about these things. Part of that, and let me also say this, is that I think

(36:18):
we as a society overestimate the value of intelligence. I think IQ is just one component of what makes a successful human. I think intelligence is only one component of what makes a successful civilization. You need lots of other qualities.

(36:41):
It's not the smartest person in the room who necessarily
is the one who's going to accomplish what needs to
be done. And so right now, a lot of middle
aged guys who like to think will tell you that
thinking is the most important thing in the world and
that if you have really intelligence, that's all that matters.

(37:02):
And so my little joke is, you know, but Einstein
and a tiger in a cage who lives, it's not
the smartest one. You need other qualities.

Speaker 1 (37:13):
Give me an example of other qualities that you're thinking about.

Speaker 2 (37:16):
Determination, perseverance, the ability to cooperate with others, collaboration, empathy. There are so many other things that are necessary to actually make change happen in the world. Those are things

(37:39):
that, you know, as we generate these agents and other beings and other kinds of intelligence, we want to keep in mind: IQ is overrated by us, and you need other qualities, and so as we make these machines, these other qualities will often be more important. It's kind of like, you know, pixel peeping.

(38:00):
There was a moment where people were saying the resolution, the number of pixels in a camera, was the most important thing. But there are so many other things that are important in a great photograph other than resolution. Okay, and likewise other than IQ.

Speaker 1 (38:15):
What else do you have on your list?

Speaker 3 (38:16):
Two other things.

Speaker 2 (38:17):
One is, I mentioned the necessity of how the brain is important to the mind, and so I think there will be attempts

(38:27):
to use tissue to make AIs, to make some kind of biological computer, to use neurons, wet neurons, to actually make minds. And those minds will also have qualities, including the usual foibles

(38:52):
of sickness and illness and other aspects that any kind of a wet biological being would have.

Speaker 2 (39:01):
But that means that there is the prospect of kind of cyborg-like things as well, where we could have things in our own brains, or next to our brains, or along with our brains. And so I don't know if I could describe what the minds would be, but I'm just suggesting maybe another route to making a

(39:21):
different kind of mind is one where we not just have the AI in silicon, but we have actual cyborgs, or we try to make a gorilla or a monkey mind smarter in a different way, so we have something that you're kind of genetically altering or amping up, and you have a biological brain that's doing a different kind of computation.

Speaker 1 (39:44):
So can I just jump in with my fantasy of what that could look like? Instead of having machinery that plugs in, because that's very tough, you need to plug in electrodes, and there's all kinds of infection problems. You know, what if in a hundred years you could actually grow more neurons in there? And maybe, I mean, this is bizarre, but maybe you

(40:08):
store them on the outside of your skull and you put sort of another skull on top of that. But what you have is just more brain tissue, twice as much brain tissue as you have. I know this sounds creepy and insane, but maybe if a podcast listener is listening to this in two hundred years, they'll say, yeah, of course we got that already. But your question is, what would the mind be like? What would your daily experience

(40:31):
be if you had a cortex that was twice as large as what you have now?

Speaker 2 (40:35):
Right, or even things like if you modified in some
way so we didn't forget as much, you know, biologically
so so so. So that's one thing. And the last
idea in terms of the possible minds I just want
to mention is quantum quantum computing. You know, there's a
lot of people who believe that kind of that's the
next thresholding and we get quantum and then it's like,

(40:58):
you know, it's like a singularity. We're into another realm entirely.
I actually have a heretical stance where I think that
quantum computer is inherently not going to be good for computation.

Speaker 1 (41:15):
Why?

Speaker 2 (41:16):
It doesn't seem to want to do computation. That's how I would put it. But I think it's going to be among the most amazing technologies, because it's going to do other things in the quantum realm that we can't even imagine, just not computation. So

(41:38):
I think it sort of doesn't really want to do computation. It wants to do other things.

Speaker 2 (41:45):
So it could have, like, a very different way of thinking that is not computationally based, as our AIs are, and it does something weirdly different that we don't even have words for. I don't even know how to describe it, other than to say I think there can be possible minds with quantum computing, but

(42:10):
they aren't going to be computationally based.

Speaker 3 (42:13):
That's just a hypothesis.

Speaker 1 (42:15):
And that goes along with your hypothesis that we're going
to have to try lots of things out in order
to gather the data to make the theories about things.
There's so much that's beyond what we can see right now.
And this is, by the way, of course, a general
thing in science, which is that sometimes we're at a
moment where you can make a theoretical leap and say,
here's what I think the periodic table would look like,

(42:36):
and other times you just need to gather the data
for a long time before you can get there.

Speaker 2 (42:41):
Yeah, and I think we're in this realm of what I call the third culture. The first two cultures: the humanities was kind of one culture, and then there was science. That was C.P. Snow's observation, that we have two kinds of cultures. Can you double-click on that?

(43:01):
So yeah, C.P. Snow had this idea that there are two cultures. There are the humanists, the humanities, the arts, and what they did was kind of explore the human situation through creativity and introspection, by reflecting on the human condition in the things that people made. Then there were the scientists

(43:23):
who explored the human condition by probes, by doing experiments, by testing reality. I think we're in the third: we have a third culture we've made in the last thirty or fifty years, which is what I call the nerd culture. The third culture is this idea that we explore the human condition by making alternatives to it, by

(43:45):
making synthetic versions of it. We explore life by trying to make artificial life. We explore democracy by trying to run simulated worlds. We explore intelligence by trying to make intelligence, and I think we're going to actually learn more by

(44:06):
making intelligences that don't work. We'll learn more about the human mind that way than one hundred years of neurobiology will teach us.

Speaker 1 (44:20):
That was my interview with Kevin Kelly, thinker and technologist.
I love this approach to thinking about different kinds of
intelligence because when we shine a flashlight around the possibility
space and illuminate what things could look like, it clarifies
our view about the things right in front of us,

(44:40):
and this is the key to understanding anything in science.
Otherwise we are like the proverbial fish in water trying
to describe water. We've never seen anything but water, and
therefore we don't have any way to describe it because
we have no way to distinguish it from anything else.
But happily, our species makes scientific progress because Homo sapiens

(45:04):
developed an enormous prefrontal cortex, and this is fundamentally the
brain structure that allows us to think about the what
ifs that are beyond our daily experience. What ifs are
the thing that drive our understanding of everything. When Einstein
thought about what it would be like to ride on

(45:25):
top of a beam of light, that opened up a
new world for him that led to the special theory
of relativity. When Charles Darwin looked around and thought, what
species aren't here now but once might have existed? That
ushered him down the path of understanding evolution by natural selection.
And so it will have to be with our understanding

(45:47):
of what intelligence is. We are the fish stuck in the water, with nothing to compare water against and therefore no way to make distinctions. But as we move forward, we'll increasingly build and study many different flavors of intelligence, and those other minds will be like things other than water.

(46:11):
We'll see a bubble rise up past us, and we'll
think what is that. We'll swim near an island and
we'll see dirt and we'll think what is that. We'll
dive down and circle a thermal vent and we'll think
what is that? And with each new discovery we'll make
new distinctions and get a better understanding of water. It

(46:32):
will allow us for the first time to see what
we've been swimming in the whole time. Go to eagleman
dot com slash podcasts for more information and to find
further reading. Send me an email at podcasts at eagleman dot com with questions or discussion, and check out and

(46:54):
subscribe to Inner Cosmos on YouTube for videos of each episode and to leave comments. Until next time, I'm David Eagleman, and this is Inner Cosmos.
Host

David Eagleman
