Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
What is special about the wrinkly outer layer of the brain,
the cortex? And what does this have to do with
the way that you come to explore and understand the world?
Speaker 2 (00:16):
And by the way, why do you.
Speaker 1 (00:17):
See a whole image when you open your eyes even though.
Speaker 2 (00:20):
Each part of your visual.
Speaker 1 (00:22):
Cortex has access to only a tiny bit of the image.
And for that matter, the brain is divided into different
areas for sight and sound and touch and so on.
And so when you're petting a cat, why does
the cat seem unified? Why doesn't the sight of the
cat seem separate from the purring and the feel of
(00:44):
the fur? Can we build a new model of how
the brain works, and in what ways is what the
brain is doing something very different than what's happening in current AI?
Welcome to Inner Cosmos with me, David Eagleman. I'm a neuroscientist
at Stanford and in these episodes we sail deeply into
(01:07):
our three pound universe to understand.
Speaker 2 (01:09):
Why and how our lives look the way they do.
Speaker 1 (01:24):
Today's episode is about a new model of the brain
developed by my friend and colleague, Jeff Hawkins, and we'll
get into an interview with him shortly but let me
preface by saying that for centuries people have stared at
the brain and tried to figure out how this thing works.
Because when you stare at it, it's just a huge
(01:44):
lump of cells.
Speaker 2 (01:46):
You can see that.
Speaker 1 (01:46):
There's a wrinkled layer on the outside. And when people
dissect that, they can see that that part is about
three millimeters thick, and it looks a little different, looks grayer.
And so that part is called the gray matter. And
we call this the cortex, which means bark, like tree bark.
And the stuff below that thin layer is called white matter.
(02:09):
And it looks white because the tiny data cables coming
off the cells, the axons, these are wrapped in a.
Speaker 2 (02:16):
Little sheath called myelin, which makes it look white.
Speaker 1 (02:19):
Okay. Now, what you immediately notice by looking at brains
across different mammals is that all the stuff you find
under the cortex, all the sub cortical stuff, looks essentially
the same. Horses and elephants and mice. They all have
the same architecture going on that we do.
Speaker 2 (02:37):
They all have.
Speaker 1 (02:38):
A thalamus and hippocampus and cerebellum and so on. But
there's one thing that really distinguishes us from our cousins,
and for that we return to the gray matter, the cortex.
It's not that our cousins don't have a cortex. What
distinguishes us is the absolute enormity of our cortex. We
(03:00):
humans have a ton of this stuff. So take four
pieces of paper from your printer and place them next
to each other to make one really large piece.
Speaker 2 (03:11):
That's how much cortex.
Speaker 1 (03:13):
A human has. If you were to spread out the wrinkles. Now,
our nearest cousins, the great apes only have about one
piece of paper worth, and most mammals have a lot
less than that. So something about the story of the
runaway human success has to do with the fact that
we have way more cortex for our body size than
(03:34):
any other creature. And side note, I'm really talking about
what's called the neocortex or new cortex, because we also
have a little bit of paleocortex or old cortex. But
the thing that really makes us outstanding is the amount
of neocortex that we have.
Speaker 2 (03:51):
But what is this neocortex doing?
Speaker 1 (03:54):
Well, if you look at any neuroscience textbook, you'll see
that this part of the brain, the cortex, is often
drawn with different colored regions like this red region over
here is devoted to vision, and this green one is
devoted to hearing, and this yellow one to touch and
so on. But something I've been obsessed with and write
about in my latest book, Livewired, is that this
(04:16):
is the wrong way to think about it, because the
neocortex is remarkably flexible.
Speaker 2 (04:22):
It's not a fixed map.
Speaker 1 (04:24):
If you are born blind, the part of your cortex
that we would have thought of as visual cortex gets
taken over by hearing and touch and so on. Now
let me just be really clear what I mean by
taking over. The neurons there are the same. The cortex
looks exactly the same from the outside, but the function
of those particular neurons is now not visual. They have
(04:46):
nothing to do with visual information anymore. Now that same
neuron, instead of firing when it detects a moving object,
responds to a touch on your.
Speaker 2 (04:58):
Toe, or hearing a B flat note or whatever. So
the little labels that.
Speaker 1 (05:04):
We draw onto the brain, these maps that we impose,
these are.
Speaker 2 (05:09):
Actually massively flexible.
Speaker 1 (05:11):
And as you may know, I gave a talk at
TED about this a while ago, where I showed that
you can feed in new kinds of information, let's say
through the ears or the skin, and the brain will
figure out how to deal with that data. It will
flexibly devote part of its cortical real estate to that.
And this line of thinking led some scientists, like Vernon
(05:34):
Mountcastle some decades ago, to realize that the cells
of the cortex are.
Speaker 2 (05:39):
A one trick pony.
Speaker 1 (05:41):
No neuron is inherently a visual neuron or a neuron
devoted to hearing or touch or smell or taste or
memory or whatever. All parts of the cortex are perfectly
capable and willing to take on any job. So that
suggests they're all running some sort of basic algorithm. And
(06:02):
it doesn't matter what kind of data you feed in.
Different parts of the cortex will say cool, I'll build
a representation of that data. I don't care if it
comes from photons or air compression waves or temperature or whatever.
I'm on the job here to build an understanding of
whatever is coming in locally. Now, it's not individual neurons
(06:24):
that are building models, but instead groups of many tens
of thousands of neurons arranged in a six layered cylinder.
So think about this like you're a geologist and you
drilled out a cylinder of rock and you saw six
layers in it, six sedimentary layers.
Speaker 2 (06:43):
That's what the neocortex looks like.
Speaker 1 (06:45):
Six layers. And it's built out of these columns which
have the same types of neurons with the same connection
patterns in each column. And so think about the cortex
as being made of lots of these columns, like taking
hundreds of thousands of grains of rice and standing them
up on their end and packing them all next to
(07:05):
each other. Now, people have known about cortical columns for
many decades since Vernon Mountcastle first discovered these in nineteen
fifty seven. But recently someone has pulled together several different
threads to propose how this could underlie what the cortex
is all about. And that someone is Jeff Hawkins and
(07:26):
his team. And so I met with Jeff in my studio. Now,
Jeff is one of my favorite people because he does
theoretical neuroscience. He really tries to figure out the big
picture of what the brain is doing. Now, Jeff has
a very interesting history, so I'll just mention that in
the nineteen eighties he was a graduate student at Berkeley,
where he proposed a PhD thesis on a new theory
(07:50):
of the cortex, but his proposal was rejected, and so
he ended up pursuing his vision for mobile computing instead,
and in nineteen ninety two he launched the company Palm,
which made the Palm Pilot. If you remember that, this
was this little handheld device and you could write on
it with a stylus and it would translate your handwriting
(08:11):
into text. And you could use this for your address
book and your calendar and your contacts and note taking.
This was the first entrant into the world of portable computing,
and it.
Speaker 2 (08:21):
Really changed the world.
Speaker 1 (08:23):
Anyhow, A decade later, Jeff returned to his original love,
which was theoretical neuroscience, trying to figure out what's going
on with the brain, and he wrote a book in
two thousand and four called on Intelligence, which was very
influential on me and lots of other thinkers I know.
So I was very excited when Jeff recently came out
with his next book that represents his last decade and
(08:47):
a half of research. It's called A Thousand Brains: A
New Theory of Intelligence, and it describes his framework for
thinking about the brain. So, without further ado, let's dive
into a very cool new model of the brain. Okay, Jeff,
So you are a theoretician. You think about the brain
(09:09):
from a high level. We're in this era now of
AI where AI is doing all kinds of things that
are amazing and no one expected. But you see the
brain as being very different from what is going on
with let's say, large language models. So tell us about that.
Speaker 2 (09:22):
That's absolutely true. You know, the current AI wave is
really amazing, but those models don't work at all like
the brain. And I think you could start with one
really fundamental difference. Brains work through movement. We move our
bodies through the world. We move our hands over objects
to touch and learn what they are. We move our
eyes constantly, so the inputs of the brain are constantly changing,
(09:45):
but mostly because we're moving through the world. And the
term for that is a sensory motor system. And the
brain can't understand its inputs unless it knows how it's
moving through the world. So we learn by exploring, by
moving different places, picking things up, touching, and so on. All
animals that move in the world learn this way.
So this idea that the brain is a sensory motor
(10:07):
system has been known since the late eighteen hundreds,
but it's pretty much ignored by everybody. But it leads
to a fundamentally different way of how we acquire
knowledge and how knowledge is represented in the brain. Whereas
today's AI, most of it's built on, well, deep
learning and transformer technologies, which essentially we feed data
(10:28):
to. It doesn't explore it. And with large language models, we
just feed in language.
So there's no inherent knowledge about what these words mean,
only what these words mean in the context of other words. Right,
But you and I can pick up a cat and
touch it and feel it and know this warmth, and
we understand how its body's moved because no one has
(10:48):
to tell us that. We just experience it directly. So
this is a huge gap between brains and AI. Pretty much all
brains work by sensory motor learning, and almost all AI
doesn't. And you can just peel the layers apart
and see what the differences are, and it makes a
huge difference. So don't get me wrong, I'm a fan
(11:08):
of AI today, but I don't think it's the future
of AI. I don't think it's going to get you
to what people really want or truly intelligent machines.
Speaker 1 (11:16):
Okay, terrific, And we'll dive into that more in a
little bit.
Speaker 2 (11:19):
Now.
Speaker 1 (11:19):
When we look at let's say the human brain, there's
lots of areas that we can point to. There's the
cortex, the wrinkly outer bit, there's all these subcortical areas.
When you think about intelligence and the stuff that we're
going to talk about today, what is the part that
you concentrate on.
Speaker 2 (11:38):
Well, we concentrate first and foremost on the neocortex,
which is about seventy five percent of the volume of
your brain. I mean, it's what you see, as you said;
if you look at a brain, that's what you
see, the neocortex. And so it's a pretty
dominant part of what we think of as intelligence. You can't
consider it completely on its own. I mean, it's connected
to all these other things. And so we also study
(11:59):
those other things in service to the neocortex.
So we study the thalamus, and we study the cerebellum,
we study the basal ganglia, just because you have to know
how the cortex works with these other things. But primarily
our goal, and many neuroscientists' goal, is to understand
the neocortex, because that's what mammals have. We've
got a big one. You know, everything we think, most
(12:21):
of what we think about being intelligent, about our ability
to understand the world and generate language and see and
hear and so on, is the neocortex,
not one hundred percent, but most of it. Fortunately, also,
not only is it the biggest structure, but it's a
very, very regular structure. So you can look at this thing.
The neocortex is like a sheet of cells.
It's, you know, about the size of a large
(12:41):
dinner napkin and only a few millimeters thick, and it
gets wrinkly because it's stuffed in your head. And
everywhere you look on it, it looks remarkably complicated and remarkably
the same. The areas doing vision look like
the areas doing language, look like the areas doing
touch, look like the areas doing everything, really.
(13:02):
and so it's long been speculated that there's sort of
a common algorithmic principle that applies to everything we do,
all of our sensory inputs, all of our thinking, all
the language. It's hard to believe, but the evidence is overwhelming.
And so our research has really been to understand what
is that algorithm, the cortical algorithm, often associated with
(13:23):
the cortical column, you know, this repeated structure that seems
to underlie vision and hearing and touch and thought and
everything we do. And that's just an appealing
thing to try to understand. And we've cracked it. We've
actually, we actually cracked it. We understand what's going on.
That's awesome.
Speaker 1 (13:38):
Okay, so a couple of things, right, So the way
I sometimes phrase this to people is that if I
had a magical microscope and could show you a part
of the brain and you could see all the activity
running around in the cortex there, could you tell me,
is that visual cortex or auditory or somatosensory? And the
answer is you couldn't tell me, and I couldn't tell you.
Speaker 2 (13:56):
Right, it all looks the same. And as you know,
there's these experiments people
have done where, well, first of all, if you have
trauma to one part of the cortex, other parts will
pick up the same function. Also, people have re
routed sensory inputs to different parts of the cortex
in animals, and they seem to work.
Speaker 1 (14:13):
So, for example, you have visual information that, instead of going
to the visual cortex at the back, gets rerouted
to the auditory cortex, and that auditory cortex becomes visual cortex.
Speaker 2 (14:24):
Right. It's an incredibly powerful and flexible system. And mammals, you know,
we have a set of sensors, quite
a few actually, more than most people think, because the
skin has a lot of different sensors. But other animals
have different sensors and they have cortex too. And so
there seems to be this universal algorithm that can be applied.
(14:46):
And now we know it's a sensory motor algorithm that
can be applied. And we've spent decades trying to
figure this out, and we've cracked it. Oh, that's awesome.
Speaker 1 (14:55):
Just before you tell us about that, tell us
what a cortical column is.
Speaker 2 (14:59):
Okay. So imagine, we talked about, the neocortex is
a sheet of cells, like three millimeters thick. A cortical
column is a little section of that going through the
full three millimeters. It varies from about a third of
a millimeter to a millimeter in diameter. It's
not something you would see. It's not like
it's sitting there to be plucked out. But we know
they exist. And so within that, let's say it's
(15:21):
a three millimeters tall and a half millimeter wide cylinder
that goes across the cortex, it contains all the neural
machinery that you would see anywhere in the cortex, in
each cortical column. They look like a little
grain of rice in some sense, and you can imagine
lots of little grains of rice stacked next to each other.
Each cortical column gets input from, well, in parts
(15:44):
of the brain they get input from some patch of sensory input,
so from a patch of the retina, from a patch of the
cochlea, or a patch of your skin. Other parts get information from parts
of the neocortex, so the cortex is connected to cortex,
but each one is looking at a small area. If you
think about the primary sensory regions of the cortex, which
are quite large, they're getting input from a small sensory area, right?
(16:06):
And so people used to think that, well, if this
little column is only getting input from a small part
of the retina, it can't really be doing very much, right?
It can't be very smart. All it could do is
process a little piece of information there, and therefore maybe
it's going to detect an edge or something like that,
and there's a lot of evidence for that. But we
now know what happens is that the cortical columns
(16:27):
get input, over time, from different parts of the world.
So the eyes are moving like three times a second,
and so that cortical column may be looking at three
different things every second, and it can integrate how the
sensor is moving, how your eyes are moving, with what
it's sensing, to build models that are much larger than
it can sense. In the same way, you could
(16:47):
take your finger in a dark room and say, okay, David,
I want you to learn this new object. Let's call it,
you know, a coffee cup. You've never touched it before.
So what you do is you touch the coffee cup
and you move your finger along and around it, and as
you do, you build a three dimensional model of the cup,
even though you're only getting input from one fingertip. The
eyes are doing the same thing. It's surprising; you don't
realize this. So every cortical column, we understand
(17:10):
now, is doing this sort of processing, movement
information and sensory information, building what we call structured, or three
D, models of things in the world. So it's quite
different from how even most neuroscientists think about it, and there are
a lot of reasons we can talk about for how it
was missed for all these years.
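The column behavior Hawkins describes here, pairing each movement with the sensation that follows it to build a model larger than the sensor's view, can be sketched as a toy program. This is an illustrative sketch only, not Numenta's actual algorithm; the object, its locations, and the feature names are invented for the example.

```python
# Toy sketch of one cortical column doing sensorimotor learning:
# it senses only one "fingertip" of input at a time, but by pairing
# each sensation with the movement that preceded it, it accumulates
# a model of the whole object. (Illustrative only, not Numenta's code.)

# The world: a "coffee cup" defined as features at 2-D locations.
coffee_cup = {(0, 0): "base", (0, 3): "rim", (2, 1): "handle", (0, 1): "smooth side"}

class Column:
    def __init__(self):
        self.location = (0, 0)   # tracked internally via path integration
        self.model = {}          # learned features-at-locations

    def move(self, dx, dy):
        # Update the tracked location from the movement signal alone.
        x, y = self.location
        self.location = (x + dx, y + dy)

    def sense(self, world):
        # Store whatever feature is sensed at the current location.
        feature = world.get(self.location)
        if feature is not None:
            self.model[self.location] = feature

# Explore the cup with a sequence of movements, sensing after each one.
col = Column()
col.sense(coffee_cup)
for step in [(0, 3), (2, -2), (-2, 0)]:
    col.move(*step)
    col.sense(coffee_cup)

print(col.model)
# The column now holds a model of the whole cup, even though it
# only ever sensed one location at a time.
```

The point the sketch captures is that the column never sees the whole cup at once; the model emerges from integrating movement and sensation over time.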
Speaker 1 (17:25):
So in the cortex, you have essentially six layers
of cells, and a column is all six layers,
going up and down. Think of it like layers of a cake,
and the column is, you're taking a straw and shoving it through
the top, and so you've got.
Speaker 2 (17:42):
Okay, got it, a straw through a cake.
Speaker 1 (17:46):
Okay, great. And so the idea is, if you're looking
at some column in, you know, primary visual cortex, yeah,
your point, Jeff, was that it's like
looking at the world through a straw.
Speaker 2 (18:00):
It only sees a little tiny piece of the world.
Speaker 1 (18:02):
But because the eyes are moving around, because you're exploring
the world, it's actually getting lots of pieces of information.
It's exploring the world in the same way that your
fingertip does.
Speaker 2 (18:12):
Right, and it has to integrate information over time, that's
the key, right, And you can literally do this. You
can look at the world through a straw, right, and
you can say, oh, what am I looking at? Well,
you can't tell. Then you start moving the straw and
then you can start to tell, and you can also learn objects
that way. So literally you can learn by looking through
a straw, which is sort of what one column
is doing. Got it.
Speaker 1 (18:33):
And in your model there are thousands of such columns
and each one of these is learning a model of
the world as it goes. So tell us about that. Right, right.
Speaker 2 (18:44):
So I think this idea that there are all these columns
is not a new idea, nor that they have this
fundamental algorithm. But we were, I think, the first
people to kind of figure out what it is and
what it's doing. So here's the trick of this thing.
You know, when you look out
at the world, you have a sense, everybody has a
sense of where things are. I have a sense of where
(19:06):
you are relative to me. I have a sense of where
this microphone is relative to me. I know where my
hand is relative to this cup. Now, it turns
out that if you have any kind of sense of location
in space, you have to have neurons representing it.
Nothing goes on in the brain if there aren't neurons
firing doing it. It turns out most of the machinery in
the neocortex is keeping track of where things
are relative to other things. So those six layers, all
(19:28):
those cells, at least half of that circuitry is tracking
where the sensory input is coming from in the world.
So if I move my finger over this coffee cup,
the part that's getting information from the sensor, like I'm
sensing an edge, for example. As I move my finger,
it has to keep track of where my finger is,
its location and its orientation relative to this cup.
(19:50):
It's quite complicated, but that's what it has to do
to build the models. And now we know how it
does it. There's all this evidence for it. So the
brain is just trying to keep track of where all of
its inputs are in the world, all relative to other things.
Then it builds up these three dimensional models of the world.
So tell us about how it does that then, right,
so you can think about when you're in high school,
(20:10):
you learned about Cartesian coordinates, x, y, and z coordinates, right,
and so if I wanted to say where is something?
Where are you relative to me? I might say, okay,
your nose is the origin, and I could say it's some
distance from here in, you know, X, Y and Z.
Well you have to have something like that. But brains
don't do it that way. They do it another way.
And this was some very clever research in the last
(20:31):
twenty years, where people discovered, in the entorhinal cortex and hippocampus,
these cells called grid cells and place cells, which actually
operate as reference frames. They are a way for neurons
to represent locations, and they work differently than X, Y
and Z, so there's no origin. It's kind of really
clever how they work. Nature has discovered a different
(20:52):
way of doing this. So yeah, make sure you tell
us a little bit about that. Well, okay, these
are well known things, like grid cells,
which are in the entorhinal cortex. What they do is, these cells,
if you take a set of them — individual cells
are not unique; any individual cell will, let's say,
fire at different locations in space — but if you take
a set of them, they're unique, and so you can
encode a unique location in space. And the key thing
(21:14):
about them is these cells automatically update as you move.
So the original grid cells represent where your body is
in a room, and as you move, it's called path integration.
It says, okay, you're moving in this direction at
this speed, so we'll just automatically update these neurons, and
that's how we know where you are. It's
what sailors used to do, dead reckoning. You just say, oh,
(21:36):
you know, I'm heading north for an hour
at three knots, therefore I've moved three miles in
this direction. So we know that these cells exist. They've
been well studied; people won the Nobel Prize for these things.
So we speculated that the same neural mechanisms, these grid
cells and equivalents, would be in the cortex, in every
cortical column, and sure enough they're finding that now. So
(22:00):
in all kinds of research now, they're finding in humans and
other animals that there are grid cell like structures in
cortical columns. And so what does that tell you? It
tells me that that's the mechanism the brain
uses for reference frames. And so literally, when you build
a model of something in the world, like a model
of a cup or a model of anything, it's essentially
what you're doing. You're just saying, here's a sensation, and
here's its location. Here's another sensation at a different location. Here's
(22:22):
another sensation at a different location. You add all these together
and you get a three dimensional model. You can say,
this thing consists of these features in these locations relative
to each other. And so literally, in our head we
build models of the world that are three dimensional analogs
of the physical things we interact with. And that's why
you appear three dimensional to me. You know, you're not
(22:43):
an image. You're a three dimensional structure because I have
a three dimensional model of humans, and I have a
special model for you, David.
Speaker 1 (22:51):
Okay, great, okay. So you've got these columns in the cortex.
They're building three dimensional models or keeping track of where
(23:12):
your fingertips are, where your eyes are. So we've got
these different windows into the brain. You've got these data
cables coming in carrying spikes. It's all spikes, but some
of them carrying visual information, some auditory, some touch.
Speaker 2 (23:25):
The brain doesn't know that, by the way, exactly right.
It's all spikes, exactly right. And so for any
particular column, it might only be getting a subset of
those. Tell us about that. Right, right. Well,
I'm not sure it's going to be a subset.
Speaker 1 (23:42):
But what I mean is, if I am
a cortical column that happens to be sitting in the
visual cortex, then I happen to be getting visual information,
but I'm not getting auditory. So.
Speaker 2 (23:50):
Right. One of the first things we had
to address with this theory is why does the world
appear unified? Right? I don't feel like, you know,
I don't feel like, oh, I'm touching something with my
hands and I'm looking at something else with my eyes.
It's all one thing. There's this cup, right, and I
feel the warmth of it, and I know it. I mean,
it's one thing. And yet we have all these
different models. So it turns out you have models of
(24:11):
cups that are tactile models. They're based on
how it feels. You have models of how it looks.
You might even have a model of how it sounds. Like,
this particular ceramic cup, I have an expectation of what it
sounds like if I put it down on this counter here, on a
ceramic counter. And yet these models are all independent,
but they're not completely independent. So there's these long
(24:32):
range connections in the cortex. They go from all different
areas, to the left side of the brain and the right
side of the brain and all over the place. There's lots
of different types. What they're essentially doing is they're voting.
They're all saying, like, my finger says, I think
I'm touching something that feels like a cup, and I
may not be certain. Another column says, I'm sensing something too,
but I'm not really certain. And
they very quickly reach a consensus: the only thing
(24:52):
that makes sense for all our inputs is we're all
looking at the same object. And so across
these long range connections, they settle into a percept. That's what
you perceive. You don't actually normally perceive the individual sensations
from your eye or your fingers. You just say, I'm
holding this cup in my hand, and it's one percept.
And so it's these long range connections and how these
(25:13):
columns vote all the time. This is why I can
flash an image in front of your eye and say, okay,
well, each column is looking at part of that image,
so who decides what the whole image is, right? And by the way,
I don't even have time to move my eyes. Once
I've learned objects, I don't have to move my
eyes to recognize them. That's what we call flash inference.
The reason is because each part of the
(25:35):
visual cortex has a hypothesis about what it might be seeing,
and they vote, and the only thing that makes sense
is the final thing they agree upon. So I have
to learn by moving my eyes, by attending to different
things, and my fingers. But I don't always have to
infer or recognize things by movement. I don't always have to.
I can just flash an image in front of you
and you say, I know what that is, and you
don't have time to move your eyes. This fooled a lot
(25:57):
of vision researchers for many years, because they assumed that
movement wasn't necessary, because I can flash an image
in front of you. But you can't learn that way.
You have to learn by attending to different things. Quite right.
Just so it's clear to the audience.
Speaker 1 (26:10):
So this issue about voting, it's not that they're all
submitting their votes to some central agency. It's that they're
all talking with one another simultaneously. And something about
the spike patterns settles into shape.
Speaker 2 (26:24):
Right, right. Well, we know exactly how this occurs. We
have models of it, and we've simulated it, and it matches the neuroscience.
It takes a little while for people
to get the sense of it. You're right, there's no
central voting tally. And
all the columns don't have to talk to all the
other columns. It turns out they only have to talk
to a few other columns. As long as everyone talks
to somebody and the whole thing is connected, they
(26:45):
don't have to have, like, a zillion connections. But it's
more like, you know, imagine neurons
are spiking, and I have five thousand
neurons representing what I'm seeing. That's not that many, actually.
So, five thousand neurons, and in the brain, we're getting
a little technical here, activations are typically sparse, meaning of
(27:07):
those five thousand cells, maybe only two percent, or one
hundred, are active at any point in time. The others
are silent. So I'm representing something by saying there's one
hundred neurons active out of five thousand. Now, if I
wasn't certain, I might say, oh, well, let's do this.
I'm going to say it could be object A, it
(27:27):
could be object B, could be object C, and I'm
gonna activate them all at the same time. So now I
have three hundred neurons out of five thousand simultaneously active.
Now, that might seem confusing, but it isn't, no trouble,
and everybody's doing the same thing. They're all
holding multiple hypotheses, and it very quickly becomes, you're
supporting this hypothesis and you're supporting that one. It happens simultaneously.
(27:49):
No one has to go through serially; there's no,
like, counting the votes, no let's try this hypothesis,
then this one. It all settles very, very quickly.
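The voting scheme described here, each column activating a union of hypotheses and settling by agreement with just a few neighbors, can be sketched as set intersection. This is an illustrative toy, not Numenta's implementation; the objects and the connectivity are invented for the example.

```python
# Sketch of column "voting" as set intersection (illustrative only).
# Each uncertain column activates a union of hypotheses -- like
# activating the sparse codes of objects A, B, and C all at once.
# Columns keep only hypotheses consistent with their neighbors.

def vote(column_hypotheses, neighbors):
    """One round of voting: each column intersects its hypothesis set
    with the hypothesis sets of the few columns it talks to."""
    new = []
    for i, hyps in enumerate(column_hypotheses):
        agreed = hyps
        for j in neighbors[i]:
            agreed = agreed & column_hypotheses[j]
        new.append(agreed if agreed else hyps)  # never collapse to empty
    return new

# Three columns, each uncertain; only "cup" is consistent with all.
columns = [{"cup", "bowl"}, {"cup", "can"}, {"cup", "bowl", "can"}]
# Sparse connectivity: each column talks to just one other column,
# yet the graph is connected, so consensus still spreads everywhere.
neighbors = {0: [1], 1: [2], 2: [0]}

for _ in range(3):  # settles within a few rounds
    columns = vote(columns, neighbors)
print(columns)
```

Note there is no central tally: consensus emerges purely from pairwise agreement, and because the graph is connected, every column converges on the single consistent object.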
Speaker 1 (27:57):
It's kind of a cool thing if you think about what
happens when you settle on a hypothesis and then you switch.
For example, looking at the Necker cube, this cube made
out of, yes, twelve lines.
Speaker 2 (28:09):
What you know?
Speaker 1 (28:09):
You see it one way, then you see it the
other way? What is it that allows it to switch?
Speaker 2 (28:14):
Right? Now, a Necker cube is a two dimensional image, right,
it's a two dimensional image of a three dimensional wireframe
cube or something like that. Right, And so it's not
three dimensional. It's really two dimensional, but your brain wants
to make it three dimensional, right because it doesn't know
two dimensional things that look like that, And so everything
(28:34):
we see, we try to fit into our models, right. Right.
We don't say, oh, that's a two dimensional image, it
can't be a cube. No, you say, oh no, that's
got to be a cube, because I know cubes. I don't
know anything that looks like that that's not a cube. So
it wants to settle on a hypothesis. It's like, okay,
well this corner is in front of that corner, and
this corner is behind that corner, the corner to the
left of that corner. It just has to do that
to fit its models. That's right.
Speaker 1 (28:53):
But why doesn't it land on a hypothesis and stick there?
Speaker 2 (28:56):
Well, I don't really know, but there are other people's hypotheses about this. The idea is that the evidence goes both ways, right? There are multiple hypotheses, and neurons have a way of getting tired of what they're doing. After a while, they say, you know, literally, I'm not going to keep firing on this forever. You know, things are changing in the world. We don't just get stuck. So there are various speculated mechanisms by which neurons, and
(29:21):
it's been observed, will sort of, you know, say, okay, I'll be active a little while and then I'm going to stop. Let someone else try something, right?
Speaker 1 (29:28):
Yeah. But what it means is that the other hypothesis has to be kept alive somewhere, somehow.
Speaker 2 (29:33):
Well, maybe not. Maybe it's just, I had this hypothesis, I locked in on it, and now I'm going to say that's no longer possible; just go back to square one, what is possible, you know? So it's not like I have these two images in my head, conceptually or perceptually. You don't feel that way, right? You only see one or the other. You lock in on one.
(29:53):
The other is forgotten. But then if I, say, disable the first hypothesis, we're not going to allow that to be active anymore, then it's, okay, what's possible? This one's possible, I'll switch to that one. So it's not like they're both active at once. One's active, and then it gets tired, and then the other takes over.
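The fatigue mechanism being speculated about here can be sketched as a toy simulation. To be clear, this is a hypothetical illustration rather than a real neural mechanism, and every parameter is invented: two interpretations receive equal evidence, the active one accumulates fatigue while the idle one recovers, and the percept flips once the winner is sufficiently worn out.

```python
# Toy sketch of the "neurons get tired" speculation behind Necker-cube flips.
# Hypothetical illustration only: the fatigue/recovery/threshold values are
# invented, not taken from any neural data.

def simulate_flips(steps, fatigue_step=1, recovery_step=1, threshold=6):
    fatigue = [0, 0]   # accumulated fatigue for each interpretation
    active = 0         # index of the currently perceived interpretation
    percepts = []
    for _ in range(steps):
        percepts.append(active)
        fatigue[active] += fatigue_step                           # winner tires
        other = 1 - active
        fatigue[other] = max(0, fatigue[other] - recovery_step)   # loser recovers
        # Switch only once the active interpretation is markedly more fatigued.
        if fatigue[active] - fatigue[other] > threshold:
            active = other
    return percepts

print(simulate_flips(30))
# → runs of seven 0s alternating with runs of seven 1s: the percept keeps flipping
```

Note that neither interpretation is "kept alive" while the other is perceived; only the fatigue traces persist, which matches the point made above that the losing hypothesis can simply be reconstructed when the winner is disabled.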
Speaker 1 (30:09):
So, coming back to the main thing, one part that I want to return to is just this issue that a particular column might only be receiving touch information, another column might be receiving only auditory information, and so on.
Speaker 2 (30:23):
Well, they build independent models, right? I could have a tactile model of an object and a visual model of an object; they're not the same. The visual model of the object will have color; perhaps the tactile one will have temperature and texture and things like that. So they're different models. But because they can vote, you have a single percept of it. Yeah, okay.
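The voting across independent column models described here can be caricatured in a few lines. This is a toy sketch of the idea, not Numenta's implementation; the object names and the feature each column senses are invented for illustration. Each column keeps a set of candidate objects consistent with its own evidence, and the consensus is whatever candidate survives every column's vote.

```python
# Toy sketch of column voting: each column maintains its own candidate
# objects, and the consensus percept is the candidate all columns support.
# Objects and features are invented for illustration.

def vote(column_candidates):
    """Intersect every column's candidate set; what survives is the consensus."""
    return set.intersection(*column_candidates)

# Each column senses one feature and lists the objects consistent with it.
touch_column   = {"coffee cup", "glass", "vase"}   # feels a curved surface
vision_column  = {"coffee cup", "bowl"}            # sees a handle
thermal_column = {"coffee cup", "teapot"}          # feels warmth

print(vote([touch_column, vision_column, thermal_column]))
# → {'coffee cup'}
```

No column sees the whole object, and nothing tallies the votes centrally; the consensus is simply what remains compatible with every column's evidence.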
Speaker 1 (30:42):
And one of the things that's important here, which you and I both emphasize a lot in our books, is that all we are ever seeing is our model of the world, right? And so we don't have any direct access to what's actually out there.
Speaker 2 (30:55):
And so the fact.
Speaker 1 (30:57):
You mentioned the binding problem earlier, though not by name. The binding problem is this issue: when the coffee cup is here and it's moving, how come the color doesn't bleed off the cup?
Speaker 2 (31:06):
And how come it seems like one thing, and so on. The binding problem is a poorly defined problem, exactly. It means a lot of different things to a lot of different people. So you've got to be really careful. If I say, let's talk about the binding problem, I might have a different perception of what the binding problem is.
Speaker 1 (31:20):
To me.
Speaker 2 (31:21):
The binding problem is the one I've already discussed, which is, you have these different sensory inputs, but somehow they lead to a single percept, and you can switch back and forth. It's like, how do I bring these things together? How do I say these are all the same thing? And people used to think of the binding problem like, oh, if I have
(31:43):
the auditory cortex and the visual cortex and the somatosensory cortex, touch, then they must all project to someplace where they are bound together into a single model. And we flip that on its head. They don't converge like that. They bind together just through long-range connections. There's no place where that has to happen. There's nobody sitting on
(32:03):
top of it saying, hey, what's your vote? What's your vote? So we don't need one model that incorporates all the aspects of objects. We have independent models that we can invoke as needed, and they all vote to reach a common consensus.
So I have no problem navigating, you know, doing things in the dark. I have no trouble doing things just
(32:23):
by vision. I can do things sometimes by audition. You know, the same things are going on; I have the same model of the world, right? You know, if I'm walking at night between my bed and the bathroom and it's pitch black, I still have the same model of the house. I still know where the door is going to be and everything else. You know, I can do with touch what I'd do with vision, right? So there isn't a central model that says, here's a
(32:45):
model of my house for touch and vision and hearing. It's all these independent models, right?
Speaker 1 (32:49):
And now, the reason you called your book A Thousand Brains,
Speaker 2 (32:53):
It's the Thousand Brains Theory, right.
Speaker 1 (32:57):
Precisely because you've got all these cortical columns and they're each making a little model of the world, and they're all talking to one another, right? So, you know: hi, this feels like a coffee cup. This looks like a coffee cup. It sounds like a coffee cup when it's placed down. This is the temperature of a coffee cup. And so these are all talking with one another.
Speaker 2 (33:15):
So the reason I call it a thousand brains is that each cortical column is doing what the entire brain is doing, right? Each cortical column is a sensorimotor learning system. And when we ask where the model of something is (we've been talking about this), where is the model of the skull or the microphone or whatever,
(33:36):
so many things we know, where is that model? It's not in one place; it's in many different places. So there are a thousand models of coffee cups, a thousand models. You don't perceive that, but they exist. And so it was really trying to capture that original idea, that cortical columns are common and that there are all these
(33:57):
different models out there that are all different, and they can vote; they vote to reach a consensus.
Speaker 1 (34:02):
Yeah, and it's certainly consistent with the idea that, you know, for example, if someone is born blind and their visual cortex gets taken over by hearing and touch and so on, they are better at hearing and touch, presumably because they just have a lot more real estate.
Speaker 2 (34:14):
Devoted to it, right. Or that real estate gets a lot more practice too, right. So it's amazing how flexible it is.
Speaker 1 (34:21):
Yes. Given your model of the brain, let's talk about AI, and what you think is going on currently with LLMs, and what they're missing.
Speaker 2 (34:31):
Right. LLMs, that's interesting. Well, let me start with the criticism of AI in general. Okay. AI has always been focused on what they call benchmarks, like, how well can you solve problems? How well can this system recognize images? How well can it play chess? How well can it play Go? How well can it translate from one language to another? And you have all these benchmarks, and everyone competes against
(34:52):
these benchmarks, and they're kind of diverse, all over the place.
That's the wrong way to think about it. Let's use the computer as an analogy. When we say something is a computer, we don't base it on what it's doing; we base it on how it works. Alan Turing and John von Neumann defined what we now call a universal Turing machine, which is like, okay, if a system has memory and
(35:13):
a processor, and the memory has data and instructions, and you can change the instructions and change the data, it can do anything. And that is a computer. So I can say my toaster is a computer, even though it's a very limited computer, because it has one of those things inside. Right? If it were hard-coded with springs and wires and stuff, it wouldn't be a computer. But
(35:33):
because it has a little microprocessor that fits those definitions, it's a computer. So that's how we do it in the computer world. We say, these are the functions it has to perform, and you can apply it to big problems, little problems, different types of problems, all over the place.
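The stored-program idea invoked here, one memory holding both data and instructions, stepped through by a simple processor, can be illustrated with a toy interpreter. This is only a sketch, nowhere near a true universal machine; the tiny instruction set is invented to show that changing what's in memory changes what the machine does.

```python
# Toy stored-program machine: one memory array holds both the instructions
# and the data they operate on. The instruction set is invented for
# illustration and is far from a full universal machine.

def run(memory):
    pc = 0    # program counter: where in memory the next instruction sits
    acc = 0   # accumulator register
    while True:
        op, arg = memory[pc], memory[pc + 1]
        if op == "LOAD":
            acc = memory[arg]       # read a data cell into the accumulator
        elif op == "ADD":
            acc += memory[arg]      # add a data cell to the accumulator
        elif op == "STORE":
            memory[arg] = acc       # write the accumulator back to memory
        elif op == "HALT":
            return memory
        pc += 2

# Program: add the values in cells 8 and 9, store the result in cell 10.
mem = ["LOAD", 8, "ADD", 9, "STORE", 10, "HALT", 0, 2, 3, 0]
print(run(mem)[10])  # → 5
```

Swap in a different program and the same "processor" computes something else, which is the point of the analogy: being a computer is about this organization, not about any one task.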
And in AI, we've been focused on this idea of, oh, benchmarks, you know, and we always want to beat some human. Well, take a dog. Almost everyone who has a dog says it's intelligent, right? But it doesn't have language,
(35:56):
it doesn't play chess, it doesn't play Go. But why do we say it's intelligent? Because we can tell that dog has an internal model of the world. It's kind of like my internal model. It knows where the door is; it knows how to go on a walk. And so why focus on this issue of, well, it's not intelligent because it doesn't play Go better than the best human player? So I think part of the problem was that people didn't know how
(36:17):
brains worked, and so if you don't, what are you going to do, right? Well, now we know. We know enough to build this stuff. So I think that's what the future is going to be. We're going to say AI systems don't have to be like humans. They don't even have to do the same things humans do. Some of them are going to be dedicated to very focused tasks, and some are going to be very broad. It might be, you know, engineers building space stations, all
(36:37):
this huge range, but they're all going to work on the same principles that biology has discovered. Today's AI doesn't work on those principles, you know, most of it. If you talk about the large language models, these are transformer models. We feed in a string of tokens, basically words or word-like things, and it just learns the structure of that string, and it's very good at what it does.
(36:57):
But there's no inherent knowledge of the actual world. It doesn't have a three-dimensional model of the world. If someone's written about something, it'll tell you about it, but it can't experience it itself. So you couldn't send one of these AI systems into space and say, you know, go to Mars, explore, see what's out there that we can build things with, and here are some tools, start building a structure. It
(37:18):
just, there's no way it's going to do that; it's not going to happen. But in contrast, the tools we're working on can do that. That's what humans do, and that's the promise of AI. It's not just, you know, targeting things that humans can do, high-level things like, you know, translating language or writing poems or things like that. It's really, how do you build a system that understands the world and knows how to act in that world? And
(37:39):
that's the key.
Speaker 1 (37:56):
One of the things you wrote in your book that I thought was great was, uh, you address this issue of the existential threat of AI that a lot of people are banging on about, and you don't think it's a threat.
Speaker 2 (38:07):
I don't think it's a threat. I mean, you have to tease it apart, because there are different existential threats. You know, one is called the alignment problem: all these AI agents, you're going to tell them what to do, but they won't be aligned with our values. And I'm just saying they don't have any values; it's just so far from reality. Once you understand
(38:28):
how brains work, you see it's not doing any of that stuff. It's hard for me to give a succinct answer to this, but I don't think that today's AI systems have any of these problems. They're not going to run away. They're not going to have their own desires. They're not going to say, hey, I'm awake, I need to survive, you know.
Speaker 1 (38:48):
Because these current large language models are just statistical parrots that are taking in masses of language and spinning language back out.
Speaker 2 (38:55):
Right. And you can apply them to robotics and other things, but they're still going to be statistical parrots, exactly. And by the way, they lack most of the human brain. As we talked about earlier, the neocortex is the biggest part of the brain, but we have a lot of other parts of the brain, our emotional centers, and so much of what makes us human, our drives and motivations, is mostly not the neocortex. Right? There are
(39:18):
these other things. And if you provided an AI system with those other things, I might start worrying about it. But if you're just trying to model stuff, it's no threat. We just assume that some AI system, because it can spew back language, is going to think like us and be like us and have our same motivations. It's nothing like that at all.
Speaker 1 (39:38):
So tell us about the Thousand Brains Project and how you're going to make this happen.
Speaker 2 (39:42):
Right. So we'd been working on this theory for decades, really, and maybe five or six years ago we really had some breakthroughs and it all sort of came together. And we said, well, I always thought that this is the way we're going to build truly intelligent machines. And this was at the same time that deep learning and transformers were taking off, with all this excitement about them. But that didn't distract us. We said, okay, let's see
(40:03):
if we can start building this stuff. So for a couple of years we had a small team that was trying to implement the Thousand Brains Theory: modeling cortical columns, the voting, multisensory things, all this stuff. And we decided earlier this year that the best way to go forward was to do this as an open-source project. We hadn't actually told people we were doing this before. So we've created the
(40:24):
Thousand Brains Project. We're taking all of our code and putting it in open source. We're taking the patents, and we have a lot of patents, and putting a non-assert clause on them. We've done that: a non-assert clause on our patents. We've hired a team of people, like an open-source project manager for the outside contributors. We've already got quite a few people interested. We've already received some funding from the
(40:48):
Gates Foundation for this, significant funding, which will help fund the project for a couple of years. There's a guy named John Shen at Carnegie Mellon University who's building silicon to implement cortical columns. So there are people around the world who've been excited about our work and following it and want to join in. So we figured, let's get them all together, let's build a framework, an open-source project. And so we built out this team.
(41:10):
It's being run by Viviane Clay, who's just brilliant, and the technical side is led by Niels. And so we're just starting this, you know. We've talked about it, but we haven't officially launched yet, because not everything is open yet, and there's a lot of stuff you have to do to make the
(41:31):
whole thing work. But we're going full bore on this, and my hope is that anyone who's excited about the work, and there are quite a few people, can join us and work on this and propel it forward, and really create what I think is not only an alternate form of AI, sensorimotor AI based on brain principles,
(41:52):
but actually the ultimate, primary source of AI, which is brain modeling: the Thousand Brains Project. This is amazing. So how do people get involved in this?
Speaker 1 (42:01):
Uh?
Speaker 2 (42:01):
You can just go to our website, numenta.com, and, you know, there's a lot of information already, tons of information. We have all the stuff we've accumulated: documentation, code, videos, all of that is up there, plus tutorials and so on. So you can just go to numenta.com. It'll be obvious how to sign up to be informed of what's going on or how to get involved.
Speaker 1 (42:23):
Great. So a listener to this podcast says, I want to get involved and understand more about this thing. They go to numenta.com and they can...
Speaker 2 (42:29):
They can. First of all, they'll sign up to get notified about things that are happening. They can get educated on the whole project. I don't think they can contribute code quite yet, but that will happen within a month or so. It'll be obvious how to get started. There's a lot of information to learn. If you haven't already, you might want to start just by reading the book,
(42:51):
A Thousand Brains, because it gives you not only the basics of the theory, but also the vision of how this is going to play out over time.
Speaker 1 (42:58):
And so the idea, just so I'm straight on this, is that a person can download the code and run this model?
Speaker 2 (43:06):
Right. First, we're making it so that you can do that. You can run our current experiments, you can recreate them, you can apply them in different ways. Great.
Speaker 1 (43:16):
So something that you and I have in common, that we're both obsessed with, is this idea that we're living inside our own internal models, that this is all a construction. And you had a line in the book that I loved, which is that if we had different sensors, picking up different information from the world, we would have a different perceptual experience, a completely different experience of the universe.
Speaker 2 (43:40):
Well, maybe not completely. Like, a blind person who's learning the world through touch, and a person who is deaf, and a person maybe who has sensory problems in his hands, they will all end up with a similar structure.
Speaker 1 (43:56):
Sorry, but what I mean is not in terms of how we pick up on the visible light range, but if I pick up on infrared and you pick up on radio waves.
Speaker 2 (44:03):
Okay, right. If you really did that, then you would have a different view of the world. Like, take the issue of color. It's often said that bees, you know, see into the ultraviolet and we don't. So what looks to us like a white flower is, to them, this beautifully colorful, variegated flower.
Speaker 1 (44:20):
But let's say you saw a totally different part of the electromagnetic spectrum, so you see in the microwave range. The question is, would we have color at all?
Speaker 2 (44:29):
I don't know; it's hard to say, right? There's an underlying, really interesting philosophical problem called qualia, which is, why does color feel like color, right? And why doesn't it feel like sounds or tactile sensations? It's an interesting challenge to understand that. I've written about it a bit.
Speaker 1 (44:49):
Yeah. Do you have a hypothesis about this? I'll tell you what mine is, but it's always been sort of half-baked, which is, I think the structure of the data coming in defines the quality.
Speaker 2 (45:00):
I don't know why or how that's true.
Speaker 1 (45:01):
But, you know, with the eyes, you've got two two-dimensional sheets of data coming in, and so vision feels like something. With hearing, it's a one-dimensional signal just going up and down, vibrating your eardrum, and that feels like something. You don't confuse vision with hearing; those are completely different worlds to you. My interest has been in what happens when we feed in new structures. We've done
(45:23):
a lot of interesting stuff in this area.
Speaker 2 (45:25):
Exactly. Would you have a completely new quale? Is it possible? I mean, certainly you can imagine it. First of all, I agree with you: again, it's all spikes, right? So there are no color spikes, there are no heat spikes. It's just spikes. And so obviously the different quality has
(45:45):
to come about somehow from the structure of the data, spatially and temporally, and also sensorimotor-wise, you know, like how things change as you move, and I think that's a big part of it. So I agree with you on a fundamental level that it has to be something in the data and nothing else. And we can then ask ourselves something like, well, imagine you've been blind
(46:06):
your whole life. You don't have a sense of color. You've never experienced color, and so to you it would be a kind of mysterious thing. Someone might say, well, can't you tell that's this type of orange and that's that type? And you'd say, what are you talking about? Right? You'd have to accept that they have some super sense and the world looks different to them, because they have vision and you don't. And they may be able to sense things that you can't.
(46:28):
It's like braille: I could try to read braille, but if you're not a braille reader, it feels like, what is this stuff? It's a blur, right? And a braille reader says, oh no, I feel everything there, right? So we can ask ourselves questions like, what's the world like to different people? And sometimes we'll end up with a similar model. Like, no matter what sensors you have, you and I would have the same model of the physical structure of a coffee cup. But other times
(46:51):
it could be quite different, you know, certainly if you start sensing parts of the radio spectrum or other things. You know, one of the things I've always wondered is, what would it be like if you had smell sensors on your fingertips, right, and then everything you touch... Well, we kind of have that. We have temperature sensors, and we have touch with all
(47:13):
kinds of receptors. But if I could smell with my fingers, you could tell what chemicals were on the surface of objects. This is what dogs do, you know. Dogs don't just smell; they stick their nose right on the thing and they smell it, then they move to the next spot and smell it. Dogs build this three-dimensional structure of smells. We don't have that; smell for us just kind of wafts in from some direction, right? Dogs have this incredible model of the
(47:33):
world, a smell model, and it's hard to imagine what it is. But I'm sure they have it. So I think it's fun to think about these things. And, you know, in the future we'll build machines that perceive the world differently than we do. And that'll be great.
Speaker 1 (47:46):
Yeah, Okay, Jeff, this has been wonderful.
Speaker 2 (47:50):
Thank you. Thanks, David. It's always great talking to you, and I enjoy it; it's a lot of fun. And I love your podcast.
Speaker 1 (47:58):
So that was Jeff Hawkins, theoretician and author of A
Thousand Brains.
Speaker 2 (48:08):
Now.
Speaker 1 (48:08):
I love his model because it builds on previous research
and gives us a possible starting point for how this
whole system might be working. This is a view of
the brain in which you don't have just a single
model of the world being constructed, but hundreds of thousands
of little models, each viewing the world through their little straw.
(48:29):
And these models are independent, but not completely independent: they communicate with each other and they vote, and in this way the whole system converges on its best guess of what's going on out there in the world.
And by this mechanism we construct a full three dimensional
representation of the environment around us, with its sites and
(48:52):
sounds and three dimensional structure. So this gives us a
clear framework for thinking about the neocortex. Now, we might not know for a while whether this answers everything, or whether it needs some tweaking, or whether there are far better models coming down the pike. But what I absolutely
love about this is that this is where the endeavor
(49:13):
of science shines. Taking something that seems insanely complex, eighty
six billion neurons with two hundred trillion connections, something of
such vast complexity that it bankrupts our language, and saying, wait,
what if there's a really simple principle at work here?
What if there's a way that we could reduce all
(49:35):
that complexity by just looking at this from a new angle.
So let me give an analogy here. Just think about
what it would be like if you had a magical
microscope with which you could look into a cell and
into the nucleus in the middle. What you would see
is mind boggling complexity. There. You'd see millions or billions
(49:56):
of molecules racing around and interacting and doing god knows what,
and you'd say, wow, there's no way.
Speaker 2 (50:04):
We're ever going to understand this.
Speaker 1 (50:06):
But then Crick and Watson come along and say, actually, the important thing is this DNA molecule and keeping the order of these base
Speaker 2 (50:17):
Pairs, and all the rest is housekeeping.
Speaker 1 (50:21):
And suddenly the fog of confusion lifts. Now something that
seemed well beyond us can be described in a sentence
or two, and science leaps forward and things move fast
from there. I worked with Francis Crick when I was
in my postdoctoral years, and now I look around me
at Stanford and Silicon Valley, and there are thousands of
(50:44):
laboratories and companies doing amazing work with genomes, and their
existence results entirely from this one simplifying insight about DNA
in nineteen fifty three, that new model that suddenly clarified
what is happening inside the nucleus. By the same token,
this is what we're trying to do with the brain.
(51:07):
Brains appear to be ferociously complex, and yet we have
lots of brains running around the planet.
Speaker 2 (51:15):
We've got eight point two billion of them.
Speaker 1 (51:18):
So something must be straightforward about their architecture, or else
Mother Nature wouldn't be able to build these over and
over with such reliability. You couldn't drop this massive quantity
into the world and have them all functioning well unless
there was something pretty uncomplicated about building and running a brain.
(51:39):
So that is the overarching game of science: to take the overwhelming complexity around us and to find new angles to look at things, to reveal simplicity. Go to eagleman
dot com slash podcast for more information and find further reading.
(52:02):
Send me an email at podcasts at eagleman dot com
with questions or discussion, and check out and subscribe to
Inner Cosmos on YouTube for videos of each episode and
to leave comments until next time. I'm David Eagleman, and
this is Inner Cosmos.