
February 3, 2025 • 56 mins

A brain's 86 billion neurons are always chattering along with tiny electrical and chemical signals. But how can we get inside the brain to study the fine details? Can we eavesdrop on cells using other cells? What is the future of communication between brains? Join Eagleman with special guest Max Hodak, founder of Science Corp, a company pioneering stunning new methods in brain computer interfaces.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Why is it so hard to reverse engineer the brain?
Can't we just measure the signals in all of the
brain cells and then figure out the neural code? And
if not, why not? And what does this have to
do with solving vision loss and eavesdropping on the activity
of cells using other cells and communication between brains using

(00:29):
something other than conversation or observing and understanding and maybe
changing our own experience of the world. Welcome to Intercosmos
with me David Eagleman. I'm a neuroscientist and author at
Stanford and in these episodes, we sail deeply into our

(00:49):
three-pound universe to understand the mysterious creatures inside the
eighty-six billion neurons that are chattering along with tiny
electrical and chemical signals producing our experience. Now, today's question
is how do you actually get inside the brain to

(01:12):
study it? After all, we know that the brain is
the root of all of our thoughts and hopes and
dreams and aspirations and our consciousness. And the reason we
know this is because even very small bits of damage
to the brain change who you are and how you
think and whether you're conscious. Note that other parts of

(01:36):
your body, like your heart, can get completely replaced by
a machine and you are no different. Or you can
lose your arms and your legs and you can still
be conscious, or you can get a kidney replacement and
you're still thinking about your life and your family and
what you need to do tomorrow. But even a tiny

(01:56):
bit of damage to the brain caused by let's say
a stroke or a tumor or a traumatic brain injury,
even a small bit of damage can change you entirely.
Even if you don't lose your consciousness, you might lose
your ability to think clearly, or to speak, or to move,
or to recognize animals or understand music, or understand the

(02:21):
concept of a mirror, or a thousand other things that
have taught us over the centuries about the complex landscape
of this three pound inner cosmos. So we know the
brain is necessary for our cognition and experience, but we
didn't get to that understanding through detailed studies of the

(02:41):
intricate circuitry, but instead mostly through observations of crude damage.
So there's still an enormous amount that we don't understand
about how the whole system works. We only have a
sense of how it breaks. It would be like if
you were a space alien and you looked at cell

(03:02):
phones and discovered that if you zap the phone with
your laser, then it doesn't make calls anymore. Okay, that's important,
but it doesn't tell you how telecommunication works in terms
of base stations and frequency bands and compression and sim
cards and everything else. For that, you would need to
take off the cover of the cell phone to figure

(03:24):
out what the billions of transistors are actually doing. And
that's really our modern challenge in neuroscience to study this
incredibly detailed system more directly. So why is progress still
so slow on that front? Well, it turns out it's

(03:45):
very hard to study the brain's billions of neurons directly,
this pink, magical computational material that mother nature has refined
through hundreds of millions of years of evolution. Why? Because this
is the computational core, and so mother nature has protected

(04:06):
it in armored bunker plating. So that's the first challenge.
The brain is tightly protected inside the prison of our skull.
But that's only part of the challenge, and that can
be addressed by careful neurosurgery. The bigger difficulty is that
even when we can get in there by drilling a
little hole in the skull. What we find is an

(04:28):
incredibly densely packed device made of very sophisticated units that
are microscopically small, and there are almost one hundred billion
of them, which is about twelve times more than there
are people on the planet. And each one of these
neurons is sending very tiny electrical signals tens or hundreds

(04:52):
of times per second, and these signals zoom down axons
and cause chemicals, neurotransmitters, to be released. And it's not
generally clear how to read this insanely dense circuitry to
understand how these trillions of incredibly small signals racing around

(05:13):
in there lead to a particular outcome at the scale
of a human like you move your arm, or you
have a craving for pistachios, or suddenly you're reminded of
the poem Ozymandias, or whatever. What is the relationship between
this small scale and the large scale? So how do

(05:34):
neuroscientists try to decode this incredible complexity? The answer is
by marrying the technology that we have like computers, directly
to the cells of the brain. And this is what
we generally call a brain computer interface or BCI. We
use that term to refer to essentially anything that allows

(05:56):
direct communication between the brain and an external device.
So people use these to control wheelchairs or robotic arms,
or type directly onto a screen or speak through a
synthetic voice. The idea is to use BCIs to restore
functions in people who have lost them to conditions like paralysis or blindness,

(06:19):
and someday perhaps to enhance the capabilities of healthy people. Now,
how does a BCI actually work? People sometimes think about
BCIs as measuring electrical activity on the scalp with an
EEG, an electroencephalogram, and that counts, but you don't get very
much detail from the outside of the skull. So the

(06:39):
more sophisticated forms of BCIs involve measuring brain activity directly
from the cells. And the main way to do this
is with small metal electrodes that you insert into the
brain tissue. And with these electrodes you can send little
electrical zaps to stimulate the neurons, and you can

(07:00):
also listen to hear when the neurons themselves are giving
off small electrical signals. Now, this has been a technology
that researchers and neurosurgeons have used for many decades, but
it's still a challenge because you have to drill a
hole in the skull and these little, tiny metal electrodes.
Although they're tiny, they're actually pretty big from the point

(07:21):
of view of neurons. From the point of view of
the neurons, it's like inserting a tree trunk. It damages
the tissue. Now you've probably heard of companies like Neuralink.
They're still inserting electrodes just like neurosurgeons have done for decades,
but they're working to make them smaller and finer and
robotically inserted, and also wireless in their communications so the

(07:44):
information can go back and forth without having a cable there.
So it's a better version of the same idea of
sticking electronics into the brain. But are there new ideas
about how to read and write to brain cells, about
how to interface with the brain. Today, we're going to
talk about what is at the cutting edge, and so

(08:04):
for that I called a colleague of mine who is
shaping the future of BCI technology, Max Hodak. Max is
an unusually brave thinker. He started studying brain machine interfaces
as an undergraduate, and while most people would be thrilled
to simply be a part of that, he was already
thinking about the ways that parts of the science were

(08:27):
inefficient and could be improved. Some years later, he went
on to be a part of the co founding team
at Neuralink and he became the president, and then four
years ago he left to found his own company, Science Corporation.
When I visited him at Science Corporation recently, many of
the things I saw there would have seemed like science

(08:47):
fiction fantasy just a few years ago. So here's my
interview with Max Hodak. You started a company called Science Corp,
which we'll refer to as Science. Tell us about
Science, because it's so exciting what you're doing there.

Speaker 2 (09:05):
Our main focus at Science is restoring vision to people
that have gone blind because they've lost the rods and
cones in the retina. I had not worked on the
retina before, but I had this thesis that the
technology was there, that this would be possible. There's,
I think, two different ways to do this that people
have been thinking about in the retina.

(09:26):
There's a technique called optogenetics, where you use a gene therapy
to deliver a little bit of DNA to the cells
of the optic nerve to make them light sensitive, which
you could then activate with a laser. Or you could
put an electrical stimulator under the retina and drive the
remaining cells that are still there electrically.

Speaker 1 (09:43):
And to define that for just one second: the retina
is the lawn of cells at the back of the
eyeball catching the photons that are coming in through the front.
And so if you've got a problem where let's say
those cells have died for whatever reason, lots of reasons,
then what you're talking about is how do you
get those cells to catch the photons and
send their signals back along the optic nerve.

Speaker 2 (10:03):
Yeah, so I think you know to take a step back.
If you're thinking about getting vision into the brain, there's
a couple of different places you could think to do it.
The first is the retina. So the back of the
eye is the retina, which is this really nice two
D sheet of neurons and a big cable going into
the brain. So in some ways this is like a
really ideal interface to the brain. Evolution has done this

(10:25):
to give us vision. The first stop of the optic
nerve out of the eye is a structure in the
thalamus called the lateral geniculate nucleus, which is a very
deep structure in the brain. It's very old evolutionarily, and
there's about one point five million cells in the optic nerve.
There's about the same number of cells in the thalamus.
And then from there you go out to a much

(10:47):
larger number of neurons in cortex called primary visual cortex.
And so if you want to supply vision to the brain,
in some sense synthetically your choices are really in the retina,
in the LGN, or in V one, and
everywhere past the optic nerve gets much much harder. Nobody
has ever really shown the restoration of form vision by

(11:10):
directly stimulating either the LGN or V one. I mean,
people haven't even really shown the restoration of form vision
stimulating the optic nerve. The device that we're bringing to
market now that just recently finished its Phase 3 clinical
trial sits under the retina and stimulates a layer of
cells called the retinal bipolar cells, which are the first
cells past the rods and cones. And so this is

(11:30):
really in many ways the first opportunity to get a
visual signal back into the signaling pathways into the brain.

Speaker 1 (11:36):
So let's back up. How does your device work?

Speaker 2 (11:39):
So the device is called Prima. It's a pretty cool idea.
So it's a tiny little solar panel chip about two
millimeters by two millimeters, so it's really very small, and
there's if you look. If you look at it, you'll
see all these little hex grids on it, these little
hex tiles. Each one of those TXT tiles is a
photodiode and an electrode. So what we do is you
implant this under the retina in the back of the

(12:01):
eye where the rods and cones have degenerated, and the patient
wears glasses that have a laser projector on them, and
the laser projector projects the scene with laser energy onto
the implant in the back of the eye, and wherever
the laser energy is absorbed, it stimulates, and wherever there's
darkness in the scene, it doesn't. And so this
is a cool idea because there's no implanted battery, there's

(12:23):
no wires, there's no PCBs, there's no electronics other than
this tiny little chip. Because you send it both energy
and information simultaneously in the laser pulse. And so
it's tough to imagine how you would do
this more simply than this. And when you look at
past devices, so like a little over a decade ago,
there was a company called Second Sight that had a

(12:45):
retinal stimulator that is probably what people would be most
familiar with when they think about retinal prosthetics. It worked
very differently than the Science Prima implant. First of all,
it targeted a different layer of cells. It targeted the
optic nerve rather than the bipolar cells, which are
just much harder to stimulate naturalistically in this way. And

(13:05):
the second is because it was a conventional
electrical implant. You had this big titanium box attached to
the side of the eye. You had cables going in
through the eyeball to power it. This was a
four and a half hour surgery. Being able to just
put this little two by two millimeter chip of silicon
fully wirelessly under the eye with a little insertion tool

(13:28):
is a totally different game, and the clinical
trial results I think really speak for themselves. The first
time ever in the history of the world, as far
as we know, that blind patients have been able to
read again.

Speaker 1 (13:38):
Oh that's so amazing. So all of the electronics and
all that stuff is in the glasses themselves, which are
capturing the scene like a camera and zapping it back
with a laser to the chip.

Speaker 2 (13:49):
Yeah yeah, powering it, Yeah, basically like a solar cell.

Speaker 1 (13:52):
Congratulations on all your progress with that. It's an incredible device.

Speaker 3 (13:56):
Yeah.

Speaker 2 (13:56):
And also I should say we didn't develop this from
scratch ourselves. We acquired this from another company called Pixium,
which was based in Paris and had started
the clinical trial. Originally the technology came from a
lab at Stanford, scientist Daniel Palanker in the Electrical Engineering
department, who came up with the idea, did the early

(14:17):
work at Stanford that was licensed by Pixium. They started
the clinical trial, which we acquired and finished,
and are now bringing to market.

Speaker 1 (14:25):
Right, I'm so jazzed that you
guys are doing that, bringing it to market and
making it across the finish line. So that's what
you're doing in the retina for people who have lost vision.
Tell me what you're doing with reading from neurons?
So just before we get there: the challenge

(14:46):
with brain computer interfaces has always been, well, several things. One
of them is that you know, mother nature has wrapped
the brain in this armored bunker plating, so it's hard
to get to. But then when you get in there,
you've got eighty six billion neurons and you have to
figure out who's saying what. And the traditional way to
do this is to dunk an electrode in there, which

(15:07):
really damages the tissue. So obviously people have been trying
to make electrodes thinner and thinner. But you've got an
idea that you're working on which is amazing. Tell us
about that.

Speaker 2 (15:17):
Yeah. So there's no free space
in the brain. The brain is wet, it's squished together.
Evolution has really compressed as much as it
can into as small a space and an energy budget
as it possibly can, and so it has not
really left holes that we can take advantage of in there.
Evolution is extremely good at its job, and there's limits

(15:40):
to how small you can make an electrode. You can't make
a one-nanometer wire, because as
any electrical wire gets smaller, the resistance increases. There are just
real limits to how small you can make a recording electrode
before you lose the ability to distinguish the signal that
you care about, the biological activity, from the background noise. And

(16:03):
then on the stimulation side, this is actually worse because
there's real limits to how small you can make a stimulating
electrode before you start splitting water in the brain and
producing hydrogen and oxygen, and like, you really don't want
to be doing this. And so we think about, like,
what does an ideal neural interface look like? I think
one of the high level intuitions that I started with was, Yeah,
the brain is encased in this dark vault of a skull,

(16:26):
but it has to communicate with the world.

Speaker 3 (16:28):
There's like you, the.

Speaker 2 (16:30):
Brain is not telepathically connected to the outside world. I mean,
it's also important to realize that you're not seeing the
world out there, right? You're only ever seeing and
perceiving information that's arrived at the brain.

Speaker 3 (16:41):
And so how does it get there.

Speaker 2 (16:43):
All of the information that flows in or out of
the brain flows through a relatively small number of cables.
There's twelve cranial nerves and thirty one spinal nerves. The
optic nerve is cranial nerve two. The vestibulocochlear nerve
that carries hearing and balance is cranial nerve eight.
And kind of thinking about you've got this relatively small

(17:03):
number of wires, we can think about attaching to those
like we do for getting vision into the brain through
the remnants of nerve two. But this also kind of
got this idea going in the back of my mind:
can we grow a thirteenth cranial nerve? That really
feels like the ideal neural interface. Biology has given us
other examples of fiber bundles that get information in and

(17:24):
out of the brain for really any purpose that the
brain needs. Is it possible to add a thirteenth
biological wire that, instead of having an eye at the
other end or having a bunch of muscles at the
other end, had a USB-C port, basically. And so the
high level intuition here is like, what can we add
to the brain? How does the brain do this? Like
how does nature do this on its own? And the

(17:45):
answer is: it uses neurons. And so this kind of prompts
a question: what happens if we add more neurons to
the brain? And the answer is: they grow in and wire
up and give you these bidirectional chemical synapses. And so
this has led to an approach that we call biohybrid
like biohybrid neural interfaces, and it really feels like it

(18:07):
has the scalability that many conventional methods don't. Now there
are alternatives to electrodes. So tell us what a biohybrid
interface is. So a biohybrid neural interface is when we
take heavily engineered stem cell derived neurons in a dish,
we load those into the electronic device, and then what
you place into the brain is just the engrafted cells.

(18:31):
So we're not placing any metal or any like, no
electronic or mechanical component goes into the brain.

Speaker 1 (18:36):
Instead, you're growing.

Speaker 2 (18:38):
We basically graft these cells onto the brain through
an appropriate starting point, and then those grow out and form
new connections, just as kind of more of the brain.

Speaker 1 (18:50):
And this is because mother nature is really good at
growing cells into groups of other cells and so on.
So you're taking advantage of that.

Speaker 2 (18:57):
Yeah, we're letting biology do as much of the heavy
lifting as we can. Now, this creates other problems,
and I think smart people can say, well, now
you have a really complicated cell engineering problem to solve.
But if you can solve that in the meaningful way
that you have to, yeah, you can get biology to do
a lot of work for you.

Speaker 1 (19:13):
Yeah. So these cells that you're putting on there and
growing in you have heavily engineered these cells. So tell
us about that. Yeah.

Speaker 2 (19:21):
So there's a couple of things you need to do.
The first is it needs to be matched to the
immune system. Now, if you don't do this,
you can still make a cell therapy for a patient,
but you need to do it on an individualized,
per-patient basis. This is very expensive, and it can take a
very long time to make the other edits that
we need. And so the first set of editing that

(19:42):
we do is to make the neurons hypoimmunogenic, meaning that
they don't bother the immune system when you put them
in a patient.

Speaker 1 (19:49):
So how do you do that?

Speaker 2 (19:51):
This is a much longer topic. There's these things called
major histocompatibility complexes, and we need to suppress some
protein expression and force some other protein expression to basically
tell the immune system not to eat you, and
also that you are fine.

Speaker 1 (20:11):
And how far along are you on that pathway? Is
that solved?

Speaker 2 (20:14):
I mean, I wouldn't say that that's a solved problem.
I would say, as a field, there's several
standalone companies whose IP is hypoimmunogenic stem cells,
and so we are, I'd say, pretty close to the
state of the art in the field, but it's not perfect.

Speaker 3 (20:30):
Now.

Speaker 2 (20:31):
In the brain, the immune system tends to leave you
alone more than in many other areas. For example,
a lot of the work that's been done in gene
therapy so far has been done in the eye, because
the immune system tends not to overreact in the eye;
when it does and a subject
goes blind, this historically is a bad thing. And so
there's some areas where you tend to get more autoimmune

(20:52):
reactions, and some areas of anatomy where this happens less.
The brain is one of those areas: because around
the time of the surgery you're treating the patient with systemic
immunosuppressants anyway, and then once the blood-brain barrier has healed,
the cells being approximately hypoimmunogenic is probably fine.

Speaker 1 (21:25):
Okay, So you do that to these cells, you engineer
them that way, and then you stick them on so
that they grow in. But of course you're keeping the
cell bodies outside and then what are you doing with those?

Speaker 2 (21:37):
Yeah, So the next edit that we make is we
add a protein called a light-gated ion channel, also
known as an opsin, to these cells, which allows us to
fire them using light.

Speaker 3 (21:49):
And this is pretty important.

Speaker 2 (21:50):
So the device that the cell
is embedded in has two components around each cell. It
has a recording electrode, which allows us to detect the
state of the cell, and it has a tiny little
micro-LED, kind of like you'd have in your
phone screen, next to the cell. And so when we
want to fire a neuron, we turn on the LED

(22:10):
and that depolarizes the cell and sends a pulse into
the brain. And when that neuron receives input from the brain,
because it's grown out both inputs and outputs, we can
detect that with the electrode, and so being able to
optically stimulate using light and electrically record using
a capacitive electrode allows us to minimize
crosstalk between these so that we can do them both simultaneously.

Speaker 1 (22:31):
And they're sandwiched in between this. So the cell body
is sandwiched in between the little light and the little
recording electrode. And so you can say, for this guy,
I want to turn him on now, and I want
to record what he's doing through time.

Speaker 2 (22:42):
Yeah, it's not quite exactly one to one, but it's
pretty close.

Speaker 1 (22:46):
Great, And how many neurons can you grow in there
at once?

Speaker 2 (22:51):
Well, I mean, so there's the number of
electrodes in the device, or the number of channels in the device,
and then there's the number of cells, and then there's
the number of synapses that you get in the brain. And
these are slightly different things. So the chips that we're
working with right now have four thousand electrodes per fin,
and a fin is one of

(23:12):
these little sandwiches. Yeah, and so it's actually really
eight thousand per fin, because it's four thousand micro-LEDs and four
thousand electrodes. But we call this a four-thousand-channel
fin, and we're working on stacks of these to scale
this up to hundreds of thousands of channels
in a couple millimeters by a couple millimeters. But I mean,
you could load this with half a milliliter

(23:34):
of cells, which is easily millions of cells, and those can
form many billions of synapses through the brain.
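As a rough sketch of the channel arithmetic described above: the per-fin figures are from the conversation, while the number of fins in a stack is a hypothetical value chosen only to illustrate the "hundreds of thousands of channels" target.

```python
# Channel arithmetic for the biohybrid "fin" described above. The per-fin
# figures come from the conversation; fins_per_stack is a hypothetical
# illustration, not a stated device specification.
micro_leds_per_fin = 4_000   # optical write sites, one per cell position
electrodes_per_fin = 4_000   # electrical read sites, one per cell position

physical_sites = micro_leds_per_fin + electrodes_per_fin
print(physical_sites)        # 8000 physical sites, marketed as a 4,000-channel fin

fins_per_stack = 50          # hypothetical stack size
print(4_000 * fins_per_stack)  # 200000 read/write channels per stack
```

With a stack of fifty such fins, the four-thousand-channel unit scales into the hundreds of thousands of channels mentioned in the conversation.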

Speaker 1 (23:43):
Do each of these cells form about, let's say, ten
thousand synapses, or...

Speaker 2 (23:47):
I mean, it's tough to count them. You
can get the order of magnitude. People think it's
like maybe about a thousand synapses per cell, but
these are tough to actually count.

Speaker 1 (23:59):
Right. If you had a million neurons in there, you'd
get a billion synapses in the brain.

Speaker 3 (24:04):
Yeah, back of the envelope.
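The back-of-the-envelope estimate in this exchange can be written out explicitly; both inputs are rough order-of-magnitude figures quoted in the conversation, not measurements.

```python
# Order-of-magnitude estimate from the conversation: engrafted neurons
# times synapses per neuron. Both inputs are rough figures.
engrafted_neurons = 1_000_000   # "a million neurons in there"
synapses_per_neuron = 1_000     # "maybe about a thousand synapses per cell"

total_synapses = engrafted_neurons * synapses_per_neuron
print(f"{total_synapses:,}")    # 1,000,000,000 -> "a billion synapses"
```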

Speaker 1 (24:05):
Back of the envelope. And then so what you'd be
able to do is stimulate exactly as you want to, Okay,
fire number three hundred and seventy nine, now, fire number
one hundred and fifteen, and so on, and then record
activity going on there so you can read and write.

Speaker 3 (24:20):
Yeah, so you can read and write. And it's a
fairly complex.

Speaker 2 (24:23):
So you've got this transform between the activities
of the cells in your device and what's going on
in the brain. We don't think of it in terms
of the single unit activity. In the beginning of the field,
we were really thinking in terms of single neurons, and
in the very beginning, the first experiments that were done
in animals didn't have a model of brain activity really
at all. What they did is they place electrodes in

(24:45):
the brain and then say, when this neuron fires, the
cursor should go up, and when this neuron fires, the
cursor should go down, and you can just learn to
separate these things. So the brain is very plastic under feedback.
Now that works for a very small number of channels,
and of course the subject isn't learning to modulate those
neurons specifically. They're actually modulating big groups of neurons around

(25:06):
where the electrode is, and so as you go to
higher level control, that doesn't really work anymore. But the
brain has these abstract informational representations of things like intended
motor activity or face recognition or other objects

(25:27):
that it thinks about, and so we're still at the
early stages of learning to use these devices. It's a really different
type of BCI. But what we think
we're seeing is these cells would really join these cortical
representations and then just become part of the brain,
and you can do neuroscience on them like you would
any other part of the brain, except that the cell

(25:48):
body is right there in your device and really easy
to observe.

Speaker 1 (25:51):
What are the biggest challenges that you're facing in terms
of bridging these digital systems and these biological systems.

Speaker 2 (25:57):
Many of the hard problems here are not the
really obvious sexy ones. In fact, I realized the
other week that the very first piece of writing
that I put on the Internet was kind of this
sophomoric software rant about how, back circa two
thousand and eight, everyone felt like the hard problems here

(26:17):
were understanding the neural code and like real science to
study these like deep neuroscience questions, and it was kind
of for the technicians to figure out how to get
the electrodes in the brain, whereas actually the problem is
how do you get these electrodes in the brain. And
certainly the neuroscience has advanced a lot, and the neuroscience
is very cool, but a lot of the problems here
are things like packaging, which is a fancy term for

(26:39):
when you place an electronic device in the body, it's
going to be attacked, it's going
to be degraded, it gets encapsulated in scar tissue, and the
neurons are pulled away from you. There's these very harsh
chemical environments that try to attack and destroy your device.
It's important to realize that there are no truly passive
surfaces anywhere in the body, like even bone is constantly

(27:02):
getting remodeled and turned over and regenerated, and so when
you place one of these non-regenerating devices in the body,
it's going to be attacked. And so now we have
much better materials than we did ten years ago,
specifically things like silicon carbide, which is a really annoying material
to work with, but a very good encapsulant that does

(27:22):
not degrade in the body in the same way as
these older polymer encapsulants do. It's like, if you
look at the history of Prima, part of it is, like, how
did Pixium, the company we

Speaker 3 (27:30):
Bought this get here?

Speaker 2 (27:31):
They actually had an approved device in, I want
to say, twenty fourteen or twenty fifteen, called Iris, which was
a different retinal prosthesis, and it worked very differently. It
had a conventional electronics package, it required a battery. It
got on the market and then was
withdrawn, and it was withdrawn because of packaging failures. Basically,

(27:52):
the device didn't have an acceptable lifespan in human patients
once on the market, and that was like they were
using materials that were available at the time, which was
before we figured out, as a field, how to work
with things like silicon carbide. And that is an example
of a problem that enabled Prima to work. So Prima
is a full silicon carbide encapsulation, and it should last. I

(28:12):
mean there's now data out to six years in some
patients, and it should outlast these patients.

Speaker 3 (28:16):
It should last decades.

Speaker 1 (28:18):
Amazing.

Speaker 2 (28:18):
And so that's an example of like a big area
of progress in the last few years that people wouldn't
really think of.

Speaker 1 (28:23):
And so, what are some surprising findings or unexpected obstacles
that you've run into while doing, let's say, the biohybrid electrodes?

Speaker 2 (28:31):
I mean, biology, when it works,
can do a lot of things that we, humanity, are
just not at that level of capability yet. But also
in neural engineering, whether that means systems neuroscience
or BCI, you'll start in mice and then maybe you'll

(28:52):
work in an intermediate species like pigs and then eventually
end up in monkeys, then end up in humans. And
when you have an electrode or even something like optogenetics
that works basically the same in mice as it does in
monkeys as it does in humans. When you're engrafting neurons
into the brain, I mean, there's a big difference between
mouse neurons and human neurons, and macaque neurons are a different

(29:14):
thing entirely, and so you end up having to redo
a bunch of this work in each species that you
work in. And so every time we switch species,
there's a lot to relearn. And fifteen years ago now,
probably something like that, there was a major discovery of
the ability to turn any cell back into a stem cell.
This was a discovery called induced pluripotency; it won the
Nobel Prize a while ago. And that works really well

(29:37):
in rodents, it works really well in human cells. Turning
a macaque skin cell into an iPS cell, there's
just a bunch of little tricks that don't work as well.
And so the biology is pretty deep in all of
these areas.

Speaker 1 (29:51):
It's surprising that those are different, you know, just
given the shared evolutionary history.

Speaker 2 (29:57):
But yes, yeah, I mean there's a lot that's conserved,
but there's also a lot of little things that are
slightly different.

Speaker 1 (30:01):
Yeah, quite right. So big congratulations on where Prima is
right now. That's so exciting. On the biohybrid electrodes, that
is, growing neurons into the brain and then
being able to read and write that way, when do
you think that's going to be ready in humans? What's
your prediction.

Speaker 2 (30:16):
I think that the first human engraftment will happen around
twenty thirty, so probably five years.

Speaker 1 (30:24):
And what is the first thing you're going to tackle
once it gets into humans?

Speaker 2 (30:28):
Well, I mean, it's a communication device, and so motor
decoding, speech decoding, all of that should be possible. In
the near term, the figure of merit for any brain
computer interface for communication is a bandwidth measured in bits
per second. The record for keyboard-and-mouse kind of low-dimensional
motor decoding is about seven bits

(30:50):
per second, which is, I think, where Neuralink's current participants
are. There's a group at UC Davis led by Nick
Card and Sergey Stavisky, who recently showed speech decoding from
precentral cortex that gets about twenty to twenty-five bits per
second. Human language is routinely rated at about forty bits
per second, so you'd think that you can

(31:10):
asymptote towards that. So I think in the near term,
what we're looking for is a forty-bit-per-second communication prosthesis. Longer
term, this is where neural engineering and BCI diverge a
little bit. There's a lot of interest internally in looking
at how this is applicable in stroke, or other areas
where you've lost cells, where conventional BCI techniques really won't
work in the same way,

(31:33):
and potentially even organic neurodegenerative diseases. But those are very
hard, and I don't want to over promise on the
timeline there.

Speaker 1 (31:40):
Now, if we were just going to blue-sky here: part
of the mythology about BCIs is that at some point
everyone will have one of these, you know, for communicating
faster with their cell phone or their computer or whatever.
To what degree do you think that's hype? Let's imagine
one hundred years from now. Where do you realistically think
it's going to be in terms

(32:01):
of the amount of market it has?

Speaker 2 (32:02):
Yeah, I mean, one hundred years from now? I have
this event horizon somewhere between twenty thirty and twenty thirty-five
that I just can't see beyond. For my entire life,
I could always kind of see the future, and we
are clearly in the takeoff era now. I don't think
I'm saying anything that contrarian, at least in Silicon Valley,
but one hundred years from now is almost impossible for

(32:24):
me to imagine.

Speaker 3 (32:25):
Now.

Speaker 2 (32:25):
With that said, I don't think that healthy forty-year-olds are
going to be getting holes drilled in their skulls anytime
soon. My view is that it'll be a long time
before these things are really augmentative, much less elective procedures.
But everybody eventually becomes a patient at some point as
you get older. For example, the main indication for Prima
is age-related macular degeneration, which is very

(32:50):
common if someone lives into their late seventies or eighties,
and so for many of these things, eventually there will
come a time when it makes sense. I mean, we
consider retinal prostheses and cochlear prostheses BCIs too. When I
look at, say, twenty years from now, there are things
that are very much research.

(33:14):
This is not a thing that's happening in the next
five years. But consider the bandwidth with which the two
hemispheres are connected: about one hundred million fibers that project
across the midline to connect the two hemispheres of your
brain into a single thing. If you can get a
neural interface of that bandwidth, which is probably

(33:35):
only tens of megabits, then this takes you into really
interesting territory about being able to redraw the borders around
brains, and it gets at this thing called the binding problem.
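The arithmetic behind the "tens of megabits" figure can be sketched in a couple of lines; the fiber count is the one mentioned in the conversation, while the per-fiber information rate below is an illustrative assumption, not a measured value:

```python
# Rough estimate of interhemispheric bandwidth: about one hundred million
# callosal fibers cross the midline. The per-fiber information rate is an
# illustrative assumption chosen to land in the range quoted in the episode.
CALLOSAL_FIBERS = 100_000_000
BITS_PER_FIBER_PER_SEC = 0.25  # assumed effective rate per axon

total_bps = CALLOSAL_FIBERS * BITS_PER_FIBER_PER_SEC
print(f"estimated interhemispheric bandwidth: {total_bps / 1e6:.0f} Mbit/s")
```

Any plausible low per-fiber rate (a fraction of a bit per second per axon) puts the total in the tens of megabits, which is the point being made.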

Speaker 3 (33:46):
And that feels less than twenty years away for me.

Speaker 2 (33:48):
This feels not like the next five years, but also
not the distant future; it's within people's lifespans today.

Speaker 1 (33:55):
So let's double-click on that. Tell us about the
binding problem and how you think this addresses it.

Speaker 2 (34:00):
Well, I mean, I don't have a solution for the
binding problem. It's: if the brain is made up of
a lot of different neurons and a lot of different
areas kind of connected together, where does this unified perception
come from? You see the world, you can think about
it, you hear things. All of this is fit together
into a coherent whole for you.

Speaker 1 (34:17):
When the bluebird flies past you, the blue doesn't come
off of the bird, and the chirping doesn't seem like
it's coming from somewhere else. It seems like a unified object. Yeah, exactly.
Even though blue is processed in one part of your
brain, and the motion in another part, and the chirping
in a different part. Yeah, okay.

Speaker 2 (34:31):
And so there's some sense in which almost all communication
is about creating correlations between brains. We're having a conversation
right now. There are concept spaces in my brain that
are being activated that I developed from education, learning English,
learning math, learning science, doing these things, and I can
serialize these neural activations to vibrations over

(34:53):
the air, send them over to you, and they're received
through your ears and then activate these correlations in your
brain that allow us to share these concepts. But our
brains don't become one thing. And so there's some point
between the types of correlations that you get between the
hemispheres of a brain and the types of correlations that

(35:14):
we get between brains that are in dialogue. Where is
that crossing point? We don't know today, but I think
that biohybrid devices have the potential to get close to
there, and that takes us to really different regimes than
conventional BCI technology.

Speaker 1 (35:34):
Let me just make sure I understand what you said.
So the idea is, if you're reading and writing from
my brain and from your brain, we can get closer
to being a single brain.

Speaker 2 (35:45):
Well, yeah, the question is, where does that happen? I
mean, this is done less commonly now, and it was
never really done that commonly, but people used to cut
the connection between the two hemispheres of the brain to
treat epilepsy. You could prevent a seizure from spreading from
one hemisphere to the other. And those split-brain patients were
really interesting to study.

Speaker 3 (36:06):
Because you could.

Speaker 2 (36:09):
You could ask kind of the right hand a question
which would go to the left hemisphere, and then you
could ask the other hand, which was coming from the
other hemisphere, to kind of answer, and you get the
sense that there are two agents going on.

Speaker 3 (36:23):
In one head.

Speaker 1 (36:24):
Yeah, one in each hemisphere.

Speaker 2 (36:25):
And so if you take that in the opposite direction,
what do you get? I think that's really interesting.

Speaker 1 (36:32):
You're saying, put four hemispheres together, and what do you
get? Now, who would do this? Who would volunteer? Two
spouses, for example?

Speaker 3 (36:41):
Yeah, exactly.

Speaker 2 (36:41):
So I think in the beginning, this is going to
be something like: you've got a long-married couple, and one
has a terminal disease. Can you make the loss of
that brain like having a stroke you recover from, rather
than lights out?

Speaker 1 (37:00):
Oh wow. Let's double-click on that story. What
would the narrative be there?

Speaker 2 (37:05):
Well, I mean, if you can build these superorganisms and
get kind of an equilibration of representations over some extended
period of time. I mean, people already store memories in
their spouse's brains that they can access and recall later,
right? This is about creating correlations between brains, and I
suspect that there's some nonlinearity in there where you

(37:27):
get something really different, but of course we don't know
exactly where that is yet. I mean, this is a
tricky field, because there's a fine line. Right now we're
in the process of preparing twelve hundred pages of regulatory
documentation that is very nuanced in exactly how you do
the tests to verify things that have passed clinical trials in

(37:49):
almost fifty patients in six countries, and then you play
some of these technologies out not even that long, five
or ten years, and you sound like a lunatic.
But that's part of why this is such an exciting field.

Speaker 1 (38:01):
Right. So I know the event horizon for both of
us is, you know, not much more than a decade
out. But what would you see as the societal benefits
that could happen from this, at whatever time scale? For
example, connecting brains or something. Have you thought about what
that would turn into, not just for spouses,

(38:23):
but for society?

Speaker 2 (38:25):
I mean, at the end of that is this idea
of substrate independence. When I see a person, there's two
parts: there's the robot, and there's an agent. And I'm
going to be pretty disappointed if I get murdered by
my pancreas, which is basically a support structure for keeping
the agent going. And so

(38:47):
I think this takes us to: okay, if we're serious
about exploring the universe, we have to adapt ourselves to
the environment rather than bringing little pressurized bottles of Earth
with us everywhere we go, just because our great-grandparents grew
up on a planet that happened to have those things.
And so I think this is very profound technology.

Speaker 1 (39:02):
So substrate independence, just for the audience, means getting off
of this wet biological stuff and onto something more robust,
like a silicon chip or something. In other words, getting
your mind into something that can survive space travel.

Speaker 2 (39:16):
Which could be other biological brains, or it could be
an engineered system. Brains are composed of ordinary matter assembled
by the rules of chemistry.

Speaker 3 (39:26):
There's no magic in there.

Speaker 2 (39:28):
They're very complicated, and obviously we don't have complete explanations
for how they work. But they're ultimately physical systems, and
so there's something they're doing that's producing this experience that
ultimately must be explainable.

Speaker 1 (39:55):
And so with the biohybrid electrodes in the brain,
how does this lead to substrate independence?

Speaker 2 (40:02):
Well, the idea is that if you can really, in
some profound sense, lose track of where one brain ends
and another begins, then where does this take you? I
have no idea what that experience will feel like, but
I'm pretty confident that that device is going to get
made in the next decade.

Speaker 3 (40:16):
And this is research.

Speaker 2 (40:17):
There's nothing to sell here yet, but it's the type
of frontier that is enabled by the types of devices
that are getting made now, and there's, I think, enough
near-term commercial revenue from things like the visual prosthesis to
fund this.

Speaker 3 (40:33):
This stuff is happening.

Speaker 1 (40:34):
So if you're able to read from the brain, then
you can take that data and put it into a
different substrate.

Speaker 2 (40:40):
To do that requires new physics that we don't understand
today. We'd have to really understand what the brain is
doing that is producing this ordered experience that we have.
But I strongly suspect that intelligence and consciousness are separate,
or independent. It is possible to have pure experience in
the absence of adaptive behavior,

(41:00):
and it's possible to have very apparent adaptive behavior in
the absence of experience, so these things are separate. Now,
in order to have true substrate independence, where you could
build a silicon-based system that is as good as our
brains, this requires a physics and neuroscience breakthrough that we
don't have yet, one that will produce several Nobel Prizes.

Speaker 3 (41:20):
But I do think that that is not one hundred
years away.

Speaker 2 (41:24):
I think that there's really compelling threads of research that
are being pulled on that have the potential to produce
those equations. But even if we don't get those equations,
if you can build brain-to-brain connections, then you don't need
them, because you know that brains are good enough, and
if you can connect them together, then that is another
approach with some drawbacks and

Speaker 3 (41:45):
some big head starts.

Speaker 1 (41:48):
Do you think people would volunteer to connect their brain
to someone else's? I'm not sure. I'm not sure I
would enjoy connecting with everything.

Speaker 3 (41:54):
I don't know.

Speaker 2 (41:55):
I mean, I don't think that this is for everybody.
Also, this is not a thing that exists today. I
think that this is a really interesting thing on the
horizon that is close enough to notice. Like, oh, if
that's possible, what does that mean? But I think it's
tough to really anticipate it too much.

Speaker 3 (42:15):
Right now.

Speaker 1 (42:15):
You once wrote that one of the main goals in neuroscience
is to understand the physics of consciousness so that we
can engineer experience. So tell us what you mean by that.

Speaker 2 (42:23):
Yeah. So to be clear, I don't think that's the
only goal of neuroscience. There are lots of people working
in neuroscience who are thinking about other stuff and have
never asked themselves those questions. But arguably one of the
end goals of technology is recursion, in the sense that
we gain the ability to observe and manipulate kind

(42:47):
of our own existence. And, I think, Earth is small
and intensely contested, and space is large, and the speed
of light is low, and, like in the Matrix, you
never run out of real estate. So getting to a
point where we really have control over the nature

(43:10):
of our experience feels like a logical endpoint of a
lot of what we've seen since the beginning of the
technological revolution.

Speaker 1 (43:19):
So, how will what you're doing with the biohybrid electrodes
get us closer to understanding something about the physics of
consciousness?

Speaker 2 (43:27):
Well, one thing that I think is true about consciousness
is that there's a good chance that to really know,
you will have to see it for yourself. One of
the problems that has made it so hard to study
is not that it's magic, or that there's some metaphysical
thing that makes it inherently impossible, but that there are
no measurements we can take that will

(43:48):
tell us things. If you believe that intelligence and adaptive
behavior are separate from phenomenal experience, then when you run
a behavioral experiment in an animal, you can always find
some explanation for what's happening without resorting to saying anything
about consciousness. When we do experiments in animals, we don't
talk about what they see or perceive. We say they
can use the information, or they can learn the information.
And so when you think

(44:11):
about what experiments you could really run that would allow
you to know if you've learned something, this often looks
like: can we add a new sensory mode? It's pretty
tough to imagine a sense that you don't have, because
evolution is very good at its job and has really
filled the available time and space. But here's an example
of a sense that you don't have.

(44:32):
It's a true vector sense: the ability to see a
field, like a three-D field out in the environment. We
don't have this because we don't have the sense organs
for it.

Speaker 3 (44:40):
We don't make measurements out at a distance. We only get
measurements that arrive at us.

Speaker 2 (44:44):
If we had some way to get this signal, say
from remote sensors or other things, then you could get
the information. So what would a true vector sense feel
like to experience? At the point where we can implement
that and make it available to you, and you say,
I guess this is new information, and I'm experiencing it
directly and I can use it intuitively,

(45:05):
and there's no other way

Speaker 3 (45:06):
I could have experienced this.

Speaker 2 (45:08):
I think that is the type of proof of concept
for knowing that you've gotten some of that model. And
I don't think that you can do this with conventional
electrodes. I think that you need something like a biohybrid
neural interface to get to that level. Why? When you
electrically stimulate vision into the brain, let's say you put
an electrode in primary visual cortex and inject charge through it,

(45:28):
you can absolutely get a flash of light somewhere in
the visual field. If you do this in an animal,
you can get them to look to where you put
the flash of light, and so you can say, okay,
I got some visual signal into the brain. The problem
is with these flashes of light, which are known as
phosphenes. What a phosphene really is: when you stimulate lots
of neurons simultaneously, you average them together. So if you
have a neuron that represents, like, red in

(45:50):
some part of the visual field, next to something that
represents a spatial frequency, next to something that represents a
motion or an orientation, and you drive all of these
simultaneously, you kind of average them. Basically, the only information
that's remaining is something called retinotopy, which is: where in
the visual field was it? And if you do that,
then you're limited.

(46:12):
You throw away almost all of the information that you
could have conveyed. Also, this very continuous stimulation tends to
produce the most intense immune responses to electrodes that you
get, and so these writing electrodes tend to be very
encapsulated. So you want something that gives you access to
hundreds of thousands or millions of neurons at single-cell

(46:37):
informational resolution, in ways that the brain will really adapt
to informationally. Electrodes don't give you that type of specific
stimulation, certainly not at those counts; nobody's ever done something
like one hundred thousand electrodes for stimulation. And there's the
other technique, optogenetics, where you do this with

(46:59):
an optical stimulator.

Speaker 3 (47:00):
This requires genetically modifying the host brain.

Speaker 2 (47:02):
You have to use a gene therapy to deliver this
new protein to the cells of the brain. This is
not a thing that is really done in humans in
cortex, and there are reasons it's going to be really
difficult. So from where I sit, I don't see another
technology that is really capable of getting hundreds of thousands
or millions of neurons at single-cell resolution in a way that

(47:24):
is long-term stable, in a way that allows those neurons
to learn the signal that you're trying to give them.
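The averaging argument behind phosphenes can be made concrete with a tiny simulation; the neuron count, the shared position, and the random-orientation tuning model below are all illustrative assumptions, not data from any experiment:

```python
import math
import random

# Toy model of why phosphenes carry so little information: bulk stimulation
# drives a whole patch of neurons at once, and their diverse feature tunings
# (modeled here as random preferred orientations on a circle) average out,
# leaving mostly the shared retinotopic position. All numbers are illustrative.
random.seed(0)

position_deg = (3.0, -1.0)  # shared location of the patch in the visual field
tunings = [random.uniform(0.0, 2.0 * math.pi) for _ in range(10_000)]

# Bulk stimulation ~ vector average of the patch's preferred features.
avg_x = sum(math.cos(t) for t in tunings) / len(tunings)
avg_y = sum(math.sin(t) for t in tunings) / len(tunings)
feature_strength = math.hypot(avg_x, avg_y)

print(f"surviving feature signal: {feature_strength:.3f} (near 0 = averaged away)")
print(f"surviving position signal: {position_deg}")
```

With ten thousand random tunings the averaged feature vector is close to zero, while the position survives untouched, which is the sense in which a phosphene is mostly just "where," not "what."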

Speaker 1 (47:30):
What philosophical questions keep you up at night?

Speaker 2 (47:34):
So whenever I go to things where I see my
friends, there's a question that splits the table evenly every
time, which is: is a destructively scanned upload you? This
is part of a family of things that my friends
and I call the transporter problems. And in some sense
they're very simple, which

(47:54):
is: if you take a scan of a brain, but
at the end the brain is no more, and then
you can use this to build a perfectly biophysically accurate
atomic simulation of that person, does this make you feel
better about dying of cancer? For me, the answer to
that is no. And I think many people, actually faced
with that situation,

(48:15):
would conclude no.

Speaker 1 (48:17):
This is no, as in you feel you will have
died if you got destroyed, even though a replica of
you got booted up a second later.

Speaker 3 (48:25):
Yeah, exactly.

Speaker 2 (48:26):
This is like I'll be survived by my friends, which
is great, but doesn't necessarily make me feel a lot
better about my specific situation.

Speaker 1 (48:32):
Right. In other words, the replica that gets booted up
a second later thinks, wow, I'm Max. I was just
over there and now I'm over here. But the question
is, do you get any benefit from that?

Speaker 3 (48:41):
Exactly?

Speaker 2 (48:42):
And so from its perspective, it's probably right. People respond
to this by saying, well, every night you lose consciousness
and you wake up the next morning; you've broken some
continuity there. Which I think is totally fair; that's true,
but it still doesn't really make me feel better. And
so the two camps here are: my agency living on in

(49:03):
the world, which can be done through some model, some
replication of me, that makes me feel like my influence
will persist; versus, I will accept drift in the personality
and the agency as long as I get continuity. With
the brain-to-brain connection, you'll get significant personality drift, because you're
averaging two people together to some degree, but you

(49:26):
get continuity. Or is it living on in agency without continuity?

Speaker 3 (49:34):
Is that good?

Speaker 2 (49:35):
And what's interesting is people's brains seem to make a
choice on this early in their life, and they are
unable to see the other one. They're very convinced that
one of these two things is nonsensical. My read on
this is that there's a choice of metaphysics being made
here, from which you reason. It's a choice that your
brain has made that allows

(49:57):
you to see something, and from there you start reasoning.
You can't really talk your way through this. But I
think these are the two metaphysical tribes here, and my
guess is that people get converted to continuity when it
becomes a real thing. But that's a philosophical question for
which I don't know that there's a right answer, and
that keeps the debate going.

Speaker 1 (50:18):
And do you feel any differently about the problem if
you were disassembled into your atoms, and then those atoms
were beamed over somewhere and then reconstructed? It's still you.
You're disassembled and you're rebuilt. Does that make a
difference for you?

Speaker 3 (50:32):
Yeah?

Speaker 2 (50:32):
I mean, this is the second transporter problem: if you
send the atoms, does this make it better? And I
really don't know. There's a show that I love that
recently came to Netflix, which was really hard to watch
for a while, called Pantheon.

Speaker 3 (50:46):
Highly highly recommended.

Speaker 2 (50:47):
I think Pantheon is probably the best depiction I've ever
seen in fiction of how the next fifteen years
might go.

Speaker 3 (50:54):
It's adult animation.

Speaker 2 (50:55):
It's based on a series of short stories by Ken
Liu, who is probably best known as the English-language translator
for the Three-Body Problem series. That show is amazing, but
it's also terrible metaphysics. It's a destructive upload, but the
characters also realize this. There's graffiti on a building at
one point that says something like Dina live forever, which
I don't find that compelling

(51:16):
of a value proposition. But it's an interesting depiction of
a world where you get to the other side of
that choice of metaphysics, to the degree that people aren't
worrying about it anymore, and from the backwards-looking perspective it
works out fine. So that's certainly one potential view there.
The other is, if you really believe what matters is
continuity, then what

(51:36):
you have to do is get a seed brain on
both sides of the transporter, briefly establish a brain-to-brain link
to get the continuity through it, and then that's enough.
As long as there's a brief moment of continuity, that
kind of gets you through it philosophically.

Speaker 1 (51:54):
Oh, interesting. So this is where you might do your
four-hemisphere trick.

Speaker 2 (51:58):
Exactly. Well, in the case where it's really an atom-for-atom
reconstruction and the representations are already shared, then you wouldn't
need any time. If you did this with two people,
for that to really make sense, there'd need to be
some time to handle the representational drift between them. It's
funny, because

(52:19):
these things we talk about are interesting and are genuinely
moving from the realm of science fiction, where some of
them still are today, to the realm of engineering, which
not all of this is today. But at work, we
don't really spend a lot of time thinking about the
future of humanity. It is mostly, as I often say,
debugging Linux drivers and writing regulatory documentation.

Speaker 1 (52:42):
So what drives you in your work?

Speaker 2 (52:46):
I mean, look, if you really believe that these things
are possible within our lifetimes... I mean, AI is also
very exciting. There are other exciting things happening

Speaker 3 (52:54):
In the world.

Speaker 2 (52:55):
But when you really believe that these things could actually
be possible, I think it is tough to think about
a lot else.

Speaker 1 (53:05):
That was Max Hodak, founder and CEO of Science Corporation.
He's working on the challenge of how to read and
write from the brain, and really there are only a
handful of people who are doing that. With the smarts
and entrepreneurial bravery of Max, he and his team are
at the cutting edge of integrating with the brain, whether

(53:25):
that's by turning pixels into lasers and stimulating a tiny
implant in the back of the eye, or growing neurons
into the brain that ingratiate themselves into the network in
a way that you can spy on the activity there.
You can check out more about his company in the
show notes at Eagleman dot com, slash podcast, and Max's

(53:46):
website is science dot xyz. So let's wrap up. At
its core, the idea of growing cells into the brain
as a brain computer interface challenges the common intuition of
a division between biology and machinery. And more generally, however

(54:06):
we make interfaces to the brain, these open the possibility
that we'll someday be able to not only interpret what
it is to be a human, but also enhance it.
And in the future, even things like our thoughts, which
seem unassailably private and ineffable, might

(54:26):
soon traverse digital pathways the way any data flows through
a network. What does it mean when a thought leaves
the confines of the skull? The story of BCIs is
just beginning, and it's not just a story about the technology.
It's the story of a whole new channel of communication.
It's about translating the language of neurons into the language

(54:49):
of computers, or perhaps eventually into the brains of other people.
It's about giving voice to the mute, it's about giving
movement to the paralyzed, and it's about giving wings to
our imagination. The work by Max and others in
the BCI space invites us to consider whether our brains

(55:10):
have to always remain isolated entities, or whether they can
interface with a broader universe. This work reminds us that
the brain doesn't have to be merely an imprisoned container
for thought, but can instead be a living, dynamic interface
with the world, one that's going to, soon enough, maybe in

(55:32):
our lifetimes, reach far beyond the biological limits to which
we have become accustomed. Go to Eagleman dot com slash
podcast for more information and to find further reading. Send
me an email at podcasts at eagleman dot com with

(55:52):
questions or discussion, and check out and subscribe to Inner
Cosmos on YouTube for videos of each episode and to
leave comments. Until next time, I'm David Eagleman, and this
is Inner Cosmos.
Host

David Eagleman
