
March 19, 2025 · 58 mins

Morgan Cheatham joins hosts Raj Manrai and Andy Beam on NEJM AI Grand Rounds to discuss the evolving landscape of artificial intelligence in health care, from its role in automating clinical documentation to its transformative potential in genomic medicine. A venture capitalist and future physician, Morgan shares how his background in computational decision sciences led him to medical school and investing, offering insights into how AI is reshaping everything from disease phenotyping and clinical decision-making to scaling precision medicine. He reflects on his work evaluating ChatGPT’s performance on the USMLE, the growing importance of genomic learning health systems, and why the biggest challenge isn’t technological innovation—but aligning payment models to support AI-driven advancements in medicine.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(00:02):
If you're a commercial insurer today, your average member is
going to be with you for two years.
Why do you care about paying for their molecular testing to prevent some
disease that might manifest for that person in a 40-year time horizon?
Why do you wanna be on the hook for the gene therapy or the siRNA therapy
that would, in theory, cure that patient if you're not going to be

(00:22):
responsible for the cost down the line?
These are the real questions we have to grapple with.
Again, if people take away anything from this conversation that we have,
I hope they hear me when I say:
the science, the computation, the technological innovation
is no longer the barrier.
The tech is not the problem in health care and life sciences.
The problem is the business model, the economic model, the way care

(00:44):
is paid for, and the incentive structure underlying that.
Welcome to another episode of NEJM AI Grand Rounds.
Today we are delighted to bring you our conversation with Morgan Cheatham.
Andy, I find it really hard to know where to begin to describe Morgan

(01:06):
because he's doing so many things.
I have no idea.
I think we did ask him at one point.
Are there multiple Morgans?
He is a vice president at Bessemer Venture Partners, while also a
medical student at Brown University.
He's doing so many things. He's doing them so well. And it was just a
really wonderful chance to just pick his brain about how he approaches

(01:27):
everything and how he's thinking about
both investment strategies while also navigating medical school
and residency and all that.
Yeah, in addition to like doing stuff with us at NEJM AI and this podcast, the
man has more than 24 hours in his day.
And again, breaks my mental model.
I think I, I think we discussed this in the episode for like what a med

(01:48):
student can do and is capable of. By day, he's rounding on patients,
doing research at the NIH, and by night is cutting a hundred million
dollar deals in his role as a vice president at a venture capital firm.
So truly just an extraordinarily talented young doctor, future
leader, all of the above.
I think also, you know, you and I have gotten to know him quite well.

(02:08):
Just a great guy, like super nice.
Yeah.
Very easy going and, you know, it, it was great to have him on the podcast
and give him a chance to tell his story because it's such a remarkable one.
And I should say it's, it's a little bit funny to me.
He also helps us edit these podcasts.
I was gonna— He had, he had the opportunity— So, so Morgan—
To also work on his own podcast.
Thank you, Morgan.
So, Morgan.
Morgan, if you're listening, uh, thanks.

(02:29):
Thanks, Morgan.
The NEJM AI Grand Rounds podcast is brought to you by Microsoft,
Viz.ai, Lyric, and Elevance Health.
We thank them for their support.
And with that, we bring you our conversation with Morgan Cheatham.

(02:51):
Morgan Cheatham, welcome to AI Grand Rounds.
We're excited to have you here.
Thank you guys so much for having me. Morgan,
great to have you here.
So, this is a question that we always get started with.
Could you please tell us about the training procedure
for your own neural network?
How did you get interested in AI and what data and experiences
led you to where you are today?
I love this question.

(03:12):
So, to give you guys a framework, I think today my
training procedure looks a lot more like a Bayesian process than it
does stochastic gradient descent.
And I figured our listeners would appreciate that.
I think as humans we have this luxury where, unlike most neural networks, which
kinda discard past iterations in favor of converging on a single solution, we
as humans can maintain these large prior distributions over complex environments.

(03:35):
And we can kind of constantly refine them over time.
And so what this looks like for me is a huge focus on information
diet and curating that diet in a way that allows me to refine these
priors in a way that I feel like I'm, I'm converging on something novel.
And so oftentimes that means spending time in subfields or in the
interstitial spaces between fields.
I realize it's a very amorphous answer.

(03:56):
So maybe brass tacks, my story.
Growing up right outside of Washington, DC, I actually always wanted to be a
physician, from as young as I can remember.
And everything I did wasin service of that goal.
I didn't falter.
It's what I wanted to do.
I had this moment.
Did you have, sorry tointerrupt, but did you have any
physician role models early on?
Not until high school.
So, there are no physicians in my family.

(04:18):
In the Washington, DC area, it's actually, I think, a profession
that's quite revered still.
It was something that I'd always looked up to.
And being near NIH and all these amazing institutions certainly keeps you inspired.
But so, I actually did have this experience with a mentor in
high school, which was kinda the first time I saw the collision
between computation and medicine.
And that was shadowing at a hospital called Inova.
And at the time, interestingly, they were implementing Epic.

(04:41):
So, they were going from paper to Epic.
They weren't switching from some other system.
And I think the chaos of that entire experience completely blew my mind.
I mean, from the disgruntled comments about what the system was gonna do for
medicine and papers flying and the chaos, I just, I knew this was a big moment
and I knew I had to attach myself to it in some way and study it and pursue it.
And so when I went to Brown, ultimately, I created my own major, as one does at Brown.

(05:06):
Not in basket weaving, but in what we called computational decision
sciences, which was an interdisciplinary concentration, frankly way too much
for undergrad alone, where I wanted to interrogate how humans made decisions.
And I wanted to do that through the lens of kind of classical mathematical
microeconomics, but update that with what we now understood in computer science and

(05:27):
particularly computational neuroscience.
That experience is kind of where I started to go more deeply into AI
and really understanding not only how these systems interact in a
theoretic context, but ultimately how they would apply to medicine.
Okay, so let's keep rolling that forward though, because I think, let's talk
about what you did in medical school.
You did this interesting thing in between undergrad and medical school, too.

(05:50):
So, let's hear a little bit about that, too.
I, I will just maybe also flag that you're now the third person on
the podcast who has had a meteoric start to their career and who has gone
through this program at Brown.
So, maybe there's something in the water there, but our own
Zak Kohane has gone there.
I believe Atul Butte also went through the, yeah,
and, and so now Morgan. So there, there's clearly something iconoclastic in the

(06:12):
water at Brown that makes these physician scientists who are also like deeply
computational and deeply disruptive.
It is an interesting thread, and those are obviously two of my heroes who I've been
fortunate to, to meet and hang out with.
I think there's something about the freedom at the institution, frankly, where there
are no institutional silos between departments, that really allows people to traverse,
whether it's medicine and computation or humanities in medicine, et cetera.

(06:33):
But yeah, so I guess, I guess just to answer your question succinctly,
when I was at Brown, I did have a physician mentor by the name of
Graham Gardner, who was a physician CEO of a company called Kyruus.
And what he was doing at the time was he was leveraging large data sets to
try and match the right physician to the right patient at the right time.
And of course, we can appreciate on this podcast,
it's a problem we're still trying to solve today. But it was really working

(06:56):
with Graham and under his team that I learned how to program and I actually
began to gain real-world experience with these systems in a production environment.
I will say that the other interesting formative experience I had after that was
working at Goldman Sachs for a summer.
I was a very crappy banker, but I learned a lot about the ways that
capital could scale technology and business in health care and life sciences.

(07:18):
And I think that was a pretty eye-opening experience from the jump.
And so I guess the way I described the path was I'd almost seen too
much. Like, I'd almost refined my priors a little too much,
and I knew there was this expansive world beyond just practicing
medicine that I wanted to explore.
And so, in a very serendipitous fashion, I bumped into a partner at Bessemer.
I know folks sometimes like to hear these origin stories because it is very random

(07:38):
to break into VC, and I ultimately joined them as an analyst right after undergrad.
And I think it was, you know, supposed to be this two-year deferral
from medical school that soon became four. The pandemic happened.
There was so much happening in technology and medicine and health care.
And then we had our ChatGPT moment and the rest is history.
Now, in addition to AI, clinically, I've found a home in clinical genomics, which

(08:02):
is a field that is changing rapidly.
And, like AI, seems to be benefiting a bit from a Jevons paradox in that as genomic
technologies become more efficient, and accessible, and cheaper, their overall
usage is increasing. And the field is also undergoing this interesting
transition from an inherently diagnostic specialty to increasingly interventional

(08:22):
by way of genetic medicines.
And so, this is where I've spent the last 18 months, clinically
and from a research perspective.
Yeah, when I first met you, I had a mental model for what medical students
were, and they were these like very, I think, cookie-cutter biochemistry,
you know, or biology undergrad, taking the straight-and-narrow path to medical school.

(08:43):
And I think I met you and you were a first-year medical student and
a vice president at Bessemer, and I was just like, who is this guy?
I would love to learn more about Morgan and how he got to where he was.
You know what I love about your story so much is that you're like emblematic
of this like new kind of medical student that like breaks the sort of very
cookie-cutter mold that medical students have been subjected to for a long time.

(09:03):
You come from investing and capital formation.
Other folks are coming from engineering and mathematics.
I don't know exactly what explains it, because arguably the job
at the end of this training procedure hasn't gotten better.
You know, in many cases it has gotten worse, but it's been exciting to
see the intellectual diversity of who's going into medical school now
relative to like 10 or 15 years ago.

(09:23):
I truly don't think I'm unique.
There are lots of folks out here and I hope, I hope you have more in
the pod, but I think there's this genuine desire from a lot of people in
medicine, to your point, recognizing the current constraints of the system,
to have an impact at multiple levels.
And so I think when I look back on the last 10 years of my career, I think
there's this very sacred one-to-one physician relationship that the canonical
medical student or trainee pursues and studies and, and I think there's

(09:46):
knowledge in that experience that cannot be replicated anywhere.
There's certainly privileges and opportunities that can't be replicated
anywhere, but there's specific knowledge that you gain
being responsible for someone's care.
And then when you think about what's happening in technology
and in the capital markets,
those are things that can take that kind of sacred one-to-one relationship
and scale them in a nonlinear fashion, whether it's thinking about AI

(10:07):
doctor constructs, or thinking about investing capital into a therapeutic
that's going to cure a disease.
And so, I think people genuinely enjoy, when you have this experience of caring
for patients, being able to play at these multiple levels of the stack, if you will.
Yeah, totally.
So, I think that's a great opportunity to transition to some of
the papers that you've worked on and the research that you've published.
So, you, you spoke of the ChatGPT moment, and I think that you and some of your

(10:30):
collaborators like really saw that this was a moment not only for AI but
also how it was gonna impact medicine.
And you wrote what I think has become like a pretty seminal paper in the area,
which was essentially: can
ChatGPT pass the USMLE?
And so there's a paper called "Performance of ChatGPT on USMLE:
Potential for AI-assisted medical education using large language models."

(10:50):
So, I'd love to hear a little bit about the context for that paper and
sort of like how you put it together.
If I understand correctly, it was like a sprint to put it together.
It was like nights and weekends kinds of things.
So, we, we'd love to hear some of those, like, in-the-trenches stories, too.
Sure.
And I'll caveat this with saying my academic career, as you
mentioned, has been unconventional, but it's also been quite random.

(11:11):
And I think this paper embodies that.
So, I actually worked on this paper with one of our portfolio company CEOs
at Bessemer, a physician-scientist by the name of Jack Po, who also was the
person who introduced me to, to Zak.
So, I'm ever grateful for that.
And there was this moment after ChatGPT came out.
We were all scrambling to figure out what this technology would be
capable of in a medical context.

(11:31):
And, you know, I think I ground that in the reality that there were
many, many teams working with GPT-2 and earlier versions of this.
But the performance and the interface, I think, invited a larger audience of
folks to, to explore the capabilities.
And so Jack called me up and said, hey,
you know, our team here at Ansible, we're, we're working on this project.
It's super time sensitive.
Do you wanna be a part of it?
And, when someone gives you an opportunity like that, I think you, you

(11:55):
just ask them like, what's the next step?
You don't even pause.
Uh, the reality of my life at that moment though, is I actually had a neuro
shelf exam that Friday, and I think we were, we were meeting at the beginning
of the week. So I was, I was supposed to be studying, but I was humbled
to learn that the system that we were working with would've probably far
eclipsed my performance on that exam.
Whether I studied more or not.
In many ways when I reflect on this paper, I say it was both the best

(12:17):
paper, best paper experience I had,
just in terms of, I think, the speed and the kind of like
radical thinking that we proposed.
I mean, submitting it to some journals, we actually had ChatGPT listed as an
author and were scolded by some of the larger, kind of more well-known journals.
Ended up publishing in PLOS.
I also say it was one of the worst, and the reason why is because the paper at

(12:39):
the time positioned GPT as a potential learning tool, even though its performance
kind of suggested there was more there.
What we didn't grapple with is really the imperfection of the benchmark
of the U.S. medical licensing exam.
And in many ways my, my worst fear has materialized, which is, as you both
appreciate, the field kind of ran away with this benchmark in a very

(13:01):
obsessive, uh, obsessive approach.
And so, I wish we had grappled more with the kind of shortcomings
of the USMLE, both for human clinicians as well as for AI.
And I think now in the last couple of years, more folks have started to grapple
with this in a real way, as we saw most recently with the CRAFT-MD paper.
But just to calibrate, right, I believe Raj has some comments on this.

(13:23):
Yeah.
Yeah.
So just to calibrate, just to give your paper
more credit, I think, than, than even you're giving it now, Morgan. This is
back in the beginning of 2023, right?
So now I think we all take for granted that these models are very good at
standardized tests. But it's very easy to say that now after we've

(13:44):
had a flurry of two years, or three years, or four years of results
on NLP models, AI models doing so well on standardized tests.
I also think at the same time, you know, this is actually even before GPT-4, right?
This is the original ChatGPT that was released, if I recall,
now I'm gonna get the years wrong,

(14:05):
this is November, 2022, right?
That's right, yeah.
This is,
yeah, so this is like, you know, right at NeurIPS 2022. I think Andy and I
were there at that one together and it just took over the whole conference.
That's the model that you're using.
This is even before they released GPT-4, you know, I think a month or two
after you guys published this paper.
And so, I do think it's worth just remembering where we were at that moment,

(14:29):
and I'll go back even a few years earlier than that. I was spending a lot
of time, uh, with this guy, with Andy
Beam here, my cohost, just talking about this idea. And at the time people
would pillory it the other way, which is that this is an impossibly hard
task to solve, which is to have an AI model take a U.S. medical licensing

(14:52):
exam. And it's very far away.
And I remember Andy getting comments like that all the time, and okay,
wow, you guys are really
aspirational, or really visionary, ambitious.
Right.
But I think we should contextualize where we were and what these sorts
of milestones led us to now believe.
And then I think, as you're saying, I do think

(15:15):
lots of folks are pointing out.
I mean, I think there's another great example of this, the
AMIE paper from Google, right?
So, this is now almost a year ago, I think, that the preprint came out, and
they really very carefully studied conversational interaction with AI models.
And, and there was CRAFT-MD from a couple weeks ago.
I, I think a lot of folks are now correctly pointing out that there

(15:38):
are limitations for these benchmarks. But at the same time, I don't want to
rewrite the past and say that it was always obvious that we could do this.
This is an easy task, and I think it's, we're also in danger of saying
that these tasks mean absolutely nothing. So, they don't mean that,
I don't think they mean, I think we're having a good conversation

(15:59):
right now as a society around this.
They don't mean that they're sufficient, that someone could practice medicine,
or an AI can practice medicine.
I don't think most people are saying that, but I think we're getting to the
pendulum swinging the other way, where we're saying they mean absolutely
nothing about what these models can do.
And they are completely meaningless.
And I think that might suffer from
the same sorts of, sort of intellectual impulses that would

(16:20):
conclude in the opposite direction
for someone to say that the benchmark means everything.
So, it's just not, we don't need to be so black and white about what it means,
what it doesn't mean.
Maybe I'll hop in there.
I can see, yeah.
Yeah,
and just add like a little color.
So,
like in 2016 or 2017, I had been going around giving this little
talk saying the USMLE should be a benchmark for medical AI.
I gave a talk in 2017 at GTC, NVIDIA GTC, like literally saying we should

(16:44):
have this be a benchmark for medical AI.
I was obviously working on it.
That was a hot take.
Yeah, I was obviously working on it myself, but I was like a couple hundred
billion parameters short of the scale of model that you need to actually pass it.
But I got those.
But they're probably in your brain.
They're probably in your brain.
Yeah.
Yeah, yeah.
Um, but like the criticisms were then as they are now, and that, like Raj said,

(17:05):
some people were like, this is insane.
Computer could never do it.
So it's, it's also like I have the same sensation that Raj has, where
it's what was once impossible is now trivial. And like I have now, like,
lived that experience many times in AI over the past like seven years.
But that was, I think, one of the first times I had that experience.
But two, like people would say, well, who cares if they can pass the USMLE?

(17:26):
There's a whole side story.
Actually, I got in trouble with the NBME for saying that AI
was gonna pass the USMLE.
They don't seem to care about that anymore, but.
We will leave that for another day.
I'd love to see those emails at some point.
Yeah.
Uh, but they would say like, if, even if a computer can do this, like who cares?
This isn't gonna help them be a good doctor.
And I would just say, like what Raj said, that this test is a necessary

(17:48):
but not sufficient condition for humans to be a doctor.
And we should view it the same way for AI.
Like if it can't do this, then we wouldn't trust it to practice medicine.
But it's, again, it's a necessary condition, but not a sufficient
condition, in that there's all these other things that we need AI to be
able to do well. And I, and I still think that there's signal in how much
better it does relative to a person.
It's clearly testing some type of medical IQ or test-taking ability.

(18:11):
So, I'm also, likewise, similar to Raj, like I wouldn't want
to dismiss these out of hand.
I think there's also signal in the relative ranking of the models on
this benchmark, even if we don't understand what the absolute calibration
means for clinical practice, or for being helpful in clinical practice.
I also think, yeah, I think I, I'm just hesitant to let the discourse go
extreme one way or the other.

(18:32):
Right?
It's meaningless or it's super meaningful.
It's just one piece of information, and I do think the general, I
think, move recently towards more realistic clinical benchmarks.
I do think that has been in the water for some time.
I also think that's not something that we just realized two weeks ago.
I think years ago, I think the folks who were also putting forth the USMLE

(18:54):
as a benchmark were saying the same thing. And I think, just out of
respect for intellectual forebears,
I think if you look decades earlier, and you look at the way people were
saying things about medical education and training and teaching doctors
and residents, they've been saying the same thing, too, which is that
this is a piece of information.
But we need realistic clinical appraisals, realistic clinical evaluations, and we

(19:14):
need different types of evaluations.
And so, I think this actually goes back much further than even the last year.
This is a decades-old debate in psychometrics and medical education,
and it really is worth kind of bringing that history in because there are
also lots of limitations for the non-standardized tests as well.
And I think we should, we should keep those in mind while we're integrating

(19:35):
them into this conversation.
One thing I love to talk about, in truth, and I, I agree with a lot
of what you all have said, is just the experience of going through medical
school during this period, right?
In my second year, seeing this technology released into the wild
and frankly grappling with the reality that the benchmarks I was
being assessed on, my shelf exams,
the USMLE, did not

(19:57):
explore what it would be like to be a physician in that era.
And so, I think there's kind of two separate conversations.
One is knowing that once you're a board-certified physician,
most of your recertification exams are actually open book.
And many of my colleagues are using OpenEvidence, ChatGPT,
Claude, to pass those exams now.
How do we refactor the human-centered evaluations to consider AI?

(20:20):
And then there's the separate question, which is an important one,
which is how do we think about AI benchmarks that grapple with the
context of a clinical environment?
And I think the recent CRAFT-MD paper is interesting in that it simulates
this, you know, interaction between the patient and the physician.
And I'm a big information theory fan, and I think one thing that is lost
in that framework is the reality of the role of the physician as this

(20:42):
information gatherer, as this person who, in seven minutes in a primary care
setting, is forced to build a trusted relationship with someone and to
ask questions in an efficient,
empathetic, and approachable way such that they're going to maximize the
channel capacity of that interaction.
And the study I'm hoping someone will run,
so this is a request for a study for our listeners, is to actually run

(21:03):
the experiment between a voice AI agent, which, as we appreciate, is
increasingly kind of sympathetic and, and human-like, performing that medical
interview with a patient autonomously,
comparing the diagnostic performance of that AI with a
human physician on the information that they respectively gathered.
So the human physician conducting their interview as they typically would,
and then assessing the downstream diagnostic performance based

(21:25):
on the information gathered.
And also asking, did they gather the same information in the same way?
And what were the key differences in how that information was gathered,
how the patient responded, right?
So, I think there are certainly limitations to simulating the patient response
through AI, because when you have a living, breathing human being in front of you,
sometimes you're more willing to share.
I will say there is some evidence on that from this AMIE paper from Google,

(21:48):
where they do this like turn-by-turn AI, there's a human acting, a human actor
on the other end who's like role-playing as a patient who has a disease.
And I only know this because my grad student was on this and he just defended,
so I had a good jog of my memory on this. They look at, if you give
the information collected by the human physician to the AI system,
it essentially doesn't change the diagnostic accuracy.

(22:08):
So at least in this study, the AI and the human were eliciting
equivalent amounts of information.
The AI was able to get to the correct diagnosis more reliably with
that same amount of information.
These are simple case presentations though, so like the information
elicitation is relatively simple and straightforward.
So, it probably would be more interesting to look at more complex cases.

(22:29):
Absolutely.
I totally agree, Andy, and I think something that you both just said,
which is critical but can be lost in a lot of these types of studies, is
the importance of a human baseline and an evaluation of how humans work
with AI, and how AI compares to humans.
Completely agree that the human element here is key.
The chatbot construct with simulated patients, I think, is directionally

(22:51):
interesting, but of course limited.
And if we wanna understand how well AI can interact with human patients, it
would be interesting to see the AMIE study rerun with a purely voice AI construct.
For some patients, I would imagine voice being a more intimate
medium for sharing special information or intimate information.

(23:12):
Others, I think, would prefer text, and I would suspect this will stratify along
the lines of age, but the ability to personalize the paradigm of interaction,
I think, is the beauty of our new medium, and we have to build with that in mind.
And importantly, I think what this research also shows is that we need
to revisit the fundamental clinician-patient interaction and begin to
unbundle the components of the medical appointment that we've classically

(23:36):
understood, now with an AI-first mindset.
I think we are learning a lot, and some of it is surprising us, right?
There's been a lot of good soundbites about how humans and AI will
just do better together, right?
If we insert the two together, you combine them, they'll be better than either alone.
I think the picture's becoming a little bit more nuanced right now around where

(23:56):
there are areas where humans and AI can collaborate effectively, where humans
do certain things better, where AI does certain things better. There's a good
essay that one of my colleagues, Pranav Rajpurkar, just wrote in the New York Times.
I think it came out literally a day ago, and it goes into this, where maybe
it is the case that there are parts of the workflow that are autonomously
conducted by AI and by humans separately.

(24:17):
But I think it's very, very important to not look at one of
these systems in isolation, right?
Like the LLMs alone, or, as you're saying, Morgan, this sort of simulation
of a patient or of a doctor by an LLM is something that I think needs
a lot of validation. And I think it's very important to integrate
actual studies of humans, both in the use of the technology,

(24:39):
but also in the appraisal of the outputs of the models themselves.
Maybe actually that's a good transition point, Andy, if
you want to jump to that next segment.
Let me ask the, the one, one last question before, before we transition.
So, I mean, you kind of hinted about this, but like step one and step two are
these watershed moments for med students.
It dictates a lot of what the rest of your career is gonna look like.

(24:59):
Was there any sense
of remorse, mixed feelings, hesitation that ChatGPT passed step one, and later
with flying colors, before you, yourself, the future doctor-to-be, passed it?
Did you feel like John Henry?
Honestly, I was happy to see it.
My zone of genius is not testing.
I've always been like a learner who learns through experiential dynamics,

(25:22):
and so I was happy for ChatGPT.
I was happy that I got it done and kind of moved on with my life.
I think that's a very centered way to look at things.
Yeah, I think that's, that is, yeah, I totally agree.
I think a lot of folks did not look at these results that same way.
Right?
And it was a little bit of, I mean, I was talking to my doctor
friends and family members when some of these results started coming

(25:42):
out, including your paper, Morgan, and I think there was this sense of
existential concern.
But then I think it quickly morphed into, well, the tests
really don't mean that much.
And they, as we've discussed, are not the end-all, be-all, by any means.
Alright, so Morgan, we wanna jump, we've actually already talked about
this a little bit, but we want to just spend a little bit of time.

(26:02):
I think one of the reasons we invited you on is because both Andy
and I are amazed at your ability to balance many, many things. Like,
you know, we all talk about multitasking.
I think we're all busy, we all do lots of things, but you really are
balancing several different,
amazing, amazing activities as part of your work, both within medical

(26:23):
school, of course, and then also academically and then in VC world.
And so I guess my question is, are there two Morgans or more than two Morgans?
You know, medical school is already really hard, and how do you possibly
manage to get everything done?
What is your productivity hack?
I, honestly, I'm not the most productivity tool

(26:45):
hacker person you'll meet.
I'll be honest.
I think a lot of the reason I was able to do this was because of
my team at Bessemer and their willingness to support me through this.
The scheduling was crazy.
Like, going into the OR at four in the morning some days, and stepping
out in the afternoon when our cases were done to hop on a call with a
founder, taking a board meeting at night.
I mean, they made that happen.
And so in many ways it's just the people that I work with

(27:06):
who are supportive of the path.
But I also think, to the earlier comments about information diet and refining
our priors, the things that I work on,
so, research and computation and working with y'all at the
journal, the work with Bessemer
leading our health care AI practice, and practicing medicine, are highly
synergistic in nature, and each one feeds into the other in different ways.
And so, whichever kind of context I'm switching into, I know it's

(27:28):
serving something else I'm working on in a really interesting way.
And so I've kind of given myself the intellectual freedom to know that's true,
even if it feels like it's a little bit of a longer journey to connect those dots.
And so, I guess that's my short answer to what's probably a much,
much more complicated question.
Yeah, just as an aside, but also meta-related to this, you know,
entire conversation, have you used AI at all to help with medical school?

(27:52):
Like, is AI useful for you as a tool to go through the preclinical curriculum
or through your clinical training now?
Has it been useful for you?
Absolutely.
So, when I was in medical school,
I actually put out a post on my blog, and the
title of it was, you know, "Put your healthcare AI app to the test."
And so, I think in my unique role as an investor, you know, I've been meeting lots

(28:14):
of founders, a lot of physician founders, building tools for other physicians, and I
just dogfooded this stuff for four years,
right?
When I think about some of the companies we're invested in, hopefully we'll talk
about a few of them, Abridge in particular, one of the AI scribing companies.
Oh yeah, I remember this.
You actually, okay, wait.
We should tell the story, right?
So, Morgan and I were at a conference together, right?

(28:36):
And this was a conference organized by Zak. And Morgan was talking about Abridge,
and I'd heard about it, but then he was like, let me just show you a demo.
And I was like, okay.
And I think I acted as a patient, right?
I acted as a patient. You did.
You took my history.
First of all, I was very impressed with your bedside manner.
Thank you.
You took my history.
You're a very good doctor.
And Abridge,

(28:56):
the Abridge app very, very effectively summarized the note there, right?
And did it in, I think, basically real time.
And you showed it to me and it was pretty cool.
So, why don't you just tell, I think a lot of people have
heard about Abridge, but maybe you can tell folks about Abridge.
Well, I hope the Abridge team hears this, 'cause always be selling, you know?
But so, yeah, at a high level, Abridge is a tool that clinicians can

(29:17):
utilize to capture the conversations they're having with patients, and, from
a system perspective, capture that information in a way that is complete
and accurate. And that enables that data to be utilized for downstream use cases
in terms of providing that information to other clinicians, providing it to
the patients in an accessible format.
And then thinking about all of the downstream activities a health system
has to utilize that information for, whether it be revenue cycle, clinical

(29:40):
quality reporting, and the like.
We invested in Abridge back in 2020, and interestingly, at the time, the
company was a direct-to-consumer app.
So, the founder, Dr.
Shiv Rao, is a cardiologist by training who partnered up with Zach Lipton at
Carnegie Mellon to develop this technology, and initially the
distribution was, let's go direct to patients who have this acute

(30:01):
pain point of getting shuffled between appointments from their PCP to their
endocrinologist, to their cardiologist, none of whom are talking to each other.
Can we reverse engineer a solution to interoperability,
starting with the patient?
It's kind of heralded as this concept of the personal health record.
Can you update the patient's information and phenotype as they're bumping into
the system and experiencing health care?

(30:21):
Interestingly, that consumer product did gain traction with a lot of
rare disease patients, who unsurprisingly really feel this pain point acutely.
But ultimately, as the company matured, and frankly as the market
matured, and as products like ChatGPT educated the health care executive
landscape that the stuff was real.
The stuff was not just pie in the sky, founders selling a fake vision.

(30:43):
These were real products.
The company repositioned toward the enterprise, and
kind of the rest is history.
I will say I do think that
the ambient scribing landscape, whichever vendor you're talking about,
is the greatest unanimous adoption of technology by health systems and hospitals
since the electronic health record.

(31:03):
And this has all happened without the HITECH Act equivalent or meaningful-use
equivalent for the adoption of AI, and
grounding ourselves in that reality is important.
So, let's stay on that for just a second, because the thing with
Abridge and the ambient scribing technology that has been surprising
to me is the traction and durability.
Like, one of the hardest things in health care is business models.

(31:26):
Who the customer is, if you have an AI health care application,
is often not obvious.
So, is this like the alignment between things that doctors
actually want to use and some type of sustainable business model?
Like, do the hospitals pay for it because it actually does facilitate billing
and reimbursement more efficiently?
I see why the doctors like it, because it lowers their administrative burden.

(31:48):
Why do the hospitals like it?
Yeah, well, let's ground ourselves in the reality of where things are post-pandemic.
Right?
In the years after the pandemic, half of U.S. hospitals were
margin negative, meaning they were losing money on the provisioning of care.
You had the great resignation of health care workers, and not just
clinicians, but really across the system, people facing just tremendous amounts of
burnout in the current way of working and

(32:10):
then you had this moment in AI with GPT and these other technologies
that allowed us to build these delightful product experiences that
had been kind of unseen in health care.
Right?
Design has never been a strength of our industry, and so I think in many
ways you had the perfect confluence of these trends coming together to
facilitate this adoption in the ambient scribing category.
Now, as it relates to the business model?

(32:31):
Yes.
I think most of the players in the space are selling this technology
to hospitals as a software product, as a tool for clinicians. And
many of these companies are
approaching this as a point solution:
we are an AI scribe and that's what we do.
I think as they move into other spaces, like you mentioned, revenue cycle
management, maybe clinical research, I'm a big believer that ambient scribes

(32:53):
should improve the quality of the data we collect about patients and will be
a win for research and what's happening in life sciences and real-world data.
I think other business models will reveal themselves.
The big thing that keeps me up at night, if you ask me about AI, it's not whether
we'll be able to do certain things.
I think I'm a techno-optimist and believe the arc of progress
bends towards interesting things.

(33:14):
It's whether or not, in our current system and all of the
bureaucratic apparatus around it,
we will be able to afford this stuff, and not just afford it
for our commercial populations, but afford it for everyone.
And I think people working in technology should pay attention to
that problem, because we all wanna build products that people use.

(33:34):
And unfortunately, the reality has been in AI that the adoption has
focused on stuff that makes money.
But when I think about the greatest potential here, sure, revenue cycle's cool.
Can we phenotype disease in a more precise way to actually get patients
answers faster and connect them to therapies that are curative faster?
That's really, I think, where this technology goes and where the

(33:56):
greatest impact will manifest.
Cool.
Awesome.
Morgan, can I ask you to talk a little bit about your general
sort of investment thesis?
I'm sure you are getting pitched all the time.
How do you cut through the noise?
One of the funnest experiences in medical school, just as an anecdote
before I answer your question, was getting pitched by some of my
professors, sometimes on rounds, you know, like after rounds.

(34:17):
Amazing.
Hi, Morgan, I have something to sell to you right now.
Yeah, yeah, yeah.
It was amazing.
I met a lot of wonderful people through that.
But look, you know, at Bessemer, investing at a high level is both thesis-
driven and opportunistic, right?
So, we come up with what we call roadmaps, which are theses about where we see
the puck headed in technology over the next five to 10 years, per se.

(34:40):
We can also dream about where it goes in 30 years, but in the venture capital
business, being early, being too early,
is often being wrong, right?
And so, you need to believe that these trends are manifesting
over a near-term time horizon in health care and life sciences.
Just for folks who are less familiar with Bessemer, we've been investing
in the category for 40 years.
So, we have a dedicated focus and commitment to these industries.

(35:01):
Even though building in them can be very hard, we think it's quite
worth it and are long-term oriented.
And we do that across modality.
So, I describe us as modality-agnostic, problem-specific investors.
We invest in software companies, tech-enabled services, services,
therapeutic platforms, and diagnostics.
And when I think about my approach, it's, let's find the biggest
problems in health care and medicine, and let's figure out what the

(35:22):
right modality is to solve it.
It could be a diagnostic-therapeutic combination.
It could be a software plus a diagnostic.
You can see how all these combinations
can become quite interesting.
And we're doing that from the pre-seed to growth stages. And in AI,
when we started investing in the category about six years ago, we were really
focused on what I describe as AI-first
founders. And I'll distinguish what AI-first means from AI-enabled.

(35:47):
So AI-first companies and teams are those that are actually innovating
at the methods layer in AI, right?
They're publishing papers, they're presenting at top AI conferences,
and they're really advancing what our capabilities are from
an applied AI perspective.
And I think Abridge is a great example of such a company.
We have a company called Subtle Medical that spun out of a Stanford

(36:07):
radiology lab that was also in this category of being AI-first.
I distinguish this from AI-enabled because the teams at AI-first
companies look different, right?
They're hiring AI scientists. They're often collaborating with academia.
And, you know, the actual value creation at the company will of course come
from building a business, but it first comes from developing a technology
that is proprietary and unique.

(36:28):
I'll caveat that with saying, of course, all moats are fleeting in AI right
now, and so you're in many ways on a kind of hamster wheel, trying to constantly
keep up. And the company will need to finance itself accordingly, often raising
a lot more money in order to do that.
Let's kind of juxtapose that with AI-enabled companies, which are
companies that are much more in the business of deploying AI. Whether

(36:49):
it's something straightforward like a large language model, or
kind of patching together a system, or thinking about each individual model as
an instrument in a broader orchestra.
The pejorative for this, although I'm not endorsing it, is GPT wrapper.
Right.
Which is, yes, that's the pejorative.
Yeah.
And so these are, you call them, AI-enabled companies.
So, I'm curious if the dynamics recently between, like, DeepSeek and OpenAI, has

(37:14):
this changed your view of where the,
I think nothing is durable, as you're saying,
you know, nothing's infinite, but where the semi-durable moat is, like,
has it switched at all from the sort of AI-first to the AI-enabled companies?
Is it distribution, having people enjoying and using

(37:34):
your products and, like, knowing your brand, is that the bigger, like, does
that have a relative advantage after just the last year of open source and
DeepSeek, or does AI-first still have its place? The greatest moat in health care
and life sciences is distribution, because the way that enterprises
adopt technology is far slower, oftentimes, in hospital environments.

(37:57):
You know, once technology's adopted, we see that technology flourishing in
that environment for the next decade.
So, if you can weather the storm, the 12 to 24 months it often takes to sell
into these large enterprises, and get on the inside, that in itself is
a tremendous moat, and that's where AI-enabled companies make their mark.
I think on the AI-first side, there's still tremendous opportunity for

(38:18):
those companies, but they need to be quite specific in where in the
stack they decide to invest in innovating at the methods layer.
And I think we can all agree on this call, post-DeepSeek, but even we could
have seen this coming a couple of years ago.
That's not going to happen at the foundation model layer in LLMs. In biology,
will it happen at the foundation model layer in PLMs?
I'm not too confident.
Right.
But are there very specific areas?

(38:39):
Whether it's in clinical entity recognition, or if it's in a particular
kind of variant interpretation protocol, or, you know, you think about getting
a level deeper and becoming more nuanced with the problem you're trying to solve.
Designing epitopes to mitigate immunogenicity, as one
of our companies, Seismic Therapeutic, has focused on.
And so I think the moats will come from these more niche applications

(38:59):
for AI-first companies, and then they also have the joint burden and task
of also nailing the distribution.
Can I ask a follow up to that, Morgan?
This is something I feel acutely both in my academic and entrepreneurial life.
One thing that has been a truism for the last, like, decade is
that general approaches beat
problem-specific approaches.
And so, like, when I was working on the USMLE, I had this

(39:22):
little LSTM that I was trying to get to answer medical questions.
And I was trying to be cute and clever about the data that it had and,
like, how to represent the knowledge.
I mean, it turns out that, like, next-token prediction completely
just bulldozed that problem.
The problem that I needed to be solving was, like, next-token prediction
very well.
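As a side note for readers, "next-token prediction" here just means repeatedly guessing the most likely next token from data. A toy bigram counter, purely an illustrative sketch and nothing like the large models discussed, makes the idea concrete:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count, for each token, which tokens follow it and how often."""
    following = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        following[cur][nxt] += 1
    return following

def predict_next(model, token):
    """Return the most frequent continuation seen in training, or None."""
    if token not in model:
        return None
    return model[token].most_common(1)[0][0]

# Tiny hypothetical corpus; real models train on vastly more data.
corpus = "the patient has a fever the patient has chills".split()
model = train_bigram(corpus)
print(predict_next(model, "patient"))  # prints: has
```

Scaling this same generic objective from bigram counts to billions of parameters is, loosely, what "bulldozed" the bespoke task-specific approach.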
So what lessons are there?
For folks starting companies, how to pick problems that won't be

(39:45):
bulldozed by the general solution.
'Cause that to me feels like the great existential risk for all
of these companies: that you'll get GPT-5'd or GPT-6'd, or
o4'd, o5'd, o6'd, out of existence.
Like, distribution is a great moat there, but if you're starting
something new, that may not be an option.
Like, how do you think about that core existential problem?
Well, if I knew where the generalist solutions would be

(40:07):
in the next 12 months, I would be
the smartest person on the planet.
I think it's very hard.
A quote that I love that Daphne Koller often says is, it's very hard to predict
what's going to happen in a short window of time when you're
riding an exponential curve, right?
That's like a very hard problem to solve.
What I would say is, I would focus on the areas that are more data-sparse.
So, when I think about what's happening on the clinical

(40:28):
side, I fully agree with you.
I think that generalist systems will continue to perform extremely well.
And I would not be betting on a clinical-only LLM unless it's in a very,
very deep, you know, domain category.
Like a particular segment of rare disease where you fine-tune
something to be quite performant there.
I think in biology there's much more opportunity, because we just don't
have the equivalent of the Internet for biological data available

(40:51):
to us to train these models.
And so, I think in the near term, I can't say what's gonna happen in five years.
There will still be a plethora of AI-first companies that are
able to maintain a moat in biology.
I do not think it will be at the PLM, protein language model, layer,
but there will be other segments where this is possible, and
I'm investing a lot in those areas.
Cool.
Awesome.
Are we ready to move on to the lightning round, Raj?

(41:12):
Yeah.
Alright.
Morgan, are you ready for the lightning round?
Hit me.
So, I think we asked the inverse of this question earlier, but the
first lightning round question is, what has being a venture
capitalist taught you about medicine?

(41:33):
Interestingly, being a venture capitalist has taught me most of what
I know about the business of medicine.
So, when I'm on a mitochondrial medicine service at a children's hospital and
I try to order a particular genetic test, and I hear from the billing
department that that ain't happening, I would say that my work in
venture and deeply understanding the reimbursement structure of the industry

(41:55):
has afforded me, I wouldn't say an empathy, but an understanding of why
certain things happen the way they do.
And I think recognizing that has been a superpower in terms of, you know, you're
making clinical decisions and wanting to do so in a grounded reality of the way
that we fund health care in this country.
So that's probably been one of the more specific takeaways.
I also think that venture has taught me a lot about what

(42:16):
scales and what doesn't.
If you think about the asset class, there are many, many ways to fund
a business in 2025, including not taking venture capital.
But what venture capital is particularly designed for is
technology that scales nonlinearly.
I actually think we've gotten this kind of muddled in our current environment,
where we use venture capital a lot for companies that are only taking
operational risk, which is fine.

(42:38):
It's actually quite an expensive asset to utilize for operational risk.
It's far better positioned, given how expensive it is, for a company
to utilize venture capital for taking some sort of scientific,
clinical, or technological risk.
And so, thinking in that regard, bringing that
mindset of what actually is capable of nonlinear scale and thinking about

(42:58):
that in medicine shines a light on where the opportunities will be.
Awesome.
Morgan, which is a harder job, medical student at Brown University, or vice
president at Bessemer Venture Partners?
Can you define what you mean by harder?
This is my lightning round.
I'm flipping it back to you.
Uh, more.
Yeah.
Let's say harder is, uh, the one that leads

(43:21):
to more stress in your mind.
Definitely medical school and medical training.
I mean, I would say the highs in medicine are way higher than the
highs in business for me personally.
And the lows are far lower.
The stakes are significant, right?
In business, if you lose money or something doesn't go as planned,
like, the business shuts down and it's sad, but those people
likely go on to live their lives.

(43:42):
And in medicine we're not always afforded that second chance.
And so I would say that's one of the harder parts about being in medicine.
The other hard part, and maybe you get this from my
background, is there's a lot of structure and a lot of bureaucracy.
And as someone who loves autonomy and freedom, I often find myself feeling
a bit defeated by the impending pressure of needing to conform.

(44:05):
But again, as I mentioned, I've been fortunate to find people who will
kind of help me break down some of those barriers and silos to stay sane.
Awesome.
Next question.
If you weren't in medicine and venture capital,
what job would you be doing?
So, I originally went to school to become a classics professor.
I love Latin and ancient Greek. And I know Emily, who was just on the podcast,

(44:26):
shares that. And actually Zak shares that. I don't know if it's like a
weird pediatrician, informaticist, classicist thing we have going on here.
I would've become a classics professor.
I love
ancient Roman history, and I love to study the literature
from those time periods.
The reason, actually, why I didn't end up becoming a classics major is because my
dad told me he wouldn't pay for college if I didn't get a science degree.

(44:47):
So that's why I made up my computational decision sciences major.
And I did end up integrating a lot of classics in that, including discovering
the field of computational classics, where I was using NLP methods on Livy.
So that's a fun area that I dig into sometimes, in my spare time.
Nice.
What's your favorite piece from the classics era?
I'm a big fan of a lot of Catullus's work, because it's funny and, I

(45:10):
think lighthearted and kind of punchy.
So, I think generally his body of work is interesting to me.
And I'd also have to say, I know it's kind of a canned answer, but
the Aeneid is something that I return to often. Forsan haec olim meminisse
iuvabit. I know my Latin pronunciation is terrible, but it is actually a
quote that I carry with me often when the going gets rough. Which is, you know,
perhaps sometimes it will be meaningful to have remembered even these things.

(45:35):
Awesome.
Excellent.
Morgan, if you could have dinner with one person, dead or alive, who would it be?
I knew you guys were gonna ask me this question.
Can I say the both of you?
There's a few, there's a few people.
We have had dinner before,
Morgan, you have to say someone you haven't had dinner with before.
Well, I'm subtly mentioning this because it's been a while, so
I'm hoping you'll take me to dinner soon, maybe for graduation.
But, I think.

(45:56):
There's a couple people that come to mind.
Bud Rose, who's the founder of UpToDate and who, unfortunately, is no longer
with us, I think was just a remarkable human being in terms of thinking about
where informatics and medical knowledge intersect, and, frankly, one
of the first people to grapple with how human physicians and computational
systems can and should interact.
And I really wish I could ask
Bud, Dr. Rose, what he thinks about our current environment

(46:17):
with language models and where we should take this stuff.
I think the other person I deeply admire, who's still with us, is Ted
Nelson, who is a computer scientist and great computational thinker who designed
Xanadu, which was kind of an alternative
way that we could have organized information on the Internet.
And I think he had a lot of really great, interesting ideas for his time.
Everything from, you know, micropayments for creators to thinking about

(46:40):
bi-directional linking and hypertext formats, which I think still
intrigue me as a way that we could have organized information at scale.
Cool.
Awesome.
Final lightning round question.
What is your P of doom?
Can you define that?
So, P of doom is the probability of doom, where the doom is specifically

(47:03):
a consequence of AI run amok.
So essentially, what's the probability that AI becomes sentient and kills us all?
Fifty percent.
Over what time
horizon?
A century.
Okay.
It is a safe answer.
Okay.

(47:23):
Okay.
And this is why I'm investing in AI.
Wow.
Alright.
Yeah.
So, we have a couple, like, big-picture questions to wrap up with, Morgan.
So, you've been great so far, and we just wanted to zoom out a little
bit to wrap up the conversation.
I've listened to a lot of investing podcasts and read a lot of investing books,

(47:44):
and it seems like identifying successful founders is, like, one of the things
that VCs think they do very well that differentiates them from their peer group.
So, when you are looking for companies to invest in,
what are the things in the founding team that you look for that are
strong predictors of success?
One of the most important things I look for is what I describe as

(48:05):
the learning rate of the team.
So, if we have a call on Monday, and then we catch up again on Friday, what
have you learned about your business, your customer base, your technology? And
really tracking that curve over time.
And when I've backed founders who have that kind of just off-the-charts learning
rate, I think that compounds. It enables them to, frankly, get

(48:27):
to the truth, the objective truth, and make decisions in a highly informed way.
In health care and life sciences, I think it's easy to often look for
the domain experts in a space, so the smartest person in this field of biology
or in this specialty of cardiology.
And I think those people are really important to companies, either founding-wise

(48:47):
or from an advisor perspective.
But I also look for people who are students of what's happening outside
of health care and life sciences, because so many of the great ideas come from
these other industries that are adopting technology much more rapidly than we are.
And the transfer learning across industries is quite impressive
and can really move the needle forward.
So, I look for that as well.

(49:08):
I think the reality is that making an investment in a company is like getting
married and, in fact, it's often harder to unravel than a marriage in the U.S.
And so I'd underscore the point that team is everything. But I'd also underscore
that a few other things are top of mind.
I think one is timing. The why now, right?
We talked in this podcast about a lot of ideas in AI and medicine that

(49:30):
have been percolating for decades, and yet we're grappling with them
in a real way in 2025 because there was some unlock and some change
that's forcing us to revisit this concept with a new lens.
And so, when I think about
technology evolving, and I'm making an investment in an evolving technology,
I wanna believe that that inflection point is happening during a period of time that

(49:51):
that company can be funded to realize it.
And you wanna be sufficiently early to where, as people joke, if there's a market
map of this category, you're way too late.
But you also don't wanna be too early,
such that the commercial opportunity for that technology and that company
doesn't materialize in what I describe as a venture timeline.
Yeah, being early is often the same as being wrong.

(50:13):
Spot on, in many cases.
Period.
Yeah.
Morgan, what's your vision for the future of medicine?
How do you see medicine itself?
The practice of medicine by physicians, and maybe medical school, too, evolving
over the next, let's say five or 10 years.
I'll hit the medical school point first.
I hope we move beyond

(50:33):
this approach of information inundation and memorization, and more towards
a system of thought and learning that forces critical thinking.
We're not there.
And I think the hard part is that learning medicine is learning a new language.
And so, we need to balance learning that vernacular and vocabulary with

(50:54):
forcing people to actually think. And, I would argue, think with AI, and
teaching that in an intentional way.
In terms of the field of medicine, we didn't touch much on this and maybe
we'll revisit it on a future pod, but I've been completely pilled by the
field of genomics and multiomics and where we are headed in that realm. I had
the opportunity as part of my rotations to spend time at NIH for three months,

(51:18):
where I joke that I saw how medicine could be practiced if we actually funded
things that were right for the patient.
And I acknowledge that the NIH, people joke, stands for
"not in a hurry," because things don't happen there in a super
fast or always efficient format.
But the people who work there, the physician scientists who are
leading trials there, are truly empowered to think objectively

(51:40):
about what is best for this patient.
And the institution is funded to support those decisions.
And in a dreamy world, I'm excited about the genomic learning health system.
And I'm excited about this notion that a larger percentage of people are
going to undergo molecular testing in the next few decades, whether that's
actually an exome or a genome, or even more specific testing that helps

(52:04):
us understand how protein expression is happening in a particular disease state.
And I'm excited about our ability to marry that information with what we now have
in terms of clinical data in electronic health records.
And so if you think about what the genomic learning health system stands
for, it's this notion that people undergo molecular testing and then they bump into
the health system and they receive care.

(52:24):
And as they receive that care, we're constantly updating their
phenotype and describing the new things that have gone on with them.
And as we have that information available to us, we can then go back and update
how we've interpreted their particular genomic and multiomic context to have
greater precision around how we think about their health and wellbeing.
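The loop being described here (molecular results captured once, phenotype accumulating with each encounter, interpretation revisited as the phenotype grows) can be sketched roughly as follows; all names and the toy reinterpretation rule are hypothetical placeholders for illustration, not any real system's logic:

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    """Hypothetical record pairing a fixed set of molecular findings
    with a phenotype that grows over time."""
    variants: list
    phenotype: list = field(default_factory=list)
    interpretation: dict = field(default_factory=dict)

def reinterpret(record):
    """Revisit each variant in light of the accumulated phenotype.
    Toy rule: a variant is 'supported' once any encounter mentions it."""
    for v in record.variants:
        record.interpretation[v] = any(v in finding for finding in record.phenotype)

def record_encounter(record, finding):
    """Each time the patient 'bumps into' the system, capture the new
    phenotype information and update the interpretation."""
    record.phenotype.append(finding)
    reinterpret(record)

p = PatientRecord(variants=["MYH7"])
record_encounter(p, "echo shows hypertrophy, MYH7 panel positive")
print(p.interpretation)  # prints: {'MYH7': True}
```

The point of the sketch is only the shape of the feedback loop: the genome is assayed once, while its interpretation is a living function of everything observed since.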
I think this will be kinda first manifesting in areas of
rare disease, as it already is.

(52:45):
But I'm hopeful that we get to a point, as we saw even just recently with the Mayo
Clinic Tapestry study, that we think about this in terms of a broad-based population
approach to multiomics and genomics.
That is both diagnostic but, as we know, increasingly preventative
and interventional. In so many ways, oncology has paved the way in
demonstrating how a deep molecular understanding of disease unlocks both

(53:09):
novel diagnostics and therapeutics.
And just think: today, naturally, if one were to be unfortunately
diagnosed with cancer, you often not only know the cancer type, but also
the driver mutation underpinning it.
Yet, when we look across other specialties and disease states, that same level of
molecular precision is often missing.
Think heart failure: you might be told you have garden-variety heart failure.

(53:32):
And so, one thread I'm hoping to pursue over the next decade is, I
guess, what I would call the molecularization of other specialties: cardiology,
neurology, nephrology, metabolism.
Because I see tremendous opportunity to bring
multiomic insights to each of these fields.
What do you see as getting in the way of that future? Payment models.
What gets in the way of every great thing in health care?

(53:53):
I feel like we don't have the payment models to support it.
If you're a commercial insurer today, your average patient, or your member, let
me use the right terminology, is
going to be with you for two years.
Why do you care about paying for their molecular testing to prevent some
disease that might manifest for that person in a 40-year time horizon?
Why do you wanna be on the hook for the gene therapy or the siRNA therapy

(54:16):
that would, in theory, you know, cure that patient if you're not going to be
responsible for the cost on the line?
These are the real questions we have to grapple with.
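The tenure mismatch can be made concrete with a toy expected-value calculation; the retention figure and the even accrual of benefit are illustrative assumptions, not actuarial numbers:

```python
def captured_share(annual_retention: float, horizon_years: int) -> float:
    """Fraction of an evenly accruing benefit stream the current insurer
    expects to capture, if the member is enrolled in year one and renews
    each subsequent year with probability annual_retention."""
    expected_member_years = sum(
        annual_retention ** (year - 1) for year in range(1, horizon_years + 1)
    )
    return expected_member_years / horizon_years

# 50% annual retention gives roughly the two-year average tenure mentioned above.
share = captured_share(0.5, 40)
print(f"insurer captures about {share:.0%} of a 40-year benefit stream")
```

Even under these generous toy assumptions, the payer funding the test captures only a small slice of a long-horizon benefit, which is exactly the misalignment being described.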
Again, if people take away anything from this conversation that we have,
I hope they hear me when I say:
the science, the computation, the technological innovation is no longer
the barrier, and maybe it never was.
Maybe it was a fallacy that it ever was, but just given compute, data access, and where

(54:40):
we're headed in terms of the arc of progress I mentioned, the tech is not the
problem in health care and life sciences.
The problem is the business model, the economic model, the way care
is paid for, and the incentive structure underlying that.
And more and more technologists, I hope,
will spend a little bit of their time solving those problems,
because I think there will be an outsized benefit for us all.

(55:03):
Well said.
Yeah, well said.
One last question, Morgan.
So, I think, again, you've been able to strike this, like, super interesting
balance between being a leading VC, investing in the space, and being
a future physician. For aspiring medical students who hear this and are like,
man, that sounds like an awesome life,
what can they do to become the next Morgan Cheatham?

(55:24):
No one should strive to bethe next Morgan Cheatham.
I'll start there.
I'm a big believer that everyone has a unique contribution to make, and in fact, often when I meet people for the first time, I make a point of trying to figure out what that might be for them.
What I can say, though, is there are a few principles that have guided me in pursuing my journey thus far.
The first and most important, I would say, is clarifying your vision.

(55:47):
Everything else will stem from that.
What strongly held beliefs do you have about the future state of the world in technology, biomedicine, or whatever domains you happen to traverse? And how do you position yourself to contribute to that over time as it unfolds?
For me, I talked about having a vision of what medicine could be like, where I described the genomic learning health system or the multiomic learning health

(56:09):
system, and I'm personally taking action by spending time in AI research, clinical genomics, and biotechnology investing, all interdisciplinary fields, to contribute to that future state.
The best way to clarify your vision, though, is to lean into your innate curiosity. I would say most of my major regrets in life so far stem from not going all in on something that I found interesting or

(56:31):
felt curiosity about. Whether it was playing around with Bitcoin in 2015, everyone has their story, or dabbling in genomics in undergrad, but putting it on hold for about a decade only to revisit it now.
So, I would say following the threads of your curiosity is key, even if your interests seem niche, esoteric, or intersectional and complex.
And in doing so, finding your people along the way is also critically important.

(56:55):
And then I would say the third and most tactical piece of advice would be parallelizing your career when possible.
For me, this happened by accident, but it has obviously manifested in pursuing venture capital and medicine simultaneously.
One informs the other, and on the other side of it, I almost feel like a toddler who is learning two languages.
It may take me longer to start speaking, but once I do, I am inherently bilingual.

(57:18):
And so, it can be difficult to parallelize your career, though, in fields like medicine, which are bureaucratic and hierarchical, and I think we have to acknowledge that.
But if you have that strong vision, you owe it to yourself to at least ask what might be possible.
Another quote I love from the classics is faber est suae quisque fortunae, or each person is the maker of their own fortune.

(57:40):
That's from Appius Claudius Caecus.
You just have to ask.
And the worst thing that can happen is someone says no.
And in other cases, if you're in a field where you don't need to ask, then just do.
Awesome.
I think that's a great note to end on.
Thanks again for joining us today, Morgan.
Thanks, guys, for having me.
This was a blast.
Yeah.
Thanks so much, Morgan.
This was great.

(58:01):
This copyrighted podcast from the Massachusetts Medical Society may not be reproduced, distributed, or used for commercial purposes without prior written permission of the Massachusetts Medical Society.
For information on reusing NEJM Group podcasts, please visit the permissions and licensing page at the NEJM website.