Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Patrick Sullivan (00:12):
Hello, you're listening to EPITalk: Behind the Paper, a monthly podcast from the Annals of Epidemiology. I'm Patrick Sullivan, editor-in-chief of the journal, and in this series we take you behind the scenes of some of the latest epidemiologic research featured in our journal.
Today we're talking with Dr. Prasad Patil about his article
(00:37):
"Using Decision Tree Models and Comprehensive Statewide Data to Predict Opioid Overdoses Following Prison Release." You can find the full article online in the June 2024 issue of the journal at www.annalsofepidemiology.org.
So let me introduce our guest today. Dr. Prasad Patil is an assistant professor of biostatistics at
(00:58):
the Boston University School of Public Health. His research interests include machine learning applications in public health, reproducibility and replicability, and training prediction models under multi-study heterogeneity. Areas of application include tuberculosis genomics, air pollution source apportionment, opioid overdose prediction, which is what we'll be talking about today, and analyses of
(01:24):
indices of well-being at various spatial resolutions. Dr. Patil, thank you so much for joining us today.
Prasad Patil (01:30):
Thank you so much for having me and for highlighting this work. It's really an awesome opportunity.
Patrick Sullivan (01:32):
Well, it's work that we were really excited about when we saw it, because the topic of opioid overdoses, I think, is such an important one in our time and in epidemiology, and the methods that you used were also of great interest. So can you just start out by giving us a little background about the problem that you describe? Why is this issue important?
Prasad Patil (01:52):
Sure.
So our paper studies the risk of opioid overdose in the 90 days after release from incarceration. So we're specifically looking at a subpopulation of incarcerated individuals in the state of Massachusetts. This data was collected from 2015 to 2020. And, well, why is the issue important? I think, to us at
(02:14):
least, I find this to be an intersection of two extremely vulnerable and stigmatized populations. So you have those suffering from opioid use and potential opioid overdose, and those who are in touch with the incarceration system, who have been previously incarcerated. Because of this sort of intersection, this is a
(02:34):
subpopulation that is often not studied all that well, I would say, and the risk factors that affect individuals who are being released from incarceration can sometimes be different from those that are important to look at for a general population, even among those who are opioid users. So things like reduced tolerance, increased stress,
(02:56):
and especially increased instability can greatly increase the risk of overdose post-release, and so we were specifically interested in trying to assess what these risk factors look like and what sorts of methods we can apply to what I'll describe later as a very unique data setting and data opportunity. But in general, it's very difficult to study this
(03:16):
question due to a lack of real research-quality data for this group.
Patrick Sullivan (03:21):
Great.
So you mentioned a little bit about the purpose of the study, but can you walk us through your study design? And given these challenges with accessing data, how did you address that and develop these data sources to be able to carry out this research with this methodology?
Prasad Patil (03:37):
Sure, yeah. I mean, I would say, and we can touch upon this later, but I think the thing we learned the most from this work is that it was extremely hard to apply the methods we were trying to apply here and really glean any actionable insights. That being said, the data set that we worked on is a fairly
(03:58):
unique linkage data set that the state of Massachusetts has put together. It's called the Public Health Database, and they have painstakingly and almost amazingly been able to link various administrative databases together. So we get identifiers for individuals who have been seen by various state services. And so our methodology had
(04:23):
two phases here. Although these databases are linked, they're not necessarily in a ready-to-analyze form. So the first phase was really trying to build what we called incarceration-overdose pairs. We wanted to pair up, effectively, lengths of incarceration, and really the 90 days post release from
(04:44):
incarceration, with potential overdose events. And in this PHD, the Public Health Database, there are many different data resources. What we looked at specifically were Department of Corrections records, which is where we got information about incarceration and other predictors, and we tried to link these with acute care hospitalization, ambulance records, and death records to
(05:08):
try to effectively match up the same individual across these databases, to say this individual was released at this time; in the 90 days post-release, do we observe an event in any of these other databases? And this in and of itself was a fairly difficult process. We had to do a bunch of merges and transpositions to connect these things together. And you had individuals who were
(05:30):
incarcerated multiple times, and so we had to think about how to deal with the fact that we have repeat observations of the same individual. In the end we ended up treating these as independent records, which reduces the amount of information we're working with to some extent, because we can't really compare
(05:51):
longitudinally here. But we built these records for each individual that tell you whether or not an event occurred in the 90 days post-incarceration for that given incarceration stay. And then the second piece of this was we fit what are called decision trees, a machine learning algorithm, to the entire cohort that we had built, as well as race-stratified subsets, and I can expound upon why we did that.
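To make that pairing step concrete, here is a minimal sketch of the 90-day windowing logic described above. It is an illustration under assumed, simplified inputs (hypothetical person_id, release_date, and event_date columns), not the actual PHD linkage code, which involved many more tables and identifiers.

```python
import pandas as pd

# Hypothetical, simplified stand-ins for the linked tables:
# one row per incarceration release, one row per observed overdose event.
releases = pd.DataFrame({
    "person_id": [1, 1, 2],
    "release_date": pd.to_datetime(["2016-03-01", "2018-07-15", "2017-01-10"]),
})
events = pd.DataFrame({
    "person_id": [1, 3],
    "event_date": pd.to_datetime(["2016-04-20", "2019-05-05"]),
})

# Join each release to that person's events, then flag any event that
# falls within the 90 days post-release.
pairs = releases.merge(events, on="person_id", how="left")
days_out = (pairs["event_date"] - pairs["release_date"]).dt.days
pairs["overdose_90d"] = days_out.between(0, 90)

# Collapse back to one row per incarceration stay; each stay is treated
# as an independent record, as in the paper.
cohort = (
    pairs.groupby(["person_id", "release_date"], as_index=False)["overdose_90d"]
    .any()
)
print(cohort)
```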
Patrick Sullivan (06:11):
Yeah, I think it may be helpful for people just to understand. I mean, we see these decision tree models, but how would you explain that to an earlier-career epidemiologist who's not so familiar with what a decision tree model does? And why was it a good choice in this particular study?
Prasad Patil (06:29):
Yeah, absolutely. You know, even for myself, coming from a biostatistics background, I had to learn a lot about machine learning before I got into all of these applications. But decision tree algorithms, the way these basically work, it's kind of a different way of thinking about trying to find out what variables are important for predicting a given outcome, and their goal, 100%, is to try to predict something.
(06:50):
So here we're trying to predict whether or not an event occurs after an individual is released from prison. What this algorithm does is: you give it the entire data set, you give it all the variables that you've measured as potential predictors, and it will go through each one and build a binary decision rule. So for the decision rule, for example, let's say one of the variables we're including in the model is age, and let's say the
(07:13):
range of ages in our data set is 18 to 65. It will go through every value of age that is represented in the data set and build a rule that says: let's group our observations into those that are under 20 and over 20, under 21 and over 21, and so on, for every value going from 18 to 65.
(07:33):
And with each rule it'll then see: how well have I separated this thing that I'm trying to predict? So here I'm trying to predict whether or not an overdose event occurs. If my rule is, everyone less than 20 goes to the left, everyone over 20 goes to the right, have I separated out the overdoses from the non-overdoses? You'll have what you call a loss function to measure whether or not you've done that
(07:57):
well, and you'll do that for every possible rule you can build for every variable, and pick the one rule that does the separation the best.
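As a minimal illustration of that split search (a sketch of the general technique, not the authors' code), the snippet below scans every observed cut point of a single variable and scores each binary rule with Gini impurity as the loss function:

```python
import numpy as np

def gini(labels: np.ndarray) -> float:
    """Gini impurity of a set of 0/1 outcome labels."""
    if len(labels) == 0:
        return 0.0
    p = labels.mean()
    return 2 * p * (1 - p)

def best_split(x: np.ndarray, y: np.ndarray):
    """Try every observed value of x as a cut point; return the threshold
    whose left/right groups best separate the outcome y."""
    best_threshold, best_loss = None, np.inf
    for threshold in np.unique(x):
        left, right = y[x <= threshold], y[x > threshold]
        # Loss = size-weighted average impurity of the two groups.
        loss = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if loss < best_loss:
            best_threshold, best_loss = threshold, loss
    return best_threshold, best_loss

# Toy data: age as the predictor, overdose (0/1) as the outcome.
age = np.array([18, 19, 22, 30, 41, 55, 60, 65])
overdose = np.array([1, 1, 1, 0, 0, 0, 0, 0])
print(best_split(age, overdose))  # best rule "age <= 22" with loss 0.0
```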
Patrick Sullivan (08:04):
The model really looks at all those, optimizes the cut point, for example for continuous variables. You're saying it'll check each possible cut point and see what explains the biggest amount of variance, essentially.
Prasad Patil (08:17):
Exactly right, or what separates the thing you're trying to classify, in this case. Yeah, and so then, this algorithm is what's called recursive. So now, once you've split the data set into two pieces with this initial rule that you found to be the best, you do it again on each of the two pieces, right? And you continue to split until you've reached some predefined endpoint.
(08:39):
And so, for those who have a background in regression modeling and things like that, first of all you can see how this is quite different in terms of a data approach, and you can also start to see why some of these algorithms are a little harder to interpret and are a little bit more greedy about trying to find the best possible predictive option.
(08:59):
So what you end up with is called a decision tree, because it kind of looks like a tree, right? It starts with a single rule, then the next level has two rules, and the levels go on with these binary rules until you end up at some endpoint. And so, for a new observation, you would check where it falls on either side of each rule, right? So for a new observation, let's say they're over 20 years old.
(09:20):
So we go to the right in our tree. Then our next rule, let's say, checks how long their length of incarceration was, and you make a decision going left or right, and you continue cascading down the tree until you end up at some endpoint that assigns a prediction for that person. And that endpoint will be either mostly overdose individuals or mostly non-overdose individuals.
(09:42):
That'll determine what your eventual prediction is. But again, I want to emphasize that the goal is to predict, and so going back to this and figuring out how things are associated, what interactions look like, and things like that, is a bit of a challenge with this type of method.
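Putting those two ideas together, here is a compact sketch (again my own simplification, not the paper's implementation) of the recursive splitting and the left/right cascade for a new observation. It reuses the best_split helper from the previous snippet:

```python
def build_tree(X, y, depth=0, max_depth=3, min_size=5):
    """Recursively split until a predefined endpoint (depth, node size,
    or a pure node) is reached."""
    if depth == max_depth or len(y) < min_size or y.mean() in (0.0, 1.0):
        # Leaf: majority class (default 0 if the node is empty).
        majority = int(y.mean() >= 0.5) if len(y) else 0
        return {"predict": majority}
    # Greedy step: find the single best rule across all variables.
    best = None
    for j in range(X.shape[1]):
        threshold, loss = best_split(X[:, j], y)
        if best is None or loss < best["loss"]:
            best = {"feature": j, "threshold": threshold, "loss": loss}
    mask = X[:, best["feature"]] <= best["threshold"]
    return {
        "feature": best["feature"],
        "threshold": best["threshold"],
        "left": build_tree(X[mask], y[mask], depth + 1, max_depth, min_size),
        "right": build_tree(X[~mask], y[~mask], depth + 1, max_depth, min_size),
    }

def predict(tree, x):
    """Cascade one new observation down the tree to its endpoint."""
    while "predict" not in tree:
        branch = "left" if x[tree["feature"]] <= tree["threshold"] else "right"
        tree = tree[branch]
    return tree["predict"]

# Example with the toy arrays from the previous snippet:
# tree = build_tree(age.reshape(-1, 1), overdose)
# predict(tree, np.array([19]))  # cascades left at "age <= 22" -> predicts 1
```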
Patrick Sullivan (09:59):
So, given that goal of prediction, what were some of the main findings after you applied this method? What were some of the key takeaways from the analysis that you did?
Prasad Patil (10:08):
Sure. So, to describe the data a little bit: our final data set had about 5% overdoses in it. So we had 14,000 or so observations, and for about 5% of those we were actually able to measure an overdose for that individual in the data. That doesn't necessarily mean that other people didn't have one, it's just that we weren't able to measure it based on what we
(10:28):
had. And we fit this decision tree algorithm to the entire data set, and we found that it exhibited pretty good sensitivity. We did some things, what's called case weighting, to try to prioritize predicting overdoses over non-overdoses, because we have so few in relation, and we were trying to increase the accuracy and the sensitivity of this method. And the sensitivity
(10:51):
overall was pretty good, but we found that this was mostly driven by white, non-Hispanic individuals. They made up the majority of the data set and they made up the majority of the overdoses, and so everyone else got put into this bucket of no overdose, which we knew was not true. For Black individuals,
(11:14):
for Hispanic individuals, for Asian individuals, even in our data set, of which we had a few, there were some overdoses that were not being picked up by this method. And so we fit these race-stratified models to try to understand: does the picture look different if we try to fit a model specifically within these subgroups rather than overall? And we found some risk factors for Black individuals
(11:36):
and for Hispanic individuals that looked different from this overall model. So we found the overall one was not working very well, but these metrics were more balanced when we broke it up, stratified in that manner. And overall, across a number of these models, although none of them performed particularly well, I would say, in terms of their accuracy, they all found that
(12:00):
spending a longer time at the most recent facility was associated with a decreased risk of overdose, and involuntary commitment was associated with an increased risk of overdose.
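For readers curious what case weighting can look like in code, here is one hedged sketch using scikit-learn's DecisionTreeClassifier and its class_weight option; the paper's exact weighting scheme, software, and variables may differ, and the data below are synthetic placeholders, not the PHD cohort:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# Synthetic stand-in for the cohort: ~5% positive (overdose) outcomes.
rng = np.random.default_rng(0)
X = rng.normal(size=(14000, 5))
y = ((X[:, 0] + rng.normal(scale=2.0, size=14000)) > 3.3).astype(int)

# Upweighting the rare overdose class steers splits toward sensitivity
# (recall), at the cost of more false positives among non-overdoses.
unweighted = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
weighted = DecisionTreeClassifier(
    max_depth=4, class_weight={0: 1, 1: 10}, random_state=0
).fit(X, y)

print("unweighted sensitivity:", recall_score(y, unweighted.predict(X)))
print("weighted sensitivity:  ", recall_score(y, weighted.predict(X)))
```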
Patrick Sullivan (12:14):
So when you say time at the facilities, this is more time at the last sort of incarceration facility?
Prasad Patil (12:21):
Yeah, that's right. So this was something we found out later in the analysis. Initially this variable was coded as length of stay, effectively, and so we thought that meant the length of their term. What we came to find out, after some discussion with the Department of Corrections, is that this variable coded the length of stay in the most proximal facility. That means, for those
(12:42):
who aren't familiar, people are often moved around from institution to institution within the system, and so this variable just captures how long you were at the last place that you were imprisoned.
Patrick Sullivan (12:53):
So might that be a marker, for people who have shorter duration at the last facility, for individuals who may have complex behavioral or medical problems and get moved around for the purpose of managing those? I mean, is it really about the duration, or do you think that might be a marker for, like, what does it mean to be moved frequently in the system? What is that confounded with? How do you interpret that
(13:14):
finding? That's what's going on in my head: what are the characteristics of people who are moved more or less frequently?
Prasad Patil (13:20):
Right, no, it's absolutely... it's complicated, and I don't think there's any overarching characteristic that would define these people. So, for example, it could be a short stay because it's a short sentence, right? It could just be that you were put in for something that is associated with a short sentence, and then you're let out. Or it could be that you're put in a holding facility and then
(13:42):
you're moved to a different facility, depending upon what's going on with your case, or something along those lines. And I have to say, I don't want to go too far into this, because this is not my expertise; I'm more on the methods side. I definitely don't want to say something wrong, but this is how I understand it.
Patrick Sullivan (13:57):
But I think, I mean, from our perspective as epidemiologists, I think sometimes the important thing is to identify our findings as hypothesis-generating and then hand them over to the folks who know more deeply what they would mean. I think a conversation with folks in the correctional system, to ask what these things are associated with,
(14:18):
I mean, I think it's just an interesting conversation, and I really appreciate you delineating our roles as epidemiologists and how far our knowledge goes, because some of these are really deeply idiosyncratic questions about how the correctional system works. So this leads back to, and should raise, those questions.
Prasad Patil (14:37):
For sure. And I think, with the expertise in our group, we centered around this notion of instability, which I had mentioned before. Part of what we had been doing in this project as a whole is we actually did a bunch of literature review on identified risk factors for opioid overdose post-incarceration, and we did community outreach.
(14:57):
So folks from our research team ran focus groups and showed people who have been in the system, or who work in the system, who are social services workers, some of the risk factors we'd identified, and asked them to fill in the gaps and tell us what seems relevant and what seems irrelevant. And they spoke a lot more about more abstract things:
(15:18):
instability was a big one, fear, stigma, and how much these things influenced the desire to use again or the risk of overdose. And so part of that made us want to link moving around, shorter terms, being in and out of incarceration, with this kind of overarching principle of
(15:38):
instability.
Patrick Sullivan (15:39):
It is kind of always just fascinating to me that when you talk about the complex web of causality and these constructs like fear and instability and vulnerability, it's sometimes amazing to me that we find signals at all. We have some pretty crude, pretty distal measures of what the actual levers of change are, and there's a whole other process that sits
(16:17):
behind that. So I think about the continuum from the empirical findings to all the other kinds of pieces that you described, focus groups, individual in-depth interviews, expert interviews, to try to figure out what sits behind that. But sometimes it does surprise me that we get strong, clear signals of things that, when you unpack them, are a very complex
(16:38):
set of social determinants.
Prasad Patil (16:42):
Absolutely. And I think, I mean, if you look at this finding on its face, right, basically a rule that was very common across a lot of these decision trees we fit was that if you had a longer length of stay, you were predicted to not have an overdose. Now, if you want to take a very simplistic view on what you should do
(17:02):
based on that information, you might conclude that you should assign longer sentences. Right? And, of course, that is incorrect, and it is why we need to partner with people who understand the actual situation and who can actually provide insight on what these different things mean, before we jump to what, on their face,
(17:25):
seem like useful conclusions. Right?
Patrick Sullivan (17:28):
Yeah. So, can you talk about, and we've gotten into this a little bit, but what do you see as some of the main strengths and some of the important limitations of your study? You talked about them some in the paper, but can you just recap for us what's strong about this method and what are some of the limitations folks need to consider?
Prasad Patil (17:45):
Yeah, I think the greatest strength of this work was really the ability to work with this sort of state-curated data warehouse, and again, I want to highlight that it's, I think, unique in the country. I don't know many, if any, states that have linked together these types of databases at this level yet, and just the fact
(18:07):
that we were able to conduct this analysis, I think, is worth talking about, right? And it's worth highlighting to other state agencies, to say that these kinds of things are possible if we start to curate our resources and link them together.
Patrick Sullivan (18:21):
And shout out to Massachusetts for organizing this, because a lot of states don't. I think it often takes investment of state resources to do this, and I think in a lot of states this isn't given priority. So props to Massachusetts, for sure.
Prasad Patil (18:35):
Absolutely. And I think the other big thing is, here we have some semblance of quantitative information that backs some of the, I would say, more qualitative findings in this field previously. So, like I described, a lot of the risk factors come from smaller studies, they come from talking with the community, and
(18:56):
so now we wanted to try to supplement that with some of this algorithmic modeling, to say, well, what happens when we actually try to predict something like overdose? What do we find? How does that agree with or disagree with the existing findings? And I think it adds to that conversation to say that you can fit these types of models, these are the associations or the predictions that we're getting, and it kind of shows
(19:18):
that there is some efficacy in applying these machine learning types of algorithms to this problem. In terms of weaknesses or limitations: well, the models are not good enough to use, we don't understand them well enough, and they don't have prediction metrics that would say, let's apply this to new individuals to predict their risk of overdose. They're not nearly at that point.
(19:40):
I mentioned the kind of crude quantification of overdose. We only have those whom we can see. So there are a lot of people who have likely overdosed but have not come in contact with the state services, and so we are probably grossly underestimating the overdoses that actually occurred. And there were a number of computational limitations. One of the reasons we used decision trees was that they
(20:02):
were the most sophisticated ML technique we were able to apply in this environment. We had to actually go to the state department to run code, or send code to our liaisons at the state department, because all this data is kept under lock and key. So the process was pretty challenging, and we weren't really able to do something super sophisticated.
Patrick Sullivan (20:25):
So I want to turn now to a part of the podcast we call Behind the Paper. It's really to try, especially for people who are earlier in their career and who see this kind of published work, to help us think about how we actually are able to do this work as humans, you know, as people. So I wonder what the biggest challenge you faced was.
(20:46):
It's often getting funding, but in the conduct of the research, what did you find challenging, and how did you overcome that?
Prasad Patil (20:53):
Yeah, I think, I mean, for me again, coming from a more methods standpoint, I think the data quality and the computational limitations that we faced in trying to work on this problem... I would say we probably didn't really overcome these. We did the best that we could under the data conditions that we were able to work in.
(21:13):
But I think if you read the paper you'll see that, like I mentioned, these are not actionable findings yet. They're telling us something, but if you compare the application of machine learning methods in other facets, even of medicine, to what we're trying to do here, we're not even close, really. And I think it speaks to this more general
(21:35):
issue: this is already a population that's difficult to measure and difficult to survey, and so the data that we get from them are not usually of very high quality for trying to do these analyses. So there's already a gap, and then we can't apply very nice and fancy methods to these data, and I feel like we can't necessarily learn as much as we can in other settings.
Patrick Sullivan (21:56):
Yeah, all of our sort of EPI 101 things about misclassification of data are just, like, the characteristics of these data, it seems like. And there are reasons, I think, for systems not to have the data, and also for individuals; sometimes there are disincentives or incentives to reporting, you know, behavior. So there are all kinds of input issues.
(22:19):
I do wonder, you've talked about this method, but if there's a listener who wants to learn more about the field of decision tree models, are there any resources or papers or websites? How would someone take a first step into this? Where would you send them to look?
Prasad Patil (22:36):
The great thing is that almost every resource under the sun on machine learning is available. It's a very popular topic lately, so you can start off on YouTube or something like that and just look at a few videos to get familiar with what these methods are doing. I think it's not so much about access, right? If you're a student, your school, your university, probably
(22:56):
has a course, or you have open courseware and things like that, which are really quite rich in detail these days. It's more, to me, a question of what it is you want to learn. Do you want to learn how to apply these methods? Do you want to learn how they work? It really depends on what level you want to enter at, I think, from the very theoretical to the very applied.
(23:19):
My personal opinion is it's really worth seeking out material that provides you some semblance of detail on what the algorithm is actually doing, kind of like how we talked through what decision trees try to do, because it's very easy to apply these things and it's not so easy to understand what they're doing and what it means. So I think my first question to those who want to learn would
(23:42):
be: why do you want to learn? What do you want to do?
Patrick Sullivan (23:47):
I also want to touch on the idea of algorithmic equity. So what is algorithmic equity, and why is it important?
Prasad Patil (23:54):
So, as far as I know, this is not a very well-defined term. This is something that I've been thinking about as I worked on this project, and, again, to give some background: algorithmic bias is a huge topic of interest in the machine learning world, and folks may have heard about these very, you know, disturbing cases where people have deployed
(24:14):
machine learning or artificial intelligence algorithms on data sets where structural biases exist, and those things then get propagated. So, for example, again in this world of incarceration, there was an algorithm that was trained to try to help with prison sentencing, and the goal was basically to help judges decide what sort of sentence should be assigned,
(24:37):
given historical information, and that historical information has racial biases in it. Effectively, individuals who were non-white were given longer sentences, even if all their other characteristics were exactly the same as a white individual's, and so the algorithm picked up that pattern and propagated that issue, right? And so there are a lot of people working on what
(24:58):
we can do to take these complicated algorithms and reduce the risk of bias, right? To try to hide potentially risky information from these algorithms and try to make them less biased at these sort of societal levels. What I think about as algorithmic equity is something slightly different, which is kind of what I was describing before. So here we're working on opioid overdose in the incarcerated
(25:21):
population. This is a vulnerable population that's understudied. There's already a big gap, right? They're in need, and not enough work is being done in this realm. Add to that the fact that the data is really complicated, and it's not a very attractive place for algorithmic innovation, either. So we used decision trees here for many reasons, but this
(25:42):
is a very old, classical method. There's a lot better stuff out there now that we wish we could have used here, but it's simply not suited for this problem. And so my question really is: how do we get people who are working on methods and working on algorithms to try to improve what we can do for these difficult data settings, and not so much improve on algorithms that already work
(26:05):
really well on rich and clean data sets?
Patrick Sullivan (26:09):
Yeah, I think this idea, that the selection biases or the confounding in the data sources may be picked up by the algorithms as signal, is a special problem, given the history of racial inequity in the criminal justice system. So thank you for calling it out; I think that naming it is often helpful. Sure, yeah, so I'm going to make a hard turn here to one
(26:32):
last question. I'm just very interested in how, as professionals, we navigate our careers and how we make these transitions from being in educational settings, to sometimes being in postdoctoral settings, to being in faculty settings. So I wonder if you could give your younger scientific self one piece of advice. Thinking back to a point in your training or in your
(26:53):
postdoctoral preparation that was a challenging point for you: what insight do you have, being able to do this work now, that you might feed back to that earlier you, at an earlier point in your career? About what's been helpful, or what seemed hard but didn't matter? What encouragement would you give to your younger career self?
Prasad Patil (27:13):
That's a great question. I think what would be relevant to me back then, and even probably now, is maybe to not be so shy, and to try to make connections with other people and really seek that out. I think, when I was doing my PhD work, and other PhD students can probably relate, you get very siloed into what you are focused
(27:34):
on, and, at least for me, I convinced myself that I work in a meritocracy: if I do really good work, it'll be recognized, and so I should really just sit here in my room and focus on my work. You know, I work on math, I work on theorems and things like that. But I think, like any other field, it's really important to
(27:55):
get to know other people, and I think that's important for science. I think you can really only do so much by yourself, cooped up, and getting to know others fosters collaboration. It gives you more opportunities to present your work and to let other people know about what you're working on. And so I would tell myself to try to take those opportunities as often as
(28:16):
I can, and try not to let what I'm doing impede me from learning about what others are doing, and to try to talk about it with other people.
Patrick Sullivan (28:25):
What an insightful message. One of my favorite axioms is that the best work is done across disciplines that seem very disparate; the interesting stuff is always at the intersection. So thanks for sharing. These questions are always a little bit vulnerable and feel a little bit on the spot, but I appreciate you sharing that. It's been so nice talking to you.
(28:48):
Thank you for your focus on this particular area, and especially for focusing on the health of a really vulnerable population, and for sharing your methods. It was great to have you on the podcast, and thanks again for bringing this work to Annals of Epidemiology.

Prasad Patil:
It's my pleasure. Thanks so much for having me.
Patrick Sullivan:
I'm your host, Patrick Sullivan.
(29:09):
Thanks for tuning in to this episode, and see you next time on EPITalk, brought to you by Annals of Epidemiology, the official journal of the American College of Epidemiology. For a transcript of this podcast, or to read the article
(29:39):
featured on this episode and more from the journal, you can visit us online at www.annalsofepidemiology.org.