Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:15):
Pushkin.
Speaker 2 (00:20):
Hey everybody, Nate here, jumping in before the show to say that we've been having a lot of fun answering your listener questions. So far, we've covered things like Pascal's Wager, the hot hand fallacy (or the fallacy of the hot hand fallacy, actually), and the expected value of learning new languages.
We want to keep doing this kind of thing, so send us all your questions about risk, decision making, game theory, poker,
(00:43):
you name it. Reach out to us on social media, or email us at Risky Business at pushkin dot fm. Even if you're not a premium subscriber, this is a great way to support the show, and you can keep sharing it free of charge. We look forward to hearing from you.
Speaker 1 (01:03):
Welcome back to Risky Business, a show about making better decisions.
I'm Maria Konnikova.
Speaker 2 (01:08):
And I'm Nate Silver.
Speaker 1 (01:09):
Today, the show is going to be a little bit doomtastic.
Speaker 2 (01:12):
Yeah, I mean, I don't know if it's worse than thinking about the global economy going into a recession because of dumbfuck tariff policies. This is all about how we're all going to die in seven years instead. No, I'm just kidding. This is a very, very intelligent and well-written and thoughtful report called AI twenty twenty seven that we're going to spend the whole show on, because I
(01:34):
think it's such an interesting thing to talk about, but one that, you know, includes some dystopian possibilities.
Speaker 1 (01:42):
I would say it does indeed, so let's get into
it and hope that you guys are all still here
to listen to us in seven years.
Speaker 2 (01:52):
The contrast is interesting between all the chaos we're seeing with tariff policy, in terms of starting a trade war with China, and then other types of chaos. It's interesting to look at this. I mean, I wouldn't call it a more optimistic future exactly, but it's on a different trajectory: a future
(02:14):
that's going to change very fast, according to these authors, with profound implications for, you know, everything, the human species. These researchers and authors are saying that everything is going to change profoundly.
Speaker 1 (02:30):
And.
Speaker 2 (02:32):
Even though there is some hedging here, this is kind of their base case scenario, and, you know, base case number one and base case number two differ. There's kind of a choose-your-own-adventure at some point in this report. But they're both very different from the status quo, right, and the notion you take from them is that
(02:56):
everything becomes different if AIs become substantially more intelligent than human beings. People can debate, and we will debate on this program, what that means. But yeah, do you want to contextualize this more? Do you want to tell people who the authors of this report are?
Speaker 1 (03:04):
Absolutely, absolutely. So the report is authored officially by five people, and I think unofficially there's also a sixth. We've got Eli Lifland, who's a superforecaster; he was ranked first on RAND's forecasting initiative, so he is someone who is very good at looking at the future and
(03:27):
trying to predict what's going to happen. You have Jonas Vollmer, who's a VC at Macroscopic Ventures. Thomas Larsen is a former executive director of the Center for AI Policy, so that is a center that advises both sides of the aisle on, you know, how AI is going to go. And Romeo Dean, who is part of Harvard's
(03:51):
AI Safety Student Team, so someone who is still a student, still learning, but kind of the next generation of people looking at AI. And finally, we have Daniel Kokotajlo, who had written a report back in twenty twenty one, when he was an AI researcher, and he looked at predictions
(04:14):
for AI through twenty twenty six, and it turns out that his predictions were pretty spot on, and so OpenAI actually hired Daniel as a result of this report.
Speaker 2 (04:24):
Although he has now left, yes.
Speaker 1 (04:26):
And then he left, exactly.
Speaker 2 (04:28):
And importantly, there's also Scott Alexander.
Speaker 1 (04:32):
Exactly. He's the person who is kind of in the background, and you guys might know him as the author behind Astral Codex Ten.
Speaker 2 (04:41):
And I know Scott. He's one of the kind of fathers of what you might call rationalism. I think Scott, when I interviewed him for my book, was happy enough with that term, and accused me of, or co-opted me into, also being a rationalist. These people are somewhat adjacent to the effective altruists, but not quite. They're just
(05:02):
trying to apply a sort of thoughtful, rigorous, quantitative lens to big-picture problems, including existential risk. Most people in this community believe that AI is both an existential risk and also kind of an existential opportunity, right, that it could transform things. You talk to Sam Altman and he'll say we're going to cure cancer and eliminate
(05:23):
poverty and whatever else, right. And Scott's also an excellent writer. And so let me disclose something which is slightly important here. I actually was approached by some of the authors of this report a couple of months ago, I guess it was in February-ish, just to give feedback and chat with them. So I'm working off the draft version, right,
(05:44):
which I do not believe they changed very much. So my notes pertain to an earlier draft. I did not have time this morning to go back and re-
Speaker 1 (05:50):
Read it, exactly. So I was not in the inside loop; I did not get an earlier draft. And I've read this draft, and basically, to sum it up big picture, it outlines two scenarios, right, two major scenarios for how AI might change the world as
(06:10):
soon as twenty thirty. Now, important note: that date is kind of hedged. It might be sooner, it might be later; there's a confidence interval there. But the two different scenarios: in one, basically, by twenty thirty, humanity disappears, taken over by AI.
(06:33):
The positive scenario is, by twenty thirty, basically we get AIs that are aligned with our interests, and we get kind of this AI utopia, where AIs actually help make life much better for everyone and make the standard of living much higher. But the crucial turning point is before twenty thirty, and the crucial question at
(06:53):
the center of this is: will we be able to design AIs that are truly aligned with human interests, rather than ones that just appear to be aligned and are lying to us while actually following their own agenda? And how we handle that is kind of the linchpin. And it's
(07:14):
actually interesting, Nate, that you started out with China, because a lot of the policy choices, and a lot of what they see as the decision points that will affect the future of humanity, actually hinge on the US-China dynamic: how they compete with each other, and how that sometimes might clash with safety concerns, because
(07:38):
no one wants to be left behind. Can we manage that effectively, and can that transition work in our favor as opposed to against us? I think that is one of the big questions here, and so it's funny that we're seeing all of this trade war right now just as this report is coming out.
Speaker 2 (07:55):
Yeah, look, I think this exercise is partly just a forecasting exercise, right. Obviously there's kind of a fork at the bottom, where we either have an AI slowdown or we keep pressing fully on the accelerator. Like, in some ways the scenarios are not that different, right: either one assumes remarkable
(08:20):
rates of technological growth that I think even AI believers (I'm never quite sure who to call an optimist or a pessimist here) might think are a little bit aggressive. Right. But what they want to do is have a specific, fleshed-out scenario for how the world would look, kind of a modal scenario. And I think they'd say that, like,
(08:44):
we're not totally sure about either of these necessarily, right. And I don't think they'd be so pedantic as to say, if you do X, Y, and Z, then we'll save the world and have utopia, and if you don't, then we'll all die. I think they'd probably say it's unclear and there's risk either way; we wanted to go through the scenario, fleshing out what the world might look like. I do
(09:05):
think one thing that's important is that whatever decisions are made now could get locked in, right: you pass certain points of no return and it becomes very hard to turn back, like in an arms race. This is, you know, what we found during the Cold War, for example. I mean, one of the big things I look at is: do
(09:25):
we force the AI to be transparent in its thinking with humans? Right? Like, now there's been a movement toward the AI actually explicating its thinking more. I'll ask a query (OpenAI does this, the Chinese models do this too, right), and it'll say, I am thinking about X, Y, and Z, and I'm looking up P, D, and Q, and now I'm reconsidering this. It actually has this chain-of-thought process, right,
(09:48):
which is explicated in English. You know, one concern is: what if the AI just kind of communicates to another AI in these implicit vectors that it's inferring from all the text it has? That's kind of unintelligible to human beings, right, and maybe it's kind of quote-unquote thinking that way in the first place, and then does us the favor of translating back. It goes from kind
(10:10):
of this big bag of numbers, as one AI researcher called it, right, and then it translates back into English or whatever language you want. What if, in the end, it just cuts out that last step? Then we can't check what the AI is doing, and it can behave deceptively more easily. So that part seems to be important. I want to
(10:34):
hear your first impressions before I kind of poison the well too much.
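(A toy sketch of the monitoring concern above. Everything here is invented for illustration and nothing calls a real model; the point is just that a monitor can only audit a legible, translated trace, and if the translation step is cut out, there is nothing left to check.)

```python
# Hypothetical illustration: a model that "reasons" in an opaque vector
# (the big bag of numbers) and optionally translates that reasoning into
# English. A monitor can only audit the translated trace.
import random

def latent_reasoning(query: str) -> list:
    """Stand-in for internal reasoning: an opaque vector of numbers."""
    rng = random.Random(query)  # deterministic per query, for the demo
    return [rng.uniform(-1, 1) for _ in range(8)]

def translate_to_english(latent: list) -> str:
    """The 'favor' of rendering latent reasoning as a legible trace."""
    return f"I weighed {len(latent)} factors; the strongest was {max(latent):+.2f}."

def monitor(trace):
    """A human (or weaker AI) monitor can only inspect a legible trace."""
    if trace is None:
        return "UNAUDITABLE: no chain of thought to inspect."
    return f"auditable: {trace}"

latent = latent_reasoning("Should I disclose my true objective?")
print(monitor(translate_to_english(latent)))  # transparent mode
print(monitor(None))                          # translation step cut out
```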
Speaker 1 (10:39):
Well, my first impression is that the alignment problem is a very real one, and an incredibly important one to solve. And what I got from this is that the problem I've had with these initial LLMs actually contains the kernel of what they're describing here. Right? So you and I have talked about this on the show in the past, and I've said, well, my problem is
(11:01):
that when I'm a domain expert, right, I start seeing some inaccuracies, and I start seeing places where it either just didn't do well, or made shit up, or whatever it is. Now, I think it's very clear that those problems are going to go away, right, that that is going to get much, much better. However, the kernel of it is: it's showing me something, but I have no way of verifying
(11:24):
if that's what's going on, what it's reading, how it's... I don't want to say "thinking" about it, even though in the report they do use "thinking," but...
Speaker 3 (11:34):
It normalizes it too much, I think, is the concern.
Speaker 2 (11:38):
I think it's.
Speaker 1 (11:39):
Okay, we'll stick to that language. Yeah, okay. So, how it's thinking about it: those little problems, the little glitches and the things that it might be doing, where it starts actually glitching on purpose, are not going to be visible to the human eye. And so one of the main things that they say
(11:59):
here is that AI internal R&D gets rapidly faster, meaning basically AIs researching AI, right. And so internally they start developing new models, and as they kind of surpass human ability to monitor them, it becomes progressively more difficult to figure out: okay, is the
(12:21):
AI actually doing what I want it to do? Is the output that it's giving me its actual thought process? And is it accurate, or is it trying to deceive me, inserting certain things on purpose because it has different goals, right, because it is actually secretly misaligned, but it's very good at persuading me that it's aligned? Because one of the things that
(12:43):
actually came out of this report, and I was like, huh, this is interesting: if we get this remarkable improvement in AI, it will also get remarkably better at persuading us, right, as part of that improvement. And I had never even thought about that. But one of the things that
(13:05):
I do buy is that it's going to be very difficult for us to monitor it and to figure out: is it truly aligned with human wants, with human desires, with human goals? And the experts who are capable of doing that, I think, are actually going to dwindle as AI starts proliferating in society. And so to me, that is something that is actually quite worrisome, and that
(13:27):
is something that we really need to be paying attention to.
Speaker 2 (13:29):
Now.
Speaker 1 (13:30):
Just to fast forward a little bit: in their doomsday scenario, in twenty thirty one, AI takes over. It basically, suddenly, releases some chemical agents, right, and humanity dies, and the rest of the stragglers are taken care of by drones, et cetera.
Speaker 2 (13:46):
I don't even like it. It's not even a quick and painless death, I will say. Let's hope...
Speaker 1 (13:52):
We don't know what the chemical agents are. It might not be quick and painless; some chemical agents actually cause a very painful death. So let's hope it's quick and painless. But if they're actually capable of deception at that high level,
(14:15):
then you technically don't even need the chemical agents. If we're trusting medicine and all sorts of things to the AIs, it's pretty easy for one to manipulate something, to insert something into code, et cetera, that will fuck up humanity in a way that we can't actually figure out in the moment. Right?
Like, the way I think of it, and this is not from the paper, this is just the way my mind processed it: think about DNA, right? You have these remarkably complex, huge strands of data, and as we've found out, though it's taken forever, one tiny mutation can actually be fatal, right,
(14:57):
but you can't spot that mutation. Sometimes that mutation isn't fatal immediately, but will only manifest at a certain point in time. That's the way my mind tried to conceptualize what this actually means. And so I think that, you know, that would be easy for a deceptive AI to do. And to me,
(15:17):
that's kind of the big takeaway from this report: we need to make sure that we are building AIs that will not deceive us, right, that explain their capabilities in an honest way, and that honesty and trust are actually prioritized over other things. Even though it might slow down research, it might slow down other things,
(15:39):
that alignment step is absolutely crucial at the beginning, because otherwise, humans are human, right? They're easily manipulated. And we often trust that computers are quote-unquote rational because they're computers, but they're not. They have their own inputs, they have their own weights, they have their own values, and that could just lead us down
(16:02):
a dark path.
Speaker 2 (16:02):
Yeah, so let me follow up with this pushback, I guess, right. First of all, I don't know that humans are so easily persuaded. This is my big critique of all the misinformation people who say, well, misinformation is the biggest problem in society. It's like: people are actually pretty stubborn, and, this is going to sound pretentious, they're
(16:25):
kind of Bayesian in how they formulate their beliefs, right. They have some notion of reality. They're looking at the credibility of the person who is telling them these things. If it's an unpersuasive source, it might make them less likely to believe it. They're balancing it with other information, with their so-called lived experience.
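(For what it's worth, a minimal sketch of that "kind of Bayesian" updating, with made-up numbers: the same claim moves a listener's belief less when it comes from a source known to be motivated to persuade.)

```python
# Toy Bayesian listener. The prior and likelihood ratios are invented
# for illustration; the point is only the direction of the effect.
def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after evidence with the given likelihood ratio."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.30  # prior belief in some claim
# A credible source's endorsement is strong evidence (LR = 4); a source
# known to be trying to persuade you is barely evidence at all (LR = 1.2).
print(f"credible source:  {bayes_update(prior, 4.0):.2f}")  # ~0.63
print(f"motivated source: {bayes_update(prior, 1.2):.2f}")  # ~0.34
```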
Right. You know, part of the reason I am skeptical of AIs being super persuasive is that you know it's an AI. You know it's trying to persuade you, you know what I mean. So if you go and play poker against a really chatty player, like Phil Hellmuth or Scott Seiver or someone like that, right, on some level the best play is just to totally ignore it. You know that they are trying to sweet-talk you into doing exactly what they want you to do. And
(17:09):
so the best play is to disengage, or, literally, you can randomize your moves via some notion of what the game-theoretically optimal play might be.
Or salesmen, or politicians who have reputations for being a little too smooth. Gavin Newsom is a little too fucking smooth, right? I don't find Gavin Newsom persuasive at all, right, everything
(17:31):
from the hair gel to the constantly shifting vibes. I mean, I don't really find Gavin Newsom persuasive at all, even though, I mean, I'd say, boy, Gavin Newsom is a good-looking, gravelly-throated guy. But, you know, whatever. I mean, look, the big critique I have of this project, and by the way, I think this is an amazing project, in addition to the wonderful writing. If you view it on the web, not
(17:51):
your phone, there are these very cool little infographics that update everything from the market value of... they don't call it OpenAI, they call it OpenBrain, I guess, is what they settled on as a substitute.
Speaker 1 (18:01):
Yeah, they call everything something else, just to make sure that they're not stepping on any toes. So they have OpenBrain, and they have DeepCent from China.
Speaker 2 (18:12):
I wonder which one that could be. I wonder. But it's beautifully presented and written, and I appreciate that they're going out on a limb here, you know. It's been fairly well received; they've gotten some pushback, both from inside and, I think, outside the AI safety community. Right, but they're putting their
(18:32):
necks on the line here. Look, if things look pretty normal in twenty thirty two or whatever, right, then they will look dumb for having published this.
Speaker 1 (18:43):
Well, and they actually acknowledge that, right: there is some scenario where you end up looking stupid because everything goes well.
Speaker 2 (18:50):
But that's okay.
Speaker 1 (18:51):
Now, can I push back on the persuasion thing a little bit, just on two things? So, first of all, the poker example is not actually a particularly applicable one here, because you know that you're playing poker, and you know that someone is trying to get information and deceive you. The tricky thing, and this is from when I spent time with con artists: the best con artists aren't
(19:13):
Gavin Newsom-like. They're not car salesmen. You have no idea they're trying to persuade you to do something. They are just nice, affable people who are incredibly charismatic. And even in the poker community, by the way, some of the biggest grifters, the ones where it comes out later on that they were stealing money and doing all of these things, are charming, right? They're not sleazy-looking,
(19:35):
like, they have no signs of "oh, I'm a salesman, I'm trying to sell you something." The people who are actually good at persuasion, you do not realize you are being persuaded by. And I think people are incredibly easy to subtly lead in a certain direction if you know how to do it, and I think AIs could do that, and they might persuade you when
(19:56):
you don't even think they're trying to persuade you. You might just ask, can you please summarize this research report, and the way that it frames it, the way that it summarizes it, just subtly changes the way that you think of the issue. We see that in psych studies all the time, by the way, where you have the same articles presented in slightly different orders, slightly different ways, and people with the same political beliefs, you know, the same
(20:19):
starting point, come away with different impressions of what the right course of action is, or what this is actually trying to tell you, because the way the information is presented actually influences how you think about it. It's very, very easy to do subtle manipulations like that. And if we're relying on AI on a large scale
(20:39):
for a lot of our lives, I think that if it has a quote-unquote master plan, the way that they present it in this report, then persuasion in that sense is actually going to be...
Speaker 2 (20:49):
You'll know you're being manipulated, right, that's the issue.
Speaker 1 (20:52):
No, you don't know. That's the thing.
Speaker 2 (20:53):
People will know they're being manipulated, because it's AI, and that... but I don't know.
Speaker 1 (20:58):
Honestly, Nate, I applaud your belief in humans' ability to adjust to this, but I don't know that they will, because I've just seen enough people who are incredibly intelligent fall for cons and then be very unpersuadable that they have been conned, right, instead doubling down and saying, no, I have not. So humans are stubborn, but
(21:20):
they're also stubborn in saying "I have not been deceived, I have not been manipulated" when in fact they have, to protect their ego and to protect their view of themselves as people who are not capable of being manipulated or deceived. And I think that that is incredibly powerful. And I think that that's going to push against your optimism. I hope you're right, but from what I know, I don't think you are.
Speaker 2 (21:41):
I'm not quite sure I'd call it optimism. I guess maybe we do just have different views of human nature. But there's not yet a substantial market for AI-driven art or writing, and I'm sure there will be one eventually, right. But people understand that context matters, right: you could have AI create a rip-off of the Mona Lisa, but you can also buy a rip-off of the Mona Lisa on Canal Street for five bucks, right? And
(22:03):
so it's the intentionality of the act and the context of the speaker. Now I sound, like, super woke, I guess, right, "where you're coming from." I think that actually is how humans communicate. Art that might be pointless drivel coming from somebody can be something different coming from a Jackson Pollock or whatever, you know.
Speaker 1 (22:25):
Absolutely, I think that that's a really important point. By
the way, I think it's a different point, but I
think that that is a very important point. I think
context does matter.
Speaker 2 (22:36):
We'll be right back after this message. I was buttering up this report before. My big critique of it is: where are the human beings in this? Or, put another way,
(22:59):
kind of, where is the politics? Right? They're trying not to use any remotely controversial real names, right, so, OpenBrain, for example. So where is President Trump? Let me do a quick search to make sure the name Trump does not appear... nope, no Trump.
Speaker 1 (23:18):
So they do, actually. I don't know if this existed before; maybe they took your criticism on this. But they do have the vice president and the president; they do put politicians in this version of the report. They don't have names, but they say the vice president, in one of these scenarios, you know, handily wins the election of twenty twenty eight. We have one vice president.
Speaker 2 (23:38):
They have the general secretary too, I think. I'm not sure if they...
Speaker 3 (23:40):
I mean, the general secretary, the general secretary resembles Xi, yep, and the vice president kind of resembles JD Vance, right? I don't think the president resembles Trump at all, right, it's kind of the same.
Speaker 1 (23:55):
No, they didn't make him a character. Yeah, they tried to sidestep that.
Speaker 2 (24:00):
All happening in the next four years, then you know,
presidential politics matter quite a bit. I mean, I know,
I is such a fucking you know, I was jogging
earlier on the East Side and I was listening to
the S Reclin interview with Thomas Friedman. It's such a
fucking yuppie fucking thing. Right, It's okay, It's okay.
Speaker 1 (24:22):
You're allowed to be a yuppie.
Speaker 2 (24:25):
I'm not a huge fan of his necessarily, but, you know, he's well versed in geopolitics and China issues. And yeah, he'd just been back from China, and, like, yeah, China's kind of winning, you know what I mean. And I'm not sure how Trump's hawkishness on China, this kind of imbecilically executed hawkishness on China,
(24:46):
figures into this, right. If we're reducing US-China trade, that probably does produce an AI slowdown, maybe more for us if they're not exporting their rare earth materials and so forth. We're making it hard for them to get Nvidia chips, but they probably have lots of workarounds and things like that. Maybe
(25:08):
Trump's tariffs are good; I would like to ask the authors of this report, because it would mean we're going to have slower AI progress, and I'm not joking, right. But it also increases the hostility between the US and China in the long run. I mean, even if we rescinded all the tariffs tomorrow, I think we'd still have injured US standing in the world permanently, or let's not say permanently, let's say at least for a decade or so. And so I don't know how that figures in.
And I'm also not sure, kind of, quote, what the rational response might be. But one thing, let me make sure that they kept this in their report... right, so they actually have their implied approval rating for how people feel about OpenBrain, which
(25:56):
is that people are not very keen on AI. I think this actually is some feedback that they took into account, right: they originally had it slightly less negative, but they have it being persistently negative and then getting more negative over time. It was a little softer in the previous version that I saw, so they did change that one thing at some stage. But the fact is that
(26:19):
AI scares people. It scares people for both good and bad reasons, but I think mostly for valid reasons, right, and the fear is fairly bipartisan. The biggest AI accelerationists are now these kind of Republican techno-optimists, who are not looking particularly wise given how it's going with the first ninety days, or wherever we are, of the Trump administration, and the likelihood of
(26:41):
a substantial political backlash, right, which could lead to dumb types of regulations.
And part of it, too, is: okay, AI, they're saying, can do not just computer desk jobs but all types of things, right. And humans kind
(27:03):
of play this role initially as supervisors, and then, literally within a couple of years, people start to say, you know what, am I really adding much value here? You kind of have these legacy jobs, and there's a lot of money. I think most scenarios imagine very fast economic growth, although maybe very lumpy, right, for some
(27:26):
parts of the world and not others. But we're kind of just sitting around with a lot of idle time. It might be good for live poker, Maria, right? All of a sudden, all these smart people, their OpenEarth, their OpenBrain, excuse me, stock is now worth billions of dollars, right, and they have nothing to do because the AI is doing all their work. They have a lot of fucking time to play some fucking Texas hold
(27:47):
'em, right?
Speaker 1 (27:51):
Is, that is one way of thinking about it. Let's
let's go back to your your earlier point, which I
actually think is an important one because obviously they were
trying to do as all you know, super forecasting tries
to do is you try to create a rape that
will work in multiple scenarios. Right. You can't tie it
(28:13):
too much to like the present moment, otherwise your forecasts
are going to be quite biased. However, I do think
that what you raise kind of our current situation with China,
et cetera, has very real implications. Given that this is
kind of the central dynamic of this report that their
predictions are based on, I think that it's incredibly valid
(28:33):
to actually speculate, you know, how will if at all,
this effect the timeline of the predictions, the possible the
likelihood of the two scenarios. And I will also say
that one of the things in the report is that
all of these negotiations on like will we slow down?
Speaker 2 (28:48):
Will we not?
Speaker 1 (28:49):
How aligned is it? This all takes place in secret, right? Like, the humans don't know that it's going on. We don't know what's happening behind the scenes, and we don't know what the decision makers are thinking. And so, for all we know, you know, President Trump is meeting with Sam Altman and trying to coordinate some of these things. And it's
(29:12):
funny, because we were kind of pushing for transparency in one way, but there are a lot of things here that are very much not transparent.
Speaker 2 (29:18):
Yeah, it's kind of the deep state, right. But also, a lot of the negotiations are now AI versus AI, right. And look, I'm not sure that the AIs will have that trust, both with the external actor and internally; I'm skeptical of that, right. If that does happen, they kind of think this might be good, because the AIs will probably
(29:39):
behave in a literally game-theory-optimal way, right, and understand these things and make, I guess, fewer mistakes than humans might, if...
Speaker 1 (29:51):
They're properly aligned. Like that's a crucial thing because in
the doomsday scenario, AI negotiates with AI, but they conspire
to destroy humanity. Right, So there are two scenarios. One
it's actually properly aligned, so AI negotiates with AI, game
theory works out and we end up you know, democracy
and wonderful things. But in the other one, where they're misaligned,
(30:14):
AI negotiates with AI to create a new AI basically
and destroy humanity. So it can go one way or
the other depending on that alignment step. First of all,
I mean the Utilian.
Speaker 2 (30:26):
It didn't seem that utopian to me, right? I'm not sure that it did.
Speaker 1 (30:29):
It actually seemed quite dystopian to me. Like, it seemed incredibly dystopian.
Speaker 2 (30:33):
It's kind of like, you know, look, at least we're still alive, we'll have cures, we'll probably live longer. And again, lots and lots and lots of poker. The AI will be writing Silver Bulletin and hosting our podcast, right? Let me back up a little bit, because I think we maybe take for granted that some of these premises are kind of controversial, right. So they
(30:55):
have a break point, I think in twenty twenty six, where they say AI R&D rates increase exponentially beyond twenty twenty six, right. So that's kind of the break point: twenty twenty seven is this inflection point. I think I'm using that term correctly in this context.
Speaker 3 (31:09):
You know.
Speaker 2 (31:09):
So I'm reading this report up to twenty twenty six, and, like, thumbs up, yeah: this seems very smart and detailed about, you know, how the economy is reacting and how politics is reacting and the race dynamic with China. Maybe there needs to be a little bit more Trump in there; I understand why politically they didn't want to get into that mess, right. But there's
(31:31):
kind of three different things here, right. One is the notion of what's sometimes called AGI, or artificial general intelligence. And if you ask one hundred different researchers, you get one hundred different definitions of what AGI is. But, you know, I think it is basically being able to do a large majority of things that a human being could do competently, assuming we're limiting it to kind of
(31:54):
desk-job-type tasks, right. Anything that can be done remotely, or through remote work, is sometimes the definition that is used, because clearly AIs are inferior to humans at, like, sorting and folding laundry and things like that, right; that requires a certain type of intelligence. If you use the desk-job definition, then AI is
(32:15):
already pretty close to AGI, right. I use large language models all the freaking time, and they're not perfect for everything. But I feel like, in terms of being able to do the large majority of desk work at levels ranging from competent intern to super genius, on average it's probably pretty close to being generally intelligent by that definition, right.
Speaker 1 (32:39):
If you're the one using it. I just want to once again point that out, because one of the things that they say in the report is that as what we're asking AI to do gets more and more involved, the human process to evaluate whether it's accurate and whether it's making mistakes will get longer and longer. And I think they say that, basically, for every day of AI work, it's
(33:02):
something like a two-to-one ratio at the beginning for how long it will take humans to verify the output. Right? So you think you save time by having AIs do this, but if you want it to actually develop correctly, then you need a team, and it takes them twice as long to verify that what the AI did is actually true and actually valid and actually aligned, et cetera, et cetera. Now, you're not asking it to
(33:22):
do things that require that amount of time, but there do need to be little caveats to how we think about their usefulness, and to how well you are able to evaluate the output in other scenarios.
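(A back-of-the-envelope check on what a ratio like that implies. The numbers are illustrative, not from the report: even a large raw speedup shrinks a lot once human verification time is counted.)

```python
# Toy arithmetic for the verification overhead described above: assume
# roughly two human-days of review per AI-day of output (illustrative).
def net_speedup(human_days: float, ai_days: float, verify_ratio: float = 2.0) -> float:
    """Unassisted human time divided by AI time plus human review time."""
    return human_days / (ai_days + verify_ratio * ai_days)

# A task a human would do in 10 days, drafted by the AI in 1 day:
print(f"{net_speedup(10, 1):.1f}x")  # ~3.3x, not 10x, once review is counted
# A 3-day task the AI still takes a day on: verification eats the whole gain.
print(f"{net_speedup(3, 1):.1f}x")   # 1.0x
```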
Speaker 2 (33:34):
When I use AI, the things that it's best with are things that save me time. Right: I fed it a bunch of different names for different college basketball teams that we use in our NCAA model. I'm like, take these seven different naming conventions that are all different and create a cross-reference table of them, which is kind of a hard task; you need to have a little
(33:55):
context about basketball. And it did that very well. That's something I could have done myself. It might have taken an hour or two, but instead it could do it in a few minutes, and it gets faster: it's like, oh, I've learned this from you before, Nate, so now I can be faster at doing this type of task in the future.
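(A rough sketch of what that kind of cross-referencing task looks like in code. The team names, sources, and aliases are made up; an LLM presumably handles the fuzzy matching itself rather than needing a hand-built alias table.)

```python
# Toy cross-reference table: map several naming conventions for the same
# teams onto one canonical key.
def normalize(name: str) -> str:
    """Crude normalization: lowercase, strip punctuation and filler words."""
    words = name.lower().replace(".", "").replace("'", "").split()
    return " ".join(w for w in words if w not in {"university", "of", "the"})

sources = {
    "ratings_site": ["North Carolina", "Saint Mary's"],
    "scores_feed":  ["UNC", "St. Mary's"],
    "official":     ["University of North Carolina", "Saint Marys College"],
}

# Aliases that simple normalization can't catch: the judgment calls that
# make this "kind of a hard task."
aliases = {"unc": "north carolina", "st marys": "saint marys",
           "saint marys college": "saint marys"}

crossref = {}
for source, names in sources.items():
    for name in names:
        key = normalize(name)
        key = aliases.get(key, key)
        crossref.setdefault(key, {})[source] = name

for canonical, variants in sorted(crossref.items()):
    print(canonical, "->", variants)
```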
I was at the poker tournament down in Florida last week, and, uh,
(34:19):
you know, I asked Open Research... excuse me, God, did I say that?
Speaker 1 (34:24):
See, exactly, right? It's Deep Research. Deep Research, Deep Research.
Speaker 2 (34:31):
I asked Deep Research.
Speaker 1 (34:32):
You're reading too many of these twenty twenty seven reports.
Speaker 2 (34:34):
Superbrain. Anyway, I had it pull a bunch of stock market data for me. And then I'm playing a poker hand, and I make a really thin, sexy value bet with, like, fourth pair. No one knows what that means, right: I bet a very weak hand. I thought the other guy would call with an even weaker hand, and I was right. And I feel like I'm such a fucking stud here, value-betting fourth pair, while AI does the work for me,
(34:55):
and then of course I bust out of the tournament an hour later. And meanwhile, you know, Deep Research bungles this particular task. But in general, AI has been very reliable. But the point is that there's an inflection point where I'm asking it to do things that are just a faster version of what I could do myself. I wouldn't, at the moment, ask AI, like, I want you to design a
(35:16):
new NCAA model for me with these parameters, because I wouldn't know how to test it. But anyway, I've been long-winded here. So, AGI: we're gonna get AGI, or at least we're going to get someone calling something AGI soon, right. Then artificial superintelligence, where it's doing things much better than human beings. I think this report takes
(35:38):
for granted, or, not takes for granted, it has lots of documentation about its assumptions, but it's saying: okay, this trajectory has been very robust so far. And people make all types of bullshit predictions; the fact that these guys in particular have made accurate predictions in the past is certainly worth something, I think, right. But they're like: okay, you kind of follow the scaling law, and before too
(36:01):
much longer, you know, AI starts to be more intelligent than human beings. You can debate what intelligent means if you want, but doing superhuman things, and/or doing them very fast, I think, might be different, right. An AI that can do things very fast, well, speed is certainly a component of intelligence, right. But I don't take for granted that, quote-unquote,
(36:23):
AI can reliably extrapolate beyond the data set. I just think it's not an absurd extrapolation; it may even be the base case, or close to the base case. But it's not assumable from first principles, I don't think. We've all seen lots of trend charts of things. You know, if you look at a chart
(36:44):
of Japan's GDP in the nineteen eighties, you might have said, okay, well, Japan's going to take over the world, and people bought this, and now it kind of hasn't grown for, like, forty years, basically, right. And so we've all seen lots of curves that go up, and then it's actually an S-curve, or whatever the fuck you call it, where it begins to bend the other way at some point, and we can't tell until later.
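(A toy version of that trend-chart caution, with synthetic numbers: early points on an S-shaped, logistic curve look exponential, so a naive exponential fit extrapolates wildly before the bend reveals itself.)

```python
# Fit an exponential to the early part of a logistic (S-shaped) curve and
# watch the extrapolation diverge. All values are synthetic.
import math

def logistic(t: float, ceiling: float = 100.0, rate: float = 0.5) -> float:
    """Growth that eventually saturates at `ceiling` (midpoint at t = 10)."""
    return ceiling / (1 + math.exp(-rate * (t - 10)))

# Fit y = a * exp(b * t) exactly through two early observations.
t1, t2 = 2.0, 4.0
y1, y2 = logistic(t1), logistic(t2)
b = math.log(y2 / y1) / (t2 - t1)
a = y1 / math.exp(b * t1)

for t in (6, 10, 14, 18):
    print(f"t={t:2d}  actual={logistic(t):6.1f}  exponential fit={a * math.exp(b * t):8.1f}")
```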
(37:06):
The other thing is the ability of AI to plan in and manipulate the physical world. I mean, some of the things they're talking about, like, you know, brain uploading and Dyson swarms and nanobots: I would literally wager money against those happening on the
(37:28):
time scales that they're talking about, right. If they doubled the timescale, okay, then I might start to give it some more probability. And look, I'm willing to be wrong about that. I guess we'll all be dead anyway, fifty percent likely, in this scenario.
Speaker 1 (37:39):
But, like, is this scenario fifty percent likely, do you think?
Speaker 2 (37:42):
You know, the physical world requires sensory input and lots of parts of our brain that AI is not as effective at replicating. It also requires being given, or commandeering, resources somehow. By the way, this is a little bit of a problem for the United States. I mean, we are behind China by quite a bit
(38:06):
in robotics and related things, right. So I don't know what happens if we have the brainier, smarter AIs, but they're very good at manufacturing and machinery. So, like, what if we have the brains and they have the brawn, so to speak, right, and they have a maybe more authoritarian but functional infrastructure? I don't know what
(38:29):
happens then, right. But the ability of AIs to commandeer resources to control the physical world seems, to me, far-fetched on these timelines, in part because of politics, right: I mean, the fact that it takes so long to build a new building in the US, or a new subway station or a new highway, and the fact that
(38:49):
our politics is kind of sclerotic, right. And look, I don't want to sound too pessimistic, but if you read the book I recommended last week, Fight, I mean, we basically did not have a fully competent president for the last two years, and I would argue that we don't have one for the next four years, right. So, all these kinds of things that we have to plan for: who's doing that fucking planning? Our government's
(39:12):
kind of dysfunctional. And, you know, maybe that means we just lose to China, right. Maybe that means we lose to China. At least we'll have, like, nice cars, I guess.
Speaker 1 (39:24):
We'll be back right after this. I think that your point about the growth trajectories not necessarily being reliable is
(39:44):
a very valid one. The Japan example is great. You know, Malthusian population growth is another big one, right: we thought that population would explode, and instead we're actually seeing population decline. So, you know, the world does change. The thing that I think they rely on is
(40:09):
that AIs are capable of designing this incredible technology much more quickly, so that our building process and all of that gets sped up a hundredfold from what it is right now. But it still, at least at this point, needs humans to implement it, right, and needs all of these different workers. And so, yeah, I think there are some assumptions built in here, and I hope that that timeline isn't feasible, and I do think
(40:30):
that there are things that are holding us back. All the same, I think it's interesting. One of the reasons I like this report is that it forces you to think about these things, right, and to try to game out some of these worst-case scenarios in order to prevent them, which I think is always an important thought exercise. I do want to go back to their good scenario. The bad
(40:52):
scenario is, you know, we're all wiped out by chemical warfare that the AIs release on us. The good scenario is that, you know, everyone gets a universal basic income, and AI does everything, and no one has to do anything, and we can just, you know, play poker. Yeah.
(41:13):
And, as you suggested, that seems actually like a very dystopian scenario, where people can become much easier to brainwash, control, et cetera, et cetera. It's like a dumbing down, right, where we're not challenged to produce good art, to advance in any sort of way. To me, it does not seem like a very meaningful...
Speaker 2 (41:35):
Well, the question is, one way or another... if you read AI twenty twenty seven, which I highly recommend that you read, there's also another post, by a pseudonymous poster called L Rudolf L, who wrote something called A History of the Future, twenty twenty five to twenty forty, which is very detailed but goes through
(41:57):
what this looks like at more of a human level: how society evolves, how the economy evolves, how work evolves, right. And it's very detailed, just like AI twenty twenty seven is, but it's kind of focused on the parts that AI twenty twenty seven, I think, kind of deliberately ignores, maybe you can call them mild blind spots or whatever, right.
(42:18):
But that's interesting, because it thinks about what types of jobs there are in the future. There are probably lots of lawyers, actually, right, because, you know, the law is very sluggish to change, especially in a constitutional system where there are lots of veto points, right. Probably high-end service sector: you know, you go to a restaurant, because a lot of people are rich now, right, and you're flattered by the attractive young
(42:40):
server, and things like that, these kind of highly catered and curated experiences. I guess I have some faith in humanity's ability to fight back, quote-unquote, against the two scenarios, in that it might not really like either one, you know what I mean. Like the scenario where AI is producing ten percent GDP
(43:03):
growth or whatever, right: man, it's great if you own stocks that are exposed to AI and tech companies, probably, but it's also making that money on the backs of mass job displacement. And, you know, I'm sort of confident in the long run that human beings find productive things to do, and, you know, mass unemployment has been predicted many times and never really occurred, right.
(43:24):
But it's never occurred this fast, where they think the world ends in six years, or whatever they're predicting, or we have utopia in six years. And just the ability of human society to deal with that change at these time scales... it leads to more chaos than I think they're predicting. I think it also, and I told them this too, right, I think it also leads to more constraints: you hit bottlenecks. If you have five things you have to do, right, and
(43:48):
you have the world's fastest computer, et cetera, et cetera, but there's a power outage in your neighborhood, right, that's a bottleneck. Maybe there are ways around it, if you, like, go to Home Depot and buy a generator, you know what I mean. But the point is that you're often
(44:09):
limited by the slowest link, and politics is sometimes the slowest link. But also, you know, I think the report maybe understates, and I think the AI safety community in general maybe understates, the ability of human beings to cause harm to other human beings with AI, right. That concern kind of
(44:29):
gets brushed off as, like, too pedestrian, or, like...
Speaker 1 (44:33):
Say, Pedestrian was saying, yeah, the exact word I was
thinking of. I think that's a good I mean, I
think that's a great place to end it, because yes,
we do need to be concerned about all of these
things about AI. But like that that phrase I think
is very crucial, like do not underestimate the ability of
humans to cause harm to other humans. And I think
(44:54):
that that's you know, it's not a very opt it's
not a very pleasant place to end, but I think
it's a really important place to end. And I think
that that's a very valid kind of way of reflecting
on this name.
Speaker 2 (45:05):
Or not to trust AIs too much, right? In general, I think that concern is somewhat misplaced, but if we're handing over critical systems to AI, right, it can cause problems if it's very smart and deceives us and doesn't like us very much. It can also cause problems if it has hallucinations, or bugs, in critical areas where it
(45:29):
isn't as robust and hasn't really been tested yet, areas that are outside of its domain. Or there could be espionage. Anyway, we will have plenty of time, although maybe only seven more years, actually, to explore these scenarios.
Speaker 1 (45:50):
Yes, and in seven years we'll be like, welcome back to the final episode of Risky Business, because the prediction is we're all going to be dead tomorrow. Oh, but yeah, this was an interesting exercise, and I think my p(doom) has slightly gone up as a result of reading this. But I also remain optimistic that humans can and do do good as well as harm.
Speaker 2 (46:13):
Yeah, my interest in learning Chinese has increased as a result of recent developments. I don't know about my p(doom).
Speaker 1 (46:20):
All right, yeah, let's do some language immersion. I'm with you. That's it for today. If you're a premium subscriber, we will be answering a question about whether MFAs can ever be plus-EV right after the credits. And if you're not a subscriber, it's not too late. For six ninety
(46:42):
nine a month, the price of a mid-tier beer, you get access to all these conversations and all premium content across the Pushkin network. Risky Business is hosted by me, Maria Konnikova.
Speaker 2 (46:54):
And by me, Nate Silver. The show is a co-production of Pushkin Industries and iHeartMedia. This episode was produced by Isabel Carter. Our associate producer is Sonya Gerwitz. Sally Helm is our editor, and our executive producer is Jacob Goldstein. Mixing by Sarah Bruguiere. If you like the show, please rate and review us. You know, we'd like... we'll take a four or a five, but we'd prefer the five. Rate and review us, and recommend
(47:16):
us to other people. Thank you for listening.