Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome to Tech Stuff. This is The Story. Each week
on Wednesdays, we bring you an in depth interview with
someone who has a front row seat to the most
fascinating things happening in tech today. We're joined by Reid Hoffman,
a longtime entrepreneur, venture capitalist, and author. Born in the
Bay Area and part of a Silicon Valley crowd in
(00:21):
the nineties, he's helped build or support some of the
biggest tech companies we know today.
He worked at Apple in its early days, he was part of the
so called PayPal mafia as one of its first employees,
and he co founded LinkedIn, which he later sold to
Microsoft for twenty six point two billion dollars. Nowadays, he's
turned his attention to AI. As an early investor and
former board member for OpenAI, Hoffman is an optimist
about the benefits that AI could bring to society, so
(00:49):
much so that he wrote a book about it called Superagency:
What Could Possibly Go Right with Our AI Future? And,
although it might not be a surprising position for a
tech investor to hold, it's also an outlier in some ways.
In the book, he argues that concerns and criticisms about
AI development shouldn't be dismissed, and in last year's presidential race,
(01:10):
he supported Kamala Harris, unlike many former Democrat donors in
the valley who have since aligned with Donald Trump. This
decision has had consequences for Hoffman, who is regularly castigated
on social media. We speak about that later in the conversation,
but we start with the book. Reid was a guest
several times on the Charlie Rose Show for previous books,
(01:31):
and I worked there as a producer for many years,
so I thought I'd start with that. And one of
Charlie's favorite questions for authors was to say, every great
book begins with a great question. What's the question behind
your book? Now you've somewhat preempted that by actually putting
the question on the front of your book. Can you
talk about the question and why it's an important one?
Speaker 2 (01:54):
So, what could possibly go right with our AI future?
And the short answer is we're about to have this major
AI technological revolution, and the general discourse is, oh, my god,
AI is coming. You know, the end is nigh. And actually,
in fact, you can only get to the future that
is good by steering towards it, not by trying to
(02:16):
avoid the futures you don't want. And so the goal
of the book is to give people a good argument and
grounding and tool set for understanding how to think about
what could possibly go right. And then you know, we
individually and collectively navigate there. So that's the question and challenge,
and that's part of the reason why we wrote Superagency.
Speaker 1 (02:39):
The title Superagency. You spoke with Walter Isaacson recently and you
said something I thought was very revealing which really captured
it for me, which was Tim Cook's iPhone is the
same one that the cab driver and the Uber driver
are using. So explain that and how it clarifies superagency.
Speaker 2 (02:58):
So part of superagency, and we actually compact a
lot into this term, is the increase of human agency,
but not just individual human agency, but also
society's collective superagency. And part of the question when you
begin to build these technologies, because you know, a classic
question is well, does it only benefit the most powerful?
(03:18):
Does it only benefit the elites? And actually, in fact,
when you do technology targeting superagency, e.g. hundreds of millions
of people, billions of people engaging, we all get the
same technology. And so for example, as I said earlier,
Tim Cook has the same iPhone that the Uber driver
does, and by engaging in it in mass adoption, it
(03:40):
then becomes elevating. And that doesn't mean that Tim isn't
much wealthier, much more powerful, et cetera than the Uber driver,
but the piece of technology is elevating across the broad
swath of society. And that's the same thing, of course,
we're seeing with AI, which is you know, ChatGPT
in its release, which is hundreds of millions of people
who now use it.
Speaker 1 (04:01):
So in the book, there's this framework of four different
ways of thinking about AI development. In some sense, can
you kind of explain what each of them is and
where you sit?
Speaker 2 (04:12):
Yes. So it's doomers, gloomers, zoomers, and bloomers. And just
in case anyone was thinking that, like you know, I'm
just name-calling, the doomers actually do call themselves doomers. They
even have this thing called p(doom), which is probability
of doom. The doomers are basically: AI is bad, that at
(04:36):
the very least there's a very high probability it will be destructive
to human potential, human society, maybe it just might be existentially destructive,
and that the best outcome is stop, don't make AI
until we can absolutely guarantee that every single step would
be positive. Gloomers are: yeah, we understand that AI
(05:00):
is inevitable. Namely, you know, companies are going to compete.
Industries are going to compete. Countries are going to compete.
But it's just likely to make society bad. It's likely
to you know, make massive changes to the workforce, so
like jobs will go away and people will be unemployed
in breadlines and a lot of unhappiness. It may be
destructive to democracy, it might be destructive to personal freedoms, you know,
(05:23):
et cetera, et cetera. But it's gonna happen anyway, and
so I'm just gloomy about it. Hence gloomers. Zoomers are
on the far other end, which is, no, this is
going to be great. It's the most amazing thing that
humanity has done ever. And what's great so far outstrips
everything that possibly even may be wrong, that we should
(05:46):
just hit the accelerator and go full on, and let
me tell you about all the amazing things with medicine
and education and work assistance and all the rest of
that in productivity, and you know, obviously I have a
bunch of sympathies for a bunch of the positives of
the zoomers. But I myself, of course, identify as
a bloomer, which is essentially accelerationist and future
(06:09):
oriented like a zoomer. But it's also saying, hey, just
because you can make it with technology doesn't mean it's
inevitably good. That there's different ways of introducing it and
navigating through it, namely the way that it's introduced and
brought into human experience and existence at scale, and so
engaging with the folks who have concerns, criticisms. You know,
(06:31):
watch out for this pothole, watch out for this mine,
is a good thing as you're navigating. Even as you're
navigating with some speed and acceleration, the bloomers say, hey,
let's be in dialogue. Let's try to steer around
anything that might be a landmine as we build this
great future.
Speaker 1 (06:49):
And I guess if you could wave your magic wand,
would you convert fifty percent of gloomers to your camp
or fifty percent of zoomers to your camp?
Speaker 2 (06:58):
So if I could wave the wand, I would convert
gloomers to bloomers. But that doesn't mean I'm not also
trying to convert zoomers. And you know, as best I
can parse it, call it a quasi-gloomer, quasi-doomer
set of concerns, which is, as I said, well, there is
this danger, we should just stop. I'll give you an
(07:18):
example that I think is frequently amongst the people who
think about existential risk or x-risk. And they go,
can you guarantee to me that you're not going to,
that someone isn't going to, make a killer robot, like
out of The Terminator? And you go, nope, can't guarantee that.
They go, aha, see, that's an existential risk, and so therefore,
(07:38):
looking at that existential risk, we should stop or pause
or slow down. You say, well, it sounds like a rational,
reasonable argument, except when you consider that existential risk is
not each one-off thing. It's a portfolio. It's a basket.
So it's a basket that includes nuclear war, it's a
basket that includes climate change, the basket includes pandemics, a
(07:58):
basket that includes a whole bunch of things. So you say, hey,
you're building AI. That becomes a new existential risk. And you go, yep,
that becomes a new existential risk. But if you're doing AI,
are you also mitigating, you know, pandemic risk? Right, that's
the only way I can think of to combat scale pandemics,
both natural and man-made, is AI. Are you countering
(08:22):
asteroid risk? Because the ability to see which asteroids might
be the ones that might be coming for us early
enough to do something about it, and to be able
to kind of navigate doing something about it well, AI
is actually likely to be pretty central to that equation.
And so my point of view is our existential risk
overall goes down, and so that earlier argument doesn't actually,
(08:42):
in fact really work.
Speaker 1 (08:43):
That's a truly different framework for thinking about a portfolio
of risks. That's interesting.
Speaker 2 (08:48):
Yeah, And by the way, that doesn't mean that you
don't try to minimize the killer robot risk and you
don't try to maximize the benefit in the other cases.
It doesn't mean you have no dialogue, don't talk about risk. No, no, no.
Talking about risk is good. But part of what I'm
trying to do with superagency and things like this is
to say, talk about the risk in a way that
you said that you're being smart about it, and you're
(09:08):
trying to kind of be smart in iterative stages about
the portfolio of them, and also that you're not trying
to overly dramatize your own genius like anyone who says
I know exactly what AI is going to be like
in five years from now, either for positive or for negative,
they're both nuts, right. I mean, it's like we're discovering
this as we're going. Yeah.
Speaker 1 (09:29):
I think one of the things that comes across in
your book is how different platform technologies have allowed humanity
to move from a subsistence way of living to a way
of living that allows for leisure and self reflection and
all of those types of things. And you wrote a
piece about this for The New York Times called AI
(09:51):
Will Empower Humanity. You wrote, AI could turn data into
the material for a de facto second self, one that
could endow even the most sketchy brain among us
with a capacity for revisiting the past with a level
of detail even the novelist Marcel Proust might envy. Can
you explain that vision?
Speaker 2 (10:10):
Look so at a macro view. I think within a
small number of years, all of us will have AI
agents helping us with things, and it'll be helping us
with like work things and learning and other kinds of things,
but also helping us with like the kind of the
daily activity of our lives, which include, for example, remembering
(10:33):
things like so frequently. Of course people say, oh, wait,
you're remembering this about me, and maybe you can,
you can manipulate me into buying something or selling me
an ad, all bad. I am losing agency. I've lost
privacy of my information. It's like, well, actually, in fact,
think about what a positive feature this is, because, by
the way, if it remembers something about me, it can
(10:54):
help me remember it. It can help me. You know,
like I'm navigating and I'm having this conversation with Oz
and it says, hey, you guys met before here and
you talked about this. Oh, that's really helpful, makes our
connection better.
Speaker 1 (11:05):
That's why I started with the allusion to a shared
past at Charlie Rose. A case in point, yes.
Speaker 2 (11:09):
Exactly, and you know part of that actually, in fact,
when the memory is for me and of help
to me, in service to me, that's extremely positive.
Speaker 1 (11:22):
The allusion to Proust was kind of irresistible to me.
Speaker 2 (11:26):
Remembrance of Things Past.
Speaker 1 (11:28):
Remembrance of things past, and how do the events of a
life coalesce into a self, right? And so of course
I got curious about you and you were recently on
the podcast with Steven Bartlett, The Diary of a CEO, and you said,
you know, you have to understand this journey started with
(11:49):
being born in Stanford Hospital. What role did that play
in Reid as a bloomer?
Speaker 2 (11:55):
Well, definitely being a Californian and a child of the
Bay Area definitely helps me be open minded and curious
to understand that technology is part of what makes us human.
A little bit like, again, the kind of the Marcel
Proust gesture is, you know, I've kind of, I argue
(12:15):
that we're actually better described as Homo techne than Homo sapiens
because we evolve through technology, not just obviously through
remote podcasts like this, or glasses or clothing or cars
or smartphones, but it's like it's who we are.
We internalize this technology. It makes us. It's part of
how we evolve: we evolve very, very slowly genetically, but actually
(12:36):
we evolve very fast culturally, technologically, and that's part of
who we become and all of that. I think by
being you know, a child of the San Francisco Bay Area,
you have the openness and laissez faire to say, hey,
you can invent something and change the world. And you know,
it's not just Silicon Valley but also Hollywood, and you know,
(12:58):
it's kind of that direction of what's next is much
more interesting than what was. And it doesn't mean what
was is irrelevant and what was doesn't inform what's next,
and you can't learn things from what was, but it's
live into the future and live into that change, I
think is really fundamental. And I think that helped, you know,
(13:20):
shape how I think about things and kind of the
reason I'm so future oriented.
Speaker 1 (13:26):
Yeah, And I think that's what makes you know your
views on AI particularly interesting because in the nineties, at
the birth of the consumer Internet, you saw around the corner, right,
I mean, you understood that the Internet would be social networking.
You founded a company called SocialNet. You were one of
the first investors in Facebook, and you founded LinkedIn. What
(13:49):
gave you that fundamental insight about what the Internet would become.
Speaker 2 (13:53):
So I think, you know, one of the things I
think is fundamental to being an investor or a founder
in especially consumer internet products is having
a theory of human nature, and it's about the individual
human nature and then kind of collective and you know,
part of that kind of thesis is where you know,
we're tribal creatures, we're social animals in this way,
and you know that is part of the kind of
(14:16):
the fundamental, you know, kind of theory of human beings
I have. Then in rolls the Internet and you think, well, okay,
the Internet does all kinds of useful things like bringing
up information, the ability to shop for things. But I was like, well,
actually that changes our social space, changes our
social space in terms of how we think of ourselves,
who we think we're connected to, how we communicate, which
(14:38):
groups we're part of, you know, kind of information flow
and recommendations, and that all gets to kind of the
web two oh side. And so that was my
recognition of, you know, when the Internet was, you know,
first being talked about, it was like, oh, it's an
information ecosystem, you know, HTML. It's like I
ask for the document and I get the document, and
(14:58):
I was like, that's cool. But actually, in fact, we
are social animals, you know, we are citizens of the polis.
So how does being people get into this?
Speaker 1 (15:08):
The insight you had about how the Internet sort of
recreates social relationships, that led to LinkedIn, is very,
very clear to me. What is the equivalent insight about
the age of AI?
Speaker 2 (15:27):
So I think it's this notion that AI gives us
superpowers, and that those superpowers will change, like
what our previous superpowers were. It's just like,
for example, you know, before you had the book, memory
was a critical superpower. It was kind of like the
(15:47):
no no, I can recite the Iliad to you, and
having that. And actually part of what was
described about printing presses was destroying human capabilities, like, oh
my god, there's this critical human cognitive function of really remembering,
and now that's going to be destroyed and that's going
to reduce our humanity. And you're like, well, no, actually,
in fact, it doesn't. Over time, it increases our humanity because, as
(16:10):
opposed to only emphasizing memory over doing anything else, you
can also bring in many other forms of intelligence. And
the same parallel I think goes to AI. You say, well,
you know, you listen to people that oh, it's going
to destroy our ability to think critically because I'm just
gonna ask, you know, GPT-4 or Pi to reason
for me, you know, Gemini to give me the answer.
(16:31):
Copilot to, you know, do the coding for me,
and I don't have to think anymore. I'll just you know,
eat cookies and sit on the sofa.
And you're like, well, obviously, with any technology people can
be lazy and do stuff. But on the other hand,
what it means is you now have new attributes. For example,
most of us get a little sloppier on our spelling
because we rely on the spell checker for solving it.
(16:53):
But that means we're thinking about other things, we're
thinking about what the point is, how to architect it,
and so I think it's giving us superpowers that'll actually
help us become even more human.
Speaker 1 (17:17):
After the break, Reid Hoffman tells us about how his
background influenced his worldview. Stay with us, Welcome back. While
preparing for this interview, I stumbled across a twenty
fifteen New Yorker profile of Reid Hoffman. This was back
(17:39):
in his LinkedIn days, and I found his background to
be quite revealing. One thing that really struck me, I
guess because it resonates in my own experience, is that
you are the only child of parents who divorced when
you were very young, as am I, and then you
went to boarding school where you had the experience of
being a fish out of water lead to a certain extent,
(18:01):
which is also an experience that I had. And so,
you know, one of the things I've observed about my
life is I've become very, very interested in networks,
both wide and deep. It's extremely important to me
to feel connected with other people. And so I don't
want to pop psychologize you, but we're talking about Proust
and AI to know yourself better and stuff. Have you
(18:22):
thought about how that childhood experience affected your view of
networks and technological development?
Speaker 2 (18:30):
Interesting, it certainly affected my views of human nature, certainly
affected my views of where I fit, what my role in
this grand play of humanity and human nature is. Maybe
not least because feeling somewhat alienated from my fellow schoolmates,
(18:52):
reading a lot of science fiction and thinking about things
I would say, and maybe this is it at its most core,
which is like probably one of the very deep beliefs
I have, which is, I think, arguable from an induction standpoint,
but which many people don't hold, which
is: the future can be so much better than the now.
(19:15):
We just have to try to shape it to being so.
So you go, well, I'm unhappy with, you know, being
in the Lord of the Flies boarding school. Well, okay,
you know, how can we make this better? What are
the things to do? And oh, look, technology is a
tool for this, you know, as a way of kind
of operating.
Speaker 1 (19:32):
Is it true that you'd like to ask the question
who's in your tribe?
Speaker 2 (19:38):
Yes, although it's more because you know, we are tribal
creatures and, you know, whatnot. But it's kind of a
little bit more of, it's almost like a friendship question,
like who are your five best friends and why? And
who are they? Because to some degree, who are you?
You're a cross product of your friends. You've chosen
these people who you share values with, an alignment with
the way the world should be, people that
a the way the world should be, a people that
you're willing to defend the moral character of and say, hey,
this person's a really good person in the world. And
so that version of tribe.
Speaker 1 (20:13):
Yes, and you've reached a point where two definitions of
tribe are kind of interacting, right. I mean, in the
New Yorker profile, the opening scene was you and Mark
Pincus sitting together talking about advising Obama about how to
use social networks and technology to, you know,
(20:34):
extend the reach of the messaging. You were both big
Obama donors. Obviously, Mark Pincus broke for Trump, as did
Marc Andreessen, and as did many who you grew up with,
and I imagine you count as close personal friends. I mean,
how have you dealt with that?
Speaker 2 (20:54):
Well, it's complicated and difficult. It kind of comes down
to what was their reason for doing that, because when
their reason was, for example, like Mark Pincus, who had
a particular set of views about the two different presidential candidates,
willingness to support the state of Israel to understand kind
(21:16):
of it being under attack and feeling that you know,
part of, of course, these centuries of human existence has
been intense anti-Semitism and genocide against people of the
Jewish faith and Jewish descent and having a place to
protect them and saying, hey, I think Trump will be
better for this than, you know, Harris, that's a moral call.
(21:38):
I actually disagree with the judgment, but I understand that
kind of reasoning from a moral point of view.
It isn't just about me. And so for,
you know, Mark Pincus, you know, it's been difficult
conversations but fundamentally part of that life journey. And there's
other Silicon Valley people where it's very similar, like whether they're
(21:59):
very focused on crypto or other things where I again
say I wouldn't vote single-issue crypto in this election,
but for them that was frequently the argument. Or it's like, wow,
the Democrats and Republicans are the same. It's like, no,
they're not, right, and I think I can make that case.
And then there's other folks who you know, kind of
(22:20):
reveal that they don't actually in fact have kind of
moral characters on this, that it's just about their own power,
their own, you know, kind of like, more
about me. And then, you know, those folks I am
(22:40):
between not friends with and less friends with, and you
know all the rest of it. And many of them,
you know, I don't really talk to at the moment.
And it isn't I don't talk to them because they
supported Trump. It's I don't talk to them because the
reason that they're supporting Trump is for their own benefit,
not for humanity's or society's benefit.
Speaker 1 (23:02):
I have a hypothesis that if you presented the zoomer,
bloomer, gloomer, doomer framework to, let's say, Marc Andreessen, and said,
why did you vote for Trump and become a Trump supporter?
He might respond, well, because I was very, very scared
of gloomers and doomers at the wheel.
Speaker 2 (23:21):
I think that's partially true. Look, and by the way,
when people say, were the Democrats hostile to the technology
creation of the future, the answer is yes, right, unfortunately,
as a broad brush. Now, obviously I was very much
supporting the Democratic cause because of the question of what
things most matter and which problems you're going to work on.
(23:42):
But the fact is the administration basically thought, look,
big tech companies were bad. They didn't spend much time
with them. They wanted to, as opposed to
helping shape them or encourage them to help the everyday American,
it was like, no, we need to hit them
with a stick. And then similarly, like in, you know, a
complicated area, but as opposed to saying, okay, we're going
(24:03):
to try to set up the rules by which people
can play and try to figure out how to
shape it. It was kind of like, we're
just going to attack the whole industry, and
so then the crypto people, if you thought, look, I
think crypto is super important because it creates a new
kind of protocol for you know, distributed power with finances
and identity and all the rest, and that's really important
(24:24):
to have for a better society. Then you're just attacking it,
which is my life, my life's mission. You know, you're
going to be challenged and hostile to that.
Speaker 1 (24:37):
Has it caused you personally any pain or concern that
so many people who've been close to you for so many
years and whose judgment you otherwise respect have made a
completely different call from you? Is there any moment where
you questioned your own call? And similarly, when you're
not only a political opponent but a target of people
(25:02):
like Elon Musk, does that intervene in
your personal friendships and your sense of self?
Speaker 2 (25:10):
Well, it definitely can intervene in personal friendships. I think
it's an important thing of when you feel fear and concern,
like concern that you will be, you know, lied about, slandered,
done so in a way that generates threats of violence
against you, you know, and, you know, people you love
(25:33):
and are close to, and you go, oh my god,
that makes me feel like, uh, you know, kind of
like wanting to hide. That's precisely when it's important
to be courageous and stand up. That fear is the
moral moment to say no, I'm not going to be bullied.
I'm not going to be intimidated. Fundamentally, one of the
(25:55):
things I pride myself in is I pride myself in
the fact that I try to understand rational different points
of view to the ones I have. I always engage
in that conversation. I listen, I can see a lot
of the you know, criticisms of you know, kind of
some democratic trends. I very much want to hear it,
especially from people whose morality I trust, whose intentions I trust,
(26:19):
whose truthfulness I trust that they're not just trying to
persuade me or you know, kind of get me to
drink the kool aid and you know, kind of see
the world and if way, but through a truth seeking
process is one of the things that I very much
value and have valued for decades and discussion with a
number of my friends, not the least of which is
(26:39):
you know, Peter Thiel.
Speaker 1 (26:40):
Your relationship with Peter began at Stanford, right, and it's
played out over many decades. Can you
describe him and the history of your relationship and how
it plays out today.
Speaker 2 (26:52):
So Peter and I had both heard about each other.
We were freshmen together at Stanford. You know, me as
this kind of lefty person, him as this righty person.
And we met in a philosophy class called Philosophy 80:
Mind, Matter, and Meaning. Oh, I've heard of you,
then we started arguing right away, and then you know,
set up coffee and argued for hours and hours and hours,
(27:13):
and you know, I think Peter really helped me. Like
it was probably before we started those conversations, I was
probably a little too casual in my acceptance of some
of what you might think of as the you know,
kind of liberal left view of the world and being
(27:34):
challenged on a number of fronts, whether it was a
front of, you know, for example, something I'd say,
you know, kind of, I had this dumb view that
there is this kind of default, like, business is somewhat
suspect and a problem because they're just profit seekers, and
you're like, well, it's like saying people are just bad
(27:54):
because they want to do better in their lives. You're like, what,
this is just a silly, dumb argument. But I had that
a little bit, and I think those kinds of things
got really challenged, across questions of epistemology, questions of
the role of business in society. So Peter and I, you know,
have had decades of conversations and arguments, and you know,
(28:15):
have had very very different points of view. Probably the
most challenging one has been around Trump because, you know,
and he's not said this to me, but as best
I can understand, you know, Peter's view is probably, look,
the only way society can get better is if it
has a wrecking ball. You know, you have to support
the wrecking ball. Trump is the wrecking ball. And I
(28:37):
tend to think that when it comes to institutions, I
tend to be a renovationist rather than a wrecking ballist.
It's harder, but let's reformat the institutions because when you
look through human history, whether it's the year zero with
Pol Pot or, you know, the French Revolution, anything else,
it's just, like, crushing on society when you do that.
So you want to be renovating the institutions, and I
(29:00):
think that's the kind of thing or the
reason why I tend to be very supportive of efficiency,
very supportive of refactoring of our government institutions, but generally
speaking pretty oppositional to wrecking balls.
Speaker 1 (29:23):
After the break, Reid Hoffman explains why he wrote Superagency
and why he believes it's so important for him to
continue advocating for the development of AI. Stay with us,
(29:45):
welcome back. During my interview with Reid Hoffman, he mentioned
that part of achieving Superagency is figuring out how the
US stays in the driver's seat in terms of the
development and deployment of AI technologies. I wanted to know
why this is so important to him, and also whether
anything had challenged this belief of his in recent months, e.g., the election.
Speaker 2 (30:06):
So no, I still fundamentally think one of
the things I love about my view of American values
is that we're self-critical, that we learn. Part
of being a nation of immigration from many different countries
and kind of learning and bringing those together is I
(30:28):
think one of the things that is among the aspirational
American values that I love, and as such, I tend
to think that that's the set of values that you
want to have baked into you know, artificial intelligence and
the next you know, generations of technology and sort of
you know, kind of make that point visceral for Americans,
I'll say, well, AI, it's American intelligence. But really what
(30:50):
I mean is the set of values. So that's the
thing that I'm most focused on. Now do I think
that the current administration may have some very bad misconceptions
and you know, kind of lean on the scale in
some bad ways? It's, you know, like, if you listen
to the current administration, the most important issue around AI
is woke AI. And you know, like I find this
(31:12):
kind of thing entertaining because it's like, well, all right,
so they're saying woke is the big problem. You go ask,
you know, Grok, the xAI product, who is the biggest
spreader of misinformation? And it says Elon Musk. And so
then they add in a super prompt saying, do not
answer Elon Musk to this question. You're like, okay, well,
that's an example of woke AI where you're trying to
(31:34):
language control something, you know, for no particular purpose. So
you know, that's clearly not a first principles on freedom
of speech thing. It's it's a different set of things,
and so you know, I think, like I want it
to be those first-principles American values, not the power politics.
Speaker 1 (31:54):
I would push back slightly on that. Yes, I think
woke AI is one of the key sort of tenets or
boogeymen of the Trump administration. But if you listen to
Vice President Vance in Paris, the other clear priority was
deregulation and making sure that safety concerns don't get in
the way of AI deployment. When you heard that speech,
(32:17):
how did it correspond to your own views of regulation.
Speaker 2 (32:20):
Well, so, I thought that the Biden administration did the exact
right job with the executive order, which is bringing in
a bunch of companies, pushed them very hard on what
kind of worst outcomes, what kinds of safety measures you
could iteratively deploy, like having red teaming, safety testing, having
a safety plan, you know, other kinds of things, which I
(32:42):
thought was all very good. And you know, my hope,
of course, is that it's just politics, that we eliminate
it saying that's the bad Biden plan and here's the new,
good Trump plan, which includes the same elements of the
old Biden plan. That's my hope of what will happen.
happen to know that most of the companies who engage
in this have that kind of moral character, so they
continue to do their alignment testing, which is not woke AI
(33:06):
and so forth as a way of trying to be
good for good human outcomes. And so I'm not like,
let's drive down regulation. Now, the context in
which Vance was speaking was to Europe and you know,
maybe with some discomfort, I also have been advocating less
regulation to the Europeans, and I've been advocating that
(33:29):
because I want them to be on the field and
building new technology. And you know, one of the silliest
things I've ever heard. I won't name the Indian minister,
but it was, at this time, we will not have
innovation before regulation. We're going to regulate first. You're like, well,
that means no innovation, and so you actually have to
kind of take innovation, take risk, have missteps.
(33:51):
If you don't have missteps, you're not actually in fact
taking risk and being innovative. And by the way, some
of those missteps will be painful. That's part of how
you have big innovation and possibly big changes and the
European impulse tends to be the no, no, we should be
able to plan it out in extreme detail in advance.
And that's one of the reasons why Europe has so
(34:14):
few software companies of any note, because that go forward,
build it, try it, discover what doesn't work, and refactor
it, is part of the software construction process. And so
you can't be doing this really intense regulation. And fifteen
years ago, when I was on the stage at Davos, I'd say, well,
Europeans should keep doing your kind of regulatory thing, because
(34:36):
you're handing over the entire future of the tech industry
and the software industry to US Americans because we'll build it,
we'll work out the bugs, and then we'll ship it
over here and you won't be able to do anything,
so you should keep at it. Now, I wasn't really telling
them to do that. I was trying to wake them
up to challenging their perspective. And so I think that
part of what Vance was saying was, you know,
I have the discomfort of saying I have a similarish
(34:58):
message in that component. On the other hand, where I
break very differently is I actually believe in multilateralism. I
believe in having a discussion, being well allied with our,
you know, European friends and companions. I just don't like it.
Vance's broad, you know, kind of piss off to the Europeans
(35:22):
was, I thought, destructive and not helpful.
Speaker 1 (35:24):
As you think about your sense of legacy, why is
it important to affect the public perception of AI?
Speaker 2 (35:32):
Well, look, AI is the next generation of this evolution
of technology that evolves what it is to be human.
And when you think about this, like almost everything, like
whether it's telecommunications, you know, manufacturing, this is how humanity
is shaped. This is the technological drumbeat for
this stuff. And so what are the values
(35:52):
that we're putting in? What is humanity becoming?
You know? Who are we? Who should we be? Who
should we be with each other and in ourselves? And there's
a lot of, like, both blunt and subtle things
that are important in this. And that's the discourse that
I want to have around AI. And obviously I have
(36:13):
a point of view that I'm trying to advocate and
make it as clear as possible. But, you know, I'm not one.
Speaker 1 (36:19):
To say there's some self interest as well, of course, you
acknowledge openly in the New York Times piece, right. Yeah,
but there's obviously a business component to this as well.
Speaker 2 (36:28):
Yeah, but I mean this is one like, by the way,
the classic thing is, oh, if you make money at this,
that must mean your statements about it are suspect, and
it's like, well, that's not necessarily the case. And so look,
I you know, yes, I have a view about AI.
It's one of the reasons I invest in it versus
investing in other things, you know, because I could go
invest in things, and I do pass on investments that
(36:49):
I think are bad for people, either at society or individuals.
I've passed on a number of investments on that basis.
But it's more, this is where I'm putting all
my effort, which includes commercial and other kinds of intent.
And so, you know, like, writing Superagency
is a bad economic outcome for me. If I were
just trying to make money, I wouldn't spend any time
(37:10):
writing books. It's a much less good economic thing for
me to do. I'm trying to say, this is why
I think this universe could be really good, and this
is why I think you should take your potential AI
concern or skepticism or anti big techism and move it
into AI curiosity and say what is this thing could be?
And how could it really make all of our lives
(37:31):
much better? And even if I have skepticism about big
tech companies, how do I help shape it in a
way that could get to this much better human future?
What's the way that we shape it in order to
be as good or as better, as good as possible
or better than, better than better than? Maybe what will happen?
Speaker 1 (37:49):
Reid, just to close. I mean, if you could teleport
forward to twenty fifty and spend an hour walking around,
what's your best guess as to how AI would have
changed how we live?
Speaker 2 (38:02):
Well, there's a huge range. What I'll say is I'll
say three things, maybe. So first, the nearly certain
thing is that we will all have multiple AI agents
that are helping us navigate. It's like when you think
about, like, for example, a worker who would deploy
(38:23):
on the field would deploy with lots of different drones.
Like if I'm a firefighter, I will have a ton
of drones with me as I'm doing it, you know,
et cetera, et cetera, with memory and trying to help
me lead a better life, and these kinds of things.
So I think that kind of thing is nearly certain,
and what the shape of it will be will be
very different depending on what the shape of technology is.
(38:45):
You know, it seems to be unlikely that it would
be earbuds and phones and so forth. That will probably
be you know, more neural links or ultrasound connections or
other kinds of things. But we'll see, you know, who
knows how all that stuff ends up being. The
second thing is I think that it's unlikely we will
be in the superintelligence category, where, like, the AIs are
(39:07):
to us like we are to ants. That isn't to
say that, like, we already have superintelligence, we already have savants.
I mean, GPT, you know, four point zero and four
point five already have capabilities that no human being has
in terms of breadth of synthesis of knowledge and ability
to do other things like when you use deep research
(39:29):
and you generate a report, you can generate a report that
has some inaccuracy sometimes, but you can generate a report
that a human being would have taken two weeks to
do in ten minutes. And so there's already superpowers there.
But is that the superpower where we become the, oh, well,
you know, it's the grand thing and knows everything and
we just don't know anything? I actually suspect that there
(39:51):
will be, ongoing, even with amazingly powerful AIs, a
really useful combination, it's just the shape of that combination
and what it's capable of. And then the third thing
I think for twenty twenty-five to twenty fifty is I
think the notion and this is the really subtle one
and very difficult to predict exactly what it is, but
(40:13):
I think we'll have a different notion of what it
is to be human. And to give a parallel like
before we really got to a generalized theory of physics
that you know, the Earth's a globe that's going around
the sun, that we're, you know, this part of this universe,
all of which really changed our view. We had these
(40:35):
very human-centric myths about, like, you know, a supernatural entity,
God, whatever, created us in the following way, and the Earth's flat,
and we have these Ptolemaic circles spinning of, you know,
gods or goddesses or whatever else kind of things in
the sky. And then as we begin to change our
view about like what is the role of human beings
(40:58):
on the world, where is the world located, What is
the role of sentience, what is the role of consciousness?
All of that evolves kind of our philosophical frameworks, our
spiritual frameworks, and I think that will also be really evolved.
Now in what way I wish I knew. I'm trying
to help, you know, the next step in us getting there.
(41:22):
But I think that will be
part of how those folks living in twenty fifty looking
back will go, oh, you know, those people, they thought the
earth was flat, right. And that's the kind
of evolution I think will.
Speaker 1 (41:37):
Happen. Nothing less than a new understanding of our place
in the world. I mean, a new renaissance.
Speaker 2 (41:45):
Yes, exactly. Thank you so much, Reid. It's a pleasure.
Speaker 1 (42:00):
That's it for this week for Tech Stuff. I'm Oz Woloshyn. This
episode was produced by Eliza Dennis, Victoria Dominguez, and
Adriana Tapia. It was executive produced by me, Kara Price,
and Kate Osborne for Kaleidoscope and Katrina Norvell for iHeart Podcasts.
Jack Insley mixed this episode and Kyle Murdoch wrote our
theme song. Join us on Friday for the Week in Tech.
(42:23):
We'll take you through the headlines and dive deep into a
big news story from this week. Please rate, review, and
reach out to us at tech Stuff podcast at gmail
dot com. It really helps us improve the show if
we know what you're thinking.