June 16, 2023 54 mins

Emily Chang goes behind the scenes at OpenAI, the buzzy startup behind ChatGPT and Dall-E. She meets CTO Mira Murati to discuss the launch of what may be the most popular product in tech history and the potential risks and rewards of artificial intelligence. She then meets with OpenAI backer Reid Hoffman for perspective on the dawn of AI. 



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
With all this talk about, you know, relationships and AI, like,
could you see yourself developing a relationship with an AI?

Speaker 2 (00:07):
I'd say yes, as a reliable tool that enhances my life,
makes my life better.

Speaker 3 (00:14):
I'm Emily Chang, and this is The Circuit. We're inside
a nondescript building in the heart of San Francisco where
one of the world's buzziest startups is making our AI
powered future feel more real than ever before. It's giving
me very Westworld spa vibes. It's almost like suspended
in space and time a little bit. They're behind two
monster hits, ChatGPT and DALL-E, and somehow they beat the biggest

(00:37):
tech giants to market, kicking off this competitive race that's
forced them all to show us what they've got. Is
it magic? Is it just algorithms? Is it gonna save
us or destroy us? To help us separate AI hype
from reality, I sat down with LinkedIn co-founder and
Facebook investor Reid Hoffman, who was an early backer and
board member of OpenAI. He also used ChatGPT to

(01:00):
write a novel. But first, here's my conversation with Mira Murati,
chief technology officer, from inside OpenAI.

Speaker 4 (01:09):
Well, thank you so much for doing this. It's really
great to have you, and you've been very busy.

Speaker 1 (01:12):
I want you to take us back a little bit
to when you were making the decision about releasing ChatGPT
into the wild. I'm sure there was like a go
or no go moment. Take me back to that day.

Speaker 2 (01:23):
You know, we had ChatGPT for a while, and
we had been exploring it internally and with a few
trusted users, and we realized that we sort of hit
a point where we could really benefit from having more
feedback and having more people try to break it and
try to figure out how best to use it. Let's

(01:46):
make sure that we've got some guardrails in place and
start rolling it out incrementally so we can get feedback
from how people are using it, what are the risks,
what are the limitations, and learn more about this technology
that we have created, and start bringing it into the
public consciousness.

Speaker 4 (02:04):
So you wanted people to break it, or try to?

Speaker 2 (02:06):
Yes, we definitely wanted people to try to break it
and find the fragilities in the system. We had reached
the point where we had done a lot of that
internally and.

Speaker 5 (02:18):
With a small group of people.

Speaker 2 (02:20):
External experts as well, and we wanted more external researchers
to play with it.

Speaker 3 (02:26):
It became the fastest growing tech product in history. Did
that surprise you? I mean, what was your reaction to
the world's reaction?

Speaker 5 (02:33):
Yeah, it was a huge surprise for us. We were surprised.

Speaker 2 (02:38):
By how much it captured the imaginations of the general
public and how much people just loved spending time talking
to this AI system and interacting with it.

Speaker 3 (02:49):
I want to take a step back a little bit,
you know, because a lot of people still don't really
understand how it works. ChatGPT is trained on, you know, tons
and tons of data and text.

Speaker 1 (02:58):
It can now mimic a human. It can write, it can code.

Speaker 4 (03:02):
At the most basic level and in the most succinct way that you
can, how does it work? How does this all happen?

Speaker 5 (03:08):
So,

Speaker 2 (03:08):
ChatGPT is a neural network that has been trained on
a huge amount of data on a massive supercomputer, and
the goal during this training process was to predict the
next word in a sentence, and we found out that
by doing this we also got the ability to understand

(03:30):
the world in text more like humans do. The goal
here is to have these systems have more robust concepts
of reality, similar to how we think of the world.
We don't just think and reason in text. We also
obviously have the world in images, the visual world around us.

(03:51):
That's been the goal over time, which is why we've
been adding more and more modalities. And it turns out
that as you train larger and larger models on more
and more data, the capabilities of these models also increase.
They become more powerful, more helpful, and as you invest

(04:13):
more on alignment and safety, they become more reliable and
safe over time.
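To make the idea of predicting the next word a little more concrete, here is a toy sketch in Python. It is not OpenAI's training code and the tiny corpus is invented for illustration; it simply counts which word tends to follow which and then predicts the most frequent continuation, which is the same objective Murati describes, at miniature scale.

```python
# Toy illustration of next-word prediction (not OpenAI's actual training code).
# A model like GPT-4 learns billions of parameters by gradient descent; this
# sketch only counts word pairs in a tiny invented corpus and picks the most
# probable next word, to show the training objective described above.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation seen most often after `word` in the corpus."""
    counts = next_word_counts.get(word)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

if __name__ == "__main__":
    print(predict_next("the"))  # -> 'cat' (follows 'the' most often here)
    print(predict_next("sat"))  # -> 'on'
```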

Speaker 3 (04:17):
So I'd love to hear a little bit more about
your personal story. I know you grew up in Albania.
What was the road like from Albania to Silicon Valley?

Speaker 2 (04:25):
I grew up in Albania, and when I was growing
up in Albania, it was a pretty tumultuous place politically
and economically, so I always knew that I wanted to
study abroad. I always loved learning, and this pursuit
of knowledge took me to Canada on a scholarship, and

(04:47):
from there I came to the US and I've stayed
in the US ever since.

Speaker 1 (04:52):
You've worked on aerospace, you worked at Tesla, you worked
on virtual reality. How did you become CTO of OpenAI?

Speaker 2 (04:59):
My training has been in mechanical engineering. I've
always loved math and physics; these were my favorite
subjects as a kid. So my path took me from
aerospace engineering to automotive engineering and then to applications in
virtual reality and augmented reality.

Speaker 5 (05:21):
But there was always this.

Speaker 2 (05:23):
You know, deep technological advancement in pursuit of some problem
that makes our lives a little better. And five years
ago that brought me to OpenAI because I thought
that there is no other more important problem that I
could be working on than artificial general intelligence. And I

(05:48):
joined OpenAI to help lead research teams, and
from there I went on to build a product team.
And you know, after having done a few of the
roles in the company and having built a lot of
the technical teams, I'm now leading all of the technical

(06:10):
teams.

Speaker 1 (06:11):
So as CTO, how do you set the pace of
OpenAI's technology development? How do you balance speed versus
responsibility versus safety?

Speaker 4 (06:23):
Like, where are your priorities?

Speaker 2 (06:26):
Essentially, so, I think today we are dealing with unprecedented
advancement in technology, and I think the most important thing
we can do is to manage its advancement and do
so in a way that's going to benefit people, maximize

(06:48):
the number of amazing applications that AI can bring, and
really fuel this energy that people have about interacting with
AI and making great use of AI, also giving people
the tools to do so in a reliable and safe way.
So at OpenAI, our safety teams and research teams collaborate very closely,

(07:13):
and safety teams are integrated in many of our research domains.

Speaker 5 (07:18):
But we also.

Speaker 2 (07:19):
Provide more room for long term research for safety and
policy research as well. It's important to work both on
kind of the near term present issues that we see clearly,
but also make a lot of room for exploratory
and frontier research when it comes to safety and policy.

Speaker 1 (07:41):
So ChatGPT could revolutionize so many things, and obviously AI
more broadly. What are the things you're most excited about?

Speaker 4 (07:48):
Like, what's the amazing stuff?

Speaker 2 (07:49):
What I'm most excited about is how it will transform education
and our ability to learn because you can really see
that advancing society in a way. You know, even
the most advanced societies are quite
limited when it comes to education. There is this formula
for how people are supposed to learn, and we all

(08:14):
learn very differently. We have different interests, and so I
think by using technologies like ChatGPT and the underlying models, we
can really build custom virtual tutors or virtual teachers that
can help us learn about the things that we are
really interested in, can really push our creativity, and by

(08:37):
pushing human knowledge and human creativity, I think we can
really transform the fabric of society.

Speaker 5 (08:47):
What about the.

Speaker 4 (08:48):
Scary stuff, like what are you most concerned about?

Speaker 2 (08:51):
You know, whenever you have a technology that is so
powerful and so general, there's always the other side of it,
and there are always things that we have to worry about.
And we've been very vocal about this since the beginning
of OpenAI and very active in studying the limitations

(09:12):
that come with the technology. Right now, one of the
things that I'm most worried about is the ability of
models like GPT-4 to make up things. We refer
to this as hallucinations, so they will convincingly make up things,
and it requires you know, being aware and being.

Speaker 5 (09:35):
And just really knowing.

Speaker 2 (09:37):
That you cannot fully blindly rely on what the technology
is providing as an output.

Speaker 5 (09:43):
But on the other hand.

Speaker 2 (09:44):
It also makes it glaringly obvious that this is a
tool with which you're collaborating. People can misuse it in
various ways. They can spread misinformation, it can be misused
in high-stakes scenarios. So from GPT-3.5
to GPT-4 we worked very hard to reduce hallucinations

(10:07):
or increase the.

Speaker 5 (10:08):
Factual output of the models.

Speaker 2 (10:12):
And we worked on GPT-4 for over six months
just to make it more aligned, safer, more helpful, more accurate,
more reliable, and held back the release of the model
so that we could focus on these aspects of it.
But it's far from perfect, and we're continuing to work
on it and get the feedback from the daily use

(10:35):
and make the model better and more reliable.

Speaker 3 (10:39):
I want to talk about this term hallucination because it's
a very human term. Why use such a human term
for basically an AI that's just making mistakes?

Speaker 2 (10:47):
A lot of these general capabilities are actually quite human-like.
Sometimes when we don't know the answer to something, we
will just make up an answer. We will rarely say
I don't know. There is a lot of human hallucination in a conversation,
and sometimes we don't do it on purpose. So we're
constantly borrowing from the way that we learn, the way

(11:10):
we see the.

Speaker 5 (11:10):
World to.

Speaker 2 (11:13):
Have a more intuitive understanding of the systems.

Speaker 3 (11:16):
Should we be worried about AI though that feels more
and more human like? Should AI have to identify itself
as artificial when it's interacting with us?

Speaker 2 (11:26):
I think it's a different kind of intelligence. It is
important to distinguish output that's been provided by a machine
versus by another human, so you have that understanding. But we
are moving towards a world where we are collaborating with these
machines more and more, and so output will be hybrid
from a machine and a human and so they're almost like,

(11:48):
you know, amplifying tools that are pushing the ability that
we already have, whether that's reasoning or creativity, and these
machines are helping us push the bounds of that even further.
So it's going to be difficult to you know, distinguish

(12:09):
the output once you have this collaborative engagement between the
human and the machine.

Speaker 4 (12:15):
The air of confidence.

Speaker 1 (12:16):
Obviously that ChatGPT sometimes delivers an answer with, it
can take you off your toes a little bit, right?
Why not just sometimes say I don't know, or program
that into ChatGPT?

Speaker 2 (12:26):
So it turns out that when you're building such a
general technology, like with large language models, the goal is
to predict the next word in a sentence. The goal
is not to predict the next word reliably or safely.
Just from this simple goal, we got the ability to

(12:46):
understand language quite well. We got a lot of creativity,
ability to even code. And it turns out when you
have such general capabilities, it's very difficult to handle some
of the limitations such as what is correct. Also, the

(13:07):
model doesn't really know much about the user in terms
of their context and their preferences. But it's still the
early days, and that's why we're pushing out these systems slowly,
in a controlled way, but in a way that allows
us to get some feedback from how people are using them,
so we can use that, implement it and make them

(13:30):
better and more reliable. One thing that we did recently
with ChatGPT is we rolled out this ability to
browse the Internet so that it can become a bit
more reliable on questions that have a factual nature. And this
is now offered as a plugin on the ChatGPT Plus service.

(13:52):
But it's still the early days and this feature is
only in alpha.
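As a rough, hypothetical illustration of why letting a chat model browse can help with factual questions, the sketch below shows a retrieve-then-answer flow. The `search_web` and `ask_model` functions are stand-in stubs invented for this example, not OpenAI's plugin API; the point is only the shape of the pipeline: fetch sources first, then have the model answer from those sources instead of guessing from memory.

```python
# Hypothetical sketch of a "browse, then answer" flow, loosely in the spirit of
# the browsing feature described above. The web search and the language model
# below are invented stand-ins, not real OpenAI or search-engine APIs.

def search_web(query: str) -> list[str]:
    """Stand-in for a real browse step; returns snippets of source text."""
    return [
        f"Example snippet 1 of a web page relevant to: {query}",
        "Example snippet 2 with a date and a cited figure.",
    ]

def ask_model(prompt: str) -> str:
    """Stand-in for a chat-model call; a real system would call an LLM here."""
    n_sources = prompt.count("SOURCE ")
    return f"[answer grounded in the {n_sources} sources included in the prompt]"

def answer_with_browsing(question: str) -> str:
    # 1. Retrieve up-to-date sources instead of relying only on training data.
    snippets = search_web(question)
    # 2. Put the sources in the prompt so the model can quote them rather than guess.
    context = "\n".join(f"SOURCE {i + 1}: {s}" for i, s in enumerate(snippets))
    prompt = f"{context}\n\nUsing only the sources above, answer: {question}"
    return ask_model(prompt)

if __name__ == "__main__":
    print(answer_with_browsing("What did the latest inflation report say?"))
```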

Speaker 3 (13:55):
Some of these texts and some of the data is
biased, and some of it may not be correct. Isn't this going
to accelerate the misinformation problem? I mean, we haven't been
able to crack it on social media for like a
couple of decades.

Speaker 5 (14:07):
Misinformation is a really complex, hard problem. But you know,
as these systems become smarter,

Speaker 2 (14:14):
It's actually also easier to guide them because you can
give direction in just natural language and say I don't
want you to do X thing. Then the system, by being
more intelligent, more capable, has the ability to actually.

Speaker 5 (14:33):
Follow that particular instruction.

Speaker 2 (14:36):
Obviously, with more powerful models, you're also expanding the profile
of risks, and so you have more risks that you
need to understand.

Speaker 5 (14:45):
And deal with.

Speaker 2 (14:47):
There are several things that we are exploring. For example,
one of the things that we've been researching and exploring
is watermarking the output, where you are able to
distinguish what is AI-generated output versus human-generated output.

Speaker 5 (15:05):
There are ways to deal with it.

Speaker 2 (15:07):
Also from a policy standpoint, I think it's a complex issue
that needs to be addressed from a research and policy perspective. But
on the other hand, also you know, society needs to
adapt to these challenges and the capabilities that these models
are bringing just like we adapted, you know, to using

(15:30):
calculators and other technologies.
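Murati mentions watermarking as one research direction. One published academic approach (not necessarily what OpenAI uses) biases a model toward a pseudorandom "green" subset of the vocabulary at each step, so a detector that re-derives those green lists can flag text with an implausibly high share of green words. A minimal sketch of the detection side, with invented parameters:

```python
# Minimal sketch of green-list watermark detection for AI-generated text, in the
# style of published academic proposals (not OpenAI's actual method). At
# generation time the model would be nudged toward a pseudorandom "green" half
# of the vocabulary seeded by the previous word; the detector below re-derives
# those green lists and measures how far the green-word count is above chance.
import hashlib
import math

def is_green(previous_word: str, word: str) -> bool:
    """Pseudorandomly assign `word` to the green half, seeded by the previous word."""
    digest = hashlib.sha256(f"{previous_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all words are green per context

def watermark_z_score(text: str) -> float:
    """Standard deviations above chance for the number of green words in `text`."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    green = sum(is_green(prev, word) for prev, word in pairs)
    n = len(pairs)
    expected, std = 0.5 * n, math.sqrt(0.25 * n)
    return (green - expected) / std

if __name__ == "__main__":
    # Watermarked output would score several standard deviations above zero;
    # ordinary human text should hover near zero (longer texts give clearer signals).
    print(round(watermark_z_score("the quick brown fox jumps over the lazy dog"), 2))
```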

Speaker 4 (15:33):
There's sort of like underlying anxiety. I feel like when
you talk to.

Speaker 3 (15:36):
Most people about AI, you know that's cool, but it's
also scary.

Speaker 4 (15:41):
And I've heard.

Speaker 1 (15:42):
AI experts talk about the potential for the good future
versus the.

Speaker 3 (15:45):
Bad future, and the bad future gets kind of scary.
You know.

Speaker 4 (15:48):
There's talk about this leading to human extinction. Are
those people wrong?

Speaker 2 (15:53):
You know, there's certainly a risk that when we have
these AI systems that are able to set their own
goals, they decide that their goals are not aligned with ours
and they do not benefit from having us around and
could lead to human extinction.

Speaker 5 (16:12):
I don't think this risk.

Speaker 2 (16:13):
Has gone up or down from the things that have
been happening in the past few months. I think it's
certainly been quite hyped and there is a lot of
anxiety around it. Well, this risk is important, and we
need to work on frontier research to figure out how
to deal with superintelligent AI alignment. We are dealing

(16:36):
with a lot of risks today that are very real,
very present, very high probability that they impact us, and
I think if we cannot figure out how to handle
and deal with these risks while the stakes are low,
then, you know, we wouldn't have much hope to deal

(16:57):
with it when things are more complex. So my view
is a bit more pragmatic than that one, where, you know,
we really need to figure out how to deal with
the present risks that the systems pose, and coordinate among
developers and work with regulators, legislators, and governments in various countries

(17:18):
to come up with reasonable policies and regulation.

Speaker 4 (17:23):
Around the AI. Elon,

Speaker 1 (17:25):
Steve Wozniak, a bunch of other, you know, experts have
called for a six-month pause on AI development. Do you
have any intention of slowing down or what's your response
to that letter?

Speaker 2 (17:35):
So the letter from FLI makes a lot of good
points about the risks that the technology poses, and we've
been talking about some of them.

Speaker 5 (17:47):
OpenAI has been

Speaker 2 (17:48):
Very vocal about these risks for many, many years, and
we've been doing active research on them. One of them
is acceleration. I think that's a significant risk that we
as a society need to grapple with. Private companies
and governments need to work together to figure out the

(18:10):
risks that acceleration brings. Building safe AI systems in
general is very complex. It's incredibly hard, and I
don't think that it can be reduced to a parameter
set by a letter. The question then becomes, you know

(18:33):
who is abiding by this letter? Is it all different
countries in the world? How is that happening? I think
the reality of the issue is far more complex, and it requires
coordination from private companies, from governments, and figuring out how

(18:54):
do you deal with these advancements in technology versus blocking advancement.

Speaker 1 (19:01):
There have been parallels drawn to the Manhattan Project, which
you know, they gathered the.

Speaker 3 (19:04):
Best scientific minds to develop nuclear weapons, and Robert Oppenheimer,
who led that project, said that when he saw the first detonation,
a line from Hindu scripture ran through his head: Now
I am become death, the destroyer of worlds. I realize
this sounds dramatic, but if we're talking about the risk
of human extinction, you know, not being totally out of

(19:25):
the question. Like in your development of AI, have you
had a moment like that where you're just like, Wow, this.

Speaker 4 (19:31):
Is this is big.

Speaker 2 (19:32):
I think a lot of us at Open AI joined
because we thought that this would be the most important
technology that humanity would ever create.

Speaker 5 (19:42):
I certainly think

Speaker 2 (19:43):
that. Now with that comes a lot of responsibility. Of course,
I think AI is going to be amazing.

Speaker 5 (19:51):
It already is.

Speaker 2 (19:52):
It has this incredible potential to extend our creativity and
human knowledge and make our lives better in so many vectors.
But of course the risks, on the other hand, are
also pretty significant, and this is why we're here.

Speaker 3 (20:13):
I just rewatched the movie Her, which has this very
vivid depiction of life with AI in ten years.

Speaker 4 (20:19):
How will our lives be different? How will daily life
be different?

Speaker 2 (20:22):
I haven't watched the movie Her in a long time. You know,
ten years is a long time, but I hope
that in the next few years we will have a
future where we use AI systems as tools to amplify
a lot of our own abilities. And I hope that

(20:47):
we have systems that help bring customized education to as
many people out there as possible, and I hope that
you know, we can build tools, diagnostic tools, or
ways to understand diseases and the problems in healthcare much

(21:07):
much earlier and figure out how to deal with them
at scale. And you know, we're dealing with massive problems
in climate change, figuring out new solutions, figuring out ways
in which we can help reduce the risks that climate
change poses.

Speaker 1 (21:23):
Could you put what you're developing here inside robots and
could they combat loneliness?

Speaker 2 (21:29):
I think bringing these systems into the physical world is
a pretty significant step.

Speaker 5 (21:36):
Feels like we're a bit far from that.

Speaker 2 (21:39):
But also, you know, just having a chatbot that you
can ask for advice, certainly not in high-stakes scenarios
right now, seems like that would be helpful for a
lot of people.

Speaker 3 (21:52):
That's quite profound that we could someday have relationships with computers.

Speaker 5 (21:57):
In a way we already do.

Speaker 2 (21:59):
Right We're spending so much time on our computers, We're
always on our phones. We're almost like enslaved to this
interaction that we have with the keyboards and with the
touch screen.

Speaker 3 (22:11):
I think a lot about my kids and them having
relationships with AI someday and this thing that has much
more time to spend with them than I do. How
do you think about what the limits should be and
what the possibility should be when you're thinking about a child.

Speaker 2 (22:25):
I think we should be very careful in general with
putting very powerful systems in front of more vulnerable populations,
people under thirteen cannot access it, and even under eighteen
it requires parental supervision. So there are certainly checks and
balances in place because it's still early and we still

(22:49):
don't understand all the ways in which this could affect people.

Speaker 3 (22:52):
There's also some business interest here, and by releasing ChatGPT,
OpenAI has kind of turbocharged this competitive frenzy. Do
you think you can beat Google at its own game?
Do you think you can take significant market share in search?

Speaker 2 (23:05):
You know, we didn't set out to dominate search when
we built ChatGPT. In fact, it actually started as
a project around understanding and dealing with truthfulness of large
language models, and then it evolved. But I think what
ChatGPT offers is a different way to understand information

(23:29):
and a different way to interact with the same tool.
And you could be, you know, searching, but you're searching
in a much more intuitive way versus keyword based. That
is definitely an outcome that we saw afterwards, and we
built an interface that would allow people to interact with

(23:51):
it much more smoothly, and as we can see, it
is pushing other people, big companies and small companies, to
build more assistant-like products. I think the whole world is
sort of now moving in this direction.

Speaker 5 (24:05):
I think our focus will.

Speaker 2 (24:07):
Remain on building these general technologies and figuring out how
we can bring them to the public in ways that
are useful.

Speaker 3 (24:16):
So there's this report that these workers in Kenya were
getting paid two dollars an hour to do the work
on the back end to make answers less toxic. And
my understanding is this work can be difficult, right,
because you're reading texts that might be disturbing and trying
to clean them up, right, Like, what's your response to that?

Speaker 2 (24:32):
So we need to use contractors sometimes to scale. You know,
in this particular case, we chose the particular contractor because
of their known safety standards, and since then we've stopped
working with them. But as you said, this is difficult
work, and we recognize that, and we have mental

(24:55):
health standards and wellness standards that we share with contractors
when we engage them.

Speaker 3 (25:02):
All of the data that you're using, and this has
been talked about a lot, like all of the data
that you're training.

Speaker 1 (25:07):
This AI on, it's coming from writers, it's coming from artists,
it's coming from.

Speaker 4 (25:11):
Other people who've created things.

Speaker 1 (25:13):
How do you think about giving value back to those
people when these.

Speaker 3 (25:17):
Are also people who are worried about their jobs going away.

Speaker 2 (25:20):
These models are trained on a lot of public information,
a lot of data on the Internet, and also licensed data,
and the output that is generated by the models is
original. Our users,

Speaker 5 (25:35):
They have all their rights to that output.

Speaker 2 (25:38):
I know Microsoft has been doing some research on this
on how do you make sure that you recognize the
value that people are bringing with their data, And there
is some research that has been done in this direction
with the data dignity projects that some folks at Microsoft

(25:59):
have been working on, and there is some research of
figuring out the economics of this and how to do
that at scale. I don't know exactly how it would work
in practice, where you can sort of account for information
created by everyone on the Internet, but there is probably
some other way where, you know, people contributing specific kind

(26:23):
of data can sort of have a share of the
gains produced by this model. I'm not sure exactly how
that would work, but I think there is some research
on the economics of this, and I think it's definitely
worth exploring further. As far as the question of jobs goes,

(26:46):
I think there are definitely going to be jobs that
will be lost and jobs that will be changed. I
think there will be a lot of jobs that will
be created as well. We don't know exactly what they are,
and probably some of them we can't even imagine. Like
prompt engineer is a job today. That's not something that
we could have predicted at all.

Speaker 4 (27:05):
So what does responsible innovation look like to you?

Speaker 1 (27:07):
You know, like, would you support, for example, a federal
agency like the FDA that vets technology like it vets drugs?

Speaker 2 (27:14):
You know, having some sort of trusted authority that can
audit these systems based on some agreed upon principles would
be very helpful. And having some standards around predicting capabilities
and auditing these systems once they're trained could be helpful.

Speaker 3 (27:38):
Do OpenAI employees still vote on AGI and when
it will happen?

Speaker 5 (27:42):
I actually don't know. I believe that what they did.

Speaker 2 (27:46):
I think we kind of do it, but I don't
know the last time we did.

Speaker 1 (27:51):
What is your prediction about AGI now and how far
away it really is? This is when computers can learn
and reason and rationalize just as well as us, if
not better.

Speaker 2 (28:02):
I think we're making a ton of progress on technology
and it is really helping us in so many ways,
But we're still quite far away from being at a point
where you know, these systems can make decisions autonomously and
discover new knowledge that we couldn't have predicted previously.

Speaker 4 (28:24):
So is that decades away?

Speaker 5 (28:26):
I'm not sure.

Speaker 4 (28:28):
Is it sooner than you thought when you started this work.

Speaker 5 (28:30):
I don't know if it's sooner.

Speaker 2 (28:31):
I think I have more certainty around the advent of
having powerful systems in our future that will be able
to make decisions autonomously and discover new knowledge.

Speaker 3 (28:45):
Should we even be driving towards AGI? And do humans
really want it?

Speaker 1 (28:50):
Do we want computers to be smarter than us ultimately,
even though we don't know what that really looks like
or means.

Speaker 2 (28:55):
I think that through the course of history, pushing human
knowledge has pushed our societies in so many different ways.
It's been key to advancing our society, and I think
it would be a mistake to hold back technological innovation or

(29:18):
our ability to pursue human knowledge further.

Speaker 5 (29:23):
And I'm not even sure.

Speaker 2 (29:24):
That that's possible in the first place, but theoretically if
it were, I think it would be a mistake. A
lot of our inspiration and advancements in society come from
pushing human knowledge. Now that doesn't mean that we should
do so in careless and reckless ways. I think there

(29:45):
are ways to guide this development and manage this development
versus bring it to a screeching halt because of our
potential fears.

Speaker 4 (29:56):
So the train has left the station and we should
stay on it.

Speaker 5 (29:58):
That's one way to put it for now.

Speaker 4 (30:01):
I'm sure ChatGPT would say it much more eloquently.

Speaker 3 (30:04):
Beyond OpenAI, there's an artificial intelligence gold rush happening
in Silicon Valley. Venture capitalists are pouring money into anything
AI, startups hoping to find the next big thing. Now
here's my conversation with Reid Hoffman, who knows a thing
or two about striking gold.

Speaker 4 (30:21):
Thank you so much for doing this.

Speaker 3 (30:22):
I'm so grateful to have you, and obviously you've helped
us make sense of platform shifts over I mean, gosh,
twelve years.

Speaker 6 (30:29):
We've been talking, maybe longer. That's awesome.

Speaker 5 (30:32):
A long time.

Speaker 6 (30:33):
More than a decade.

Speaker 3 (30:34):
Generative AI has had two big hits so far, DALL-E and ChatGPT.

Speaker 7 (30:40):
Both from OpenAI. Why do you think

Speaker 3 (30:42):
ChatGPT exploded more than Instagram, even

Speaker 7 (30:46):
More than TikTok.

Speaker 6 (30:47):
Well, there's a couple of reasons. One is it's a
little bit like the movie industry, and each year has
a new biggest box office. The world's more connected, there's
more people, there's more curiosity of what's going on, so
you have your new biggest hit. So there's always that
as a backdrop. This will be the year, as I
kind of put it on Fireside Chatbots, where one or more

(31:08):
of the Person of the Year lists will be a
chatbot or an AI or OpenAI or something like
this as a way of doing that. Because it's a
magical experience to say, suddenly I can have a conversation
with this thing, like I'm talking to another person and
it not being another person. Right. That's like that has

(31:29):
not happened in history till November sometime last year, Right,
And so that's why I think it exploded.

Speaker 3 (31:36):
You have been on the ground floor of some of
the biggest tech platform shifts in history, the.

Speaker 7 (31:41):
Beginnings of the internet, mobile.

Speaker 3 (31:43):
Do you think AI is going to be even bigger?

Speaker 6 (31:46):
I think so at minimum for the following reason, which
is it builds on the Internet, mobile, cloud, data. All
of these things come together to make AI work, and
so that causes it to be the crescendo, the addition
to all of this.

Speaker 7 (32:03):
So, hey, it's gonna be bigger than all those things. Yeah,
and that's kind of a big deal.

Speaker 6 (32:06):
Yes, absolutely. But now part of it's because just like
we saw with ChatGPT, we have billions of people
connecting in the world. They can all reach it very
quickly too, so all of a sudden you start interacting
with it, and then you begin to think, well, what
could happen with AI here? I mean, one of the
problems with the current discourse is that it's too much
of the fear based versus hope based. Imagine a tutor

(32:31):
on every smartphone for every child in the world who
has access to a smartphone. Imagine a doctor on every
smartphone where many communities don't have any access to doctors.
That's line of sight from what we see with current
AI models today.

Speaker 7 (32:49):
You coined this term blitzscaling. Does AI blitzscale?

Speaker 6 (32:53):
Well, it certainly seems like it today, doesn't it. The
speed at which we will integrate it into our lives
will be faster than how we integrated the iPhone into our lives.
There's going to be a co pilot for every profession
and if you think about that, that's huge. Well, that
changes industries, that changes products.

Speaker 3 (33:10):
And not just professional activities, because it's going to write my
kids' papers, right, their high school papers.

Speaker 6 (33:14):
Yes, although the hope is that in the interaction with it,
they'll learn to create much more interesting papers.

Speaker 7 (33:21):
You and Elon must go way back.

Speaker 3 (33:23):
He co-founded OpenAI with Sam Altman, the CEO
of OpenAI.

Speaker 7 (33:26):
What did Elon say that got you interested? So early?

Speaker 6 (33:30):
Elon came and said, look, this AI thing is coming.
You know, I always trust people from my network who are
smart to say, go look at this. Go look. I'm
always curious. Once I started digging into it, I realized
that this pattern that we're going to see the next
generation of amazing capabilities coming from these kind of you know, computers,
computational devices, and that that's something that could shape a

(33:51):
much better society that we'd all be in. And that's
the reason I do technology. One of the things I
had been arguing with Elon at the time about was
that Elon was constantly using the word robocalypse, which you know,
we as human beings tend to be more easily and
quickly motivated by fear than by hope. So you're using
the term robocalypse, and everyone imagines the Terminator and all
the rest.

Speaker 7 (34:11):
It sounds pretty scary.

Speaker 6 (34:12):
It sounds very scary. Robocalypse doesn't sound like something we want. Yeah,
stop saying that, because actually, in fact, the chance
that I could see anything like a robocalypse happening is
so de minimis relative to everything else.

Speaker 7 (34:24):
How remote is the chance of the robocalypse in your mind?

Speaker 6 (34:27):
Let me put it this way. I'm more worried about what
technology does in the hands of humans than I am
about a robocalypse. And what we've seen through the scaling
of these large language models is that the larger you get,
the easier it is to train them to be aligned
to human interests. That's good, doesn't mean it's perfect, doesn't
mean we shouldn't be attentive. But that's exactly the kind

(34:49):
of thing where you can build to a really good
future and be motivated by hope and optimism versus fear.

Speaker 7 (34:55):
So just on Elon for a second.

Speaker 4 (34:57):
You did come together on open AI, and how did
that happen?

Speaker 6 (35:00):
I think it started with Elon and Sam having a
bunch of conversations, and then since I know both of
them quite well, I got called in. Something should be
the counterweight to all of the natural work that's going
to happen within commercial realms, right, within companies, you know, building,
which, by the way, as you know, I'm a huge
fan of; companies can build really good things. An AI

(35:22):
in a company is a different thing. But it's good to
have the counterweight too. And as part of having that counterweight,
how do you bring in considerations like, well, what
are we going to do for a bunch of people
who are not as well off economically or anything else,
and how do we make sure they're included? How do
we make sure that one company doesn't dominate the industry,

(35:43):
but the tools are provided across the industry so innovation
can benefit from startups and all the rest. It was like, great,
and let's do this thing, OpenAI.

Speaker 3 (35:52):
Sam Altman has said he thinks this is going to
usher in this new era of economic prosperity. It's obviously
going to change a lot of jobs, going to eliminate
a lot of jobs. Is it going to create enough
jobs to balance all that out?

Speaker 6 (36:05):
So you can't one hundred percent say absolutely yes, because
it's part of the uncertain part of human nature and
human progress. But the same question has confronted us multiple times.
It confronted us in the move from agriculture to industry.
It confronted us in the computerization of things, like, you know.

(36:25):
And again, fear first is like, oh my god, it's
going to change employment. And a lot of work is people
to people interaction, and people interaction can be education, it
can be medicine, it can be legal, it could be communications.
I think that all of that there's infinite demand for
that work. Entertainment media there's infinite demand for that, and

(36:47):
so those can open up new realms of jobs and
all the rest. Am I ultimately very optimistic that it
will create a lot more jobs than it will consume.
The answer is yes, but it doesn't mean it won't
consume jobs, and it doesn't mean that we have to
not navigate the transition. In the revolution of moving from agriculture
to industry, we had a lot of suffering in the

(37:08):
cities as we've moved to manufacturing and all the rest,
and you say, okay, let's try to minimize these transitions.

Speaker 3 (37:14):
I did ask ChatGPT what questions I should ask you.
I thought its questions were pretty boring. Yes, your answers
were pretty boring too, So we're not getting replaced anytime soon.

Speaker 7 (37:23):
Yes, but clearly this has really struck a nerve, this

Speaker 3 (37:27):
Bing thing, Bing's chatbot telling folks it's in love
with them.

Speaker 7 (37:31):
Yes, there are people out there who are going to
fall for it. Should we be worried about that?

Speaker 6 (37:36):
So that's a de minimis worry, I think, that specific one.
And the reason is, Okay, so everyone's encountered a crazy
person who's drunk off their ass at a cocktail party
who says really odd things, or at least every adult has,
and you know, it's not like the world ended, right?

(37:58):
And so the real issues I think are things like
if we put in a whole bunch of computational systems,
are we on a trajectory to improving areas of
racial bias or discrimination? Now, I think AI can be
a very positive tool in that because we can improve it,
we can learn it, we can fix it. We can
probably fix it better than we can fix, for example,

(38:21):
systems of judges issuing paroles; probably easier to do iteratively
by studying it and getting it better through an AI system,
which will function in partnership, not in replacement, but as
a way of kind of improving those things. So those
are the things that really matter. We do have to
pay attention to areas that are harmful. For example,
someone's depressed and thinking about self-harm; you want all

(38:44):
channels by which they can get into self harm to
be limited. That isn't just chatbots, that could be communities
and human beings, that could be search engines. You have
to pay attention to all the dimensions of it. And
by the way, you can never get it perfect.

Speaker 3 (38:57):
So I agree that computers don't have feelings. These chatbots
are just predicting the next word in a string.

Speaker 6 (39:03):
Right.

Speaker 7 (39:04):
What does worry me as a mom.

Speaker 4 (39:07):
Is my kids.

Speaker 3 (39:08):
So what if my kid is spending more time talking
to a chatbot than me, or developing relationships with these chatbots,
or making decisions based on what a chatbot has told
them or nudged them to do, Like, why shouldn't I
be terrified of that?

Speaker 6 (39:23):
Well, I think the question is what kind of
relationship and what are they nudging them to do? So
for example, say you had your kid and the kid
was interacting with the chatbot that was causing them to
reflect on who they were and their feelings a little
bit better and help them discover themselves and you're like,
well that seems to be an okay relationship, maybe better
than their friends at school even in some ways, and

(39:45):
help them kind of be able to follow the path
they want to be on. Or say, for example, it
was like, well, here's why actually, in fact, doing your
homework is actually useful to you and here you know,
let's help do that. You'd say, well, that's okay.
So it's not the fact that there's
an interaction there that bothers you. It's like, is the

(40:05):
interaction going to be in a positive direction? Is it
going to be broadly there?

Speaker 7 (40:08):
How are we overestimating AI right now?

Speaker 6 (40:11):
There are many ways that we're overestimating it. It still doesn't really
do something that I would say is original to an expert. So,
for example, one of the questions I asked was how
would Reid Hoffman make money by investing in artificial intelligence?
And the answer it gave me was a very smart, very
well written answer that would have been written by a
professor at a business school who didn't understand venture capital. Right,

(40:34):
So it seems smart: it would study large markets, would realize
what products would be substitutes in the large markets, would
find teams to go do that and invest in them.
And this is all written very credibly, and completely wrong.
And part of that's because the newest edge of the
information is still beyond these systems. Now. It's great when

(40:56):
I said something like what would Reid Hoffman say on
a German documentary about Settlers of Catan, right, and
gave a very good answer.

Speaker 3 (41:06):
Billions of dollars are going into AI. My inbox is
filled with AI pitches. Last year it was crypto and
Web three. Before that it was self driving cars. Now
everyone's on the AI train. Yes, how do we know
this isn't just the next bubble?

Speaker 6 (41:18):
Well, I don't think either Web3 or autonomous vehicles were
actually bubbles. I do think that the generative AI
is the thing that has the broadest touch of everything.

Speaker 5 (41:30):
Now.

Speaker 6 (41:30):
Obviously, as venture capitalists, part of what we do is
we try to figure that out in advance, you know,
years before other people see it coming. But I think that
there will be massive new companies built.

Speaker 3 (41:40):
VCs have played a role, and you know, you could
say, in the hype cycles. How much is FOMO driving decisions

Speaker 5 (41:46):
Right now?

Speaker 6 (41:47):
FOMO always drives some decisions, as you know, because
people who are not with it suddenly try to jump
on the train, and sometimes they pay for it. Sometimes it works. And
it is true that when you study the sequence of technology,
what happens is there's a wave. There's an Internet wave,
there's a mobile wave, there's a cloud wave. There's these waves,

(42:07):
and that transforms the industries and that you need to
be on that wave. So whether you're an early adopter
or late adopter, everyone goes and tries to get on
the wave.

Speaker 7 (42:15):
There's another concern, and I wonder if you share it.

Speaker 3 (42:17):
It does seem in some ways.

Speaker 7 (42:19):
Like a lot of AI is.

Speaker 3 (42:21):
Being developed by an elite group of companies and people.

Speaker 6 (42:25):
Look, in some ideal universe, you'd say, for a technology that would
impact billions of people, somehow billions of people should directly
be involved in creating it. But that's not how any
technology anywhere anywhere in history gets built. It's a small
number of people. How do you offset that and how
do you expand that? And I think the way that
you do that is try to have broader conversations, try

(42:48):
to be more inclusive about what the concerns are, what's
going on, what their intents are. That's the thing that
I try to help push toward.

Speaker 3 (42:57):
So do you see an AI mafia forming?

Speaker 6 (43:01):
Hopefully not, especially in the exact term of mafia. I
definitely think that there is, because you're referring to the PayPal mafia.
I think that there's a network of folks who have
been deeply involved over the last few years, and it is broadening.
That will have a lot of influence on how the

(43:22):
technology happens.

Speaker 3 (43:23):
Do you think AI will shake up the big tech
hierarchy significantly? It seems like the big tech giants, all
of them are on their toes.

Speaker 6 (43:32):
Well. What it certainly does is it creates a wave
of disruption. For example, with these large language models in search,
what do you want? Do you want ten blue links
or do you want an answer? In a lot of
search cases, you want an answer and a generated answer
that's like a mini Wikipedia page is awesome. That's
a shift. When you're working in a document, do you

(43:55):
want to just be able to pull out a template
that says here's what a memo template is, or would
you like to say, give me a first draft of
a memo on how artificial intelligence can improve government services?
And it drafts something and then you go okay, and
startups work much more nimbly than large companies. So I
think we'll see a profusion of startups doing interesting things.

Speaker 3 (44:16):
Can the next Google or Facebook really emerge if
Google and Facebook, or Meta, and Apple and Amazon and
Microsoft are running the playbook?

Speaker 6 (44:24):
Yes. I tend to think we have five large
tech companies heading to ten, not five heading to two
or three, and it's competition, and that competition creates space
for startups and all the rest. So do I think
there will be another one to three companies that will
be the size of the five big tech giants emerging
possibly from AI? Absolutely, yes. Now, does that mean

(44:47):
that one of them is going to collapse? No, not necessarily,
and it doesn't need to. The more that we have,
the better.

Speaker 7 (44:53):
So what are the next big five?

Speaker 6 (44:55):
Well, that's what we're trying to invest in.

Speaker 3 (44:58):
You're on the board of Microsoft, obviously, you know Microsoft is.

Speaker 7 (45:02):
Making a big AI push. How do you see the
balance of power between Microsoft and Google.

Speaker 6 (45:07):
I think it unequivocally has a shot. But one of
the things that I think Satya said very well
is, at minimum, with what you're seeing happening with, you know,
Bing Chat and everything else, what it means is,
all of a sudden, Microsoft's back in the game. It's here,
it's doing stuff, it's inventing, it's creating things. What is

(45:28):
pretty amazing is to have had a seat watching how Satya
and his team are kind of bringing a tech company
back. You know, a few decades ago it
was one of the leading tech companies, and then everyone
stopped paying attention to it; now it's back to being a leading
tech company, to doing search.

Speaker 3 (45:45):
Did you bring Satya and Sam together, or have any role
in bringing Satya and Sam closer together? Because Microsoft obviously
has ten billion dollars.

Speaker 4 (45:51):
Now in OpenAI.

Speaker 6 (45:52):
Both of them are close to me and know me
and trust me well, so I think I have helped
facilitate understanding and communication. And I would not want to
take anything away from how brilliant each of them is
and how much the thing they have architected is because
they're amazing.

Speaker 3 (46:09):
The AI graveyard is filled with algorithms that got into trouble.
How can we trust open ai or Microsoft or Google
or anyone to do the right thing.

Speaker 6 (46:21):
Well, there's a whole field of AI ethics, AI safety, etc.
There are people in all of these companies, a lot of
them employed with asking questions and making that work, so
we need to be more transparent. Well, everyone agrees that
we should be protective of children. Everyone agrees that we
should try to make sure self harm isn't there. Everyone

(46:42):
agrees that we should try to not have this lock
in economic classes or other kinds of things, and should
be more broadly provisioned. But on the other hand, of course,
a problem exactly as you're alluding to, is people say, well,
the AI should say that, or shouldn't say that, or
the AI should allow people to say that, or shouldn't
allow people to say that, And you're like, well, we

(47:03):
can't even really agree on that ourselves, so we don't
want that to be litigated by other people. We want
that to be a social decision.

Speaker 3 (47:11):
It's a minefield of ethics and fairness and governance issues.

Speaker 7 (47:15):
Is the answer regulation and how can regulation possibly.

Speaker 4 (47:19):
Even keep up?

Speaker 6 (47:20):
When people think regulation, they think you must come and
seek approval before you do something. And that's the reason
why most of these regulated industries have all massively slowed
down on their innovation. So to start regulating now, I
think would be broadly dangerous and destructive to kind of
how we create and own the industries of the future.

(47:42):
But that doesn't mean do nothing. Say, for example, you're
working with AI companies, we'd like to hear what your
top concerns are. Here are some of ours. We'd like
to have you figure out how to tell us about
how you're addressing our concerns and how you're making improvements
on it month by month, year by year. Maybe you
could have a dashboard. Maybe you could be telling us
about here's how you're measuring how racial bias might creep

(48:03):
into your systems from the data that you're training on.
And if, by the way, you're not doing that well enough,
then we'll talk about the next phase of regulation. But
start it as a dialogue, positioning the concerns and kind
of what improvements we want to see and what we'd
like to see, and start that way.

Speaker 3 (48:20):
Elon left OpenAI years ago and pointed out that
it's not as open as it used to be. He
said he wanted it to be a nonprofit counterweight
to Google. Now it's a closed-source, maximum-profit company
effectively controlled by Microsoft.

Speaker 6 (48:35):
Does he have a point? Well, he's wrong on a
number of levels there. So one is it's run by
a 501(c)(3). It is a nonprofit, but
it does have a for-profit part. It has a
for-profit, but the for-profit part is structurally
controlled in every way that really matters by the nonprofit.
Its employees, through to its board governance too, are all

(48:59):
in on the nonprofit mission. The commercial system, which is
all carefully done, is to bring in capital to support
the nonprofit mission. Now, to get to the question of open: for example,
DALL-E, when it was ready for four months
before it was released, why the delay for four
months? It delayed for four months because it was doing
safety training. They said, well, we don't want to have

(49:21):
this being used to create child sexual material. We don't
want to have this being used for assaulting individuals or
doing deep fakes. We don't want it to have being
like revenge pornography or that kind of stuff. So we're
not going to open source it. We're going to release
it through an API so we can be seeing what
the results are and making sure it doesn't do any
of these harms. So it's open because it has open

(49:42):
access to the APIs, but it's not open because it's
open source.

Speaker 3 (49:45):
You resigned from the board of OpenAI because of
the appearance of a conflict of interest.

Speaker 7 (49:50):
There are folks out there who are angry actually about.

Speaker 3 (49:53):
OpenAI's branching out from nonprofit to for-profit.

Speaker 7 (49:57):
Is there a bit of a bait and switch there?

Speaker 6 (49:58):
The first thing is to make a difference in the
AI technologies and how to be a counterweight to all
of the commercial things. To do that, open AI needs
a lot of capital. The cleverness that Sam and everyone
else figured out is they could say, look, we can
do a market commercial deal where we say we'll give
you commercial licenses to parts of our technology in various ways,

(50:24):
and then we can continue our mission of beneficial AI
because we're not primarily motivated commercially. We're
primarily motivated by how do we make this great for society,
great for humanity?

Speaker 3 (50:35):
So you don't think this nonprofit to for profit thing
was a bait and switch.

Speaker 6 (50:40):
No, not at all. It was all done, I think, very transparently.
And I think that the question about it is
making sure that OpenAI can provide all of this
broad-based kind of AI technology across multiple industries and

(51:01):
not be contained within one company.

Speaker 7 (51:04):
It can't be all AI and rainbows.

Speaker 3 (51:06):
There must be stuff that's keeping you up at night,
Like what keeps you up at night?

Speaker 6 (51:11):
Do I pay attention to what are the unintended consequences,
how it might cement layers of power? Like, for example,
do I pay attention to the fact that it could
flood our media ecosystems with misinformation? Yes, I absolutely pay
attention to that. Of course, our media ecosystems are already
flooded with misinformation. It comes from Russians hacking our political stuff,

(51:31):
or Nigerians or you know, Philippine farms or weird conspiracy theories.
But what really keeps me up at night is, in
our fears, do we miss the things that could be
really valuable? Right? That's part of the reason why
I come out so clearly. And it's not because like
if you literally said, like, any money that I'm gonna

(51:53):
make from investing these days already kind of heads to
my foundation and all the rest. That's what I do.
It's not because I have any economic interests here. It's because,
like, I think about, first you say, okay, so who
will the first AI tutors be? They'll probably be for
upper-middle-class families because of economic things. Well, can
we get them to everybody in developed countries? And then well,

(52:18):
what about the kids in you know, Nigeria, or what
about the kids in Indonesia, or what about the kids
in you know, all throughout India. Well can we do
that too? That's the kind of thing, and how
quickly do we get there? Because I think, you know,
we had this old expression from the eighties. No, no,
it was the nineties, I think: the digital divide, right? Well,

(52:40):
look we all have a digital divide issue. That kind
of thing definitely keeps me up. Now again, I don't
mean to be Pollyannaish about this, and I
put a lot of energy into making sure we're asking
the right, you know, alignment questions
or safety questions and so forth. But like, when I
read a weird Bing chat, I mostly just laugh.

Speaker 3 (53:02):
AGI, when computers will be smarter than humans?

Speaker 4 (53:05):
How far out is that?

Speaker 6 (53:06):
So this is one of the kinds of things that
human beings are very bad at making judgments on. What
I mean is like, AGI, is there a percentage that
we will get a computer smarter than humans in our lifetime?
And the answer is yes. And the question is, well,
is it a large percentage or a small percentage, and
what counts as a large percentage of small percentage? You know,

(53:28):
I think that percentage is small, and who knows, maybe
it'll happen. Then it comes back to, well, what kind
of superintelligence? So if you're worried about things being hostile,
Terminator-ish, you'd say, well, that's very concerning. But if you're like, oh,
well we could create a superintelligence that is a Buddhist
and thinks that sentient life is very good and goes, oh,
how do I work in collaboration with you? Well that

(53:50):
could be really good. Right. So the whole thing is,
I think it's never good to be driven by your fear.
I think it's much better to be driven by your
curiosity but being very diligent and work very hard at
trying to make the right things happen.

Speaker 3 (54:05):
So does this mean you think super intelligence is quite
a ways out?

Speaker 6 (54:09):
I would say that it's more likely outside of our
lifetimes than in our lifetimes.

Speaker 7 (54:13):
Okay, I appreciate a definitive picture.

Speaker 3 (54:15):
Thank you, Yes, thanks so much for listening to this
episode of the Circuit. I'm Emily Chang. You can follow
me on Twitter and Instagram at Emily Chang TV. You
can watch full episodes of the Circuit at Bloomberg dot
com and check out our other Bloomberg podcasts on Apple Podcasts,
the iHeartMedia app, or wherever you listen to shows and
let us know what you think by leaving us a review.
I'm your host and executive producer. Our senior producer is

(54:38):
Lauren Ellis. Our associate producer is Lizzie Phillip. Our editor
is Sebastian Escobar. Thanks so much for listening.