
August 18, 2023 28 mins

On this episode of The Circuit, Emily Chang sits down with Microsoft CEO Satya Nadella to hear how AI is shaking up the competition for search. Nadella argues that this new wave of technology is as big as the web browser or the iPhone. Chang also speaks with OpenAI CEO Sam Altman to discuss his company (which has some help from Microsoft), its ambitions and the latest on ChatGPT. 



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
I'm Emily Chang, and this is The Circuit. I've been
covering this industry for a long time, and there is
always some new new thing that big tech is chasing.
First it was self-driving cars, and then it was
the metaverse, and now everyone is all in on AI.
Now there's one big tech giant that's made it clear

(00:23):
it's not missing out. Microsoft is OpenAI's main commercial partner,
trading powerful servers and billions of dollars for access to
ChatGPT, sparking new life into old products, especially their languishing
search engine.

Speaker 2 (00:37):
So it's as big as the internet? I think it's
as big.

Speaker 1 (00:40):
But winning in AI is a totally different story. I'm
about to talk to Microsoft CEO Satya Nadella to find
out how he thinks he can do it. We'll talk
to OpenAI CEO Sam Altman in a moment, but first,
a new AI chatbot is helping Nadella in some surprising ways.
Have you been playing around with it a lot? Like, fun

(01:02):
stuff, discovery?

Speaker 3 (01:05):
I am super verbose and polite now in email responses.

Speaker 4 (01:12):
It's watching. Yeah, it's always... it was fun.

Speaker 3 (01:15):
Like, the guy who leads our Office team, and I
was responding to him and he was like, what is
this, man?

Speaker 4 (01:20):
You're, like, sort of so pleasant. That's amazing. How about
the chat, the Bing chat? I mean, what have
you been using it to search for?

Speaker 5 (01:31):
Oh?

Speaker 3 (01:31):
You know, interestingly enough, everything, right, from schedules
to... like, the biggest thing I have felt is,
you always use it to learn new things.

Speaker 4 (01:42):
And what have you. But you never stay in the flow.

Speaker 3 (01:45):
Because you get distracted: you click it, go away,
whereas here you can stay

Speaker 4 (01:49):
On task and on one topic and go deep.

Speaker 3 (01:54):
It's sort of very habit-forming, in the sense that
once you get used to having chat... even if I'm
using it, because there are a lot of times I'm
just navigating, using search as a navigational tool. But once
you get used to it, you kind of feel like,
I've got to have these rails, right? So.

Speaker 2 (02:10):
Once you try it, you're hooked.

Speaker 1 (02:11):
Microsoft has been working on AI for decades and chatbots
actually aren't anything new, but all of a sudden everyone
is salivating. Why do you think the moment for AI
is now?

Speaker 3 (02:24):
AI has been here; in fact, it's mainstream, right?

Speaker 4 (02:28):
I mean, search is an AI product.

Speaker 3 (02:30):
Even the current generation of search, every news aggregation, recommendation,
and, you know, YouTube or e-commerce or TikTok, are
all AI products. Except that, I would say, today's
generation of AI is all autopilot. In fact, it's a
black box that we just sort of use that is

(02:51):
dictating, in fact, how our attention is focused. Whereas going forward,
the thing that's most exciting about this generation of AI
is perhaps we move from autopilot to copilot where we
actually prompt it. I think this shift from autopilot to
copilot is actually, yes, the next phase of AI, which
in fact is perhaps going to put us as humans,

(03:15):
you know, more in the center of using AI to
our benefit.

Speaker 1 (03:19):
How transformative a change do you think this will be
in how we work?

Speaker 3 (03:24):
But I think that probably the biggest difference maker will
be business chat, because if you think about it, the most
important database in any company is the database underneath all
of your productivity software.

Speaker 4 (03:36):
Except that data is all siloed today.

Speaker 3 (03:39):
But now I can query it with natural language. I can say, oh,
I'm going to meet this customer. Can you tell me
the last time I met them? Can you bring up
all the documents that are written about this customer and
summarize them, so that I am current on what I
need to be prepped for?

Speaker 1 (03:52):
How do you make sure it's not Clippy 2.0? That it is
helpful, delightful, doesn't make me want to click out ASAP?

Speaker 4 (04:00):
Two sets of things. One is, you know, you're laughing
because, look.

Speaker 3 (04:05):
Like, our industry is full of lots of, you know,
examples, from Clippy to even, let's say, the current generation
of these assistants and so on; they all are brittle. I
think we are also going to have to learn that ultimately,
these are tools. Just like anytime somebody sends me a draft,
I review the draft, I don't just accept the draft.
And so that ability to work with this copilot,

(04:29):
give it feedback, know how to verify it. It's literally
like inspecting somebody's homework, right? Which is, hey, tell me
exactly how you did this, so that I can verify.

Speaker 4 (04:37):
Those are the kinds of things that we'll learn.

Speaker 1 (04:39):
You're trying to reinvent search with this AI-powered thing,
and I believe it's been using GPT-4

Speaker 4 (04:45):
For a while now.

Speaker 2 (04:47):
What's worked? What hasn't?

Speaker 3 (04:49):
One thing that we're learning is the search context, right?
So conversational search is a thing. So this grounding of
your conversation with search data, I think, is one mode,
and then there is a completely different

Speaker 4 (05:05):
mode that we've also learned, which is people just want
to chat.

Speaker 3 (05:09):
So we are now getting good at, even in the product
design, making that an explicit choice.

Speaker 4 (05:15):
So for example, when we launched Bing.

Speaker 3 (05:17):
We didn't have these three modes. We now have: how
precise do you want it to be, how creative do
you want it to be, or do you want it balanced?
That I think is one of the biggest learnings: we
learned that, oh wow, people do in fact want
to engage, even in what is chat inside of search,
in different ways, and we've got to put the user
back in control.

Speaker 1 (05:35):
How much market share do you think you can really
take from Google? Like, your prediction. Give me a
three-year projection.

Speaker 4 (05:40):
We are thrilled to be in search.

Speaker 3 (05:43):
We're a very small player in search, and I look
at it as, every inch we gain is a big gain.

Speaker 1 (05:50):
You're coming for search, they're coming for Office. They're now
putting AI in there, you know, Google Docs and Sheets
and Gmail. Are we just going to see you and
Sundar kind of one-up each other every week in
this race to greatness?

Speaker 3 (06:04):
You know.

Speaker 4 (06:05):
I just want Bard and Bing both to thrive.

Speaker 3 (06:09):
I just want Google Workspace and Microsoft 365 both
to thrive. I mean, look, at the end of
the day, the fun part of being in this industry
and competing is the innovation, and competition is, last time
I checked, a fantastic thing for users and the industry.
And so yeah, I think, you know, Google
is a very innovative company, and

(06:30):
we have a lot of respect for them, and I
expect us to compete in multiple categories.

Speaker 1 (06:34):
In my decade plus covering Microsoft, I can't remember you
releasing this much in quick succession.

Speaker 2 (06:41):
Why is it all happening so fast?

Speaker 3 (06:43):
Yeah, you know, it sort of sometimes feels like
it's all happening fast.

Speaker 4 (06:47):
We started working

Speaker 3 (06:49):
on this, you know, a good four years ago, right?
I mean, in some sense, if you think about when
OpenAI and Microsoft came together and said, hey, this
next generation of large language models needs new infrastructure. Let's
build the infrastructure, tune the infrastructure, let's understand even what
AI safety and alignment look like for these.

Speaker 4 (07:10):
What are the use cases?

Speaker 3 (07:11):
And this has been four-plus years in the making. So yes,
it feels

Speaker 4 (07:16):
like we've launched a lot of things just in a hurry

Speaker 3 (07:18):
This year, but it's been four years in the making,
and obviously it's a great partnership with OpenAI.

Speaker 1 (07:23):
Microsoft reportedly laid off a team focused on ethical and
responsible AI. Meantime, you've got the Center for Humane Technology
calling the race to AI a race to recklessness.

Speaker 2 (07:35):
How do you respond to that?

Speaker 3 (07:37):
In terms of impact on anybody at Microsoft, this is
just probably the thing that weighs on me heavily, because
after all, any restructuring is hard on the people
who are most impacted.

Speaker 4 (07:48):
That said, two things.

Speaker 3 (07:50):
One is, this is no longer a side thing for Microsoft,
right? Because in some sense, whether it's design, whether it's alignment, safety, ethics,
it's kind of like saying quality or performance or design is
core design. So I can't now have an AI team
on the side. It's all mainstream. So in some
sense, that's the hard process that companies like ours are going

(08:11):
to constantly go through.

Speaker 4 (08:12):
A lot of change.

Speaker 3 (08:14):
And then I think, if anything, debate, dialogue, and scrutiny
on what this space of innovation is, and whether it is
really creating benefits for society, are absolutely welcome.
I look at it and say, no
one can run faster than the benefits to the broader

(08:35):
society, and the norms that we enforce as a
democratic society on any technology. And so I feel like
we are at the very early stages of it. So
I would ask us to be open to it, but
at the same time scrutinize it, and let's have a
dialogue on what the benefits are. And in that context,
let's also recognize, especially with this AI, why were we

(08:56):
not asking ourselves about the AI that's already in our
lives and what it is doing? Right? We've
gone straight to saying, oh wow, these LLMs have
some hallucination.

Speaker 4 (09:07):
Guess what.

Speaker 6 (09:08):
There's a lot of AI where I don't even know
what it's doing, except I'm happily clicking away and
accepting the recommendations. So why don't we, in fact, educate
ourselves to ask what all of this AI is doing in
our lives, and then say how to do it safely,
in an aligned way.

Speaker 1 (09:24):
Elon Musk, who co-founded OpenAI and then left, has
said it's not what he intended: it is closed-source
and effectively controlled by Microsoft.

Speaker 4 (09:33):
How would you respond?

Speaker 3 (09:34):
First of all, OpenAI cares deeply about their mission
and doing it in the most safe way and in
the most open way, and there's an interesting trade-off between
openness and safety. So that is sort of one of
the reasons why they have what they have in terms
of their governance architecture, and so therefore at some level

(09:54):
they have been very, very clear on what principles drive them. Similarly,
we have been very, very clear on the principles that
drive us around AI safety and responsibility, and we'll stick
to them.

Speaker 1 (10:05):
I have to ask you a question about the economy
and whether you're concerned about a prolonged tech bust. I mean,
we've seen the collapse of three banks, tighter money, more uncertainty.

Speaker 2 (10:16):
How are you thinking about this?

Speaker 3 (10:18):
I think, at the highest level, there was
an aberration of maybe a ten-year period of low
interest rates and everything that came with it, not just
in tech but in the broader economy, and I just
think that we're just getting back to normal.
The thing that perhaps we have to remind ourselves is, mostly

(10:40):
the world looked like this: interest rates were
higher than zero, inflation was perhaps, maybe structurally, going to
be higher, just given everything that's happening with supply chains
and the geopolitics, and we all as businesses have to
be accountable for how we manage in that environment, and
tech is one sector. And so I kind of look

(11:03):
at this and say, hey, it's a return to normal,
as opposed to anything that we need to
be worried about as being prolonged. In fact, this is
the long run. The economies have to sort of be
more real.

Speaker 2 (11:15):
All right. So this is normal to you.

Speaker 3 (11:17):
I mean, I think that sometimes we sort of say,
you know, the last ten years can never be
the way forward, and that's good. I think
it's better to have businesses that are run efficiently, that
are actually measured along the way, whether it's on
societal impact or on real economic impact.

Speaker 1 (11:38):
In nineteen ninety-five, Bill Gates sent a memo calling the
Internet a tidal wave that would change all the rules
and was going to be crucial to every part of
the business.

Speaker 2 (11:48):
Is AI that big?

Speaker 3 (11:49):
Yeah. I mean, in fact, I sort of say
ChatGPT, when it first came out, was like when
Mosaic first came out, I think in nineteen ninety-three,
as the first browser.

Speaker 4 (11:59):
And so yes, it does feel.

Speaker 3 (12:01):
like, you know, the Bill memo in nineteen ninety-five.
It does feel

Speaker 4 (12:04):
Like that to me.

Speaker 1 (12:06):
So it's as big as the internet?

Speaker 4 (12:07):
I think it's as big.

Speaker 3 (12:08):
It's just, like in all of these things, right, we
in the tech industry are, you know, classic experts at
overhyping everything. I hope at least... what motivates me
is, I want to use this technology to truly do
what I think at least all of us are in
tech for, which is democratizing access to it. So when
someone says to me, hey, here is how a farmer

(12:28):
in rural India, you know, can use this technology to
express a complex thought on how to get a subsidy
from a government program and can do that successfully.

Speaker 4 (12:40):
That gives me a lot of sort of you know, hope.

Speaker 1 (12:42):
I think a lot about my kids and how AI
will have something that I don't, which is an infinite
amount of time to spend with them, and how these
chatbots are so friendly, and how quickly that could turn
into an unhealthy relationship or you know, maybe it's nudging
them to make a bad decision.

Speaker 2 (12:59):
As a parent, does any part of that scare you?

Speaker 3 (13:02):
So that's kind of one of the reasons why I
think this move from autopilot to copilot hopefully
gives us more control, whether it's

Speaker 4 (13:11):
as parents or, more importantly, even as children.

Speaker 3 (13:14):
Like, one of the things that was very cool to
see in the launch of GPT-4 was the demo,
or the launch, of the Khan Academy stuff. Sal sent me
this last night and I was looking at his algebra class.
It was so engaging, right? I mean, think about it.
Like, one of the dreams we've always had is, can
I have a personalized tutor that is engaging, that is

(13:36):
actually trying to teach me. We should, of course, be
very, very watchful of what happens, but at the same time,
I think this generation of bots, this generation of
AI, probably just goes from engagement to giving us
more agency to learn.

Speaker 1 (13:53):
What do you think about GPT-4 and, like, how
big a leap it is?

Speaker 4 (13:55):
It is. It's pretty nonlinear, right?

Speaker 3 (13:57):
That's the advantage of these models: they're showing,
generation to generation, that they're getting more efficient at the
current task, and they're showing emergent capability. Like, for example,
from GPT-3 to 3.5, it learned to code,
so similarly now, it's sort of, really, like, look at
its performance on all those standardized tests. I mean

(14:19):
that is pretty stunning reasoning. So the thing that I
feel is, this is the closest thing we have to
a reasoning engine that all of us can use to better

Speaker 4 (14:32):
Make sense of the world.

Speaker 3 (14:33):
And so going back to the kids part, like, my
daughter said this to me the other day, which I think
was the most profound, which is, it's kind of
like having a study buddy and a tutor all at
the same time.

Speaker 4 (14:45):
She was using Bing chat.

Speaker 3 (14:47):
I think she had a PDF open and she was
querying the PDF, and it's kind of like, wow, I'm
able to ask questions. Which, you know, it's not always...
you can't always, like, go to your tutor or
whatever it is that you need to go to. It's
so much easier to have this tool available to make
better sense of the world.

Speaker 4 (15:08):
So yeah, I think it's a tool that has its place.

Speaker 2 (15:11):
And you're just excited, not scared.

Speaker 3 (15:13):
That's kind of the big debate right now. I
am more excited.

Speaker 4 (15:17):
Like, the reason is, even if

Speaker 3 (15:19):
you narrowly look at it from a technology perspective, this
is more empowering and more understandable than these recommendations on
some social media site or what have you, which are
being driven by some other black-box engagement algorithms. So
I'm open to pushback and scrutiny and debate

(15:40):
on it. I don't think this is anywhere close to AGI.
We are not close to any runaway AI problem. Jailbreaks,
yes, but, you know, we can always learn and
put on safety rails.

Speaker 4 (15:51):
So I think we overstate the risk.

Speaker 3 (15:54):
We're understating the benefit, even in relation to the current set
of technologies

Speaker 4 (16:00):
And its uses and its harms.

Speaker 3 (16:01):
That's kind of what I think would be a good
comparison to actually pull out. Like, hey, would I rather
have this, or a recommendation engine where I don't even
know what it's doing?

Speaker 1 (16:12):
Yeah. Well, we'll continue this conversation after this quick break.

Speaker 4 (16:15):
I want to ask about

Speaker 1 (16:16):
Jobs, because obviously Microsoft makes software that helps people do
their jobs, and I wonder if AI-laden software

Speaker 2 (16:23):
Will put some people out of jobs.

Speaker 1 (16:25):
Sam Altman has this idea that AI is going to
create this kind of utopia and generate wealth that's going
to be enough to cut everyone a decent-sized check,
but eliminate some jobs.

Speaker 2 (16:36):
Do you agree with that?

Speaker 3 (16:37):
You know, look, everybody from Keynes to, I guess, Altman,
they've all talked about the two-day work week, and
I'm looking forward to it.

Speaker 4 (16:45):
But the point is, look, the lump-of-labor

Speaker 3 (16:49):
fallacy has never proven out, right? Which is, in some
sense, there is displacement. And in fact, if anything, what
we have to do is really do a fantastic job
as a society to deal with any displacement, because if
one job turns into another job, you have to then
skill people for another job. And in fact, in an
interesting way, here's one thing. Even in this Microsoft

(17:12):
365 tool, like, there is this Power Automate tool.
Up to now we've called it the low-code, no-code
tool for doing workflow automation. Interestingly enough, you now
can automate workflows just using natural language.

Speaker 4 (17:24):
Guess what that means.

Speaker 3 (17:25):
Anybody who is on the front lines in healthcare and
retail can automate, or be part of the IT journey.
That to me means new jobs and better
wage support. So I feel, yes, there are going to be
some changes in jobs. There are going to be some places
where there will be wage pressure; there will be opportunities

(17:45):
for increased wages because of increased productivity. We should look
at it all and at the same time be very
clear-eyed about any displacement risk, because one thing that
we've also learned in the last twenty years is that
any society has to really pay attention to who
the winners and the losers are, and make sure that

(18:06):
as a society we are not really, you know, imbalanced
in terms of economic opportunity; then we will be
better off.

Speaker 1 (18:15):
At the center of a potentially tectonic shift in job
creation is Sam Altman. He's promised that AI will create
a kind of utopia when it joins the workforce, while
also raising alarm about the dangers, signing his name to
statements warning about the risk of extinction from AI. Over
the summer, Altman traveled the world to talk about the
promise and peril of AI. I caught up with him

(18:39):
when he returned to San Francisco, backstage at Bloomberg's annual
Tech Summit.

Speaker 2 (18:43):
So you've been traveling a ton.

Speaker 7 (18:45):
Yeah, what's the, like, eat, sleep, meditate, yoga tech routine?

Speaker 8 (18:51):
There was, like, no meditation or yoga on the entire trip,
and almost no exercise. That was tough.

Speaker 5 (18:56):
I slept fine.

Speaker 2 (18:57):
Actually, was the goal more listening or explaining?

Speaker 8 (19:00):
The goal was more listening. It ended up with more
explaining than we expected. We ended up meeting, like, many,
many world leaders and talked about, sort of, the
need for global regulation, and that was, like, more explaining.
The listening was super valuable.

Speaker 5 (19:15):
Came back with like one hundred handwritten pages of notes.

Speaker 7 (19:17):
I heard that you do handwritten notes. What happens to the
handwritten notes?

Speaker 8 (19:21):
But in this case, like I distilled it into like
here were the top fifty pieces of like feedback from
our users and what we need to go off and do.
But there's like a lot of things when you like
get people in person, like face to face or over
a drink or whatever, where people really will just like say,
you know, here is like my very harsh feedback on
what you're doing wrong.

Speaker 5 (19:38):
And I don't want to be different.

Speaker 2 (19:39):
You didn't go to China or Russia.

Speaker 5 (19:41):
I spoke remotely in China, but not Russia.

Speaker 2 (19:44):
Should we be worried about.

Speaker 7 (19:45):
Them, and where they are on AI, what they're doing?

Speaker 8 (19:51):
I'd love to know more precisely where they are. That
would be helpful. We have, I think, very imperfect information there.

Speaker 2 (19:58):
So how has ChatGPT changed your own behavior?

Speaker 8 (20:03):
There's like a lot of like little ways and then
kind of like one big thought. The little ways are,
you know, like on this trip, for example, the translation
was like a lifesaver.

Speaker 5 (20:12):
I also use it if.

Speaker 8 (20:15):
I'm trying to like write something which I write a
lot to never publish, just like for my own thinking,
and I find that I like write faster and can
think more somehow. So it's like a

Speaker 5 (20:24):
Great unsticking tool. But then the big way is, I
see the

Speaker 8 (20:29):
Path towards like this just being like my super assistant
for all of my cognitive work super assistant.

Speaker 7 (20:36):
You know, we've talked about relationships with chatbots. Did you
see this as something that people could.

Speaker 2 (20:40):
Get emotionally attached to? And how do you feel about that?

Speaker 8 (20:43):
I think language models in general are something that people
are getting emotionally attached to, and you know, I have
like a complex set of thoughts about that. I personally
find it strange. I don't want it for myself. I
have a lot of concerns. I don't want to be
like the kind of like people telling other people what
they can do with tech. But it seems to me

(21:04):
like something you need to be careful with.

Speaker 7 (21:06):
You've talked about how you are constantly in rooms full
of people going "holy..." Yeah, what was the last "holy..." moment?

Speaker 8 (21:14):
It was like very interesting to get out of the
SF echo chamber or whatever you want to call it
and see like the ways in which the holy concerns
were the same everywhere, and also the ways they're different.
So like everywhere people are like, the rate of change
seems really fast. You know, what is this going to
do to the economy?

Speaker 5 (21:32):
Good and bad. There's change, and change brings anxiety for people.

Speaker 2 (21:35):
There's a lot of anxiety out there. There's a lot
of fear.

Speaker 7 (21:38):
The comparisons to nuclear, the comparisons to bioweapons: are those
fears overblown, or do they matter?

Speaker 8 (21:44):
It is a lot of anxiety and fear, but I
mean there's, like, way more excitement out there, I think.
Like with any very powerful technology: synthetic bio and nuclear are
two of those, AI is a third. There are major downsides
we have to manage to be able to get the upsides.
And with this technology, I expect the upsides to be
far greater than anything we have seen, and the potential

(22:05):
downside is also, like, super bad. So we do have
to manage through those. But the quality of conversation about
how to productively do that has gotten so much better,
so fast. Like, I went into the trip somewhat optimistic
and I finished it super optimistic.

Speaker 7 (22:20):
Yeah, so is your bunker prepped and ready to go
for the AI apocalypse?

Speaker 8 (22:23):
A bunker will not help anyone if there's an AI apocalypse.
But I know that, like, you know, journalists seem to
really love that story.

Speaker 2 (22:31):
I do love that.

Speaker 8 (22:32):
I wouldn't overcorrect on, like, boyhood survival prep.

Speaker 5 (22:36):
Uh, Boy Scout... I like this stuff. Yeah, it's not
going to help with AI.

Speaker 7 (22:40):
There's been talk about the kill switch, the big red button.

Speaker 5 (22:43):
I hope it's clear that's a joke.

Speaker 2 (22:45):
It's clear it's a joke. Could you actually turn it
off if you wanted to?

Speaker 8 (22:50):
Yeah, sure, I mean we could like shut down our
data centers or whatever.

Speaker 5 (22:54):
But I don't think that's what people mean by it.

Speaker 8 (22:57):
I think what we could do instead is all of
the best practices we're starting to develop around how to
build this safely: the safety tests, external audits, internal and external
red teams, lots more stuff. Like, the way that it
would be turned off in practice is not the dramatic,
you know, gigantic switch from

Speaker 5 (23:14):
The movies that cuts the power. Blah blah blah, it's
that we have.

Speaker 8 (23:19):
Developed and our continue to develop these rigorous safety practices,
and that's what the kill switch actually looks like.

Speaker 7 (23:24):
But it's not as theatrical. There is now a new
competitive environment, for sure, and OpenAI is clearly the
front-runner.

Speaker 2 (23:31):
But who are you looking over your shoulder at?

Speaker 8 (23:33):
This is like not only a competitive environment, but I
think this is probably the most competitive environment in tech
right now.

Speaker 5 (23:39):
So we're sort of like looking at everybody.

Speaker 8 (23:41):
But I always, you know, given my background in startups,
I directionally worry more about the people that we don't
even know to look at yet, that could come
up with some really new idea we missed.

Speaker 2 (23:53):
How would you describe your relationship with Satya Nadella?
How much control do they have?

Speaker 1 (23:58):
You know, I've heard people say, you know, Microsoft's
just going to buy OpenAI.

Speaker 2 (24:02):
You're just making big tech bigger.

Speaker 8 (24:05):
The company's not for sale. I don't know how to be
more clear than that. We have a great relationship with them.
I think these, like, big major partnerships
between tech companies usually don't work.

Speaker 5 (24:19):
This is an example of one working really well. We're,
like, super grateful for it.

Speaker 2 (24:22):
Have you talked to Elon at all behind the scenes? Sometimes?
What do you guys talk about? I mean, it's getting
heated in public.

Speaker 8 (24:29):
Yeah, I mean we talk about like a super wide
variety of important and totally trivial stuff.

Speaker 7 (24:37):
Why do you think he's so frustrated? Or, kind of,
I mean, it's almost like there's some attacking

Speaker 5 (24:42):
Going on. You should ask him.

Speaker 2 (24:44):
I would like to know. I'd like to better understand it.

Speaker 8 (24:47):
I don't think this is in the top like one
hundred most important things happening related to AI right now.

Speaker 7 (24:52):
For what it's worth, is there any aspect of our
lives that you think AI should never touch?

Speaker 8 (24:58):
My mom always used to say, never say never, never
say always. And I think that's, like, generally good advice.
If I make a prediction now, I'm sure it could
end up being wrong in some sort of way. I
think AI is going to touch most aspects of our lives,
and then there will be some parts that stay surprisingly
the same. But those kinds of predictions are, like, humbling

(25:19):
and very easy to get wrong.

Speaker 1 (25:21):
What's the percentage chance we get to the good future
versus the bad future?

Speaker 5 (25:25):
Very high, but I don't know how to put a
precise number on it.

Speaker 8 (25:27):
You know, when you hear these people say, like, my
probability of doom is three percent, and mine's twelve, or
mine, sometimes, it's like, mine's nine and mine's thirteen.

Speaker 5 (25:34):
And I have this huge argument. I'm just not smart
enough to give numbers that precise.

Speaker 2 (25:38):
What do you think kids should be studying these days?

Speaker 8 (25:41):
Resilience, adaptability, a high rate of learning, creativity, certainly, familiarity
with the tools.

Speaker 1 (25:48):
So should kids still be learning how to code? Because
I've heard people say you don't need to learn

Speaker 2 (25:52):
How to code anymore. Just math, just biology.

Speaker 8 (25:54):
Well, I'm biased because I like coding, but I think
you should learn to code. I don't write code very
much anymore. I randomly did yesterday. But learning to code
was great as a way to learn how to think.
And I think coding will still be important in the future.
It's just going to change a little bit or a lot.
We have a new tool.

Speaker 2 (26:11):
What are we all going to do when we have
nothing to do?

Speaker 8 (26:14):
I don't think we're ever going to have nothing to do.
I think what we have to do may change, you know,
like what you and I do for our jobs would
not strike people from a few thousand years ago as
real work. But we found new things to want and
to do and ways to feel useful to other people
and get fulfillment and create, and that will never stop.

(26:35):
But probably, I hope, you and I, you know,
if we could look at the world a few hundred
years in the future, we'd be like, wow, those people have
it so good.

Speaker 5 (26:44):
I can't believe they called this stuff work. It's so true.

Speaker 2 (26:46):
So we're not all just going to be lying on
the beach eating bonbons.

Speaker 5 (26:49):
Some of us will, and more power to people who
want to do that.

Speaker 7 (26:52):
Do you think, in your heart of hearts, that the
world is going to be more fair and more equitable?

Speaker 4 (26:57):
I do.

Speaker 5 (26:58):
I do.

Speaker 8 (26:58):
I think that technology is fundamentally an equalizing force. It
needs partnership from society and our institutions to get there.
But, like, my big-picture, highest-level, zoom-all-the-way-out view
of the next decade is that the cost of intelligence
and the cost of energy come way, way down. And

(27:19):
if those two things happen, it helps everyone, which is great.
But I think it lifts up the floor a lot.

Speaker 2 (27:24):
So where do you want to take open AI next?

Speaker 8 (27:27):
We want to keep making better and better, more capable
models and make them available more widely and less expensively.

Speaker 2 (27:34):
What about the field of AI in general.

Speaker 8 (27:36):
There's many people working on this, so we don't get
to take the field anywhere.

Speaker 5 (27:39):
But we're pretty happy with our.

Speaker 8 (27:40):
Contribution. Like, we think we have nudged the field in
a way that we're proud of. So we're working on
new things too.

Speaker 2 (27:47):
What are the new things?

Speaker 5 (27:49):
They're still in progress.

Speaker 2 (27:50):
Is there room for startups in this?

Speaker 5 (27:52):
Totally. I mean, we were a startup not very long ago, but.

Speaker 2 (27:55):
You're almost already an incumbent.

Speaker 8 (27:57):
Of course. But when we started, like, you could have
asked the same question. In fact, people did. In fact,
I myself wondered, like, is it possible to take on Google

Speaker 5 (28:04):
And DeepMind? Or have they already won?

Speaker 2 (28:07):
And they clearly haven't.

Speaker 8 (28:08):
Yeah. Like, I think there's a lot of... it's always
easy to kind of count yourself out as the startup,
but startups, keep doing your thing.

Speaker 2 (28:17):
Well, nobody's counting you out.

Speaker 1 (28:20):
Thanks so much for listening to this episode of The Circuit.
I'm Emily Chang. You can follow me on Twitter and
Instagram at EmilyChangTV. You can watch new episodes
of The Circuit on Bloomberg Television or on demand by
downloading the Bloomberg app to your smart TV, and check
out our other Bloomberg podcasts on Apple Podcasts, the iHeartMedia app,
or wherever you listen to shows, and let us know

(28:41):
what you think by leaving us a review. I'm your
host and executive producer. Our senior producer is Lauren Ellis.
Our associate producer is Lizzie Phillip. Our editor is Sebastian Escobar.
Thanks so much for listening.