All Episodes

May 9, 2024 40 mins

Ahead of Google's developer conference I/O, Google & Alphabet CEO Sundar Pichai sits down with Bloomberg Originals Host & Executive Producer Emily Chang to discuss the future of search, accelerating work on Google's AI models and how his upbringing prepared him for this moment.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
I'm Emily Chang, and this is The Circuit. Google has been the front door of the Internet for more than two decades. Now there are so many other doors: TikTok, Instagram, Amazon, Reddit, and of course OpenAI with ChatGPT and so many rising AI players. Google may not be the first place you go for answers anymore. So the question

(00:23):
is, what are they going to do about it? I met the person in charge of answering that question, Alphabet and Google CEO Sundar Pichai, at Google's campus in Mountain View. We had a wide-ranging discussion about where the search giant stands in this AI moment ahead of its big annual Google I/O conference. Joining me now, Alphabet and Google

(00:43):
CEO Sundar Pichai. Thank you for doing this. We really appreciate it.

Speaker 2 (00:47):
Likewise.

Speaker 1 (00:48):
You've got a big event to prepare for. What does
it take to steer something as large as Google? And
has your answer changed in the last ten years?

Speaker 3 (00:57):
I think the part which hasn't changed is I think we are a deep technology company, and so we focus at that level, and AI is the best example of that.
So focusing at that level and making sure you're driving
innovation there and translating it into products and solutions. So
I think that part doesn't change. But at the scale,

(01:17):
at which we operate, with many different businesses, you have to find a way to focus and channel energy into the real areas that matter, so that takes constant work.

Speaker 1 (01:25):
I think you're bringing in three hundred billion dollars in
revenue a year, making more money faster than ever before,
from multiple businesses. What does printing money look like in
the age of AI?

Speaker 3 (01:39):
These things take time. For example, we just announced that the combination of YouTube and Cloud will exit Q4 at an annual run rate of over one hundred billion dollars. Now, we built these businesses over the past eight years or so, right, and so these things take time and you have to have a long-term view and invest in them. We are doing the same, be it Search, YouTube, Cloud, Android.

(02:04):
We have longer-term bets like subscriptions, Waymo, and
so on. So again you're investing and you build it
over time, and that's what translates into revenue and business success.

Speaker 1 (02:17):
Search is still the heart of Google. Some leading computer
scientists have said search is getting worse, more SEO spam, et cetera. Do you see their point?

Speaker 3 (02:27):
Part of what makes search a hard problem, where we put all our focus, is anytime there's a transition, you get an explosion of new content, and AI is going to do that. So for us, we view this as the challenge which will define the work we do. The reason at Google we take pride in our search

(02:48):
quality teams is to separate the high-quality content from the low-quality content.

Speaker 2 (02:54):
So we have.

Speaker 3 (02:54):
Always, through many, many years of search, there are moments where we see a rise in new content, which is both great, because it allows for richer information to come in, but there's a lot of spam that comes in too. So solving that is viewed as a strength of search. Over the past few months, we have announced a set of changes and we are just getting started, making sure search will

(03:16):
again do the same through this AI moment, and I actually think there'll be people who struggle to do that right. So doing that well is what will define a high-quality product, and I think it's going to be the heart of what makes search successful.

Speaker 1 (03:29):
The choices you make influence how billions of people get
their information, and if the new Google is only going to be more and more AI: AI is super helpful sometimes, but sometimes it's still deeply wrong. Where do you draw the line?

Speaker 3 (03:43):
I think part of what makes Google Search differentiated is, while there are times we give answers, it will always link to a wide variety of sources. We know this from having served users for a long time. We've had answers in search now for many, many years. We are just now using generative AI to do that.

(04:03):
But we know people want to explore more. They have
an interest, a curiosity.

Speaker 1 (04:10):
And so the links will live on?

Speaker 3 (04:12):
Yes, and it will always be an important part of search. That's what users want, and so there will be times when they want quick answers.

Speaker 2 (04:19):
And I gave the earlier example.

Speaker 3 (04:21):
My son is celiac, so we did a quick question to see whether something is gluten free. We just want to know, but often it leads to more things, and then you want to explore more. So people have different intents. I think understanding and meeting all those needs is part of what makes search unique.

Speaker 1 (04:36):
The images that Gemini initially generated of Asian Nazis and
black founding fathers, you said that was unacceptable. Why haven't
you re-released this yet?

Speaker 3 (04:47):
We obviously have instituted a set of changes, organizationally and process-wise, investing more in red-teaming. We realize it's an opportunity.
We are training our next generation of image models, so
from the ground up, retraining these models just to make
sure we're also making the product better. As part of that,
we viewed it as an opportunity to do everything from

(05:08):
the ground up correctly, and so we are working on
that and as soon as it's ready, we'll.

Speaker 2 (05:12):
Get it out to people.

Speaker 1 (05:13):
So it's going to be a while.

Speaker 2 (05:15):
I don't think so.

Speaker 3 (05:15):
I think it will be a few weeks from now,
but you know, we are definitely making great progress there, so it'll be out.

Speaker 1 (05:21):
So now people are calling this woke AI, and it's not just happening here; it's happening across the industry. The way I understand it, AI is built on patterns that it sees, and if you look at any pictures of the founding fathers, you're seeing old white men. How did the model generate something that it never saw?

Speaker 3 (05:39):
We explained this before. Obviously, we are a company which serves products to users around the world, and there are generic questions. For example, people come and say, show me images of school teachers or doctors or nurses, and we want it to be representative. You know, we have people asking this query from Indonesia or the US.

(06:01):
How do you get it right for our global user base? That's what we were trying to get right. Obviously, the mistake was that we over-applied it, including cases where it should have never applied. So that was the bug, and you know, we got it wrong. So we're investing more in testing and red-teaming to make sure that doesn't happen.

Speaker 1 (06:21):
Would you say it's like good intentions gone awry?

Speaker 3 (06:24):
In this particular case, yes. But still, we are rightfully held to a high bar, and I think we clearly take responsibility for it, and we're going to get it right.

Speaker 1 (06:32):
How concerned are you about AI-generated content ruining search? For example, the AI-generated selfie of Tank Man in Tiananmen Square: it shows up in Google search results, but it never happened.

Speaker 3 (06:44):
The challenge for everyone, and the opportunity, is how do you have a notion of what's objective and real in a world where there's going to be a lot of synthetic content. I think it's part of what will define search in the decade ahead. For example, in search, being able to detect AI-generated content, and also, over time,

(07:05):
showing provenance: when did this image first appear online? Giving sources. You know, it's the kind of work we are undertaking, right? So I actually see it as an opportunity. We
see this today, even in this world of AI, people
often come to Google right away to see whether something
they saw somewhere else actually happened. It's a common pattern
we see, and so we understand what people are trying

(07:27):
to do, and so we're working very, very hard. We
are making progress, but it's going to be an ongoing journey.

Speaker 1 (07:32):
That's such an interesting point. How much of content on
Google is AI generated? And is that percentage growing? How
do you track it, how do you categorize it? And
do you worry about it degrading the results over time?

Speaker 3 (07:46):
If we stand still, for sure there will be more AI-generated content. We just announced a new set of guidelines for ranking and quality, and as part of that, we are using AI to make our algorithms better at detection, and we want to elevate human voices and human perspectives,

(08:06):
and so that is the core work we are undertaking.

Speaker 1 (08:10):
Will LLMs ever be truly reliable, or is there a ceiling to their accuracy?

Speaker 2 (08:15):
It's a great question. You know.

Speaker 3 (08:17):
LLMs are essentially predicting the next words in a sequence, and so today they can hallucinate. I think they will get better. We will also have newer breakthroughs. So, for example, in search, when we are using LLMs for AI Overviews, we are grounding it. We call it grounding in the

(08:38):
underlying search results. So we check it to make sure
what it's saying there is accurate. So we focus on factuality.
So you're trying to harness the creativity of LLMs, but
grounding it to be factual. So there are going to
be more and more techniques we will all work on.
We are definitely very very focused on it. So I
think this will be an area of debate. I think

(08:59):
we will constantly make progress on this. That's the way
I think about it.

Speaker 1 (09:03):
But will they ever be perfectly right?

Speaker 3 (09:05):
I think as long as it's anchored and presented in a product with supporting information, I think it can be. If you just give a standalone LLM answer, well, we see this today: people, if they've read something somewhere and they don't know whether it's true, they come to Google to check whether it's true. So we understand this, and so we'll always ground it with sources and point to what others have written about it, et cetera.

Speaker 1 (09:28):
You make a ton of money on ads next to
the links generated by searches. If a chatbot is giving you answers and not links, and maybe more answers than links sometimes, are we in the midst of an assault on Google's business model?

Speaker 3 (09:41):
People asked questions like that when people switched from desktop to mobile. What we've always seen is we don't show ads on a vast majority of our queries. We show them when users have a commercial intent, right. People are looking for commercial content, and ads happen to be a valuable source of information. So you have

(10:01):
merchants trying to reach users in those moments. So we've always found people want choices, including in commercial areas, and that's a fundamental need, and I think we've always been able to balance it. As we are rolling out AI Overviews in Search, we've been experimenting with ads, and the data we see shows that those fundamental principles will hold true

(10:24):
during this phase as well.

Speaker 1 (10:25):
Now, every keystroke, every email, everything we've searched is data
that we've given to Google, and that can all be
fed into your AI models, which is a huge competitive advantage.
What debates are you having internally about how you use
that data?

Speaker 3 (10:41):
We give a lot of controls to users. You can automatically delete your data in Google as you use it, and for AI, for example, if you use Gemini, you know we don't use your data to train the models. In general, there may be use cases where we will get permission to do so, but I've always felt, for two decades, people have used products like Gmail,

(11:03):
Google Photos, and we've earned that trust because we don't misuse that data. I think that's the foundation on which we are achieving our success. So privacy is always foundational to everything we do, and that'll be true even with AI as well.

Speaker 1 (11:18):
When did you learn OpenAI was using YouTube transcripts to train its models, and what's your position on that?

Speaker 3 (11:24):
I mean, it's a question for OpenAI to answer. We
have clearly stated policies in terms of what is acceptable
use for YouTube, and so we definitely expect others to abide by those guidelines. So that's how I think about it.

Speaker 1 (11:37):
Meantime, you've got AI systems that are running out of training data. What are the implications of that?

Speaker 3 (11:43):
I think one of the challenges is going to be
as we scale to the next generation of models and
the models get much larger than they are today, you know,
what is the source of training data? I think there
is still data which is not included in these models
that can be included that can be useful. But I
think over time, if you look at AlphaGo, which

(12:04):
is a product which we designed to solve Go and
chess and so on, AI models learned by playing.

Speaker 2 (12:11):
With each other.

Speaker 3 (12:13):
So in the field we call this self-play. There are also notions of synthetic data. So over time there's this notion of: can you have models create outputs for other models to learn from? These are all research areas now, so I think those are all important areas where we will achieve breakthroughs to continue making progress.

Speaker 1 (12:31):
Right now, you've got companies turning to AI-generated data to train their models. Aren't there risks to that?

Speaker 2 (12:37):
Yes?

Speaker 3 (12:37):
I think the question through it all is: are you creating new knowledge? Are these models developing reasoning capabilities?

Speaker 2 (12:44):
Right?

Speaker 3 (12:44):
Are you making progress in the intelligence of these models? I think those are the frontiers. We need to prove that you can do that by using these techniques, and to be very clear, these are the cutting-edge research areas where we are investing a lot of resources.

Speaker 1 (12:58):
On that note, data is the new oil, and LLMs are proving that out all over again. But do we need new laws? Can I really publish an article online and say,
but the AI can't train on it.

Speaker 3 (13:11):
We allow people who are creating content to opt out, specifically, of our Gemini model training, and so we've given people the choice to opt out. You know, I think it's an important moment where we have to balance what have always been important notions of fair use, how you can use it for derivative work, and how you protect the rights of content holders. I

(13:33):
think these are important questions, and it's important we strike the right balance even through a moment like this. And I think both notions are equally important.

Speaker 1 (13:40):
Yeah, I know you've said there will be more and more breakthroughs. But is LLM technology nearing a plateau?

Speaker 3 (13:48):
I would be surprised if LLMs are the only thing we would need to make progress. The way I would think about it is, already in the current generation of what we call LLMs, there are a lot of underlying breakthroughs. Many of them were developed at Google: Transformers, or contributions to the mixture-of-experts architecture underneath these models, or reinforcement learning from human feedback. There are

(14:11):
a lot of breakthroughs which have gone into what makes generative AI what it is today, and so I expect to have more breakthroughs. Whether we think of it as the next generation of LLMs or just AI making progress, that is a definitional thing, but what is more important is that we are driving that progress. One of the things that excites me about Google DeepMind is we are

(14:33):
not only building the cutting-edge models, we are investing a lot of compute and resources, as well as talent, in driving the next generation of breakthroughs. So we are doing that equally with a lot of focus, which is what gives me a lot of optimism that we will have more breakthroughs.

Speaker 1 (14:50):
There are big concerns that AI is creating this underclass
of workers who are poring over pages and pages of text and video and images while an upper class gets richer.
What do you do about that?

Speaker 3 (15:03):
The answer to a lot of this is companies need
to have a bar to make sure workers are well
taken care of. Over the past few years, we've had
to invest, for example, when people were monitoring content on YouTube,
how do you support people better? So I think there
are ways by which you can take care of workers' well-being through these things. So I think those are

(15:25):
important notions, important principles, and I think the same principles
apply during this AI moment as well.

Speaker 1 (15:31):
Training all these models requires a ton of energy. How
does the industry keep up with the demand for this computing power without ruining the planet?

Speaker 3 (15:40):
It's definitely very important to get right. At Google, we've
been carbon neutral since two thousand and seven, and over
time we've made a lot of renewable energy investments and commitments.
Some of our largest AI data centers today run almost entirely on carbon-free energy, so we have to really push the boundaries here. I think the question is, the

(16:02):
pace of inflection we are seeing with computing will make
this an extraordinary challenge, particularly in a three-year-plus timeframe,
So it's going to be important to keep up that focus.
We have definitely stated goals and we're going to try
really hard to make sure we can do this very sustainably.

Speaker 1 (16:19):
You know, as we were talking about, you pivoted the company to be AI-first years ago. But it seems, when you look at the big picture, like Google missed the big moment and ChatGPT took it. If you could go back, what would you do differently?

Speaker 3 (16:33):
To be clear, I take a long-term perspective. When the Internet first came about, Google didn't even exist, right? We weren't the first company to do search, we weren't the first company to do email, we weren't the first company to build a browser. So with AI, I view it as: we are in the earliest possible stages, and we've built so many foundational

(16:57):
components within the company, and we are channeling all that
to innovate ahead. So I think we are exceptionally well
set up. You always look back and say, well, if
you had done this differently or something. You do that
to learn and make the company better, But at any
given time, you want to be forward focused in terms
of what you can do from this moment on, and

(17:17):
that's what we've been focused on. I see the relentless
pace at which the teams are innovating now within Google,
and when I look at twenty twenty four, the year ahead,
I'm excited at our roadmap, and so I feel very optimistic.

Speaker 1 (17:31):
Your leadership style has been described as slow and steady
and cautious, sometimes maybe too cautious. And you're often compared
to these other tech leaders who are moving fast and
breaking things. How would you describe yourself?

Speaker 3 (17:43):
The reality, I think is quite different. One of the
first things I did when I became CEO was to pivot the company sharply to focus on AI, as
well as really invest more in YouTube and cloud to
build them into big businesses. These are big, important decisions
and consequential decisions. I constantly look to make those decisions.
I think the larger the company is, you are making

(18:06):
fewer consequential decisions, but they need to be clear and
you have to point the whole company to that. Part
of that at times involves bringing the company along. You
build consensus because that's what allows you to have maximum
impact behind those decisions. But I think in the technology
industry you have to make fast decisions.

Speaker 2 (18:26):
You have to move at a fast pace.

Speaker 3 (18:28):
If you don't do that, we won't be as successful
today as we are, so we'll continue to do that
as we move ahead.

Speaker 1 (18:34):
Any leader in a position like yours has to be
willing to hear the criticism, and I'm not going to
make you read the mean tweets like they do on
late night. But I do have a few. Why is Google running things through legal? Google doesn't have one single visionary leader, not a one. Do you think you're the right person

Speaker 2 (18:51):
To lead Google? Look, it's a privilege to lead the company.

Speaker 3 (18:55):
I look at all the progress we have made, and
I look at the opportunity ahead. It's definitely the privilege
of a lifetime. I think people will see the progress the
company is making. You know, as I said earlier, I
think people tend to focus on this micro moment, but
it is so small.

Speaker 2 (19:13):
In the context of what's ahead.

Speaker 3 (19:15):
And when I look at the opportunities ahead across everything
we do, and for the first time all of that has a common leverage technology with AI, I'd put a
lot of chips, at least from my perspective, on Google.

Speaker 1 (19:26):
All right, good to know. One more backward-looking question, and then we're going to look only forward. Google researchers invented the transformer, literally the T in GPT. Do you wish you'd capitalized on that louder and sooner?

Speaker 3 (19:39):
People underestimate part of what has made search better. We use transformers in search. BERT and MUM, these are all transformer-based models in search. That's what led to large gaps in search quality compared to other products. So we've infused transformers across our products. We have a chance to do that better with generative AI and with the Gemini

(20:01):
series of models, and we are doing that across our
product portfolio as well as providing it to businesses everywhere
using Google Cloud. Again, I feel there's going to be
more breakthroughs in this field, and so it feels like
we are well set up, we are moving fast and
there's a lot of innovation ahead.

Speaker 1 (20:20):
You recently fired Google employees who were protesting your work on this contract with the Israeli government for cloud services. It
seemed like a distinct change in tone for a company
that's historically welcomed all kinds of views. Why did you
take this.

Speaker 3 (20:35):
stand? Well, first of all, it's important to step back.
I think as a company, we've always had a culture
of vibrant and open debate. I think it has directly
led to a culture of creativity and collaboration, pushing each
other to build better products. I think it's always worked
best when it's in the service of our mission and
what we are doing for our users, So I think

(20:57):
that's an important principle to keep in mind. More than almost any other company, we give various ways by which employees can raise their concerns, and we take them seriously, and that hasn't changed. But it has to happen within a framework of respectful and civilized debate, and in a way that does not disrupt the workplace.
We are a business, and I think the vast, vast,

(21:19):
vast majority of employees, you know, abide by that. I think when we have cases, including in this case, where a few employees go beyond what's in the code of conduct and disrupt the productivity of the workplace, or do
so in a way that it makes other people feel uncomfortable,
I think we have to take action, and that's what
we are doing. It has nothing to do with the matter

(21:40):
or the topic they are discussing. It's about the conduct and how they went about it.

Speaker 1 (21:44):
I've talked to a lot of employees about this, actually,
and some folks thought it was a little draconian, But
some of your employees were glad to see you taking
a stand. Is this a new Google or.

Speaker 2 (21:55):
A new you?

Speaker 3 (21:56):
Almost all the employees here who I've talked to agree with the decision. I think they definitely don't think this is the way you express disagreement. So I think it is important. Over the past years, through the pandemic, the company has grown a lot, so sometimes for a large company it's worth going back and restating what you mean. And I think that's partly what I did, you know,

(22:18):
re-anchoring the company. I view, particularly in this moment with AI, the opportunity we have ahead of us as immense, but it needs a real focus on our mission. So I felt it was more important than ever to reiterate that to the company.

Speaker 1 (22:34):
There have been multiple rounds of layoffs. Why take this approach? Why not cut once and cut deep?

Speaker 3 (22:40):
It's a moment of growth and investment as well. But rather than just doing it by hiring, we are reallocating people to our highest priorities. So that's what this hard work is. There are cases where you're simplifying teams and moving people to focus on newer areas. There are times you're simplifying the organization, removing layers so that you can

(23:02):
improve velocity. So I think these are deliberate changes being undertaken by teams with a view to making the company better and making sure we are putting as many people against our highest priorities. And so that's why we have taken the time to do it correctly.

Speaker 1 (23:17):
Meanwhile, Microsoft is obviously making huge investments in AI as well: OpenAI, Inflection and Mistral. We've reported that their OpenAI investment was actually in part because they were worried about Google and wanted to catch up. How do you feel about the competition there, and should regulators be looking at it?

Speaker 3 (23:38):
If anything, I look at AI and I see a vibrant, dynamic, competitive field, which is great. It will really push innovation ahead. I've always held the view that if you're working in the technology space, there is a lot of competition. We see it all the time. The way you stay ahead is by innovating relentlessly, right. I think it has

(24:00):
to be true all the time, and so I think we've done that with search, and we'll continue to do that with search and across our other products, be it YouTube and so on.
So I view this as no different. It's just that
it's happening at a faster pace. But you know, technology
changes tend to get faster over time, so it's not
surprising to me at all.

Speaker 1 (24:19):
Microsoft CEO Satya Nadella has had some fighting words and moves.
Who's really choosing the dance music?

Speaker 3 (24:27):
I think one of the ways you can do the wrong thing is by listening to the noise out there and playing to someone else's dance music. I've always been very clear. I think we have a clear sense of what we need to do. We've been gearing up for this for a long time, and that's what we'll stay focused on.

Speaker 1 (24:42):
All right, so you're listening to your own music. That's exactly right. Mark Zuckerberg is making waves as an
open source AI player. Are you going to let him
own that narrative?

Speaker 2 (24:53):
Look?

Speaker 3 (24:53):
I think there are going to be important open-source contributions. I think it's important for the field. I mean, Google has published and shared a lot of knowledge to make this field progress forward. We are doing that with some models as well; we've announced Gemma, a series of open models. I think it's great that there's more open-source momentum, be it from Mistral, be it from Meta.

(25:14):
I think I would expect that in the field, and
I think it's good to keep the frontier of innovation moving.

Speaker 1 (25:21):
Any chance you want to buy TikTok now?

Speaker 3 (25:25):
I think we are focused on the products we are doing,
so it's not something we are looking at.

Speaker 1 (25:29):
What does a TikTok ban mean for Google?

Speaker 3 (25:31):
I think it's not clear there'll be a ban on TikTok.
I think the bill that's passed allows for a sale
of the product, so it's too early to tell. I
think there are many ways this could play out, but in all scenarios, I think there will be a version of the product around for users, so I'm not spending too much time thinking about it.

Speaker 1 (25:51):
Apple and Google are huge partners in a search deal
struck years ago. Will you be partners on Gemini too?
And to be clear, we've reported that Apple's talking to
both Google and OpenAI.

Speaker 3 (26:02):
We don't comment on partner discussions, but we've always cared
about making sure people can access our products easily. I
think it's consistent with our mission of making our products
universally accessible and useful, and so we've long had a
framework in which we think about these things. And so,
you know, maybe that's all I have to say there.

Speaker 1 (26:22):
What do you think is the future or potential of AI-powered hardware, and what will Google's role in it be? Is the smartphone going to be the form factor? Will there be something completely new?

Speaker 3 (26:31):
I think two things. I think still today the smartphone is sort of the center of your computing experience, and I think with AI you get a chance to rethink that experience over the next few years. And so I view it as an exciting opportunity for us to rethink Android, both with our partners and with Pixel as well. But I think

(26:52):
one of the things that excites me about the way we are thinking about Gemini is that it's natively multimodal. I think it can really come to life in a form factor like eyeglasses. So I think AI will end up playing a strong role in the vision of AR, et cetera. So I'm excited about that future as well. So I think it will apply to both; people will build purpose-built devices,

(27:14):
but I think that's still early. I still see the center of AI innovation happening in smartphones, followed by glasses, right. That's how I see it.

Speaker 1 (27:23):
Nvidia has become the power broker for AI chips. Meantime,
you are now investing in making your own chips. What
made you realize you needed to do this? It's a
huge undertaking.

Speaker 3 (27:34):
Nvidia is an extraordinary company. I think Jensen has been driving this investment for a long, long time, and they're seeing the fruits of that long-term view. As a company, Nvidia is an important partner for us. But we've always thought about this, you know: we are very proud of our infrastructure. We believe we have the best infrastructure in the world, and that applies to AI as well. And part of that, when we said the company was

(27:55):
going to be AI-first, we realized AI would need special purpose-built chips, so we built our first TPUs. I announced that at I/O in twenty sixteen. We are now in our fifth generation, so you will see us continue to invest there. We'll embrace both GPUs and TPUs and we'll give our customers choice. But these are

(28:15):
areas we view as foundational investments. We think about subsea cables, we think about our networking chips, we think about our end-to-end AI, what we call our AI Hypercomputer, right, our AI data centers. So these are what I view as core strengths that position us well for the decade ahead.

Speaker 1 (28:33):
Google is facing a ton of regulatory pressure in the US and abroad over your dominance in search, video, ads, the app store. Some other big companies have split themselves up to focus on their core. Has Google thought about that?

Speaker 3 (28:48):
If we look at it from a user perspective, people
are trying to solve problems in their day to day lives,
and so a lot of our products integrate in a
way that provides value for our users. So I think
that is important. Part of what allows us to compete
in the Google Cloud market is the investment in AI
we undertook because of Search. That is what allows us to

(29:10):
take that and compete hard against other larger companies like
Amazon and Microsoft in cloud.

Speaker 2 (29:17):
So I would argue that the way

Speaker 3 (29:19):
we are approaching it drives innovation and adds choice in the market.

Speaker 2 (29:23):
That's how I think about it.

Speaker 1 (29:25):
Last time we talked to you told me China will
be at the forefront of AI. How should policymakers
factor that into their decisions?

Speaker 3 (29:32):
I continue to hold that view. I think China is
investing a lot in AI. I think they will be
at the forefront of this technology as well. I think
it's important we as a country invest in AI as
well and are at the forefront. But I think over time,
from an AI safety standpoint, we need to develop frameworks
by which we achieve global cooperation.

Speaker 2 (29:52):
To achieve AI safety.

Speaker 3 (29:54):
I know it sounds far fetched now, but we've done
it in other areas like nuclear technology and so on
to some extent. I think we're going to need framework
like that, and so I would expect over time there
needs to be engagement with China on important issues like
AI safety.

Speaker 1 (30:10):
The world is voting this year, and misinformation is only
going to get more complicated in the age of generative AI,
and it's worse in other languages. What do you worry about?

Speaker 3 (30:21):
Look, I worry about the integrity of elections, particularly in a
year like this.

Speaker 2 (30:24):
You're right.

Speaker 3 (30:25):
I think almost one in three people in the world
are going to go through some kind of democratic electoral process,
which is extraordinary to see. I think we should celebrate that.
I think the role for us. You know, we've all
invested so much in election integrity over the years. We
have a lot of learnings to bring to bear, and
so we are investing early and ahead and deeper than

(30:47):
ever before to get it right. I think AI is
a new tool, but so far I don't think we've
seen something extraordinarily different because of it.
Time will tell. But we are doing our utmost to prepare
for what's ahead.

Speaker 1 (31:01):
Have you checked out how Google's doing back at home
in the Indian elections?

Speaker 3 (31:05):
We take pride in being a source of information for people,
and I think people come to look for information, and
I view this as no different than any other moment in time.

Speaker 2 (31:15):
I'm proud of one of

Speaker 3 (31:16):
the largest democratic processes anywhere in the world, and it's
always a heartening moment to see people vote, and so
it's great to see.

Speaker 1 (31:26):
Yeah, you're on the cusp of becoming a billionaire. What
are your philanthropic goals? Will we see you bring resources
back to India?

Speaker 2 (31:34):
Definitely.

Speaker 3 (31:35):
We've done some limited amount, but I've always viewed it
as: there's a phase of my life when I'm not
doing what I'm doing now, and I do want to
put a lot more time and energy and passion into
philanthropically giving back, and it's a privilege to be able
to do that.

Speaker 1 (31:48):
Are there any particular causes that you are really passionate about?

Speaker 2 (31:52):
Too early to tell.

Speaker 3 (31:54):
I've done a variety of things, but I'm still forming
a view of where I can be most impactful.

Speaker 1 (31:58):
There's no question that AI will reshape the labor market.
Is blue collar going to be the new white collar?

Speaker 3 (32:05):
I think at least the current phase of AI
looks like it will help people. It's true
in my use today, and that's how I expect
radiologists to use it, to have AI assisting you. So
I think there is a real scenario in which it
lowers the barrier. Take coding, for example: more people will

(32:26):
be able to code, it will take the grunt work
out of coding, it'll make people who code more productive,
it will expand the opportunity set, etc. So that's the
near term. Longer term, it's tough to predict. Typically in technology,
when we have predicted, it's kind of played out a
bit differently, so I still think it's too early to tell.
But yeah, I think AI in the physical world will

(32:48):
happen slower than in the virtual world. So maybe there's
an element where it impacts differently than other technology transformations
in the past.

Speaker 1 (32:57):
Artificial general intelligence: what does it mean to you, and
when do we get there?

Speaker 3 (33:02):
It's not a well-defined phrase; it means
different things to different people. I think it meant a
lot more many years ago, in the context when AI
was more narrow and couldn't do a general set of tasks.
That's why people would call out AGI as distinct. We are
definitely working on AI in a way that it's more generalized
technology now. But I think if you define AGI as

(33:26):
AI becoming capable across a wide variety of economic activity
and being able to do it well, I think that's
one way to look at it. That's how I think
about it in my head. I still think we have
some ways to go, but the technology is progressing pretty rapidly.

Speaker 1 (33:41):
So Google's going to get us to AGI?

Speaker 3 (33:43):
We are committed to making foundational progress towards AGI in
a bold and responsible way, and so, you know,
we'll focus on the effort to do that and do
that well.

Speaker 1 (33:53):
The concerns about AI leading to human extinction? Are those
legitimate or totally overblown?

Speaker 3 (33:58):
I think we are far away from needing to think
about things like that at this moment. But I definitely
have a more optimistic view of how this will play out.
I think the essence of humanity is being able to
harness technology in a way that benefits society, and more
than with any other technology, I see us having the conversations
early enough with AI. So that gives me faith in

(34:21):
humanity that we will get it right.

Speaker 1 (34:23):
You've said there are even some things about AI that you
don't understand. Will AI always be somewhat of a black box?
Will there always be some things that we will just
never know?

Speaker 3 (34:36):
I have a little bit of a counter to the
view there. Humans are very mysterious too, right? And humans
are more of a black box than we give credit for.
Often when people explain why they did something,
it's not entirely clear that's why they did that specific thing.
AI will also help us. Today we can't make sense
of many complex systems, you know, how does the global

(34:56):
economy work, et cetera. You know, AI will give us
more insight and more visibility into many complex things. So
it will explain the world better. And maybe over time
you can query the AI and you can get
better explainability. I think that should be one of our
design goals, one of our design attributes: to develop explainable AI over time,
and so I think it's too early to tell.

Speaker 1 (35:20):
When I asked OpenAI CEO Sam Altman why we
should trust him, he said, you shouldn't. Why should we
trust Google?

Speaker 3 (35:28):
I share the notion that you shouldn't blindly trust. You know,
that's why it's important to have systems in place. Regulation
has a part to play; it has to balance innovation.
But I think as the AI systems get more capable, you know,
regulation will have an important role to play, and it
shouldn't just be based on a system of trusting people

(35:48):
or trusting companies. I think that's not how you
deploy very powerful technology. But at this early stage of the technology,
you have to balance that with a view to allow

Speaker 2 (35:58):
Innovation to flourish.

Speaker 3 (36:00):
We have to remember that the positive upside here is
tremendous as well, in areas like healthcare and many
other areas. So I think we have to take that
view too. But over time, I think you have to
build frameworks to make sure this technology is deployed responsibly.

Speaker 1 (36:13):
We've talked a lot about the opportunities. What is the
biggest threat to Google's future?

Speaker 3 (36:19):
I view it as, for all companies, particularly at scale, the
biggest threat is not executing well. I think as long
as we stay focused on our mission and approach it
by building foundational technology and using it to build products,
innovate with it, and do that with a sense of

(36:41):
urgency and focus on users, I think we'll do well.
But that's what will define our success more than anything else.

Speaker 1 (36:49):
You spend so much time thinking about what Google's future
will be, what it should look like. What's the killer bet that
could secure Google for the next twenty five years? Is
it AI or is it quantum computing?

Speaker 3 (37:00):
I would say AI. I've always viewed AI as
that transformational opportunity for the company. I felt that for
almost a decade, and you know, I continue to feel
that about the next decade ahead.

Speaker 1 (37:11):
Are we going to look back on this LLM era
and laugh? Is this going to all look so basic
and rudimentary?

Speaker 2 (37:17):
I hope we do.

Speaker 3 (37:19):
My kids aren't impressed by touchscreens or the fact that
they have this extraordinary amount of computing in their hands.
So today, for example, people talk about, like, look at
how much computing we are using. To me, it doesn't feel
like a large amount. It's just large relative to what
it was before. So similarly, there's no reason we won't
scale up our computing one hundred thousand times in a
few years.

Speaker 2 (37:40):
So yes, I

Speaker 3 (37:42):
hope some of this looks like a toy in the future,
because that will mean that we've applied it to achieve
breakthroughs in cancer, etc. Right, So I hope it is
that way. Otherwise we didn't do our job.

Speaker 1 (37:53):
Well, you just did a big reorg. Is that with
succession in mind?

Speaker 3 (37:58):
When you run a company at this scale, reorganizations are
focused clearly towards meeting the moment with AI, making sure
we are simplifying the company and able to execute well.
So that's what at least this set of reorganizations is
focused on.

Speaker 1 (38:13):
How long do you see yourself continuing to do this?

Speaker 3 (38:16):
I don't think about that on a day to day basis,
but you know, as a board, etc. We've always had
responsible conversations around this topic, and I think it's important
to do that. But I'm committed and I'm excited about
the journey ahead.

Speaker 1 (38:27):
So what motivates you to keep going? It's a hard job,
and this is like a huge job; it takes tons of energy.

Speaker 3 (38:33):
I still get delighted and surprised by how technology makes
progress and playing a part in that is where I
get a lot of my energy from. And so, to me,
if anything, this moment is something I've thought about
for a long time, and it's almost part of a
journey I've been working on for a long time. And
so this is the moment,

(38:55):
and so it's more exciting than most of the moments.

Speaker 1 (38:59):
Is there a healthy dose of paranoia, like not becoming
Stan the T. rex out there and going extinct?

Speaker 3 (39:05):
I think, you know, there's a part of me
which is always internalizing the old Andy Grove phrase, only
the paranoid survive, but to a healthy level. I don't
obsess about it. But I never take our success for granted,
you know. I constantly feel you have to re-earn it,
and you have to do it with a sense of
hunger and urgency and being mission focused and being user focused.
So all that is important, and I think this moment

(39:27):
is no different.

Speaker 1 (39:29):
Thanks so much for listening to this episode of the Circuit.
You can watch our full episode with Google CEO
Sundar Pichai on Bloomberg Originals. I'm Emily Chang. Follow me on
Twitter and Instagram at emilychangtv, and watch new
episodes of the Circuit on Bloomberg Television or streaming on
the Bloomberg app or YouTube. And check out our other
Bloomberg podcasts on Apple Podcasts, Spotify, the iHeartMedia app, or

(39:52):
wherever you listen to your shows and let us know
what you think by leaving a review. They really make
a difference. I'm your host and executive producer. Our senior
producers are Lauren Allis and Alan Jeffries. Our editor is
Alison Casey. Catch you next time.