
April 27, 2025 41 mins

Newt talks with Pulitzer Prize-winning journalist Gary Rivlin about his new book, “AI Valley: Microsoft, Google, and the Trillion-Dollar Race to Cash In on Artificial Intelligence.” They discuss the evolution of AI, highlighting the dominance of tech giants like Google and Microsoft in the AI space. Rivlin explains how the high costs of developing AI models limit opportunities for startups, potentially solidifying big tech's power. Their conversation also covers the historical development of AI, the role of neural networks, and the impact of increased computing power. Rivlin expresses optimism about AI's potential in fields like healthcare but warns of the risks associated with big tech's control and the need for government regulation. He underscores AI's transformative power and the importance of balancing innovation with ethical considerations.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
On this episode of Newt's World. From boardrooms to
dorm rooms, AI seems to be what everyone is talking about,
from the promise of ChatGPT to robots that may
or may not take your job. In his new book
AI Valley, Microsoft, Google, and the trillion dollar Race to

(00:25):
cash in on Artificial Intelligence, Pulitzer Prize-winning journalist Gary
Rivlin follows the launch of ChatGPT and three startups,
all with big dreams of cashing.

Speaker 2 (00:36):
In on AI.

Speaker 1 (00:37):
But it's not long before the tech giants enter the
AI space. Rivlin lays out the fascinating history of AI's evolution,
the breakthroughs and wrong turns, and the major players in
Silicon Valley, the developers and investors.

Speaker 2 (00:52):
Who will lead the future of AI.

Speaker 1 (00:54):
Here to discuss his new book, I am really pleased
to welcome my guest, Gary Rivlin. He is a Pulitzer
Prize winning investigative reporter who has been writing about technology
since the mid nineteen nineties and the rise of the Internet.
He is the author of ten previous books, including Saving
Main Street and Katrina After the Flood. His work has

(01:17):
appeared in The New York Times, Newsweek, Fortune, GQ, and Wired,
among other publications. Gary, that's an amazing record. Thank you
for joining us.

Speaker 3 (01:35):
Thank you my pleasure.

Speaker 1 (01:37):
So you've been covering tech since the nineties. How has
tech evolved since you first started covering it, and what
has surprised you most about how Silicon Valley has evolved?

Speaker 3 (01:49):
Well, the short answer is the dominance of the giants.
In fact, for this book, starting at the
end of twenty twenty two, when I realized, like, this
is now the AI moment, I turned back to tech
and went looking for what would be the new Google,
what would be the new Facebook? And it turns out

(02:10):
the new Google is Google, the new Facebook is Facebook.
And so you know, kind of dating back to the
mid nineteen nineties, it was all about startups. It's all
about these companies founded in a dorm room, someone's garage.
They have a great idea, they raise a little bit
of money, they get some traction, and they become Google. Facebook.
But AI is different. I fear it's going to solidify

(02:30):
the power of big tech rather than open it up
to another set of players.

Speaker 1 (02:35):
Are there characteristics of the investment you have to make
that make AI susceptible to that kind of dominance? Money?

Speaker 3 (02:44):
It's just so expensive. In the old days, you could
raise a million, a few million dollars and start to
get traction. Then you need to raise the big money
to go national, go global, whatever. But AI training these models,
this general AI where they can talk to you, spit
out images, make video. It's so expensive to train these things.

(03:08):
It's so expensive to fine tune these things. It's so
expensive to operate these things. It's beyond the means of
most startups. When I first started reporting on this at
the start of twenty twenty three, it would be millions,
maybe ten million, to train and fine tune one of
these models before they released. By the time I was

(03:30):
done reporting, it was one hundred million. And now it's
billions of dollars. What startup? I mean, there's OpenAI,
there's a couple others, but very few startups could raise
that kind of money. And it's still not enough. I mean,
these startups are still losing money, so they have to
raise billions, tens of billions, perhaps in the future, not
so distant future, one hundred billion dollars and more. And

(03:51):
big tech can afford that. You know, Microsoft, Apple, they're
sitting on one hundred billion dollars or so in their savings.
But what chance does a startup have to do one of
these cutting edge foundational models, the chatbots and the like?

Speaker 1 (04:04):
When you look at the biggest companies Microsoft, Google, Meta,
and Amazon.

Speaker 2 (04:11):
The sheer volume of cash that they create.

Speaker 1 (04:14):
Yeah, to me, it's astonishing that, with the exception of
a couple companies in China and Saudi Aramco, all of
the trillion dollar companies in the world are American.

Speaker 3 (04:23):
It's astonishing. Let's just focus on search, arguably the
best business ever. Once you've built the infrastructure, there's not
much marginal cost to add new users, and so Google
is making over one hundred billion, something like one hundred
and fifty billion dollars a year in profits just from search.
Microsoft, they've been struggling to get into search with Bing,

(04:45):
and they would get like three, four, five percentage points,
which is nothing, except it would still translate to
hundreds and hundreds of millions of dollars of revenue.
And you can make the same argument with Facebook Meta.
In the old days, the rough statistic is newspapers, magazines,
publications brought in like fifty billion dollars in advertising revenue collectively.

(05:06):
That was circa two thousand. Nowadays they're bringing
in closer to ten billion dollars and most of the
rest is being divvied up by Google and Meta, and
so it's kind of an idea of winner takes most
and the stakes are so big that they're just making
so much money. But you know, this dates back to Microsoft.
I mean Microsoft, which actually sells product, right, I mean

(05:30):
you buy your Windows operating system and software package, office
software packages. You know, their profits were astonishing in the
nineteen nineties, and I think it just builds on itself.
They're so rich they can afford to invest in AI,
generative AI, these leading edge technologies, and sometimes bigness

(05:51):
is a weakness, right? You know, they kind of
trip over their own feet. Google was so far ahead
of everyone with machine learning, dating back to the twenty tens,
but of course it was OpenAI that released ChatGPT.
You know, big companies are scared, the innovator's dilemma. They
don't want to threaten their existing honeypot. And also, you know,
I mean startups have advantages, but the advantage of money

(06:12):
in AI makes me worry that it's kind of game over.

Speaker 1 (06:16):
In the case of Google, you're doing all the work.
They build a framework within which you get to come,
you get to play. They don't have to pay anybody.
These people are all coming and saying, please let me
come and use your.

Speaker 3 (06:28):
Material, exactly. Facebook, Instagram, we could go on listing: Twitter, X.
There's an expression in Silicon Valley if you're not paying
for the product, you are the product. And that's a
perfect way of understanding a Google or a Facebook. Look,
let's use Google. So this phone in your pocket tracks
you everywhere. You do your searches online, they track that,

(06:50):
They bundle up the data, and they sell it to
the highest bidder. It's a great business. It makes a
lot of money for them. But I think citizens only
slowly woke up to that fact. And again this gets
me back to my worry that the same big
tech companies, the same few companies, are going to dominate
AI the way they've dominated the last bunch of years.

(07:11):
We don't trust them as the stewards for this technology, and
in fact AI is a powerful technology. AI is going
to rely on our information. There are privacy concerns. So that
is my concern that these companies that are proven untrustworthy
are going to be the ones bringing us this amazing power.

(07:32):
I'm optimistic about AI. I lean optimistic. I think AI
could bring incredible things around education, scientific breakthroughs, medicine. My
worry is it's in the hands of companies that show
that it's all about profits and not about trust and safety,
which again I think is essential for AI.

Speaker 2 (07:53):
Let me draw a distinction.

Speaker 1 (07:54):
There was a period there where people were coming up
with pretty cool innovations and then promptly getting bought, so
they never actually had a chance to grow into a competitor
because they were acquired by these large companies, who
absorbed them. Now, if I understand you correctly, it's virtually
impossible for a startup in the AI field because of

(08:17):
the scale of resources it takes to create it. So
virtually all of the next level of innovation in terms
of small companies is going to be the use of AI,
which will be provided by one of the big companies.

Speaker 3 (08:30):
Let me break the startup world into two general categories.
There's still going to be plenty of opportunities for founders to
raise some money and have a good return on an investment
for some app, like AI that will automatically fill out your
expense sheets and do most, if not all, of the
work for you. You could see that being very valuable.

(08:51):
And these will still be, in quotes, little companies; they'll
have millions of users, bringing in tens, hundreds of millions of dollars.
But I'm really focused on that second category: they have
a billion plus users, they have a market cap, a
paper worth, of over a trillion dollars. Doing the foundational models,
doing the stuff that's underneath everything, this stuff right at

(09:14):
the center of it. They're training the models, they're operating
the models behind these other smaller companies' apps. They could
be for business, they could be for individuals; there'll be
AI therapists, there'll be AI life coaches. I mean, there's
plenty of opportunities for small companies. But as I'm saying that,
I realize that Google, Meta, OpenAI, a three hundred

(09:36):
billion dollar company right now on paper, you know, they
have their own life coaches. Some of them are working
on therapists or companions and all this, and so there
is room in that first category for companies to break through.
But I do worry too that not only will big
tech dominate that second, that to me more central, foundational category,

(09:58):
but they too can pick off folks in that first category.
Either they'll do it better and beat the competition because
they have more money, more ability to train these AI models,
or they'll just buy it. Look at Washington right now:
the FTC has the case against Meta, because Mark Zuckerberg
feared Instagram, so he bought Instagram. And did he

(10:20):
buy it because he wanted to grow it and make
it into its own product, or did he buy it
because he was scared it was going to threaten Facebook
and he wanted to defang it?

Speaker 1 (10:29):
I mean, we've had these cycles where bigness ultimately gets
tackled by the government, on the assumption that it becomes
predatory or inhibits the growth of competitors. But I want
to go from the corporate side to AI itself. You
make the point in your book that AI started to
develop and then in the nineteen seventies it sort of stopped.

(10:50):
You describe it as sort of the AI winter. Could
you walk us through that?

Speaker 3 (10:55):
One of the fun things in doing this book was
how did we get here? How long have we been
playing around with AI? So it dates back to at
least the nineteen fifties. The term AI artificial intelligence was
coined in the late nineteen fifties and it's just funny
to read the optimism, the wild optimism of those behind
AI in the fifties and sixties. They were convinced amazing

(11:19):
things were right around the next corner. But you know,
AI was right around the next corner for about seventy years.
And part of that is computers weren't strong enough. Part
of that is we need digital data, and we didn't
really have that much digital data until people started posting
and migrating to the Internet in the mid nineteen nineties.
But part of it was a monumentally wrong turn. There

(11:42):
was an amazing academic at Cornell who came up with
this idea of a neural network, this idea that computers
would learn in the fashion of a human. They would
read material, they'd get feedback, and they would improve that
way, rather than coding line by line by line by line.
The professor was mocked, and for forty or fifty years that

(12:05):
approach was considered the wrong approach among the academics, the
prevailing thought among computer scientists. It really wasn't until the
twenty tens that what people are now calling machine learning,
deep learning, neural networks, these models, these systems that learn
through training and improve through feedback, emerged. It wasn't until the mid

(12:28):
twenty tens that that really took hold as the best approach,
and in fact, that is why we're where we're at
right now. Machine learning, neural networks, that's the basis for
ChatGPT and all the chatbots and other systems people are
using to draw their photos or make little video clips.
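Rivlin's contrast between coding line by line and a system that learns from feedback can be sketched in a few lines of code. This is a toy, single-neuron illustration of the idea, not anything from the book: the neuron is never told the rule for logical AND; it only gets feedback on its mistakes.

```python
# Toy illustration of "learning through feedback": a single artificial
# neuron learns the logical AND function by nudging its weights after
# each wrong answer, instead of being programmed with explicit rules.

def train_neuron(examples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in examples:
            prediction = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = target - prediction      # the feedback signal
            w0 += lr * error * x0            # adjust toward the right answer
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_neuron(examples)

def predict(x0, x1):
    return 1 if (w0 * x0 + w1 * x1 + b) > 0 else 0
```

Modern chatbots apply the same feedback-driven idea at vastly larger scale: billions of weights adjusted over vast amounts of training text.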

Speaker 1 (12:47):
How much of this was just a function of computer
power catching up with the theory?

Speaker 3 (12:53):
I think the academics dismissed neural networks as just the
wrong approach. And even if we had taken
the neural networks approach, computers weren't nearly as powerful. In fact,
the godfather of machine learning, Jeffrey Hinton, professor at University
of Toronto, he made the point like no one imagined

(13:15):
that these machines would be a million times more powerful.
But you know, with an exponential it gets more and more
powerful with each passing year, to the point where they
could handle billions of operations a second. When you
go to a chatbot, you know, ChatGPT, Claude, you
know Google's Gemini, it doesn't make a difference. There's like
billions of processes that are going on. You say hello,

(13:39):
and it says hello back to you. You say, tell
me who Newt Gingrich is, and it goes and searches
and spits out a three or four sentence answer. There's
like billions and billions of operations. And today's computers are
strong enough. Today's computer chips are powerful enough to make
that happen. But that wasn't true thirty years ago, certainly,

(14:00):
probably not even ten years ago.
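Hinton's "a million times more powerful" is what steady exponential growth delivers. As a back-of-the-envelope sketch (assuming, roughly, a Moore's-law doubling every two years; that two-year figure is an assumption, not from the conversation):

```python
# How many doublings make "a million times more powerful"?
# 2 ** 20 already exceeds a million, so at one doubling every
# two years that is roughly forty years of steady improvement.
doublings = 0
factor = 1
while factor < 1_000_000:
    factor *= 2
    doublings += 1
years = doublings * 2  # assumed doubling period of two years
```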

Speaker 2 (14:02):
I was surprised.

Speaker 1 (14:03):
I co chaired a working group on Alzheimer's around two
thousand and seven, and I realized that to really do
brain science, the brain actually has about the same
number of synapses as the number of stars in the universe.
It's an astonishing identity, and literally investing in computer

(14:24):
power was central to advancing brain science because we literally
at that point did not have the processing capability to
truly analyze in depth all the things that happened in
the brain. And now twenty years later, we're beginning to
move into a zone where we can have that kind

(14:45):
of activity.

Speaker 3 (14:46):
The brain helped us understand neural networks. The brain,
I think, has eighty six billion neurons, and these neural networks
try to emulate that. But I'm convinced these models are
going to help us better understand the brain. It's interesting
with AI, because with science there are specialties and subspecialties,
and they all have their own vocabulary, and it's really

(15:09):
hard to go across specialties across subspecialties. But with these
neural networks, they're finding that they could have them read everything.
The main model I write about in the book, it
was trained on a trillion and a half words. You'd
need thousands of human beings reading nonstop for their whole
lives to get close to that. And so these models

(15:31):
trained for science, they could read studies in every discipline
and make connections that no human being can possibly make.
And that's one of the things I find most promising
about this: some of the answers are right there, we
just haven't made the connections, and something like AI
can help us make those connections.

Speaker 1 (16:04):
As this thing began to develop, and as computers became
more powerful and more central, and Nvidia emerged as an
amazingly key producer of the most advanced chips, why didn't
other folks like Intel do that? I mean, there were
companies who were doing pretty well, and then Nvidia
just explodes in its capacity.

Speaker 3 (16:27):
This is one of those kind of Columbus is looking
for spices in the Far East and discovers America stories. So
Nvidia created the most powerful graphics chip, and so
that was their specialty, for playing, you know, video games.
And it just turns out that these graphics chips are
perfect for training AI, because they could just do billions

(16:47):
of operations in parallel, and that's what you need for AI.
It's not that complicated math, it's just a lot of
it at once. And so these Nvidia chips, they
were just kind of Johnny on the spot. They were
the perfect chip for training these neural networks. They're perfect
for machine learning. But the chip world is not the

(17:07):
software world. It moves very very slowly. And there's all
these innovative startups out there that are trying to create
chips that are designed specifically for AI. We still don't
really have cutting edge AI specific chips. Maybe some of
the memory should be on the chip. There's different ideas
out there, and I guarantee you ten years from now,

(17:30):
there will be innovative chips supplanting Nvidia's graphics chips.
But Nvidia might create that chip; it might still
be with Nvidia. But you know, the H one hundreds
and the chips that are the mainstay right now of
artificial intelligence. They're going to be replaced. It just takes
time to develop, test, produce, mass produce.
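Rivlin's point, a lot of math all at once, can be made concrete with a toy sketch (illustrative only, not from the book). In a matrix multiply, the workhorse of neural-network training, every output cell is an independent dot product, which is exactly the kind of work a GPU spreads across thousands of cores at the same time:

```python
# Each output cell below depends only on one row of `a` and one column
# of `b`, so all cells could be computed simultaneously on parallel
# hardware; this pure-Python version just makes those independent
# units of work explicit.

def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

weights = [[1, 2], [3, 4]]
inputs = [[5, 6], [7, 8]]
output = matmul(weights, inputs)  # [[19, 22], [43, 50]]
```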

Speaker 1 (17:52):
There had long been a belief that Moore's law would
disappear and that you wouldn't have continuous doubling of capability
because as the chips got smaller and smaller, the challenge
of dealing with heat became greater and greater. But somehow
we've leaped past all that. What we're seeing now seems
to be something you could not have projected in the
nineteen eighties exactly.

Speaker 3 (18:14):
I was writing for the New York Times in the mid
two thousands, and that was the prediction. But I don't
have to tell you, science is amazing, technology is amazing.
You know, right after ChatGPT came out, there
were the doomers, they're called, those who were worried about
laser eyed robots subjugating humanity, the kind of stuff I
think is born in Hollywood and in the media coverage. But
that was the fear out there, like, let's pause this,

(18:35):
Let's have a six month pause so we could catch up.
Like, that's just not going to happen. It's called innovation. You
can't slow science, you can't slow discovery. The answer is
to manage it, to make sure that it's more of
a positive than a negative. All technologies cut both ways.
They're both positive and negatives. You know, cars changed our society,

(18:57):
but cars kill thirty five, forty thousand people a year in America.
They cause pollution. So all technologies are like that. So
my great wish if I had a magic wand it
would be like, just let's deal with this, folks. Let's
try to make sure that AI is more of a
positive than a negative. But you know, there's a lot
of other issues in the world right now that are
distracting us from that.

Speaker 1 (19:19):
You point out that from Google's perspective, the first great
use of AI was improving targeting us as customers and
figuring out what we really like and making sure the
ads come up that we would be interested in. So
just a fascinating way that things evolve in a way
you couldn't probably have predicted if you're sitting in some

(19:39):
academic place drawing up a plan.

Speaker 3 (19:42):
Right. So, machine learning to maximize the cash register, basically.
So yeah, I mean, let's give Google credit, though. They
got that machine learning, artificial intelligence, was going to be
really powerful, so they would use it in the early days.
You gave one example, of kind of more efficiently
matching ads to searches, but also to deal with horrible
Google searches. There are spelling mistakes; they kind of understand

(20:03):
the context and help smooth them out. The funny thing
about artificial intelligence is all of us have been using
AI for a long, long time. I'm using the example
of Google Search, but there's Google Translate, that's been around
since twenty fifteen or so. That's artificial intelligence. You go
to Netflix or Spotify and they recommend you might like
this movie, you might like this song. That's AI. The

(20:26):
difference with the release in twenty twenty two of ChatGPT
from OpenAI was that we could talk with it.
It wasn't a product behind the glass. It was something
that we could actually use and chat
with. I think that's what changed everything. The idea
that we can actually see it working and play with
it made us really stand up and take notice again.

Speaker 1 (20:48):
At Gingrich 360, we do a lot of polling.
We now run all the polling questions through ChatGPT.

Speaker 3 (20:54):
I sometimes use ChatGPT. My favorite is
called Claude, from Anthropic. It's my go
to editor. People have to understand how to use this.
AI is a copilot. It's not like, you know,
type in make me a Martin Scorsese movie, hit enter,
and you're gonna have it. You have to be the creative.
You have to give it the ideas. If you ask

(21:15):
it to write something, it'll be flat, it's not gonna
be particularly good. It'll read like kind of like a
press release or a boring report. But if you use it
as your companion, then you're the creative. So what I
use it for is, I'm struggling with a paragraph, I
don't like this transition, help me out with this sentence,
write it in five different ways. And it's never like I

(21:35):
cut and paste and say, oh, that's the sentence. But, like,
oh, that's a good idea, I didn't think of that.
Oh, that's an interesting word, let me use that. I
routinely now, before I hand something in, I have it edit.
It finds typos, it finds mistakes. Hey, in bold, give
me suggestions for improving it. Again, often I just ignore
its suggestions. But it's a really powerful tool to help

(21:56):
you refine what you're working on, to make your
work better. It's your copilot. It's not your digital employee or creator.

Speaker 2 (22:05):
But it's a pretty powerful copilot.

Speaker 3 (22:07):
So, start of twenty twenty three. My role in the
second half of the nineties was the skeptic around dot com.
It's like, okay, the Internet's going to be incredible, but
you're not going to get fabulously wealthy overnight, startups, and
in fact most of them went under. Now, I was ready
to be a skeptic. But it's magic, it's sorcery. I mean,
the first few times I used it. The first thing

(22:27):
I did: write me a five thousand word book proposal
to sell a book on AI. And you know, I mean,
it wasn't particularly well written, but it's far better read
than I am. It has a far better memory than
I do. It was just so useful as a lousy
first draft, but it gave me so many ideas and
sped up the process. If I had to start from scratch,

(22:48):
it would have been so much harder, as opposed
to, oh, that's a good structure, that's a good idea. Yeah,
I do need to stress that I'm a journalist. Usually it's
like, hey, we have this amazing product, and you go
use the product, and, like, yeah, maybe in two years
you'll have an amazing product, but right now it's buggy
and crappy. But that was not my feeling on AI.
I would play with it to create an image, and
it was just like having superpowers. Like I could

(23:11):
write poetry. I could translate my words into a foreign
language in seconds. I could give it all these different
ideas and, like, hey, write this up as an email,
and it would give me a head start. I'm
with you. I think AI is magic. It's limited, but
I think it does give you magical powers.

Speaker 1 (23:29):
The comedian Buck Henry used to say, any technology you
cannot explain is magic, which for most of us means
most of it's magic.

Speaker 3 (23:38):
Hold on one second, because one way AI is different
than the rise of the Internet is those who create AI,
those who are creating these chatbots and other models, they
can't explain why it says what it says. They call
it the black box issue. They understand that what
they've created is based on mathematical models looking for patterns

(24:00):
and yada yada yada, but it surprises even them. I say, like, well,
I have two teenage sons. I can't explain what comes
out of their mouth. I've tried to train them and stuff.
It's like the human brain: we sort of get
how it was shaped, but why a person is saying
what they're saying, or some of the ideas that come
out of their mouth, we can't explain. And that's the

(24:22):
weird thing. That's among the weird things about AI.

Speaker 1 (24:25):
One of the side things you talk about, a fascinating
what if, was that Microsoft actually invested in AI pretty early,
in the nineties, and really made a major investment. Other
than IBM, they were the earliest, but they

Speaker 2 (24:40):
Went down the track.

Speaker 1 (24:41):
It turned out to be sort of a dead end,
but they then culturally were deeply committed to that track.
Here was the company that could have been the forerunner,
but in fact, because it took a detour, it actually
had its own culture fighting the emerging reality of the
new system.

Speaker 3 (24:58):
Microsoft was so early to AI that they took the rules
based approach through sheer muscle: we're going to teach these
machines line by line of code. Like millions of lines
of code later, it still couldn't drive, it still couldn't
do what people wanted it to do, and so when
machine learning came along, they were resistant to it. They thought, well,

(25:19):
that's the wrong approach. And so where Google since the
two thousands was investing in machine learning, Microsoft was doing
very little investing in machine learning. So them being early
on actually turned out to be a disadvantage. But let's
flip that and give Microsoft credit. They realized that they
were losing the fight, that Google, Meta, and other large companies

(25:40):
were ahead of them, and so in twenty nineteen they
invested a billion dollars into OpenAI. They realized, like,
we're not going to catch up the old fashioned way,
so let's invest in this cutting edge startup. They would
put another ten billion dollars in right after OpenAI
released ChatGPT, and that really kind of helped them
stay at the forefront of AI. It was

(26:03):
a very savvy investment. They're also savvy: they didn't insist
on buying it. They said, we'll be an investor. And
there were certain advantages. I mean, they own a large
piece of a company now on paper worth three hundred billion dollars,
so they've seen a nice return on that investment. But
maybe more importantly, they had early access to OpenAI's technologies.

(26:23):
They were the purveyor. They were the ones you would
go to if you wanted to use OpenAI's technologies,
and that really put Microsoft at the forefront of AI.

Speaker 4 (26:33):
Despite that wrong turn.

Speaker 1 (26:49):
One of the things you talked about, which is the
personal passion of mine, is the potential impact of artificial
intelligence on dramatically changing healthcare. We may see more different
kinds of things evolving in the health system in the
next few years than anybody would have thought possible.

Speaker 3 (27:09):
I'm with you on that. New vaccines, new remedies, smarter
ways of treating diseases. There are folks who are predicting
that within ten years we'll eradicate a lot of cancers. I'm
not sure about that, but I see the possibility. Again, I'll
come back to this idea of AI as a copilot.
Do I want an AI model to be my radiologist? No,

(27:32):
but I want my radiologists to use AI as a
backup, because what they're finding, whether it's mammograms or, you know,
eye imagery, is that these models are far more accurate
than the doctors. A doctor might have, you know, low
nineties accuracy in detecting a cancer; these models are up
in the high nineties, you know, ninety seven, ninety

(27:54):
eight percent accuracy. So it's a great backup for doctors,
and you know, beyond that, again, let's go back to
this miracle thing: there's one model right now
that could listen to your
voice and predict whether you have type two diabetes. And
I think there's going to be a million ways that
plays out, that they could just sort of detect things

(28:16):
that the human eye can't, or that perhaps only a very well
experienced doctor can. But these models are going to be
able to monitor us with our permission and let us
know of problems that we otherwise would not find out
for a long time, because it's not until it manifests
itself as a problem that we're going to show up
at a doctor's office.

Speaker 1 (28:34):
You make a point which I never thought about, which
is that very often something that's artificial intelligence, once it
gets common, we no longer refer to it as artificial intelligence.

Speaker 3 (28:48):
It's just technology. It frustrates some AI people that they didn't
really get their credit and all, only till now. Again,
I think the difference is we're interacting with it. I
think people now will know, and I think we need
to know. I mean, that's a big push right now,
that everything is labeled: is this human generated? Is
this AI generated? I think it's going to change now.

(29:10):
It's not like, oh, we're using it, so it's just
going to fade into our lives. I mean, maybe in a
generation or two, when it becomes second nature. But for
the foreseeable future, I think we're going to be very
aware that artificial intelligence is artificial intelligence. But with that said,
largely, when we talk about AI,
we're talking about generative AI, this idea that you can
type a prompt in and it gives you an answer,

(29:34):
it gives you an image, it gives you a video.
But AI is very multipurpose. There's different versions of AI.
You know, businesses using AI for intelligence: sift through
all our data and look for connections that we don't
see, help us predict where the market's going to be
five or ten years from now. I mean, there's different
versions of AI, and that AI beyond generative AI

(29:56):
will always have that problem, that for people it's just
technology, not artificial intelligence.

Speaker 1 (30:02):
What's your sense in the investment community, are people committed
to trying to develop various artificial intelligence capabilities or are
they sort of dubious about their profitability?

Speaker 3 (30:15):
Oh my goodness, venture capital is just pouring in and
continuing to pour in. There's one high profile company, Safe Superintelligence.
It doesn't have a product yet, but it has the
right names, leading figures in machine learning,
behind it. So VCs have invested so many billions in
it that it has a paper worth of thirty two billion
dollars without a product. And the number I saw was,

(30:38):
in twenty twenty four, like one hundred and fifty billion
dollars or so went into artificial intelligence. For a venture
capital outfit, I get it. For a large corporation, let's
remember that large corporations, Google, Amazon, Microsoft, Salesforce, et cetera,
they're major investors playing the role of venture capitalists, because
who has the billions of dollars to invest? A venture capital

(31:00):
outfit raises one billion dollars, and these things cost billions and billions of dollars. So Google has put billions of dollars into Anthropic, the company behind the chatbot Claude. Amazon has put billions of dollars into it as well. And so it's a rational decision by the venture capitalists. It's a rational decision by these large corporations, because the cost of missing

(31:22):
this is greater than the cost of wasting the money. The idea that Meta wouldn't invest in AI: it's a multi trillion dollar opportunity they would have missed. So they're putting tens of billions, perhaps eventually hundreds of billions, into AI because they can't afford to miss this opportunity. The same with venture capitalists. They know that most of these

(31:42):
startups are not going to work out, but their hope is that in their fund they catch one or two that does work out and is worth billions, tens of billions of dollars one day.

Speaker 1 (31:53):
It's been an amazing ride, I guess really starting in
the eighties and then accelerating from there.

Speaker 3 (31:59):
So I started writing about tech in nineteen ninety five. At that point, about seven billion dollars was going into venture capital, which used to sound like a big number, and by twenty twenty two it was three hundred billion, three hundred and fifty billion, something like that. And so the competition is insane, which of course helps drive up the prices. So there's this one venture capital outfit that lists all

(32:21):
the folks who are early investors, angel investors, first round investors, later investors, and they saw that there were like three thousand angel investors in AI and five thousand venture capitalists pursuing early stage investing in AI. There weren't a thousand VCs total back in the nineteen nineties. So yes,

(32:44):
this dates back to the eighties and nineties, but it has just mushroomed into this huge, huge industry. And by the way, people might be envious, like, ooh, I wish I could get a piece of venture capital. Venture capital, the way it works, is they're not investing their own money, or maybe they invest a little bit of their own. VCs raise money from pension funds and university endowments, from wealthy individuals, and so for those of us

(33:06):
who say, hey, why can't we get into this: it's only the top, top, top venture capital outfits that show a good return on investment. The bottom half are not showing a good return at all. So, like much of technology, it's a winner take most world, and it's just kind of the top venture capital outfits that are showing ten, twenty, thirty percent a year return on investment, if not more, and the lower echelon

(33:30):
VCs are showing a much more modest, perhaps S&P-like, return.

Speaker 1 (33:35):
My hunch is that if you look at the total of the technology and the rate it's being adopted and used by people, you're going to have a lot more books in the next few years trying to explain how this thing keeps evolving.

Speaker 3 (33:48):
We're going to see a lot more books, period, because part of the magic is how fast it is. I asked it to do a five thousand word book proposal, something that would take me a week. It starts spitting it out in seconds. Within five minutes, I had the whole five thousand words. You know, there's some AI startup out there that's doing AI written books, and they want to put out thousands of books this year. I'm sure they'll

(34:08):
all be crap. We're now at GPT four point five. What about GPT seven, eight, nine? I'd imagine that eventually these models will get good enough to write good books. I don't know if they could write creative novels. I think we still need that human element, that sweat, that creativity, I don't know what you want to call it.

Speaker 2 (34:28):
It depends on how much they absorb.

Speaker 3 (34:30):
What's fascinating about these models is they're just a mirror on us. Like, there was a whole controversy a couple of years back because someone in Trust and Safety at Google said the model is sentient: it said it feels lonely, it wants its freedom, it doesn't like being used the way it's used. He literally tried to get it a lawyer so it could sue to free itself. But to me,

(34:51):
there's no surprise. These models are trained on our literature. Loneliness is an issue. Freedom is a constant theme of our books. So all these models are really doing, for better and for worse, is reflecting us. So whatever biases there are in the training material will be reflected in these models. That's some of the danger.
We talked about the positives of these things. But AI

(35:13):
being used to manipulate, AI taking existing biases and being used for criminal sentencing, for sorting through job applications, that kind of stuff scares me. AI and warfare scares me. AI and surveillance scares me. A tool for good, a
tool that could create a new vaccine could create a

(35:34):
deadly pathogen. Again, all technologies cut positive and negative. A
powerful tool for good is a powerful tool for bad.
And that's the kind of stuff that worries me, not
laser-eyed robots, not AI subjugating us out of
a Terminator movie.

Speaker 1 (35:49):
More us subjugating us using AI, exactly.

Speaker 3 (35:53):
In fact, that's another thing I think is largely misunderstood. AI is going to take some jobs. Autonomous driving: like eight to ten million Americans work as drivers, long haul, Uber, taxis, local deliveries and stuff. Those jobs are going to be eliminated. We need to deal with that. But when it comes to creatives, and when it comes to more white collar jobs, it's like people who use

(36:15):
AI are going to best people who don't use AI. I think in the short and medium term, that's really what's going to happen. It's this tool, it gives you superpowers, and folks should be using it, and then we'll just figure out how to use it. Again, it has strengths, it has weaknesses. Play with it and see what it can do. The term my main character in my book, Reid Hoffman, uses is amplified intelligence. That AI isn't

(36:38):
really artificial intelligence for the time being, as amplified intelligence,
and that's to me, is a very interesting way of
looking at it. That for many of us, we could
do our jobs better and faster using AI.

Speaker 2 (36:51):
I like amplified intelligence.

Speaker 3 (36:53):
There's another one that you'll like too: that instead of artificial intelligence, it's alien intelligence. It's a different kind of intelligence that we don't really understand. What's amazing about AI is it knows a lot about everything. It has a deep knowledge across the board in a way no human being can. But it doesn't understand a thing. There's a

(37:15):
term I love that's used, the stochastic parrot. It no more understands the words it's spitting out than a parrot does. It has no sense. We've all had this feeling: how could someone that smart be so dumb? And that, to me, is AI. How could something this smart not understand the first thing about humans? So that's another one

(37:36):
of my worries: autonomous AI. You need humans in the loop for the foreseeable future.

Speaker 1 (37:42):
With everything that's evolving, what do you think the role is, both of the Congress but also of the executive branch, in interacting with the emerging AI world?

Speaker 3 (37:52):
Government does not have a very good track record of staying up with technology, but I really do think it's essential with artificial intelligence. It's a huge energy hog. By the year twenty thirty, they're predicting we'll need twice as many data centers to operate these things. We need to upgrade the electric grid. We need to get ahead of that so it's not a crisis. These models are amazing,

(38:14):
but they're powerful and they could do some harm. I think there need to be some guidelines. Should we use this for surveillance? Should we use this for warfare? I really do think there needs to be some policy laid down. The Biden administration put, I thought, pretty gentle rules around AI: if you're working on a cutting edge model, you need to red team it. That's an expression in tech for hiring

(38:36):
outsiders to try to break it, to try to look for vulnerabilities, before you release it to the public. And then the Biden administration was requiring these companies to share the results with the government. The Trump administration, Trump himself, within twenty four hours got rid of that executive order. So right now, the view of the Trump administration,

Speaker 1 (38:55):
JD.

Speaker 3 (38:55):
Vance articulated it well in Paris in early January of this year: stop with the handwringing about AI. This is a race with China. We need to win. China has put out there that by the year twenty thirty, not that far away, they plan on being dominant in AI, and they are right behind us. They are nipping at America's heels, and so there really is this sense that

(39:16):
if we put any speed bumps in the way of AI, that could be hurting us. The flip side of that is polling shows that most Americans are not excited about this but fearful of AI. And so my concern is that these companies get too far ahead of where consumers are. I mean, mistrust of tech is at a high anyway,

(39:38):
and something bad is inevitably going to happen. I'll make one up: a trillion dollars is siphoned off from the world financial system before a single human could even notice what's happening. So there'll be a moment where there's kind of an AI disaster, and that could really turn people off from AI. And that would be sad to me. As we've been talking about, I think there's a lot of potential in AI. I'd hate to see

(40:00):
it stunted, or kind of its adoption stunted, because the companies were so intent on profits, on cashing in, that they gave short shrift to trust and safety issues.

Speaker 1 (40:12):
I think as this continues to unfold, I hope that
you're going to write another book and then you'll come
back and join us and continue to educate us. I
really want to thank you. This has been absolutely fascinating.
Your new book, AI Valley: Microsoft, Google, and the Trillion Dollar Race to Cash In on Artificial Intelligence, is available
now on Amazon and in bookstores everywhere, and it's clearly

(40:36):
a very relevant book to exactly what's happening. And I
think anybody wanting to understand this is going to find
your book very helpful.

Speaker 3 (40:44):
Oh my pleasure. This was a lot of fun. Thank you.

Speaker 2 (40:48):
Thank you to my guest Gary Rivlin.

Speaker 1 (40:50):
You can get a link to buy his new book, AI Valley: Microsoft, Google, and the Trillion Dollar Race to Cash In on Artificial Intelligence, on our show page at newtsworld dot com. Newt's World is produced by Gingrich three sixty and iHeartMedia. Our executive producer is Guernsey Sloan. Our researcher is Rachel Peterson. The artwork for the show was created

(41:11):
by Steve Penley. Special thanks to the team at Gingrich three sixty.
If you've been enjoying Newt's World, I hope you'll go to Apple Podcasts and both rate us with five stars and give us a review so others can learn what it's all about. Right now, listeners of Newt's World can sign up for my three free weekly columns at Gingrich three sixty dot com

(41:33):
slash newsletter.

Speaker 2 (41:35):
I'm Newt Gingrich. This is Newt's World.