Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Bloomberg Audio Studios, podcasts, radio news.
Speaker 2 (00:08):
It's the rarest of product launches, when a company debuts
something most of the general public couldn't have imagined, and
then all of a sudden, it seems like no one
can talk about anything else.
Speaker 3 (00:19):
Techies everywhere short-circuiting with excitement. This is the closest
thing I've seen to, like, the Star Trek computer.
Speaker 2 (00:24):
Are we going to have this thing filling out like
dating profiles?
Speaker 3 (00:27):
Now?
Speaker 2 (00:27):
Am I going to fall in love with ChatGPT?
OpenAI released ChatGPT in twenty twenty two, and
as enthusiasm for the chatbot surged, so did the company's ambitions,
along with interest from investors, which made for another round of
headline-grabbing news from OpenAI.
Speaker 1 (00:42):
So surprising: the company that gave us ChatGPT fired
its CEO. Sam Altman, who has drawn comparisons to tech
giants like Steve Jobs, was dismissed by the OpenAI
board Friday. We're reporting on their efforts to get Sam Altman back.
Speaker 2 (00:56):
OpenAI reinstating Sam Altman as CEO and hitting
go on a board shakeup. Altman was fired
and rehired over the course of a long weekend. Well,
it has been and continues to be a wild ride
for OpenAI, a company that's not even ten years old. A
famous Altman quotation is the days are long, but the
decades are short. He recently amended that to the days
(01:20):
are long and the decades are also very long.
At least that's what he told journalist Josh Tyrangiel recently.
Josh got an invite from OpenAI to spend a
few days with Altman, someone he's interviewed before, for a
piece that's the cover story of the latest issue of
Bloomberg Businessweek.
Speaker 3 (01:35):
Two years after the launch of ChatGPT and a
year after the board firing, you know, they had some
stuff that they kind of wanted to straighten out for the record.
Speaker 2 (01:44):
Josh went to San Francisco and conducted what is the
most wide-ranging interview Altman has done as OpenAI's
chief executive. They talked about the past and looked ahead
to the future, acknowledging that there is only so much
faith you can put in predictions about what's next for
a company like OpenAI. I'm David Gura, and this
(02:05):
is The Big Take from Bloomberg News. Today on the show,
Josh Tyrangiel unpacks his interview with OpenAI's Sam Altman.
New details about the launch of ChatGPT and that
weekend in twenty twenty three when Altman lost his job
and fought to get it back. Plus what Josh expects
AI regulation will look like during Donald Trump's second term,
and what motivated Altman to donate one million dollars to
(02:28):
Trump's inauguration committee. I'd love to start with some history
and a bit of biography here. What was Sam Altman's
life like before he founded this company?
Speaker 3 (02:43):
Before OpenAI took off, Sam Altman was a Silicon
Valley-famous guy. He was known as a venture capitalist.
He was known as a sort of CEO and startup whisperer.
He had very rigorous standards as far as the kind
of people he would invest in. And he ran Y Combinator,
(03:03):
which is a really famous fund that took an active
role in kind of curating leadership. He was also known
on the kind of circuit of, you know, TED Talks
and conferences, like he was that kind of famous. As
soon as ChatGPT launched, I would say within weeks
he was world famous. And that's what happens when you
make something that one hundred million people use within six
(03:26):
to eight weeks.
Speaker 2 (03:27):
Can you walk us through what the launch of ChatGPT
was like for him, how he experienced that moment?
Speaker 3 (03:33):
People knew that there were chatbots. Chatbots were a thing. They
were generally pretty clunky, and so ChatGPT three point
five shows up, and people inside OpenAI were really
skeptical of why they were launching it. They thought it
wasn't ready. Basically, they thought it was going to fail.
And Sam, as he details in the interview, said that,
(03:54):
you know, he doesn't make a whole lot of "I
say we're doing it, we're doing it" kinds of decisions,
but in this instance, you know, he kind of put
his finger in the wind. He read the zeitgeist. He
thought the product was more than good enough, and so
they launched it. And so for the first handful of
days it was doing okay, and there was a lot
of skepticism still within the company: well, look, this
(04:16):
is ridiculous. Why did we do it? It's not taking off.
Because he'd been at Y Combinator, he was familiar with
the sort of pattern of a launch, and so what
he was seeing in the first five or six days
was that, you know, there would be a peak of usage
during the day and it would go down at night,
a peak of usage during the day and go down
at night. But what made it different, and where he says
(04:36):
he thinks people inside the company didn't quite realize
what they had, was that the trough was always higher
and the peak was always higher. And so after about
a week he was like, folks, we are failing to
understand what we have on our hands.
Speaker 2 (04:48):
Central to this story is how this company is structured.
It's organized at the beginning as a nonprofit. Why was
that the case?
Speaker 3 (04:56):
You know, the structure of the company is almost as
complicated as explaining artificial general intelligence. It's weird. And I think
it started from this very sincere place, which was we're
going to make artificial intelligence that will benefit the world.
And so what we shouldn't do is have this incredible
profit motive looming over us at all times. Let's not
(05:18):
make short term quarterly decisions. Let's make decisions in decade
long increments. And that was the thing that all of
the founders agreed to, and among those founders or co-founders
was Elon Musk, right? What they found out along
the way is not only that that was a sort
of doomed structure for a lot of just human reasons,
but that the power of compute, which is the noun
(05:41):
that we sort of use to talk about what it
costs to generate an artificial intelligence model. Just the cost
of GPUs, the cost of energy is so huge that
a nonprofit couldn't compete. And so at some point they
were confronted with the decision, which is remain this sort
of pure nonprofit in which you're kind of a sleepy
(06:02):
research arm but all the real computing work is happening
within the big three or four companies in the world,
or start to compete. And I think the founding origins
of that were sincere, and that they really believed initially
that they were just going to do this for humanity,
and then had to confront reality.
Speaker 2 (06:20):
You mentioned the sleepiness of research. It clearly is the
thing that animates him. He has this obsessiveness with research,
obsessiveness with artificial general intelligence. What is AGI?
Speaker 3 (06:31):
David, are you kidding me? You call me and you
go back and forth, you go back and, well, I'll
tell you. I mean, well, yeah, one of the most
interesting and confounding things, if you're just a normal person
tuning into this debate, is that even the people pursuing
artificial general intelligence cannot tell you what artificial general intelligence is.
(06:55):
And so even in the interview, at some point, you know,
Sam says, if we were to create a model
that could do the work of multiple humans, that you
could assign it a task and it could complete it,
that would be AGI-ish. So the guy, the most
famous guy pursuing AGI, uses ish. Now that is confounding.
(07:17):
That said, a lot of great scientific discoveries and a
lot of you know, the things that propel us forward
in civilization are a little bit ish, but it's very
hard for me to tell you what AGI is if
Sam Altman can't do it.
Speaker 2 (07:31):
Let me ask you one more question along these lines,
which is you mentioned you've spoken with him a number
of times. How has your sense of him, your understanding
of him evolved through those interviews.
Speaker 3 (07:41):
I think that the one thing that continues to resonate
is he is one hundred percent hell-bent on getting to
AGI before anybody else, and he is running the company
with that singular goal in mind. And so when they're
doing all these raises, when they're staying focused on the
latest model, trying to stay out in front of everybody,
(08:01):
I think it's because he's animated by this sense of
purpose around the science and a belief as a business
person that getting there first is the only thing that matters.
Speaker 2 (08:13):
There's this crucial moment in the OpenAI story that
plays out over a weekend. Sam Altman is fired. A
few days later, he's rehired. We're now more than
a year away from that, and I wonder if anyone,
if he, has a clear understanding of why things transpired
the way they did.
Speaker 3 (08:27):
I think he has the clearest understanding of anyone who
is not on that board. I tried to get at
it in a number of ways, right. I even offered
my own theory, which is that basically the board was
a bunch of purists and they were formed at a
moment when a nonprofit pursuit of artificial intelligence seemed like
(08:49):
the right thing, and that they were struggling to adapt
to the reality that basically being a nonprofit in this
space was going to doom the company to failure, and
that Sam was determined not to let it fail. I think
that is actually the crux of the tension. My hunch,
honestly is that this was something that was doomed to
happen from the moment they decided to be founded as
(09:10):
a nonprofit. Without getting all the parties together in front
of microphones, I think we have enough information to know
that this was a conflict around the purpose of the work,
and the original board really felt like it was consistent
with the mission of OpenAI to kill the company
if it couldn't make AI with a sort of rigorous, safe,
(09:33):
nonprofit standard, and he was not going to let the
company die.
Speaker 2 (09:37):
You asked Sam what the fallout has been from that moment,
from that hectic weekend, if he felt like afterward he
needed to convince his colleagues that he's good, I think
is how you put it. How did his firing and
his rehiring affect his ability to sort of work with people
at OpenAI and more broadly in the kind of
nascent AI industry?
Speaker 3 (09:56):
Yeah, look, I'm super fascinated by the human elements behind
all of this stuff. You know, he said that the
first couple of days and probably the first couple of
weeks were super weird. People looked at him funny. People
didn't know exactly what this was all about. I think
within the industry itself, you know, it was the classic
everybody at a competing company, you know, had a bowl
(10:17):
of popcorn and was like, let's see how this goes, right.
But I also think that there was an understanding in
the industry writ large, in the companies that they partner
with and the companies that they compete with, that Sam
is a force, and that they didn't question his credibility
or his credentials to run the company. They were probably
hoping he would get fired and be forced to start
(10:40):
all over again somewhere else so that they could catch
up. And then, you know, to return to what happened
inside the company, I think it's one of those things
where if you're there day in and day out, and
there is the kind of attrition and turnover that you
would normally expect in a startup, you know, within a
couple of months it was a blip. So I think
that's how they reckoned with it.
Speaker 2 (11:02):
After the break, we turn from OpenAI's past to
its future. When Sam Altman returned as the CEO of
OpenAI, he retook the helm of a rapidly evolving
and closely watched company. He told journalist Josh Tyrangiel that
(11:24):
after he was reinstated, his attitude was, as he put it,
we got a complicated job to do, I'm going to
keep doing this, basically resolved to put his head down
and get back to work. I asked Josh how that
played out and what Altman's life is like day to day.
There's a moment where he shows you his calendar, and
I wonder if you could just describe sort of what
that's like.
Speaker 3 (11:44):
Yeah, so, Sam, when I asked about how he runs
the company, just sort of day to day where he's
spending his time, you know, he just flipped his laptop
around and pulled up his Google calendar and it's a mess.
I mean, it's just an absolute mess of colors, conflicts
starting from about seven am going to about nine fifteen,
with some dinners after even that, and lots of overlapping meetings,
(12:09):
lots of small meetings, lots of one on ones with engineers.
I'm sure many of your listeners have calendars that look similar.
What I would say is that it was just day
after day after day prescheduled. There's not a lot of
walking around time. It's indicative of a company that is
in a full competitive sprint to get someplace. So yeah,
(12:30):
it was pretty daunting.
Speaker 2 (12:32):
Let me ask you a bit about the future of
the company and his plans for it. OpenAI has
already changed the history, the trajectory, of the AI industry.
What did Sam tell you about his plans for the
future of the company, sort of where he sees things going.
Speaker 3 (12:45):
He used the word protect, that the company is structured
to protect research, and so I think those are the
words I would look to as I monitor the company
over the next few years. He is hell-bent on
protecting the research and getting to AGI. I think he
believes everything else will take care of itself if they
can do that. And in that way, you know, even
(13:07):
though the tech is wildly new and unconventional, the business
approach is fairly conventional, right. It's not that different from
a late-nineties, you know, web startup, which is, we've
got to get audience, right, we've got to get as
big as possible as quickly as possible, and then the
finances will sort themselves out. It's Amazon's strategy. It's Facebook's strategy.
(13:27):
So that's how I would project out the next few
years: protect the research, productize effectively, see where we
are in eighteen months to two years.
Speaker 2 (13:36):
Josh, what are the biggest challenges that he and OpenAI
face in seeking that objective? Is it bandwidth? Is
it getting compute?
Speaker 3 (13:43):
The three challenges that the industry as a whole faces
are getting the compute right, getting access to the
GPUs that you need and the energy to power those GPUs.
And then the biggest question, and the most unknowable, is: are
the models plateauing? Are you continuing to see artificial intelligence
(14:04):
gain steam in training the models, right? He's extremely confident
that their models are not plateauing. There's some debate within
the industry whether that's bluster or not. On energy, you know,
I'd like to be a fairly prepared, smooth interviewer. I
will admit I was rendered momentarily mute when I
(14:26):
asked him about energy. He's like, fusion's gonna work. I'm sorry,
what now? Which fusion? When? Huh? But you know, he
is a co-founder of a company called Helion with
Reid Hoffman and some others, and believes fusion is
coming and fusion will be a sort of silver bullet
for our energy issues. And then on chips, you know, look,
(14:47):
they, like everybody else, are finding and buying as
many chips as they possibly can. They have their own
fab effort going to make sure that they're never beholden
to one supplier. So those are the three things. He
feels very confident that they're positioned to address all three.
And then the fourth, which is unique to OpenAI,
which I asked about, is like, right, let's talk about
(15:07):
governance, because we now are in a place where the
man who really is calling himself a kind of co-president
has a competing AI company, was once Sam Altman's
co-founder, and is currently suing the company. That is
a level of volatility that, you know, I can't predict, he
can't predict, but Elon Musk's existence is a factor in
(15:30):
how companies will do in developing AI.
Speaker 2 (15:33):
You mentioned governance. Let me ask you about his relationship
with Washington and the policymakers and politicians there. He donated a
million dollars to Donald Trump's inauguration fund. He told you
he supports any president. How authentic did that sound to
you when he said that, that he, you know, would
support a Democrat or Republican, whoever's in the White House?
Speaker 3 (15:52):
You know, it's hard for me to
speculate about his authenticity. What I can say is that,
taking a step back strategically, whether I'm Sam Altman or
anyone else who is competing with Elon Musk, I think
the smartest approach is to lavish praise on the President
and to try and create any sort of friction I
(16:13):
possibly could between Elon Musk and the President. And so
by saying, oh, of course I support the American
President, I may not agree with everything, but he's
the president, I wish for his great success, and then
saying oh, you know Elon, you know he's going to
do what he's going to do. He's the co president.
I think anything that creates that friction is probably beneficial
(16:35):
to people competing with Elon. It will not surprise me
if that's a tactic taken by others as well. But
you know, as I said, I can't speak to
the authenticity of it. I just think it's not the
worst strategy.
Speaker 2 (16:46):
How does he see the role of Washington in regulating
this new technology? Does he see a role for Washington here?
And what does that role look like under this new administration?
Speaker 3 (16:56):
Yeah. Famously, look, Sam thought this should
be a nationalized technology, similar to nuclear power. He thought
it was that powerful, that dangerous, and that important to
the national interest, and he got no buyers, no bites,
no interest, and he needed the money, I mean,
OpenAI needed the money and the investment. Under the
Biden administration, he was very close with Secretary of Commerce
(17:19):
Gina Raimondo. He was very close to the commission that
worked on the executive order around AI. He wants regulation.
I think some of that is ideological. I think some
of it's competitive: when you're out in front, hey, let's
tap the brakes on everybody else, right? But in the
last four years he has been an active participant and
(17:40):
collaborator with the federal government in figuring out what to
do with artificial intelligence. But he's no dummy. I think
he's going to take the temperature of the Trump administration,
see where it is, and react accordingly. But yeah, historically he's
been very much in favor of regulation of these models.
Speaker 2 (18:04):
This is the Big Take from Bloomberg News. I'm David Gura.
This episode was produced by Alex Tie and it was
edited by our senior producer, Naomi Shavin. It was mixed
and sound designed by Alex Segura and fact checked by
Adriana Tapia. Our senior editor is Elizabeth Ponsot. Our executive
producer is Nicole Beemsterboer. Sage Bauman is Bloomberg's head
of podcasts. If you liked this episode, make sure to
(18:25):
subscribe and review The Big Take wherever you listen to podcasts.
It helps people find the show. Thanks for listening. We'll
be back tomorrow.