Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:03):
Bloomberg Audio Studios, podcasts, radio, news.
Speaker 2 (00:20):
Hello and welcome to another episode of the Odd Lots podcast.
I'm Joe Wisenthal.
Speaker 1 (00:25):
And I'm Tracy Alloway.
Speaker 2 (00:26):
Tracy, you know, we've done tons, of course, on like
electricity and AI and data centers and all that stuff,
but we've never actually done, like, a, well, we've never
talked to someone who is building data centers.
Speaker 1 (00:42):
Putting it all together, you mean.
Speaker 2 (00:43):
Yeah, putting it all together, like, you know, we've had consultants, we've
talked to energy people, but like, how does this business
of essentially, I guess, building a building, putting a bunch
of chips in there, getting the electricity, and then in theory,
selling all of that at a markup, like, how does
it actually work?
Speaker 3 (01:01):
You know?
Speaker 1 (01:01):
What I was reading recently, and this is kind of a tangent,
but not really, because we're talking about the physical and
financial process of building these things. But I saw this
online. There's a guide to the, like, physical planning
around an IBM System 360 from, like, nineteen sixty
(01:23):
three or something, and it's two hundred and thirteen pages long.
Speaker 2 (01:27):
Have you read it yet?
Speaker 1 (01:28):
I did flip through it, there's like there's guidance on
minimizing vibrations obviously, like temperature and humidity and stuff like that.
I did not read the full two hundred pages, but
I'm kind of thinking like if this is what if
this is all the thinking that had to go into
like one computer, albeit a supercomputer in the nineteen sixties,
(01:49):
but, like, a pretty basic machine when we look back
on it now. How much planning and thinking has to
go into building, like, these huge cloud servers and all
their associated infrastructure, both physical and software as well?
Speaker 2 (02:03):
No, totally. And you know, one of
the ways that we've touched on this subject a little
bit is in our conversations with Steve Eisman, who's been
investing, at least as far as we know, in a
lot of these, like, industrial HVAC companies and electricity gear
companies and stuff like that. So, like, companies that have
actually been around for a really long time, sort of
(02:25):
standard cyclical businesses, and then they've like caught the secular
tailwind because with this boom in AI data center construction,
suddenly there's this sort of continuous bid for all their
gear and services.
Speaker 1 (02:37):
I'm going to start an anti vibration floor maker or something.
Do you think that's a viable business? Does anyone care
about vibrations anymore?
Speaker 2 (02:44):
I am certain that in various high tech environments you
do not want to have vibrations. You know, you have,
like, valuable chips, you don't want them to be,
like, degrading.
Speaker 1 (02:54):
Because people are walking around.
Speaker 2 (02:55):
Yeah, or just, you know, all the machines and
all your air conditioners and equipment and all that stuff,
you can't be having that stuff degrade.
Speaker 1 (03:03):
Well, the other interesting thing that's happening in the space now,
So in addition to the physical challenge of building a
bunch of this stuff, there's also the financial aspect of it.
And I guess as AI becomes more and more of
a thing, and clearly, as you laid out, there's a
lot of enthusiasm around the space. At the moment, you
are seeing a bunch of financial entities get interested as well.
(03:26):
So obviously venture capital has been pouring money into the space,
but we're starting to see some new types of financial
investments in AI. And I'm thinking about one thing in particular,
and it is the recent GPU or chip backed loan
that was reported by the Wall Street Journal and I
think we should talk about that aspect.
Speaker 2 (03:47):
Of it too, totally, because one of the things that's
happening in tech is this big sort of shift from like, okay,
we're all of your costs in the past, where a
lot of them were sort of op x, the cost
of engineers, et cetera. And now suddenly tech companies have
to think about CAPEX for the first time, these big
upfront costs that are in theory going to pay off
for a long time, which in theory then changes how
(04:09):
you should think about the financing model.
Speaker 1 (04:11):
Absolutely well, I am.
Speaker 2 (04:12):
Excited to say, because we literally do have the perfect
guest. We're going to be speaking with Brian Venturo. He
is the chief strategy officer at CoreWeave. CoreWeave, for
those who don't know, is probably the company right now
that people most associate with being at the heart of
the AI data center boom. They have a bunch of
(04:35):
Nvidia chips, they have investments from Nvidia, right
here in the sweet spot. As you mentioned, one of
the interesting things that's going on is they not long
ago announced a debt financing facility backed basically by
the GPUs that they would acquire, so literally the perfect
person to understand, like, the business of these AI cloud
(04:59):
data centers. So Brian, thank you so much for coming in.
Speaker 3 (05:02):
Thanks for having me. It's the second time I've been
on the podcast.
Speaker 2 (05:05):
That's right. We talked to Brian years ago. It's interesting
to think about at that time because I think that
may have been like twenty twenty or twenty one, and
the excitement then was that these chips could be used
for crypto mining and other things like sort of distributed
video editing and stuff like that, and then Ethereum stopped
using mining. But it was sort of fortuitous timing because
(05:26):
right around then AI went crazy, and that's probably, I
don't know, in my view, maybe a higher use of
these chips. Before we get to that: do you worry
about vibration in your data center?
Speaker 3 (05:38):
So everywhere that's close to a fault line is designed
around that and is part of code. So you know,
the engineering firms that help us build these data centers
have taken all of that into account, and all of
our racks are you know, seismically tuned to make sure
that we can withstand the normal vibration from the Earth.
So yeah, it's been something that's been in those manuals
(06:00):
for a long time. Some of our hardware manufacturers actually
have vibration testing labs where they put the racks on
top of a big kind of platform that shakes, and
it's pretty dangerous and uncontrollable and hard to watch. But
you know, there's people out there that have been solving
this problem for decades.
Speaker 1 (06:15):
Now I've missed the boat on that business, clearly. It
sounds like it's been dealt with decades ago. Okay, well, actually,
why don't I start with a very simple question, which
is, when you're looking at the business of CoreWeave,
so a specialized cloud service provider, let's put it that way,
what are the different components that you have to think about?
(06:38):
You know, Joe kind of alluded to all these different
ingredients that go into the business, but walk us through
what those actually are.
Speaker 3 (06:46):
Sure, so there's three pieces that, as a management team,
we think are incredibly critical to the business. The first is,
you know, our technology services that we provide on top
of the hardware, right and this is everything from the
software layer through the support organization to you know, how
we work with our customers. This isn't the type of
thing that you just go plug in and it works.
In these large supercomputer clusters, there may be two hundred
(07:08):
thousand InfiniBand connections that connect all the GPUs together, and
if one of those connections fails for whatever reason, the
job will completely stop and have to restart from its
previous checkpoints. So, you know, everything that we do on
the software side and engineering side is to make sure
these clusters are as resilient and performant as they possibly
can be to ensure you know, our customers can run
(07:28):
their jobs, you know, increase efficiency and get all of
the kind of monetary value they can out of the chips.
So technology piece is really hard. It's something that I
think is very overlooked by the market, but it's just
as hard as the two other kind of pieces that
this business stands on. The second is, you know, the
physical nature of the business in that you have to
(07:49):
actually build and run these data centers, and those hundreds
of thousands of connections inside the supercomputers. Like, somebody has to
go put those together and make sure they're clean and
make sure they're labeled correctly to be able to remediate failures.
And when you're building a thirty two thousand GPU supercomputer
that is one of the three fastest computers on the planet.
(08:09):
You know, you're running thousands of miles of cable inside
a very dense space, right. These data centers are built
very tight to make sure that you can connect everything together,
and that becomes a huge logistical challenge. So, you know,
the data center piece, which we're going to talk more about today,
is very challenging to design for the use case. And
then the third piece is how the hell do you
(08:30):
finance the whole thing? Right, And you know, we've been
very successful in the financing aspect of this, but you know,
whether you're financing technology operations or the physical build of
these things, it is an incredibly capital intensive business and
constructing those financial instruments to back our business is very hard,
and we have to be very very thoughtful around who
(08:51):
the counterparties are, how do we think about credit risk,
how do our investors think about that credit risk, how
do we deal with contingencies inside the contracts to make
sure that they are financeable on the scale that we've
done over the last eighteen months.
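The failure mode Brian describes, where one bad link among hundreds of thousands forces the whole job back to its last checkpoint, can be made concrete with a toy sketch. This is purely illustrative; the function, step counts, and failure probability are invented for the example, not CoreWeave's software:

```python
import random

def train_with_checkpoints(total_steps=1000, checkpoint_every=100, fail_prob=0.001):
    """Toy model of a distributed training job: any single link failure
    aborts the run, which must restart from the last saved checkpoint."""
    random.seed(42)       # deterministic for the example
    last_checkpoint = 0   # last step whose state was written to storage
    step = 0
    restarts = 0
    while step < total_steps:
        step += 1
        # Simulate one of the cluster's many interconnect links failing
        # during this step; the whole job stops and rolls back.
        if random.random() < fail_prob:
            restarts += 1
            step = last_checkpoint  # progress since the checkpoint is lost
            continue
        if step % checkpoint_every == 0:
            last_checkpoint = step  # write job state back to storage
    return restarts

print(train_with_checkpoints())
```

The point of the sketch: the more links there are, the higher the effective per-step failure rate, and every failure throws away everything since the last checkpoint, which is why the resiliency software matters.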
Speaker 2 (09:02):
Talk to us a little bit more. We could probably
talk about data center financing credit and have that
be a whole episode. But when you think about, you
have to think about your counterparties' credit risk, talk
to us a little bit about who those are,
what the type of entity is.
Speaker 3 (09:20):
Sure, so I'll get myself in trouble if I just
start naming them off. Yeah, some of them are more
public than others. You know, I'm going to refer to
them as you know, hyperscale customers. We have AI lab customers,
we have large enterprise customers. We've really constructed our portfolio
of business around the idea that you know, if we're
(09:40):
going to build ten billion dollars of infrastructure for somebody,
we have to know there's a balance sheet we can
lean into behind it, right? And with the pace at
which we've grown, you know, our customers are demanding scale
so quickly that the credit of the counterparty is incredibly
important to finding the low cost of capital we have
with these debt facilities we've announced, right? So you know,
(10:02):
when people talk about how this is a credit facility
backed by GPUs, it's not really backed by GPUs. It's
backed by you know, commercial contracts with large international enterprises
that may have triple-A credit, right? So you know,
it's the framing of the...
Speaker 1 (10:15):
Trade receivables finance.
Speaker 3 (10:17):
Basically it's closer to trade receivables financing than it is Hey,
we're going to go leverage up a bunch of GPUs
and see what happens.
Speaker 1 (10:23):
Huh, okay, well walk us through the I guess like
the sequence in some of these financing agreements. So you know,
if a customer comes to you and they say, we
want a certain amount of compute, can you do this
for us? And you start going down the process of like, okay,
what do we need to make this happen? What do
(10:43):
those like financial agreements actually look like. And who's bearing
the initial risk? Is it the customer? Is it you?
Speaker 2 (10:51):
Good question?
Speaker 3 (10:52):
So when we're approached by a customer, right, you know,
the ask is typically going to be pretty general,
and they're going to say, hey, we're looking for capacity
in Q one of next year, what's the largest thing
you can do? And you know, we take that effectively
as a mandate of, okay, hey, you know, this customer
Speaker 2 (11:08):
wants to do business.
Speaker 3 (11:09):
But before you know, we're really comfortable with them, we
know that we're going to get a contract done. We'll
go out and we'll try to secure an asset to
you know, to go build it. And we may have
it in our portfolio already, or it may have
been a strategic investment that we made. But once we
find the data center asset, that's when we go back
to the customer and say, okay, like we can commit
to doing this. This is the timeline. We'll structure a
contract around it. Depending upon who the customer is, there
(11:29):
may or may not be some credit support associated with
it around the scaling of the you know, that asset,
and then we'll get a commercial contract in place, and
we will initially fund a large portion of that project
off of our own balance sheet. Right. It's why you
also see us raising equity, right, is we have to
have the capital to accelerate the business. And then once
(11:49):
we have that and we're making progress, you know, think
about it as you're building real estate. Right, you have
a construction loan and then you have a stabilized asset loan,
and we basically fund the construction loan piece off of
our balance sheet. When we get to a more stabilized asset,
that's when we go out and kind of do that
trade financing or trade receivables financing with our partner lenders.
You know, they worked with us before, they know that
these things are going to stand up, They know how
(12:11):
they perform, and at that point in time, it's it's
pretty easy for them to underwrite that risk.
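The sequence Brian lays out, fund construction off the balance sheet and then refinance the stabilized asset against its contracted receivables, can be sketched with a toy present-value calculation. All names and numbers below are invented for illustration; real facilities price off contract credit, advance rates, and much more:

```python
def refinance_against_receivables(monthly_payment, months_remaining,
                                  advance_rate=0.8, annual_discount=0.10):
    """Toy stabilized-asset refinancing: lend against the discounted
    value of contracted receivables at some advance rate.
    All parameters here are invented for illustration."""
    r = annual_discount / 12  # monthly discount rate
    present_value = sum(monthly_payment / (1 + r) ** m
                        for m in range(1, months_remaining + 1))
    return advance_rate * present_value

# e.g. a hypothetical $10m/month committed contract with 24 months left
loan = refinance_against_receivables(10_000_000, 24)
print(f"${loan:,.0f}")
```

The shape of the math is why counterparty credit dominates: the loan is sized off the contracted payment stream, so a weaker counterparty means a lower advance rate and a higher discount, not a different GPU.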
Speaker 2 (12:31):
It's funny. Tracy and I had coffee with someone yesterday
who is sort of in the space, who I won't dox here.
And I was like, what should we ask Brian? And
he's like, ask him why he won't let my company,
why I'm still on the waiting list or something, or
why he hasn't approved my company to use CoreWeave.
But what are some of the bars or the thresholds?
(12:51):
So, you know, apparently there's a lot of demand
for compute these days. What does it take to get
in the door and get access to some of your
chips and electricity?
Speaker 3 (13:01):
So it's a great question. It's a question that
we get all the time from our sales teams, right?
You know, we're faced a lot with a sales
team that is incredible at delivering product to customers, and
we don't have anything to sell. And it's kind of
my job: as the strategy organization at CoreWeave, we're
responsible for two things, product and infrastructure capacity, and
(13:24):
you know, I spend most of my time going out
and finding those data centers and being able to support
those deals and the growth that we had over the
past twelve months. The company was pretty flat out right
in building and delivering this infrastructure. You know, publicly on
our documentation page it says that we have three regions.
We'll have twenty eight regions online by the end of
the year. I think we delivered eleven of them in
Q one alone, Right, So we're building at a scale,
(13:48):
you know, I'd say, almost larger than some of
the big three hyperscalers. But in terms of how do
you become a customer of CoreWeave, it's really relationship driven,
right? We want to make sure that we're going
to be able to be successful with our customers and
have an engineering relationship, and we're aligned on what they
need, and
Speaker 2 (14:04):
We can deliver what they need.
Speaker 3 (14:05):
The last thing that we want is for somebody to
walk in the door and say, hey, I need this
for three weeks and two weeks into it, they're unhappy
and we can't give them what they need to be successful.
Right is, you know, our customers are making such large
investments in this infrastructure, that we have to have, you know,
a lot of conviction that we will be successful with
them and provide a good experience. So it's not that
(14:26):
we're trying to keep people out, it's we're trying to
ensure positive experiences for people that we do bring on board.
Speaker 2 (14:32):
Do you build complete, in-house facilities, or is it all,
you're going to bring your chips and expertise into an
existing Tier 1 data center and essentially rent floor space
from them?
Speaker 3 (14:44):
Yeah, so a year ago, we were effectively
just a colocation tenant, and now we've gone a
lot more vertical for some strategic builds, where we're either
a partner in the project, where we own equity in
the development company, or we're building the project ourselves. We've
been scaling that team up over the past six months,
and we had to at our scale to be able
(15:04):
to guarantee outcomes. Right, is, we were in a position
where we had data centers getting delayed with things that
weren't communicated to us, and you know, we had to
go build the capability to handle that situation and you know,
make sure we can still deliver for our customers.
Speaker 1 (15:17):
One of the differentiators that you and some of your
colleagues have emphasized previously, is this idea that you're designing
the server clusters kind of from the ground up, whereas
like other hyperscalers maybe are doing it on a sort
of different mass scale. But can you walk us through
like what is the benefit of doing it that way?
(15:38):
And then secondly, does that end up being an impediment
to, I guess, efficiencies or economies of scale? And how
customized, like, do you really get here?
Speaker 3 (15:49):
So from a customization perspective, it's aggressive, right, And I
say that because you know, our customers are involved in
the design of you know, our network topology of the
East West fabric for the GPU to GPU communication, for
things like cooling. You know, I have customers that tour
the data centers in the construction process with me, like, once
a week, and it's to the point that they're impacting
(16:14):
how we build the base level networking products to ensure
they have enough throughput to you know, meet their use
case needs. Whereas in, you know, what we
call the legacy hyperscaler installations, maybe they have
couple thousand GPUs that are in a data center that
was really built for CPU computation or to provide services
(16:34):
to ten thousand customers, with a much lower base
expectation of what they're going to be doing, right?
So it's things around connectivity for storage, it's things around
power and cooling, It's things around how they want to
be able to optimize their workloads inside of the GPU
to GPU communication. You know, we have some customers that
even customize their InfiniBand fabrics and the size of those
(16:57):
fabrics and how they connect together. So you know, we
work with them to really understand what their use case is,
where they're worried currently and in the future, and then
design around that. So it's a pretty comprehensive program when
we're building something from the ground up.
Speaker 1 (17:09):
And how much complexity does that introduce into the business
and does it end up being a limiting factor on
your growth or is demand just so strong at the
moment that it's not really an issue.
Speaker 3 (17:20):
The customization that we do is typically going to be
above what our base level offering is, meaning the environment
will be more performant because the customer required it. So
it's typically not going to be limiting to us from
a future you know, revenue or resale perspective. It's going
to make the asset more valuable. But you know, we're
we're designing our reference builds for ninety nine percent of
(17:40):
use cases, and we're trying to price it efficiently, and
then when a customer wants something above and beyond, you know,
it impacts price. But for these installations it's probably de minimis, right?
So you know, it doesn't really add a lot of
complexity for us from a business perspective, so we're happy
to do it.
Speaker 2 (17:55):
You mentioned that some of the hyperscalers, yes they have GPUs,
but they like built in an environment for like legacy CPUs.
Can you talk a little bit about a just the
difference between the legacy architectures and the new one and
then in the design, like what kind of bottlenecks you
run into? Is there issues with labor like the types
(18:18):
of people who know how to string these things together well?
Or are there different cooling requirements for this type
of compute environment that did not exist? Like, what are
the challenges in building out these sort of, like, fundamentally
different environments?
Speaker 3 (18:33):
Yeah, so that's changed also in the last twelve
months in that you used to be able to take
what was an enterprise data center and you know, creatively
retrofit it to be capable of supporting the AI workloads
to a certain density level. Okay, right, Like instead of
filling up a cabinet, you could put two servers in
a cabinet and you could meet the power and cooling
requirements of the installation. It used a lot more
(18:55):
floor space, but it was doable. One of the incredible
things about Nvidia is that they're always pushing the boundary on
the engineering side, and their next generation of chips is
largely dependent upon much more aggressive heat transfer, and they've
introduced liquid cooling to the reference architectures. So as liquid
cooling comes in, it changes what type of data center
is capable of doing this, and it truly requires that
(19:18):
ground up redesign and almost greenfield only build to support it.
Is you've gone from an environment where you could take
an enterprise data center and deploy less servers per cabinet
and get away with it to hey, nobody's ever built
this before. It's at an incredible scale and it has
to happen on a yearly cadence now, so the data
center industry is in a full sprint to figure out, okay,
(19:40):
how do we do this? How do we do it quickly?
How do we operationalize it right? And you know that's
kind of where I've been spending all of my time
over the past six months.
Speaker 1 (19:48):
Can I ask a really basic question, and we've done
episodes on this, but I would be very interested in
your opinion, But why does it feel like customers and
AI customers in particular, are so I don't know if
addicted is the right word, but like so devoted to
Nvidia chips? Like, what is it about them specifically
(20:10):
that is so attractive? How much of it is due
to, like, the technology versus, say, maybe the interoperability?
Speaker 3 (20:18):
So you have to understand that when you're an AI
lab that has just started, and it's
an arms race in the industry to deliver product and
models as fast as possible, it's an existential risk
to you if your infrastructure is, like,
your Achilles heel, right? And Nvidia has proven to
(20:40):
be a number of things. One is they're the engineers
of the best products, right. They are an engineering organization first,
and that they identify and solve problems. They push the limits.
You know, they're willing to listen to customers and help
you solve problems and design things around new use cases.
But it's not just creating good hardware. It's creating good
(21:02):
hardware that scales and they can support at scale. And
when you're building these installations that are hundreds of thousands
of components on the accelerator side and the InfiniBand link side,
it all has to work together well. And when you
go to somebody like Nvidia that has done this
for so long at scale, with such engineering expertise, they
eliminate so much of that existential risk for these startups. Right.
(21:22):
So when I look at it and I see some
of these smaller startups saying we're going to go a
different route, I'm like, what are you doing? Right? You're
taking so much risk for no reason here? Right, this
is a proven solution, it's the best solution, and it
has the most community support, right, Like go the easy
path because the venture you're embarking on is hard enough.
Speaker 1 (21:41):
Is it like the old, what was that old adage,
like, no one ever got fired for buying Microsoft? Is
it like that? Yeah, or IBM, something like that.
Speaker 3 (21:50):
But the thing here is that it's not even nobody's
getting fired for buying the tried and true and slower
moving thing. It's nobody's getting fired for buying the tried,
true and best performing and you know bleeding edge thing.
Speaker 2 (22:03):
Right.
Speaker 3 (22:03):
So I look at the folks that are buying other
products and investing in other products almost as, like, they're trying,
they almost have a chip on their shoulder and they're
going against the mold just to do it.
Speaker 2 (22:14):
There are competitors to Nvidia that claim cheaper
or more application-specific chips. I think Intel came out
with something like that. First of all, from the
CoreWeave perspective, are you all in on Nvidia hardware?
Speaker 3 (22:31):
We are?
Speaker 2 (22:32):
Could that change?
Speaker 3 (22:33):
The party line is that we're always going to be
driven by customers, right? And we're going to be driven
by customers to the chip that is most performant, provides
the best TCO, is best supported. And right now, and
in what I think is the foreseeable future, I
believe that is strongly Nvidia.
Speaker 2 (22:52):
Think about, okay, maybe one day you guys IPO, and
I'm looking through the risk factors, and one of the
risk factors is, right, we have a heavy reliance on Nvidia
chips, there is a risk that a competitor, et cetera.
What would it take for one of these competitors that
does ostensibly offer cheaper hardware, or perhaps lower electricity
consumption, in your view, to make one of those risk
(23:14):
factors real.
Speaker 3 (23:15):
I think that they'd have to be willing to quote
unquote buy the market. And when I say that, I
mean they'd have to subsidize their hardware to get a
material market share. And from what I've seen, there's no
one else that's really been willing to do that so far.
Speaker 2 (23:30):
And what about Meta with PyTorch and all their chips?
Speaker 3 (23:33):
So, their in-house chips, I think they have those
for very, very specific production applications, but they're not really
general purpose chips, okay, right? And I think that when
you're building something for general purpose and there has to
be flexibility in the use case. While you can go
build a custom ASIC to solve very specific problems, I
don't think it makes sense to invest in those to
(23:54):
go be a five-year asset if you
don't necessarily know what you're going to do with it.
Speaker 1 (23:58):
So you talked about the advantages of Nvidia hardware like
the chips themselves, but one of the things you sometimes
hear is that those same chips might perform differently in
different clouds. So what is it that you can do
to sort of boost the performance of the same chip
in your infrastructure or ecosystem versus, say, an AWS or
(24:21):
someone like that.
Speaker 3 (24:22):
Sure, a great question. We do a lot of work
around this internally and it's a big part of our
technical differentiation. And what we call it internally is mission control.
And mission control is effectively a portfolio of different services
that we run on our infrastructure to make sure that
these incredibly complex supercomputers are healthy and performant and are optimized,
(24:43):
you know, where we take a lot of that responsibility
off of our customer engineering teams, right, And it sounds
like that might be an easy lift, but when you're
running supercomputer scale, you know you need a team of
fifty to do that, right, So we provide a ton
of software automation around that, providing that health checking and
observability to our customers. But it's also the engineering engagement, right?
(25:03):
is you know, working with our customers to understand, Okay,
what are you doing, what's the best way to optimize this,
how do we you know, how did we design the
data center to be more performant, to make sure your
storage solution was correct, your networking solution was correct. So
it's not just a, hey, CoreWeave provides, like, this
one little thing that makes it better. It's the comprehensive solution,
starting from the data center design, through the software automation
(25:26):
and health checking and monitoring, via mission control, via the
engineering relationships that really add that value.
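The health-checking side of what Brian calls Mission Control can be imagined as automated screening of every node before customer jobs land on it. The sketch below is a toy illustration, not CoreWeave's actual system; the check names, thresholds, and node fields are all invented:

```python
def screen_nodes(nodes, checks):
    """Run every health check against every node; return the healthy
    nodes and the failures that need remediation."""
    healthy, failures = [], []
    for node in nodes:
        failed = [name for name, check in checks.items() if not check(node)]
        (failures if failed else healthy).append((node["name"], failed))
    return healthy, failures

# Invented example checks: GPU count, interconnect link errors, thermals.
checks = {
    "all_gpus_visible": lambda n: n["gpus"] == 8,
    "no_link_errors":   lambda n: n["link_errors"] == 0,
    "thermals_ok":      lambda n: n["max_temp_c"] < 85,
}
nodes = [
    {"name": "node-1", "gpus": 8, "link_errors": 0, "max_temp_c": 70},
    {"name": "node-2", "gpus": 7, "link_errors": 3, "max_temp_c": 70},
]
healthy, failures = screen_nodes(nodes, checks)
print(healthy)   # node-1 passes every check
print(failures)  # node-2 fails GPU count and link errors
```

At supercomputer scale the point is the automation: running checks like these continuously across thousands of nodes, and draining bad ones before a customer's job hits them, is the work Brian says would otherwise take a team of fifty.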
Speaker 2 (25:31):
Let's talk about electricity, because this has become this huge
talking point that this is the major constraint and now
that you're becoming more vertically integrated and having to stand
up more of your operations. We talked to one guy
formerly at Microsoft who said, you know, one of
the issues is that there may be a backlash in some communities
that don't want, you know, their scarce electricity to go
(25:52):
to data centers when they could go to household air conditioning.
What are you running into right now or what are
you seeing?
Speaker 3 (25:58):
So we've been very, very selective on where we put
data centers. We don't have anything in Ashburn, Virginia, right,
and the Northern Virginia market, I think, is incredibly saturated.
There's a lot of growing backlash in that market around
power usage, and, you know, just thinking about how do
you get enough diesel trucks in there to refill generators
if they have a prolonged outage.
Speaker 1 (26:17):
Right.
Speaker 3 (26:17):
So I think that there's some markets where it's just
like, okay, let's stay away from that. And when
grids have issues, and that market hasn't really had an
issue yet, it becomes an acute problem immediately. Like,
just think about the Texas power market crisis back in
I think it's twenty twenty one, twenty twenty, where the
grid wasn't really set up to be able to handle
the frigid temperatures and they had natural gas valves that
(26:40):
were freezing off at the natural gas generation plants that
didn't allow them to actually come online and produce electricity
no matter how high the price was. Right. So there's
there's going to be these acute issues that you know,
people are going to learn from and the regulators are
going to learn from to make sure they don't happen again.
And we're kind of siting our plants, our data centers,
in markets where we think the
(27:01):
grid infrastructure is capable of handling it, right? And it's
not just, is there enough power? It's also other things.
You know, AI workloads are pretty volatile in how much
power they use, and they're volatile because you know, every
fifteen minutes or every thirty minutes, you effectively stop the
job to save the progress you've made, right, and it's
so expensive to run these clusters that you don't want
(27:21):
to lose hundreds of thousands of dollars of progress, So
they take a minute, they do what's called checkpointing, where
they write the current state of the job back to storage,
and during that checkpointing time, your power usage basically goes from
one hundred percent to like ten percent, and then it
goes right back up again when it's done saving it.
So that load volatility on a local market will create
either voltage spikes or voltage sags. And a voltage sag
(27:45):
is what causes a brownout, which we used to see
a lot of times when people turned their air conditioners
on. And it's thinking through, okay, how do I ensure
that, you know, my AI installation doesn't cause a brownout
during checkpointing, when people are turning their air conditioners on?
Like, that's the type of stuff that we're thoughtful around,
like, how do we make sure we don't do this,
right? And you know, talking to engineers, and
(28:07):
Nvidia's engineering expertise, like, they're working on this problem
as well, and they've solved this for the next generation.
So it's everything from is there enough power there? What's
the source of that power? You know, how clean is it?
How do we make sure that we're investing in solar
and stuff in the area to make sure that we're
not just taking power from the grid. To also when
we're using that power, how is it going to impact
(28:27):
the consumers around us?
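The checkpointing rhythm Brian describes can be sketched as a toy simulation. All numbers here are hypothetical, not CoreWeave telemetry: the point is just that a training cluster's draw collapses from roughly full load to a small fraction of it every time the job pauses to save state.

```python
def cluster_load_profile(total_minutes, checkpoint_every=30, checkpoint_minutes=1,
                         training_load=1.00, checkpoint_load=0.10):
    """Simulate per-minute GPU cluster utilization for a training job that
    pauses to checkpoint (write its state to storage) every `checkpoint_every`
    minutes. Load is ~100% while training and ~10% during each write."""
    profile = []
    for minute in range(total_minutes):
        # True during the brief window when the job is writing a checkpoint
        writing = minute > 0 and minute % checkpoint_every < checkpoint_minutes
        profile.append(checkpoint_load if writing else training_load)
    return profile

# One hour of wall-clock time: load collapses at minute 30, then snaps back.
hour = cluster_load_profile(60)
```

That sawtooth, multiplied across tens of thousands of GPUs, is the load swing a local grid has to absorb.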
Speaker 1 (28:29):
I want to ask you more about what Nvidia
is doing, but just on that note, what's the most
important metric for evaluating a data center's quality or performance?
Is it like days without brownouts or an interrupted power supply,
or is it measures of efficiency like power usage effectiveness
or something like that. If I'm serving a bunch of
(28:50):
data centers, I want to pick a good one. What
should I be looking for?
Speaker 3 (28:53):
So right now, the market's pretty thin, so right now...
Speaker 1 (28:58):
Options. Okay, imagine I'm, like, the biggest customer on
earth and I can get in anywhere. What should I
be looking for?
Speaker 3 (29:06):
So the first thing goes back to the electricity piece, right?
Is the grid stable? Is there enough power supply? You know,
is there excess renewable generation in the area that doesn't
have the ability to make it to downstream consumers? Right?
A lot of the renewables that we have in the
US are built in places that don't necessarily have the consumers.
So you're siting these data centers in places where you
(27:28):
have this excess supply. So that's the first piece, right,
is how good is the electricity supply? And how angry
are the people around me going to be if I
take it? Now, you go from there into everything else
is kind of solvable, right? And the way that you
design it, if you're building a greenfield, it's, okay,
what type of UPS systems am I putting in?
Are they capable of handling that load volatility?
Speaker 2 (29:50):
You know?
Speaker 3 (29:50):
How am I thinking about my cooling solutions? There's been
a big shift to liquid cooling, right? And liquid cooling,
from a PUE perspective, isn't a thirty to forty percent
decrease in electricity utilization like people think. It's more like
sixty to seventy percent, right? And the reason for that
is it's not just the efficiency of the data center plant.
(30:14):
It's also that now if you're not cooling things with air,
you don't have to run the fans inside the servers
as well. And for these AI installations, because they're so dense,
the fans consume a lot of energy. Right. So everything
that we're building now is a combination of liquid and
air cooling, right. And the liquid cooling piece has solved
the PUE issue, right? And everything we're doing is
(30:34):
trying to say, okay, how much power can we use
only for running our critical IT operations versus cooling the
environment, making sure the environment's running correctly from a resiliency perspective?
And there's been big strides made there over the last several months.
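As a toy illustration of that point, with entirely hypothetical numbers: PUE is total facility power divided by IT power, and because server fans are counted inside the IT number, the PUE improvement alone understates what liquid cooling actually saves.

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT power.
    A perfect facility (zero cooling and overhead) would score 1.0."""
    return total_facility_kw / it_kw

# Hypothetical numbers for illustration only. In an air-cooled hall, the
# server fans count as "IT" load, so removing them shrinks the IT number too.
air_it = 10_000          # kW of IT load, a chunk of which is server fans
air_overhead = 4_000     # kW of facility cooling and other overhead
liquid_it = 8_500        # fans largely gone, so the IT load itself drops
liquid_overhead = 850    # far less facility-level air cooling needed

air_pue = pue(air_it + air_overhead, air_it)              # 1.4
liquid_pue = pue(liquid_it + liquid_overhead, liquid_it)  # 1.1
```

Total draw here falls from 14,000 kW to 9,350 kW, a bigger saving than the PUE ratio by itself suggests, which is the gap Brian is pointing at.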
Speaker 1 (31:06):
Does colocation trump grid reliability? Like if I'm Elon Musk
building some sort of new AI thing as I think
he's doing in Texas, say like, am I just going
to have to find a data center in Texas? Or
how much flexibility do I have to use one further away?
Speaker 3 (31:25):
Great question. It's a different answer for different
use cases at different times. And right now, you know,
we were in the middle of this rush to train
whether they're open source or proprietary foundation models at the largest,
most valuable companies in the world, and they're mostly worried
about access to contiguous compute capacity. Right, how much compute
(31:47):
can I get in one location, all connected together so
I can go faster than the next guy. But when
the models are trained, they want that compute to then
be local to their customer base, right? It's how do
they take it from the middle of nowhere and then
go serve it in the metropolitan markets? And as the
use cases are more distilled and they get more real time,
think like the type ahead suggestions that you get in
(32:09):
your Gmail account right as you're typing something, and it's
getting better and better. It's you know, that's an AI
model somewhere like predicting what you would want to say next,
And they want to make sure that's delivered at human speed.
So that human speed is a latency consideration, right, as
you're siting those GPUs and you're siting that compute to
be local to the people that are using it. So
(32:32):
that move started probably four months ago, where we
saw customers finally becoming concerned around latency for their serving
use cases. So initially, for training, people don't really care where
it is: cheap power, reliable grid. They just need it
all contiguous and they need it fast. And then down
the road as their applications find success, they're more worried
about where the compute is for their customers.
Speaker 2 (32:53):
What are some of the areas that are going to
be the next Northern Virginia when it comes to data
center clusters.
Speaker 3 (32:59):
So I think we're seeing it in Atlanta already, where
Georgia has paused or has attempted to pause some of
their tax incentives around it because they want to make
sure they do grid studies. I think that we're we're
probably going to see it in some of the other hotspots.
Speaker 2 (33:14):
You know.
Speaker 3 (33:16):
You know, you see aws up in Oregon who is
trying to find creative alternative ways to power their data
centers from non grid generation to alleviate some concerns there.
But you know, I think that the market has to
solve this problem. And you know, you're starting to see
some of the startups around nuclear generation in you know,
(33:36):
the small reactors at the data center level. As people
are, you know, being thoughtful for five to ten years
from now. Do
Speaker 1 (33:42):
You have any influence on the type of power being
built in certain areas? You know, could you say to
a utility company of some sort, we're here, we need
access to energy, but we want it to come in
a particular form.
Speaker 3 (33:57):
So you can. But you have to understand that the
investment cycles and the physical build cycles for those are
so much longer than you know how quickly our customers
need infrastructure, right. So you may go to a market
and say, hey, we're going to be here over the
next ten years, we'd like you to install X y Z,
you know, renewable, and they're happy to do it. It's
just that you have to find a medium term solution
(34:17):
while that's being built.
Speaker 2 (34:19):
I'm going to ask a question. So there was a
news story, and maybe you won't comment on the news story,
specifically, about CoreWeave having made a one billion dollar
offer for a bitcoin miner called Core Scientific, which apparently was rejected.
According to things I've read in the news. Setting aside
this deal, there's you know, there used to be a
(34:39):
lot of crypto mining, and then Ethereum went from proof
of work to proof of stake, and that all basically
disappeared overnight. There are still bitcoin miners. I never get
the impression it's like that great of a business. But whatever,
are there bitcoin miners that have latent value in the
fact that they... I mean, I know those chips don't...
the bitcoin mining chips, the actual ASICs, don't work for
(35:01):
AI because all they are is bitcoin mining chips. But
by dint of their access to electricity, space,
et cetera, is there a fair amount of latent value
in the general physical structures that they've built for the mining?
Speaker 3 (35:16):
So I'm just not going to answer your question at all.
I'm gonna go on a tangent.
Speaker 2 (35:20):
Okay, that's fine.
Speaker 3 (35:21):
So I think that when I think about CoreWeave
and what our mission is, it's to find creative solutions
to problems in, you know, various markets, and those
various markets can be blocking for us and our customers to.
Speaker 2 (35:36):
Achieve our goals.
Speaker 3 (35:37):
So if power is a concern for us, and power
availability and substations and substation transformers...
Speaker 2 (35:43):
Coin miners definitely have access to power.
Speaker 3 (35:46):
That that is true.
Speaker 2 (35:47):
I'm just stating a fact. You can keep going.
Speaker 3 (35:50):
So you know, as we go and we try to
solve these problems, you know, we're going to go to
places that others may not have thought of, and we're
going to go do due diligence and I'm going to
personally go and walk the sites and I'm going to
you know, look through and see, okay, can we.
Speaker 2 (36:07):
Pull this off?
Speaker 3 (36:08):
And we're going to get our engineering partners in to
help us design retrofits. And you know, we're going to
do deals with the companies that we believe have the
ability to provide us value.
Speaker 1 (36:19):
Since we're doing stuff in the news. This has been
in the news for a while, so it doesn't really count.
But the new Nvidia chips, the GB two hundreds, what
will those do for CoreWeave, and when would you
expect to get them?
Speaker 3 (36:33):
What will they do for us? It's more about what
they're going to do for our customers, right, and I think.
Speaker 2 (36:38):
That they are.
Speaker 3 (36:41):
This is a great question. They are going to open
up a lot of both training and inference use cases
on the AI side that I think our customers have
been blocked from with the existing generation, in that
you're now able to link seventy two of these GPUs
(37:02):
together to work almost as one unit, and previously that
was limited to eight. They have a much larger what's
called the frame buffer, which is how much memory that's
usable for their matrix operations. So you know, I think
that we're going to see a lot of new use
cases show up for this stuff, but I think it
extends well beyond AI as well, and it's going to
(37:22):
be a lot more useful for things like scientific computing.
One of the things that has me really excited is
computational fluid dynamics, and I'm specifically thinking about the uses
for that in F one under the new regulations in
twenty twenty six. I'm excited for the new platform. I
think in a year and a half people are going
to be using it for things that are different than
(37:42):
anybody expects today. And to me, the pace at
which this is changing is the piece that's really cool.
Speaker 1 (37:50):
Wait, I'm sorry, I hate sports.
Speaker 2 (37:52):
What's this? Explain how the Nvidia connection works.
Speaker 3 (37:56):
Yeah, So the F one platform, they have very tight
restrictions around what type of compute and how much compute
you can use to do aerodynamic testing in your cars,
and you can either do real life testing in a
wind tunnel or you can do it through CFD analysis.
And one of the great uses for the, you know,
the Grace Blackwell and the Grace Hopper architectures, in pairing that
(38:17):
Grace superchip with the GPU, is they're great for
CFD workloads, right. And the.
Speaker 2 (38:23):
CFD stands for computational fluid dynamics? Yep, yep.
Speaker 3 (38:27):
And the regulations around the existing program in F one
are they're only able to use CPUs. They have very
like specific limitations around it. But there's been a lot
of talk of that changing for twenty twenty six car models,
and for me, like, that's pretty cool and I'm gung
ho excited about possibly supporting that.
Speaker 2 (38:46):
That does sound very fun. I want to get back
to actually the financing a little bit because I guess
two questions. So the logic of why you would borrow
money, both, I guess, for the acquisition of chips,
and the chips are sort of collateral, but I understand
they're not really chip-backed loans per se. A: Do
(39:08):
you see your clients getting more into debt financing rather
than equity financing. I mean, there's a whole generation of
software companies from the Zerp era that was just you know,
all equity and never had any debt at all, and
they never really had to think about like their compute costs,
or they did, but not as much. Do you think
(39:29):
we'll see a rise in their own use of debt instead of
equity in terms of their own financing? And another topic
we talk about a lot on the show private credit,
like there is there an emergence of an ecosystem of
lenders for whom this is going to become a specialty
of some sort.
Speaker 3 (39:46):
So the first piece of the question, I don't believe
that the venture backed kind of AI lab startups will
ever take on debt in this type of environment, largely
because they don't have the collateral to back it if
they're buying cloud services to run their infrastructure. And you
may see some that start to buy their own infrastructure
and to do that themselves, but it is a herculean
(40:06):
task to do this at scale. Right, There's a reason
why clouds exist is that there's a lot of complexity
that they abstract away. On the second question around are
is there a private credit sector that's going to be
built to do this? I think that it's more you're
seeing public lenders that are extending into the private credit
space because the opportunities are there. And I'm going to
give you the party line answer that my CEO gives
(40:29):
all the time is that you know, as we're thinking
about financing our business, the biggest thing for us is
our cost to capital, and we're always going to do
the things that provide us the lowest cost of capital.
And you know the lenders that we work with, including Blackstone,
that have been so wonderful for us, you know, them
extending on the private credit side as we go to
the public markets because we're dragged there by cost of
(40:50):
capital concerns, I would expect them to be involved as well, right, So,
I think it's a continuation of the business they've been
doing in the public markets, just kind of extending into
this capital intensive business.
Speaker 1 (41:00):
Wait, what was I guess you can't get into specific details,
but my impression was for these types of loans that
the interest rate is usually higher than like a basic
bank loan or say issuing a corporate bond.
Speaker 3 (41:15):
I would definitely say our cost of capital is lower
than some of the corporate issuances out there. Okay,
but you know, our cost of capital today is definitely
higher than if we were a public entity.
Speaker 1 (41:27):
But specifically on the GPU backed loans, and I know
you keep saying it's not really a GPU back loan,
but that's sort of an uphill battle to call it
trade receivables financing instead. It sounds so much better that way,
I know, I know, but like on that in particular, Okay,
there's collateral, so maybe that brings the overall like borrowing
rate down. But on the other hand, it's kind of
(41:48):
a new thing, new structure. How does that compare with
more traditional types of finance.
Speaker 3 (41:53):
Yeah, so you know, with every credit facility that we do,
the cost of capital declines, and it's declining because
the execution risk and the going-concern risk are reduced. Right.
And you know, when we first did this, people were like,
you guys are crazy, you have no history of execution.
And as we've gone through and we've done it, like,
now there's a path. Everybody that's underwriting these loans
(42:14):
now understands, okay, this is what happens, this is how
it performs, this is what we should expect from the customers.
This is what we should expect from receivables. They get
more comfortable, they're willing to do it at more aggressive rates, right,
so that the risk premium associated with it has just
decreased over time.
Speaker 1 (42:27):
Got it.
Speaker 2 (42:27):
I just have one last question I sort of touched
on it earlier. But Okay, we know that power is scarce.
We know that, you know, there's not an infinite number
of Nvidia chips et cetera. Like those are quite scarce
for the other stuff. You know, we've done episodes in
the past like talking about like just generic electrical gear components,
(42:47):
and we've certainly done a lot on like labor shortages.
What are you seeing on that front sort of like
simple gear and the sort of basic building blocks of
a new construction and how difficult that is to acquire.
Versus, say, when you were doing this... you know,
you started in twenty seventeen, I imagine a lot of
the things were more plentiful back then.
Speaker 3 (43:05):
Yeah, so it's not even that they're less plentiful today
than they were. You know, the lead times were always
the lead times for this electrical gear. It's that there
was capacity to go buy off the shelf, right there
was inventory in the data center market. And the inventory
is basically gone. And you know, I see deals today
that get brought to me and there's seven people bidding
(43:25):
on the same deal and they're all trying to sell
it to like similar customers. So the market has gotten
pretty thin. So now you're looking at it, going Okay,
my only option here is for new build, and you're
looking at lead times that haven't really shifted that much
on things inside of the data center. The substation transformers
are multiple years out, and part of that reason is
(43:46):
that it takes a year for them to cure after
they're manufactured. Like, there's no getting around that, there's no
speeding that piece up.
Speaker 2 (43:52):
I mean, it takes a year.
Speaker 3 (43:53):
When the transformer is built, it's taking on so
much power that, whatever the process is, it has to
sit for a year and harden before it's able to
take on that electrical load. So even if you went
and said, hey, I'm going to build ten more of
these this year, it's still a year away before you
can use them.
Speaker 2 (44:09):
Huh right.
Speaker 3 (44:10):
And those are the types of things from a manufacturing
perspective you just can't get around, and it takes time
for the supply chain to catch up. But you know,
the problems that I'm solving on a day to day
basis in these builds isn't even around the substation transformers.
It's around, like, small components that somebody missed when
they ordered the gear sixteen weeks ago. And now you
have to go scramble and call in favors across the
(44:30):
country of Hey, who has this part? I need it
by tomorrow because I have fifty thousand GPUs that are
blocked by this one little thing, right, So it's a
lot of it is logistical and human coordination and solving
dumb problems in real time.
Speaker 2 (44:42):
Brian Venturo, thank you so much for coming on Odd Lots.
That was fantastic. Thanks for having me. Tracy, I'm really
glad we did that conversation because there are a number
(45:04):
of these sort of like big picture ideas in there
that we've sort of hit on of course, about data
centers and AI and electricity consumption, and it was really
interesting to hear some of them. So, like, for example,
just this idea of like Northern Virginia is out and
like needing this sort of hunt to find these spots
in the country where there is ample electricity and basically
(45:28):
nobody local is going to get upset at you for
using it.
Speaker 1 (45:31):
Yeah, no one will come out with pitchforks. The thing
that stood out to me from a bunch of these
conversations at this point is the arms race aspect of it,
and how urgent building out AI is for a lot
of these companies, and then there seems to be this
mismatch between the immediate need for scale and compute and
(45:52):
energy now versus these really long timelines of actually building
the stuff out, and Brian mentioning the substation transformers taking
a year to cure.
Speaker 2 (46:05):
I had no idea about that.
Speaker 1 (46:06):
I didn't know that either. But that's a really good example.
Speaker 2 (46:08):
That's super interesting, and of course now we have to
do a how do you build a substation transformer.
Speaker 1 (46:14):
How do you cure a substation transformer?
Speaker 2 (46:16):
Totally. I mean, maybe this is something that, for
electrical engineers, is not interesting at all. But for me,
I did not realize that there was this
one-year-long curing process. You know, I think there
are like a couple other things that now I want
to talk more about, so I'm interested. I mean, like,
CoreWeave is an Nvidia company. It's not owned by Nvidia,
(46:38):
but you know, it's joined at the hip in many respects.
So how difficult is it going to be, either for
some other maker of chips, whether it's an Intel, or
some other maker of software environments, whether it's Meta and
PyTorch going against CUDA or whatever. Like, that's a really
interesting question to me, Like, you know, we have to
(47:02):
do more, essentially, on how much of a lock
Nvidia really has on this industry.
Speaker 1 (47:06):
Yeah, this seems to be the really big question. And
then the other thing I was thinking about, and I
know Brian emphasized this and other Core Weave executives have
emphasized this before, but this idea that hyperscalers maybe are
starting from a point of being disadvantaged because they have
to retrofit all this old infrastructure for this new AI
(47:29):
technology, totally. And like, I can see that. But on
the other hand, these are insanely impressive companies that are
explicitly trying to compete against CoreWeave in this business,
and they're not going to stand still. And so I
guess there's an open question over how much progress they're
making or how fast that progress is actually happening.
Speaker 2 (47:49):
Right, Large companies always are going to have some challenges
when there's like a new model or something. But these
companies have all the money in the entire world, right,
and they also have... you know, one of the
things that Brian said is, like, if one of them
were going to do it, they would have to go
out and buy a big chunk of the market,
which again, they have all the money in the
entire world. So theoretically, whether it's the
(48:12):
big companies and retrofitting the clouds or building new clouds,
or you know a lot of them like a Google,
even if they're for now using their TPUs internally primarily like,
it does seem like in theory the opportunity is out there,
particularly with the sky high, you know, valuation
that a company like Nvidia is getting.
Speaker 1 (48:34):
Oh yeah, you mentioned the sky high valuation. That was
something that also stood out to me, just on the
financing side. So this idea of you know, the debt
financing deal that they did, and I'm not going to
call it trade receivables because.
Speaker 2 (48:47):
No one... GPU-backed loan.
Speaker 1 (48:49):
Yeah, no one will be interested when we start talking
about trade receivables. But the GPU back loan. This idea
that like, okay, it's a new structure, but the more
you do it, the more the cost of particular capital
starts to fall, the more the market gets comfortable with it.
I mean, we can talk about whether or not it's
priced correctly for a new type of unfamiliar risk, but
(49:11):
it does seem like that might be a new avenue
for the vast amounts of capital that are needed for
this business.
Speaker 2 (49:17):
So one, it's interesting to think about the idea that, like,
you know, I don't think it's like totally true. You
know that if you need compute at scale for AI,
that you don't just get to call up CoreWeave
and get it, and you actually have to prove that
you're going to be a good customer and so like
have something that is probably going to be sustainable, have
(49:37):
the balance sheet capacity. So even if the sort
of software end users aren't themselves raising debt, it
does sound like they have to have a lot of
equity upfront just so that they're perceived as a sustainable,
viable customer for a company like CoreWeave. I also thought
on the electricity front, like obviously we talk all the
(49:59):
time about just sort of the raw demand for electricity.
But this idea he said, and I hadn't heard
anyone say it, that the runs, the modeling runs, stop
every, what did he say, thirty minutes, and have to be saved.
Oh yeah. And so you have this big variability at times,
and that creates its own specific issue because it's not
just steady state flow of electricity and solving for that.
(50:21):
That's probably another area in which the legacy data centers
or cloud companies, my guess would be, just sort of
see demand that's more constant, and therefore
this would be a novelty for them.
Speaker 1 (50:36):
Just thinking about the financing more, I do kind of
wonder how much of this is like AI built on
top of AI on top of AI. Like, yeah, to
the point where if if the bubble were to burst,
or if funding was suddenly pulled from a bunch of
these startups, like what would that mean for core weaves financing?
(50:57):
And what would that mean for BlackRock, which lent
money based on the GPUs that the clients are taking on,
who might not be there anymore? I don't know.
Speaker 2 (51:05):
By the way, have you ever looked at a chart
of Riot Blockchain?
Speaker 1 (51:09):
Oh no, not for a while?
Speaker 2 (51:12):
Yeah, well, I mean, they're still there as a miner,
but like, here we are in the midst of this
pretty big crypto bull run. I mean, I guess it's
cooled a little bit, but that stock has done
terribly. So it's interesting to wonder, and apparently it doesn't
seem like anyone's made a bid for them. But it
is interesting to wonder, like, Okay, those chips are useless
for AI because they don't work for that, but you know,
(51:36):
they do have capacity and they do have electricity agreements
already in place. So it does make you wonder about,
like, some of the bitcoin mining companies, which aren't really
getting a bid. The market is not excited about them, clearly,
even in the midst of this crypto bull run.
Speaker 1 (51:53):
Maybe they should go back to being a diagnostics company.
That's what they were before. Is it? I think so.
I think they're one of the ones that changed their
name to something including blockchain, and then
their shares went up enormously, and now they're back down.
Speaker 2 (52:07):
Well, they have been... Riot Platforms has been around. Okay,
now I'm curious. Yeah, so it's a bitcoin mining company,
but the stock has been around since two
thousand and three. So pretty clearly they
were in some other business. I don't know what.
Speaker 1 (52:24):
Yeah, I'm looking on the terminal, it says Riot Blockchain,
formerly Bioptix, has ditched the drug diagnostic machinery business for
the digital currency trade.
Speaker 2 (52:34):
Well, there you go. So if you have some sort
of computing power or something. I don't know what they
were doing before, but maybe it is interesting to think about.
Maybe some of the option value for some of these
miners isn't in the mining; it's in all the infrastructure other
than the bitcoin mining operation.
Speaker 1 (52:50):
Maybe we should put in a bid.
Speaker 2 (52:51):
Let's do it.
Speaker 1 (52:52):
We can crowdfund and start our own business. Okay, maybe
we should leave it there.
Speaker 2 (52:57):
Let's leave it there.
Speaker 1 (52:57):
This has been another episode of the Odd Lots podcast.
I'm Tracy Alloway. You can follow me at Tracy Alloway and.
Speaker 2 (53:03):
I'm Joe Wisenthal. You can follow me at the Stalwart.
Follow our guest Brian Venturo. He's at Brian Venturo. Follow
our producers Carmen Rodriguez at Carman Ermann, Dashiell Bennett
at Dashbot, and Kilbrooks at Kilbrooks. Thank you to our
producer Moses Ondam. For more Odd Lots content, go to
Bloomberg dot com slash odd Lots, where we have transcripts,
a blog, and a newsletter and you can chat about
(53:25):
all of these topics, including AI, including semiconductors, including energy
in our Discord, discord dot gg slash
Speaker 1 (53:33):
Odd Lots. And if you enjoy Odd Lots, if you
like it when we talk about AI and chips and
energy and all that stuff, then please leave us a
positive review on your favorite podcast platform. And remember, if
you are a Bloomberg subscriber, you can listen to all
of our episodes absolutely ad free. All you need to
do is connect your Bloomberg account with Apple Podcasts. In
(53:55):
order to do that, just find the Bloomberg channel on
Apple Podcasts and follow the instructions there. Thanks for listening.