Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media. Hello and welcome to Better Offline. I'm your host and chief romance officer, Ed Zitron. In the last episode, I dug into the fundamental weaknesses of Open
(00:25):
AI, the supposed leader in the generative AI boom,
and today I'm going to get into a much larger,
more systemic, more terminal problem and the signs that things
are really really falling apart. And as ever, I will
have links to everything I'm talking about in the episode notes,
so you know I'm not making it up, which one
person suggested I did once and it bothered me a
great deal. But back to the actual stuff. The problems
(00:48):
that OpenAI is facing are those faced by the entire generative AI industry, ones born of their sole focus on the transformer-based architecture underlying large language models like ChatGPT. OpenAI's issue, besides the fact that they're in a terrible business as discussed in the last episode, is that generative AI, and by extension the model GPT and the product ChatGPT, doesn't really solve complex problems
(01:10):
that would justify the massive costs behind it. These massive, intractable challenges are a result of these models being probabilistic, meaning that they don't know anything; they're just generating an answer based on maths and training data, something that model developers are running out of at an incredible pace. Hallucinations, which occur when models authoritatively state something
that isn't true, or, in the case of an image
(01:31):
or a video, make something that just looks wrong, well, they're impossible to resolve without new branches of maths, and while you might be able to reduce or mitigate them, their existence makes it hard for business-critical applications to truly rely on this kind of AI. I don't even know if I'd call it AI, but regardless, we go forward, and even tech's most dominant players can't seem
(01:52):
to turn generative AI into any kind of real business line.
The Information reported in early September that customers of Microsoft's 365 suite are barely adopting its AI-powered Copilot products, with somewhere between zero point one percent and one percent of the four hundred and forty million people who pay for Microsoft 365, which is about thirty to fifty dollars a person, by the way,
(02:15):
are willing to pay for AI. And just to be clear, I muddled that a little: it's thirty to fifty bucks per person, per head, to add this stuff. I'll get into it in a minute.
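And just to put that adoption range in context, here's the back-of-the-envelope maths, my own rough sketch using those reported figures, nothing from Microsoft's books:

```python
# Rough arithmetic on The Information's reported range: 0.1% to 1% of
# 440 million Microsoft 365 subscribers paying $30-$50 per seat per month
# for Copilot. Purely illustrative; not Microsoft's actual disclosures.
subscribers = 440_000_000

for adoption in (0.001, 0.01):  # the reported 0.1%-1% range
    seats = subscribers * adoption
    low, high = seats * 30, seats * 50  # $30-$50 per seat per month
    print(f"{adoption:.1%}: {seats:,.0f} seats -> ${low:,.0f} to ${high:,.0f} a month")
```

Even the generous end of that range is a rounding error next to the tens of billions being poured into AI capex.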
One firm, according to The Information, was testing the AI features and was quoted as saying that most people don't find it that valuable right now, and others are saying that many businesses haven't seen breakthroughs
(02:35):
in productivity or other benefits, and that they're not sure
that they will. In an internal presentation provided to me
by a source, users of Microsoft's SharePoint Copilot complained that the chatbot kept getting questions wrong, sometimes failing to provide references even for correct answers, with another complaining that the Copilot was, and I quote, using content not connected as
(02:57):
a document resource to answer questions. And by the way, the whole point of SharePoint is that it's your own data informing everything. I assume it was drawing from its training data, or perhaps the internet. Anyway: genuinely not useful.
And you'd think that with these new services that don't seem that useful, that are questionably useful, Microsoft would
(03:18):
be doing people a deal, right? Wrong. How much is Microsoft charging for these services? Thirty dollars a seat per person on top of what you're already paying, or as much as fifty dollars a month extra for specialist products like Copilot for Sales. Microsoft is effectively asking customers to double their spend, and by the way, that's
(03:39):
with an annual commitment, for products that don't seem to be that helpful. And really, that is kind of the state of generative AI: the literal leader in productivity and business software cannot seem to find a product that will make people more productive and that they will then pay for. And it's in part because the results are kind of mediocre, and also because the costs are so burdensome that there's
(04:01):
no way for Microsoft to avoid charging a premium. And really, if Microsoft needs to charge this much, it's either because Satya Nadella is really desperate to hit half a trillion dollars in revenue by twenty thirty, or because the costs are too high to charge much less. Maybe it's a little bit of both. And this all only serves to shed further light on just the mediocrity of generative AI
(04:22):
and how limited large language models are. And all of this, by the way, is existentially threatening to OpenAI, because they've coasted to a one hundred and fifty-seven billion dollar valuation almost entirely based on hype. You see, that company's always tried to tell us that the future of AI will blow us away, that the next generation of large language models is imminent and is going to be incredible.
(04:45):
And that artificial general intelligence, where machines can reason and act beyond human capabilities, is just around the corner. And by the way, all of that is in part thanks to the media slurping it down and just assuming that they'll get it right. Until now, that's all they've really had to do. But I think we're finally getting to the rubber meeting the road with this. I previously
(05:06):
said that one of the pale horses of the AI apocalypse is when a big stupid magic trick becomes necessary: a product that someone shoves out the door in hopes it will impress people and keep them believing in the magical future. And you'd think that they'd have something really good right now, because OpenAI just raised all this money and the practical applications are just obviously not there. Except, well,
(05:29):
you know, no, no, no, no, this is OpenAI. They wouldn't make a big, stupid mistake, would they? I mean, one of the things I always tell clients of mine in PR is not to shove a product out the door before it's ready, and to also make sure it's really obvious why people should pay for it. Otherwise, you're just kind of launching something into the ether and hoping people will find a reason to sell it for you.
(05:51):
And yeah, that's exactly what they did. It happened on September twelfth: OpenAI launched o1, which had been code-named Strawberry, with all of the excitement of a trip to the proctologist. Across a series of tweets, CEO Sam Altman described o1 as OpenAI's most capable and aligned model yet, then immediately conceded that o1 was still flawed, still limited, and that it still seems more impressive
(06:14):
on its first use than it does after you spend more time with it. Oh my god, he admitted it. He then promised it would deliver more accurate results when performing the kinds of activities where there's a definitive right answer, like coding, maths, or answering science questions. One might think that he'd walk in with, I don't know, a product built on top of o1, or a use
(06:35):
case or thing that would make the audience go, wow, I could build something with this. He didn't. I don't think he wants to try; he hasn't had to try that hard. So far, people have been slurping down his slop happily. This boy may not have any tricks left. But let's talk about how o1 works.
And I'm going to introduce you to a bunch of
(06:55):
new concepts here, but I promise I won't get too
deep into the weeds. And I really want you to
know how these machines work. It's critical for critiquing these companies.
And the big way they take advantage of you is
that they claim all of this is black magic, that
you could never possibly understand it. You absolutely can. And
if you want their explanation, I'm going to have it
in the show notes. Okay. When presented with a problem,
(07:19):
o1 breaks it down into individual steps that hopefully lead to a correct answer, in a process called chain of thought. Again, these things are not thinking. They're not thinking, but this is the term. It's also a little easier if you think of o1 as two parts of one model. At each step, one part of the model applies something called reinforcement learning to the other one,
(07:40):
which is the part actually outputting things, and which gets rewarded or punished based on the perceived correctness of its progress. This is what is called reasoning, by the way, even though it really doesn't match human reasoning at all. Then, based on the rewards and punishments, it generates a final answer from this chain-of-thought consideration. This is
(08:00):
different from how other large language models work, in the sense that the model is generating outputs, then actually looking back at them, then ignoring or approving what it thinks are good steps to get to an answer, rather than just generating one and saying, here's the answer.
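If it helps to picture that loop, here's a toy sketch of the generate-then-score idea. To be clear, this is my illustration of the concept, not OpenAI's actual implementation, which isn't public; propose_steps and score are stand-ins for the generator and the reward model.

```python
import random

def propose_steps(problem: str, n: int = 3) -> list[str]:
    # Stand-in for the generator sampling candidate next steps.
    return [f"candidate step {i} for {problem!r}" for i in range(n)]

def score(step: str) -> float:
    # Stand-in for the reward model judging a step's "correctness."
    # In maths or coding demos this can be checked against a known
    # answer, which is exactly why those are the demos OpenAI shows.
    return random.random()

def chain_of_thought(problem: str, depth: int = 4) -> list[str]:
    # At each step, generate candidates and keep the highest-scored one.
    chain = []
    for _ in range(depth):
        chain.append(max(propose_steps(problem), key=score))
    return chain

print(chain_of_thought("how many r's are in strawberry?"))
```

Notice that if score is itself wrong, the chain happily keeps bad steps, which is the teacher-and-kid problem I'll get to in a minute.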
This may seem like a big breakthrough, or even another step towards artificial general intelligence. It isn't, and you can tell
(08:21):
that by the fact that OpenAI opted to release o1 as its own standalone product rather than something built into GPT. It's also telling that the examples demonstrated by OpenAI, like maths and science problems, are ones where the answer can be known ahead of time and a solution is either correct or false, thus allowing the model to guide the chain of thought through each step towards that answer, rather than actually having to produce
(08:43):
something where there might not necessarily be one. OpenAI didn't show the o1 model trying to tackle complex problems, such as high-end mathematical equations or otherwise, where the solution isn't known in advance. By its own admission, OpenAI has heard reports that o1 is actually more prone to hallucinations than GPT-4o, and the model is less inclined to admit when it doesn't have the answer
(09:05):
to a question compared to previous models. This is because, despite there being a model that checks the work of the model, the work-checking part of the model is still capable of hallucinating. It's kind of like a kid being taught something by a teacher who just occasionally gets things horribly wrong: that child, though they may mostly get right answers, will learn bad things. Now, learning
(09:28):
here isn't really what's happening, but the output at the end will be informed by a model that makes hallucinations.
It's like, I don't know, you've got a town full of dogs. You get a bunch of baboons in to get rid of the dogs. The baboons succeed in getting rid of the dogs. Now you've just got a bunch of baboons, so you get in, I don't know, robots. Robots destroy the baboons. At this point, you've got robots. If the robots are autonomous,
(09:48):
they start taking over the town, so you need to find a bigger robot to take the town back from the robots. Now you've just got an escalating problem where things are only going to get worse. And if you work at OpenAI and that sounds accurate, please email me. Anyway,
according to OpenAI, o1 also, thanks to this chain-of-thought process, feels more convincing to human users, because it provides more detailed answers, and thus people are
(10:12):
more inclined to trust the outputs, even when they're completely wrong. Now, if you think I'm being overly hard on OpenAI, consider the ways in which the company has marketed o1. OpenAI described o1's reinforcement training as thinking and reasoning, when it's making guesses and then guessing at the correctness of those guesses at each step, where the end destination is often something that can be known in advance.
(10:34):
Generative AI does not know anything. These are still probabilistic models. This thing is not thinking at all. There is no reasoning. It's got a model reading a model, giving a model answers. It's a mess, and it's an insult to people, actual human beings who, when they think, are acting based on many, many complex factors: their experience, their knowledge, the
(10:56):
knowledge they've accumulated over years of experiences, their brain chemistry, so on and so forth. While we may guess about the correctness of each thing we're guessing at, and we may reason through a complex problem, all of this is based on something concrete. Even when we get something wrong, it's based on actual experience, versus training data and probabilistic models.
(11:19):
This shit is not thinking at all, and by god is it expensive. Pricing for o1-preview, which is the first model, is fifteen dollars per million input tokens and sixty dollars per million output tokens. In essence, it's three times as expensive as their most expensive model, GPT-4o, for input, and four times as expensive for output. And
(11:39):
then there's a hidden cost. Data scientist Max Woolf reported that OpenAI's reasoning tokens, the output it uses to get you to the final answer, where it says, okay, I need to find the solution to this problem, so here are the thirty steps I've gone through, yeah, those are actually generated using the most expensive tokens, the output tokens. So the more it has to think, the
(12:01):
more expensive it gets. All of the things it generates to consider an answer are also charged for, which means the more complex the question, the more expensive it's going to be.
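To make the billing mechanics concrete, here's a rough sketch of how a bill adds up at those launch rates. The token counts are invented for illustration; the point is that the hidden reasoning tokens bill at the expensive output rate:

```python
# A sketch of o1-preview billing at the rates above: $15 per million
# input tokens, $60 per million output tokens, with the hidden
# "reasoning" tokens billed as output. Token counts are made up.
def o1_preview_cost(input_tokens: int, answer_tokens: int, reasoning_tokens: int) -> float:
    input_rate = 15 / 1_000_000   # dollars per input token
    output_rate = 60 / 1_000_000  # dollars per output token
    # You never see the reasoning tokens, but you pay output rates for them.
    return input_tokens * input_rate + (answer_tokens + reasoning_tokens) * output_rate

# A short prompt, a short answer, and a long hidden chain of thought:
print(round(o1_preview_cost(1_000, 500, 5_000), 3))  # 0.345 dollars, mostly the hidden part
```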
Worse still, if you integrate this model, OpenAI does not show you its reasoning. All of that calculation happens in the background, and they still charge
(12:21):
you for it; you just don't know how much. Every o1 step is charged to you in an indeterminate way, and OpenAI claims that they can't show you the reasoning because of competitive reasons. Ugh. Nasty company, really greasy. And they're still going to burn. Okay, okay, though: it's different from GPT-4o,
(12:42):
and it's really expensive. But is it better? Of course it must be better, right? Right? It sounds great. It's thinking, right? It's reasoning, right? No. No, it's not. It's not. It's worse. This crap's worse. Let's talk about accuracy.
On Hacker News, the Reddit-style site owned by Sam Altman's former employer Y Combinator, one person complained about
(13:03):
o1 hallucinating libraries and functions when presented with a programming task, and making mistakes when asked questions where the
answer isn't readily available on the Internet. On Twitter, Henrik Kniberg,
a startup founder and former game developer, asked o1 to write a Python program that multiplied two numbers, then to calculate the expected output of said program. While o1 correctly wrote the code, although said code could have been
(13:24):
more succinct, the actual result was wildly incorrect. Karthik Kannan, himself the founder of an AI company, tried a programming task on o1, where it also hallucinated a non-existent command for the API he was using. Another person, Sasha Yanshin, tried to play a game of chess with o1, and it hallucinated an entire piece onto the board, and then it lost. And because I'm a little shit, I
(13:46):
also tried asking o1 to list a number of states with A in the name. After contemplating for eighteen seconds, it provided the names of thirty-seven states, including Mississippi, you know, the classic state with an A in it. By the way, there are thirty-six states that have an A in them, just in case you're curious. I then asked for a list of states with the letter W in the name, and it sat and thought for eleven seconds, and then included North Carolina and North Dakota.
(14:10):
Great stuff. By the way, I also asked o1 to count the number of times the letter R appears in the word strawberry, which is the pre-release code name for this. It said two. I would have hard-coded that one, personally; you can't give me that kind of joy now.
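And just to underline how trivially checkable these questions are, each one is a line of ordinary code:

```python
# The deterministic check o1 fumbled: "strawberry" has three r's.
print("strawberry".count("r"))  # prints 3

# The states question is just as mechanical, given the fifty state names
# (STATES here is an assumed, predefined list of them):
# print(sum("a" in state.lower() for state in STATES))  # would print 36
```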
OpenAI claims that o1 performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. Just not in geography, it seems, or basic
(14:33):
elementary-level English, or maths, or programming. Also, I mean, for the PhD listeners: I've met a few PhD people who authoritatively state things that are completely untrue, that they know nothing about. This is not a broad-strokes thing, but I get the sense that it's true. Anyway, this is, as you should know, the big stupid magic trick I
(14:54):
predicted in the past. OpenAI is shoving Strawberry out the door as a means of proving to investors and the greater public that they've still got it, that the AI revolution is still here, that this thing is thinking. And what they actually have is a clunky, unexciting, and expensive model that doesn't really seem to have any measurable improvement. Okay, I'm sorry, it has a measurable improvement:
(15:15):
you can measure it on the weird rigged tests they do for all of these things. And the thing is, at this point, even Apple, you'd think, when they pulled together a new thing, even when they had the first Apple Watch and it was not obvious why you had to own it, they still had apps that were connected to it. They still had things you could point at and go, oh, that's cool, I've got Foursquare
(15:36):
on this. Foursquare was on there at the time. Nevertheless, they had apps to show. I just feel like OpenAI has this deep contempt for Silicon Valley and for the world at large. They don't even have it in them to be like, okay, we have this new model, and here is the new thing we built with it, and this thing does this, and now you will see
(15:57):
how important this company is. Instead, we get this crap.
We just get this very boring crap. And sure, I'm sure someone technical is going to email me and say, Ed, wow, chain-of-thought reasoning, there are other companies that have been doing it already, Anthropic already had something like this. And even then, they didn't do shit with it. Where's the product, man?
(16:18):
Where's the thing I'm meant to care about? Why should anybody give a shit about this? While Sam Altman is likely trying to trump up the reasoning abilities of o1, what people, you know, such as the people bankrolling him, will actually see is a ten-to-twenty-second waiting time for an answer which may or may not be correct, but with a bit more detail, which isn't even
(16:39):
the reasoning happening, because OpenAI hides that bit. Nobody gives a shit about better answers anymore. They want generative AI to do something new, and I don't think OpenAI has any idea how to make that happen.
Sam Altman's limp, shitty attempts to anthropomorphize o1 by making it think and use reasoning are obvious attempts to suggest that this is somehow part of the path to AGI.
(17:02):
But even the most staunch AI advocates, well, they can't seem to get excited about this. In fact, I'd kind of argue that o1 shows that OpenAI is desperate and out of ideas. Now, if you don't have
any ideas, though, the following advertisements will be more than
happy to fill your empty little brain with new ideas
that involve giving someone money or downloading something. And I
(17:23):
must implore you to just accept everything that follows. I
don't endorse any of it because I don't know what
it's going to be, but you must. And we're back.
(17:47):
So I think now is a good time to get
back to the root of the generative AI problem. Generative
AI is being sold to you on multiple lies: that it's AI, that it's actually artificial intelligence, that it's going to get better, that this will become artificial general intelligence, that this will become the thinking computer, and that all of this is inevitable.
(18:07):
Putting aside terms like performance, as they're largely used to mean generating things more accurately or faster rather than being good at anything, large language models have effectively plateaued. More powerful never seems to mean does more, and more powerful often means more expensive to run, or more expensive for you as the user to access, meaning that you've just made something that doesn't do more and does cost
(18:29):
more to run. If the combined forces of every venture capitalist and Big Tech hyperscaler have yet to come up with a meaningful use case that lots of people will actually pay for, I just don't see one coming. Large language models, and yes, that's where all of these billions of dollars are going, are not going to magically sprout new capabilities as Big Tech and OpenAI burn another one hundred and fifty billion dollars. And yes, that number
(18:51):
isn't hyperbole. It's actually pretty close to the amount being plowed into these companies when you include things like investments in companies like Anthropic and OpenAI, and the genuinely insane amounts of capex from the likes of Google, Amazon, and Microsoft going into expanding data centers and buying GPUs.
Nobody seems to be trying to make these things more efficient,
or at the very least nobody's succeeded in doing so,
(19:12):
because I think if they had, they'd be shouting it from the rooftops. And as an aside, by the way, the biggest sign that no one's actually making money from this is that no one's talking about how much money they're making. Microsoft and all of these companies, they love talking about making profit. They love doing that beyond earnings.
(19:32):
They love talking about it. Instead, whenever they're asked about this, they go, oh hey, AI will do some things in the future, I need to take a phone call, and then they kind of disappear from the room. Amy Hood, CFO of Microsoft, is a classic bullshit artist, dancing around with a yeah, oh, net revenue increased, checking her watch. It's just really sad. It's really sad because
what we have here is a shared delusion, a shared
(19:55):
delusion about a dead-end technology that runs on copyright theft, one that requires a continuous supply of capital to keep running as it provides services that are at best inessential, sold to us dressed up as a kind of automation that does not exist and that it doesn't provide, costing billions upon billions of dollars and continuing to do so in perpetuity. Generative AI doesn't run on money or cloud
(20:16):
credit so much as it does on faith. And the
problem is that faith, like investor capital, is actually a
finite resource. And that's where I bring you one of
my biggest anxieties about this industry, because I think we're
in the midst of a subprime AI crisis, where thousands of
companies have integrated this stuff into their software at prices
that are far from stable and even further from profitable
(20:38):
for the services providing them. This concern, by the way,
isn't unfounded. At the latest open ai dev Day, they
said that they'd slash prices for their APIs by ninety
nine percent over the previous two years, largely as tech crunchies.
MAXT theorized due to price pressure from Meta and Google,
both of whom want to take that API access for
I assume some reason. Anyway, almost every AI powered startup
(21:01):
uses large language model features is based on some combination of GPT or Claude, so OpenAI's or Anthropic's models. These models are built by two companies that are deeply unprofitable: OpenAI is on track to lose five billion this year; Anthropic is on course to lose two point seven billion this year on much less revenue. And they both have pricing designed to get more customers through the door rather than make
(21:22):
any kind of profit. OpenAI, as mentioned, is subsidized by Microsoft, both in the cloud credits they received in the twenty twenty-three investment and in the preferential pricing Microsoft offers for their cloud services, about a quarter of the price of what everyone else pays. And these companies, well, OpenAI and Anthropic, their pricing is entirely dependent on the support of Big Tech. In the case of OpenAI,
(21:43):
Microsoft's continued support; in the case of Anthropic, Amazon and Google, both as investors and service providers. Based on how unprofitable these companies are, I hypothesize that if OpenAI or Anthropic charged prices closer to their actual costs, there'd be a ten-to-one-hundred-times increase in the price of API calls, though it's impossible to say how much without the actual numbers on compute burn from these companies. However,
(22:06):
let's consider for a moment the numbers reported by The Information, which estimate that OpenAI's server costs with Microsoft will be four billion dollars in twenty twenty-four, and that, I'll add, is at rates over two and a half times cheaper than what Microsoft charges others. It's like, everyone else pays about four dollars and something, and OpenAI pays a dollar and something per GPU per hour.
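Run those rough per-hour figures against the four billion and you can see the size of the subsidy. This is my own back-of-the-envelope sketch; the exact rates aren't public, so a dollar thirty and four dollars are stand-ins for a dollar something and four dollars and something:

```python
# Back-of-the-envelope on the discount. Per-GPU-hour rates are stand-ins
# for the approximate figures discussed above, not disclosed prices.
openai_rate = 1.30            # dollars per GPU-hour (discounted, approx.)
market_rate = 4.00            # dollars per GPU-hour (everyone else, approx.)
server_spend = 4_000_000_000  # The Information's 2024 server-cost estimate

gpu_hours = server_spend / openai_rate
print(f"~{gpu_hours / 1e9:.1f}B GPU-hours, ~${gpu_hours * market_rate / 1e9:.1f}B at market rates")
```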
(22:27):
And then consider, after knowing that they're getting this massive discount, that OpenAI still loses over five billion dollars a year. OpenAI is more than likely charging only a small percentage of what it actually costs to run its models, and can only continue to do so if it's able to continually raise more venture funding than has ever been raised before, and continue to receive preferential pricing from Microsoft, a company that recently
(22:48):
mentioned that it considers OpenAI a competitor, and that has complete access to its IP and research. While I can't say for certain, I would think it's reasonable to believe that Anthropic receives a similarly preferential pricing package from both Amazon Web Services and Google Cloud. Both of those companies,
by the way, put billions into them. Assuming that Microsoft gave OpenAI ten billion dollars of cloud credits, and
(23:09):
it spent four billion on server costs and, let's say, two to three billion dollars on training costs, both of which are sure to increase with new models, OpenAI will either need more credits or will have to pay actual cash to Microsoft sometime in twenty twenty-five. And Microsoft did participate
in the latest round, by the way, but it's not
obvious how much, and it was much less than last time,
(23:29):
which was I believe ten billion, mostly in cloud credits.
While it might be possible that Microsoft, Amazon and Google
extend their preferred pricing indefinitely, the question is whether these
transactions are profitable for them in any way. As we
saw following Microsoft's most recent quarterly earnings, there's growing investor
concern over how capex is being spent and the amount
that's being required to build the infrastructure for generative AI,
(23:51):
with many voicing skepticism about the potential profitability of the technology, including Jim Covello of Goldman Sachs. And what we really don't know is how unprofitable generative AI is for the hyperscalers, because they bake those costs into other parts of their earnings.
What we can know for sure, I imagine, is that if this stuff was in any way profitable, they'd be talking about it all the time. They would never
(24:12):
shut up. This would be their new golden goose. And they're not. In fact, the most concrete information we have about OpenAI's balance sheet comes from leaked reports, well-sourced reporters at places like The New York Times and The Information, and investor prospectuses that found a wider audience than Altman perhaps would have liked. So you may remember from a few months ago that the markets became a little skeptical of the generative AI boom, and Nvidia
(24:35):
CEO Jensen Huang had no real answers about AI's return on investment at his latest earnings, which led to a historic two hundred and seventy-nine billion dollar drop in Nvidia's market cap in a single day. This, by the way, was the largest rout in US market history; the total value lost is the equivalent of nearly five Lehman Brothers at its peak value. They've recovered some of it, but nevertheless, that's
(24:57):
what we in the business call not so good.
At the beginning of August, Microsoft, Amazon, and Google all took a similar beating from the markets for their massive capital expenditures related to AI, and all three of them will face the wheel next quarter, in a couple of weeks in fact, if they can't show a significant increase in revenue from the combined one hundred and fifty billion or more in capex that they put into new data centers and Nvidia GPUs. What's important to remember here is
(25:20):
that other than AI, Big Tech really doesn't have any other ideas. There are no more hypergrowth markets left, and as firms like Microsoft and Amazon begin to show signs of declining growth, so too grows their desperation to show the markets that they've still got it. Google, a company almost entirely sustained by multiple at-risk monopolies in search and advertising, also needs something new and sexy to wave in front of
the Street. Except none of this is working, because the products aren't that useful, and it appears most of the revenue comes from companies trying out AI and then realizing it wasn't worth it. And if you think back to what I was saying about OpenAI's cloud costs, they're making, what, eight hundred million to a billion on this? How much does Google make? Probably much less, considering the multiple
(26:03):
stories about people not really caring about Gemini. But at this point, there are really two eventualities. One: Big Tech realizes that they've gotten in way too deep on this and, out of a deep fear of pissing off the Street, chooses to reduce capital expenditures related to AI. Or the second: Big Tech, desperate to find a new growth hack, decides instead to cut costs to sustain their stupid fucking ideas,
(26:24):
laying off workers and reallocating capital from other operations as a means of sustaining this death march to nowhere. It's unclear which will happen. If Big Tech accepts that generative AI isn't the future, they don't really have anything else to wave at Wall Street, but they could do their own version of what Meta did from twenty twenty-two, this Year of Efficiency thing, which involved reducing capital expenditures and
(26:46):
laying off thousands of people while also promising to slow
down a little with investment. This, by the way, is
the most likely path for Amazon and Google, who, while desperate to make Wall Street happy, still kind of have their profitable monopolies, for now at least. Nevertheless, there really needs to be some kind of revenue growth from AI in the next few quarters, and it has to be material.
(27:07):
It can't just be this thing about AI being a
maturing market or how annualized run rates have improved, and
said material contribution will have to be magnitudes higher if
capex has increased along with it. I just don't think
it's going to be there, whether it's Q four twenty
twenty four or Q one twenty twenty five, or maybe
a little later. Wall Street's going to punish big tech
(27:28):
for this, the sin of lust, and the punishment is going to be to savage these companies even more harshly than Nvidia, which, despite Jensen Huang's bluster and empty platitudes, is pretty much the only company that's actually making money on AI, and that's because you do need their chips to do all this. But I worry more than anything
that option two is more possible. I think these companies
(27:51):
are really capable of committing to AI as the future, and their cultures are so disconnected from the creation of actual value, of, like, software, or solving problems that actual people face, that they'll willingly start laying people off if it means bankrolling these operations. I really, really worry about that.
By the way, the mass layoffs that could come from
this will be horrifying, because otherwise it's just going to
(28:13):
be feeding profit into this, and at this point they're
feeding in pretty much all their profits. And all of this,
by the way, could have been stopped if the media
had actually held the leaders of tech companies accountable. This
narrative was sold through the same con as the previous
hype cycles, and the media assumed that these companies would
just work it out like they did with crypto and
the metaverse, despite the fact that it was blatantly obvious
(28:35):
that they wouldn't work this out. You think I'm a doomer? Well, answer me this: what's the plan? What does generative AI do next? If your answer is that they'll work it out, or that they have something behind the scenes that is incredible, you're an absolute mark. You're a participant in a marketing scheme. It's time to wake up. It
(28:56):
is time to wake up to how stupid this is.
And I'm sure some of you will say, oh, oh,
you're going to look so stupid in six months. People
were telling me that six months ago, and I still don't look stupid, other than the ways I do, and they're unrelated to the podcast. But let's get back to
the real problem, and let's get back to the really
worrying stuff, because I believe that at the very least, Microsoft
(29:19):
will begin reducing costs in other areas of its business as a means of sustaining the AI boom. In an email shared with me by a source from earlier this year, Microsoft's senior leadership team requested, in a plan that was eventually scrapped, reducing power requirements from multiple areas within the company as a means of freeing up power for GPUs, including moving other services' compute to other countries as a
(29:40):
means of freeing up said capacity specifically for AI. On the Microsoft section of the anonymous social network Blind, where you're required to verify that you have a corporate email from the company in question, one Microsoft worker complained in mid-December twenty twenty-three that AI was taking their money, saying that the cost of AI is so much that it is eating up pay raises, and that things will not
(30:01):
get better. In mid-July twenty twenty-four, another shared their anxiety about how it was apparent to them that Microsoft had, and I quote, a borderline addiction to cutting costs in order to fund Nvidia's stock price with operational cash flows, and that doing so had, and I quote, damaged Microsoft's culture deeply. Another added that they believe that Copilot is going to ruin Microsoft's FY twenty-five,
(30:22):
referring of course to their financial year twenty twenty-five, adding that the FY twenty-five Copilot focus is going to massively fail, that they knew of big Copilot deals in their country that have less than twenty percent usage after almost a year of integration, and that the cost is too much and Microsoft's huge AI investments are not going to be realized.
(30:44):
While Blind is anonymous, it's kind of hard to ignore the fact that there are many, many posts that tell a tale of a kind of cultural cancer within Microsoft, with disconnected senior leadership that only funds projects if they have AI tacked onto the side. Many posts lament Satya Nadella's word-salad approach and complain of a lack of bonuses or upward mobility in an organization focused
(31:06):
on chasing an AI boom that may not exist. And at the very least, there's a deep cultural sadness there, with the many posts I've seen oscillating between I don't like working at Microsoft and I don't know why we're putting so much into AI, and then someone replying with get used to it, Satya doesn't give a shit. And it all feels so ridiculous, because there are so many signs that these products don't have product-market fit. At
(31:28):
the start of this episode, I mentioned an article from
the Information about a lack of adoption of Microsoft's AI features.
Buried within that one was a particularly worrying thought about
the actual utilization of their data centers for this AI. It said, and I quote: around March of this year, Microsoft had set aside enough server capacity in its data centers for 365 Copilot to handle daily users
(31:49):
of the AI system in the low millions, according to someone with direct knowledge of those plans, and it couldn't be learned how much of that capacity was used at the time. Based on The Information's estimates elsewhere, Microsoft has somewhere between four hundred thousand and four million users of its Office Copilot features, meaning that there's a decent chance that Microsoft has built out capacity that isn't getting used. Now, one
(32:10):
could argue that it's building with the belief that the product category will grow. But here's another idea: what if it doesn't? Huh? Ah, what do you think? What if, and this is crazy, Microsoft, Google, and Amazon built out these massive data centers to capture demand that may never arrive? I realize I sound a little crazy saying this, but back in March I made the point that I could
(32:31):
find no companies that had integrated generative AI in a way that truly benefited their bottom line, and just under six months later, I'm still looking. The best that big companies appear to have done is staple AI onto existing products in the hope that it helps shift them, something that does not seem
(32:52):
to be working either. It doesn't work for Microsoft, it doesn't work for Box, and it doesn't seem to be working anywhere, as I'm not sure any of these AI upgrades give any kind of significant business value. Now, while there may
be companies integrating AI that are driving some degree of spend on Microsoft Azure, Amazon Web Services, and Google Cloud, I don't know how much it is, considering what I said last
(33:12):
episode about how OpenAI was only making about a billion dollars licensing out their models. And I hypothesize that much of this demand is driven by investor sentiment, because companies everywhere in the economy are right now being pushed to invest in AI without really knowing if it will work, or whether it's useful, or whether their users will like it. Nevertheless, these companies have spent a great
(33:35):
deal of time and money baking generative AI features into their products, and I think they're going to face one of a few different scenarios. Scenario the first: after developing and launching these features, these companies are going to find customers don't want to pay for them, as Microsoft is finding with 365 Copilot, and if they can't find a way to make customers pay for them now, they're going to be really hard-pressed when nobody's telling them
(33:57):
to get in on AI. And then there's the second scenario: after developing and launching these features, these companies can't find a way to get users to pay for them, or at least pay extra for them, which means that everyone is going to have to bake the same thing into their products. Everyone's going to have to do this, because none of these companies are able to function without copying their competitors, which will turn generative AI into a kind
(34:19):
of parasite. Now, just to broaden out what I mean here: I looked across most of the software-as-a-service industry in a previous newsletter, and most of them are doing much the same thing. It's document summarization, document search, generation of stuff, so emails and the like, and summarization. Summarization can be emails, can be documents.
(34:43):
For the most part, that's what everyone is doing.
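And to show you how thin the moat is, here's roughly what every one of those AI summarization features boils down to, a minimal sketch using OpenAI's Python client; the model choice and prompt here are mine, and you could swap in Anthropic's client for the other half of the industry:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

def summarize(document: str) -> str:
    # Whoever ships this feature pays OpenAI (or Anthropic) per token;
    # the differentiation and the margin live with the model provider,
    # not with the app wrapping it.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Summarize the document in three sentences."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content
```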
The problem is that everyone doing the same thing means that no one can really make money off of it. And Jim Covello out of Goldman Sachs had the same worrying thought as me, which probably makes him smarter than me. I shouldn't think about that too much. Anyway, I mentioned previously in the last episode
(35:05):
the commoditization effect of these large language models, and I think there's going to be a further commoditization of the features themselves. If everyone summarizes email, now you have to do it too, because otherwise the customer can go, there's another product with that feature, I'm going to pay for that one because it's got more stuff in it. Except the feature in question is more expensive to provide. It's very worrying. But
(35:26):
in general, what I fear is a kind of cascade effect.
I believe that a lot of businesses right now are trying AI, and once those trials end, and Gartner predicts that thirty percent of generative AI projects will be abandoned after the proof of concept by the end of twenty twenty-five, these companies are going to stop paying for the extra features or stop integrating generative AI into their products. If this happens, it will reduce the already kind of shitty revenue flowing
(35:49):
to the hyperscalers providing cloud compute or access to models for generative AI, which in turn could create more price pressure on these companies, making their already-negative margins even more sour. At that point, OpenAI and Anthropic will almost certainly have to raise prices. And what's fun is, they're already not making that much money from this. So we're in this weird situation where it isn't obvious which it's going to be:
(36:13):
is it that they're going to have to raise prices, or that no one wants to pay them, or some combination of both? It's also important to note that the hyperscalers are also terrified of pissing off Wall Street, and I really mean that; one of them will eventually blink. And while they could theoretically do the layoffs and cost-cutting measures I've mentioned, these are short-term solutions that don't really work against burning billions, tens of billions, like more
(36:37):
than fifty billion a year for each of them. How are you going to cut enough to bankroll that? But in any case, putting aside the amount of money they're having to invest, it might be time to accept that there really isn't money here in generative AI. It might be time to stop and take stock of the fact that we're in the midst of our
(36:58):
third delusional epoch, our third stupid idea that everyone claims is the future. But unlike cryptocurrency and the metaverse, everyone seems to have joined this party, and everyone's decided to burn as much money as humanly possible on this unsustainable, unreliable, unprofitable, environmentally destructive bullshit, sold to customers and businesses as artificial
(37:21):
intelligence that'll automate everything, without ever having a path to do so. Because that's the thing: none of this is even AI. This isn't automation. It's generation, generation in different hats, and it burns the world around us to provide it. But you know, I don't think the following is going to burn the world. In fact, I
(37:42):
think it could really make your life better, and I need you to directly and voraciously engage with the following advertisements. And we're back. So, you might ask, why does this
(38:07):
keep happening? Why do we keep getting these stupid movements? Why did they tell us that cryptocurrency was the future? Why did they tell us the metaverse was the future? Why are they telling us that generative AI is the future, when none of these things from the very beginning looked like the future? There were signs from GPT, too, like, oh cool, you can generate entire things in, like,
(38:28):
a minute, wow, that's crazy. But past that point, past that moment of oh, you can do that, I guess, what was there? And why does this keep happening? It's the natural result of a tech industry that's become entirely focused on making each customer more valuable, rather than providing more value to the customer in exchange for, I don't know,
(38:48):
money or attention. The products you're being sold today almost certainly try to wed you to a particular ecosystem, one owned by Microsoft, Apple, Amazon, or Google, as a consumer at least, and incrementally increase the burden of leaving said ecosystem. Imagine trying to move all of your Subscribe and Save shit off of Amazon. Imagine trying to move
(39:09):
from iOS to Android, or vice versa. It's not that easy, and that's by design. Everything is about further monetization, about increasing the dollar-per-head value of each customer, be it through keeping them doing stuff on the platform to show them more advertising, upselling them new features that are only kind of useful or previously were free, or creating some
(39:31):
war chests a big tech can really play, and very
very little about this is about delivering any kind of
real value or utility or thing that you, the customer
might like. Generative AI might not be super useful, but
it's really easy to integrate into stuff and make new
things happen, creating all sorts of new things that the
company could theoretically charge for, both for a customer and
(39:52):
an enterprise customer. Sam Altman was smart enough to realize that the tech industry needed a new thing, a new technology that everybody could take a piece of and sell. And while he might not really understand technology, Altman understands growth and the lust that the economy has for growth, and he's productized transformer-based architecture as something that everybody could sell, a magical tool that could plug into things
(40:14):
and kind of connect to an ephemeral concept like AI.
The problem is that the desperation to integrate generative AI everywhere has shone a pretty nasty light on how disconnected these companies are from actual consumer needs, or even from running good companies. Like, really, I'm not even being facetious.
(40:34):
I would genuinely like it if this stuff was useful. I like useful things. There would be ethical concerns about the copyright theft and such, but I would at least tip my hat to them if I could find something, anything, that I looked at and could say, wow, that's really useful in my daily life. I got nothing, and I've really looked. You can email me at ez, that's E-Z, at
(40:56):
betteroffline dot com, if you have one. But I've yet to be impressed by one of those emails, so please try harder.
And the really worrying part is that other than AI,
many of these companies don't seem to have any other
new products. What else is there? What things do they have to grow their companies? No, really, what
(41:18):
do they have? The new iPhone? I bought the new iPhone. I'm a little pig, oink oink. I bought the iPhone, I bought the new one, and I've bought it every year. I am that guy: I sell the old one and buy the new one. This is the first year, I think, from the beginning, where I bought it and was like, why did I do that? Man, what does this do? And that's because I think we're hitting a wall. This
(41:39):
is the rot-com bubble I talked about a few months ago.
They've not got anything. There's nothing, They've got nothing. And
that really is the problem, because when everything falls, when
everyone realizes, when the markets look at tech and say, wow,
you're not going to grow forever. You're not going to
come up with a new whiz-bang that you can
market to everyone and make billions in returns. You're not
(42:01):
going to do that. No, they're not going to react
well at all, because when you take away the massive
growth that tech has, you have a very annoying industry
full of annoying young people that will piss off the markets.
They will piss off those with the money. The tech
industry has a terrible rep with the government and a
(42:21):
terrible rep with society. The reevaluation of these companies will
be merciless, and there are very few friends left, and
I think there will be a cascade down to the
other companies in the tech space, just in the same
way that it will hit workers who will get laid
off when all of this falls apart. Despite none of
these people doing anything wrong other than the people up
(42:44):
top having no creativity, no real innovation, and no understanding
of real people's problems. I hypothesize a kind of subprime AI crisis is brewing, where almost the entire tech industry has bought in on a technology sold at an insanely discounted rate, heavily centralized and subsidized by big tech companies like Microsoft, Amazon, and Google. At some point, this incredibly toxic burn rate
(43:07):
is going to burn through generative AI, and it's going to catch up with them. And when the price increases come, or companies realize that these features are not that useful and they see the lack of user adoption, they're going to start getting nervous. But right now, we're in the piss-take section of the economy. Right now we're seeing the egregious shit, like Salesforce charging two dollars a conversation
(43:30):
for their new Agentforce product. But eventually the markets will catch up, because the money isn't there. And when these prices go up, I'm not confident that we'll have much of a generative AI industry left. And that's assuming that these companies still have enough money; it's assuming that OpenAI is able to raise another six and a
(43:52):
half billion dollar round in the next six to eight months. How long can they do that for? How many times? How many years are VCs willing to prop up OpenAI? How many years is Microsoft ready to burn capital to make, what, a billion or two on generative AI? This is embarrassing. It's bad business and it's bad product.
(44:15):
Satya Nadella, Sundar Pichai, Sam Altman, the whole lot of them: they should be absolutely fucking ashamed of themselves. They're an insult to innovation, an insult to Silicon Valley, and an insult to their consumers. And what happens, you tell me this, when the tech industry, the entire tech industry, relies on the success of a kind of software that only loses money and doesn't create
(44:36):
much value when it does so? And what happens when the heat gets too hot, and these products become impossible to reconcile with, and everyone realizes that none of these companies have anything else to sell? I really don't know. I'm scared. I'm not trying to do FUD here, that's fear, uncertainty,
(44:59):
and doubt, I'm told to spell these things out. But I am worried, because really the only alternative to what I'm saying is that they magically make this profitable, that they just keep doing this until it goes into the green, despite no one appearing to know how, despite there not being a path there. How willing are you to believe
(45:20):
them after they've lied to you for so many years? How ridiculous is this, really? How ridiculous have you been thinking this is? How much can you let them coast on they'll work it out? Because they haven't. They haven't worked it out for a while. It's been over a decade since the last significant consumer tech innovation. There's been a ton on the chip side, but what is there for you
(45:42):
and I? Not really much. And I don't think there's much in this industry either. And I worry that the tech industry is building towards a really grotesque reckoning, with a total lack of creativity, enabled by an economy that rewards growth over innovation, monopolization over loyalty, and management over those who actually build things. The people in control of the tech industry are not the ones who built it.
(46:04):
These people are management consultants. Even Sam Altman is one of them. These people are superficially interesting and superficially smart, just like ChatGPT. And I worry, I worry so much. So promise me, dear listener, that the next time someone tells you they'll work it out, that this stuff is the future, tell them some of this shit, send them the podcast,
(46:26):
or just yell at them at the top of your voice. You don't even need to use words. But I'm so grateful to have you as listeners. Thank you for listening to Better Offline. The editor and composer of the Better Offline theme song is Matt Osowski. You can check out more of
(46:47):
his music and audio projects at mattosowski dot com, that's M A T T O S O W S K I dot com. You can email me at ez at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash Better
(47:08):
Offline to check out our Reddit. Thank you so much for listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.