Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media.
Speaker 2 (00:05):
Hello, and welcome to Better Offline. I'm your surly yet
lovable host, Ed Zitron. Today I'm going to kick off
by reading something I wrote in March twenty twenty four
and talked about in the episode Peak AI. What if what
(00:28):
we're seeing today isn't a glimpse of the future but
the new terms of the present. What if artificial intelligence
isn't actually capable of doing much more than what we're
seeing today, and what if there's no clear timeline when
it will be able to do more? What if this
entire hype cycle has been built on hot air, goosed
by a compliant media, ready and willing to take career
embellishers at their word. Reading that back, well, I think
(00:51):
I might have been right. And that's kind of what
I'm going to get at today. I don't want to
gloat. I'm not going to get smug about it.
But this is what we're getting into today and in
the next episode that will come out on Friday. Now,
I'll be linking to some articles, so check the episode
notes if you want to read them. But I'm going
to get a lot into the spoken word.
Speaker 3 (01:08):
So I warned you in February.
Speaker 2 (01:10):
That generative AI had no killer apps and no
way of justifying its valuations. I also warned you in
March that generative AI had already peaked, and I pleaded
with the tech industry in April to consider an eventuality
where the jump between GPT four, which is the most
current model, or GPT-4o, and GPT five
was not significant, in part due to a lack of
training data, one of the more obvious things. I shared
(01:33):
more concerns in July that the transformer based architecture underpinning
generative AI, things like ChatGPT, was a dead end,
and that there were really not many ways we'd progress
past the products we'd already seen back then, in part due
to the limits of training data and the
limits of the models that use said training data. In August,
I summarized the pale horses of the AI apocalypse, events,
(01:54):
many of which have now come to pass, that would
signify the end being nigh, though it's not quite
here yet and it's not obvious when it will be.
But this can't last forever. I also added that
GPT five would not change the game enough to matter,
let alone provide a new architecture to build future and
more capable models or products of any kind. Now, throughout
(02:15):
the things I've written and the things I've spoken, I've
repeatedly made the point that, separate to any core value proposition,
training data drought, or unsustainable economics that I've gone over
quite a lot, generative AI is a dead end due
to the limitations of a probabilistic model that hallucinates. Now,
just to be clear about what that means, it's guessing
what the next thing might be, and it's quite good
at it, but quite good is actually
Speaker 3 (02:35):
Kind of shit.
Speaker 2 (02:36):
And hallucinations, of course, are where it authoritatively states
things that aren't true, like when ChatGPT tells you
something like, I don't know, there are two R's in strawberry.
The hallucination problem is one that is nowhere closer to
being solved. You may remember a few months ago when
you had every tech executive, your Tim Cooks and
Satya Nadellas and Sundar Pichais, saying we'll deal with the hallucination problem.
(02:58):
It'll be all right. But I want to be clear:
they have not solved it, they have not really mitigated it,
and there's no fixing it, at least with the current technology.
It's not going anywhere, and it makes all of this
stuff kind of a non-starter for many business tasks.
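To make that "guessing the next thing" point concrete, here's a minimal toy sketch of next-token prediction, nothing like a real transformer, with a made-up word-frequency table, showing how a model can fluently produce a confident answer that happens to be wrong:

```python
import random
from typing import Optional

# Toy bigram "language model": counts of which word followed which word in
# some hypothetical training text. A real LLM has billions of parameters and
# a transformer instead of a lookup table, but the core move is the same:
# pick a statistically likely continuation, with no notion of whether it's true.
next_word_counts = {
    "strawberry": {"has": 8, "tastes": 2},
    "has": {"two": 6, "three": 2},   # "two" just happens to be more common in this toy data
    "two": {"R's": 7, "seeds": 1},
}

def sample_next(word: str) -> Optional[str]:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_word_counts.get(word)
    if not counts:
        return None  # nothing learned about this word, so stop generating
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

def generate(start: str, max_len: int = 5) -> str:
    out = [start]
    for _ in range(max_len):
        nxt = sample_next(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

if __name__ == "__main__":
    # Will usually print "strawberry has two R's": a fluent, confident,
    # and wrong answer, because it's guessing, not counting.
    print(generate("strawberry"))
```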
I have since March expressed great dismay about the credulousness
of the media about this, and their weird acceptance of
(03:20):
this inevitable way in which generative AI will change society,
despite the fact there's not really a meaningful product that
might justify any of this bullshit, this environmentally destructive nonsense
led by a company that burns more than five billion
dollars a year, and big tech firms that are spending
two hundred billion dollars on data centers for products that
people don't want or potentially even use. And you're going
(03:42):
to need context for everything I'm saying today. So it's
worth going over how these models work and how they're trained.
And I must be clear, the reason I'm repeating myself
on so many levels here is that it's just really
important for you to know how obvious the problems of
generative AI have been since the beginning. It's really important.
Let's go over how they work real quick. A transformer
(04:03):
based generative AI model such as GPT, which is the
technology behind ChatGPT, generates answers using inference, which means
it draws conclusions based off of its training, which requires
feeding it masses of training data, mostly text and images
straight from the Internet. And both of these processes require
you to use high-end GPUs, graphics processing units, and
(04:25):
lots of them, tens to hundreds of thousands of them, well
over one hundred thousand. I'll get to that next episode. Now,
the theory was, and might still be, that the more
training data and compute you throw at these models, the
better they get. And I've hypothesized for a while that
we'd hit diminishing returns, both from running out of training
data and from the limitations of transformer-based models.
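To illustrate that diminishing-returns idea: the scaling-law papers roughly model error as a power law in compute and data, meaning each extra tenfold of compute buys a smaller improvement than the last. Here's a rough sketch of that shape with entirely made-up constants, not anyone's actual measurements:

```python
# Rough illustration of why "just add more compute and data" runs into
# diminishing returns. Scaling-law papers model loss as a power law in
# compute; the exponent and scale below are made up purely for illustration,
# not anyone's real figures.

def power_law_loss(compute: float, alpha: float = 0.05, scale: float = 10.0) -> float:
    """Hypothetical loss that falls as compute ** -alpha."""
    return scale * compute ** -alpha

if __name__ == "__main__":
    previous = None
    for exponent in range(20, 27):            # pretend compute budgets, 1e20 to 1e26 FLOPs
        loss = power_law_loss(10.0 ** exponent)
        gain = f"{previous - loss:.3f}" if previous is not None else "n/a"
        print(f"1e{exponent} FLOPs -> loss {loss:.3f} (gain from the last 10x: {gain})")
        previous = loss
```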
Speaker 3 (04:46):
And wouldn't you know it, I was bloody right.
Speaker 2 (04:48):
I'm not going to do many of these, but this one, really,
this one I'm right on. A few weeks ago, Bloomberg
reported that OpenAI, Google, and Anthropic are struggling to
build more advanced AI, and that OpenAI's Orion
model, otherwise known as GPT five, did not hit the
company's desired performance. And, and I quote, Orion
is so far not considered to be as big a step
(05:09):
up as it was from GPT three point five to
GPT four, its current model. You will be shocked to
hear that the reason is that it's become increasingly difficult
to find new, untapped sources of high quality, human-made
training data that can be used to build more advanced
AI systems, something that I said would happen in March,
and it pissed me off that people
said I was a pessimist. Well, who's a pessimist now? Me?
(05:32):
I guess I don't know. But they also added one
other thing, which is that they believe and I quote,
that the AGI bubble is bursting a little bit, which
is something I said in July. AGI isn't coming out
of this shit. Let's just be honest. And I also
want to stop and stare really hard at one particular point,
and I quote again from Bloomberg. These issues challenged the
(05:52):
gospel that's taken hold in Silicon Valley in recent years,
particularly since OpenAI released ChatGPT two years ago. Much
of the tech industry has bet on so-called scaling
laws that say more computing power, data, and larger models
will inevitably pave the way for greater leaps forward in
the power of AI. The only people taking this as gospel
have been members of the media unwilling to ask the
tough questions, and AI founders that don't know what the
(06:14):
fuck they're talking about, or they intend to mislead you.
Generative AI's products have effectively been trapped in amber for
over a year. It's been blatantly obvious if you fucking
use them, and I'm pissed off. I shouldn't swear so much.
There have been no meaningful, industry-defining products out of
this because, and I quote Daron Acemoglu, the economist
at MIT, back in May, more powerful models do not
(06:35):
unlock new features or really change the experience, nor is what
you can build with transformer-based models really a
worthwhile product. Or put another way, a slightly better white
elephant is still a white elephant. Despite the billions of
dollars burned and thousands of glossy headlines, it's difficult to
point to any truly important generative AI product, even Apple Intelligence,
(06:57):
the only thing that Apple really had to add to
the latest iPhone. It sucks, it's not useful. I can
make a special emoji now. I now get summaries of
my texts that are completely or vaguely incorrect or just
summarize a giant, meaningful paragraph into a blob of a sentence.
It's so stupid. And just as a side question, what
(07:20):
the hell is Apple going to put in the next iPhone?
I buy one of these every year. I'm a little
bit, oink oink, but still, I don't even know why
I'd upgrade again. The camera is already about as good
as it's going to get.
Speaker 3 (07:32):
Anyway.
Speaker 2 (07:32):
There are people that use ChatGPT, two hundred million
of them a week, allegedly losing the company money with
every prompt, by the way, but there's little to suggest
that there's widespread adoption of actual generative AI software. The
Information reported in September that between zero point one percent
and one percent of Microsoft's four hundred and forty million
business customers were willing to pay for its
AI-powered Copilot, and in late October, Microsoft claimed that
(07:56):
it was on pace to make AI a ten billion
dollar a year business, which sounds really good until you
think about it for roughly ten seconds. First of all,
Microsoft does not have an AI business unit, which means
that this annual ten billion dollars or two and a
half billion a quarter revenue figure is split across providing
(08:17):
cloud compute services in Azure, selling Copilot to the dumb people
with Microsoft three sixty five subscriptions, selling GitHub Copilot,
and basically anything else with AI on it. Microsoft is
cherry picking a number based on nonspecific criteria and claiming
it's a big deal when it's actually pretty pathetic, considering
that Microsoft's capital expenditures will likely hit over sixty billion
dollars in twenty twenty four with no sign they're going
(08:39):
to slow down. Note that sticky word: revenue, not profit.
Those are two very different things. How much is Microsoft
spending to make ten billion dollars a year? OpenAI
currently spends two dollars and thirty five cents to make
a dollar, and Microsoft CFO Amy Hood said that OpenAI
would cut into Microsoft's profits in their last earnings call,
losing it a remarkable one point five billion dollars, mainly
(09:02):
because of the expected loss from a company that has
only ever lost money. Now, a year ago, in October
twenty twenty three, The Wall Street Journal reported that Microsoft
was losing an average of twenty dollars per user per
month on GitHub Copilot, a product with over a million users.
If this is true, by the way, this suggests losses
of at least two hundred million a year. They have
(09:22):
one point eight million users, allegedly; this is based on
documents they've reviewed. It's not great either way. That two
hundred million dollars is a lot of money to lose.
I would personally like to make two hundred million dollars
rather than lose it. Don't ask me, though I don't
run Microsoft.
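Just as a quick back-of-the-envelope check of those figures, using only the reported and alleged numbers above, nothing independently verified:

```python
# Back-of-the-envelope check of the figures mentioned above. These are the
# reported/alleged numbers from The Information and The Wall Street Journal,
# not anything independently verified.

microsoft_business_customers = 440_000_000
paying_share_low, paying_share_high = 0.001, 0.01    # 0.1% to 1% willing to pay for Copilot

copilot_users_reported = 1_000_000    # "over a million users" per the WSJ story
copilot_users_alleged = 1_800_000     # the alleged later figure
loss_per_user_per_month = 20          # dollars, per the WSJ report

print(f"Paying Copilot customers: {microsoft_business_customers * paying_share_low:,.0f} "
      f"to {microsoft_business_customers * paying_share_high:,.0f}")

for users in (copilot_users_reported, copilot_users_alleged):
    annual_loss = users * loss_per_user_per_month * 12
    print(f"{users:,} GitHub Copilot users x ${loss_per_user_per_month}/month "
          f"= ${annual_loss:,.0f} lost per year")
```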
Speaker 3 (09:39):
Now.
Speaker 2 (09:39):
Microsoft is still yet to break out exactly how much
generative AI is increasing revenue in the specific business units
they have. Generally, if a company's doing well at something,
they take great pains to make that clear. Instead, Microsoft
chose in August to revamp its reporting structure to give
better visibility into cloud consumption revenue, which is something you
do if you, say, anticipate you're going to have your
worst day of trading in a year after your next earnings,
(10:01):
as Microsoft did in October. It's all very good, it's
all going well. Now, I must be clear that every
single one of these investments and products has been
hyped with the whisper that they would get exponentially better
over time, and that eventually the two hundred billion dollars
in capital expenditures would spit out this remarkable productivity improvement,
this crazy new product that would change our lives, fascinating
(10:22):
new things that consumers and enterprises would buy in droves and
talk about how much they loved. Instead, big tech has
found itself peddling increasingly more expensive iterations of near identical
large language models and shitty products attached to them, a
direct result of all of them having to use the
same training data, which they're now running out of. But
if you're running out of stuff and you can't find
(10:42):
stuff to buy, I really recommend the following advertisement. I'm
sure it will totally gel with my beliefs. The things
I'm talking about right now won't be embarrassing at all,
and we're back now. There's another assumption that people have
(11:05):
about these so-called scaling laws. That's that by simply
building bigger data centers with even bigger, more powerful GPUs,
the expensive, power-hungry graphics processing units that are used to
both train and run these models, and throwing as much
training data at them as possible.
Speaker 3 (11:22):
They would simply start doing new things.
Speaker 2 (11:24):
They'd have new capabilities, despite there being little proof that
they would do so in any way, shape, or form. Microsoft, Meta, Amazon,
and Google have all burned billions on the assumption that
doing so would create something, you know, a thing, a
good thing, like a human-level artificial general intelligence, or
a product that made more money than it cost and that
(11:44):
people liked. It's become kind of obvious that that isn't
going to happen. As we speak, members of the media
who should know better are already desperately trying to prove
that this is not a problem. The Information, in a
similar story to Bloomberg's, attempted to put lipstick on the
pig of generative AI, framing the lack of meaningful progress
of GPT five as fine because OpenAI can now combine
(12:07):
its GPT five model with its o1 reasoning model, which
is the one that can't count the number of R's
in strawberry, by the way, which will then do
something, something good. Something's gonna happen. Like Sam Altman said, it
could write a lot more very difficult code. You know, Sam Altman,
the career liar who intimated that GPT five may function
(12:29):
like a virtual brain in May. Like, these people are liars.
They're liars, they're lying to you. They were lying then
they're lying now. Now, I couldn't possibly leave out chief
AI cheerleader Casey Newton, who wrote on Platformer a
few weeks ago that diminishing returns in training models may
not matter as much as you would guess, with his
evidence being that Anthropic, who he also claims has not
been prone to hyperbole, does not think the scaling laws
(12:52):
are ending. Now, the original scaling laws paper was partly written
by Dario Amodei of Anthropic, which is important to know. And to
be clear, in a fourteen thousand word op-ed that Casey
Newton for no reason wrote two pieces about, Anthropic CEO Dario Amodei.
Speaker 3 (13:07):
He said that, and I quote, AI
Speaker 2 (13:09):
accelerated neuroscience is likely to vastly improve treatments for, or
even cure, most mental illness, which is the kind of
hyperbole that should have you tarred and feathered and
put in jail. I'm not seriously saying we put
him in jail. But why are we, why are we
trusting these people? Why are we listening to them? Why
are we treating them as if they're telling the truth
(13:29):
or even that they know what's going on? But let's
summarize: the main technology behind the entire, and I say
this in quotation marks by the way, artificial intelligence boom
is generative AI, transformer-based models like OpenAI's GPT
four and soon GPT five, and said technology has peaked,
with diminishing returns from the only ways of making them better,
(13:49):
feeding them training data and throwing tons of compute at them,
suggesting that we may have, as I said before, reached peak AI.
Generative AI is incredibly unprofitable. OpenAI, the biggest player
in the industry, is on course to lose more than
five billion dollars this year, with competitor Anthropic, which also
makes its own transformer-based model, Claude, on course to
lose more than two point seven billion dollars this year.
(14:10):
They just raised another four billion. Every single big tech
company has thrown billions of dollars, as much as seventy
five billion dollars in Amazon's case in twenty twenty four alone,
at building the data centers and acquiring the GPUs to
populate said data centers, specifically so they can train their
models and other people's models, or serve customers that would
integrate generative AI into their businesses, something that does not appear
(14:30):
to be happening at scale, and these investments could theoretically
be used for other products, but these data centers are
heavily focused on generative AI. Business Insider reports that Microsoft
intends to amass one point eight million GPUs by the
end of this year, costing it tens of billions of dollars.
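For a rough sense of scale, assuming something like twenty-five to forty thousand dollars per data-center GPU, which is my assumption for illustration, not a reported figure, that arithmetic does land in the "tens of billions":

```python
# Rough sanity check on the Business Insider GPU figure. The per-GPU prices
# below are assumptions for illustration, not reported numbers.
gpus = 1_800_000
price_low, price_high = 25_000, 40_000   # assumed dollars per H100-class GPU

print(f"{gpus:,} GPUs x ${price_low:,}-${price_high:,} each "
      f"= ${gpus * price_low / 1e9:.0f}B to ${gpus * price_high / 1e9:.0f}B")
```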
Worse still, many of these companies integrating generative AI do so
by connecting to models made by either OpenAI or Anthropic,
(14:53):
both of whom are running unprofitable businesses and likely charging
nowhere near enough to cover their costs. As I've said
before in my article The Subprime AI Crisis, in the
event that these companies start charging what they actually need
to cover their real costs, I hypothesize that it will multiply
the costs for their customers to the point that they
can't afford to run their businesses, or at the very least,
they'll have to remove or scale back generative AI functionality in their products.
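Here's the rough shape of that argument as a hypothetical sketch. The two dollars and thirty five cents per dollar of revenue is the OpenAI figure reported earlier; the customer's revenue, costs, and API bill are made-up illustrative numbers, not anyone's actual books:

```python
# Hypothetical sketch of the "what if they charged their real costs" point.
# The 2.35 cost-to-revenue ratio is the OpenAI figure reported earlier; the
# customer's revenue, costs, and API bill are made-up illustrative numbers.

cost_per_dollar_of_revenue = 2.35        # provider spends $2.35 to make $1.00
break_even_multiplier = cost_per_dollar_of_revenue   # minimum price hike just to break even

customer_monthly_revenue = 120_000       # hypothetical startup with an AI feature
customer_other_costs = 60_000            # hypothetical payroll, hosting, everything else
customer_api_bill = 50_000               # hypothetical current spend on model APIs

old_profit = customer_monthly_revenue - customer_other_costs - customer_api_bill
new_api_bill = customer_api_bill * break_even_multiplier
new_profit = customer_monthly_revenue - customer_other_costs - new_api_bill

print(f"API bill: ${customer_api_bill:,} -> ${new_api_bill:,.0f} per month")
print(f"Monthly profit: ${old_profit:,} -> ${new_profit:,.0f}")
```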
(15:14):
It's just, it's such a waste. The
entire tech industry has become orientated around this dead-end
technology that requires burning billions and billions of dollars to
provide inessential products that cost them more money to
serve than anybody would ever pay. Their big strategy seems
to be to throw more money at the problem until
one of these transformer-based models creates something useful. Despite
(15:38):
the fact that every iteration of GPT and other models
has been, well, iterative, and it's weird, you'd think at
some point they'd go, shit, do we actually
Speaker 3 (15:49):
Have the ability to build products with this? What are
the products?
Speaker 2 (15:53):
Maybe we should work out the products first before we
throw all the capex at it. But wait, no, oh,
over yonder, I couldn't possibly not do this, because the
other big tech companies that also have no ideas, they're
doing this. And if I don't do this, my investors
are going to be angry at me. And then what
will I do? Oh no, oh no, what could I
(16:15):
possibly do about the investors? I don't fucking know, that's
your problem. Why waste this much money? It's just, there's
never been any proof other than these benchmarks that are
really easy to game and also only show just this
vague power of these models. It's been obvious that GPT
(16:35):
or other models wouldn't become conscious, that they're not going
to do more than they do today or three months
ago or even a year ago. I hesitate to give Gary
Marcus credit, but in twenty twenty three he was saying
this if not earlier. Many people have as well, and
it's just really really, really really frustrating. Better Offline isn't
even a year old. But when we put out our
Peak AI episode, I got so much flak. I got so
(16:58):
much shit for being a hater, that I didn't really understand things,
that my fly was open in my Instagram picture, that
I didn't get it, and that in mere months I
would be proven wrong.
Speaker 3 (17:06):
Well, here we are. How wrong am I now?
Speaker 2 (17:08):
What happens next? Exactly where do all these hundreds of
billions of dollars go? What happens to OpenAI when
it collapses? What does Microsoft do with all of these GPUs?
Because you can't just move them into other shit. You
know, from what I hear, they don't really have a plan.
And that's the scariest thing, because what happens to a
stock market that's dependent on big tech companies for growth
(17:30):
when the big tech companies can't work out a way
to grow anymore, and in fact, their big path to
try and grow more was to burn a shit
ton of money on things that people hate, that destroy
our environment. I know, I know, I'm angry. I know
I should calm down. I should. But as I said
in the Rot Society, this money could go elsewhere, more
(17:52):
things could be done. We would enter a fallow period
of tech, but we don't just have to burn all
this money. We don't have to do that. Why not
make the products you already have better? Because stapling generative
AI onto them, I think, makes them worse. But
there are more problems ahead. There are problems around the infrastructure.
(18:13):
And in the next episode, I'm gonna break down these
worrying problems and I'm gonna kind of tell you what
happens next as best I can. I really appreciate your
faith in me. And there are many people who also
contacted me and said, no, you're bang on, keep going.
I'm glad they did. I'm very grateful for you, my audience.
I love you all much like you said in the menu,
(18:42):
thank you for listening to Better Offline. The editor and
composer of the Better Offline theme song is Matt Osowski. You
can check out more of his music and audio projects
at mattosowski dot com, M-A-T-T-O-S-O-W-S-K-I dot com.
You can email me at
ez at betteroffline dot com, or visit Better Offline
dot com to find more podcast links and of course
(19:03):
my newsletter. I also really recommend you go to chat
dot wheresyoured dot at to visit the Discord, and
go to r slash BetterOffline to check out our Reddit.
Thank you so much for listening.
Speaker 1 (19:14):
Better Offline is a production of Cool Zone Media. For
more from Cool Zone Media, visit our website coolzonemedia
dot com, or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you get your podcasts.