
June 15, 2023 34 mins

Robert sits down with Garrison and James to talk about the hype surrounding AI, and how popular AI news stories are nearly all lies.



Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Ah, welcome back to It Could Happen Here, a podcast.
It's a podcast. I'm Robert Evans, and with me
today are Garrison Davis and James Stout.

Speaker 2 (00:20):
Hello. A Canadian, a Brit, and a Texan walk
into a podcast.

Speaker 1 (00:26):
Yeah, welcome to a podcast. Only two of them can
drink in a bar.

Speaker 2 (00:32):
That's not true in Canada.

Speaker 1 (00:34):
We can all drink in a bar now. Garrison, a
moment ago, you were holding your hand above a lit
candle in a way that reminded me of G. Gordon Liddy,
the Nazi who masterminded the Watergate break-in, and who, in order
to convince people that he was a hard man, would
regularly burn the palm of his hand on a candle
while staring at them. A great guy, G. Gordon Liddy,

(01:03):
whom you don't know enough about. We'll talk about G. Gordon Liddy,
but today we're talking about something else: problematic artificial intelligence,
which is not a thing that exists anywhere. It is
instead a terrible, terrible error of terminology going back to, like, the
sixties. When we talk about all

(01:25):
of the things that people are, like, you know, flipping
out about as AIs, ChatGPT and Stable Diffusion and fucking
all these other sort of different programs: they're not intelligences.
You know, ChatGPT is a large
language model. They're all essentially, like, bots that you train

(01:47):
to understand what the likeliest
appropriate response is to, like, a given prompt.
That's kind of the broadest way to explain it.
It's complicated, and they're, you know, very useful. But
obviously if you've been paying attention to the world right now,
there's just a whole bunch of bullshit about them, and

(02:08):
I think to kind of make sense of why we're
seeing some of the shit around AI that we're seeing.
And for a little bit of specificity, there have been
like this kind of endless series of articles around this
open letter signed by a bunch of luminaries in the
AI field talking about how, you know, there need to
be laws put in place to stop it from ending

(02:29):
the world. You know, you've seen articles about, like, oh,
X percentage of AI researchers think that it could
destroy the planet, destroy the human race, that kind of thing.
Most recently, the biggest article, the biggest like viral hype article,
was that the Pentagon had supposedly been testing an AI

(02:49):
like missile system that blew up its operator in a
simulation because the operator was trying to stop it from
from firing or whatever. It was bullshit. So what
actually happened? Vice ran with the article, which
was very much a thing Vice would do, flipping out about
how horrifying, you know, our AI weapons future is. And like, yeah,

(03:09):
we shouldn't give AI the ability to like kill people,
but that's not at all what happened. Basically, a bunch
of army nerds or air force nerds were sitting around
a table doing the D and D version of like
military planning, where you say, what if we did this,
what kinds of things could happen if we built this system?
And another guy around the table said, oh, well, if
we build the system this way, it might conceivably attack

(03:32):
its operator, you know, in order to optimize for this
kind of result. Which is, like, not scary. It's
just people talking through, like, a flow chart
of possibilities around a fucking table. You don't need to
worry about that. There's so many other things to worry about.
New York City is blanketed in a layer of smog
so thick you could cut it with a butter knife. Like,

(03:52):
don't flip out about AI weapons just yet, folks.
But I wanted to kind of talk about why this
shit is happening. And a lot of it comes down
to the fact that, when we're talking about the aspects
of, like, the tech industry that have an impact
outside of the tech industry, right, there's basically three jobs
in big tech. One job is creating iterative improvements on

(04:16):
existing products. These would be the teams of folks who
are responsible for designing a new iPhone every year, right,
every couple of years. Lenovo puts out a new series
of think pads and idea pads every couple of years.
You know, you get a new MacBook every couple of years.
Razor puts out a new blade. This is you know,
these are the folks who kind of move along technology

(04:37):
at a relatively like steady pace for consumer devices. And
then you have the people who are responsible for kind
of what you might call the moonshot products. This is
a mix of the next big thing and doomed failures,
and it's often pretty hard to tell, you know, what's
going to be what ahead of time. A very good

(04:57):
example would be back in the nineties, when Apple put a bunch
of resources into launching an early tablet computer called the
Newton that was a fabulous disaster. And then in the
mid oughts they put a bunch of resources into launching
the iPad, which was a huge success. And when you
kind of think about like the folks doing this, like
working on the moonshot products. The most recent example would

(05:20):
be the team at Apple that was behind putting
together these new Apple goggles, which I
don't think are going to be a wildly successful product
in the way that they need it to be, like
a smartphone scale success. But this is an example of
like a thing that didn't exist and a bunch of
people had to invent new technologies or new ways to
combine technologies in order to make it exist. The third

(05:41):
kind of job that the tech industry has, broadly speaking,
are con men, right. And the state that we are in
in the industry right now is that every major tech
company is run by some form of con man. Right,
Tim Cook is, you know, kind of the least con-y
of the con men among them. But, like, Mark Zuckerberg obviously

(06:04):
is a fucking flim flam artist, you know, and you
can see this with the huge amount of money, like
it's something like eleven billion dollars at least that Facebook
pumped into this bullshit metaverse scheme that like Apple barely
even talked about during their event unveiling like a headset
that has VR potential in it. I'm getting away from

(06:25):
myself here. Kind of the point that I'm making is
that you can often have very real products. There's actual
technology going into the Apple glasses, marketed by a con man flim-flam
artist. This is not always, like, a bad thing, right?
Steve Jobs was a con man, and it worked out
pretty well for him because it just so happened that
he had a decent enough idea of

(06:47):
what the tech was capable of that it was able
to kind of meet the promises he was making in
more or less real time. An example of what happens,
you know, pretty spectacularly when that's not the case is
what we saw with Theranos and Elizabeth Holmes, who started
prison last week, right? You've got these promises being made
by the con man and the people who are responsible
for the moonshots can't make it work. I'm bringing this

(07:10):
up right now because there's a lot of folks, I
think, who believe that the actual
potential of AI has been proven in a spectacular way,
because the tools that have been released are able to
do cool things. And I think those people are missing
some key things that, like, might

(07:32):
cause one to think more critically about the actual potential
the industry has and also might cause one to think
more critically about how earth shattering it's all going to be.
It's being taken kind of as read right now by
a lot of, particularly, journalists and media analysts outside of
the tech press, or, like, outside of, you know, the

(07:53):
dogged tech press, that like, well, this is going to
upend huge numbers of industries and put massive numbers of
people out of work. And, you know, if you sat down
in front of this chatbot and
had, like, a mind-blowing experience, that may seem credible.
But there's not the evidence behind that yet. If you actually
look at the numbers behind some of these different companies

(08:13):
and like how their usership has grown and how it's
fallen off, one of the things you've seen is that
a lot of these tools had this kind of massive
surge peak in terms of the number of people adopting
them and in terms of their profitability. You saw this
with like Stable Diffusion, right, and then this kind of
fairly rapid fall afterwards, not because people are like giving

(08:36):
it up forever or whatever, but because, like, once you
fucked around with it and generated some images or generated
some stories, there's not a huge amount to do unless
you're someone who's specifically going to be using this for
your job. And most of the people that wanted to
fuck around with a lot of these apps didn't have
long term use cases for them. This is why while

(08:57):
you've got, like, for example, Stability, which is the company,
or at least the main company, behind Stable Diffusion. It's
been valued at, like, four billion dollars, I think, last
time it was checked, but its annualized revenue is only about
ten million dollars. So that's a pretty significant gap. And
it's a pretty significant gap because the actual money in

(09:19):
AI so far isn't with the service providers really like
you've got some that have made in like the one
hundred million dollar range, although it's not entirely clear what
their margins are or what the kind of the long
term reliability of that profit is But the vast majority
of money in AI, like almost all of it has
been made by companies like Nvidia. And Nvidia jumped up

(09:40):
to become like a trillion dollar company as a result
of this, because the hardware needs of these products are
so intense, and obviously that shows there's money here for somebody.
But the fact that like a shitload of people got
curious about these apps and used them in quick

(10:00):
succession and then kind of dropped off isn't evidence that
we're seeing entire industries replaced so much
as it is evidence that a lot of people
thought this was interesting briefly. And so I think, kind
of when you look at the data, one of the
things that suggests is that we're heading towards a point
in AI, and I think we're probably going to hit

(10:22):
it within the next six months to a year, that
is broadly referred to as the trough of disillusionment.
And this is what happens when kind of the promises
of a new technology that are being made by the
hypemen or con men as I tend to call them,
meet with like the actual reality of its execution, which
in some areas is going to be significant. There are places,
I think medical research may be one of them. We'll talk

(10:45):
about that in a bit, where a lot of the
promises people are making about AI will be fairly quickly realized,
and then there are areas where it won't be. I
think content generation is one of those things. But yeah,
so that's kind of like what I'm seeing when I'm
looking at the broad strokes of where this technology is
here and kind of the gap between how people are

(11:06):
talking about it and what we're actually seeing in terms
of monetization. I want to talk a little bit now
about one of the guys, I would call
him kind of a con man, who's been a big
driver of the current AI push. He's a dude named

(11:29):
Emad Mostaque, and he's the founder of Stability AI, right,
the company behind Stable Diffusion, which is a text-to-image generator.
Before ChatGPT hit, this was like the
first really, really big mainstream AI thing. ChatGPT was
a lot larger, but Stable Diffusion came first, and, you know,
was critical behind, among other things, a lot of the

(11:52):
silliest NFT bullshit. And he's a really interesting dude. Like,
if you look at his own claims about his background:
he says that he's got an Oxford master's degree, that
he was behind an award-winning hedge fund, that he
worked for the United Nations in a really important capacity,
and also that he obviously founded this

(12:14):
AI bot. None of that's true. He has
a bachelor's degree from Oxford, not a master's degree.

Speaker 3 (12:21):
He did, well, that's what he's playing off. There's a thing
that happens where, like, if
you have a BA from Oxford, you can
get it to be an MA. It doesn't mean you did
a master's. It's just a wealthy-people flex. Yeah, it's
not a master's degree. You shouldn't quote it as that. If
you're quoting it as that, you're taking the piss.

Speaker 1 (12:40):
Yeah. Yeah, he's taking the piss, knowing no one's gonna
call him on it, or at least knowing that people wouldn't,
like, at large, loudly enough for it to matter
for him. He hasn't worked with the UN in quite
some time, and never did in a major capacity. He
did run a hedge fund that was successful in its
first year, but then got shut down in its second
year because he lost everybody's money. So like this is

(13:04):
what you see with this guy, if you
go through his, like, history: he's, like, chasing
hedge funds in the early aughts. He first gets in
with Stable Diffusion after COVID, and he's kind of, like,
billing it as, this is gonna help with, like, research
into trying to, you know, fight the COVID-19 pandemic,
and then he kind of pivots to, like, oh, this

(13:24):
is a great way to, like, make NFTs and shit,
you know, when that hit. Like, he's just sort
of, like, chasing where the money is, yeah, any way
he kind of can. And he's not, by the way,
the guy who wrote any of the source
code for this. That was done by, like, a group
of researchers, and he, you know, essentially, like, acquired it,
which is usually what happens here. Now, none of this

(13:45):
has stopped him from getting one hundred million dollars or
so in investments from various venture partners, and hasn't stopped
his company from getting this massive valuation. It hasn't stopped
the White House from inviting him to talk as part
of, like, a federal AI safety initiative. But it is
one of those things: when I kind of look into

(14:06):
this guy and kind of the gap between his claims
and what's actually happened and the claims that are being
made about the value of his company and what it's
actually like proved to be worth so far. I think
a lot about Sam Bankman-Fried, because a lot of
like the early writing around this guy was similar, and
a lot of the kind of shit that he's claiming
is similar. And yeah, I'm not sure if this is

(14:29):
a case where, because Bankman-Fried is one of these
people who, like Elizabeth Holmes, I think, backed the wrong
technology. Because it's fine in Silicon Valley, it's fine, generally
speaking, in capitalism, to lie about what a product can
do if you can, you know, fake it till you
make it. And maybe AI is there.
This guy may have made a good bet as to

(14:52):
the future, but that's kind of far from certain yet.
And it's just really clear how much of this
industry is being built by,
how many of the people running these AI
companies are dudes who managed, one way or another, either
through access to VC funding or, kind of, like, you know,

(15:12):
just being in the right place at the right time
to jump in on the bandwagon in the hopes that
they'll be able to cash out very, very quickly. I
found a good quote from a Forbes article talking about
like a big part of why guys like Mustock are
so interested in AI right now from a financial perspective,
And this is true, not just this was true about

(15:34):
like crypto before, but with AI, because there's more to the technology,
it's even more valid. Quote: venture
capitalists historically spend months performing due diligence, a process that
involves analyzing the market, vetting the founder, and speaking to
customers to check for red flags before investing in a startup,
but, start to finish, Mostaque told Forbes he needed just

(15:56):
six days to secure one hundred million dollars from leading
investment firms Coatue and Lightspeed once Stable Diffusion
went viral. The extent of due diligence the firms
performed is unclear given the speed of the investment. The
investment thesis we had is that we don't know exactly
what all the use cases will be, but we know
that this technology is truly transformative and has reached a
tipping point in terms of what it can do, Gaurav Gupta,

(16:17):
the Lightspeed partner who led the investment, told Forbes
in a January interview. So again, they're being like, yeah,
we're pumping tens of millions of dollars into this.
We don't know how it'll make money. It just seems
so impressive that it has to be profitable. Now, that
line is particularly funny, though maybe funny is the wrong word, when compared
alongside this paragraph from later in the article. In an

(16:40):
open letter last September, Democratic Representative Anna Eshoo urged action
in Washington against the open-source nature of Stable Diffusion.
The model, she wrote, had been used to generate images
of violently beaten Asian women and pornography, some of which
portrays real people. Bashara said new versions of Stable Diffusion
filtered data for potentially unsafe content, helping to prevent users
from generating harmful images in the first place. So it's like,

(17:06):
part of what's happening here is you've got this thing
that seems really impressive, and that is to some extent
because it's able to like remix stuff that exists in
a way that hasn't been done automatically before. But all
of these kind of valuations are based, number one, on
ignoring the problems with monetizing this stuff, including, like, the

(17:28):
still very much unsorted nature of how copyright's going to
affect this, and also like the question of is this
really worth that much money? Like, is
being able to generate kind of weird, slightly off-putting
AI images actually a huge business? Because,

(17:50):
like from where I'm seeing it, one of two things
is possible. Number one, this replaces all art everywhere and
so there's a shitload of money in it. Or number two,
this remains a way for, like, low-quality websites and,
like, Amazon drop-ship scammers, who are putting up
fake books on Kindle and whatnot to trick people using
keywords, to just

(18:12):
fill that shit out. Like, I don't see a whole
lot of room in the middle there. You know, maybe
I'm being, like, overly pessimistic there, but that's
where I'm sitting.

Speaker 2 (18:22):
I mean, some of the models we've seen used are
selling, like, subscription packs for access to these tools,
and access to use them for, like, commercial reasons. The
other thing we can see is just, like, corporations selling
to other corporations. Like, basically having Disney and Warner Brothers
be able to use this to generate concept art, and

(18:42):
now they don't need to pay concept artists, and instead
they just have pretty, like, nicely
curated tools for them to generate this type of AI
imagery. Those are kind of two of the biggest
use cases that at least I'm seeing right now, from
more on, like, the creative filmmaking art side of things.

(19:04):
Because, I mean, I don't think it's going to replace
all art. I think nobody
is actually thinking it's just going to replace all art,
just like photography did not replace all art. It just
changes the paradigm. And because this tool does
seem, like, specifically useful for the way that
we're seeing, like, corporations make the same movie every

(19:27):
five years, like, it's all
built on all of the same stuff. And I think
that that's how a lot of it's
gonna get used. It's gonna be a lot of weird
scam artists, people just messing around for fun, and then
people not paying, like, illustrators as much.

Speaker 1 (19:43):
Yeah, and I think that's kind of what I
see: this being adopted widely, but that's not the same
as it, like, being a huge success. Like, right now
I'm looking at an article that's estimating the current value
of AI in the US at one hundred billion
dollars, and that by twenty thirty, it'll be worth two
trillion US dollars. And it's like, I don't know, man,

(20:05):
like is.

Speaker 2 (20:06):
I mean, AI is more than just, like, Midjourney
image creation, right? There's also, like, OpenAI
and ChatGPT, and, like, AI is in everything we
use now. Like, yeah, AI is in your smartphone.
AI is going to be in your refrigerator soon. It's
not just image generation by any means.

Speaker 1 (20:23):
That kind of gets to what I'm saying, because
when you look at AI as a tool, it's
more of, like, a paintbrush than a painter. It's
a tool that will, like, augment or be used in
a lot of existing technologies, and I think a number of times
it may be used in a way that makes the
product worse. Well, that's
really different from, kind of, number one, the doom and gloom,

(20:44):
like this is an intelligence on its own that could
like overtake humanity. I think the worry is more like
this could get adopted on such a large scale
that it, like, makes a lot of shit worse. Like,
my biggest fear with AI is that it kind of
hyper charges the SEO industry and the way that that
has worked to destroy search and destroy so much of

(21:04):
Internet content.

Speaker 3 (21:06):
Yeah, I think that is very possible. Like if I
look at chat GPT, like, I don't think that's going
to be writing features for Rolling Stone anytime soon, but
what it can probably do, because SEO-maxed copy is derivative, right?
Like, it's predictable, it's derivative, it's based on other stuff.

Speaker 1 (21:22):
It's supposed to be.

Speaker 3 (21:23):
Yeah, and so it can do that SEO-maxed copy
and some of that ad copy, like, very well, and, yeah,
either really fuck up searches, which is quite possible, and
also make the lowest kind of acceptable tier of that
kind of copy what it can generate. And because you
can just shove that copy in front of people with

(21:44):
SEO-maxing and then have shitty ad copy written by
ChatGPT, like, that will change, certainly, how
we buy stuff on the Internet, right, but also how
we read news, et cetera. Yeah, absolutely, and I already
see that. Like, I've written for some big publications.
You have, like, essentially, an aside: do people know what

(22:04):
content-driven commerce is? Oh yeah, yeah, yeah. It's
why every article about stuff is now The Best Five
X, right?

Speaker 1 (22:12):
Yeah, Like they have affiliate links and the publication will
profit if you buy stuff after clicking the link.

Speaker 3 (22:18):
Yeah, yeah. So, like, in probably the twenty sixteen era,
all of this stuff. So I previously did a lot of
outdoor journalism, right, writing about climbing, gear, bikes, that
kind of thing, and, like, that whole industry went to
just affiliate links, and they kind
of trashed any quality review stuff. And I can see

(22:41):
like a similar change to that happening with this, right,
where people will just chase that SEO-maxed copy
and that will become the new cool thing to do
at, like, a lot of outlets as a result. But
that's not the, like, earth-shattering change that people are
talking about on Twitter dot com or whatever.

Speaker 2 (23:10):
Well, one thing I saw recently is that more and
more students are just using ChatGPT to look up
information, like, as opposed to Wikipedia,
or as opposed to Google, if they
have a question. They'll ask ChatGPT, which has a
few problems as soon as you start getting into how
much of the ChatGPT output is just AI hallucinations,

(23:31):
where it's not actual information. Which is, honestly, something
I should just write my own thing on in
the future. But yeah, it's just a really weird problem.

Speaker 1 (23:40):
That's really interesting, that problem. Because I
think it's very clear to me at this point
that AI is a more user-friendly search experience than
a search engine, right, because you can talk to it
like a person and explain what you need explained. That
doesn't mean it's a better option in terms of whether it
provides people with information more effectively, whether it

(24:02):
actually tells them what they want to know as well.
But it's, like, easier, and maybe, like, less kind of
an imposing task, to ask an AI a question than
it is to ask, like, a search engine. Especially
given how much worse Google has gotten lately. Like,
one of the things that I found interesting is I

(24:23):
was kind of doing digging for this. I was looking
at some AI articles that were published in, like, twenty
nineteen, twenty twenty, twenty twenty-one. This is before the
big, you know, AI push that we're currently all
in the middle of, before ChatGPT, you know, got
its widespread release. And they were talking with, like,
some people from Google who were like, yeah, we really
see AI, like, supercharging our search results. You know, there's

(24:45):
a lot of potential and like its ability to help
people with search. And I'm thinking, back in twenty nineteen,
twenty twenty, Google was a really useful tool, and it's
a shit show now. Like, it's filled with ads, like,
search results have gotten markedly worse. Everyone who uses Google
as part of their job will tell you that it's
gotten, like, significantly worse in the recent past. And like,

(25:10):
that's kind of the thing that I see
being more of a worry. And it's one of those things
where, on one hand, in the hype machine, you have,
like, AI could become, like, our new god-king and
destroy us all, and on the other, like, AI is going

(25:30):
to, like, you know, create all this. There's all this vague
talk about what it could be, giving people the tools
to create more art than ever before, to, you know,
make more good things faster. And I kind of feel like, well,
what if neither of those things happens,
and it just sort of allows us to continue making the

(25:52):
Internet worse for everybody at a more rapid pace. What
if that's the primary thing that we notice about AI
as consumers.

Speaker 3 (26:02):
It's probably a reasonable assumption. I think Garrison's point was
good, though, when they said that, like, bigger companies will buy,
like, companies will just exist to get bought, right, which
is the thing that's happened in tech for decades. Because,
like, it can't fundamentally change things. Like, if AI is
another means of production, right, if we want to be,
like, grossly materialist, if AI is another means of production,
a tool for making things, if the same people own

(26:24):
it and benefit from it, then, like, it's incapable of
fundamentally changing our material conditions, right? It just becomes another way
for them to churn out shit and say, like,
this is fine, this is what you'll get, you know,
like, churn out shit content on the Internet or whatever
it might be.

Speaker 1 (26:40):
And likewise, if AI gets
caught in this kind of SEO loop, where it exists
primarily to help advertise and sell products, whether it's as
a search engine or generating mass content, you know, for,
like, an Internet that's sort of optimized to appear higher
in search results, and it's also being trained on that.

(27:03):
Is there a point at which it kind of starts
to lobotomize itself, where it's just recycling shit other AIs
have written? Which also seems kind of inevitable with that.
This is one of those things. So one of the
more famous moments in, like, recent AI research is this
Google researcher Timnit Gebru, who no longer works at Google,

(27:23):
and some other very smart people put together a paper
that was, I think, generally regarded by AI
folks as kind of middle of the road, but it
developed the term stochastic parrot, which is
what people know it for, as sort of trying to
describe what these quote-unquote AIs do in a way
that's better than "AI" does. Because, like, part of what
it was saying is that we have to look

(27:45):
at this as kind of like a parrot: if
you say enough words around it, including enough,
like, racial slurs, it'll start repeating a bunch of toxic shit.
It doesn't know what it's doing. It doesn't have intention.
It's just kind of repeating this stuff because that's
what's been fed into it. But one of the things
they point out in that paper is that when
you have one of these
LLMs trained on too large of a model, it becomes,

(28:08):
number one, kind of impossible to avoid that toxic stuff,
but it also reduces the utility of the AI
in a lot of ways, because when you have
so much data going in, it's very difficult for the
humans to kind of tell how competent it is. This
is why stuff like ChatGPT involves so much human training,

(28:31):
why they had hundreds of people spending tens of thousands
of man hours like going through responses to tell if
they made sense. Because it's one
thing if you're, for example,
training an AI on a bunch of different, like, medical
data to try to determine patterns in, like, antibiotic research, right,
which is a thing that LLMs have been

(28:53):
shown to have some early utility in, like,
kind of helping to identify new paths for
antibiotic research, because we've got a lot of data,
but it's also a really focused kind of data.

Speaker 3 (29:06):
Right.

Speaker 1 (29:06):
We're not, like, training these things on, like, all of,
you know, Wikipedia and, you know, thousands and thousands and
thousands of fan fiction stories about Kirk and Mulder fucking
each other during some sort of, like, X-Files Star
Trek crossover. We're using a fairly focused data set to
try and analyze it in a manner more efficiently than

(29:29):
people are simply capable of. That's a lot more useful
in terms of getting good data than, you know, just
training it on half a trillion different things out there,
a lot of which are going to be lies. But anyway,
I found that interesting. It's kind of worth noting that,
like, Gebru and a number of other people who were

(29:52):
responsible for that got forced out by Google and kind
of attacked by the industry. Because I think there's a desperation,
and I talked about this in that episode I did
last year kind of about the fundamental emptiness at the
core of the modern tech industry. But I think there's
this desperation, that we have to find the new thing,

(30:15):
the thing that's going to be as big as social
media was, the thing that's going to deliver the kind
of stock market returns that social media did and that
doesn't exist yet. And AI is the candidate, especially after several
years of disasters with crypto and diminishing returns in social
media, and honestly diminishing returns in, like, traditional tech, because

(30:36):
shit like smartphones have reached kind of a point of saturation. Right.
You can obviously make money selling smartphones,
but you can't show exponential growth, right?
There's just not that many people who need new ones. Yeah, anyway,
I feel some desperation here. I wanted
to kind of close by reading you all something. I found

(30:58):
a very funny article in the Financial Times that was
about the potential that the head of Europe's biggest media group, Bertelsmann,
sees for generative AI, and yeah, it interviewed a couple
of people, including a guy, Thomas Rabe, who is
the chief executive of the German business that owns Penguin

(31:20):
Random House. And one of the things that he says
in this is basically, like, I think this is, you know,
going to be super great for authors. You know, there's
a potential for copyright infringement problems, but really like it
would allow you to feed your own work into an
AI and then produce much more content than you were

(31:40):
ever able to put out before. Like, if it's your content,
which you own the copyright to, and then you use it to train
the software, you can in theory generate content like never before,
which I think is, yeah, fundamental. Like, you know,
I don't actually even think it's going to be possible
to, like, train them on airport novels. You've got like
James Patterson and other guys who don't

(32:04):
write their own books anymore. They have like a team
of ghostwriters. But like having gone through a lot of
AI-written stories, they're not books. Like, they're not capable of
writing books. They're capable of like producing text and producing
pieces of books that human beings can edit laboriously into
something that might look like a book. But the use
in that is not like filling up airports with kind

(32:27):
of mid-grade fiction, because I think that's even beyond
these models. It's like tricking people on Amazon. There was
a really funny quote in this article though, where at
the end of it, Rabe is like, I asked chat
GPT what the impact of chat GPT or generative AI
is on publishing. It prepared a phenomenal text. Frankly, it was

(32:48):
very detailed and to the point, which he then presented at
a staff event. So there is kind of evidence that
CEO jobs could be pretty easily replaced by this.

Speaker 2 (32:58):
Like you don't actually have to do anything, comrade chat GPT.

Speaker 3 (33:03):
We agree, it's just a spinning jenny for bosses. I love it.

Speaker 1 (33:08):
Yeah, anyway, that's that's what I've got right now. We've
been doing some research and we'll have
an article out on one of the more unsettling little
side industries that I think AI is going to create,
which is like scam children's books that exist to make
con men on the Internet money and poison the minds of

(33:29):
little kids. But we'll get that to you next week. Yeah,
it felt like it was worth coming back to this
subject because it, I don't know, it's the most apocalyptic
thing people in the media are talking about on a
day in which, like, the entire Northeast is blanketed in
poison smoke, which seems bad.

Speaker 3 (33:48):
Well, people are talking about that now because they all
live in New York. I'm a fuck out, But yeah,
previous to this, Yeah.

Speaker 1 (33:55):
Anyway, It Could Happen Here is a production of Cool Zone Media.

Speaker 2 (34:05):
For more podcasts from Cool Zone Media, visit our website
coolzonemedia dot com or check us out on the iHeartRadio app,
Apple Podcasts, or wherever you listen to podcasts. You can
find sources for It Could Happen Here, updated monthly at
coolzonemedia dot com slash sources.

Speaker 1 (34:19):
Thanks for listening.
