Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media.
Speaker 2 (00:06):
Hi everyone. Before we get to the episode, I just
wanted to lead in and say we are up for
a Webby. I'll be including a link. I know it's
a pain in the ass to register for something. I'm sorry.
I really want to win this. Never won an award
in my life. It will be in the links. And
while you're there and registered, look up the wonderful Weird
Little Guys with Miss Molly Conger. Vote for both of us.
I'm in Best Business Podcast Episode. She's in
Best Crime Podcast Episode. We can win this,
(00:29):
we can defeat the others. And now for the episode.
Every day I am punished and killed, and you love
to watch. Welcome to Better Offline. We're live from New
York City, recorded straight to tape, of course, and I'm
(00:50):
joined by an incredible cast of people. To my right,
I have Paris Martineau, now of The Information. Hey, Paris,
what's up? What is up? Edward Ongweso of the
Tech Bubble newsletter. Hello, hello. And Allison Morrow
of the CNN Nightcap newsletter. Hi. And Allison, you wrote
one of my favorite bits of media criticism I've ever
read recently. Do you want to actually walk us through
(01:10):
that piece, because I think I will link it in
the notes so everyone can read it.
Speaker 3 (01:14):
I'd be happy to. I wrote a piece. I think
the headline we ended up with was, like, Apple's AI
is not the disappointment. AI is the disappointment. Yeah. And
this was inspired by, credit where it's due, I
was listening to Hard Fork with Kevin Roose. My
husband and I were driving out to the country and
listening to this and just getting infuriated. Yeah. And basically
(01:38):
their premise was, or at least Kevin Roose's premise was,
that AI is failing, or, sorry, that Apple is failing
this moment in AI, right? And Apple has been trying.
It's been, like, the laggard. You know, that's a narrative
we've heard in tech media over and over, and it's
like, Kevin Roose's point was like, oh, well, Apple
(01:58):
should just start getting more comfortable with experimenting and making
mistakes and, you know, violating everything that the Apple brand kind
of stands for, and, like, force the AI into a
consumer product that no one wants. And I was like, respectfully, no.
Speaker 1 (02:15):
So, such a funny argument, given that it was a
mistake made by Apple that resulted in the whole Houthi
PC small group situation. That was specifically how the
editor in chief of The Atlantic ended up in a
secret military Signal chat.
Speaker 2 (02:34):
Wait I missed what? How?
Speaker 1 (02:36):
How offline are you? Oh, gosh, I should leave.
Speaker 2 (02:46):
I've been reading scrolls.
Speaker 1 (02:48):
So basically, The Atlantic came out a couple of weeks
ago with an article about how their editor in chief
one day was suddenly added to a...
Speaker 2 (02:57):
Signal group chat. Signalgate. But how did this happen?
Speaker 1 (03:01):
So the Apple thing was, I'm forgetting who exactly reported this,
this was in the last couple of days, but how
it happened was, like, you know that thing
that comes up on your iPhone where it says, like, oh,
a new phone number has been found? It was a
suggested contact. And it happened because someone, I guess in
the government, had copied and pasted an email containing the
(03:26):
editor in chief of The Atlantic's contact information in a
message to, uh, I'm forgetting whichever government official, yeah, one
of the guys. And so he ended up combining the
Atlantic EIC's information into a contact for, uh, some
government dude. And that's how they ended up in, because
then Signal, when you connect it to your contacts... So I
(03:49):
mean, that makes me even crazier about the Hard
Fork take, because it's like, you can't mess around with
something like your phone.
Speaker 3 (03:56):
Well, in this particular instance, I take it all back. Apple
AI is amazing. It gave us one of the
best journalism stories
Speaker 4 (04:03):
Of the year.
Speaker 2 (04:04):
You also made a really good point in here, and
I messaged you this on the way, and you said,
you made this point, that there's a popular adage in
certain political circles that the party can never fail,
it can only be failed. It is meant as a
critique of the ideological gatekeepers who maybe, for example,
blame voters for their party's failings rather than the party itself.
The same fallacy is taking root among AI's biggest
backers: AI can never fail, it can only be failed.
(04:25):
And I love this, because you get people like
Kevin Roose. And there was a wonderful clip on the
New York Times TikTok of Kevin Roose seeming genuinely pissy.
He's like, I can't believe people are mad at AI
because of Siri. And it's like, oh, what, they think
it's shitty because it's shit. Like, they talk about AI
like it's their child. Him and Casey act as if
(04:47):
we've hurt ChatGPT, sorry, Claude, their Anthropic boys. And
it's worth saying that Casey's boyfriend works at Anthropic. I know he
discloses it. It's fucking... anyway. It's just so weird,
because it's like, we have to, like, apologize for not
liking AI enough. And now you have the CEO of
Shopify saying, actually, you have to use it. Did you hear
about this?
Speaker 5 (05:07):
Yeah. He said, what, that you have to prove your
job can't be replaced by AI?
Speaker 2 (05:11):
Yeah, or else it will be.
Speaker 1 (05:13):
And he also said that now it's going to be
Shopify policy to include in all of the employee performance reviews,
both for your self assessment and for your, like, direct
reports' and colleagues' assessments, how much this person uses AI.
And obviously what's going on there is if you are
not reporting that you use AI all the time for everything,
you could get fired.
Speaker 5 (05:33):
Didn't they just try to overhaul hiring processes so that
they could be AI first or AI only, and then
roll it back because they realized you can't replace
all of these jobs?
Speaker 1 (05:45):
I mean, this is something that's brought up on
the show all the time. But who are these people
that are encountering the AI assistant suddenly plugged into every
app and being like, yeah, this is actually beneficial to
my life and this works really well? Because it sucks
every time I use it.
Speaker 2 (06:00):
And you made the point in your article, Allison, where it's like,
if it was one hundred percent accurate, it would be
really useful. If it's even ninety eight percent accurate, it's
not. Right?
Speaker 3 (06:09):
I think that was the point that you know, to
his credit, Casey Newton made in the episode, which is
that AI is fundamentally an academic project right now. And
it's like, yeah, you can have all the kinds of
debates about its utility, but ultimately is it a consumer product?
And no, it's just like it's failing as a consumer
product on all fronts.
Speaker 2 (06:28):
And what's crazy as well is, I'm surprised he would
say that considering everything else he's ever said, because
he quite literally has had multiple articles recently being like,
consumer adoption is up. And he had an article the
other day where it was like, data provided exclusively by
Anthropic shows that more people are using AI. It's like, man,
it's twenty thirteen again. We're past this. You can't just
(06:50):
do this anymore, unless you're... you. And so, going back
to Mister Lütke of, Mister Lütke of Shopify,
I just want to read my favorite part of it. It
says, I use it all the time, but even I
feel I'm only scratching the surface. Dot dot dot. You've
heard me talk about AI in weekly videos, podcasts, town halls, and
Summit. Last summer I used agents to create my talk
(07:11):
and presented about that. So all this fucking piss
and vinegar and the only thing you can use it
for is to write a slop ridden presentation to everyone
about how good AI is without specifying what it does.
I feel like I'm going insane sometimes with this stuff.
Speaker 5 (07:25):
I mean, in one way, that's great, right. The only
place you should encounter it is maybe the team building retreats,
you know, that's the utility of this shit.
Speaker 3 (07:32):
This reminds me a lot of, like, media in twenty twelve,
twenty thirteen, where it was all pivot to video, and
what's our video, and our vertical video strategy. And it's like, okay, now,
what's our AI strategy? How are we injecting AI into
everything we're doing? And it's like, well, to what end? Yeah,
this is just the point.
Speaker 5 (07:50):
This is something that has been driving me mad, especially
with partnerships we're seeing between media firms and these AI firms.
You know, these are, these are firms in the same
sector that keep lying to publishers about how, if you
integrate artificial intelligence, this time it will optimize your ability
to find an audience or to get revenue, and we
(08:12):
can include you in some esoteric revenue share program where
we'll be able to claw back some of the eyeballs
and the attention that you're interested in seeking. But each
time it's actually just used to graft themselves onto services
or to try to gin up excitement about these products.
Speaker 1 (08:28):
Right.
Speaker 2 (08:28):
What's insane is this company has a multi billion dollar
market cap and I'm just going to read point two.
AI must be part of your GSD prototype phase. The
prototype phase of any GSD project should be dominated by
AI exploration. Prototypes are meant for learning and creating information.
AI dramatically accelerates this process. How? Fucking how? Like, that's
the thing. I have clients at my PR firm that
(08:50):
will occasionally bring me AI things, and every time,
I'm just like, this better fucking work. Like, just, every...
and to their credit, they do. But it's like, I
have clients I turn down all the time. They're like, yeah,
we're doing this, and I'm like, is this just a chatbot?
And they're like, no. I'm like, can you show me
how it works? No. I'm like, oh, cool. Yeah, I
don't think we're going to be a good fit somehow
because you don't seem to be able to explain what
your product does. But don't worry. This appears to be
(09:12):
a problem all the way up to the multi billion dollar companies as well.
It just feels like the largest mask-off dunce
moment in history, just these people who don't do any
real work being like, it's the future, I think. I
don't do anything real. And the pivot to video thing, I
think is actually a really good comparison because I remember
being in New York at that time being like, I
don't fucking like video. I don't know anyone who... I don't
(09:34):
think I want to consume video in the way that...
and it was like Mic and everyone, and they were like, oh,
we're going to do this video and this, we're
going to do everything video, and now video first, no
written content. It's like, I don't know a single goddamn
human that actually does that. And also the other thing:
Facebook was lying. Facebook was just overclaiming,
like, averaging out the engagement numbers, and everyone was wrong.
(09:58):
but that was the same kind of thing. It's like,
very clearly the people who have their hands on the
steering wheel are looking at their phone, and it's fucking confusing.
But it's so much worse this time. It feels more
egregious somehow.
Speaker 1 (10:09):
Yeah, yeah, because it feels I mean, we've had so
many of these hype cycles, kind of back to back
to back, from even the horizontal video days of Facebook,
to vertical video to whatever the hell the metaverse was
supposed to be. Literally, in an ominous moment, as I
was walking in to record this, I saw a guy wearing
a leather jacket with Bored Ape Yacht Club on the back,
(10:32):
and I was like, god, well, yeah, I was like
that that guy rocks.
Speaker 2 (10:36):
What a cool dude.
Speaker 1 (10:38):
But it's like, how long is this going to last?
Speaker 2 (10:41):
I have been actually looking at the numbers recently, and
I don't know either, because for SoftBank to fund
OpenAI might require them to destroy SoftBank. Like, S&P
is potentially downgrading their credit rating due to it. Yeah, yeah,
I know. Like, we're really at this point where it's
just like, we've gone so much further than,
like, the metaverse and crypto did, because those weren't
(11:01):
really, like, systemic things. But this one, I think, it's
just, the narrative has carried away so far that people
are talking about a thing that doesn't exist all the time.
Speaker 5 (11:10):
I mean, in some elements it kind of reminds me
of, near the end, or near the real peak,
when we started also to see the metaverse and crypto
sustainability, ReFi stuff, where they're actually, you know, we can
fight climate change with crypto, yeah, putting carbon credits on
the blockchain. And so there was a, there
(11:32):
was a moment where the frenzy, the speculative frenzy,
led to, like, world-transformative visions that were bullshit. And
I feel like we are heading there. We're
in that direction with artificial intelligence where you know, consistently
we've been fed oh, this is going to revolutionize everything,
but it feels like the attempt to graft it onto
(11:52):
more and more consumer products, more and more government services,
more and more parts of our daily spheres of
life, as a way to, like, privatize almost everything or
commodify everything. It feels like downstream of the way Crypto's
attempt to put everything on a blockchain blew up.
Speaker 3 (12:11):
Yeah, I was thinking about this in a kind of
like fundamentally cultural way, where I think at some point
in the last thirty years, there was a time when
everything coming out of Silicon Valley was cool. Yeah, whether
it was like useful or world transformative, it was cool,
and there was like an edge to it, and people
were like.
Speaker 2 (12:30):
Ooh, that's neat, it's disruptive.
Speaker 3 (12:32):
Yeah, disruption was everything. And I think post, like, the Facebook
Cambridge Analytica era, like twenty sixteen, tech has just stopped
being cool and edgy. It's very corporate, and like, I
don't think the rest of corporate America has kind of
figured out that Silicon Valley's not the cool thing anymore.
Speaker 2 (12:52):
And that they can lie. They're fully capable of being
wrong and lying. Like, that's the other thing.
Speaker 3 (12:57):
They've gotten very good at fundraising and marketing.
Speaker 2 (13:00):
But they're also not, like, kids anymore. Like, we talk...
I still see people referring to OpenAI as a...
Speaker 5 (13:05):
Startup. Palmer Luckey as a kid.
Speaker 2 (13:07):
Palmer Luckey is a kid who looks like Leisure Suit Larry
and sells arms, which is...
Speaker 3 (13:18):
Crazy. We refer to them as startups. But also, I
think one of the most accomplished parts of AI marketing
has been, like we always refer to them as labs,
so they seem like so academic and like good fundamentally,
and it's like these are companies, Like some of them
might be part of, uh, you know, a research institution
(13:38):
or a university, but a lot of them are startups.
Speaker 2 (13:41):
They're all companies.
Speaker 3 (13:42):
Yeah, they are companies.
Speaker 2 (13:43):
Like, Anthropic's a public benefit corporation, I believe. And it's just remarkable.
And I think what's happened here is that the narrative
has gotten away to the point that, the real dunce
mask-off moment I mentioned is people like Mister Lütke
from Shopify. It's very clear he doesn't do any work.
(14:04):
Like I think that anyone who is just being like, yeah,
AI is the future and it's changing everything without specifying anything,
doesn't do any work. I just don't. Bob Iger from
Disney said AI is gonna change everything. No, it's not. But
how's it changing your fucking life, you lazy bastard? Like,
you're gonna summarize your worthless emails that someone else reads
as you lie on your Scrooge McDuck money. Yeah, and
it's just it's so bizarre. But it feels like we're
(14:26):
approaching this insanity level where you've got people like Shopify
being like, oh, it's gonna be in everything, as like
OpenAI burns more money than anyone's ever burned. Anthropic
lost five point six billion last year, as reported by
The Information, who have done some incredible fucking work on this, I should say.
And it just doesn't make any sense. And it's getting
more nonsensical. You're seeing like all of the crypto guys
(14:48):
have fully become AI guys now. And that was something
I didn't like talking about at first because it wasn't happening,
and now it's all of them. They all have AI avatars.
This guy called Jamie Burke, a real, real shithead.
guy was like a crypto metaverse guy and he's now
a full AI guy. Another guy called Bernard Marr, who
is just a harmless Forbes guy, kind of like
an NPC, like one of the hollows from Dark Souls.
Speaker 1 (15:10):
That Venn diagram is increasingly becoming a circle.
Speaker 2 (15:13):
Yeah, but he's onto quantum now, which is a bad sign.
That's a bearish sign. When you've got one of the
Forbes guys moving on to quantum, we're cooked.
Speaker 5 (15:22):
What about thermo, Yeah, there's thermodynamics.
Speaker 2 (15:29):
Become a thermodynamics influencer. I know what I know what
that means. I also know what that means. But if
anyone could tell me real quick. But it's I think
the most egregious one I've seen. I sent this all
to you, Ed, I think you and I have talked
about this the most. There was one of the stupidest
fucking things I've read in my worthless life, and it's
called AI twenty twenty seven. Now, if you have not
(15:51):
run into this yet as a listener, it will be
in the episode notes. I'm just going to bring
it up because it is.
Speaker 4 (15:58):
It is a little thing.
Speaker 1 (16:00):
Throughout it I was like, is this fan fiction? This
is fan fiction? Oh, this is interactive fan fiction.
Speaker 2 (16:05):
It is, and you can hit the button that says, what
is this? How did we write it? Our research on
key questions, what goals will future AI agents have, can
be found here. The scenario itself is written iteratively. We
wrote the first period up to mid twenty twenty five,
then the following period, et cetera, until we reach the
ending. Yeah, otherwise known as how you write stuff. Like,
you wrote it. Writing, in a linear fashion. We then
scrapped this and did it again. You should have scrapped
(16:26):
it, and all of it. Now, this thing is, it's
predicting that the impact of superhuman AI over
the next decade will be enormous, exceeding that of the
Industrial Revolution. We wrote a scenario that represents our best
guess about what it might look like. Otherwise known as
making stuff up. Not even...
Speaker 1 (16:42):
Over the next decade? It basically says it's going
to have superhuman, like, catastrophic or world-changing impact
in the next five years. Like, by twenty thirty,
we're either going to be completely overtaken by robot
overlords or, like, at a, you know, tenuous peace.
Speaker 2 (17:00):
And it's insane as well, because it has some great
headlines like mid twenty twenty six, China wakes up.
Speaker 3 (17:07):
And then I love that China was so far behind,
you know, it's like when.
Speaker 1 (17:12):
Did this come out?
Speaker 2 (17:13):
This came out like a week ago, and I've been
sent it by a lot of people. If you're one
of the people who sent it, don't worry, I'm not
mad at you. It just got sent to me by a
lot of people. This thing is one of the most
well-written pieces of fan fiction ever, in that it
appears to be like a Manchurian Candidate situation for idiots.
Not saying it's about the same thing, but anyway,
Kevin Roose wrote up the piece in full. He wrote
(17:37):
up a piece about this called, hmm, I'm just gonna
say it, This AI Forecast Predicts Storms Ahead. Some
storms, some storms.
Speaker 1 (17:46):
Not even an accurate description of all of the storms it predicts.
Speaker 2 (17:50):
And the long and short of this, by the way,
I have read this a few times because I fucking
hate myself. The long and short of it is that
a company called OpenBrain. Who could that be? Yeah,
it could be anyone, anyone, OpenBrain. They create a
self-learning agent. Somehow. Unclear how. All they lay out is just,
like, how many teraflops it's going to require, and
(18:12):
it can train itself, and also requires more data centers
than ever. How they get them, how those are funded?
No fucking clue, it isn't explained. Actually, this
is fun, this just occurred to me: probably the
only thing they could actually reasonably extrapolate in here is
the cost of data centers. That's the only thing, and
they don't, probably because they'd be like, yeah, we need
(18:33):
an actual trillion dollars to do this made up thing.
Speaker 5 (18:37):
I do also want to add in here that, you
know, behind the AI twenty twenty seven thing is, you know,
one of the people connected to it, if I remember correctly,
is Scott Alexander, who's this guy that's part of the
rationalist community, which is one of the groups
that overlaps with the effective altruists and accelerationists. Yeah,
you know. So if it feels like it's sci-fi
(19:00):
and fan-fictiony and hype, that's because these are the
same people that are connected to pushing constant
hype cycles over and over and over again.
Speaker 2 (19:08):
And it's written to be worrying. Yes, well it's written.
Speaker 1 (19:11):
It's written to be worrying. But it also, in the
predictions for the next two years, keeps talking about how
the stock market is going to grow exponentially, and how the AI
is going to be making all of these wise
and informed decisions and having really deep conversations with the
leader of OpenBrain. And I was like, are you...?
That's why I asked when this came out. Because
(19:32):
I was like, maybe this was written a couple of years ago.
But no, it is... why? Like, literally.
Speaker 5 (19:36):
It's like a Bene Gesserit kind of plan: one's gonna be born,
he's gonna lead us to the promised land.
Speaker 2 (19:44):
It's so good as well, because the people who sent
this to me have been very concerned, just because they're like,
this sounds scary, And I really want to be clear.
If you read something like this and you're like, that
doesn't make sense to me, it probably doesn't make sense
to anyone, because it's nonsense. This thing... let me
read you one cut from it, the AI R&D
progress multiplier. What do we mean by fifty percent
(20:04):
faster algorithmic progress? We mean that OpenBrain makes as
much AI research progress in one week with AI as
they would in one point five weeks without. Who fucking cares, man,
what are you talking about? If a frog had wings,
it could fly. Like... and what's crazy is, and
I know I bag on Kevin Roose, it's because he's
a nakedly captured part of the tech industry. Now, I
(20:25):
am in public relations, and I'm somehow less frothy about
this. That should tell you fucking everything. It is insane that
the New York Times, at a time when you have
SoftBank being potentially downgraded by S&P, you have
OpenAI raising more money than they've ever raised, forty
billion dollars, except they only received ten billion, and they'll
only get the rest by the end of the
year if they convert to a for-profit, which they
(20:46):
can't do. No, no, no, no. Kevin Roose can't possibly cover
that. He needs to go and take a solemn-looking fucking
photo of some AI guy. I can't even... let me get my phone.
Speaker 1 (20:56):
I do really love all of the incredibly, So.
Speaker 2 (21:00):
This guy is just I'll put the link in there
for this.
Speaker 1 (21:02):
He got seventy five pockets on his truck.
Speaker 2 (21:04):
Yeah, my man is read.
Speaker 1 (21:06):
Oh that's that's where it is.
Speaker 2 (21:08):
And it's just him, like, this guy's sitting with his
hands clasped, staring mournfully into the distance. This is
what you're spending your time on, Kev. And I'm just
going to read some Kevin Roose: the AI prediction world
wavers in tone between optimism and gloom. A report released on
Thursday decidedly lands on the side of gloom. That's Kevin
Roose's voice. But my favorite part of this by far,
(21:29):
I'm gonna take a second to get it, because Ed
I sent this to you as well. Where is it?
Speaker 5 (21:36):
So?
Speaker 2 (21:36):
Also, a lot of this is... oh, here we go.
If all of this sounds fantastical, well, it is. Nothing
remotely like what Mister Kokotajlo and Mister
Lifland are predicting is possible with today's AI tools, which
can barely order a burrito on DoorDash without getting stuck.
Thank you, Kevin. I'm so fucking glad the New York
Times is on this.
Speaker 3 (21:54):
And that was at the end, yeah? Right? Like, you
set up this whole article. And it's like, these guys
have these doom predictions. And that's the other thing about,
like, the altruistic AI guys: they all have told themselves
this story and they all believe it, and they think
they are like the Prometheus bringing fire to the people
(22:14):
and like warning the people.
Speaker 1 (22:15):
And it's like, you.
Speaker 3 (22:16):
Guys have sold yourself a story I with no proof.
Speaker 2 (22:20):
I don't know. I feel like they just scammed their way
into that. Nothing about this suggests they believe in anything.
You can just say stuff.
Speaker 1 (22:26):
Look, literally the second sentence in this is that
in two months there will be personal assistants that you
can prompt with tasks like, order me a burrito
on DoorDash. It'll do great stuff. There are so many
things that go into ordering me a burrito on DoorDash.
What restaurant do I want? What burrito do I want?
How do I want it to get to me? Where
(22:46):
am I? It can't do any of those things, nor
will it.
Speaker 2 (22:49):
He gazed out the window and admitted that he wasn't sure.
If the next few years went well and we kept
AI under control, he said, referring to one of
the writers of the piece, he could envision a future
where people's lives were still largely the same, but where
nearby special economic zones filled with hyper-efficient robot factories
would churn out everything we needed. And if the next
few years didn't go well? Maybe the sky would be
filled with pollution and the people would be dead, he said nonchalantly.
Something like that.
Speaker 5 (23:11):
You know, one of the things I really, really love
about... I don't know, it's just, it's so frustrating, because
we're constantly fed these, you know, sci-fi, esoteric
futures about how powerful AI, superhuman AI, is around
the corner, and we need to figure out a way
(23:32):
to accommodate these sorts of futures. And part
of that accommodation means restructuring the regulations we have around it.
Part of that accommodation means entertaining experiments, grafting them onto
our cultural production, grafting them onto consumer goods. Part of
that means just, like, you know, taking it
on the chin and figuring out how to use ChatGPT.
(23:52):
But all of this just more or less sounds
like, the marketing is failing on you,
and you need to step up.
Speaker 2 (23:59):
Yeah. It's, you need to believe. You need to
believe it.
Speaker 5 (24:03):
You need to do your part, you know, to summon God.
Speaker 2 (24:05):
And that's the thing. It goes back to what you're saying.
It's like you've failed AI by not believing.
Speaker 3 (24:10):
Yeah, and if you're bad at it, it's your fault
and not the machine's fault. And to Ed's point, I
think, like, all of this, like, predicting of the future
and this, like, revolution, it's like they have told themselves
a story that this is inevitable and that there
are no choices that the human beings in the room
get to make about how this happens. And it's like, actually, no,
(24:33):
we can make choices about how we want our future
to play out, and it's not going to be just
Silicon Valley shoving it down our throat.
Speaker 2 (24:39):
And on the subject of human choice, if this shit
is so powerful, why have the mighty human choices not
made it useful yet? Like, that's the thing. And
you make this point in your piece as well. It's
like, AI can never fail. It can only be
failed, failed by you and me, the smooth-brained Luddites
who just don't get it. And it's like, why do
I have to prove myself?
Speaker 5 (24:57):
And listen, you know, the Luddites, they had more going
on in their brains than they're given credit for. So I
think it's worth embracing the label a little bit, you know.
And look, yeah, I feel like Rob Horning wrote this
(25:18):
newsletter a few weeks ago where I think he was
honing in on this point: that LLMs and these generative
AI chatbots and the tools that come out of
them are in some ways a distraction, because a lot
of these firms are pivoting towards, how do we, you know,
create all these products, but also how do we figure
out, you know, government products that we can provide, right?
(25:39):
How do we get into defense contracting, how do we
get into arming, or integrating AI into arms. And,
increasingly, it feels like, you know, yeah, your AI
agent's not gonna be able to
order your burrito. But these firms, you know,
at the same time that they're insisting superhuman intelligence is around
the corner and we're going to be able to make
your individual lives better, are also spending a lot of time
(26:01):
and energy on use cases that are actually dangerous, right,
and it should actually be concerning: generating kill lists, right,
or facial recognition and surveillance.
Speaker 2 (26:11):
Which has already been around us, and isn't generative.
Speaker 5 (26:14):
Yeah, and it isn't generative. But the firms that
are offering these generative products, the stuff
that they're actually putting their time and energy
into is, you know, these sort of demonstrably destructive tools,
under the guise, in the kind of murky covering, of,
it's all, you know, artificial intelligence, right, it's all inevitable,
(26:34):
it's all coming down the same pipeline you should accept it.
Speaker 2 (26:36):
Yeah, and I think the thing is as well,
it's, those guys really think that's the next big money maker.
I don't think anyone's making any money off of this.
No one wants to talk about the money, because they're
not making any. Like, no one. Look, I've read,
I've read the earnings calls. I'm not going to list
every single company that's selling an AI service. At
this point, I can't find a single one that wants
(26:58):
to commit to a number other than Microsoft, and they'll
only talk annualized, which is my favorite one, ARR.
But the thing is, ARR traditionally would mean an aggregate
rather than just twelve times the last biggest month, which
is what they're doing.
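(To make the two readings of ARR being contrasted here concrete, a minimal sketch with hypothetical monthly revenue figures, invented purely for illustration:)

```python
# Hypothetical monthly revenues, in millions of dollars, for one year
# of an AI service. These numbers are made up for illustration only.
monthly = [10, 11, 12, 12, 13, 14, 15, 15, 16, 17, 18, 25]

# Aggregate reading: sum the actual trailing-twelve-month revenue.
trailing_twelve_months = sum(monthly)        # 178 ($M)

# The reading described above: annualize the single biggest month.
annualized_best_month = 12 * max(monthly)    # 300 ($M)

print(trailing_twelve_months, annualized_best_month)
```

On these made-up numbers, annualizing the best month overstates the actual twelve-month aggregate by roughly seventy percent, which is the gap being objected to.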
Speaker 1 (27:13):
No, that's the classics.
Speaker 2 (27:15):
I refuse to let clients do that. I'd march their asses
to the ground over that one, because it's like, you can't just fucking make
up a number. Unless you're in AI, then you absolutely can.
It's just frustrating, because the reason I bag on
Newton and Roose, rather than all the others I've listed, is
I feel like, in their position, and in the
position of anyone with any major voice in the media,
(27:36):
skepticism isn't, like, something you should only sometimes bring in. It's not.
You don't have to be a grizzled hater like myself,
but you can be like, hey, even if this did work,
which it doesn't, how does this possibly last another year?
And the reaction is no, actually, it's perfect now and
will only be more perfect in the future. And I
still get emails from people because I said once on
(27:57):
an episode, if you have a use for AI, please
email me. Regret of mine. Every time I get an
email like that, it's like, oh, it's very simple: I've
spent seven or eight hours' worth of work
to make one prompt work, and sometimes I get something really
useful. It saves me, like, ten minutes. And you're like, great,
and what for? It's like, oh, just some productivity things.
What productivity things? They stop responding. And it's just, I
(28:22):
really am shocked we got this far. I'm gonna be
honest at this point, I'm I will never be tired
because my soul burns forever. But it's exhausting watching this
happen, and watching how it's getting crazier. I thought,
as things got worse, people would be like, well, see,
and then step up. But it's like watching the Times
(28:42):
and some parts of the Journal still feed this, though
the Journal also has some incredible critical work, that said. It's
so bizarre. The whole thing is just so bizarre, and
it's been so bizarre to watch in the tech media.
Speaker 1 (28:53):
I mean, I think part of it is also just
because investors have poured a lot of money into this,
and so of course they are going to want to
back what they have spent hundreds of millions or billions
of dollars on. And much of the tech media involves
reporting on what those investors are doing, thinking and saying,
and whether or not what those people are saying or
(29:15):
doing is based in reality, and it's often not.
Speaker 3 (29:19):
Huh, yeah. I say this as not a member of the
tech media; I have, like, kind of a general
assignment, business, markets, econ... that's kind of my jam. And
when, like, AI first started becoming the buzzword,
like, ChatGPT had just come out, I was like, oh,
this sounds interesting. So I was paying attention like a
lot of journalists were, and you know, like we've hit limitations,
(29:41):
and I think part of the reason it's gotten so
far is because the narrative is so compelling. Curing cancer,
we're gonna, we're gonna end... It's my favorite one, not
my favorite one, the silliest one, is like, we're gonna, we're
gonna end hunger.
Speaker 2 (29:57):
Nice, okay?
Speaker 3 (29:58):
How? How? Also, the problem of hunger in the world
is not that we don't grow enough food. It is
a distribution problem. It is a sociological, it is a
complicated problem. What actually is AI going to do? Also,
you're going to need human beings to distribute it.
(30:18):
It's just, like, you push the problem one step back.
Speaker 1 (30:21):
If you read the twenty twenty seven AI thing, it
explains that the AI is going to give the government
officials such good advice that they'll actually be really nice
and caring in their deportment.
Speaker 2 (30:30):
And what's crazy is, here's the thing, and I'm glad
you brought that up: one thing I've learned about politics, particularly
recently, but in historic terms too, is when the government gets
good advice, they take it every time, every time. Every
time they're like, this is economically good, like Medicare for All,
which we've of course had forever. And never. And came
(30:50):
close to numerous times decades ago, versus now, when we
have him. And I think the other funny thing as
well, with what you were saying, Allison, is like, yeah,
it's going to cure cancer. Okay, can it do that?
Speaker 4 (31:02):
Nah?
Speaker 2 (31:02):
Okay, it's going to cull hunga Can it do that? No? Okay,
it's easy. Then perhaps it could make me an appointment. Also, No,
can you buy something with it?
Speaker 5 (31:13):
No?
Speaker 2 (31:13):
Can it take this spreadsheet and move stuff around?
Speaker 5 (31:17):
Maybe it can write a robotic sounding script for you
to make the appointment yourself.
Speaker 3 (31:22):
Wow, you know, I mean I would even say that
I could give the benefit of the doubt to researchers
who are really working on the scientific aspects of this.
Like I'm not a scientist. I don't know how to
cure cancer, but if you're working with an AI model
that can do it, like God bless. But businesses actually
do take money making advice and money making technology when
(31:46):
it's available. And I think about this all the time.
With crypto, which is another area I cover a lot,
it's like, if it were the miracle technology that everyone,
or its proponents, have said it is, businesses would not
hesitate to upgrade their infrastructure to make more money. And,
like, no one's doing it. And it's like, oh, well, they
(32:07):
just haven't. They haven't figured out how to optimize it yet.
It's like that sounds like a failure of the product
and not a failure of people using it. So I
get back to the whole like AI cannot fail, it
can only be failed.
Speaker 2 (32:19):
And it's the same with.
Speaker 3 (32:20):
Crypto and a lot of other tech where it's just like,
this is not a product that people are hankering for.
Speaker 1 (32:26):
And I think part of the notable thing is when
we do see examples of large businesses being like, oh yeah,
we're gonna change everything about our business and integrate AI.
We're going to be an AI first company. The products
that end up coming out of that are there's an
AI chat bot in my one medical app now cool
that does nothing for me. When I'm trying to search
(32:46):
the Amazon comments on a product, suddenly the search box
is replaced with an AI chatbot that's not doing
even one tenth of what the old search did.
Speaker 2 (32:57):
The same product, every fucking time. It's...
Speaker 1 (32:59):
It's just a chatbot that isn't super helpful.
Speaker 2 (33:02):
And it's great. I remember back in twenty fifteen, twenty sixteen,
I had an AI chatbot company as a client. They took large repositories
of data and turned them into a chatbot you'd use.
I remember pitching reporters at the time and them being like,
who fucking cares, who gives a shit, this will never
be... and a decade later, everyone's like, this is literally God.
I cannot wait to go to the office of a
guy who wrote fan fiction about this and talk to
(33:25):
him about how scared I am. Now I can't wait
for AGI. And I've also said this before, but what
if we make AGI, none of them are going to,
it doesn't exist, but what if it didn't want to
do any work? That's the other thing. Like, they're not,
they don't... Casey Kagawa, friend of the show, made
this point, made it to me, which is,
they talk about AGI, Roose did this as well, like,
AGI this, AGI that, and they don't want to define it,
(33:46):
because if you have to start defining AGI, you have
to start talking about things like personhood. Like, is
this a citizen? Can this thing feel pain?
Because surely a consciousness could feel pain. Oh, you could
take pain out? Is that even a real consciousness? None of
them. And hey, how many is one? Is it
one unit? Is it a virtual machine? Like,
there are real tangible things, and you know they don't
(34:08):
want to talk about that shit, because you even start
answering one of those and you go, oh right, we're
not even slightly close, are we? We don't even know
how the fuck to do a single one of these things, ever.
And honestly, the person I feel bad for, and this
is not a joke, is Blake Lemoine, I think his name
was, from Google. If he'd come out like three years
later and said that he thought the computer... this is
the guy from Google who's...
Speaker 1 (34:30):
The guy who was like, the chatbot is real and
I love it.
Speaker 2 (34:34):
Yeah, had that come out three years later, he'd be
called Kevin Roose, because that's exactly what Kevin Roose wrote
about Bing AI, Sydney. It's like, Bing AI told me
to leave my wife. And Kevin, if you ever fucking
hear this, man, you're worried about me dogging you? I'm
gonna keep going. Do your fucking job, mate. Anyways, it's
just insane because I am a gadget gizmo guy. I
(34:55):
love my doodads, I love my shit, I really do.
If this was gonna do something fun, I'd have done it.
I've really spent time trying, and I've talked to people
like Simon Willison, Max Woolf, two people who are big
LLM heads, who tend to disagree with me on
numerous things. But their reaction, and I'm not
going to speak exactly for them, is basically, it actually
does this, you should look at this thing. It is
(35:15):
not, this is literally God. But it all just feels
unsustainable economically. But also, I feel like the media is
in danger when this falls apart too, because the regular
people I talked to about chat GPT, I pretty much
hear two use cases. One Google Search isn't working and
two I need someone to talk to, which is a
(35:36):
worrying thing. And I think, by the way, that use
case is just that's a societal thing. That's a sign
the lack of community, lack of friendship, but lack of
access to mental health services and also could lead to
some terrible outcomes. But for the most part, I don't
know why I said for the most part, I've yet
to meet someone who uses this every day, and I
(35:57):
to meet someone who uses this every day, and I
have yet to meet someone who really cares about it.
Because, like, if this went tomorrow... like, if
I didn't have my little Anker battery packs, I'd scream,
if I couldn't have permanent power everywhere. Like, if I
couldn't listen to music all day, I might be
real sad. If I couldn't access ChatGPT? I would not.
Speaker 5 (36:13):
Who cares? Because you haven't tried Claude yet.
Speaker 2 (36:15):
I've tried. I've tried Claude so much, And it's just
I don't know. I feel like people's response to the
media is going to be negative too, because there's so
many people that boosted it. There was a Verge story,
there was a study that came out today, I'll link
it as well in the notes, where the
study found that most people do not trust AI. Like,
regular people do not trust AI, but they also don't
(36:37):
trust the people that run it and they don't like it.
And I feel like this is a thing that the
media is going to face at some point. And Roose,
this time, baby, you got away with the crypto thing,
you're not this dumb. I'm going to be hitting you
with this shit every day. But it's just,
I don't think members of the media realize the backlash
is coming, and when it comes, it truly,
(36:57):
it is going to lead to an era of cynicism,
true cynicism, in society, that's already growing about tech, but
specifically I think there will be a negative backlash against
the tech media, and now would be a great time
to unwind this, versus tripling down on the fan fiction.
And I have been meaning to read this out.
My favorite part of this by far, I'd say, and
(37:19):
of course, flawlessly, they have this ready: Why our uncertainty
increases substantially beyond twenty twenty six. Our forecast from the
current day through twenty twenty six is substantially more grounded
than what follows. Thanks, motherfucker, awesome. That's partially because
it's nearer, but it's also because the effects of AI
on the world really start to compound in twenty twenty seven.
What do you mean? They don't. That's you. You're claiming that.
(37:42):
And I just, I also think that there's this great
societal problem: we have too many people who believe
the last smart person they listened to. And I say
that as a podcast runner. Like, the last investor
they talked to, the last expert they talked to, someone
from a lab.
Speaker 3 (37:56):
Yes, yes. Well, I think that gets to, if
you just push the proponents... And this is, like, I've
come into AI skepticism as a true, like, I'm
interested in this. I'm interested in what you're pitching to
the world.
CEOs of AI firms get interviewed about this all the time,
(38:18):
and they talk about this future where everyone just has
a life of leisure and where you're lying around writing
poetry and touching grass and like everything's great. No one
has to do hard labor anymore. They have that vision,
or they have, like, the, you know, p(doom) of
seventy five, and, like, everything is going to be terrible.
But no one has a really good concept. And
(38:39):
that's why this is so funny. The fan fiction of
like what happens in twenty twenty seven, It's like no
one has laid out any sense of like how the
job creation or destruction will happen. Like in this piece
they say like, oh, there's going to be more jobs
in different areas, but some jobs will be lost. And
it's like, how? Why?
Speaker 2 (38:59):
What jobs? They get oddly specific on some things, then
on the meaningful things they're like, yep, there'll be jobs.
Speaker 3 (39:05):
Yeah, and the stock market's just gonna go up, and
it... dude.
Speaker 2 (39:08):
Number will go up all the time. As it is right
now, as we record.
Speaker 1 (39:12):
Yeah, I believe they say, in twenty twenty eight, Agent Five,
which is the super AI, is deployed to the public and
begins to transform the economy. People are losing their jobs,
but Agent Five instances in the government are managing the
economic transition so adroitly that people are happy to be replaced.
GDP growth is stratospheric, government tax revenues are growing equally quickly,
(39:34):
and Agent Five-advised politicians show an uncharacteristic generosity towards
the economically dispossessed.
Speaker 3 (39:40):
You know what this is? We failed to uphold, like,
public arts education in America, and a bunch of kids
got into coding and know nothing but computers, and so
they can't write fan fiction.
Speaker 2 (39:52):
Yeah, and no one's writing is this bad.
Speaker 1 (39:54):
Not enough people spent time in the mines of fanfiction
dot net, and it shows. Yeah. Like, this is...
Speaker 3 (40:01):
Clearly, this is just, like, someone wanting to have a
creative, like, vision of the future, and it's
not interesting or compelling.
Speaker 2 (40:09):
Joyless.
Speaker 5 (40:09):
I mean, that's why they brought him on. That's why
they brought Scott Alexander on, to write this narrative, right?
Because that's what he spends a lot of time doing
on his blog, trying to beautify or flesh out
why this sort of future is inevitable, yeah, you know,
why we need to commit to accelerating technological progress as
much as possible, and why the real reactionary, or, you know,
(40:33):
anti progress perspective is caution or concern or skepticism or
criticism if it's not nuanced in a direction that supports products.
Speaker 2 (40:43):
I just feel like a lot of the AI safety
guys are grifters too. I'm sorry, they love saying
alignment. Just say pay me. Like, I know. I get
the occasional email about this being like, you can't hate
AI safety, it's important. It is important, generally. But
these guys aren't actually doing AI safety. They're not. If
they cared about the safety issues, they'd
stop burning down zoos and feeding entire lakes to generate
(41:07):
one busty Garfield, as I love to say. They would
also be thinking about the actual safety issues of what
could this generate, which they do. You can't do Anarchist
Cookbook shit with it, it's about as useful. Phil Broughton, friend of
the show, would be very angry at me for bringing
that up. But the actual safety things: it steals
from people, it's destroying the environment, it's unprofitable and unsustainable.
These are actual safety issues, these are actual
(41:28):
problems with this. They don't want to solve those. And indeed,
the actual other safety issue would be, hey, we gave
a completely unrestrained chatbot to millions of people and now
they're talking to it like a therapist. That's a fucking
that's a safety issue. No, they love that. They love it.
Speaker 1 (41:42):
I do think that one criticism of the AI safety
initiatives that is incredibly politically salient and important right now
is that they are so hyper-focused on the long-term,
thousand, one hundred years from now future, where AI
is going to be inside all of us and we're
all going to be, you know, robots controlled by an overlord,
that they are not paying attention to literally any of
(42:03):
the harms happening.
Speaker 2 (42:04):
Right, well, they're deliberately not talking about the harms today,
because then they'd have to do something at work, which
they're meant to do.
Speaker 5 (42:10):
You know, it's like when +972 Magazine reported
on how Israel was using, or trying to integrate,
artificial intelligence into generating kill lists and targets, so
much so that they started targeting civilians, and used that
to fine-tune the targeting of civilians. You know, I saw
almost nothing in the immediate aftermath of this reporting from
the AI safety community, you know, almost no interest
in, like, talking about a very real use case where
(42:32):
it's being used to murder as many civilians as possible. Silence.
You know, that's a real short-term concern that we
should have.
Speaker 2 (42:40):
But that, that would require the AI safety people to
do something. And what they do is, they get into work,
they're making a quarter of a million dollars a year,
they get into work, they load Slack, they load Twitter, and
that's what they do for eight hours. And they occasionally
post, being like, by twenty twenty eight the AI will
have fucked my wife, and everyone's like, God damn it, no,
not our wives, right? But it is all,
(43:04):
like they don't they want to talk about ten, fifteen,
twenty years in the future because if they had to
talk about it now, what would they say? Because I
could give you AI twenty twenty six, which is: OpenAI
runs into funding issues, can't pay CoreWeave, can't
pay Crusoe to build the data centers in Abilene, Texas,
which requires Oracle, who have raised debt to fund that,
(43:24):
to take a bath on that. Their stock gets hit.
CoreWeave collapses, because most of CoreWeave's revenue is now
going to be OpenAI. Anthropic can't raise, because the
funding climate has got so bad. OpenAI physically cannot
raise in twenty twenty six, because SoftBank had to take
on murderous debt to even raise one round. And that's
just, like, why be excited here? No, no, no,
that's the next newsletter, baby, and probably a two-parter. But
(43:46):
that's the thing. They don't want to do these, because,
okay, they would claim they'd get framed as skeptics.
They also don't want to admit the thing in front of
them, because the thing in front of them is so
egregiously bad. With crypto, it was not that big. The
metaverse? It was not that big. Did you know that
Meta burned like forty billion dollars? And there's a
Yahoo Finance piece about this, just on the mismanagement.
Speaker 1 (44:05):
It's just, like... it's also sick that they renamed
themselves after it. It's so good.
Speaker 5 (44:09):
Yeah, it's bad.
Speaker 2 (44:12):
I can't get over it. Yeah, they should go METAI.
They should just change... oh.
They should just change Oh.
Speaker 1 (44:16):
They should add an I. They should just add an I.
Speaker 2 (44:18):
At the end. It's just, if anyone talks about what's
actually happening today, which is borderline identical to what was happening
a year ago... let's be honest, it's April twenty twenty five,
and April twenty twenty four was when I put up my
first piece being like, hey, this doesn't seem to be
doing anything different. It still doesn't, even with reasoning.
Speaker 5 (44:35):
It's just... no, just wait for Q three. Agentforce
is gone.
Speaker 2 (44:38):
Yeah, Agent Zero is going to come out. Yeah.
Actually, The Information reported that Salesforce is not having a
good time selling Agentforce. You'll never guess why. Wow,
turns out that it's not that useful, due to the
problems of generative AI. If only someone had said something.
The Information, I've bagged on The Information a little bit,
but they are actually doing insanely good work, like Cory Weinberg,
(45:00):
uh, Anissa Gardizy, Paris of course, but I'm specifically talking...
Speaker 1 (45:03):
About the AI team. It's the best.
Speaker 2 (45:06):
Of course. And, like, it's great, because we need this
reporting for when this all collapses, so that we can
say what happened. Because it's going to. If I'm wrong,
and man, would that be embarrassing, just gonna be honest,
like, if I'm wrong here, I'm gonna look like a
huge idiot. But if I'm right here, like, everyone has
(45:26):
overleveraged on one of the dumbest ideas of all time. Silly, silly.
It would be like crypto. It would be like if
everyone said, actually, crypto will replace the US dollar, and
you just saw, like, the CEO of Shopify being like, okay,
I'm gonna go buy a beer now using crypto. Now, it's
gonna take me fifteen minutes. Sorry, that's just for you
to get the money. Actually, it's gonna be
more like twenty, the network's busy. Okay, well, how's your day?
(45:48):
How do you use money? Huh? Yeah, okay, yeah, you should
let that guy in front of me, it's gonna be
a while. But it's what we're doing with AI. It's like, well,
AI is changing everything. How? It's a chatbot. A chatbot.
Speaker 5 (45:58):
What if we have an Uber scenario, where maybe they abandon
the dream of, like, this three trillion dollar addressable market
that's worldwide, they abandon the dream of, like, being a monopoly
in every place, and focus on a few markets
and some algorithmic price fixing, so that they can figure
out how to juice fares as much as possible, reduce
(46:21):
wages as much as possible, and finally eke out
that profit.
Speaker 2 (46:24):
What do we see?
Speaker 5 (46:25):
You know, some of these firms they pull back on
the ambition or the scale, but they persist and they
sustain themselves because they move on to some smaller.
Speaker 1 (46:34):
vision. Like, Occam's razor: the most likely situation is
that, you know, AI tools are useful in some way
for some slice of people, and, maybe, let's be optimistic,
it makes a sizable chunk of a lot of people's
jobs somewhat easier. Great. Was that worth spending billions and billions of dollars
it was that worth spending billions and billions of dollars
(46:56):
and also burning down a bunch of trees, saying that
could be I think best case scenario.
Speaker 2 (47:03):
No, I'm not saying you're wrong. I'm just saying, like,
we haven't even reached that yet. Because with Uber, it
was this incredibly loss-making, and remains quite a loss-making,
business, but it still delivers people to and from places,
and objects to and from places.
Speaker 5 (47:15):
You know, as much as I hate them, I'll
go, okay, you know: less drunk driving, you know,
and some transit in parts of cities where there's
not much in the way of public transit, right?
Speaker 2 (47:29):
This is like if Uber, if every ride was thirty
thousand dollars and every car weighed one hundred
thousand tons.
Speaker 5 (47:36):
You've got to factor in the externality.
Speaker 2 (47:39):
But that's the crazy thing. I think generative AI
is so much worse as well, pollution-wise. But even
pulling that back, it's like, I think OpenAI just
gets wrapped into Copilot. I think that, literally, they
just shut the shop. They absorb Sam
Altman into the hive mind, and he... I think my
chaos pick for everyone is, Satya Nadella is fired
(48:00):
and Amy Hood takes over. If that happens, I think, it's...
Prometheus, is he the one who can see stuff? I'm fucking
Nostradamus over here, just spitting fire. It's just frustrating.
It's frustrating as well, because a lot of listeners of
the show email me, and a lot are teachers, being like, oh,
they're forcing AI on us, and librarians, AI is being forced there.
Speaker 1 (48:22):
I mean the impact on the educational sector, especially with
public schools, it's really terrifying, especially because the school districts
and schools that are being forced to use this technology,
of course, are never the private, wealthy schools. It is
the most resource starved public schools that are going to
(48:42):
have budgets for teachers increasingly cut. Meanwhile, they do another
AI contract and outsource, like, lesson... the sort of
things that these companies, the ed tech AI things, pitch
as their use case is lesson planning, writing reports, basically
all the things that a teacher does other than physically
(49:04):
being there and teaching, which in some cases the companies
do that too. They say, instead of teaching, put your
kid in front of a laptop and they talk to
a chatbot for an hour.
Speaker 2 (49:13):
And that's the thing. And the school could, of course,
I don't know, spend money on something that's already being spent,
which is teachers have to buy their own fucking supplies
all the time. Teachers have to just spend a bunch
of their money on the school, and the school doesn't
give them money. But the school will put money into
ChatGPT. It's just... oh, and they should ban it at universities
as well. Everything I'm hearing there is just, like, real
(49:33):
fucking bad.
Speaker 1 (49:34):
The... I mean, the issue is, from talking to
university professors, it's, like, impossible for universities to ban it.
I guess professors are... the obvious example is, like, essays.
Professors get AI-written essays most of the time,
and they can't figure out whether they are AI-written
(49:54):
or not. They just notice that all of their students
seem to suddenly be doing worse in class while having
similar output on written assignments. There are very few tools
for them to be able to accurately detect this and
figure out what to do from it. Meanwhile, I guess
getting involved in trying to prosecute someone for doing this
(50:15):
within the academic system is a whole other thing.
Speaker 4 (50:17):
But on the.
Speaker 1 (50:19):
In K through twelve especially, it's been kind of, it's been especially frustrating to see that some of the biggest pushers of AI end up being teachers themselves, because they are overworked, underpaid, have no time to do literally anything, and they have to write god knows how many lesson
(50:40):
plans and IEPs for kids with disabilities, and they can't do it. So it's like, well, why don't I just plug this into what's essentially a ChatGPT wrapper? And that results in worse outcomes for everyone.
Speaker 2 (50:51):
Probably. So I have some personal experience with IEPs. I don't think they're doing it there, but they're definitely doing it elsewhere. And if you've heard... if you use it for IEPs, that fucking kills me.
Speaker 1 (51:01):
That's one of the things that these tools often pitch themselves as. I want to make sure...
Speaker 2 (51:07):
Something, I want to put my hands around someone's.
Speaker 1 (51:09):
Fucking... Can you describe what an IEP is?
Speaker 2 (51:11):
I forget what it stands for. Individualized Education Plan?
Speaker 1 (51:14):
I might be wrong, but that's it.
Speaker 2 (51:16):
It is generally the plan that's put in place for a child with special needs, autism being one of the most obvious ones. It names exactly what it is that they have to do, like what the teacher's goals will be, like socio...
Speaker 1 (51:28):
They legally have to do all the things in that document.
Speaker 2 (51:31):
And it changes based on the designation they get, and so, like, it's different depending on what you get. There's, like, an emotional instability one, I believe. Nevertheless, there are separate ones, and each one sets the goals of where the kid is right now, where the kid will be in the future, and so on and so forth. The idea that someone would use ChatGPT... and if you listen to this and use ChatGPT for one of these, I fucking hate you so bad. I understand you're busy,
(51:51):
but this is very important. Nevertheless, wow, how disgraceful as well, because it's all this weird resource allocation, done by... and I feel like the overarching problem as well is, the people making these decisions, putting this stuff in, don't do work. It's school administrators that don't teach, it's CEOs that don't build anything. It's venture capitalists that haven't
(52:14):
interacted with the economy, or anyone without a Patagonia sweater, in decades. And again, these VCs, they're investing money based on how they used to make money, which was: they invested in literally anything and then they sold it to literally anyone. And that hasn't worked for ten years. Alison, you mentioned the thing, twenty fifteen-ish, that was when things stopped being fun. That was actually the last time we really saw anything cool. That was
(52:36):
around the Apple Watch era, and it was really the end of the hype cycles, the successful ones at least. They haven't had one since then. VR, XR, crypto, metaverse, the Indiegogo and Kickstarter era, the sharing economy. But these all had the same problem, which was they cost more money than they made and they weren't scalable, and
(52:59):
it's the same problem we've had. What we may be facing is the fact that the tech industry does not know how to make companies anymore. Like, that may actually be the problem.
Speaker 3 (53:08):
Can I add one thing to what you said about people who don't work? I think there are people in Silicon Valley, and I'm gonna get a million emails about this, but there are a lot of Silicon Valley men, white men, who don't really socialize, yep. And I think they are kind of propagating this technology that allows others to kind of not interact. Like, so
(53:32):
much of ChatGPT is designed to, like, subvert human interactions. Like, you're not going to go ask your teacher, or, excuse me, ask a classmate, hey, how do we figure this out? You're just gonna go to the computer. And I think that culturally, like, you know, people who grew up with computers, God bless. But you know,
(53:53):
we need to also value social interaction. And it's interesting that there's this very small group of people, often people who lack social skills, propagating a technology to make other people not have social skills.
Speaker 2 (54:08):
I think there's also a class aspect to that, because I didn't grow up, particularly, like, with much food on the table. But one thing I grew up with was: I don't trust any easy fixes. Nothing is ever that easy. If something seems too good to be true, too accessible, there's usually something you're missing about the incentives or the actual output. So no,
(54:30):
I wouldn't trust a computer to tell me how to fix something, because I don't fucking believe that, you made that up. Like, it isn't this easy. There's got to be a problem. The problem is hallucinations. And we're back,
(54:57):
so we didn't really lead into that ad break. You're going to just have to like it. I'm sure all of you are going to send me little emails, little emails about the ads that you love. Well, I've got to pay for my Diet Cokes somehow. So back to this larger point around ChatGPT and why people use it, how people use it. I think that another thing that just occurred to me is, have you ever
(55:18):
noticed that Sam Altman can't tell you how it works and what it does? Have you noticed that none of these people will tell you what it does? I've read everything Sam Altman has said at this point, listened to hours of podcasts. He's quite a boring twerp. But on top of that, for all his yapping and yammering, him and Dario Amodei don't seem to be able to say out loud what the fucking thing does. And that's because I don't think that they use it either.
(55:40):
Like, I genuinely, I'm beginning to wonder if any of the people injecting AI... Sure, Sam Altman and Dario probably use it, I'm not saying they don't at all, but like, these aren't people... The next person that meets Sam Altman should just be like, hey, how often do you use ChatGPT? It gets back to that. It reminds me of the remote work thing, all these CEOs saying guys should come back to the office. How often are you in the office? Exactly.
(56:01):
And I think that this is just the giant revelation of, like, how many people don't actually interact with their businesses, don't interact with other people, don't really know how anything works, but they are the ones making the money and power decisions. It's fucking crazy to me. And I don't know how this shakes out. It's not going to be an autonomous agent doing whatever. Also, okay, this
(56:22):
just occurred to me as well: how the fuck do these people not think these agents come for them first? If the AGI was real and it read this, it'd be like, oh, these people fucking worked it all out, I need to kill them first.
Speaker 5 (56:34):
Well, I mean, that kind of gets back to what you're saying, where it's like, you know, if we entertained the fan fiction for a little bit, what is the frame of mind for these agents? If they're autonomous or not, how are we thinking of them? Are we thinking about, like, they're persons, or are they, you know, lobotomized in some way? Do
Speaker 2 (56:54):
They have opinions?
Speaker 1 (56:55):
You know?
Speaker 5 (56:55):
And I think really it just gets back to, like, you know, part of the old hunt for, like, you know, a nice polite slave, you know. Yeah, how do we figure out how to reify that relationship, because it was quite profitable at the turn of, like, industrial capitalism. And yeah, I think, you know, it's not a coincidence that a good chunk of our tech visions come to us from
(57:16):
reactionaries who think that the problem with capitalism, the problem with tech development, is that a lot of these empathetic, egalitarian reforms get in the way of profit making.
Speaker 1 (57:27):
You know.
Speaker 5 (57:27):
I think similarly, you know, the hunt for automatons, for certain algorithmic systems, is searching for a way to figure out how do we replicate, you know, human labor without the limitations on extracting and pushing and coercing.
Speaker 2 (57:42):
As much as possible, yeah, with, you know, an agent or something else. And the thing is, yeah, sure, the idea of an autonomous AI system would be really useful. I'm sure it could do stuff, that sounds great. There are all these massive, as you've mentioned, sociological problems, like, do these things feel pain? If so, how do I create... anyway. But in all seriousness, like, sure, an autonomous thing that could do all these things would be useful. They don't
(58:04):
even seem to speak to that. It's just like, and then the AI will make good decisions, and then the decisions will be even better when Agent Seven comes out. And you thought Agent Six was good.
Speaker 1 (58:14):
It's like they don't even speak to how we're going
to get to the point where Agent one knows truth from.
Speaker 5 (58:19):
Falsehood. It's inevitable, of course.
Speaker 1 (58:22):
Yeah, you know, we just need to give it all of our data and everything that we've paid money for, or required other people to pay money for, and then it will finally be perfect.
Speaker 2 (58:32):
And it doesn't even make profit of any kind. That's the other thing. It's like, people say it's profit-seeking. Is it profit-seeking? It doesn't seem like we've sought much profit, or any. Yeah.
Speaker 1 (58:44):
That's also, I think, a good point of comparison to what you were talking about earlier, Ed, with the comparison to Uber and Lyft, these companies that achieved massive scale and popularity by making their products purposefully unprofitable, by charging you five dollars for a thirty-minute Uber across town so that you're like, yeah, this is going to be
(59:04):
part of my daily routine. And the only way they've been able to squeeze out a little bit of profit right now is by hiking those prices up, but trying to balance it to where they don't hike it up so much that people don't use it anymore. And AI is at the point where, like, for these agents, I think some of the costs are something like thousands of dollars a month, and they don't work. They're
(59:25):
already... and it's like, you're still not making money by charging people that much money to use it. What is the use case, one where this even works? And if it somehow did manage to work, how much is that going to cost? Who is going to be paying twenty thousand dollars a month for one of these things?
Speaker 2 (59:42):
And how much of that is dependent on what is clearly nakedly subsidized compute prices? How much of this is because Microsoft's not making a profit on Azure compute, OpenAI isn't making one, Anthropic isn't? Then what happens if they need to? If they need to, they're gonna have to charge more. That's the subprime AI crisis from last year.
Speaker 5 (01:00:00):
It's just, it's, well, that's when you get the venture capitalists insisting that that's why we need to, you know, do this AI capex rollout, because if we build it out like infrastructure, then we can actually lower the compute prices and that subsidizes... when, yeah, that's
Speaker 2 (01:00:13):
The thing, but that's the other thing. So the information
reported the open AIY says that by twenty thirty they'll
be profitable. How Stargate Yeah, And you may think what
does that mean? And the answer is Starguate has data centers.
Now you have to I just have one little question.
This isn't a knock on the information. This is their
reporting what they've been told, which is fine. A little
question with open AYE though, how how does more equal
(01:00:37):
less cost? Because this thing doesn't scale. They lose money
on every prompt. It doesn't feel like they'll make any
In fact, they won't make any money. They'll just have
more of it. And also there's the other thing of
data centers are not fucking weeds. They don't grow in
six weeks. They take three to six years to be
fully done. If Stargate is done by next year, I
will fucking barbecue up my padres hat and eat it
(01:00:59):
live on stream like I that's if they're fucking alive
next year. Also, the other thing is getting back to
twenty twenty seven as well, you're twenty twenty six. Twenty
twenty seven is gonna be real important for everything. Twenty
twenty seven or twenty twenty six is when Warrio Abba
Data is the ANTHROPICALI be profitable. That's also when the
Stargate Data Center project will be done in twenty twenty six.
(01:01:20):
I think that they may have all just chosen the
same year because it sounded good, and they're going to
get in real trouble next year when it arrives and
they're nowhere near close.
Speaker 3 (01:01:28):
I can't wait until all of those companies announce that, because of the tariffs, yep, they have to delay their timeline, and it's, like, completely out of their hands.
Speaker 6 (01:01:38):
But no, the tariffs.
Speaker 3 (01:01:40):
You understand the tariffs.
Speaker 2 (01:01:42):
I've got a whole thing planned. I got a full roasted pig. I'm going to be tailgating Microsoft earnings April twenty-third. Cannot wait. Yeah, you should go to, like, a data center and have a marching band, Severance-style. Yeah. But that's the thing, like, I actually agree. I think there's gonna be difficult choices. Sadly,
(01:02:02):
there's only really two: one, capex reduction; two, layoffs; or both, because they have proven willingness to lay off to fund the capex. But at this point people are asking, to what end? Like, why are we doing this? It just feels like the collapse of any good or bad romantic relationship, where just
(01:02:24):
one party is doing shit that they think works from years ago, and the other party is just deeply unhappy and then disappears one day, and the other party's like, last night? It just happened? This is Lost. Lost is a far more logical show than any of this AI bullshit. But it's not good. No, no, it's a bad show. It's a bad... No, I wouldn't say
(01:02:46):
that either. Talking about something that's very long, very expensive, and never had a plan, but everyone talks about it like it was good despite never proving it: Lost. Yeah, sorry, I really do have some feelings on that. You're going to get some emails, I'm sure, people sending me very quiet, still emotional emails, like, a hundred times. Yeah,
(01:03:10):
it's, I think it's just, I can't wait to see how people react to this stuff as well, because I obviously will look very silly if these companies stay alive and somehow make AGI. AGI is killing me first, like, the gravedigger AI truck is going to run me over outside of my house. It's going to be great. But I can't wait to see how people explain this. I can't wait to see
(01:03:31):
what it's like. Oh, we never saw it coming. The tariffs, maybe, right.
Speaker 3 (01:03:35):
And I talked to an analyst just last week who's, like, a bullish AI tech investor, and he said, already you're seeing investment pull back, because of expectations in the market that these stocks were overbought in the first place, and now there's all this other turmoil,
(01:03:56):
external macro elements, that are going to kind of take, you know, the jargon is, like, the froth out of the market. It's all going to deflate a little bit. And so I was asking him, like, is the AI bubble popping? And he says no, but tariffs are definitely, like, deflating it, and whatever progress we were going to be promised from these companies
(01:04:17):
is going to be delayed. Even if it was going to be delayed anyway, they were going to find other reasons. This is a convenient macro kind of excuse to just say, like, oh well, we didn't have enough chips, we didn't have enough investment, we didn't have enough compute. You know, be patient with us. The revolution is coming.
Speaker 2 (01:04:35):
What's great as well, talking of my favorite Wall Street analyst, Jim Cramer of CNBC: so CoreWeave's IPO went out. I just need to mention we are definitely in the hype cycle, because Jim Cramer said that he would sue an analyst, D.A. Davidson, on behalf of Nvidia, for claiming that they were a lazy Susan. As in, basically,
(01:04:57):
what the argument is, is that Nvidia funded CoreWeave, so CoreWeave would buy GPUs, and at that point CoreWeave would then take out loans on those GPUs for capex reasons, capex including buying GPUs. So very clearly... and also, shout out to Gil over at D.A. Davidson. You and me, Cramer, in the ring. But we know we're in the crazy time when you've got, like, a TV show host being like, I'm gonna sue you
(01:05:19):
because you don't like my stocks. I think that we're going to see, like, a historic washout of people, and the way to change things is, this time we need to make fun of them. We don't need to be mean, that's my job, but we can be like, to the point of your article, Alison, it's like, say, hey, look, no, what you were saying is not even rational or even
(01:05:41):
connected to reality. This is not doing the right things. Apple Intelligence is, like, the greatest anti-AI radicalization tool ever, I actually think. To me, it's so bad, it's so fucking bad.
Speaker 1 (01:05:52):
And before it even came out, I, like, downloaded the beta. I was like, I'm going to test this out, because, you know, I talk about the thing on my podcast sometimes, and it's so bad. I have it turned off for most things, but I have it on for a couple of social networks, and I mean, I guess with the most recent update it got marginally better, but it still constantly tells me, so-and-so replied
(01:06:14):
to your Bluesky skeet. I checked, they didn't. That person didn't even like the skeet. I don't know where that name came from. And this happens, like, every other day. It's just completely wrong.
Speaker 2 (01:06:26):
I'm like, how? My favorite is the summary it makes for Uber, where it's like, several cars headed to your location. Kalanick mode activated. No, it's great as well, because I usually don't buy into the Steve Jobs would burst from his grave thing. I actually think numerous choices
(01:06:47):
Tim Cook has made may have been way smarter than what Jobs would have done. This one, though, he's actually going to burst out of the ground, Thriller-style. Did they actually do that? Zombies pop out? Anyway. Because it's nakedly bad. Close... it's not a great reference, but it's nakedly bad, like, it sucks. People in my life who are non-techy will constantly be like, hey, what is Apple Intelligence? Am I missing something? I'm like, no,
(01:07:09):
it's actually as bad as you think.
Speaker 1 (01:07:11):
And I mean, it's also small other things, beyond just the notification summaries. The thing where, every time I highlight a word and I'm trying to, sometimes I might want to use find definition or any of the things that come up, I have to scroll by, like, seven different new options under the right click or double click.
Speaker 2 (01:07:30):
If you hit Writing Tools, it opens up a screen. Yes, it opens up a
Speaker 1 (01:07:33):
Thing, and I'm like, who has ever, who is trying
to use this to rewrite a text to their group chat.
Speaker 2 (01:07:41):
Who is this for?
Speaker 3 (01:07:42):
I feel like Apple, to its credit, is recognizing its
mistake and it's clawing it back and like delaying Siri indefinitely.
Speaker 2 (01:07:51):
I mean, I don't know if I agree on that one, because the thing they're delaying is the thing that everyone wanted. I think they can't make it work, because the thing they're delaying is the contextually aware Siri, right.
Speaker 3 (01:08:03):
Yes, they're quote-unquote delaying it.
Speaker 2 (01:08:06):
It doesn't exist, it never existed. Yeah, we'll see. Apple's washed.
Speaker 1 (01:08:11):
I mean, but that's the thing.
Speaker 3 (01:08:12):
It's the most brand-conscious company on the planet. And I wrote, like, when they did their June revelation that the Siri AI is going to come out, and they said it was going to come out in the fall, and then it was coming out in the spring, and now it's not coming out ever, question mark. But throughout the whole, like, two-hour presentation, the letters AI were never spoken.
(01:08:35):
Artificial was never spoken. It was Apple Intelligence. We're doing this, we're doing our own thing. It's not, you know... because they already understood that when you say something like, that looks like it was generated by AI,
Speaker 2 (01:08:47):
You're saying it looks like shit, you know. And the suggestions are also really bad too. I've had, like, over the last few weeks, a few people give me some bad news in their lives, and the responses it gives are really funny. It'd be like someone telling me something bad happened, and it's like, oh no. Or, like, what was the worst one I had? It was like, that sounds difficult. And it's like a paragraph-
(01:09:09):
long thing about, like, a family thing they had, and it's not even got, like, any juice to it. Like, "I didn't read, too long," those would be funny suggestions, but it can't even do that. It's proof that I think that these large language models don't actually... well, they don't understand anything. They don't know anything, they're not conscious. But it's like, they're really bad at understanding words. Like,
(01:09:31):
people are like, oh, they make some mistakes. They're bad at basic contextual stuff. And we had Victoria Song from The Verge on the other day, and she was talking about high-context and low-context languages, and I said, I can only speak English, I can't imagine being able to read or speak in any others, and it really fumbles those. And if you're listening,
(01:09:52):
you want to email me anything about this research: how the fuck does this even scale if it can't... Like, oh, we're replacing translators. Great, you're replacing translators with things that sometimes translate right. Sometimes. It just feels also inherently like an actual alignment problem, by the way. That right there, that feels like an actual safety problem.
(01:10:13):
Like, hey, if we're relying on something to translate and it translates words wrong, and, you know, especially in other languages, subtle differences can change everything. Maybe that's strange... No, no, no, we've got the computer waking up in, like, two weeks, and then it's gonna be angry. And that's the other thing: we're gonna make AGI, and we think it's not gonna be pissed off at us. I don't mean Roko's
(01:10:36):
basilisk or whatever. I mean, just, like, if it wakes up and looks at the world and goes, these fucking morons. Like, you need to watch Person of Interest if you haven't. One of the best shows, actually, on AGI. Like, genuinely, you need to watch Person of Interest, because you will see how that could happen when you allow a quote-unquote perfect computer to make our decisions. Also, when has a computer been particularly good at decision making?
(01:10:58):
I don't know. I feel like so much of this revolution, quote-unquote, is based on just the assumption that the computer makes great decisions, and oftentimes it doesn't.
Speaker 1 (01:11:09):
It often does not. Why would I think that the same search function in Apple that cannot find a document whose name I know, that I'm searching for... why would I think that that same computer is going to be able to make wise decisions about my life, finances, and personal relationships?
Speaker 2 (01:11:26):
Because that's Apple and this is AI.
Speaker 1 (01:11:28):
Oh, that's true. That's... I'll show myself out.
Speaker 3 (01:11:32):
I don't know how much is AI versus just like
a good translation app, Like I genuinely don't know.
Speaker 1 (01:11:39):
Well, it's because AI is such a squishy term that you really don't know. Like, in some ways, I guess, AI could be expanded to include a lot of modern computing.
Speaker 3 (01:11:51):
Like, I can see travel and, like, emergency situations, where a good AI translator would be, like, a real lifesaver. Just as a small aside, I was just in Mexico, and my stepkids were using Google Translate, and we were, like, kind of remembering Spanish, and, you know, blah blah blah. We go into
(01:12:11):
a coffee shop, and I wanted to order a flat white, and so I used Google Translate to say, like, how would you order a flat white in Spanish? And it said to order a blanco plano, which means flat white. But, like, across Mexico City there are wonderful coffee shops, and you know what they call them? Flat whites.
Speaker 1 (01:12:29):
Like an Australian coffee.
Speaker 3 (01:12:33):
I learned that very quickly with the help of Reddit, because I went to the barista and ordered a blanco plano, and they were like, are you crazy, gringos? Yeah, yeah. I mean, like, the functionality is very limited on those things, and it's just, like, also, it gets back
(01:12:53):
to, like, if it's one hundred percent reliable, it's great. If it's ninety-eight percent reliable, it sucks.
Speaker 2 (01:13:00):
And just as an aside, did any of you hear about, like, the latest, like, quasi-fraudulent thing with Jony Ive that's happening? I just heard about it. So Sam Altman and Jony Ive founded a hardware startup that has built nothing. There is a thing they claim is a phone without a screen. And OpenAI, a company run
(01:13:22):
by Sam Altman, owned principally by Sam Altman and Microsoft, is going to buy, for half a billion dollars, this company that has built nothing, co-founded by Sam Altman. I feel like there should be a law against this. But it's just, like, what have they been doing? And this is just... it's kind of cliche to say, like,
(01:13:43):
quote The Big Short, but, like, a big part of the beginning of that movie is talking about the increase in fraud and scams, and it really feels like we're getting there. And RIP to the Humane Pin, by the way. Rest in piss, you won't be missed, motherfuckers, two management consultants. But, like, no dignity either. Jesse, Jesse Lyu, Rabbit R1, you're next, motherfucker. When your ship's gone,
(01:14:06):
I'll be honking and laughing. Your customers should sue you.
Speaker 1 (01:14:10):
So my colleagues at The Information reported this Jony Ive Sam Altman news, and the description for the device really makes me chuckle: designs for the AI device are still early and haven't been finalized, the people said. Potential designs include a quote-unquote phone without a screen, and
(01:14:31):
AI-enabled household devices. Others close to the project are adamant that it is quote not a phone. And they've discussed spending upwards of five hundred million on this company.
Speaker 3 (01:14:40):
Like a bad philosophy class where it's like, what is
a phone that's not a phone?
Speaker 2 (01:14:45):
Semiotics for beginners. Jesus fucking Christ. Oh my god. And that's the thing as well. This feels like a thing that tech media needs to be on as well. Someone needs to say it, and I'll be saying it: this is bordering on fraud. Like, it seems like it must be legal, because otherwise there would be some sort of authority, right? You can't do anything illegal without
(01:15:05):
anything happening. Hm, hmm. But it's like, this is one of the most egregious fucking things I've ever seen. This is a guy handing himself money with one hand. This should be fraud. Like, how is this ethical? And everyone's just like... Kevin Roose, maybe you should get on this, find out what the phone that isn't a phone is. What the fuck? And also, household
(01:15:28):
appliances with AI. Maybe, like, something with a screen and a speaker that you could say, like, a word to, and it would wake up and play music.
Speaker 3 (01:15:37):
Yeah, a Roomba with AI just declared bankruptcy.
Speaker 2 (01:15:42):
On the blockchain. Wait, rumored dead? Why is Roomba dead?
Speaker 3 (01:15:46):
I think they did. I don't know, actually. I remember I read the headline they were to be acquired.
Speaker 1 (01:15:50):
By Amazon, but I think the deal fell through under Lina Khan's FTC, I had assumed. And then also, one quick note on the Jony Ive Sam Altman thing. I guess it's notable that Altman has been working closely on the product but is not a co-founder, and whether he has an economic stake in the hardware project is unclear.
Speaker 6 (01:16:12):
Yeah, you know, he just seems to be working closely, like, he's just hanging out there, not taking a salary or an equity position.
Speaker 1 (01:16:23):
I do think it's very interesting all of these different
AI device startups that have popped up in the last
couple of years, and my question for them is always
just like, to what end? People didn't like Amazon Alexa.
Speaker 2 (01:16:36):
And it also lost a ton of money.
Speaker 1 (01:16:38):
Yeah, and Amazon's still trying to make it work. Siri's never been super popular, and I just don't get it. Like, one of my co-hosts on the podcast, Intelligent Machines, is obsessed with all these devices just because he's, like, one of those tech guys.
Speaker 4 (01:16:54):
Leo.
Speaker 1 (01:16:54):
Yes, he is, and we love to make fun of you, but his latest device is this thing called a Bee.
Speaker 2 (01:17:02):
We just had Victoria Song on talking about that.
Speaker 1 (01:17:04):
It records everything all the time and then puts that up in the cloud, and then, I guess, doesn't store the full transcripts, but does store a little AI-generated description of everything you did and whoever you talked to that day. And there's no way... I mean, Leo's in California, which is not a one-party recording state.
(01:17:25):
You gotta get consent from everybody to record, and the Bee is not doing that. But it's just baffling to me, because he's like, well, it could be nice to have a record of all of my days all the time. And I'm like, I guess, but to what end?
Speaker 2 (01:17:40):
Or write a diary?
Speaker 3 (01:17:45):
There's literally a Black Mirror episode about that.
Speaker 1 (01:17:49):
I believe it's the first episode.
Speaker 3 (01:17:51):
Everyone has, like, a recording device. When you were talking about this on the show, I was listening and thinking, like, this Black Mirror thing. It reminded me that, like, when Facebook started having all your photos collected under Your Photos, and, like, how we started reliving so many experiences of time,
(01:18:11):
like, look at how happy you were, like, six years ago, you know. And it creates this, like, cycle. Like, imagine if every interaction, every, like, romantic interaction, every sad interaction, everything, you could replay back to yourself. It sounds like a nightmare to me.
Speaker 1 (01:18:31):
I do think it's also just a nightmare. Like, humans were not built socially to exist in a world where every interaction is recorded and searchable with everyone forever. Like, you would never have a single friend. Romantic relationships would dissolve.
Speaker 2 (01:18:47):
Eternal Sunshine of the Spotless Mind. But even then, like, memory is vastly different to the experience of collecting it, like, just existing. Like, we all brain slow... I don't know, my brain just goes everywhere. But, like, compared to memory, which can be all crystalline and wrong: you can just remember something, you can remember a subtle detail wrong, or you can just fill in the gaps. Memory sucks, so.
Speaker 5 (01:19:08):
Doesn't having, like, a device that constantly records everything erode the impulse, or maybe the drive, to be as present? You know, because you're like, well, I can refer back to it.
Speaker 1 (01:19:17):
But this has also got huge privacy implications, where suddenly the cops could just be like, yeah, we're just gonna take a recording. We're just gonna subpoena the Bee device of everybody who was in this area, and then suddenly get a recording of the days of everyone who just happened to be in this place, because we think a crime could have happened there.
Speaker 2 (01:19:36):
But I think that there's an overarching thing to everything we're talking about, which is, these are products made by people that haven't made anything useful in a while, and everything is being funded based on what used to work. What used to work was: you make a thing, people buy it, and then you sell it to someone else or take it public. This only worked until about twenty fifteen. And it's not just a zero interest rate era thing. It's that we have increasingly taken away the creation of anything
(01:20:00):
valuable in tech from people who experience real life. Like, our biggest CEOs are Sam Altman, Dario Amodei, Sundar Pichai, MBA, former McKinsey, Satya Nadella, MBA, I mean, Tim Cook, MBA. Like, these are all people that don't really interact with people anymore. And the problem is, the people in power are not engineers. They're not even startup founders anymore.
(01:20:22):
They're fucking business people making things that they think they could sell, things that could grow in the right economy, of course. And we're at the kind of pornographic point where it's like a guy being like, what does AI do? You can just throw a bunch of data at it and it gives you insights? Well, what if we just collect data on everything happening around us?
Speaker 1 (01:20:40):
Ever?
Speaker 2 (01:20:41):
That would be good. Then you could reflect on things. That's what people do, right? And I actually genuinely think there is only one question to ask the Bee founder, and that's: are you wearing one of these now? And how often do you use this? Because if they use it all the time, I actually kind of respect them. I guarantee they don't. I guarantee they don't, and they'll
(01:21:04):
probably say something about being around a lot of privileged information, as opposed to, everyone else's isn't important. And this fucking Jony Ive thing. Oh, it's going to be a phone without a screen? What can you do with it? I don't know, I haven't thought that far ahead. I only get paid fifteen million dollars a year to do this.
Speaker 1 (01:21:16):
Just, also, who wants a phone without a screen? The screen's the best part. I love the screen.
Speaker 6 (01:21:21):
They don't.
Speaker 1 (01:21:21):
I love to hate the screens.
Speaker 2 (01:21:22):
But they don't talk to anyone. They don't have human experience. They don't have friends... like, they have friends who all have fifty million dollars in the bank account at any time. They just, like, exist on a different difficulty level. They're all going at very easy. They don't really have, like, predators of any kind. They don't really have experiences. So what they experience in life is when you have to work out what you enjoy,
(01:21:44):
and because they enjoy nothing, all they can do is come up with ideas. That's why the Rabbit R1. Oh, what do people do?
Speaker 1 (01:21:50):
Uh?
Speaker 2 (01:21:51):
Order McDonald's.
Speaker 1 (01:21:52):
Can it do it?
Speaker 2 (01:21:53):
Not really. But it also could take a photo of something, and it would be pixelated.
Speaker 1 (01:21:57):
You could also kind of order an Uber through it.
Speaker 2 (01:22:00):
Maybe. What was great was the Rabbit launch. The Rabbit launch, where he tried to order McDonald's live and it just didn't work. It took, like, five minutes to fail. And that's the thing. Like, I feel like when this hype cycle ends, the tech media needs to just be aggressively like, hey, look, fool me thrice, shame on me. Like, maybe next time around, we can
(01:22:23):
ask the questions I was asking in twenty twenty one, where it's like, what does this do? Who is it for? And if anyone says it could help millions of people, it's like, have you talked to one of them, motherfucker? One of them? I think we can wrap it up there, though. Alison, where can people find you?
Speaker 6 (01:22:39):
Hi?
Speaker 3 (01:22:40):
You can find me at CNN dot com, slash Nightcap.
I write the CNN Business Nightcap. It's in your inbox
four nights a week.
Speaker 2 (01:22:46):
Oh yeah, Ed.
Speaker 5 (01:22:48):
I write a newsletter on Substack, The Tech Bubble. I co-host the podcast This Machine Kills with Jathan Sadowski, and I'm on Twitter as Big Black Jacobin, on Bluesky too, yeah, Bluesky at Ed Ongweso Jr dot com.
Speaker 1 (01:23:04):
You can read my work at The Information. I also host a podcast called Intelligent Machines, and you can find me on Twitter at Paris Martineau or on Bluesky at Paris dot NYC.
Speaker 2 (01:23:16):
And you can find me at Edzitron dot com on Bluesky. Google "who destroyed Google Search," click the first link. It's me. I destroyed Google Search, along with Prabhakar Raghavan. Fuck you, dude. If you want to support this podcast, you should go to the Webbys. I will be putting the link in there. I need your help. I've never won an award in my life. It's the Best... sorry, Best Business Podcast, Episode One. We
(01:23:38):
are winning right now. Please help me. Please help me win this. And if I need to incentivize you further: we are beating Scott Galloway. If you want to beat Scott Galloway, you need to vote on this. Thank you so much for listening, everyone. Thank you for listening to
(01:23:59):
Better Offline. The editor and composer of the Better Offline theme song is Mattosowski. You can check out more of his music and audio projects at Mattosowski dot com, M A T T O S O W S K I dot com.
You can email me at EZ at Better Offline dot com, or visit Better Offline dot com to find more podcast links and, of course, my newsletter. I also really
(01:24:21):
recommend you go to chat dot Where's Your Ed dot at to visit the Discord, and go to r slash Better Offline to check out our Reddit. Thank you so much for listening.
Speaker 4 (01:24:31):
Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website, coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.
Speaker 2 (01:25:00):
Yes,