Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Cool Zone Media. Mother father, come quick, it's Better Offline,
the tech podcast. I'm your host, Ed Zitron. A
(00:22):
few weeks ago, the Internet was captivated by the work
of one particularly pissed off software developer who claimed, and
I quote the title of his blog, I will fucking
piledrive you if you mention AI again. The writer,
based in Australia, lays out in detail how infuriating a
distraction AI has become from the wider problems riddling many
modern tech companies, as well as the noxious, nasty presence
(00:45):
of grift and management consultant types that are deliberately keeping
it inflated, regardless of whether it actually means anything. One
particular quote stood out to me. You see, while hype
is nice, it's only nice in small bursts. For practitioners,
we have a few things that a grifter does not have,
such as job stability, genuine friendships and souls. What we
(01:05):
do not have is the ability to trivially switch fields
the moment the gold rush is over, due to the
sad fact that we actually need to study things and
build experience. Grifters, on the other hand, wield the omnitool
that they self aggrandizingly call politics. That is to say,
it turns out that the core competency of smiling and
(01:26):
promising people things that you can't actually deliver is highly transferable.
This piece caught both my eye and that of Cool
Zone Media's Robert Evans, who assured me that
I'd be allowed to leave the Cool Zone Media pain
cage if I was able to get the author of
the piece on the show. So that's what I did.
Joining us to talk about this incredibly aggressive blog is
(01:49):
Nikhil Suresh, who's the director at Hermit Tech and an
excellent blogger. Nick, thank you so much for joining us.
Speaker 2 (01:55):
Oh, thank you for the very kind words. I'm super
excited to be here. When that piece came out, at
least three people said, hey, have you heard of Ed
Zitron? And I was like, well, I'm talking to him tomorrow. They'll
be excited now, I will.
Speaker 1 (02:10):
So you're angry, I think we've kind of got that,
and I share your anger. But what is it about
this that really makes your blood boil, that makes you want
to piledrive folks?
Speaker 3 (02:21):
Yeah.
Speaker 2 (02:22):
So it's interesting because a lot of people read that
piece and they reached out and thought that, you know,
I'm actually super angry about AI specifically. But there's a pretty
consistent through line through my blog.
Speaker 3 (02:35):
It's gotten a lot of traffic over the past year
and a half.
Speaker 2 (02:38):
It's not about AI specifically, it's about this massive class
of non technical managers that have entered the IT space
and are essentially cosplaying at doing software engineering. The
thing that is infuriating about the LLM thing in particular
is that it's uniquely suited for grifting. So if you
(02:59):
look at the average company, they can't ship incredibly
basic applications. And even if you look at something like
the crypto space, which I don't like, but you look
around Melbourne, crypto companies actually have some level of engineering discipline.
You know, they use newer technology, and there's things
in the job descriptions to indicate seriousness.
Speaker 1 (03:18):
Right.
Speaker 2 (03:18):
AI companies don't have that, and I don't think at
big corporations CEOs are aware that their staff are just
typing in import openai. You know, you're not doing
any complex work. You just call, like, an API. You
just send data to a company in the US.
They do everything, and people down here in Melbourne are
(03:38):
making out like they're the greatest engineers ever, they're thought leaders,
you should put them on conference stages. It's not hard.
It's so easy. It takes like a week to make
something from scratch.
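For non-technical listeners, here is roughly what the "just typing in import openai" work being described looks like. This is a minimal sketch, assuming the openai Python package and an API key in your environment; the model name and prompt are illustrative:

```python
# A minimal sketch of "the whole product": send text to OpenAI, get text back.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize this support ticket: ..."}],
)
print(response.choices[0].message.content)
```

Everything hard happens on OpenAI's servers; the "AI company" part is the handful of lines above plus branding, which is the point being made.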
Speaker 4 (03:49):
Yeah.
Speaker 5 (03:49):
I really appreciated that part of your piece because it
gets at something that's frustrated me as well, which
is, and I'm not coming at this from the same
kind of technical background you are, but knowing that there's
legitimate technology here, but all of the people being
trusted with marshaling it and marshaling these companies that are
supposed to deliver it to us are, on their face, con men.
I'm thinking about Sam Altman today, or at least the
(04:12):
clip that came out today where he's like, soon you'll be
able to just ask ChatGPT to solve physics for
Speaker 4 (04:18):
You. Solve physics.
Speaker 5 (04:20):
That's just a nonsense statement. That's just absurd. And
we know that, right? We can all tell that
that's nonsense.
Speaker 2 (04:28):
If you're not defaming him, that's a crazy thing to say.
Speaker 3 (04:31):
Like, if those are the...
Speaker 4 (04:33):
Words? I'll find the exact quote.
Speaker 1 (04:35):
But yeah, no, he was. That's very much
what he said. You've spoken of this kind of like
organizational cancer inside tech firms. Is it this kind of grift?
Speaker 2 (04:45):
Yeah, yeah, it's not specific to technology. It's just the
technology makes it very very easy. I don't know of
any industry that doesn't have this massive horde of people
who don't really know what they're doing, right, but they
just kind of bulldoze people socially. Put it this way.
So I think my blog has done two million hits
in the past year and a half. And when
(05:09):
people talked about, you know, offering me work or anything,
they said they would give me a reference at their workplace,
but they wouldn't mention that piece because leadership would be
so embarrassed, like they can't have someone like me around,
someone who, yeah, says somewhat cool things, and says fuck
sometimes. Can't have that. But no, it's just because like
(05:30):
the Emperor very clearly has no clothes. I don't even
really get hate mail because people know I'll just point
out that they're grifting and they'll have no defense, so
they just leave me alone.
Speaker 1 (05:41):
Yeah. So how do you think this has happened?
It clearly isn't something that happened overnight, but it feels like
the tech industry, especially, has just been poisoned by product
managers and management consultants. How did this happen?
Speaker 2 (05:53):
Off the top of my head, something that really influences
this is just very strange ideas on how things scale.
There is this, I listened to those two episodes you suggested,
and I think the main one is the Rot-Com Bubble,
which people should go listen to. And there is this
idea that everything should scale up forever, and the only
(06:15):
way to handle that, communication-wise, is like this hierarchical structure.
Speaker 3 (06:19):
But it's really hard to find good people.
Speaker 2 (06:21):
And I think a huge hierarchy can work if you
only have really great people in it, but it's really hard.
Speaker 3 (06:27):
There's one good.
Speaker 2 (06:28):
software engineering company I know in Melbourne, Flip, down in
the CBD, and they regularly interview forty people for one role,
right, because the quality of candidates just isn't there. If
you insist you need this massive structure and you need
a hierarchy for it, you're just going to have to
insert people who don't know what they're talking about. You
can't find good people fast enough, and then those people
(06:50):
who aren't very good hire other people who aren't
very good, and soon that's the whole industry.
Speaker 1 (06:53):
Right, and that feels like something I've been writing that will
end up being a podcast that may run before or after
this. It's just this idea of shareholder supremacy, that
companies have been engineered to make the numbers work, to
hit analyst expectations. Jack Welch, classic bastard. And it
feels like LLMs in particular have become the
(07:16):
ultimate form, the Super Saiyan, of rot capitalism, where
people just kind of, like you said, just bullshit about them.
It's like, yeah, you can do all this shit, right, Yeah,
it's the future. What is your actual background?
Speaker 2 (07:30):
So I've got a four year qualification in psychology, so
I mostly did cognitive psychology and circadian science, and then
I moved over to a postgraduate qualification at one of
Australia's bigger universities, so that was in data science. I
did a Masters with a little bit of thesis work
in it, so I'm reasonably good. I'm not a PhD,
but I do know what I'm talking about.
Speaker 3 (07:50):
In the space.
Speaker 5 (07:51):
I did want to drill into something because you're coming
at this like from the perspective of somebody who was
working in a technical capacity, seeing the way this group
of people who are talking nonsense are coming in and,
as you said, kind of like bulldozing their way into
conversations with people. Can you talk about the way
that kind of happens socially? Because we all sort of see
the way these people try to monopolize conversations about technology
(08:13):
at events, you know, if you're coming at it from
like press at something like CES, but also just kind
of if you're looking at things on Twitter or looking
at just sort of the way popular conversations go. But
I'm I'm curious about like the social dynamics of these
people who are coming into organizations full of real people
trying to do real things and hijacking that for the
(08:34):
purpose of cashing out.
Speaker 2 (08:35):
Yeah, I think a lot of them are latching onto
some pre existing level of grift that's just been in
the industry. So when I entered around twenty eighteen twenty nineteen,
that's when I got into the Australian tech market, LLMs
were not this massive thing, right. There were a couple
of GPT two demos, but there is no way for
you at a normal company to roll out something like GenAI, right,
(08:58):
you really had to know what you were doing. You
didn't have the resources, like you couldn't just hit an
endpoint somewhere and get a result. And I cannot emphasize
that enough for like non technical listeners, it's like two
lines of Python to use OpenAI. People making these
chatbots are not serious practitioners. I could teach like my
thirteen year old cousin how to do this, but there's
(09:19):
this kind of pre existing class of person that talks a lot,
gets on stages, doesn't really know what's been going on.
But they were much smaller back then. They did talk
about AI relentlessly. But the difference between twenty nineteen and
twenty twenty four is those people in twenty nineteen didn't
have any working products. They would talk and then they
would leave organizations every two years before they were on
(09:42):
the hook for their failed projects. The difference now is
people are going to those same people, so you just
turn up and you go, hey, I'm going to be
your lead data scientist, your chief.
Speaker 3 (09:52):
Data officer, whatever you want.
Speaker 2 (09:54):
I'm going to do this AI revolution for you, and
then they can very quickly ship stuff that doesn't quite work.
But organizations are really bad at tracking what actually drives revenue,
So the moment you get the flashy product out, people
think you're some sort of superhero.
Speaker 3 (10:09):
Yeah. And if it's broken, yeah, absolutely so.
Speaker 2 (10:12):
I've been I probably shouldn't say this, but I will
and just leave the details out. I've been leaked a
couple of confidential documents from i would say small to
medium businesses on how GenAI has worked in their chatbots.
I can't name the companies because I'm not supposed to.
The documents, like, it's not impressive, even the ones that
sort of work. You get really weird statistics like interacting
(10:33):
with the chatbot predicts tickets being escalated to human support teams,
like, way more than usual.
Speaker 3 (10:41):
But they're still rolled out, right. That's good.
Speaker 1 (10:43):
So the one thing that they say this is going
to be able to do, customer service, very cool.
Speaker 3 (10:48):
It definitely can't do it now.
Speaker 1 (10:50):
Great Jesus.
Speaker 5 (10:51):
Do you get the sense they're rolling it out even
though they're not seeing an immediate, like, positive effect, because
the kind of effect on their potential valuation by throwing
AI in is higher than whatever they lose by having
a less efficient actual solution.
Speaker 3 (11:07):
I think it's threefold.
Speaker 2 (11:09):
So the first one is it's speculative, which means even
if I was dishonest and I thought it
was bad for the company, I might want AI on
my CV specifically, it doesn't actually have anything to do
with the company, right, it's just for me. The second
one is, yeah, obviously it's very easy to get funding.
Someone reached out to me, a CEO of a company
(11:30):
that does LLM work, and this person admitted to me
their product doesn't work, so they don't accept funding, because
they're still trying to get it to work, and people are
still flinging money at them. This thing, like, self
admittedly doesn't function.
Speaker 3 (11:41):
That's very cool. Yeah, it's great. It's great that the
industry is in that position.
Speaker 5 (11:47):
Good for them. And the last one?
Speaker 3 (11:50):
The last one is, you know, the chatbot piece.
Speaker 2 (11:53):
I don't work in that domain, but I kind of
wonder is it cheaper to just provide bad customer service,
lay off all your support staff? Like, when I'm trying
to cancel my mobile plan, do they actually care that
I got frustrated and put the call down?
Speaker 3 (12:07):
That sounds good for.
Speaker 5 (12:08):
Them, right, Well, yeah, I mean that's one of the
things that's so insane about this is that this could
work out for them. At least, when you're looking at
the kind of, like,
dollar amount of valuations that OpenAI is seeing,
I don't think that letting companies that don't give a
shit about customer service economize more is going to be
(12:30):
the size of industry that they're hoping for. It's certainly
not trillions of dollars in value, right? Now,
that could be a profitable business, just making everything
a little bit worse for everybody. Like, that could
be a very good business to be in. Yeah, just Google. No,
I mean, yeah, Google's done well
off of that.
Speaker 1 (12:50):
It's just just making things gradually, iteratively worse.
Speaker 2 (12:54):
Well, I will say that I get a lot of
email from people who left Google, and you know, some
of them. It's a huge company, so they do some
useful stuff, but a lot of people were like, I
left because it sucks there now.
Speaker 3 (13:05):
So that's that's definitely a thing.
Speaker 1 (13:23):
So you said the data science field was large but
largely fraudulent. What exactly does that mean? Does it mean that
most of data science is just grifting, or is it
something a little deeper?
Speaker 2 (13:35):
It's a little deeper than that. So as
a technical discipline, machine learning is very very serious, right.
You do see it used in genuinely helpful things. It's
probably being used right now as we're recording this. It's
somewhere in the supply chain. And you'll hear more old
school practitioners. They get very annoyed at the words AI
and machine learning. They say things like, it's just statistics,
(13:58):
you know. LLMs are obviously doing something. It would
be wrong to just say they're classical statistics and not
impressive in any way. But when I say it's grifting,
I mean this whole artificial intelligence thing. The people deciding
on the size of the data science market are not technicians,
so when you say AI, they're thinking Skynet, generative AI,
(14:19):
general intelligence. Like they really have no idea what they're
hiring for. But they'll do things like say we need
to hire twenty data scientists tomorrow, right, like that's.
Speaker 1 (14:28):
Right, without really knowing what they do with them, no.
Speaker 2 (14:31):
Idea what they do with them. And it's really hard
to solve problems with data science, right. It is entirely
possible to have a problem that is fundamentally intractable, no
matter how disciplined your team is, because the data doesn't exist,
or the problem's just too hard mathematically.
Speaker 3 (14:48):
That happens all the time.
Speaker 1 (14:50):
It almost feels like the problem with the tech industry
is that nobody knows what they're doing. You have people
that know stuff, how to do things, and then they
get a job at Google in a pile of people
that can do stuff, run by someone who goes, all right, dickheads,
put the AI in it now, and then stuff comes out.
(15:13):
It's almost like the problem you mentioned earlier, of
there not being enough good people, also appears to be that
there are no good managers.
Speaker 5 (15:21):
I wonder, kind of pivoting, or not pivoting but building
on that, I wonder how much of it is that
the guys who are really good at keeping the hype
going are also just better socially than the kind of
people who are really good at coding and are really
good at the actual like science, Like if some of
the being bulldozed effect is that the people who really
know their shit just are not used to the kind
(15:42):
of energy that a hype man is capable of deploying
against them.
Speaker 2 (15:47):
Yeah, so this has come up. I've thought about this
a lot. So that's a kind of component of engineering
I work on a lot: not hyping myself up, but
actually being able to hold conversations. But that's not quite
the attribute I see in the managers who do this
kind of thing, because what you're describing is like a
pure conman. And at least, and not to be mean
(16:10):
to people who are conmen, but like, at
least there's some skill in that. But what we're talking
about is they.
Speaker 5 (16:15):
Just like this is where Steve Jobs comes in, right,
there's a version of this that requires talent.
Speaker 2 (16:21):
Right, right. Like, it may not be a talent
I respect morally, but it's an actual skill set.
But we're talking about people who have really just learned
how to like mimic the trappings of success and responsibility.
You know, they're real big on suits, they're real big
on talking very slowly with long pauses, but like they
(16:46):
don't have novel thoughts.
Speaker 3 (16:48):
Their brain has just had.
Speaker 2 (16:50):
like Forbes magazine flashed onto it. There's nothing else there.
Speaker 3 (16:55):
So when we're talking about social skill and bulldozing, I'm
Speaker 6 (16:57):
Not talking about The Art of War, right. And you know, we're
not talking about people who are doing this very savvy
political piece where they finesse things, and you know, it's
very machiavellian. We're talking about people who just turn up
and if you point out they don't know what they're
talking about.
Speaker 2 (17:16):
They're a fucking dick to you. And they do this
until you leave them alone, and that's their whole career.
The industry is just full of that. They're not even smart.
I have seen an executive at a restaurant order the
wrong dish, have their family eat the whole thing, then
tell the wait staff, like, hey, you brought us
the wrong thing, and when a kid piped up, they
(17:37):
shushed the kid.
Speaker 3 (17:38):
You know.
Speaker 2 (17:39):
It's not like a talent that they have cultivated
in the managerial space. It's like a personality disorder that just
bleeds into everything.
Speaker 1 (17:47):
And ruins everything in reality.
Speaker 3 (17:51):
How I want.
Speaker 5 (17:52):
Some of it is because, like, I talk about
Jobs a lot, and I think you have to, because
a lot of these guys, if not all of them,
at some level wish they were him, right, with the
exception of, you know, the medical issue. But
he's mythologized to a significant degree. Especially,
I'm recording from San Francisco right now. I mean, he's
(18:13):
almost like a saint down here, and I feel like in
the act of copying him, you get these people who
are like carbon copies of just the personality disorder without
any of the actual insight. And I think that's like
further exacerbated because they're all like, yeah, read the Art
of War, read like one of three Malcolm Gladwell books,
Read The Four Hour Body, and do not read anything else.
(18:37):
There's no humanities, there's nothing like because all you need
to learn how to do is be the most optimized
asshole you can be.
Speaker 2 (18:44):
I like how you just picked the... If I had
a hitman budget, like, that would be my list. As
a data scientist, Malcolm Gladwell, like, I might need him
gone more than anyone on the planet.
Speaker 5 (18:57):
Oh, I'm right there with you, buddy. He's been advertised
on our shows, and oh man, I don't love it.
Speaker 4 (19:04):
I don't love it.
Speaker 1 (19:07):
So with that in mind, Nick, what do you think
of Sam Altman.
Speaker 2 (19:11):
I don't really know a whole bunch about him because
out here in Melbourne, I don't really play around in
the FAANG, super huge company space.
Speaker 3 (19:18):
Right.
Speaker 2 (19:18):
Our clients are relatively small. I work at like medium
sized businesses, tiny compared to the size of Google and
OpenAI. All I will say is, if he said
what you just said about physics, like, he needs to
come out with the greatest innovation I've ever seen in
the next two years, or he will be one of
(19:39):
the most insane people I've ever heard of, right, Like,
that's he really has to deliver on that statement, or
he's obviously full of shit.
Speaker 1 (19:47):
It just blows me away that he's even able
to say it without someone just being like, what the
fuck are you talking about, Sam Altman? Sam Altman, did you
hit your head somewhere on the way here? Because it's
not even the first insane thing he's said. He has
said so many times, AI is super smart. It's going
to be a super smart friend that knows everything about you.
The fuck are you talking about? What does any of this mean?
(20:08):
And my own frustration I've shared about the media is
that no one ever seems to say the appropriate thing,
which is wait, what was that? Excuse me, mate? What?
But it almost feels, with everything you're talking about and
in your excellent blog, that that's also the problem.
You've just got these guys who just say things, raise
money, go up in organizations, and don't actually do anything. Like,
(20:32):
Sam Altman did not make GPT, someone else did that. Mira
Murati did not do it. The CTO of OpenAI didn't
do it either.
Speaker 4 (20:41):
It's just so strange. It feels like we're living in
Speaker 1 (20:44):
An alternate reality at times, and it's freaking me out.
To be honest, I'm putting on the
clown makeup, the Joker makeup, almost every day now.
Speaker 2 (20:54):
Was Mira Murati the one who couldn't answer that question
on whether they used YouTube to train? Yes, I mean.
Speaker 5 (21:01):
Yes, yes, she sure couldn't. And I found, at least,
I watched the video with captions while we were doing
this, with Sam Altman, and the exact statement was basically,
someday you'll be able to ask ChatGPT to, discover
physics for me, and it'll do it. Discover all the physics,
something like that. I've got the link in there, but
(21:22):
like, just a nonsense statement. At least make it...
You could make, like, Star
Trek claims, right? Like, that's the rational end of this.
I don't know that it'll actually be that good, but
be like, one day I'll have a computer and you
can tell it to do all these things and it'll
organize your life and answer questions, it'll run complex systems,
and it'll let us explore space. That's nonsense too, probably at
(21:43):
least in any kind of near term, but it's at
least a thing that theoretically could happen, as opposed to
just telling a robot to discover physics for you.
Speaker 1 (21:51):
And he says, computer solve all of physics for me,
which is so funny.
Speaker 5 (21:57):
Right. And it's not what anyone... Also, it's just not,
I don't know, I don't need to go into how
Speaker 1 (22:02):
the physics works. You don't solve physics.
Speaker 5 (22:04):
You don't solve physics. And it's not what's appealing about
the Star Trek dream of a computer that works the
way the ones on those ships do, right? Like,
the enticing thing is the idea of, like, technology
that is hopeful and still needs humans at the center
of it. And I don't know, he doesn't seem to
know how to like promise anything but the elimination of
(22:25):
human creativity and interaction, which I guess is part of
what I find disturbing about him.
Speaker 2 (22:31):
Also, you know, that's obviously something that is said
to drive stock prices at various companies, right? It is
like no one believes that, Like, it can't be the case.
Even the people investing don't believe that, because if you
could get a computer to solve physics, stock prices wouldn't
matter anymore, right? So that can't be why you're investing.
Speaker 1 (22:53):
I just don't know. That statement, I hadn't really thought
about too much before right now, but I can't get
over the order of, solve physics. Like, it's a statement
said by someone who doesn't know what they're talking about
for other people who don't know what they're talking about
to go, damn, this boy's smart.
Speaker 5 (23:08):
It kind of makes me suspect, Ed, that he thinks
about, like, writing a novel that way too. Well, the
author could just solve, you know, Wuthering Heights, right? He
could have ChatGPT do it.
Speaker 4 (23:19):
She could have ChatGPT.
Speaker 1 (23:21):
He would have survived in my book. Yeah, those kids
would have been safe in The Brothers Karamazov.
Speaker 5 (23:28):
If ChatGPT accidentally pulled too much from DeviantArt,
and The Great Gatsby ends with some Captain Kirk and Spock
mpreg fanfic.
Speaker 3 (23:39):
The most cursed thing I've ever heard.
Speaker 1 (23:44):
But that's the thing. It doesn't feel like anything's being
done about this shit. What was it, the end of
twenty twenty two, ChatGPT popped up, and now, what,
like a year and a half into it, I'm like, oh, great,
I can get a pregnant picture of Garfield with an
AK forty seven. Great. Now what? And you actually kind
(24:05):
of made, not an identical point, I don't think the
Garfield mpreg stuff was in there, I'd have to look again.
But you made a similar point where it's like, as
a company, you probably don't need AI or you're already
using it.
Speaker 2 (24:17):
Yeah, yeah, I mean it's just baked into the software
you use.
Speaker 3 (24:21):
Right.
Speaker 2 (24:21):
So we did a little bit of work for a
managed security provider down here in Melbourne. They have AI
in their supply chain, but they don't need data scientists
and specialists doing it here. Right. They've got some vendor
way upstream, very small company that hires a couple of PhDs.
They write some algorithms, which are not LLMs, just
old school statistical computing, to detect when someone's, like, trying
(24:43):
to get into your network.
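A minimal sketch of what that kind of old school statistical computing can look like, here as a simple z-score outlier test on login failures; the method and the numbers are illustrative assumptions, not the vendor's actual algorithm:

```python
# Flag a host whose failed-login count is an outlier versus its own history.
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag `current` if it sits more than z_threshold std devs above history."""
    if len(history) < 2:
        return False  # not enough data to estimate a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu  # flat history: any increase is suspicious
    return (current - mu) / sigma > z_threshold

# e.g. failed logins per hour over the past day, then a sudden spike
print(is_anomalous([3, 5, 4, 6, 5, 4, 5, 6], 40))  # True
```

No LLM anywhere in it, which is the point: the useful detection work is plain statistics, bought as a product.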
Speaker 3 (24:45):
You just buy the product from them, right, you.
Speaker 2 (24:46):
Just sit there And it's the same way we get
benefits out of Zoom and YouTube and Twitter or whatever
the heck, right, like there's just some stuff baked in
the back, and you know, you don't have to stress about it.
It is insane how many companies that have no business
playing in the space are doing this stuff instead of
figuring out things like, you know, staff retention averages one year,
how do we get it up to seven? They don't
(25:06):
worry about that because they're just fixated on this AI
thing. I need to calm down, because I was about to
say I'm going to strangle the next person to
mention it, but it might come across as more of
a legal threat.
Speaker 1 (25:17):
Calming down is not how Better Offline works. My Oura
ring tells me I'm working out during the show, so
I don't have it on today, sadly. But you mentioned
that they should be doing other things, and you actually
mentioned in the piece as well, the companies aren't like
checking their backups to make sure they work. What should
organizations actually be doing instead of AI.
Speaker 2 (25:41):
Just turn their brains on for like twenty seconds, you know,
like it's not hard. They're just so hyper fixated on this.
If it's not AI, if it's not LLMs or GenAI, it
was, like, older AI. If it's not that, it's blockchain,
if it's not that, it's quantum. I went to a talk
at a big conference, and I really will put them
on blast. I went to Something Digital in Queensland last
year, and it was dystopian.
Speaker 3 (26:02):
It was just.
Speaker 2 (26:03):
Entirely filled with, like, non technicians. The stage was just
filled by people lying to them, and half the products
were about quantum. And that whole talk, they never explained
what quantum was, right. They were just like, quantum is
going to revolutionize everything, and I'm in the audience going,
what the hell does that even mean? Right, what do
you mean quantum? But yeah, what they should be doing,
it's just like really basic fundamental stuff, right, It's not
(26:25):
even interesting, which is why no one wants to do it,
because you can't like rocket ship your way to CEO
by doing this stuff. But like you need to figure
out how to, like, get your teams out of so many meetings.
You need to actually spend time with people on the
ground and ask them what needs changing in the business.
You don't need AI to figure all that stuff out.
Just ask a guy who's been there for a while.
(26:46):
It's all pretty common sense, and AI is really pulling
away from that. We talk a lot about the investment
in AI as a waste of money, but there's probably like
just as much unrealized opportunity cost in people just wasting
time in meetings about it, right? Like, we could just
be doing something else.
Speaker 3 (27:04):
I just got back from a trip to Fiji.
Speaker 2 (27:07):
And I volunteered for a tiny bit with my
girlfriend at Animals Fiji, and they can treat a dog
for about thirty Australian dollars. Can you imagine how many
dead dogs OpenAI has produced?
Speaker 4 (27:20):
Like you could just oh my god, you could just.
Speaker 2 (27:22):
Save them, right. It's gonna be hundreds of thousands, like that.
Speaker 3 (27:26):
That's not the... wait, wait.
Speaker 1 (27:27):
Wait, be a little more specific. How did OpenAI
kill the dogs?
Speaker 3 (27:31):
Well, because we could have just spent the money on
the dogs, right? Like, you just, you do the math, right?
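Doing that math as a back-of-the-envelope sketch; the funding figure and exchange rate here are illustrative assumptions, not numbers from the episode:

```python
# Rough opportunity-cost arithmetic behind the joke.
OPENAI_FUNDING_USD = 10e9   # assumed round figure for money raised, illustrative
AUD_PER_USD = 1.5           # rough exchange rate, assumption
COST_PER_DOG_AUD = 30       # from the episode: about AU$30 treats one dog

dogs = OPENAI_FUNDING_USD * AUD_PER_USD / COST_PER_DOG_AUD
print(f"{dogs:,.0f} dogs")  # ~500,000,000 dogs' worth of vet care
```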
Speaker 4 (27:38):
This many dogs worth of potential?
Speaker 1 (27:40):
Yeah, life. I'm putting that in the document of, if I
ever get the Sam Altman interview: Yeah, Sam, I talked to
someone who said thirty bucks can help a dog.
How many dogs would you kill to bring AGI to life?
Speaker 4 (27:56):
Totally.
Speaker 5 (27:57):
That's a great one of those things, because there's
that talk about how much power it uses to
ask a question of ChatGPT, how much water are
we basically pouring on the ground every time we use
these models? And you could say the same things
about every useless thing humans do, but like we mostly don't,
(28:18):
right? Like, I can say that about, like, Warhammer models,
but at the end of the day, you get a
Warhammer model, right? Like, that's why people keep buying that
shit. As opposed to, at the end of the day,
I think a lot of the ultimate result of
a shitload of what's going on in AI right
now is that, hopefully briefly,
a shitload of people lose their jobs and customer service
(28:39):
gets wildly worse, maybe forever, right? Like, that's
Speaker 4 (28:43):
That's what we're giving it up for. I would rather
have little models.
Speaker 3 (28:46):
Yeah.
Speaker 2 (28:47):
Also, no one's turning up in the public service and
going I need ten million dollars to hire someone to
paint Ultramarines.
Speaker 3 (28:53):
Right like that right? Not right?
Speaker 4 (28:54):
They should be. Maybe in the UK, that
Speaker 3 (28:57):
would be a better use. At least someone's having fun.
Speaker 1 (29:15):
You kind of hinted at this in the piece, but
where does this actually go? Because it feels like we're
hitting a wall with what LLMs can do, and there's
all of this hype, but there's not that much stuff
that's come out of it. What happens? You named a
few scenarios, the first being like magic, like just it
(29:36):
gets better in two years.
Speaker 2 (29:37):
But otherwise, Yeah, I think realistically the hype just dies
down and a lot of people are very very embarrassed
by their overinvestment in the space. You'd hope that's the case.
It's very possible they'll just kind of get away with
it and grift their way into something else, especially, you know,
if all their CEO friends did the same thing, I
(29:58):
don't know who holds them accountable, especially if the
board was, you know... It's a bit like arguing with
people who are like late stage Trump supporters, and it's
not that they necessarily agree with things he's saying at
this point.
Speaker 3 (30:11):
I hope that's not too political for this podcast.
Speaker 2 (30:13):
Like yeah, it's just that they've said so many embarrassing
things that it's, like, too late to back out now.
You can't, even if internally you're like, okay,
you know what, he just said something insane. You can't
admit that you've been saying insane things for four years,
you know, so you're kind of locked in. Yeah, I
think the magic scenario is possible. You know, OpenAI
(30:36):
has tons of money and a lot of PhDs, but
you know, they've got people there who know
more about the field than I do. Maybe they've got
something in the wings that's incredible. But I mean,
there's no need to speculate on that, right? They'll
just show us.
Speaker 3 (30:50):
If they've got something, Yes, do it, Yeah, just do it.
We don't need to speculate.
Speaker 2 (30:55):
And if they do it, I don't even know if
investing makes sense because I don't know if like in
that universe, stock markets make sense, right, because they're really
talking about a level of innovation that would do away
with all human labor. So it almost doesn't make sense
to put money into them then. It's all very, very strange.
What I'm hoping for, to quote my co-director Ash Leally,
(31:19):
the people who did this are going to be the
Axe body spray of CEOs. Like, we'll just be done with them.
Speaker 3 (31:26):
We'll just remember the investment.
Speaker 2 (31:28):
We'll have all the snippets of them getting on their
TEDx stages, and then hopefully we'll never work with
them ever.
Speaker 3 (31:33):
again. Yeah.
Speaker 1 (31:36):
Well, it feels like they'll find another grift.
But I think that's really the question. So it's not quantum,
it's not blockchain, it's not AI. What the hell
is next? Like, what now? Where does all of this
money and hype go? This is the thing that I
feel crazy about. A number of things drive me crazy.
(31:57):
It's just where does it go from here? What is
the next thing that people can even invest in, in organizations?
Speaker 3 (32:04):
Yeah?
Speaker 2 (32:04):
Well, I mean you know, quantum and blockchain were not
very predictable, nor were not the elms. There would there
be something, yeah, nerdifiable, And I suspect it's very possible
the bubble will not be allowed to pop until there
is a new thing because I think the people making
this investment also control when.
Speaker 5 (32:21):
Yeah, in the end, it's a load bearing bubble, right? Like,
yeah, when this thing falls down.
Speaker 1 (32:28):
But when you say they won't allow it to pop,
what do you mean?
Speaker 2 (32:32):
I just have no idea how long the kind of
investor class involved in this can just keep pumping money
into it, right, I have no idea if they're on
their last legs. I don't have a background in international finance.
I don't know where the money's coming from. It's coming
from somewhere I don't understand, because I know they're not
making profits, right. So I don't know if
(32:54):
they could keep this going for twenty years if they
need to, or if we're six months from some sort
of massive collapse. Like, you'd have to get an economist
in here to comment on that. But I do know
it's very likely it will collapse at some point. It's
why at Hermit we don't do AI consulting. We could
get a lot of work doing it. I just don't
think it's you know, we're thinking in a five to
ten year time span. I don't think it's sustainable to
(33:16):
be in that game.
Speaker 1 (33:17):
It all feels very nihilistic. It all just feels like people
grifting each other at this point. Like, Microsoft is now
the largest customer of Oracle because Oracle is now building
data centers for Microsoft to build AI. Yeah. My experience,
by which I mean reading a great deal of blogs
(33:38):
and earnings is it doesn't seem like anyone's making any
money from this, and that seems to be your experience too, right.
Speaker 2 (33:47):
I haven't seen any medium sized company that's dabbled in
AI make any money off of it. Most of our
clients fortunately aren't too deep into it, but they bring
us on and ask things like, hey, you know, we
just bought this Microsoft product. Can we get it to
do the thing it's supposed to do? And the answer's
(34:07):
almost always no, like, there's just no. We mostly help
people get out of the AI game, right. They bring
us in to help them finish the project, and we
mostly point out that they should just cancel the project,
and then we go do boring stuff like fix their
databases because that's what they should have done five years ago.
Speaker 1 (34:23):
And that's kind of what you've been suggesting in the
piece as well, that the real problem is organizational failure,
almost like, like I mentioned earlier, the unchecked backups. Yeah, yeah,
what are other things that people are just not doing?
Like, what is the actual thing being ignored?
Speaker 2 (34:39):
Well, the thing is, it's like a constellation of things, right.
It would be like asking a writer what is the
one thing that writers are ignoring. It's about, like, developing your
judgment across like the entire field holistically, but that's really
hard and takes a lifetime. So people just fixate on
the latest thing and they build their whole career off
of that. It takes like thirty seconds of skimming
(35:00):
someone's CV to tell when they're full of shit. Like,
it's really, really obvious who's serious.
Speaker 1 (35:04):
And how do you do it?
Speaker 2 (35:05):
Again, even that's a judgment piece, right. But
you know, here's a good example. Most organizations I know
are really into, I don't know if this resonates for
non technical people, but there's this thing called agile. Everyone's just
obsessed with being maximally agile all the time.
Speaker 1 (35:20):
Please please define that. I know what it is, but
I'm looking forward to hearing.
Speaker 2 (35:25):
It's meant to be a set of, like, it's
literally like a four or five sentence thing which says
you value things like people over processes. Right, it's
literally five sentences. And we've built this entire manic industry
of like books and specialists and this all built around
these five sentences which just say things like, you know,
(35:48):
let people work and pay attention to the humans. And
when you talk to these consultants, you find out they
haven't even read those original five sentences.
Speaker 3 (35:55):
Right, that's very very common. I hope it's five sentences.
Speaker 5 (35:58):
It might be that, again, it's like that copy
of a copy, right? At this point, like, we've
gone to Kinko's so many times you can't even really
tell what's supposed to be written on the sheet. But
they all know it's deeply important to run a good.
Speaker 2 (36:12):
business. Like Kamino, but all the clones have eight heads,
you know, like they'd be...
Speaker 5 (36:18):
Right, they no longer look like that very handsome stunt man.
Speaker 4 (36:23):
Yeah, it almost.
Speaker 1 (36:24):
Feels like the whole DevOps thing that happened. Remember the
DevOps boom where I was like, yeah, DevOps is important,
and then when I had clients, I would ask, what does
DevOps mean to you? They'd be like, well, it's like the developers
talk to the business people. And I'd be like, why do you need
software for that? And they would kind of freeze.
Speaker 2 (36:39):
That's not over. It's just that it's been overshadowed
by the AI thing. But, so, with agile, we have
this whole industry of people who turn up and tell
you how to run meetings every day, and it's like,
just talk to people! What the hell are you talking
about? DevOps, people still say that constantly, and it's really
just code for like automate your things sensibly, right, there's
(37:00):
something useful in there, and the moment someone realized it
was useful, we just got another gigantic industry around it.
That's what most of my blog is
actually about. The AI piece was just, like, I'd never even
written about it before because it was so obviously kind
of bullshit. It didn't warrant it until someone really annoyed
me last week.
Speaker 1 (37:17):
Yeah, what actually caused you to write that? Was there
just a guy?
Speaker 2 (37:22):
I think, I, you know, it's been building up for
like four years. And then finally I think I put
a picture in there of that Scale AI survey. Yes,
and I just saw that line that was like, was
it eight percent of companies have not seen gains from
gen AI?
Speaker 3 (37:37):
And then I chose violence.
Speaker 1 (37:39):
You know, this has some really inventive threats as well.
I was very happy to see you say
you need to replace your sweaters with
full plate to survive my onslaught.
Speaker 4 (37:54):
I I appreciate that.
Speaker 5 (37:55):
I think one of the problems we're having, and
this isn't just a problem in the tech space, but
you've got a bunch of people who get by on
not being challenged, or on the fact that if you are
going to have a debate with someone, the act of
like arguing with them about something lends some sort of
credibility to their position. When there are certain people where
what they're saying is just such horseshit, potentially dangerous horseshit,
(38:17):
that what you need to say is if you say
that again, I'm going to piledrive you. Right? Like,
I'm gonna give you a fucking Stone Cold Stunner if
you keep doing that shit.
Speaker 4 (38:28):
And I do appreciate that tone.
Speaker 2 (38:31):
Yeah. And you know, we talked about that bulldozing.
A lot of it is just the fact that you
know they're basically just committing a low level psychological violence
on a societal level.
Speaker 3 (38:42):
Right.
Speaker 2 (38:42):
They may get you into an office and then they
will talk at you for like a hundred hours, like
they will not drop the point. They will just talk
about AI twenty four to seven. And if you're a
little bit pessimistic, they'll question your credentials, say you're not
forward looking, blah blah blah blah blah, and
they'll just do this until you go, I just need
a paycheck, all right, invest in whatever you want.
Speaker 3 (39:03):
I give up, just do it.
Speaker 1 (39:04):
Just do it. And is this common? Is this,
like... what kind of person does this?
Speaker 2 (39:09):
Is this?
Speaker 1 (39:10):
Like, is this an executive level person, a
product manager, or, like, a consultant?
Speaker 2 (39:15):
I think it happens in, like, every position. I've
seen programmers do that. I've seen artists do that. I've
seen executives do that. I can't respond to
my emails anymore because so many people have written in
with stories that I can't navigate that inbox, right? And yeah, I
Speaker 3 (39:31):
Just see it everywhere I go.
Speaker 2 (39:32):
And then occasionally I see like these really small functional
places where they don't have these issues. It's usually anywhere
between like five to one hundred people, and it's really
hard to get in there, because they don't scale up, because they
because they.
Speaker 1 (39:46):
Really don't help to hire more people.
Speaker 3 (39:48):
Yeah.
Speaker 2 (39:49):
Yeah, they get to one hundred and they're like, well,
we're all really well paid and happy, and the more
people we add, the higher the odds are we just
get someone like that.
Speaker 3 (39:56):
So we're just going to stop hiring.
Speaker 1 (39:58):
In my experience with people like that, and I run a
tech PR firm as my day job, and I do
occasionally run into people like this, I've learned that
there's just a really easy way of saying it, which is,
I'm not done talking. Like, they really hate that. When
I run into this kind of sociopath logic, you say, I'm
not done talking, and they get really mad, because they're like, wait,
I can't just attack. I've told people to fuck off.
Speaker 3 (40:21):
I don't care.
Speaker 1 (40:22):
The thing is, these people, perhaps they don't all need
to be piledriven, but they need to be yelled at.
Sam Altman apparently can't walk around San Francisco anymore. And I
think he'd want you to believe it's because he's famous,
but I like to believe it's because people fucking scream
at him. I hope that that happens.
Speaker 3 (40:38):
I probably would.
Speaker 4 (40:42):
I'll tell you if I run into him today.
Speaker 1 (40:44):
Yeah, please do give him my number.
Speaker 2 (40:46):
I should probably clarify I don't know that much about
Sam Altman, so I don't want to... I'm
going to read about him, and then I'll kick you
an email to see if I feel better or not
about slamming him. It's great, it's cool, dude. He's
a real charmer. He's got your best interest at heart.
You should listen to him.
Speaker 1 (41:02):
He always has. So okay, I think a good question to
wrap up on is, assume you are God, really easy
one here. Yeah, yeah. How would you actually fix the
current culture in tech? You've kind of hinted at it.
It's like organizational stuff. What would you see done with,
like, a Google or a FAANG, like, with the bigger companies?
(41:23):
What is the problem then?
Speaker 2 (41:24):
Yeah, So I think it's some inherent issue to do
with scale, because fundamentally we're talking about the fact that
there's a class of person who will either bulldoze people
in day to day interactions, or they find ways to
attach themselves to people who are very very credulous or
don't have the technical skill to push back on what
they're saying. I suspect that's a lot of why journalists
(41:46):
don't push back on, you know, people saying insane things,
because there's of course a fear in the back of
your mind that they're going to come out with like, yeah,
but you're not a technician. That was a stupid question.
You've just been embarrassed on your own show, right, Like,
that's that's very, very possible. I think a lot of
those issues are so systemic that I view it from
like a small scale disruption standpoint. So I started Hermit Tech
(42:09):
with just a couple of my friends. Right, They're just
guys I met who behave very honestly in the space.
Some of them are AI guys, but they just do
like actual serious work that gets published in places. Yeah,
and we're just taking all the business from places that
actually need results. Right, So most of the people rolling
out OpenAI stuff are not interested in working with
(42:31):
us because they're just doing some weird Ponzi scheme where
they convert the investor money into status and they use
the status to.
Speaker 3 (42:38):
Get a higher paying job.
Speaker 2 (42:39):
Like, they don't need us because they don't actually need results,
They just need to grift. I'm just going to let
that market explode. That's not my problem. I think it
would be dangerous to do anything to it. It's just
going to blow up on its own. I'm just going
to hang out in the corner with some sane people
and just find people who actually need stuff to work.
Speaker 7 (42:58):
Right.
Speaker 2 (42:58):
I'm going to go in and just make sure all
the database backups get tested, and then I feel like
that's going to be a bajillion dollar industry.
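For the curious, a minimal sketch of what "backups get tested" can mean in practice, using SQLite from the standard library so it runs anywhere; a real setup would restore into a scratch copy of its actual database engine:

```python
# Restore a SQL dump into a throwaway database and sanity-check the result.
import sqlite3

def backup_restores_cleanly(dump_path: str) -> bool:
    """Return True if the dump replays without errors and contains tables."""
    conn = sqlite3.connect(":memory:")  # scratch database, discarded afterwards
    try:
        with open(dump_path, encoding="utf-8") as f:
            conn.executescript(f.read())  # replay the dump; raises if corrupt
        tables = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()
        return len(tables) > 0  # a restore with zero tables is a dead backup
    except sqlite3.Error:
        return False
    finally:
        conn.close()
```

A backup that has never been restored is a hope, not a backup, which is the boring, billable insight.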
Speaker 3 (43:06):
Right. Like, for the rest, make things work. Yeah, just
make things work.
Speaker 2 (43:11):
That's how people have made money for like the history
of time. Just don't be insane about it.
Speaker 1 (43:16):
Nick, thank you so much for joining us. Where can
people find you?
Speaker 4 (43:21):
Yeah?
Speaker 3 (43:21):
Thank you?
Speaker 2 (43:21):
Yep, so I bought ludic dot blog, so they can
find my personal website, and there is hermit dash tech
dot com, and that's where they can get us for consulting.
Speaker 3 (43:31):
If you need someone to not lie to you, that's
that's us.
Speaker 1 (43:35):
And of course, I would never lie to you. My
name's Ed Zitron. I run this podcast. You need
to hear from me. But of course, Robert Evans joined
us today as well. Where can they find you?
Speaker 3 (43:43):
Robert?
Speaker 4 (43:44):
Thank you? Thank you.
Speaker 1 (43:45):
Ed.
Speaker 5 (43:46):
You can find me occasionally on your show Better Offline
and on my own show Behind the Bastards.
Speaker 1 (43:52):
So yeah, thank you for listening to Better Offline. The
editor and composer of the Better Offline theme song is Mattosowski.
You can check out more of his music and audio
projects at Mattosowski dot com, M A
Speaker 4 (44:12):
T T O S O W S K I dot com.
Speaker 1 (44:16):
You can email me at EZ at Better Offline dot com,
or visit Better Offline dot com to find more podcast
links and of course my newsletter. I also really recommend
you go to chat dot wheresyoured dot at to
visit the Discord, and go
Speaker 7 (44:28):
to r slash Better Offline to check out our Reddit.
Thank you so much for listening. Better Offline is a
production of cool Zone Media. For more from cool Zone Media,
visit our website cool Zonemedia dot com, or check us
out on the iHeartRadio app, Apple Podcasts, or wherever you
get your podcasts.