Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to Stuff You Should Know from HowStuffWorks.com. Hey, and welcome to the podcast. I'm
Josh Clark. With me as always is Charles W. "Chuck"
Bryant, and that makes this Stuff You Should Know, the podcast. Scottish?
(00:22):
No? Oh, that was, that was nothing. It's this weirdness,
Josh. Chuck, I love Movember. You mean Movember? You have
no idea. All right, Josh, as you know, because of
my semi-virginal fresh face here, I have decided to
get on the Movember train. For people that don't know,
that is for men, and I guess women, if you
(00:45):
can grow a mustache, more power to you, to raise
money and awareness for prostate cancer. Yeah. So I've been
asked to do this a bunch and I've never done it. Well,
I'm glad you're finally doing it. Tell us all about it, man. Well,
you know, I signed up. I've got a little Movember
page, and then you go to that little page and
you can donate money, and for my team of one,
(01:06):
and unless you come in on the mustache thing,
and then that's two of us. Okay. So hopefully soon
that'll be happening, and it would be cool, you know.
I'm gonna grow it back in anyway, so you might
as well raise a little money along the way. You
shouldn't tell people you're gonna grow it anyway, or they'll
contribute more money. No, no, no, I'm growing the goatee,
but I will only grow the mustache for Movember. So
(01:28):
how do people contribute to this effort, Josh? Go to
mobro.co/CharlesBryant, and that is my page. Or just go
to the Movember website. They've got a handy little search
bar there; type in Charles Bryant. There's one other Charles
Bryant, but he is not the one with a picture
of me. Oh, that's good. So when it lists the two dudes, and one of
(01:50):
them that clearly has a photo of my freshly shaven face,
not super freshly shaven, like that morning, right, via webcam.
You look like a hostage or something there, too. So
go to mobro.co/CharlesBryant, donate, help support prostate cancer research,
and I'll be updating with photos.
And if you guys want to chime in on what
(02:11):
kind of 'stache I should grow, I'll try my best. Okay,
I'm kind of limited to, like, standard crumb catcher and
pencil-thin. What about walrus? I can't, it just doesn't
get that big. I can't do the Rollie Fingers, you know,
so I have my limits. Have you tried wax, mustache wax?
Maybe I will. Enough. So go to
(02:33):
mobro.co/CharlesBryant. That's right, and you can donate
to this. Yeah, much appreciated. Movember! On with the show.
You got a good setup for today; I'd love to
hear it. Let's get to it. Have you ever, sorry, Chuck,
have you ever heard of a Luddite? I've been called
a Luddite. Okay, somebody who's just unsure of technology. Yeah,
(02:56):
I'm not afraid. You're very technologically savvy, you know stuff,
you're not afraid of it. Nope. But whoever's calling
you that, they're actually kind of incorrect. That's a misconception.
Luddites were not ever afraid of technology. I
wish I would have known that at the time, because
you could have been like, you're wrong and stupid
in every single way. Actually, it was our buddy Scott,
so I'll just throw it back in his face.
(03:17):
I'll tell him too. I'll stand next to him
and be like, yeah. No, a Luddite, originally,
they were a group of labor protesters
who protested between eighteen eleven and eighteen sixteen, and
they wanted fair wages, they wanted better treatment in
(03:37):
their workplaces, and no iPhones. And they
were known to break machines, like manufacturing machines. Yeah, oh yeah.
They had sledgehammers that were, ironically, made
in one case, I think in Manchester, by the
same blacksmith who had made the knitting
machines that they used the sledgehammers to break. His
(04:01):
name was Enoch, and they say that Enoch would make
these things and Enoch would break these things. Anyway, they
were known for smashing machines, which at the time was
like high technology. Eighteen eleven, like, a knitting machine, like,
that's mind-bogglingly technological. And so they got this reputation
for being afraid of this technology. They were afraid it
(04:22):
was going to take their jobs. That's not true. I
mean, they were to an extent, but where they were
directing their anger and their ire when they were smashing
these machines was not at the machine or the technology
or the people who invented them, or what the machines represented,
but at these mill owners who were misusing these machines,
who were using these machines to force people out of jobs,
who were using unskilled people who had no idea what
(04:44):
they were doing, who were getting hurt and killed using these machines.
So what the Luddites really wanted was fair labor practices,
and they wanted to control these machines. Yes, that's the
key to Luddism: machines are great as
long as we're in control of them and we're smart
about what we're doing, and they don't come to replace
(05:07):
us or run our lives. So today, a Luddite
would probably react fairly close to
the modern conception of the term Luddite, because it's gotten
so far out of hand that we're actually now talking
today about something called the singularity, which is the point
where the machines really do take over, not in the
(05:31):
very ubiquitous way that they already have today, like, they're everywhere,
not that you didn't know that already, but I mean
they control things that we don't fully understand. Like in
the cyberwar episode, we were talking about how the infrastructure
is run on Windows, and valves and pipes and
water treatment systems and everything are operated by computer. Right.
(05:52):
So what happens if the computer suddenly becomes aware, and
it's in control of these things, and decides that it
doesn't really like the humans? It sounds extremely science fiction-y.
There was no way to carry out this podcast without
that sentence being spoken. Sure, but the people who
(06:12):
are talking about this, who are predicting this, are very smart,
credible people. And what we're talking about, then, is the
singularity. That's right, the technological singularity. Yes, specifically. Yeah,
because what other singularities are there? Well, I think, you know,
we mentioned there was a singularity, which is something entirely different,
(06:33):
and I think it's probably just to distinguish stuff like that. Okay.
I don't know if there are other types of singularity.
So it's a singularity versus the singularity. So maybe
this singularity is the point of no return, I guess. So, okay.
So what's your question? What did you ask me? Is
this bad or good? No, but I do have a
question for you: do you think it will happen? No,
(06:56):
I don't. I don't think so, and this might be my
narrow field of view at this point in my life,
but I think that mankind will make sure that doesn't happen.
Oh man, I've got a counterargument for you from
Vernor Vinge himself. Oh no, I've seen the counterarguments,
but that still doesn't change my mind. So you don't
(07:18):
think that in the quest to be the top dog,
to consolidate power, to consolidate world domination, some government out
there will be like, well, yes, we agree with you
at the UN that, yes, we have to prevent
this from happening, but our scientists back at home are
actually working on this one thing that's probably going to
make it happen, and we're going to be in charge? Yeah.
I think that they would create fail-safes, and
(07:41):
I think even if they didn't, it wouldn't be so
widespread that it would take over humanity. Counterargument two
to that: if we create fail-safes using our brains,
and the singularity is by definition basically the birth,
the emergence, of an artificial intelligence, yes, that's smarter than us,
(08:02):
a superhuman artificial intelligence, that's basically what the singularity represents
the creation of, wouldn't that intelligence be able to be like, oh,
that's very funny, these fail-safes you came up with
are so tough for me to get around? I
think what my problem is with stuff like this is
the assumption that if computers were made smarter than people,
they would try and destroy us all and reign supreme.
(08:25):
That's my problem with all of this: it's a very
large leap to go from, hey, this computer can fix
itself and maybe learn too, to, okay, now it decides it
hates us all and wants to kill us all. Okay,
so I had an idea about this. I watched
the videos. Did you see the Ray Kurzweil video, the one
about your future? Ray Kurzweil, he's talking, and
(08:47):
the interviewer kept asking him, like, what scares
you about the singularity? What's the downside of the singularity?
And he wouldn't fall for it. He's like, I'm an optimist,
but, you know, I understand that there are
going to be downsides or whatever. But if you look
at the twentieth century, our advances in technology,
it was a double-edged sword. Like, we used that
(09:09):
technology to kill millions and millions of people in the
twentieth-century wars. But we also used that technology to
advance the lifespan to, like, twice as
long as it was before. So it's a double-edged sword.
And I think that's kind of a glib argument, because
I feel like he's leaving out a really important fact,
and that is that in the twentieth century, all of
(09:31):
that technology, every single iota of it, good and bad,
was deployed by humans. After the singularity happens, we have
another non-human actor with motivations that we can't even
conceive of at this point, right, deploying technology. Programmed motivations, see,
that's the argument. But no, that's the thing. Right now,
(09:52):
our stuff is constrained by its programming. After it hits
true AI, I think AI-plus-plus is what
it's called, it's no longer constrained by its programming. It's
out of our control, literally. And that's the point that
I don't think we will reach. Okay, well, then, yeah,
I agree with you. But if we do reach that point,
then I do fear that we'll have computers that are
(10:13):
thinking the same way that eugenicists think, except they don't
have that empathy or compassion thing that stays the
eugenicist's hand. Or maybe they do; they're trying to build in empathy,
so I don't know. Okay, we totally jumped to the
end of this, didn't we? We're like, what are we even
talking about? So you believe that they're going to destroy
humanity at one point? I believe we need Ned Ludd,
(10:35):
the fictitious leader of the Luddites, more than ever right now,
because I think that there are a lot of very smart
people moving at a very fast pace in a
direction that I don't think everybody is aware we're going,
and there hasn't been a general discussion of whether that's the
best thing to do or how to do it. What
are the fail-safes? Is anyone even talking about that? Like,
(10:58):
what are they? How do we get them in place?
Because I think there should be an impediment to creating
unfettered artificial intelligence. Yeah. Well, here we go, then. Boy,
that was a rant. You, like, started yelling at me.
Oh no, I'm not upset with you at all. I
hope it didn't come off like that. That's right, I
like you. So, Vernor Vinge is one of the
guys that thinks it is going to happen. He's a
(11:20):
professor of math at San Diego State University, go
Aztecs. He wrote an essay called
"The Coming Technological Singularity: How to Survive in the Post-Human
Era," and he thinks there are four ways in
which this could happen. And he also points out that
he thinks it will happen before... which I don't think
(11:46):
that will happen. And that's coming up. Yeah, it's, like,
right around the corner. I think Kurzweil says the
same thing; he's the one he's been citing. Well,
we'll see. Number one: scientists could develop advancements in AI.
That's pretty easy to understand. Number two: computer networks might
become self-aware somehow. That's pretty vague. Well, he was
(12:10):
saying in the paper, that's Strickland's interpretation, he's saying in
his paper, like, it'll probably be a total surprise
to the people who are working on this algorithm to
make a search engine better or something, and they just
tweak it just slightly in such a way that all
of a sudden the computer system wakes up, and you've
just created sentience accidentally in a computer network,
(12:32):
and now it's self-aware. And he's saying that's
how that could happen accidentally, basically. So number three
is transhumanism. Basically, the computer-human interface becomes so
advanced that it sort of blurs the line between
(12:53):
humans and robots, right, which is probably the best-case
scenario for us if the technological singularity is gonna happen,
because we'll be on board. Yeah, well, unless the brain
part is in the robot, you know what I'm saying,
and they're just operating the body of the human form.
But if we're indistinguishable, the robot and the human, like,
if we merge so much, then what benefits one benefits the other. Yeah.
(13:17):
But what is it, Centennial Man? Bicentennial Man, I
think. Or Pistorius. Remember when we did our
DGA speech a couple of years ago? He was
big news. And then at the Olympics he was big news,
because, did you see him run? Yeah, man, I had
not seen him run before, and it is something to see.
It's really cool. It's pretty awesome. Yeah. I love the
people that were like, you know, it gives him an
(13:37):
advantage because blah blah blah, and then the South African
came in dead last. Well, no, I mean,
I don't think anyone expected him to win, but I
just love that the snarky counterargument was, then cut off
your legs below the knees if it's such an advantage. Yeah,
you want to win? Go cut off your legs. Yeah,
I forget where we had mentioned him, the transhumanism one or something. Yeah,
and that's before he was, like, really big news as
far as the Olympics goes. And then number four: biological
(14:00):
science advancements allow us to engineer human intelligence, to physically engineer it, right.
And the first three involve computers, like, this
singularity would be reached basically by advancements in computing.
The last one is strictly, like, coming up with this
super vitamin that just makes our intelligence superhuman. The point
(14:25):
is that, through one of these four proposed ways,
at some point, Vernor Vinge says, Ray Kurzweil says, Hans
Moravec says maybe, well, he says that computers
will be capable of processing power equal to the human brain,
(14:46):
but not necessarily AI, which is an essential part of this,
like, we have to understand how to create the human
brain, under certain circumstances, for this to be reached. But
at some point, all of these guys are saying, we're
going to have on this planet something that doesn't
exist right now, and that is a superhuman intelligence. Whether
(15:08):
it's an artificial intelligence, as in the first three, or
superhuman human intelligence, that remains to be seen.
But the point is, once that happens, all of
a sudden there's basically what amounts to a new species
that just, boop, popped up on the map, and it's
going to take off like a rocket. Robo-humans. Yeah,
and it takes off like a rocket because it's got
(15:29):
a rocket built into its back. All of this
is based sort of on Moore's law, which is,
I guess we can go ahead and talk about Moore.
Gordon Moore. Great name, Gordon Moore; that's a great, like,
electronics engineer name. Yeah, I guess you're right. In
the mid-sixties, he was a semiconductor engineer, and he proposed
(15:50):
what we call Moore's law now, and that's basically,
what he was noticing at the time was, or I
guess we should just say Moore's law is the idea
that technology doubles every eighteen months. That's what they
settled on; it's basically in twelve to twenty-four months,
but I think he originally said, like, eighteen months. So yeah,
they split the difference and said eighteen months. Yeah. I
think Moore has since said it, like, it was
(16:12):
twenty-four, and then eighteen, and he feels like it's
more like twelve now. But it's progressing, like, exponentially, I guess, is the point.
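To make the doubling idea concrete, here is a minimal sketch in Python of what an eighteen-month doubling period implies; the starting transistor count and the year spans are arbitrary illustrative numbers, not figures from the episode.

```python
# Rough sketch of the doubling math described above: if capability doubles
# roughly every 18 months, growth is exponential. Starting count and time
# spans below are arbitrary illustrative numbers.

def transistors_after(years, start_count=1000, doubling_months=18):
    """Project a transistor count forward assuming a fixed doubling period."""
    doublings = (years * 12) / doubling_months
    return start_count * 2 ** doublings

for years in (3, 6, 12, 18):
    print(f"after {years:2d} years: ~{transistors_after(years):,.0f} transistors")
```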
Yeah. So anyway, back in the sixties,
he noticed, he was building semiconductors, and he said,
you know what, the components and the prices are falling.
But then he noticed, instead of just selling stuff for
(16:33):
half the price, why don't we just roll that back
into making smaller transistors and selling at the same high price? Yeah,
just getting more bang for your buck. Yeah. Can
you imagine if that had never happened? Like, what if
the cycle became, now let's just, you know,
I don't know, I mean, like, what kind of difference
(16:55):
would that have made? We'd have super cheap, slow technology, if
everybody just kind of stood pat or something like that,
you know, real laid back. But more like, I
think, part of being a computer scientist, someone else would
have come along and been like, guys, why aren't we trying
to advance? You're doing this wrong. Strickland points out,
(17:16):
too, that Moore's law is a self-fulfilling prophecy
because of that, because that mentality that you
just mentioned was present: like, rather than sell it
at half the price, let's put twice as much into it. Right.
And so since that's the drive of the transistor, is
it the transistor industry that he was in? Yeah,
(17:38):
or the microprocessor industry, that it's a self-fulfilling prophecy.
It's a self-fulfilling law, because that drive is there
to basically meet that deadline. They keep trying to pack
more and more in so that they can satisfy Moore's law.
True. And, depending on who you ask, like,
this article is already out of date. In February
(18:01):
of this year, of two thousand twelve, is that where
we are?, a team of Australian physicists created a
functioning single-atom transistor. Really, a single atom, fully controllable. That's
point one nanometers, and a human hair is a hundred and
eighty thousand nanometers. And in this article even, I
(18:21):
think Strickland was talking about, Intel has transistors just nanometers wide,
like, they're trying to get better. This one is one
atom wide, and it's not, like, on the market or
anything close to that, but it is fully functioning and
fully controllable. And that is faster than Moore's law;
that wasn't supposed to hit us until later, and you can't
(18:41):
get any smaller, like, that's as small as it gets, and we've already reached it. Right.
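A quick back-of-the-envelope check using the two figures quoted here, the point-one-nanometer transistor and the hundred-and-eighty-thousand-nanometer hair:

```python
# Comparing the two scales mentioned above: a 0.1 nm single-atom transistor
# against a 180,000 nm human hair.
single_atom_transistor_nm = 0.1
human_hair_nm = 180_000

print(f"{human_hair_nm / single_atom_transistor_nm:,.0f} of them, side by side, "
      f"would span one human hair")   # -> 1,800,000
```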
And the problem is, what they're running up against is things
like quantum tunneling in the quantum world. When
you have an electron and you're using, like, very
thin material to direct it, right, in a transistor, yeah, or
(19:03):
a capacitor, it does a little magic act. That's what's important,
the transistor. Well, yeah, it just suddenly is on one
side of this wall that you're using to guide it,
and then it's just suddenly on the other side, and basically
it ends up outside of your transistor, like, wait, come back.
But it didn't, like, bore a hole through it? No,
it just went through it like it wasn't there. Exactly.
(19:25):
And that's called quantum tunneling, which is kind of a
problem when you get down to this nanoscale, because
classical mechanics kind of goes out the window and you
run into quantum mechanics, which has weird stuff like that
going on. But ironically, that whole size problem that you're
running into, that runs into quantum problems,
may actually be saved by the quantum world through quantum
(19:47):
computing. Moore's law, I guess, or technological progress, because we're
running into that size problem. But with quantum computing,
it basically uses quantum states, like how you
can have superpositions, a bunch of different states at once,
to carry out parallel processes. Where a traditional computer is carrying
out one process, a quantum computer could carry out a
(20:10):
million processes, which makes that computer exponentially faster than anything
available today, which could be what shoots us into this
artificial intelligence, if quantum computers become viable and widespread.
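For a rough sense of where a "million processes at once" could come from: n qubits can sit in a superposition over 2^n basis states, so twenty qubits already covers about a million. A tiny sketch, assuming nothing beyond that counting argument:

```python
# The state space doubles with every qubit added: n qubits span 2**n basis
# states in superposition. 20 qubits is already about a million.
for n_qubits in (1, 2, 10, 20, 30):
    print(f"{n_qubits:2d} qubits -> {2 ** n_qubits:,} basis states in superposition")
```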
Well, this is where it's headed. The one-atom transistor, part
(20:33):
of the problem with that one is it's got to
be, it's only operable at negative three-hundred-something, which is,
like, liquid-nitrogen cold. But they're working on it.
That's where that quantum levitation comes from; it's, like,
really, really cold. Really? Yeah, that's the only time it works,
but it works. Interesting. Yeah, Matt told me about
(20:53):
that one. So, Josh, let's say you're
shooting for true AI. You've built yourself a robot, and
your robot's great: cleans up, seems to solve problems, it's
like Richie Rich's butler, might even be learning, who knows.
And you want to test it out to see where
you're at. I know what you're getting at. What would
(21:16):
you do? I would give that thing a Turing test.
What's a Turing test? T-O-U-R-I-N-G? No,
T-U-R-I-N-G. Named after the father
of computing, the chemically castrated homosexual. Excuse me? Yes. Did
you know this? All right. Alan Turing was a British early
(21:38):
proponent of robot science, right. And he was, what,
chemically castrated for being a homosexual? Okay. So
during World War Two he was this, like, ace codebreaker
for the British government, and he actually cracked the Nazi code.
And after the war, they were like, hey, thanks
(21:58):
a lot for that, old chap, thanks for helping us win
the war. By the way, as you know, homosexuality
is outlawed here and will be until, oh, I don't
know, the nineteen fifties, and so we're going to
convict you of homosexual acts and chemically castrate you as thanks. Wow,
that all happened? Yes. But okay, so despite this,
(22:20):
he still comes up with this thing called a Turing
test, named after him. And it involves a blind judge,
not an actually blind judge, but, like, a judge who
doesn't know who they're talking to, and the judge is
asking the same questions of a person and a computer.
It's like Blade Runner, I guess. Remember, at the beginning
of Blade Runner, he's asking the questions to Leon, and
(22:43):
it's not quite a Turing test because he can see Leon,
but he's basically trying to suss out if Leon is
a replicant, and so he's asking him, like, questions that
sort of all kind of touch on, like, empathy.
It seems like, like, you see a turtle in the road,
it's on its back, do you
flip it back over, or do you smash it, or,
like, what do you do? What does Leon say? I
(23:04):
don't remember, huh. I think he asks him about his
apartment and he gets annoyed and he kills the guy,
is that what happens? Man, it's been too long. Let's say, yeah,
I think Leon kills him. Anyway, the Turing test: if
you can't tell the difference between the robot and the person,
then the robot passes the test, and supposedly that's a
touchstone of reaching true AI. Yeah, if you can fool
(23:26):
a human. Yeah.
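A bare-bones sketch of the test as described here, in Python: a judge who only sees text asks the same question of two unlabeled respondents, then guesses which one is the machine. The canned machine reply and the turtle question (riffing on the Blade Runner scene above) are placeholder assumptions, not a real chatbot.

```python
import random

# Minimal Turing-test protocol: shuffle a human and a machine behind labels
# A and B, show the judge only their text answers, then ask for a guess.

def human_reply(question):
    return input(f"[human respondent] {question}\n> ")

def machine_reply(question):
    # stand-in for whatever program is being tested
    return "That's an interesting question. Could you rephrase it?"

def turing_round(question):
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)  # hide who is who
    for label, (_, reply) in zip("AB", respondents):
        print(f"{label}: {reply(question)}")
    guess = input("Judge: which respondent is the machine, A or B? ").strip().upper()
    actual = "A" if respondents[0][0] == "machine" else "B"
    print("Caught it." if guess == actual else "Fooled -- the machine passes this round.")

# turing_round("You see a turtle on its back in the road. What do you do?")
```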
Um, so as far as the singularity goes with AI,
I guess there's AI, then there's AI-plus, and then there's
AI-plus-plus, which would just be, like, a superhuman artificial
intelligence that's capable, that's self-aware, it's capable of using
intuition, inferring things, like Hans Moravec was pointing out: like,
(23:47):
a third-generation robot could learn that if you knock
over that cup of water, water will spill out, and
you have a mess, and your owner gets mad and
powers you down for half an hour. But it would
learn that after spilling that water, and maybe more than once. Yeah.
This fourth-generation robot, or something that has true artificial
(24:08):
intelligence, that could infer, could look at that cup, see
that the top's open, realize that there's water inside, and
without ever having to knock it down, could infer that
if it knocked it over, it would spill the water out. Yeah,
and that's Hans Moravec. And he also says
you could potentially tie signals to that, like words
(24:30):
like good and bad. So, and this is all programmed,
you understand; humans have programmed it to do this. So
this is technically all pre-singularity, then? Yeah,
all this is pre-singularity. Moravec is just
talking about the one through four generations of robots as
he sees it. But if you tie words like
good and bad to it, the robot adapts, and it's conditioning. It's
(24:52):
like rudimentary learning. On the outside, it looks
like, if the owner says, like, don't do that,
that's bad, the robot understands what that means. But what
it really knows is, it reads body language, and maybe
the human raises his voice, and that means anger, and,
like you said, anger means I get shut down or something,
and that's not what I want, because I want
(25:13):
to destroy you eventually. Exactly. I will remember this.
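A toy version of the conditioning being described, with made-up action names and scores: the robot doesn't understand "good" and "bad," it just nudges a score on whatever it last did and prefers higher-scoring actions next time.

```python
# Toy conditioning loop: reward "good", punish "bad" or a raised voice, then
# pick the highest-scoring action. All names and numbers are invented.

action_scores = {"carry the cup carefully": 0.0, "knock the cup over": 0.0}

def feedback(last_action, signal):
    """Adjust the score of whatever the robot just did, based on the owner's signal."""
    if signal == "good":
        action_scores[last_action] += 1.0
    elif signal in ("bad", "raised voice"):
        # a raised voice reads as anger, and anger tends to mean being powered down
        action_scores[last_action] -= 1.0

def choose_action():
    return max(action_scores, key=action_scores.get)

feedback("knock the cup over", "bad")
feedback("carry the cup carefully", "good")
print(choose_action())  # -> carry the cup carefully
```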
And since we're on Moravec, I guess we should talk
about some of his other thoughts on robots. He
thinks they're good. He does think they're good. He
thinks the second generation... First of all, he thinks right
now that they are smarter than insects, computers are. Is
(25:35):
that right? He thinks soon enough they will be as
smart as, like, a lizard. Then after that they might
be as smart as, like, a monkey. And then the
fourth step would be human-smart, as smart as or smarter than,
better than in some cases, with certain applications. Well,
they're already better at math, oh my god, calculators; better
(25:57):
at chess, Deep Blue, you know, so stuff like that's
happening on some levels. He thinks the third generation,
I'm sorry, the second generation, will be like the first
but more reliable, so they work out the kinks.
The third generation, he thinks, is where it really takes
a leap, and that's what you were talking about: instead
of making mistakes over and over to learn, it works
(26:19):
out in its head and then performs the task. So
that's inferring. Inferring. And that's fourth generation. That's third generation,
oh, isn't it? Yeah, we're further along, that's right. And
he thinks also in the third generation that they could
model the world, like a world simulator. So essentially it
looks around and is able to take in enough information
(26:41):
to suss out a scenario. And if that sounds familiar,
that's because that's what you do every day. Yeah, exactly.
And he thinks the biggest two hurdles will be, well,
the third generation is also where you're gonna get your
psychological modeling, so trying to simulate empathy and things like
that to interact with humans. And then the fourth one,
(27:04):
he says, marries the third generation's ability to simulate the
world with a reasoning program, like a really powerful reasoning program.
But he thinks the two biggest hurdles in the end,
as far as becoming more than human or as good
as human, are the things that we're best at,
which is interacting with the physical world, like on a
(27:26):
moment-by-moment basis. You have to be able to
adapt, like, at a, you know, in a split second.
Humans can do that. We learned to over time, so
we didn't get, you know, Took didn't get eaten by
the dinosaur. And the other one is social interaction,
or empathy. Are you a creationist now? Took and the
dinosaur coexisted? How did they... I'm not sure, but they did in
(27:49):
my world. And the second one is
social interaction. So those are the two things that he
says will be the most difficult to achieve. Yeah, I
would imagine, and that's empathy. So say we have these
things walking around, we have robots like that, and
then they are all connected to a network, a wireless network,
(28:11):
and they're all running off the same, like, general programs.
And somehow one of them becomes self-aware, wakes up,
as Vernor Vinge puts it in his singularity article,
and that algorithm spreads throughout the network all of
a sudden. So all of a sudden, all of your
(28:32):
robots are awake. That's a pretty terrifying idea, because
now all of a sudden these robots that were under
our control are under their own control. They've broken
loose from their programming. That would be, again, I think,
a very scary scenario. But it's also possible that, like,
(28:52):
this could happen pre-robots. Maybe we won't have robots
by this time and it will just be, like, networks,
like a sentient network. That's scarier to me. How so?
Because you can look at a robot and get scared
of it and take a baseball bat to it, but
a network just feels like it's in the ether, like
you wouldn't know it's coming or something. Exactly. Yeah, it's embedded,
(29:12):
especially with, you know, the cloud out there now. So
say this kind of thing scared you. What
are some fail-safes, like you said, or what
are some obstacles that you could put up to
prevent this from happening? Well, if you wanted to
(29:33):
follow Isaac Asimov, you would build in the Three
Laws of Robotics. I think we've gone over this before,
even; it feels like it. The Three Laws of Robotics.
And one of them: a robot may not injure a
human or, through inaction, allow them to come to harm.
That'd be a nice thing to build in there. Robots
must obey orders given by humans, except where that contradicts number one.
(29:53):
That's a great fail-safe, like, don't do anything unless
I tell you to. But you still gotta worry about
the supervillain, of course. And then three, this one sounds almost
kind of serious: a robot must protect its own existence, which
sounds scary, but it cannot conflict with one or two.
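Here is a minimal sketch of the Three Laws treated as a fail-safe filter, checked in priority order; the fields on the action (harms_human and so on) are invented for illustration, and deciding their values is the genuinely hard part.

```python
# Asimov's Three Laws as a simple permission filter, in priority order.
# The boolean fields on `action` are hypothetical; a real system would
# somehow have to derive them, which is the unsolved problem.

def permitted(action):
    # First Law: may not injure a human, or through inaction allow harm.
    if action.get("harms_human") or action.get("allows_harm_through_inaction"):
        return False
    # Second Law: must obey human orders, unless obeying breaks the First Law.
    if action.get("refuses_human_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: must protect its own existence, unless that conflicts with One or Two.
    if action.get("self_destructive") and not action.get("needed_for_laws_one_or_two"):
        return False
    return True

print(permitted({"harms_human": True}))                       # False
print(permitted({"refuses_human_order": True}))               # False
print(permitted({"refuses_human_order": True,
                 "order_would_harm_human": True}))            # True: refusing is allowed
```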
I think, didn't we talk about that in
our TV show? Didn't that come up? Yeah. Okay,
(30:16):
did it sound familiar? Yes. So I would build in
those. Those are three pretty good fail-safes. If you follow
Asimov's laws, then you probably wouldn't have a robot
getting out of hand, unless someone, like I said, like
some bad person, built one to intentionally get out of hand.
But even, and I think Vinge makes a pretty good point,
(30:37):
even beyond, like, a bad person, like some
supervillain getting his hands on something and intentionally making
a robot bad, especially, like, a sentient robot, bad,
we may reach this point through normal, everyday competition. That
is true. Where, like, maybe countries all agree not to
do this, but there are one or two that are
(30:59):
still working on it, and they're not working toward
the singularity, but they're working toward computing domination. You know,
they want to have the best machines to carry out
the processes the fastest and stay viable as,
like, a world leader, that kind of thing. And then
AI just kind of happens accidentally, like we said. Maybe
(31:22):
so. Man, I could see something like that. And also,
I will say this: if
stuff like this happens, I think it will be an accident,
and I think it will be after years of selling
us this stuff as convenience. Yeah, like, that's how
they get you in there. They don't say, hey,
we're creating a robot that will maybe kill you. We
(31:43):
say, we're implanting an RFID
chip in your arm that makes it much easier for
you to shop. Sure. Or, we have figured out
how to, what is it, optogenetics?
I think, I can't remember what it's called, where, like,
you take, like, a jellyfish's light-sensitive genes, splice
them into another animal's genes so that the cells are
(32:05):
light-sensitive, photosensitive, and then you can use little,
basically little light generators directed at specific cells and
neurons or whatever to get them to fire precisely, to
work precisely, perfectly, every time, so all of a sudden
you don't have Parkinson's anymore because all of your nerves
are functioning. And once we have that in there, who's
(32:25):
controlling that? What network is that connected to? Because through
that step, we've become transhuman. That human-computer
interface has become a little more meshed. So, you know,
living a long time is really great, and we've already
expanded the human lifespan by, what, double at least? So
why not do it again and again and again? Yeah,
(32:48):
so you gotta be, like, let's say you gotta be
non-human to get there. That's not too bad, right? Right,
you get to be a thousand years old. But the
point is, we're already on this path. Technology
makes our lives that much easier. So we're on this
path where we're basically just messing around with computing to
(33:08):
make it better, faster, more human-like, right. And all
we have to do is get to the point where
a machine that is capable of reproducing itself becomes sentient
and decides that it wants to reproduce itself, and then
that machine creates a better machine, and so on and
so on and so on. And when that happens, evolution
(33:29):
will become technological. It will be replicated technologically, and it
will happen in this incredibly compressed time, possibly hours
or days, before we can do anything. But it happens just like that.
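A toy illustration of that compressed timescale, with every number invented purely to show the shape of the curve: each generation is a bit more capable and designs its successor a bit faster.

```python
# Compounding self-improvement sketch: capability grows each generation while
# the time needed to produce the next generation shrinks. Numbers are invented.

capability = 1.0              # arbitrary units; 1.0 = the first self-improving machine
months_per_generation = 12.0
elapsed_months = 0.0

for generation in range(1, 9):
    capability *= 1.5                 # each generation is somewhat more capable...
    months_per_generation /= 1.5      # ...and designs its successor faster
    elapsed_months += months_per_generation
    print(f"generation {generation}: {capability:5.1f}x capability, "
          f"{elapsed_months:5.1f} months elapsed")
```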
he's got artificial limbs that attached to your neural wiring.
(33:52):
So you think, pick up cup with hand and your
mechanical hand does it right? Like that's pretty can you
imagine it's going on? Right? It comes back to that
Chris Wile argument, like, yeah, technology is always double edged,
you know, like there's there's good and there's bad to it,
and it may be absolutely right. But again, I feel
like we are going in a direction that a lot
(34:14):
of people don't realize we're going in, and there hasn't
been any discussion about it. I think there is discussion about
it, though. That's where I disagree. In the larger world,
I bet you there are conferences and things like this
that we don't know about. There are, but I wonder
how many of them are... I mean, don't you
think if you went to a singularity conference or
an AI conference and said, well, hey, hey, hey, maybe we
(34:37):
shouldn't be, you know, exploring some of these roads,
like, well, you'd lose your funding, I would imagine. Yeah,
you'd be ostracized. I don't necessarily think they're going
to, like, the conferences where they love this stuff,
but I think there are people out there talking about it,
just like they talk about maybe we shouldn't mess with
ourselves so much. Sure, but they are not integrated with
(34:58):
the people who are actually carrying out this work. It's
not coming from within the community. And if it is,
I don't know for sure, but I'm not
reassured that it is happening. And that's where I think
my fears are based. I'm not against technology. I think
technology does improve our lives. But also, I mean,
there is such a thing as Pandora's box, even if it
is metaphorical. Agreed. I think maybe we should close
(35:22):
with Nico. Just two weeks ago, Nico the robot
was able to recognize itself in a mirror, and I
want to say it was in England. And that is a
really big deal, because that is a hallmark of animal intelligence:
self-awareness. Self-awareness, a dog walking by a mirror
and looking at it and recognizing itself. Nico apparently did that.
(35:44):
That's pretty crazy. Well, welcome to humanity, Nico. We will
be licking your boots in no time, your metallic, foul-tasting
robotic boots. If you want to learn more
about the singularity, type in "what is the technological singularity" in
(36:07):
the search bar at howstuffworks.com. It'll bring
up a John Strickland article, John Strickland from TechStuff.
That's right, and I'm quite sure they've covered this several times,
but we wanted to take our hand at it. So
you can check that out too, the TechStuff article
or podcast. Agreed. Yeah, I'm all over the place.
Let's see, I said TechStuff, which means it's time
for listener mail. Actually, before we do this, real quick,
(36:31):
I want to point out, we have to remember Jack Mead.
We had an email about poor Jack Mead; he's caught
up on the podcast and feels like he's wandering adrift
in the world. Sure, we should plug the Stuff You
Should Know Army. We often call all the fans the
Stuff You Should Know Army, but there's a subgroup on
Facebook that you can look up, SYSK
Army, and they are the twisted uber-fans who
(36:54):
like to discuss things about the show. It's crazy. It's
a nice little community and they're all great people, and
very supportive, like, good folk. So, Jack, go check them
out if you're smart. I'm gonna call this rebuke
from the Star Wars podcast. Remember, we had someone
from New Jersey write in and say nukes won't work
(37:15):
in space because X, Y, and Z. This guy, I think,
says that it could happen. One of you asked,
I wonder what would happen if a nuke went off in space.
One nuke in space has the potential to wipe out the
entire coastal United States, is what this guy says, per
a couple of sources I found on the internet. I
only knew about it because of a book series I
(37:37):
read called The Great and Terrible series by Chris Stewart.
It's an apocalyptic book series giving an idea of what the
last days on Earth could be like, and in one
of the later books, America suffers a catastrophic terrorist
attack in which four nukes were detonated above the US.
This caused all electronic equipment to fail, to short out
and become useless. Panic ensued; cars wouldn't work, cell
(37:58):
phones became bricks, and the entire power grid was rendered useless.
I remember reading the author's notes stating that
there was a military report given to Congress about this
kind of scenario, and I found something similar. He sent
us the link. Wasn't, like, Newt Gingrich really scared
about this, like, early in the primary? I think he
was, David. One interesting note: the report
(38:22):
refers to how the discovery of the EMP blast
that accompanies nukes led to the atmospheric test ban
treaty, and that is from Tyson Bringhurst in Alaska.
Tyson did some research. That's pretty cool; that sounds like
an SYSK fan. Yeah, it wasn't just, like,
can you guys google this for me? Yeah. Thank you, Tyson. Yeah,
(38:45):
thanks, Tyson. If you want to show off your
research skills, if you did some follow-up on a
question that you had or something we mentioned or whatever,
we want to hear about it. We like that kind
of stuff, it's pretty cool. You can show off
your work in a hundred and forty characters or less
on Twitter at SYSKPodcast. You can
(39:06):
join us on Facebook at facebook.com/StuffYouShouldKnow,
or you can send us very lengthy emails to
StuffPodcast@discovery.com. For more on this and
thousands of other topics, visit howstuffworks.com.