Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Get in touch with technology with tech Stuff from how
stuff works dot com. Hey there, welcome to tech Stuff.
I'm your host, Jonathan Strickland. I'm an executive producer at
how Stuff Works and I love all things tech. And yeah,
I am at the IBM Think two thousand eighteen conference,
(00:25):
which is why this sounds a little different than normal.
I am in a hotel room over at the Excalibur Casino,
and I wanted to talk a little bit about what
I saw and some of the talks that I went to,
and I learned a lot of interesting things. Now, one
thing to say is that the THINK conference it's all
(00:46):
about IBM and IBM's partners and customers. And unlike
a lot of companies that we deal with on a
day to day basis, IBM doesn't really have consumer facing businesses.
In other words, it's not like you go to the
store and you go buy IBM stuff. IBM mostly makes
things for other companies and as such, we don't necessarily
(01:11):
have to uh, we don't necessarily encounter it directly. We
encounter IBM's products because they are inside other things that
we are using. So uh, it's interesting to go to
these events and to hear these talks, because a lot
of it is stuff that is very much relevant for
business leaders or for IT professionals, or for
(01:36):
infrastructure engineers that kind of thing, but less so for
the general public unless you step back a little bit.
Even so, there were some really interesting talks about
where the future is headed as far as
very big, broad technologies, and I thought that that would
(01:56):
be the best way to kind of tackle this, to
talk about these sort of trends that have been identified
and these predictions that have been made about these kinds
of tech, because those are the sort of things that
are going to affect us moving forward, us being, you know,
the average person as opposed to people who are running
a tech company. One of the things that they talked
(02:17):
about at the keynote speech, technically the very first
big keynote, which was given by Ginni Rometty. Ginni Rometty
is the CEO of IBM. She got up and spoke
very directly to IBM's partners and customers.
She talked about how there are different laws that we
(02:39):
have created, more like observations really, that have
described the way technology has developed over the years. Now.
The most famous one is one I've talked about numerous
times on this show. That would be Moore's law,
which was proposed by Gordon Moore. Of course he didn't
call it Moore's law. He just made an observation that
(03:01):
was about how every eighteen months or so, a year and
a half to two years, the number of discrete components,
meaning transistors at that time, on a microchip was doubling.
And this observation wasn't about necessarily our technological capabilities, like
(03:22):
the ability to make things that small. It was more
about the fact that economics demanded that this was the case,
that there was enough of a demand to
give an incentive to manufacturing facilities that made these microchips
to try and make ever smaller components to make more
(03:43):
powerful processors. So, in other words, it wasn't so much
that we had these egghead scientists locked in the
laboratory coming up with new ways to make transistors smaller.
It was more like we had money in wheelbarrows outside,
and we could only get that money if we made
(04:03):
smaller transistors. And so it was really an economically driven law.
But the effect that it has on us, it doesn't
really matter. The economic part we can kind of ignore.
What we look at is the fact that our processing
power effectively doubles every eighteen months or so. So every
year and a half to two years, the machines we're
(04:24):
using are twice as powerful as the ones from two
years ago. And that's kind of cool. It
means that we keep getting these incredibly sophisticated machines on
a regular basis, and a lot of the technology sector's
businesses depend upon the continuation of Moore's law.
(04:48):
Later on, I was at a talk with Dr. Michio Kaku, who
is a famous physicist and futurist. He talked a little
bit about the end of the era of Moore's law.
He did not give a specific prediction as to when
it would end, but he did say that based just
purely on physics alone, it will end. What he meant
by that is, Moore's law depends on us shrinking these
(05:12):
components down more and more and more. Once you get
to the point where the quantum world comes into play,
this gets really tricky, and I've talked about this
before. If you were to create logic
gates that are so thin that an electron could potentially
exist on the other side of a logic gate, then
sometimes an electron is going to be on the other
(05:35):
side of the gate, sort of like
it had tunneled through, except it had not physically tunneled
through the wall. It's just that it had the probability
of potentially being on the other side of that wall.
And as long as there's a probability, it means that
sometimes that does happen. Even though that you know, in
the classical world we would say, well, there's a barrier there.
(05:57):
You can't just go through a barrier. But it didn't go through.
It just appeared on the other side because there was
a chance it could. And if there's a chance, then
sometimes that does happen. Well, even beyond that, even if
you say, well, we'll keep figuring out ways to counteract
this quantum effect so that we can keep having microprocessors
(06:18):
that are accurate even with quantum tunneling being an issue,
you eventually get down to the point where you're at
the atomic scale, meaning the components you're creating are made
out of atoms themselves. At this stage, it
would be really difficult to counteract those quantum effects and
you would have to abandon this particular approach to computer
(06:43):
science and computer architecture, or else it would just collapse
in on itself. So Moore's law, while it was incredibly important,
and it continues to be incredibly important right now, ever
since, you know, the transistor was invented, it
only represents the first kind of wave of laws. The
(07:04):
next law that they talked about was one they called
Metcalfe's law. Metcalfe's law is actually a pretty commonly referenced
law, just not necessarily among, you know, regular folks
like me and you. But Metcalfe's law is about the
value of a network. So how do you measure how
value of a network. So how do you measure how
(07:27):
valuable a network is? Like if you look at a
network of devices, and then you look at a different
network of devices, how could you say which one is,
quote unquote, worth more? Metcalfe's law gives you
that measurement. It states that the value of a network,
of a telecommunications network, is proportional to the square of
(07:48):
the number of connected nodes in the system. So, however
many nodes are there, and a node can be any
connected device. It could be a computer, it could be
a smartphone, it could be a tablet, it could be a
game console. Those nodes collectively end up determining the value
(08:09):
of the telecommunications network when you square the number of
those nodes. It's those interconnections that make the network
valuable. This is incredibly important, again, in the world
of business, less so probably for me and you.
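Here's a minimal sketch of that square relationship. The node counts and the proportionality constant are arbitrary; the point is just how fast the value scales as nodes are added.

```python
# A minimal sketch of Metcalfe's law as stated above: the value of a
# telecommunications network is proportional to the square of the number
# of connected nodes. The constant k and the node counts are arbitrary.

def network_value(nodes, k=1.0):
    """Metcalfe's law: value proportional to the number of nodes squared."""
    return k * nodes ** 2

if __name__ == "__main__":
    for n in (10, 100, 1_000):
        print(f"{n:>5} nodes -> relative value {network_value(n):>12,.0f}")
    # Ten times the nodes means roughly a hundred times the value,
    # which is why it's the interconnections, not the devices, that carry the worth.
```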
The third one, the third law that they were proposing,
would be what they were cheekily referring to as Watson's law. Watson,
(08:32):
of course, is not just an artificially intelligent platform for
IBM and for IBM's customers and partners. Watson also refers
to the founder of IBM; Watson was his last
name. But Watson's law would be about how the
(08:53):
amount of data in a system can be leveraged to
get knowledge out of that data. It's sort of
that, as data grows exponentially, our ability to
leverage knowledge from that data grows exponentially. So what the
heck does that mean? Well, think of data as just
points of information that are not necessarily connected to one another.
(09:17):
They're not structured necessarily. This would be as if I
recorded a podcast and I just started to say random
words into the microphone, and I did that for forty
five minutes to an hour. And okay, smart alecks, you
might think that's how I do it now, but
you're just mean, you meanie heads. That's not how I
(09:39):
do it. I actually think this stuff out and I
structure my data so that I create a foundation and
then I build upon it. That's a very easy way
to get knowledge, right. You have the structured format, you
can digest it, you can synthesize it. You can then
use that yourself. But if the data is unstructured,
(10:00):
and the data is about a lot of seemingly unconnected things,
and it's spread across multiple types of files. Let's say
that you've got an enormous folder, and that folder
contains files that are video files, documents, presentations,
(10:20):
spreadsheets, all these different things that, on a
casual glance, don't have any connectivity to them. How can
you make that useful so that you can actually leverage
that data and do stuff with it? And that's kind
of what IBM was focusing on. And that's really where
they were talking about Watson quite a lot. Now,
(10:41):
a lot of people think of Watson as this
supercomputer that played on Jeopardy, which is not accurate.
Watson is not a supercomputer. The machine that ran Watson
was just a machine. It was not the entity itself. Uh.
If you want to get a little metaphysical
(11:02):
with this, you could actually think about a human being
and you ask, well, what is the human Is the
human being the body, the physical form, or is it
the mind? The person, the personality, the emotions, the memories,
the things that inhabit the body and also
that control the body. Is that the person? And you
(11:23):
might argue, well, it's actually the collective. It's the body
and the mind, and I think that's a valid argument.
You could also argue that Watson ultimately is a platform
and the physical machine that runs that platform. I probably
wouldn't argue with you too much there either, except I'd
say that the platform is more important than anything else
in this particular case. And by platform I
(11:45):
really just mean a set of rules, a set of algorithms that
Watson uses in order to process information, to look for meaning,
to look for results. So let's take that Jeopardy example.
In Jeopardy, Watson played against two former champions, one
of whom now records a podcast for How Stuff Works. So
that's kind of awesome. And Watson was playing by looking
(12:08):
at a clue. "Looking," quote unquote; the clues were
being fed to Watson, and then it was going through its
massive amount of data and trying to use that to
figure out what the answer is. And it wasn't just
looking at a list of trivia or facts. It's not
like it's looking at an enormous table and every cell
(12:29):
in that table is filled with a different fact, like
George Washington was the first President of the United States. Instead,
it's looking at a massive library of information and pulling
bits and pieces of information together to formulate an idea
of what the answer is. And if that formulation reaches
(12:50):
a certain threshold of confidence, Watson would then ring in
and present that answer.
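A toy sketch of that ring-in decision might look like this. The candidate answers, confidence scores, and threshold are invented for illustration and are nothing like Watson's actual scoring pipeline; the point is just "answer only when the best candidate clears a confidence bar."

```python
# Toy sketch of the ring-in logic described above: score several candidate
# answers and only buzz in if the best one clears a confidence threshold.
# Candidates, scores, and the threshold are all made up for illustration.

CONFIDENCE_THRESHOLD = 0.7  # hypothetical cutoff for ringing in

def decide_to_answer(candidates, threshold=CONFIDENCE_THRESHOLD):
    """Return (answer, confidence) if the best candidate clears the threshold, else None."""
    best_answer, best_confidence = max(candidates.items(), key=lambda kv: kv[1])
    if best_confidence >= threshold:
        return best_answer, best_confidence
    return None  # stay silent rather than guess

if __name__ == "__main__":
    # Made-up confidences for one clue
    candidates = {"George Washington": 0.86, "John Adams": 0.09, "Thomas Jefferson": 0.05}
    result = decide_to_answer(candidates)
    print("Ring in with:", result if result else "no answer (below threshold)")
```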
So it's not that it's looking at, you know, a very
long trivia book. It's looking at all this information
and drawing conclusions from it in a way similar to how a human being would,
(13:10):
not completely analogous, but similar. And so, using Watson,
you could leverage your unstructured data. You put Watson to
work on this, and Watson would start to look for
meaningful connections between data points and pull relevant information about
(13:32):
any given query. So then Watson becomes an agent that
you could interact with. And this agent's job is kind
of like a reference librarian. It's to go to the
massive amount of information that's at its disposal and return
to you the relevant points of information. This is not
that different from the way people were thinking about web
(13:54):
three point oh when that was a big discussion. You
may remember that. People used to talk about how, right
now, if you use a search engine, typically the
way it works is you type something into the search
engine and it pulls up a list of websites that
may or may not have what you're looking for on
them. So you might be looking
(14:15):
for, let's say, a history of
the Crusades, and you type that into the search engine
and it pulls for you a bunch of different sites
written by different people. Some of them might be very
easy to read and understand. Some of them might be
less easy to read, but they might be more accurate
(14:36):
and more unbiased with the information. You don't necessarily
know that up front. You have to go
through and read all that yourself. But the web three
point oh search engines, and this was something that Wolfram
Alpha was trying to be, would pull the relevant information,
not websites, but the relevant information from those websites, and
(14:58):
present it to you. And that way you could look
over the important bits of information, skip over everything
else, and be given the correct context. In theory, you could
even have an agent like this that could learn about
you and your learning styles and thus present the information
to you in a way that is most helpful to you.
(15:19):
So it's a very big difference between the way we
do searches now and the way that this proposed method
would work. And that's kind of what Watson is doing.
So you've got this user-facing aspect of Watson.
It's kind of like a chat bot, and you can
send that chat bot requests and then the chat bot
will try and pull the information for you, or you
(15:42):
can use it to generate reports. Let's say that you
are a business owner and you want to look at
some information that's gonna pull things from presentations, predictions, results.
Maybe you've got, like, an end-of-the-quarter report.
Maybe you want to take a look at information from
reports from your supply chain. All this kind of complicated
(16:04):
stuff and Watson could go out, curate and present this
information in a way that has meaning to you, that
where you can understand what's going on and you can
draw conclusions. Uh. This actually was a pretty interesting concept
to me. I mean, I've seen some implementations of Watson
(16:26):
that do this, and they do it in such a
seemingly simple way that it's deceptive. You start to forget that
there is a very powerful computer algorithm that is controlling
all of this because the implementation itself might be pretty straightforward.
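In the simplest possible terms, the pattern is a thin chat front end handing a plain-language question to a service that does the heavy lifting. The sketch below is hypothetical from end to end: the URL, payload shape, and response fields are placeholders, not IBM's actual Watson API.

```python
# A minimal sketch of the user-facing pattern described above: a thin chat
# front end posts a question to an assistant service, and the search,
# curation, and ranking happen behind that endpoint. Everything about the
# endpoint and its request/response shape here is a made-up placeholder.

import requests

ASSISTANT_URL = "https://example.com/assistant/query"  # hypothetical endpoint

def ask_assistant(question, session_id="demo-session"):
    """Send a question to the (hypothetical) assistant service and return its reply text."""
    payload = {"session": session_id, "text": question}
    response = requests.post(ASSISTANT_URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json().get("reply", "")

if __name__ == "__main__":
    print(ask_assistant("Summarize last quarter's supply chain delays"))
```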
So for an example, I went to the Weather Company
(16:48):
last year, and while I was there, I
had a chance to talk to a team that was
using Watson in a lot of different implementations, and uh,
you know, they were using it as the basis
of a customer service platform or to respond to requests.
And when you first look at that, it looks deceptively simple.
(17:08):
You're asking, well, what's the weather going to be like?
And you get results. That doesn't seem like
it's that hard. You would figure that, oh, well, they're
just gonna pull whatever the record is for my location
for tomorrow and present it to me. But a lot
more could be going on behind the scenes, and I
think that's part of the problem that IBM has been
dealing with and kind of one of the reasons why
(17:31):
they've made such a big deal of it at this conference.
It's because the perception of what Watson is maybe a
little too narrow, a little too uh uh focused on
little aspects of what Watson does and ignores the big picture.
So they've definitely doubled down on that. I went
(17:56):
to a talk called Journey to AI that was really
all about this, and they talked all about the
different variations of artificial intelligence, and one of the
things they mentioned was the very different views of what
AI is. For example, you've got simple AI. Simple AI
(18:18):
would include some of the stuff I talked about in
a previous episode about the little aspects of intelligence that
are very, very narrow, just a slice of the pie of intelligence,
but they do represent what intelligence is in just
a very specific application. So image recognition is an example
of that, or voice recognition or natural language processing even
(18:40):
as part of that. These are all aspects of intelligence.
You would not call a machine that lacks one of
these things truly intelligent, but you also wouldn't call a
machine that only has one of these things truly intelligent.
So if I have a smartphone and the smartphone is
able to recognize images, so I point
(19:02):
my smartphone at something and it even labels what that
thing is. Maybe it says, oh, well, that's a specific
model and make of car, or maybe it says that
building is a historic landmark, or this park is going
to have a concert at it the next day,
(19:22):
or something along those lines. That's cool. That image recognition
is really cool, but I wouldn't call my smartphone intelligent. Similarly,
if my smartphone happens to have one of those digital
assistants on it, and it does, I've got an Android phone,
so I've got the Google Assistant on there. Um, I
can talk to that and it can retrieve information for me.
It can do tasks for me. I can use it
(19:43):
to make calls, I can use it to send text messages,
or I can use it to search for information on
my phone or on the internet. I still wouldn't call
my phone intelligent. It has an aspect of intelligence. Similarly,
if I had a supercomputer that could listen to voice commands,
respond in natural language, and do these other things, but
(20:04):
it couldn't do any image recognition, I would
notice that lack, and I wouldn't call
that intelligent. On the other side of the scale, you
have general AI, where you know, the classic image of
this is you've got a big machine that can do
general thinking, like thinking that's analogous
(20:25):
to human thinking. It can process information, it can draw conclusions,
it can synthesize data, it can innovate. It may
even be self-aware, although whether or not self-awareness
is directly tied to intelligence is a matter of
philosophical debate. Talking about general AI, I mean, that's that's
a hard, hard goal to hit. We honestly don't know
(20:49):
what it will take to get there. It may be
that we are thirty years away from having a true
general AI. It may be much longer than that; it
may be a century away, or it may even be impossible
for us to do based upon our technological abilities. Right now,
most technologists think that it is attainable, but they don't
(21:13):
know exactly what it's going to take to get there.
So there's some argument about the timeline. But there are
a lot of interesting things that can happen between those
simple versions of AI and that crazy general AI
that, you know, science fiction writers write about and
warn us about. And that's where this ability to
(21:36):
deal with unstructured data comes in, and designing AI
is part of that problem. But as they mentioned in
multiple presentations here at IBM, it's not just building the
artificial intelligence to do this that's a challenge. It's also
incorporating that artificial intelligence into existing work practices because, as
(22:00):
most businesses have existed for a while now, it's not
like you can just slot AI in, necessarily. It's not
like a module you plug in and everything works properly.
You might have to reevaluate and redesign work processes in
order to make this happen. And again, this gets a
little dry and technical if you're not really into
(22:21):
the business side of things. But when you start thinking
about it, you realize, yeah, it's not enough to just build
a tool. You have to figure out what's the best
way to use that tool with respect to the things
you're already trying to do. They started talking about
impedance match. The engineers were chatting all about impedance
(22:42):
match between man and machine to get machines to process
human language and commands and to return information that would
be useful to humans, and to eventually get rid of
that boundary between man and machines so that decisions can
be made together and implemented together. So this gets into
that concept of augmented intelligence, not that we are trying
(23:05):
to create a supercomputer that is incredibly intelligent, and we
will then reference the supercomputer as if it were an
oracle or a deity, but instead talking about creating machines that
would work right alongside people, and the machines could help
fill in the gaps that would be there because of
(23:26):
the human failings that are in all of us, and
humans could provide all the bits that machines are not
good at, and together we could be better. And that
we have to get to a point where we have
to trust the machines as an assistant, and the
machines have to quote unquote, trust us as teachers. By
(23:47):
trust us, they don't necessarily mean that the machines are
going to be harboring doubts, but rather that humans are
the ones designing these machines, and we have to make
certain that we do so in a way that is responsible,
that is ethical, that is inclusive. Otherwise we end up
with bad machines. And it's not that the machines themselves
(24:09):
were inherently wicked, but rather that they were poorly designed. I've
got more to say about the Journey to AI presentation
at IBM THINK, but before I go into it, let's
take a quick break to thank our sponsor. The folks
(24:31):
over at IBM are arguing that every single industry across
the world is going to be affected by this sort
of transformation of data and knowledge. They started referencing
things like retail optimization, or the oil industry, or automotive
(24:52):
industries, shipping. All of these things, they said, were going
to transform dramatically over the next few years due to
this kind of technology. Uh. And they talked about how
the one field you can look at right now that
is undergoing such a transformation is healthcare. Healthcare
is transforming because we are seeing not just advanced tools
(25:15):
come into hospitals and doctors offices, but also these programs
like Watson where a doctor can actually turn to Watson
as a colleague, almost like a peer who can
provide more information, a second opinion, if you will. In fact,
IBM brought up some representatives from the American Cancer Society
(25:40):
and some very prestigious cancer research hospitals to talk about
this and about how cancer is a really really difficult problem.
It is a complicated disease. Really, when
you think about it, cancer is a family of diseases. It's
(26:00):
not just a single illness, but rather a whole
suite of illnesses. There are hundreds of different types
of cancer. Now, to make it more complicated, there are
different methods for diagnosing and treating all these different types
of cancer, and that obviously means that you have to
(26:22):
be very careful when you're an oncologist, a cancer specialist
to correctly identify, to diagnose, and to treat specific types
of cancer, because a treatment for one type may not
be effective for a different type, and not every place
in the world has access to incredibly gifted, educated oncologists.
(26:45):
If you happen to be fortunate and lucky enough
to live in a major city in a well-developed nation,
then you may live close to a teaching hospital, in
which case you have the access to incredible specialists who
have dedicated their lives to learning and fighting cancer. But
if you live in a small town and you don't
(27:08):
have that access, then your options are severely limited. Well,
with IBM and Watson, one of the first problems they were
looking at tackling, outside of, you know, developing the platform,
was using Watson to help doctors treat cancer. And the
way Watson works, the way it's effective, is that you
(27:32):
have to feed it information. Without the data, Watson is useless.
Watson is good at analyzing data, curating data, and producing results,
but in order to do that, you have to give
it data. So what IBM did was they reached
out to the American Cancer Society and they talked with
(27:52):
them about feeding Watson data about cancer. The American Cancer Society
had millions of data sets and clinical records that they
used to help train Watson to understand how the diagnosis
and treatment processes for different types of cancer actually went.
So this was like Watson getting a crash course in
(28:16):
oncology. And from that information, which is constantly being refreshed
with new research, with new experiments, with new treatments that
also can then go to Watson, Watson is able to
look at a huge set of data points and look
at the effectiveness overall of any given diagnosis method or treatment. So,
(28:44):
in other words, you might have conducted a series of
experiments and determined that one particular approach is the most effective,
and that's your go-to approach for
looking at that type of cancer. Watson, however, can look
across the larger set of data points, not just from
your experiments and your work and your research, but everyone
(29:05):
else's that has been part of the American Cancer
Society's work. And then Watson can say, you know, yeah,
that method, out of all the ones you've tried,
has worked best for you. But there's this other methodology
that is even more effective that you have not yet tried,
that you didn't even know about. But because I have
(29:26):
access to all the information, which is far far greater
than what any human can navigate, I can tell you that,
based upon the success rate of all those cases, this
is something you should try. And thus Watson becomes that
cancer specialist who can provide a second opinion.
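A toy sketch of that pooled second-opinion idea follows. Every record, treatment name, and outcome below is fabricated purely to show the shape of the computation: pool outcomes from many institutions, compute a per-treatment success rate for one cancer type, and surface options a single clinic may never have tried.

```python
# Toy sketch of the "second opinion" idea described above: pool outcome
# records, compute a success rate per treatment for a given cancer type,
# and flag treatments the local clinic hasn't used. All data is invented.

from collections import defaultdict

def success_rates(records, cancer_type):
    """Return {treatment: success_rate} across all pooled records for one cancer type."""
    totals, successes = defaultdict(int), defaultdict(int)
    for rec in records:
        if rec["cancer_type"] != cancer_type:
            continue
        totals[rec["treatment"]] += 1
        successes[rec["treatment"]] += 1 if rec["outcome"] == "remission" else 0
    return {t: successes[t] / totals[t] for t in totals}

def recommend(records, cancer_type, already_tried):
    """Rank treatments by pooled success rate and flag ones the clinic hasn't used."""
    rates = success_rates(records, cancer_type)
    ranked = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
    return [(t, rate, t not in already_tried) for t, rate in ranked]

if __name__ == "__main__":
    pooled = [  # fabricated example records
        {"cancer_type": "X", "treatment": "protocol A", "outcome": "remission"},
        {"cancer_type": "X", "treatment": "protocol A", "outcome": "relapse"},
        {"cancer_type": "X", "treatment": "protocol B", "outcome": "remission"},
        {"cancer_type": "X", "treatment": "protocol B", "outcome": "remission"},
    ]
    for treatment, rate, is_new in recommend(pooled, "X", already_tried={"protocol A"}):
        flag = "new to this clinic" if is_new else "already in use"
        print(f"{treatment}: {rate:.0%} ({flag})")
```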
(29:46):
This is a very powerful tool, something that can legitimately save lives,
and it is of real consequence to those of
us in the audience who are not just trying to
create a business, or I shouldn't say just, but are
trying to create a business or trying to figure out
how to streamline our back-end processes as
(30:09):
we try to do whatever it is we do. This
is life and death for millions of people around the world. Uh,
it's a really interesting case study too. I mean,
so far Watson is being used in more than two
hundred hospitals across the world. More than ten thousand patients
(30:31):
are able to take advantage of this using Watson to
help make decisions. Really, it's the physicians who are using
Watson to kind of guide themselves and get that second
opinion which may or may not confirm what the original
physician had concluded, help refine approaches, help give options to patients,
which obviously is also really important. And when you consider
(30:53):
that this year alone, one point seven million Americans
will be diagnosed with cancer, you realize this is a
very big deal. And of course that's just the United States. Obviously,
global numbers will be much higher. And again, if you
happen to live in a country like the United States
and you're near a learning hospital, you then might have
(31:14):
access to people who are the leading practitioners, the leading thinkers,
leading researchers in cancer. But if you live in a
developing nation where you have a much worse ratio of
doctors to patients, then you would really want to have
access to this deep level of expertise. That's the whole concept.
(31:35):
So all the folks up on stage, the
representatives from Memorial Sloan Kettering, which is a cancer treatment center,
and also of the American Cancer Society, were citing some
really interesting statistics. So in the United States,
where we have a lot of oncologists, a lot of
cancer specialists, on average, every oncologist has about a hundred patients,
(32:02):
which, you know, that's a lot of patients. But
if you think about it, you realize, well, that might be
manageable for a single oncologist. But in other parts of
the world, you look at
the number of oncologists versus the number of people who
are dealing with cancer, and it becomes ten thousand patients
to one oncologist. At that scale, it is impossible, no
(32:25):
matter how gifted and intelligent and educated you are, to
be able to handle that enormous amount of work
without help. And so again that was where they were
citing use of Watson as a way to help offload
some of this very difficult work that the oncologists
(32:47):
do and get guidance from expertise from around the world.
And again, this is not Watson coming up with new treatments.
This is an artificially intelligent platform, by a very narrow
definition of AI, looking at an enormous data set that
(33:09):
was generated by humans, by human beings. So we're not
saying that there's a computer doctor out there that's better
than human doctors, that it's smarter than we are. Rather,
it's more like saying we have the world's best librarian
that is looking at the mass collected knowledge base on
(33:30):
a very specific subject and returning the results that are
relevant to any given query to help with human decisions.
So that's where that augmenting intelligence comes in. It's not
that you've got a robo doctor. It's that you've got
a robo reference librarian who is able to reference all
(33:51):
the human doctors and see what has worked the best.
That's a good way of looking at Watson in general
when you want to understand what it does and what
it could do in lots of different contexts. It's again
something that could help with handling any large set of
data points. It wouldn't have to be medical, although that's
(34:13):
an easy way to understand how that could be an
effective use. Another possible use of Watson would be for
the purposes of augmented reality, where you are using something
like a smartphone, let's say, to take images of whatever
(34:34):
it is you're looking at, and you're asking Watson to
give you guidance on how to deal with the situation.
So imagine that you are an auto mechanic and you
have a vehicle come in that is not frequently
found in your area, so you haven't had a lot
of experience working on it. You know, you have
good working knowledge of automobiles in general, but you don't
(34:55):
know the particulars of this specific make and model. And
you lift up the hood and you're looking at the engine,
and you're looking at different parts, and you see one
particular part that you believe is the problem, so you
take a photo of it, and then you have a
Watson assistant that's working with you on an app that's
specifically written for your line of work. So, in other words,
(35:18):
Watson is really just looking at a data set that
is relevant to auto mechanics. It's not looking at
all the information across the
Internet or anything like that. This is a specific implementation
of the platform. And then Watson references its information, returns
(35:39):
the results to you, and explains what that part is.
What are some of the common problems? What is, you know, basically,
the problem that you have encountered specifically? How
do you address it? Do you have repairs you can make?
Do you need to replace the part? If you do
need to replace the part, where would you get it?
How long will it take to get there? Essentially all
(35:59):
the information you need as a mechanic in order to
fix the problem and also to alert your customer. Hey,
here's what's going on. Here's how much it's gonna cost.
Here's how long it's gonna take. Um, And you can
even answer why. You could find out where the delays
are if it's gonna be something that's gonna take like, well,
it's gonna take two weeks. Why, Well, because here's the
(36:21):
obscure part that I need to order, and here's the
really complicated supply chain of how it's going to have
to get to me. And I can't speed that up
because I have no control over it. If you're able
to actually explain that to the customer, then you can,
you know, maybe take some of the heat off. And
you can also probably say, hey, next time, buy a
car that's not so, you know, exotic, something
that I can work on. No, no, don't victim blame.
That's not cool, but you could at least explain the
context of what's happening. And I found this really interesting.
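A rough sketch of that lookup flow might look like the following, with the image recognition step stubbed out and every entry in the parts table invented; it's only meant to show the photo-to-label-to-answer shape of the idea, not any real implementation.

```python
# Rough sketch of the mechanic scenario above: an image goes to a recognition
# step that returns a part label, and that label keys into a domain-specific
# knowledge base of common failures, fixes, and lead times. The classifier is
# a stand-in and every entry in the parts table is made up.

PARTS_DB = {  # hypothetical, domain-specific data set
    "turbo wastegate actuator": {
        "common_problems": ["stuck valve", "torn diaphragm"],
        "repairable": False,
        "replacement_lead_time_days": 14,
    },
}

def classify_part(image_bytes):
    """Stand-in for an image recognition call; a real system would score the photo."""
    return "turbo wastegate actuator"

def diagnose(image_bytes):
    label = classify_part(image_bytes)
    info = PARTS_DB.get(label)
    if info is None:
        return f"Unrecognized part: {label}"
    fix = "repair in shop" if info["repairable"] else (
        f"order replacement (~{info['replacement_lead_time_days']} days)")
    return f"{label}: known issues {info['common_problems']}; recommendation: {fix}"

if __name__ == "__main__":
    print(diagnose(b"raw photo bytes would go here"))
```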
They also talked about how Watson could also work with
companies that have much smaller data sets. You know,
obviously you have different scales here. If you look at
(37:05):
all the information on a consumer facing business where they're
collecting information about the people who use the product, then
the data sets could potentially be enormous. A good example
of this would be Facebook, which of course is
going through a massive scandal right now due to a
company that collected data and then tried to leverage it
(37:26):
in a way that was unethical at best. So Facebook
has more than a billion users, and people use Facebook
a lot. People who are using Facebook a ton are
sharing a lot of information about themselves, either directly or indirectly.
So you have this massive amount of data that Facebook
(37:47):
is collecting and sitting on top of, and using a
device or an API platform like Watson to
go through all that data and pull meaningful information from
it could create some really powerful strategies. You could
figure out trends and be able to leverage them, and
(38:08):
you could do them in ways that were maybe helpful
or maybe exploitative, probably the second. But you would have
a huge amount of data. That's really the point I'm
getting at is because you've got an engaged user base
that is enthusiastically handing information over, you would have an
enormous data set. But you could also use a tool
(38:29):
like Watson for internal processes. Like, let's say that you
are a company, and let's say that you're part of
a shipping company. So you need to be able to
keep track of all the suppliers, the destinations, the
way that you're actually moving product from point A to
point B. It's a lot of moving parts, a lot of logistics,
(38:50):
but on the whole, if you look at all
the data and you were to say, like, let's fill
up, you know, two containers with raw information, it would
be a fraction of the size of something like Facebook. Like, yeah,
there are a lot of data points and it's complicated.
It's too complicated for humans to navigate easily. But it's
(39:11):
not like it's the huge amount of data that's generated
on a daily basis from Facebook. Watson still, however, has
the capability of learning even from smaller data sets. So again,
this was IBM talking to their partners and their customers saying, Hey,
I know that we're talking about using Watson for these
really really big ideas and these really world changing applications
(39:35):
that are relying upon millions and millions of records, but
Watson could also work for you. That was kind of
a message, uh, and you know that was a very
compelling one. They brought up several people
to talk about how this has been used. For example,
they brought up the CEO of Orange Bank. Orange is
(39:56):
a telecommunications company, and the telecommunications company decided
that they were going to create a financial institution as well,
so an actual bank. And the bank had decided
that one of the things they wanted to do was
create an interface for their customers that would make
it very easy to deal with routine sort of problems
(40:19):
and questions, and provide information without the need
to refer that customer up to a human customer service representative,
which is a delicate thing to do. You want to
make sure that you are serving your customers properly. You
don't want to turn them off. You don't want them
to log on, see a chat bot, and say, oh, well,
no one cares about me. They just put me in
(40:42):
touch with a robot. Uh. But at the same time,
you don't want to have to deal with uh, you know,
customer service representatives answering the same mundane questions over and
over again. That makes it hard to have an engaged
and happy workforce. So there's a delicate balance here.
What Orange decided to do was create a virtual advisor.
(41:04):
They named the virtual advisor Djingo, D-J-I-N-G-O,
and Djingo uses Watson as the foundation
for what it does. And as the CEO explained, it's
the customer's first point of contact for the bank, and
Jingo can respond to a lot of different common queries
(41:25):
and they could be very general ones that are sort
of bank wide kind of questions, or they could be
very specific to the individual. And they said that Djingo
is the most effective agent they've seen, and that Djingo
also never has to take a break. Djingo can work
twenty-four seven and is never tired and can respond to most
(41:46):
requests without the need to funnel customers to other agents.
So this was an example of an industry that has
a relatively small data set compared to something like Facebook,
and a bank, even with a lot of customers, is
not going to be dealing with the same volume of information
as a social media network would. What else can we
(42:08):
expect when AI starts to insinuate its way into our
daily lives. Well, I'll tell you about it in just
a minute, but first let's take a quick break to
thank our sponsor. IBM also chatted about how AI could
(42:28):
help out in the field of human resources. That HR
is another one of those those departments in most companies
that has to field a lot of the same questions
over and over, and it may be that there are
lots of different policies that the HR representative has to
go through and find the relevant information. And while the
HR representative might have access to all that, he or
(42:52):
she may not automatically know the answer, and so it
takes time and effort to hunt down answers to the questions
that employees might have. For HR professionals, IBM
also kind of mentioned that Watson would be an
ideal tool for that as well. So if you need
to ask about specific forms or policies or uh compensation packages,
(43:17):
all the sort of things that HR folks have to
deal with, you could have an artificially intelligent platform do
that on your behalf. Which was also kind of interesting.
So there were several other folks that they brought up
on stage to chat about, you know, their experiences implementing
Watson in different ways. It was very much all about,
(43:41):
here's what this API is really for and how
you might use it, and, you know, trying to
get away from Watson as the computer program that
won on Jeopardy, or Watson as this quirky platform that
could come up with dynamically created recipes based upon the
(44:03):
ingredients you fed to it. The whole idea was to
create something that would have multiple use cases on multiple scales,
and I found it. I found it helpful to get
a better grip on exactly what Watson is and is not.
Um It was a fascinating discussion. We saw a lot
(44:23):
of interesting people. We saw the CEO of Nvidia come
out and talk about partnering with IBM to pair GPUs
and CPUs together to create the most powerful machines that
are able to process enormous amounts of information in a
very short amount of time. They talked about how uh,
(44:45):
this is the sort of technology that's powering the
next generation of machines like autonomous cars. They also even
acknowledged the fact that this is still a young field
and acknowledged the tragic accident that happened in
Arizona when an autonomous SUV that belonged
(45:09):
to Uber struck and killed a pedestrian as she was
walking her bicycle across the street. They took some time
to actually talk about this and say, this is a
horrible tragedy and nothing should distract us from the fact
that, you know, this person passed away and her
family is dealing with the aftermath of that, and
(45:30):
it's terrible, and it also forces us to acknowledge that
these things we're working on are life and death situations.
They are not trivial. It's
not just an engineering problem, it's not just a kind
of a hypothetical situation. These are technologies that could
(45:51):
potentially save or end lives if the technology is implemented
one way or another, so it behooves us to be
extremely careful to figure out how to do it properly. Uh.
The CEO of Nvidia also talked about just how complicated
this whole process is for vehicles and mentioned that,
(46:13):
you know, some people might think that a car is
just sort of processing one big stream of data and
making decisions on how to proceed based on that, because
that's kind of how humans do it, right, Like we
perceive stuff and then we have to respond to it,
We have to react to it. But machines do this
in a different way. They're collecting different individual streams
(46:34):
of data, and each of those streams needs to be
analyzed and processed, and then the collective information needs to
be analyzed and processed so that the right reaction can
take place. So it's almost like you can think
of each sensor as sending its information to a centralized location,
and then all of those collective information streams from all
(46:56):
of those sensors have to be synthesized and analyzed, and
then the reaction has to take place. So it makes
it all sound way more complicated than you might originally imagine;
I certainly felt that way.
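A toy sketch of that per-stream-then-fuse pipeline is below. The sensor names, readings, and braking distance are invented, and a real stack has far more stages; the point is just that each stream is processed on its own before everything is combined into one decision.

```python
# Toy sketch of the pipeline described above: each sensor stream is processed
# on its own, then the per-stream results are fused into one picture before a
# driving decision is made. Sensor names and readings are made up.

def process_stream(name, readings):
    """Per-sensor step: reduce raw readings to a single obstacle-distance estimate."""
    return {"sensor": name, "nearest_obstacle_m": min(readings)}

def fuse(per_sensor_results):
    """Fusion step: combine all per-sensor estimates into one conservative view."""
    return min(r["nearest_obstacle_m"] for r in per_sensor_results)

def decide(nearest_obstacle_m, braking_distance_m=30.0):
    return "brake" if nearest_obstacle_m < braking_distance_m else "maintain speed"

if __name__ == "__main__":
    streams = {  # made-up readings, in meters
        "camera": [41.0, 39.5, 38.2],
        "radar": [40.1, 37.9],
        "lidar": [42.3, 36.8],
    }
    fused = fuse([process_stream(name, vals) for name, vals in streams.items()])
    print(f"Nearest obstacle: {fused} m -> {decide(fused)}")
```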
We got to watch a video of an eight-minute drive of an autonomous
(47:17):
car down country roads in New Jersey, showing how it
would navigate down the roads, even properly navigating when there
were no road signs available, making certain that the car
was behaving the way it was supposed to. And as
they were pointing out, like, even in this scenario
it was nice weather, it was during the daytime. Uh,
(47:37):
even in that scenario, it's a complicated thing to make
a machine do that properly. And then you start imagining
all the different additional complications that could arise, like bad
weather or night driving, or heavier traffic, or even
things like wildlife running across the street. You realize this
(47:59):
is a lot more difficult than just sensing a potential
obstacle on the road and taking the right course of
action to avoid hitting it. In fact, according to the CEO,
he said that every car needs about a hundred servers
to process all the information. And uh they were using
(48:19):
a fleet of around a hundred cars, or two
hundred cars, so they had a thousand to two thousand
servers dedicated just to processing information in order to develop
this technology in the first place, so it becomes an
incredibly difficult thing to do well. That was kind of
the overall story of the journey to AI. This
(48:40):
discussion of being in this middle period between developing
these very hyper-focused tools in artificial intelligence and the
goal of getting to general artificial intelligence. The idea of
using AI as kind of an assistant to performing very
(49:01):
complicated tasks, complicated from a computational standpoint, also complicated
just from how much data is there. Again, if you
put a human being in charge of
going through all that information to find the most relevant
and useful information, it would take hours or days or years,
(49:22):
depending upon the data set, whereas an artificially intelligent, properly designed
program can do it in a fraction of that time,
and do it dynamically, request after request after request, and
can continuously update its answers based upon fresh information coming
into the data set. I found it really interesting and
it gives me a lot of hope for the future
(49:44):
for various implementations of this type of technology, whether it's
Watson or some comparable technology. I really think it's going
to be interesting for all sorts of different applications, some
of which we as consumers will interact with directly, whether
it's a customer service agent or maybe it's a personal assistant,
something that gets to know us and our routines. We're
(50:07):
starting to see that a little bit in some of
the personal assistants like Google Home, Siri, Alexa, that
kind of thing. You see a little bit there, but
it'll continue to grow more sophisticated and more proactive to
the point where it's
almost like having an AI life coach right at
(50:30):
your disposal. So I found it all very fascinating and
I hope to learn a lot more about lots of
different topics while I'm here at the THINK conference. I
can't wait to chat with you guys more about quantum computing.
I actually got to see a model of what
a quantum computer looks like, and boy howdy, it does
not look like a normal computer. But I'll definitely do
(50:54):
an episode about that to talk more about what quantum
computers are, how they work, why they are important, and
where we might be going with it, and maybe talk
a little bit more about some of the stuff
Dr. Michio Kaku said, maybe some of the stuff that
Neil deGrasse Tyson said. I went to his talk as well,
and uh, they were very fascinating. They weren't quite as
(51:14):
tech oriented as I would need them to be to do a full
episode, like a recap, on them, but I might touch
on some of the themes they talked about and their
meaning to me as just a person who loves tech
and the tech sector in general, because they both gave
very fascinating presentations. If you guys have suggestions for future
episodes of tech Stuff, whether it is a technology, a company,
(51:38):
a person, maybe there's someone you want me to interview,
let me know. Send me a message. The email address
for the show is tech Stuff at how stuff works
dot com, or you can drop me a line on
Facebook or Twitter. The handle for both of those is tech
Stuff H S W. Remember you can follow us on Instagram.
That account is always showing interesting behind the scenes information,
(52:00):
so make sure you go check that out. And on
Wednesdays and Fridays typically I record live. I stream my
recording sessions on twitch dot tv slash tech Stuff, so
you can come and watch me record one of these episodes.
There's a chat room there. You can jump in there
and chat with me live as I'm recording, although I
don't respond until I hit a break because otherwise I
(52:22):
find it too distracting and I ramble and that does
not make for good podcasting. But please come on by,
say hello. I would love to see you there, and
I'll talk to you again really soon. For more on
this and thousands of other topics, visit how stuff
(52:44):
Works dot com