Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:04):
Welcome to tech Stuff, a production from I Heart Radio.
Hey there, and welcome to tech Stuff. I'm your host
Jonathan Strickland. I'm an executive producer with I Heart Radio.
And how the tech are ya? Well, since it's been
in the news quite a bit so far this year,
(00:24):
I thought today we would look into OpenAI, both
the for-profit company and its parent not-for-profit organization. So,
for those of y'all who have managed to dodge all
the hubbub, OpenAI is the company behind ChatGPT.
That's the chat bot that's been making headlines for everything
(00:46):
from offending the musician Nick Cave of Nick Cave and
the Bad Seeds Fame, to worrying teachers that their students
are just going to use a chat bot to cheat
on assignments rather than actually bother to learn something. But
what about the company that made this thing in the
first place. Well, the history of OpenAI dates back
(01:07):
to twenty fifteen, when a bunch of very wealthy tech entrepreneurs
got together and said, you know what, maybe we should
create an organization that aims to make helpful artificial intelligence
before someone opens Pandora's box and unleashes a malevolent, you know,
or at least uncaring super intelligence upon us all or
(01:31):
something to that effect. Essentially, the goal was to develop
AI and AI applications in a way that would be
beneficial to humanity and try to avoid all the scary
Skynet, Terminator kind of stuff. But to talk about
this requires us to define some terms, like terms that
(01:54):
you might think are obvious on the face of it,
but I would argue are not. So the big one
here would be artificial intelligence. There are certain words and
phrases out in the world that have lots of different meanings,
and this can sometimes cause confusion and miscommunication. I would
argue artificial intelligence is a real doozy among these. You
(02:15):
hear about someone working in AI and you start immediately
getting preconceived ideas of what that means, and you're probably wrong. Actually,
now that we're talking about it, even just the word
intelligence has some ambiguity to it. So what do we
mean when we say that something is intelligent. Well, let's
take a look at what some dictionaries say. So Webster
(02:39):
defines intelligence as the ability to learn or understand or
to deal with new or trying situations, or the ability
to apply knowledge to manipulate one's environment or to think
abstractly as measured by objective criteria such as tests. Thanks,
Webster. Oxford defines it as the ability to learn, understand,
(03:02):
and think in a logical way about things, the ability
to do this well. It's a little more succinct. But
then if we really want to boil it down, the
American Heritage Dictionary defines it as the ability to acquire, understand,
and use knowledge. That's what intelligence is, according to those.
Dr. Daeyeol Lee, and I apologize, Dr. Lee, for
(03:24):
butchering your name, is a professor of neuroscience and author
of Birth of Intelligence, and he defines intelligence as the
ability to solve complex problems or make decisions with outcomes
that benefit the actor. Dr Lee also acknowledges that intelligence
is actually pretty hard to define, and that there are
many different definitions, which you know, we've just seen. Like
(03:47):
even though all the definitions I mentioned have significant overlap
between them and they all seem to be dancing around
the same kind of concept, you might feel like none
of them quite get it right. And that's where some
of these challenges come from. Is that just defining intelligence
before we even get to artificial intelligence is hard. All right, Well,
(04:11):
let's say that intelligence generally is the ability to
learn and to acquire knowledge and then to use that
knowledge in new situations. Let's just go by that
and say that, you know, it's got an element of
problem solving that goes with it, which I think is
pretty much implied. So artificial intelligence, then. Well, artificial suggests
(04:32):
that it's something that's created by humans rather than found
in nature. Oxford Languages defines artificial intelligence as the theory
and development of computer systems able to perform tasks that
normally require human intelligence, such as visual perception, speech recognition,
decision making, and translation between languages. So that's a fairly
(04:54):
decent definition. Uh, but here's where we run into some
more ambiguity. When we talk about artificial intelligence, we're not
necessarily using the word intelligence to mean the exact same
thing as when we apply it in a human context. You know,
a person working in artificial intelligence isn't necessarily trying to
(05:15):
make a machine think or appear to think like a
human does. In fact, they're probably not doing anything of
the sort. They might be working on something that, when
combined with the work of countless others, ends up contributing
to that kind of machine, but that's different. So AI
(05:35):
involves a lot of different disciplines and technologies. Facial recognition
is a type of AI. Speech recognition is a type
of AI. Text to speech is related to artificial intelligence.
Robotics share a lot of features with AI, although you
could also have robots that are fully programmed to complete
(05:57):
precise routines, and in those cases they're just following a
list of instructions and there's no decision making component there. They're
just literally following step one, step two, step three, step four, repeat.
So those kinds of robots aren't really in the artificial
intelligence realm, but there are other robots that are. Now,
frequently I find that the general public associates the concept
(06:20):
of artificial intelligence with a machine that appears to have
knowledge gathering and problem solving capabilities, usually paired with some
method to put solutions into action, so often in the
form of a robot or a computer system that's connected
to stuff that can actually get crap done. I almost
said the other phrase, but this is a family show,
(06:42):
so they're thinking about what is often referred to as
strong AI. These are machines that have a form of
intelligence that is, to all practical purposes, indistinguishable from human intelligence.
Now that's not to say that it's processing information the
exact same way that we people process information, but that
(07:02):
the outcome is the same, that at the end of
the day, if the machine and the person were to
come to the same conclusion, it doesn't really matter what steps
in the middle were taken. Now, if such a thing
is possible, we're not there yet. We aren't at the
point where we have this. But the work done in
AI right now, which is really in the field of
(07:24):
weak AI, that is, artificial intelligence solutions designed for specific purposes,
is contributing toward the creation of strong AI. Now there's
another phrase for strong AI that we need to talk about,
which is artificial general intelligence, or AGI. And
I know there are a lot of initialisms, that's always
(07:46):
the case when we talk about tech. But AGI,
general intelligence, that kind of tells you, okay, this
is an AI that's meant to do lots of different stuff. Right,
It's not designed to do a specific task and just
get better and better and better at doing that task.
It's meant to handle lots of different things, maybe anything.
(08:09):
And it's just like if you take a human and
you have that human go into a situation they've never
experienced before. How do they cope? Well, the goal
is to create an artificial intelligence that would be able
to handle new situations in a similar way to the
way humans do. That's the artificial general intelligence. Again, no
(08:29):
one has made one of these yet, but that would
become OpenAI's primary goal: to be the first to create
an AGI. Now,
weak AI does not mean that the artificial intelligence is bad
at its job or it's inferior in some way. In fact,
weak AI might be much better at doing its specific
(08:51):
task than humans are at completing that specific task. It's
just that this is all the weak AI can do.
It can't do other things. It's operating under constraints.
So as an example, let's just think of something that's
really simple that you wouldn't even think of as being intelligent,
like a basic calculator, not even a scientific calculator, a
(09:13):
basic calculator like one that might be handed out by
a bank, and you can enter a pretty tough mathematical
problem into the calculator and it will provide a solution
in a fraction of the time it would take your
average human to do the same work, but that same
human could do other stuff like maybe that human can
(09:34):
play the guitar or juggle or paint or play a
video game or any of an endless number of other tasks.
But the calculator can't do that. It can just calculate.
That's all it can do, and it can do it
really well, but it's unable to extend this capability to
anything beyond that purpose. Now, sometimes when we encounter a
(09:56):
really good weak AI, we can fool ourselves into thinking
that the AI is doing something really magical, or that
it's matching our own capabilities to think. It can actually
be pretty easy to fall into this trap. A sufficiently
sophisticated chatbot might fool us into thinking that the machine
we're chatting with is actually thinking itself. But it's not,
(10:18):
at least not in the same way that people do. Now,
why did I go through all of that trouble to
define all these things? Well, the founding principle of OpenAI
is to create artificial general intelligence and AI applications
and technologies through a responsible, thoughtful approach, and that implies
(10:39):
that there's an irresponsible way to do this, and that
following such an irresponsible way could lead to disaster. And
that's where we get to our science fiction stories, and
that certainly tracks. You know, I'm not here to tell
you that that's an unreasonable fear. That fear is totally reasonable.
In fact, we've been seeing how weak AI can and
(11:00):
does cause problems, or maybe, how should I say,
our reliance upon weak AI can cause problems. The AI
on its own may not be able to cause a
problem by itself, but because we rely on it, then
we go and we create these problems. So let's go
with facial recognition for this one. It has been shown
(11:21):
time and again that many of the facial recognition technologies
that are actively deployed in the world today have bias
built into them. They are fairly reliable at identifying people
within certain populations, like white people primarily, but then with
people of color, these systems aren't nearly as accurate. So
what happens is that these facial recognition systems can generate
(11:45):
false positives more frequently for say, black people. And because
we have law enforcement agencies that are making active use
of facial recognition technologies when looking for suspects, this means
that police can and do end up harassing innocent people,
all based off of this misidentification. So imagine one day
(12:06):
you're just going about your business and then suddenly law
enforcement swoops in and arrests you for a crime that not
only did you not commit, but that you also have no knowledge
of. And it's all because a machine somewhere
said this is the person you want. Now, imagine how
your life would be affected. What if it happened while
(12:27):
you were at work or at school. How do you
think the people around you would react when police come
in and arrest you. How many of those people would
treat you differently even after hearing that the whole thing
was just a mistake. What kind of stress would that
put on you and the people in your life? Now,
the reason I'm really nailing this home is because this
(12:48):
stuff is happening, right? This problem is a real problem.
This is not a theoretical, it's not a hypothetical. Real
people have had their lives upended because police have
relied upon faulty facial recognition technology and saying, oops, it
was our mistake doesn't fix your life when it's been
(13:09):
turned upside down. Or as Matthew Grissinger of the Institute
for Safe Medication Practices has put it, quote, The tendency
to favor or give greater credence to information supplied by
technology (e.g., an ADC display), and to
ignore a manual source of information that provides contradictory information
(13:30):
(e.g., a handwritten entry on the computer-generated MAR),
even if it is correct, illustrates the phenomenon of automation bias.
Automation complacency is a closely linked, overlapping concept that refers
to the monitoring of technology with less frequency or vigilance
because of a lower suspicion of error and a stronger
(13:51):
belief in its accuracy end quote. So in other words,
we have a tendency to trust the output of machines,
and that trust is not always warranted. This can get
us into trouble. We can trust that the machines know
what they're doing and that the way they process information
is reliable and even infallible, and by acting upon that
(14:15):
we can create terrible consequences. Mr. Grissinger's context was
within the field of medication prescriptions, which, obviously, if you
were to rely solely upon automated output and that automated
output was wrong, could result in terrible consequences. But I'm
sure you can imagine countless other scenarios in which an
over reliance on technology could lead to disaster. We'll talk
(14:39):
about another one when we come back from this quick break.
We're back, and before the break, I was talking about
how we have a tendency to put too much trust
(14:59):
in technology in general and AI in particular, and how
this can come back to haunt us. So an example
that leaps to my mind is autonomous cars. And I'm
going to be the first to admit I jumped on
the autonomous car bandwagon without applying nearly enough critical thinking.
I was really considering just the surface level of what
(15:21):
it would mean to have autonomous cars. So here's how
my flawed logic went. This is why I was so
like gung ho on autonomous cars several years ago now
and have subsequently changed my thinking. So the way
I originally thought was, computer processors are wicked fast, right?
Like, a CPU in your computer can complete calculations so quickly,
(15:46):
millions of them every second, billions in fact, depending upon
the sophistication of the operations. And then
you have parallel processing, right? Like, if you have a
multi core processor, you could have lots of functions all being
performed simultaneously by this processor. Then, on top of that,
(16:08):
you could have sensors on your car that cover three
sixty degrees of view around the vehicle, so you would
be able to have the system pay attention in every
single direction simultaneously, whereas a human driver can only pay
attention within their field of view and then with the
help of some mirrors, get a little extra you know,
(16:28):
awareness around them. You could have mechanical systems that could
react immediately upon receiving a command from the processors with
no delay, so you don't have that delay of action
between when you sense something happening and when you are
able to act on that. So, surely such a system
with incredible processing power, with three sixty degrees of awareness,
(16:52):
with this immediate ability to react, would be able to
engage in defensive driving faster, more effectively, and more safely than
a human ever could. Clearly, machines are superior. We
should all be in autonomous cars. This is where I
ran into the problem of overreliance on technology. Sure, in
isolated cases, everything I was thinking might be at least
(17:14):
partly true, but when you take it together and you
start to apply it in the field in a vehicle,
things are far more complicated than I ever gave it
credit for. And as we have seen with advanced driver
assist features, if we rely too much on this technology,
it can and does lead to tragedy. So we've seen
(17:36):
this play out where people have depended too heavily upon
this tech and have paid for it with their lives.
So we know that this is more complex than what
I initially thought of back in my naive days of
being so, you know, flag bearing for the whole autonomous
car movement, and I still believe in autonomous cars
(17:59):
and how they could contribute to greater safety, but I
also recognize that it's a far more complex problem than
what I originally imagined. All right, so we have thoroughly
defined the problem at this point, right? Artificial intelligence has
the potential to help us do amazing things, but only
(18:22):
if we develop and deploy it properly. Otherwise it could
exacerbate existing problems or even create all new problems. So
there's a need to be thoughtful about design and application
and deployment and distribution. So who decided to codify this
philosophy of being careful about AI and create an organization
(18:47):
dedicated to doing that. Well, the two people who are
frequently cited as the co founders for open ai are
Elon Musk and Sam Altman, though I would hasten to
add there were many other people who were really co-founders
as well, but these are the two that, you know,
everyone says, these are the guys who started talking and
(19:10):
kind of generated the initial idea that became open AI.
So let's start with Musk. So years before he decided
to drop billions of dollars in an effort to troll
the Internet whenever he wanted to, Mr Musk was something
of an AI doomsayer. You know, he was warning that
artificial intelligence could potentially pose an existential threat to humans.
(19:35):
Kind of this idea of we create a human level
or even superhuman level strong AI, and then it turns
on us and wipes us out. And certainly bad AI
can be a huge issue. We just talked about how
even weak AI can be a really big problem. Now,
I don't think we're close to having a human level, let
(19:57):
alone superhuman level, intelligence determined to wipe out humanity emerge, but
you know, you can definitely have bad AI contribute to
human suffering. See also Tesla, one of Mr Musk's companies.
One might even argue that Elon Musk knows the danger
that artificial intelligence poses to humanity because one of his
companies is leading the charge in that field in the
(20:19):
form of Tesla Autopilot and Full Self-Driving modes. Now again,
you could say that I'm being unkind, because we do
need to remember that Tesla, despite the language it uses
for marketing purposes, does alert drivers that they are not
supposed to take their hands off the wheel or stop
paying attention to the road, and that at least in
(20:41):
all the accounts I have read about terrible accidents involving
Tesla vehicles that were in driver assist mode, it sounds
like the driver wasn't following those directions. So you could
argue that, you know, the driver ultimately is at fault
because they're failing to adhere to the instructions that Tesla gives.
The flip side of that is that Tesla markets these
(21:03):
features as if they are more than, you know, sophisticated
driver assist features. The other co-founder of OpenAI
that's frequently mentioned is Sam Altman, the current CEO of
OpenAI. Sam Altman was previously president of Y Combinator.
He became president of Y Combinator in twenty fourteen, which was
(21:26):
the year before he co-founded OpenAI with Elon Musk.
And you might say, well, what is Y Combinator? It's
a startup accelerator, which doesn't really mean anything either, right? Well,
that's a company that helps people who have startup business
ideas get the support they need in order to launch
their idea and make it a reality. So that can
(21:47):
include stuff like mentoring the startup leaders so that they
can build a good business model and create the right
corporate structure that they're going to need in order to
do business, all the way up to prepping them and
connecting them with people that they can pitch their idea
to in order to get investment into their startup. So
(22:09):
one of the big valuable services that companies like y
Combinator provide is access to the investor community that you
might not otherwise be able to get to without that
kind of support. Now, Altman would continue to serve as
Y Combinator president until twenty nineteen. At that point he
stepped down from that position to focus on OpenAI.
(22:33):
Elon Musk would sit on the board of directors
for OpenAI until twenty eighteen. We'll talk about that in
just a bit. Now, I mentioned that there were also
other co-founders. So in addition to these two entrepreneurs,
early founders in the OpenAI initiative included Greg Brockman,
who's still there. I believe he's a former chief technology
(22:54):
officer of Stripe, the payment processing company. The PayPal co-founder
Peter Thiel was also one of the early investors
in OpenAI. LinkedIn co-founder Reid Garrett Hoffman was another
one. One of Altman's Y Combinator colleagues, Jessica Livingston, was another,
and there were a few more. Now collectively, the founders
(23:15):
and partners all pledged one billion dollars to fund open ai,
which again was meant to be a nonprofit organization dedicated
to developing productive, friendly AI and not the scary pew
pew lasers kind of AI. But then there's also the
open part of open ai. So during the brainstorming that
(23:38):
would lead to the founding of this organization, the co
founders talked about how big tech companies typically do all
their AI development behind closed doors with no transparency, and
that their version of AI was meant to benefit the
parent company, not humanity as a whole. The OpenAI
(23:58):
organization was going to take a different approach. The idea
was to share the benefits of AI research with the
world and do that as much as possible on an
effort to evolve AI in a way that helps but
doesn't harm. Researchers would be encouraged to publish their work
in various formats as frequently as they could, and any
(24:20):
patents that open ai would secure would similarly be shared
with the world. The message appeared to be the goal
is more important than the organization, that friendly AI is
the most important goal here, and that OpenAI only
exists to see that become reality, and that OpenAI
(24:42):
was really kind of more of a shepherd, pushing
AI in this direction rather than brazenly forging a
path into the wilderness, although that's not how things would
turn out. Now, early on, the organization grew mostly through
connections in the AI research community, with luminaries and experts
joining the organization, but the organization itself kind of
(25:04):
lacked a real sense of leadership or direction. There was
this noble goal, right? Everyone knew that they were trying
to make reliable, safe, friendly, beneficial AI, but how? There
wasn't really any plan for how to get to where
they wanted to be. Google researcher Dario Amodei visited OpenAI
(25:26):
in mid twenty sixteen, and he came away thinking that no one
at the organization really had any idea of what they
were doing. Despite that, or maybe because of it, Amodei
would join the organization a couple of months
later and became head of research there. Now, one of
the first things to emerge from OpenAI was in twenty
sixteen. Like, it was founded in late twenty fifteen, and in twenty
(25:50):
sixteen they were already producing some interesting stuff. And the
first up was a testing environment that the organization called
Gym. Gym as in a gymnasium, not as in Jimmy Jim
Jim, Jim Hawkins. So what was being tested? Well, they
were testing learning agents. This brings us to a discipline
(26:12):
that's within artificial intelligence. It's called machine learning, and basically
machine learning is what it says on the tin. It's
finding ways to make machines learn so that they discover
how to do certain tasks and how to improve at
doing them over time. And there is no single way
(26:33):
that this is done. It's not like there's one and
only one way for machine learning to happen. There are
actually lots of different models. For example, there's the generative
adversarial model of machine learning. Basically, this is a model
that involves having two machines set against each other. One
machine is set up to try and accomplish a specific task.
(26:55):
This is the generative part, and the other machine is
set up to foil that task. That's the adversarial part. So,
for example, maybe you're training the generative model to create
a digital painting mimicking the style of famous impressionists, and
the adversarial system's job is to figure out which images
(27:15):
that are fed to it are real impressionist paintings from
history and which ones were generated by the computer system.
And you run these trials over and over, with each
system getting better over time. The generative one gets better
at making Impressionist style paintings and the adversarial one gets
better at finding little hints that indicate this was not
(27:38):
an actual painting but was computer generated.
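To make that generative adversarial setup a bit more concrete, here is a minimal sketch of a GAN training loop in Python using PyTorch. The tiny networks, the one-dimensional stand-in for "real" data, and every hyperparameter here are illustrative assumptions made up for the example, not anything OpenAI actually built.

```python
# Minimal GAN sketch (illustrative): a generator learns to mimic samples
# from a "real" distribution while a discriminator learns to tell real
# samples apart from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n):
    # Stand-in for "real impressionist paintings": samples from a normal
    # distribution centered at 4.0.
    return torch.randn(n, 1) * 1.25 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator: real samples should score 1, fakes 0.
    real = real_batch(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: try to make the discriminator call fakes real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real mean of 4.0.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

Each round, the generator gets a little better at fooling the discriminator and the discriminator gets a little better at catching fakes, which is exactly the back-and-forth described above.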
The OpenAI Gym specializes in learning agents that rely on reinforcement learning,
and when you break it down, it sounds a lot
like your typical kind of school work. That is, when
the learning agent performs well, it is rewarded; when it
performs poorly, it is punished. So it's kind of like
(28:00):
getting your test paper back and finding out you aced
the exam, or if things didn't go well that you
totally whiffed it and you'll be going to summer school
to make up for that.
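If you want to see what that reward-and-punishment loop looks like in code, here is a minimal sketch using the classic OpenAI Gym interface. The CartPole environment and the random action choice are just illustrative placeholders; a real learning agent would update its behavior based on the rewards it collects, and newer versions of the library have changed this interface slightly.

```python
# Minimal OpenAI Gym sketch (illustrative, classic pre-0.26 API): the
# environment gives the agent an observation, the agent picks an action,
# and the environment answers with a reward. Reinforcement learning is
# about choosing actions that maximize the total reward.
import gym

env = gym.make("CartPole-v1")
observation = env.reset()

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy, stand-in for a learned one
    observation, reward, done, info = env.step(action)
    total_reward += reward  # the "grade" the agent is trying to maximize

print("episode reward:", total_reward)
env.close()
```

The reward here plays the role of the test score: high totals are the equivalent of acing the exam, and low totals are summer school.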
Also in twenty sixteen, OpenAI introduced a platform humbly called Universe. This platform helps track progress
and train learning agents to problem solve, starting with the
(28:22):
most serious of all problems, finding the fun in Atari
video games. I'm talking about classic Autari video games like Pitfall, which,
let's be honest, awesome game. You don't have to find
the fund there, it's right there. But let's say et
the Extraterrestrial or their version of pac Man. Yeah, you
(28:43):
have to really find the fun in those. And I'm
being a little facetious here, but Universe really does train
learning agents by having them learn how to play video games.
They started with the Tari games and then they began
to build from there, and Universe trains these agents to
play the games, and the ideas that by learning how
(29:04):
to play games, as the agents encounter new games, they
can apply the previous learnings from the experiences of playing
everything before to the new game. Just like we humans
will try and apply our knowledge and experience with certain tasks
when we face a totally new situation. You come into
(29:25):
something you've never done before, and you might think, well,
when I do this other thing, I do it this way,
So let me try that here first. Maybe that skill
translates to this new situation, and maybe it works, maybe
it doesn't, but either way, that informs you and then
you can start branching out from there to learn how
to master this new task. That's the idea with Universe.
(29:50):
Gym and Universe both gave a glimpse at the big
plans OpenAI had in store. But there was a
looming problem on the horizon. And it wasn't a malevolent
AI that was hell bent on destroying humanity. It was
a far more mundane threat. Open Ai was in danger
of running out of money. I'll explain more, but before
(30:12):
I run out of money, let's take a quick break.
We're back. Okay, so we're up to the point where leaders in
OpenAI realized that they were facing their own existential
(30:32):
crisis in the form of funding. So in order to
remain relevant and competitive in the fast paced world of
AI development, and in order to achieve the goal of
creating an AGI before anyone else, the company
was going to have to spend enormous amounts of money
on computer systems and other assets like training databases, or
(30:54):
else it was going to get left behind. It just
wasn't possible to do this while also being a strictly
not for profit company, so the leaders started to think
about how they might address this. Meanwhile, in twenty eighteen, Elon Musk
stepped down from the board of directors. Now officially, the
reason given was that Musk wanted to avoid a potential
(31:17):
conflict of interest because Tesla was pursuing its own AI
research and Tesla was bound to compete for the same
talent pool that OpenAI wanted to tap into,
so in order to avoid a conflict of interest, he
resigned from the board of directors. However, Musk also subsequently
tweeted out that he felt open ai was falling short,
(31:39):
mostly on the open part, and that he had disagreements
regarding the direction of the organization's efforts. It was also
in twenty eighteen when OpenAI released its charter, the company charter,
which started to hint at upcoming changes. The charter read,
in part, quote, we anticipate needing to marshal substantial
(32:00):
resources to fulfill our mission, but will always diligently act
to minimize conflicts of interest among our employees and stakeholders
that could compromise broad benefit end quote. It was like
the leaders were starting to couch things in an effort
to explain what was going to be coming up next.
So the following year, twenty nineteen, saw OpenAI create
(32:22):
a new for-profit company as a subsidiary. So the
parent company, OpenAI, Incorporated, remains a not-for-profit
organization, but OpenAI LP is a for-profit company.
OpenAI published a blog post that tried
to explain this decision, saying, quote, we want to increase
(32:44):
our ability to raise capital while still serving our mission,
and no pre existing legal structure we know of strikes
the right balance. Our solution is to create OpenAI
LP as a hybrid of a for profit and nonprofit,
which we are calling a capped profit company end quote.
(33:04):
So the idea here is that an investor can pour
money into open Ai LP and can potentially earn up
to one hundred times that investment as the company releases
and generates revenue from products. But that's the limit. Once
an investor hits one hundred times their investment, that's it, they're done.
You ain't getting a hundred and one times return on
(33:26):
your investment, bucko. So all the additional money over that
one hundred times return would go toward nonprofit work. But, um,
that's a lot, right? One hundred times return on
investment is huge, to the point where some people say, like,
when would you ever hit that? I mean, Google, I
(33:47):
think is somewhere in the realm of twenty times return
on investment if you got in early on. So, um,
it's hard to imagine a hundred times return. So some
people say, well, this is just language to make it
seem like they're still dedicated to this nonprofit but aren't, really.
That's one of the criticisms I've read. Now, just
that's one of the criticisms I've I've read. Now, just
imagine that you know that initial investment into open ai
was a billion dollars, so presumably you'd have to see
more than a hundred billion dollars in profit, uh, in
order to return that to investors before they were all
paid out, and then the rest could go toward nonprofit work.
(34:27):
That's just that initial investment, because believe me, OpenAI
has received subsequent funding. In fact, in twenty nineteen, Microsoft
poured an additional billion dollars into the company, although only
half of that was cash, so it was only like
five hundred million. The other five hundred million was in like cloud
computing credit, so that OpenAI could make use of
(34:47):
Microsoft's Azure platform without having to pay for it because
they had five hundred million dollars in credit. Yowza. And
of course we've heard recently that Microsoft is considering a
ten billion dollar investment into OpenAI, and there
ain't a yowza big enough to express how princely
that sum is. In twenty nineteen, OpenAI did something strange,
(35:10):
at least strange if you remember that open is part
of the company's name. The PR Department released information that
OpenAI had been sitting on a language model named
Generative Pre-trained Transformer 2, or GPT-2. They'd developed
this and not talked about it, and now they were
finally talking about it, and that this language model was
(35:31):
capable of generating text in response to prompts, including stuff
like it could create fake news articles or alternative takes
on classic literature. Further, open ai said that it was
actually too dangerous to release the code because people might
then use the code to create misinformation or worse, which
(35:51):
seemed to fly in the face of open Aiyes, purpose
that the company had fostered a published, often and transparently culture,
and that was keeping certain projects secret, and when finally
talking about them, denying access to the research that seemed
counter to the founding principles of open ai. The folks
(36:11):
in open ai had sort of shifted their perspective a
little bit. In their eyes, some secrecy and restrictions were
needed to ensure safety and security, as well as to
maintain a competitive advantage over others in the field of
AI research. OpenAI would eventually release GPT-2 in
several stages before the full code finally came out in
(36:32):
November twenty nineteen. Critics accused open ai of relying on
publicity stunts to hype up what their research and work
had created, and thus pumping unrealistic expectations into the investor market, like,
in other words, by saying, oh, this is really dangerous,
I don't know if I can let you have this.
It got people really excited about it, and so investors
(36:53):
were willing to pour more money into open Ai. That's
what the critics were saying, that you're just doing this
to get people worked up into a frenzy and that
the staged release process for GPT-2 was OpenAI's
way to capitalize on all this hype gradually, so as
(37:14):
not to just deflate expectations by releasing it and then
everyone saying, oh, that's it. Later, in a paper released
in early twenty twenty, OpenAI revealed another secret, that the company
was essentially using the more power approach of trying to
achieve artificial general intelligence, or AGI. So a
quick word on what they were doing. This was called Foresight,
(37:37):
by the way, So broadly speaking, there are two big
schools of thought on how the world will see a
true a g I emerge. That is, an artificial intelligence
that can perform very much like a human intelligence, you know,
perhaps not in the same way, but again achieving the
same outcomes. So one way, one school of thought
(37:58):
is that we already have all the stuff that we
need in all the AI research that has been done
over the years. We have all the pieces, they're all there.
We just need to amp it up by providing more
computational resources behind it and larger training sets. So everything's
good to go. We just got to provide the power
to push it into the realm of AGI. Now,
(38:20):
the other school of thought is that we're still missing
something, or maybe several somethings, and that until we
figure those out and we incorporate them into our AI strategy,
we just are not going to see an AGI.
It won't matter how much power you put behind it.
We're still missing elements that will actually allow us to
hit AGI status. Now, OpenAI subscribes to the
(38:43):
more power philosophy, generally speaking, and the research paper kind
of explained this. And again, this was something that OpenAI
was holding in secret. They even compelled employees to
stay quiet about the work. And what was essentially going
on was that open ai researchers were taking AI work
that was developed in other research labs and companies. These
(39:05):
were tools that other competitors were offering, and so they
essentially got hold of these tools, and then they jacked
up the power of the tools by training them on
larger data sets and providing more compute computational power to
see if, oh, maybe what we already have is the
(39:26):
way there and we just gotta give it the extra
oomph to get it to in open ai announced the
next generation of its Generative pre Trained Transformer. This would
be GPT three and that it would make available in
Application Programming Interface or a p I, which would be
the company's first commercial product, so customers developers in this
(39:50):
case, could get access to the GPT-3 language model
through this API and then integrate that with their app.
So if it was an app that would help you do
things like, I don't know, book meetings, then the language
model would be part of what would power this app.
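As a rough sketch of what that kind of integration looked like for a developer, here is roughly how an app might have called the GPT-3 completions endpoint through OpenAI's Python library of that era. The meeting-related prompt, the model name, and the parameters are illustrative assumptions, and the interface has changed since, so treat this as a sketch rather than current documentation.

```python
# Rough sketch (illustrative) of an app calling the GPT-3 API through the
# older openai Python library to draft a meeting reply.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; real keys come from your own account

def suggest_meeting_reply(email_text):
    prompt = (
        "Read this email and draft a short reply proposing a meeting time:\n\n"
        + email_text
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family model offered via the API
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

print(suggest_meeting_reply("Hi, can we find time next week to review the budget?"))
```

The app supplies the prompt and the surrounding plumbing; the language model supplies the text.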
The following year, we got open aies tool that would
(40:11):
generate digital images, which is DALL-E. That's D-A-L-L-E,
kind of a combination of WALL-E the
Pixar character and Salvador Dalí, the surrealist artist with the
incredible mustache. So you would feed DALL-E a text prompt
and it would try to create images based on that prompt.
(40:34):
Sometimes it was delightful and sometimes it was disturbing. Sometimes
it was a combination. But it was really impressive that
it was able to do this at all, and similar
to that of other generative image AI services like mid Journey,
which would actually debut a year later in two and
open Ai updated Dolly and released Dolly two. In the
(40:57):
new version of Dolly is able to combine find more
concepts together to create images and also to imitate specific styles.
So you know, if you wanted a style that imitated
a photograph from the nineteen twenties, it would try to
create that effect, or if you were to say,
like, a painting from the Cubist movement, it would
(41:20):
try and accomplish that.
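For a sense of how developers could eventually tap the same capability, here is a rough sketch of requesting an image from OpenAI's image generation API using the older Python library. The prompt, the image count, and the size are illustrative assumptions, and the interface has changed since, so again this is a sketch, not current documentation.

```python
# Rough sketch (illustrative) of generating an image from a text prompt
# through OpenAI's image API via the older openai Python library.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Image.create(
    prompt="a portrait photograph in the style of the nineteen twenties",
    n=1,               # number of images to generate
    size="1024x1024",  # one of the supported square sizes
)

print(response["data"][0]["url"])  # URL where the generated image can be fetched
```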
In late twenty twenty-two, OpenAI introduced ChatGPT, a chatbot built on top of
the GPT-3.5 language model. That's the one
that stirred up conversations around transparency, trusting AI output, and
worrying about students cheating off an AI assistant. Now
(41:42):
we've already touched on this in this episode, about, you know,
a lot of the concern here, and I think a
great deal of it arises not from ChatGPT's incredible abilities,
which are genuinely impressive, but rather our human tendency to
trust automated output implicitly when in fact it's sometimes wrong.
In fact, as many reports have said, sometimes ChatGPT
(42:06):
gets things very very wrong, but it presents it in
a way that appears to be authoritative and trustworthy. So
if we do trust the output of such a system
and then we act on that output, well, we're falling
far short of that AI that's supposed to be beneficial
to humanity, right? OpenAI was built around that. So
(42:28):
this seems again to be a contradiction to OpenAI's
goal, that if it has a chatbot that
occasionally produces incorrect information and then people act on it,
wouldn't you argue that this AI could be potentially harmful
to humanity, not beneficial? Now you could say that it's
(42:48):
the people who are relying too heavily on ChatGPT
that are the problem, and that's not really OpenAI's fault.
They can't control how people use their tools. That,
just like the Tesla owners, people are not properly making
use of the technology with enough awareness of that technology's limitations.
But others might argue that open ai hasn't exactly made
(43:12):
people aware of the limitations at all, at least not
in a way that's equal to the hype that surrounds
their various products. That open Ai is benefiting from this
excitement around the undeniably impressive achievements, but that the company
is failing to live up to this commitment to creating
beneficial AI because they're not being good stewards of this
(43:36):
tool and the outcome of people using it. And it
is a very complicated problem, and AI isn't likely to
solve this one right away. OpenAI is currently developing
GPT-4, so that's the next generation of the language
model it's been developing all these years. CEO Sam Altman
has already said that people are likely going to be
(43:57):
disappointed by GPT-4, not because the model won't
be impressive. I have no doubt it will be, but
because people have already built up in their minds a
bar that GPT-4 simply will not be able to reach.
And while that is a fair observation, I can't help
but think that open ai is at least partly responsible
(44:18):
for encouraging the fervor that led to this impossibly high bar.
I don't think people set it all on their own.
I think OpenAI's own approach has kind of encouraged
this sort of reaction. I mean, there's already this tendency
for us to hype stuff when we just get a
hint of what is possible and we start to extrapolate
(44:40):
from that. That's true all the time. You can see
it over and over and over again in lots of
different technologies throughout the years. But at the same time,
I feel open ai takes a kind of almost coy approach,
and that helps encourage this behavior rather than discourage it.
The company is openly pursuing the goal of building the
(45:01):
first AGI, though as we've seen, it's not
doing so in quite as transparent a way as the organization
first set out to follow. But if you're pursuing that goal,
it means you've got like really big ambitions, and that again,
I think helps to fuel the hype cycle. Now, I
guess I can conclude this episode by just reflecting on
(45:21):
the fact that open ai is a company that Elon
Musk has criticized for failing to be transparent. That's something, y'all. Now.
I don't wish to disparage the people who work for
open ai or even the goal of the organization itself.
I think it's a worthy goal. I think there are
(45:41):
a lot of people who truly believe in that goal
who are working for open Ai. I think the leadership
believes in the goal and that that's what they're pursuing.
It's just the realities of trying to achieve that in
a world where you need to make money in order
to fuel that pursuit creates complications, and there are no
(46:01):
perfect solutions unless you just happen to have, you know,
a bottomless pit of a benefactor who can just
pour money into the organization and allow it to pursue
these developments without having to worry about the commercial
aspect of it. Unless you have that, then you have
(46:22):
to deal with these real world complications. And just like
the autonomous cars that you know, on the surface should
be able to maneuver without any driver in the driver's
seat and do so perfectly safely, we learned that once
you put it into the real world, there are so
many other variables and complications at play. It's never as
(46:44):
simple as you first thought. So I know I've dogged
on OpenAI a lot. There are a lot of
really great critical articles about the company. But I do
believe in the work they're doing. It's just that the way
they go about it has some elements to it that
I find troubling. But it's not like I can suggest
(47:07):
a better approach. I just think that it's important for
us to pay attention and to criticize when necessary, and
to ask questions and to hold the organization accountable because
it has claimed to be this organization founded with a
pursuit of developing beneficial AI and doing so in an open,
(47:28):
transparent way. And if it fails to do that, I
think we have to call them on it, because otherwise
what we get may not be that beneficial AI we've
been hoping for. All Right, that's it for this episode.
Hope you enjoyed it, and if you have suggestions for
topics I should cover in future episodes of tech Stuff,
please reach out to me. You can download the i
(47:48):
heart Radio app for free and navigate over to tech Stuff.
Just put tech stuff in the search field. That will
bring you over to our little page on that app,
and you will find a microphone icon on the tech
stuff page. If you click on that, you can leave
a voice message up to thirty seconds in length let
me know what you would like to hear, or if
you prefer, you can head on over to Elon Musk's
(48:08):
Twitter and you can send me a Twitter message. The
handle for the show is tech Stuff H s W
and I'll talk to you again really soon. Tech
Stuff is an I Heart Radio production. For more podcasts
from I Heart Radio, visit the i heart Radio app,
(48:30):
Apple Podcasts, or wherever you listen to your favorite shows