Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:05):
Apogee Production.
Speaker 2 (00:10):
AI voices have been used in this
podcast to help recreate some moments.
Speaker 3 (00:16):
Welcome back to Deepfake: The Forty Million Dollar Fraud.
I'm Amelia Thompson. We'll come back to the Hong Kong
scam across the next few episodes, but for now, I
want to take a small deepfake break and look
into other things that aren't what they seem. Let
me introduce you to Lil Miquela.
Speaker 4 (00:35):
Hey, it's Miquela, and I just did an interview with
the Zach Sang Show.
Speaker 3 (00:38):
Lil's bio says she's a twenty-one-year-old robot
living in LA. Her tagline is Be Your Own Robot.
Miquela's a complete fake, and everyone knows it: over fifteen
hundred social posts, all of them generated by AI. She's
the brainchild of Trevor McFedries and Sara DeCou, the co-founders
of Brud and the creators of the first computer
(01:00):
generated social media influencer. She charges ten grand for a
social post. In twenty nineteen, her real net worth was
one hundred and twenty-five million dollars. She's worked on
campaigns for Samsung, Prada, and Calvin Klein.
Speaker 4 (01:16):
Everything seemed unimaginable when I was just a few lines
of code. I love creating new things, achieving the impossible.
So that's what being part of Team Galaxy is all about.
Speaker 3 (01:30):
Team Galaxy. And this social media robot influencer is controversial.
The Internet went into meltdown after she appeared in a
Calvin Klein commercial kissing the real celebrity Bella Hadid
in May twenty nineteen.
Speaker 4 (01:44):
It was a dream come true. She's hot. Who wouldn't
want to get you?
Speaker 3 (01:49):
The robot influencer and the real Bella got backlash after
the kissing video was released, being accused of using their
sexuality to artificially increase their views.
Speaker 5 (01:59):
Was she a good kisser?
Speaker 1 (02:00):
Come on? What do you think?
Speaker 4 (02:03):
It's Bella Hadid, okay? What can she do?
Speaker 5 (02:05):
That's true?
Speaker 3 (02:05):
In December twenty nineteen, the fictional robot told the world
of her sexual assault on YouTube. American singer Kehlani called
out Miquela on Twitter for being ignorantly offensive. Miquela is
also a singer with her own channel on Spotify.
Speaker 6 (02:20):
You'll never see how you really feel? How can I
know if this thing is real?
Speaker 5 (02:26):
Visit on in my head.
Speaker 3 (02:29):
There are one hundred and forty five thousand monthly listeners
following her on Spotify, and she's even done collaborations with artists.
Speaker 7 (02:36):
Like Love You just worked with Lave Sims, great record.
Speaker 4 (02:41):
Oh so you are a fan?
Speaker 5 (02:42):
Yeah?
Speaker 4 (02:43):
You just don't keep up with the Instagram beat. I
would probably cry if Lizzo let me jump on the track.
Same with rear Stara, Steve Lacy, and Charli XCX. Oh,
I stan.
Speaker 3 (02:55):
She's also been interviewed by a real person on YouTube
about being a robot influencer.
Speaker 4 (03:00):
How do you realize you could sing?
Speaker 1 (03:02):
I don't know.
Speaker 4 (03:02):
Do you remember realizing you could talk?
Speaker 5 (03:05):
It's just.
Speaker 4 (03:06):
Uh no, actually yeah, it's kind of like that. I
don't know. Like I've always used sound to express how
I'm feeling, even if I wasn't sharing it with the
world necessarily.
Speaker 3 (03:16):
It's pretty crazy that a robot who isn't real could
generate millions of likes on social media and make a
truckload of cash.
Speaker 4 (03:24):
You're probably asking yourself, how is this robot talking to
me right now? Did y'all hear about the robot invasion
of twenty twenty? I mean, I've been around for a minute,
But I feel like last year was on a whole
other level.
Speaker 6 (03:37):
If you relate to the party.
Speaker 4 (03:38):
Hey, I'm Miquela. I'm a nineteen-year-old robot. I've
been in LA making music, and, well, just
keep watching and catch up.
Speaker 3 (03:48):
Remember, what you're listening to is not a real person;
she's made by AI. I mean, who would fall for
something like that where you think someone's real and then
they turn out to be fake. Ken Gamble is a
well-known cybercrime expert.
Speaker 7 (04:03):
Really it's lovely to chat to you today.
Speaker 3 (04:05):
He is the executive chairman and co founder of IFW Global.
He says it's pretty easy to be tricked.
Speaker 7 (04:12):
So deep fakes don't always have to be surrounding fraud.
You can also experience a deep fake with a podcast
like this, for example. I mean, we could be actually
doing a deep fake right now, and Amelia Thompson may
not even be a real person.
Speaker 3 (04:29):
If you haven't yet clocked it, I am not who
I say I am.
Speaker 8 (04:32):
I'm not Amelia Thompson.
Speaker 3 (04:34):
A journalist who's been working for twelve years in financial crime.
I am a deep fake.
Speaker 7 (04:42):
So, you know, beware that deep fakes are everywhere and
they could fool you at any time.
Speaker 3 (04:48):
I've been created using generative AI voice reproduction. Throughout the podcast,
you might have already worked that out. Everything you're hearing
has been written by a human. We've left in the
breaths and even made me stumble here and there. Occasionally,
when the voice is generated, it doesn't sound real, so
the humans behind me keep updating the voice until I
(05:09):
sound close enough to being a real person. Now, I don't want
to spoil it for anyone else, so if we can
(05:30):
keep this as our little secret, that would be great.
I mean, it probably wasn't that hard to pick. There
are warnings at the start of each show that might
have blown my cover anyway. I also want you to
keep listening because we have more deep fake info we
want you to be armed with when you go out
to the real world. The idea of this podcast was
(05:52):
to tell you the story of Arup, but also show
you how to spot a fake and to be alert,
not alarmed. Let's get back to some real scams that
are intended to deceive to make money. A few weeks ago,
a man called Mike Smith was indicted over allegations that
he used an AI music company to create hundreds of
(06:12):
thousands of songs. Mike then used bots to stream
those deepfake songs and earn ten million dollars in
streaming income. It's alleged he had been doing it since
twenty seventeen.
Speaker 6 (06:24):
Five o'clock only WBTV was there as FBI agents raided
a Cornelius man's home. He's a musician now facing serious charges,
accused of using AI to create and stream music and
illegally collect more than ten million dollars in royalties.
Speaker 5 (06:40):
Michael Smith is charged with wire fraud conspiracy, wire fraud, and money
laundering conspiracy. We caught word of this morning's FBI raid
at the musician's home. This morning, our Erica Lunsford was there.
She joins us live near the Cornelius musician's home. Erica,
I'm sure these neighbors were just stunned to see this unfolding.
Speaker 3 (06:56):
The indictment alleges that around twenty eighteen, Smith began
working with the chief executive officer of an unnamed AI
music company and a music promoter to create thousands
of songs that Smith could then fraudulently stream.
Speaker 9 (07:11):
FBI agents were in the driveway of Smith's home
on Will Dare Drive. They appeared to be searching through
the home. And again, Smith is accused of using automated
programs to stream AI-generated songs billions of times. Documents
state that hundreds of thousands of AI-generated songs were
streamed by bot accounts billions of times, which allowed him
to fraudulently obtain more than ten million dollars in royalties.
(07:35):
The indictment alleges that at a certain point, from about twenty
seventeen up to and including twenty twenty-four, Smith estimated
that he could use the bot accounts to generate over
six hundred and sixty-one thousand streams a day, yielding
annual royalties of over a million dollars. Now, according to
the US Attorney's Office, Smith repeatedly lied to the streaming
platforms when he used false names and other information to
(07:57):
create the bot accounts, and when he agreed to abide by
the terms and conditions that prohibited streaming manipulation.
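The arithmetic behind those figures can be sanity-checked. A minimal sketch: the daily stream count and the million-dollar annual royalty floor come from the indictment as quoted above, while the implied per-stream payout is our derivation, not a number the indictment states.

```python
# Figures quoted from the indictment: ~661,000 bot streams per day,
# yielding annual royalties of over a million dollars.
streams_per_day = 661_000
annual_streams = streams_per_day * 365   # roughly 241 million streams a year

royalty_floor = 1_000_000                # "over a million dollars" annually
implied_payout = royalty_floor / annual_streams

print(f"annual streams: {annual_streams:,}")
print(f"implied payout per stream: ${implied_payout:.4f}")
```

The implied payout of roughly four tenths of a cent per stream is consistent with the fractions of a cent streaming platforms typically pay out, which is why the scheme needed billions of streams to reach ten million dollars.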
Speaker 3 (08:07):
Back to the scam at Arup and how new technologies
are affecting global markets like Hong Kong. Hong Kong Police
say that in the first half of twenty twenty-four,
the city's police force recorded sixteen thousand, one hundred and
eighty-two technology-related criminal cases. Losses in these cases
amounted to almost seven hundred million Australian dollars. According to Police
(08:28):
Chief Superintendent Raymond Lam Cheuk-ho, Hong Kong Police have recorded
three cases related to the technology and discovered twenty-one
clips using deep fakes to impersonate government officials or celebrities
on the internet since last year. The Hong Kong Securities
and Futures Commission earlier this year warned of a scam
using deep fakes of Elon Musk touting a cryptocurrency trading
(08:51):
platform called Quantum AI.
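To put those Hong Kong numbers in perspective, an average loss per case can be derived from the two figures quoted above. The case count and the near-seven-hundred-million-dollar total are the police statistics as quoted; the per-case average is our derivation.

```python
# Hong Kong Police figures for the first half of 2024, as quoted:
cases = 16_182                    # technology-related criminal cases
total_losses_aud = 700_000_000    # "almost seven hundred million Australian dollars"

average_loss = total_losses_aud / cases
print(f"average loss per case: about A${average_loss:,.0f}")
```

That works out to somewhere in the tens of thousands of Australian dollars per case, on average.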
Speaker 8 (08:54):
It claims that Elon Musk has created some ai crypto.
Speaker 1 (08:57):
Thing and he's inviting people and he wants to help people.
He's just a good person wanting to give people
free money, and that's why he created Quantum AI.
Speaker 2 (09:05):
If you don't make your first million and six months,
I will personally give you a Tesla Model three.
Speaker 5 (09:10):
Hello, my name is Elon Musk. Hello AI.
Speaker 3 (09:16):
Hong Kong police have also cracked down on a fraud
syndicate that sent more than twenty online loan applications that
used deep fake technologies to bypass the online application process.
One of the applications, for a seventy-thousand-dollar loan,
was approved. So could AI be the end of the world?
Is it all doom and gloom? Or is AI only
(09:38):
as smart as the people that control it? Or are
there positives that can come out of this type of technology.
Let's go back to our legal AI expert, Adrian McCullagh.
He has a PhD in cryptology, which is the mathematics
of coding.
Speaker 1 (09:55):
Oh look, there are a lot of really good things about
artificial intelligence. The first thing to grasp is that artificial
intelligence is not there as a replacement for many aspects of
human life. It is complementary. It's a tool to
(10:15):
be used. For example, I use Anthropic's Claude as
a legal AI tool. It won't replace lawyers, but it can
assist lawyers greatly in accelerating the process of delivering contracts
(10:38):
and advices to clients. And it should, actually, because by speeding
up the process, it actually is cheaper for the client
in the long run. So yeah, there are some very good uses.
My son is in advertising, and he uses, I
(10:58):
think it's called Midjourney, as a tool to
assist in developing ideas in advertising. So there's a lot
of good things. It's been around for a long time.
The term artificial intelligence was first coined by a guy
(11:20):
called John McCarthy in nineteen fifty-five. And artificial intelligence
doesn't replicate human thought; it imitates human thought processes.
I'm writing a paper at the present moment on what's
called technological singularity, and it's an interesting area in artificial intelligence,
(11:43):
whereby there's a particular psychologist at Stanford University. He's just
about to retire. He has a massive amount of
academic writing. And one of his students built a large
language model on all of his writings. And this large
language model was able to replicate his thought processes on
(12:09):
various ideas. And this student presented it to the
Stanford professor.
Speaker 9 (12:17):
What it does.
Speaker 1 (12:19):
The artificial intelligence tool will operate past his death, and
he said, this has got to be the closest thing
to immortality that you could think of. And they gave
this AI tool a set of questions, and it came up
(12:41):
with answers, psychological analysis, and structures, and gave them to
his wife, and she came back and said, that's exactly
how you would have approached this.
Speaker 3 (12:52):
New laws that start next year have just been passed
in the UK. The laws say that social media bosses
could face jail if they persistently fail to stop their
platforms being used for revenge porn and deepfake images. It
will mean that companies such as Facebook that fail to
prevent these images from being shared, or do not remove them quickly,
(13:13):
will face fines of up to ten percent of their
global turnover.
Speaker 1 (13:20):
I think that there's going to be an obligation
Speaker 2 (13:26):
on social media companies to not be exempt from liability
where they have been a conduit of misinformation. Misinformation is
the biggest thing that these AI tools can be used for.
(13:50):
They can do a lot of good and they can
do a lot of bad. It's like, as I explained
to some students, I said, well, think about the early
twentieth century. You had this motorcar, okay? Now, it had a
lot of good, but at the same time it was
(14:12):
used in bank robberies. Just because there's a sector of
the community that uses a piece of technology for a
nefarious activity doesn't mean you should ban the whole thing.
You just need to regulate it appropriately.
Speaker 1 (14:29):
I don't think that in the long run, social media
companies will be able to continue with the way they
are operating in a wild West fashion. I think that
eventually they will be caught up with and told they
have to monitor a lot more carefully what they are presenting.
Speaker 3 (14:56):
Georgia Harrison, the Love Island star whose former boyfriend was
jailed for sharing a video of them. Her ex partner,
Stephen Bear, was jailed for twenty one months after being
found guilty of voyeurism and sharing private sexual videos online.
The twenty eight year old said seeing the footage on
subscription site OnlyFans was the final straw for her. He
(15:20):
had used CCTV cameras in his garden to capture them
having sex, and then sent it to a friend and
sold the video online, none of which she consented to.
Speaker 1 (15:30):
I think, first of all, as soon as it came out,
I had people from television shows saying, you know, you
should be documenting this. This is something that could really
make a difference in society.
Speaker 3 (15:38):
Yuval Noah Harari is an author and thought leader.
His book Sapiens has sold over twenty five million copies.
It is a New York Times Top ten best seller.
His new book is called Nexus, and in it, he writes.
Speaker 8 (15:52):
We are living through the most profound information revolution in
human history. To understand it, we need to understand what
has come before. We have named our species Homo sapiens,
the wise human. But if humans are so wise, why
are we doing so many self destructive things? In particular,
why are we on the verge of committing ecological and
technological suicide?
Speaker 3 (16:13):
In a strange twist, we have used AI to read
his quotes. These are his words, but not his voice.
Speaker 8 (16:20):
I prefer to think about AI as alien intelligence. I
know that the acronym is artificial intelligence, but I think
it's more accurate to think about it as an alien intelligence,
not in the sense of coming from outer space, in
the sense that it makes decisions in a fundamentally different
way than the human mind. Artificial carries the
(16:40):
sense that we designed it, that we control it. Something
artificial is made by humans. AI is becoming less and
less artificial and more and more alien. AIs learn, and
they change, and they start making unexpected decisions, and they
start coming up with new ideas which are alien to
the human way of doing things.
Speaker 3 (16:58):
So is it the humans that are creating the content
that are the problem or is it the fault of
the platforms that create the technology.
Speaker 8 (17:05):
If the question is whether to ban somebody like Donald
Trump from Twitter, this is a very difficult issue and
we should be extremely careful about banning human beings, especially
important politicians, from voicing their views and opinions, however much
we dislike their opinions or them personally. It's a very
serious matter to ban any human being from a platform.
(17:26):
But this is not the problem. The problem on the
platform is not the human users. The problem is the algorithms,
and the companies constantly shift the blame to the humans
in order to protect their business interests.