Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
Welcome everyone to another episode of Dynamics Corner. Is AI a necessity for the survival of humanity? That's my question. I'm your co-host, Chris, and this is Brad.
Speaker 2 (00:11):
This episode was recorded on December 18th, 2024. Chris, Chris, Chris. Is AI required for the survival of humanity? Is humanity creating the requirement for AI for survival? That's a good question. When it comes to AI, I have so many different questions, and there are so many points that I want to discuss about it. With us
(00:34):
today, we had the opportunity to speak with Soren Alexandersen and Christian Lenz about some of those topics. Good morning, good afternoon.
(00:59):
How are you doing there?
Speaker 3 (00:59):
There we go. Good day, good afternoon over the pond.
Speaker 2 (01:06):
How are you doing?
Good morning, well, good goodgood, I'll tell you, soren, I
love the video.
What did you do?
You have the nice, the nice blurred background, the soft lighting.
Speaker 3 (01:23):
Yeah, you can see great things with a great camera.
Speaker 2 (01:27):
It looks nice, it looks really nice. Christian, how are you doing?
Speaker 4 (01:31):
Fine, thank you very much.
Speaker 2 (01:35):
Your background's good too. I like it, it's real.
Speaker 1 (01:38):
Back to the Future.
Speaker 2 (01:41):
It is good, it is good. But thank you both for joining us this afternoon, this morning, this evening, whatever it may be. I've been looking forward to this conversation. I was talking with Chris prior to this: this is probably the most prepared I've ever been for a discussion. How well prepared I am, we'll see, because I have a lot of things that I would like to
(02:01):
bring up, based on some individual conversations we had via either voice or via text. And before we jump into that and have that famous topic, can you tell everybody a little bit about yourself, Soren?
Speaker 3 (02:18):
Yes, so my name is Soren Alexandersen. I'm a product manager on the Business Central engineering team, working on finance features, basically rethinking finance with Copilot and AI.
Speaker 2 (02:33):
Excellent, excellent. Christian?
Speaker 4 (02:37):
Yeah, I'm Christian. I'm a development facilitator at CDM. We're a Microsoft Business Central partner, and I'm responsible for the education of my colleagues in all the new topics, all the new stuff. I've been a developer in the past, and a project manager, and now I'm taking care of taking all the information in so that it
(03:00):
leads to good solutions for our customers.
Speaker 2 (03:04):
Excellent, excellent, and thank you both for joining us again. You're both veterans, and I appreciate you both taking the time to speak with us, as well as your support for the podcast over the years. And just to get into this: I know, Soren, you work with AI and work with the agent portion, I'm simplifying some of the terms, within Business Central for the product group. And you
(03:30):
know, in our conversations you've turned me on to many things. One thing you've turned me on to was a podcast called The Only Constant, which I was pleased, I think it was maybe at this point a week or so ago, maybe a little bit longer, to see that there was an episode where you were a guest on that podcast talking about AI and, you know, Business Central and ERP in
(03:52):
particular. I mean, I think you referenced Business Central, but I think the conversation that you had was more around ERP software, and that got me thinking a lot about AI. And I know, Christian, you have a lot of comments on AI as well too. But the way you ended that, with, you know, "nobody wants to do the dishes," is
(04:15):
wonderful, which got my mind thinking about AI in detail: what AI is doing and how AI is shaping, you know, business, how AI is shaping how we interact socially, how AI is shaping the world. So I was hoping we could talk a little bit about AI with everyone today. So with that, what are your thoughts on AI?
(04:39):
And also, maybe, Christian, what do you think of when you hear of AI or artificial intelligence?
Speaker 4 (04:46):
I would say it's mostly a tool for me. Getting a little bit deeper into what it is: I'm not an AI expert, but I'm talking to people who try to elaborate how to use AI for the good of people.
(05:07):
For example, I had a conversation with one of those experts from Germany just a few weeks before Directions, and he told me how to make use of custom GPTs, and I got the
(05:28):
concept and tried it a little bit. And when I got to Directions EMEA in Vienna at the beginning of November, the agents topic was everywhere. So it was Copilot and agents, and it prepared me a lot for how this concept is evolving and how fast it is evolving. So I'm not able to catch up on everything, but I have good connections to people who are experts in this and focus on
(05:51):
this, and the conversations with those people, not only on the technical side but also on how to make use of it and what to keep in mind when using AI, are very crucial for me to make my own assumptions and decide on the direction where we should go as users, as partners, for our customers, and to consult our
(06:17):
customers. And on the other side, with the evolving possibilities and capabilities of AI generating whole new interactions with people, it gets much harder to keep this barrier in mind: this is a machine doing something that I receive, and
(06:41):
this is not a human being or a living being that is interacting with me. It's really hard to have a bird's-eye view of what is really happening here, because what we have with AI is so like human interaction that it is hard not to react as a human to
(07:06):
this human interaction and then have an outside view of it: how can I use it, and where is it good or bad, or something like that, that moral conversation we're trying to have. But having conversations about it and thinking about it helps a lot, I think.
Speaker 2 (07:27):
Yeah, it does. Soren, you have quite a bit of insight into the agents and working with AI. What are your comments on AI?
Speaker 3 (07:38):
I think I'll start from the same perspective as Christian: that for me, AI is also a tool, in the sense that, when looking at this from a business perspective, you have your business desires, your business goal, your business strategy, and whatever lever you
(08:00):
can pull to get you closer to that business goal. AI might be a tool you can utilize for that. It's not a hammer to hit all of the nails; I mean, it's not the tool to fix them all. In some cases it's not at all the right tool. In many cases it can be a fantastic tool.
(08:21):
So that depends a lot on the scenario. It depends a lot on the goal. I will say that I'm fortunate in the way that I don't need to know the intricate details of every new GPT model that comes out and stuff like that. That's too far for me to go, and I could do nothing else.
(08:44):
And to your point, Christian: you said you're not an AI expert. But I mean, by modern standards, and with the AI that we typically talk about these days, well, LLMs, it's only been out there for such a short while. Who can actually be an AI expert yet? Right? I mean, it's been out there for a couple of years.
(09:05):
In this modern incarnation, no one is an expert at this point. I mean, you have people who know more than me and us, maybe, given the audience here, but we all try to just learn every day. I think that's how I would describe it. There are some interesting things.
(09:28):
I mean, from my perspective as a product manager, what I'm placed in this world to do is to basically rank customer opportunities and problems. That's my primary job. Whether or not AI can help solve some of those
(09:48):
opportunities or problems? Great. So that's what I'm about to do: reassess all those things that I know about our customers, our joint customers and partners, and how AI can help with those.
Speaker 1 (10:05):
Yeah, just when you started speaking about the dishwasher, it made me chuckle and say: how can you relate that to why AI was invented? And I had to look it up. I looked up, you know, why was the dishwasher invented? I thought it was pretty interesting to share with the
(10:27):
listeners. It was Josephine Cochran who invented the dishwasher, and her reasoning was to protect her china dishes; she didn't want to hand wash, and she wanted to free up time. And how relatable is that with AI?
(10:49):
We want to free up our time to do other things and use AI for that. In this case, she had noted that she wanted to avoid hand washing; she wanted to create a machine that could wash dishes faster and more carefully than she could. So, in a sense, when AI is invented, you kind of want to
(11:17):
have a tool, in this case an AI tool, to do other things for you, maybe better than you can, and maybe more carefully in feeding you information. I don't know, but I thought that was pretty interesting.
Speaker 3 (11:31):
The relatable component there, that makes total sense to me. It makes sense in that AI is very good at paying attention to detail that a human might overlook if we're tired, or it's the end of the day, or early morning. So there are so many relatable things in what you
(11:55):
just said that apply to AI, or even just technology and automation, I mean. It's not just AI, because IT is about automating stuff. AI just brings another level of automation.
Speaker 2 (12:08):
You could say it is a beneficial tool. But, Chris, to go back to your point with the invention of the dishwasher, and maybe even the invention of AI: I don't know the history of AI, and I'm not certain if you know. I'm sure you could use AI to find the history of AI. But is AI one of those tools? I have so many thoughts around AI, and it's tough to find a way
(12:32):
to get into and unpack all of the comments that I have on it. But a lot of tools get created or invented without the intention of them being invented. You know, sometimes you create a tool, or you create a
(12:52):
process, or something comes of it, and you're trying to solve one problem. Then you realize that you can solve many other problems by either implementing it slightly differently or, you know, working on it with another invention or a tool that was created. So where does it end? And with AI, I think we're just... I don't know if we'll ever, or
(13:15):
we can even, understand where it will go or where it will end. We see how individuals are using it now, such as creating pictures, right? I'm looking at some of the common uses of it outside of the analytical points of it: people creating pictures. You know, a lot of your search engines now will primarily give you the AI results of the search, which is a summary of sources that they cite. AI gets used, you know, in that way, from the
(13:37):
language model point of view, but then AI also gets used from a technical point of view. I'm also reading... I started reading a few weeks ago a book, Moral AI and How We Get There, which is by Pelican Books, and I think it's Borg, Sinnott-Armstrong and Conitzer, I'm so bad with names, which also
(13:57):
opened up my eyes to AI and how AI impacts everybody in the world.
Speaker 1 (14:07):
I think it creates different iterations, right, with AI. You know, clearly, you see AI practically anywhere. You had mentioned, you know, creating images for you; it started with that and then followed with creating videos for you now, and so
(14:28):
much more. And then, you know, Soren, I was listening to your episode: where does AI come into play in ERP, and where does it go from there? Right? I'm sure a lot of people are going to create different iterations of AI and Copilot and Business Central, and that is
(14:49):
what I'm excited about. We're kind of scratching the surface in the ERP, and what else can it do for you in the business sense? Of course, there are different AIs with M365 and all the other Microsoft ecosystem product lines. What's next for businesses, especially in the SMB space?
(15:11):
I think it's going to create a level playing field for SMBs to be able to compete better, where they can focus more on strategy and be more tactical in the way they do business. So that's what I'm excited about, and I think a lot of us here on this call are the, I guess, curators, and that's
(15:35):
where we become more business consultants, in a sense of how you would run your business utilizing all these Microsoft tools and AI.
Speaker 4 (15:46):
I think yeah.
Speaker 1 (15:46):
I think... go ahead.
Speaker 3 (15:48):
Christian.
Speaker 4 (15:49):
Okay, I think that we see some processes done by AI or agents which we never thought would be possible without the human doing them. What was presented is really mind-blowing: what level of steps and
(16:10):
pre-decisions AI can make, and offer a better result in the process, until a human needs to interact with it. And I think that will go further and further and further. What I'm thinking is: where is the point where the human says,
(16:33):
okay, there is a new point where I have the feeling that now I have to intervene in this process because the AI is not good enough? And that point, or this frontier, is pushed on and on and on, something like that.
(16:54):
But to have this feeling, to keep in mind that this is the thing AI cannot do, I have to be conscious and cautious. And I think, on the one hand, with AI we can run more processes, we can make more
(17:20):
decisions easily, and on the other side, the temptation is high that we just accept what the AI is prompting to us or offering us. I like the concept of the human in the loop. So at least the human at some point in this process has to say,
(17:42):
yes, I accept what the AI is suggesting. But having more time to process more communication is also critical, not just clicking yes, okay, okay, okay. I think we should implement processes where we just say,
(18:06):
okay, let's look at how we use AI here, and take a step back and say, wow, what a number of steps AI can make for us. But also think about where it just goes too far.
Speaker 3 (18:25):
I think that's an interesting line of thinking, Christian. And before we go deeper, let me maybe just say that some of the stuff that we talk about in this episode, if nothing else is mentioned, these are my personal opinions and may not reflect the opinions of Microsoft. Now let's sort of get into product-specific stuff. I
(18:47):
would like to take sort of a product's-eye view on what you just said, which is: when we look at agents these days, what can an agent do, what should be the scope of a given agent, and what should be its name? So now we've released some information about the sales order agent and described how it works, actually being fairly transparent about what it
(19:11):
intends to do and how it works, which I think is great. We actually start by drawing up the process today, before the agent: how would this process look? Where are the human interactions, and between which parties? Now bring in the agent.
(19:34):
Now, how does that human-in-the-loop flow, let's say, look? Are there places where the human actually doesn't need to be in the loop? That's the idea. Don't bring in the human unless it's really necessary or adds value. So that's the way that we think about it, to
(19:54):
try to really apply it. You know, if that A-to-Z process can remove the human, like, can automate a piece... we've always been trying to automate stuff, right, for many years. If AI can do that better now, well, let's do that. But of course, whenever there's a risk situation, or wherever
(20:15):
there's a situation where the human can add value to a decision, by all means, let's bring the human into the loop. So that's the way that we think about the agents and the tasks that they should perform in whatever business process. And to your point, Chris, I think that the cool thing about
(20:37):
AI in ERP, as in Business Central, these days is that it becomes super concrete. We take AI from something that is very sort of fluffy, marketing and buzzwords that we all see online, and we make it into something that's very concrete. So the philosophy is that in BC, unless, of course, you're an
(21:00):
ISV that needs to build something on top of it, or a partner or a customer wants to add more features, AI should be ready to use out of the box. You don't have to create a new AI project for your business, for your enterprise, to start leveraging AI. No, you just use AI features that are already there, immersed in the UI among all the other feature functions in Business
(21:22):
Central. Because many small and medium businesses don't even have the budget to do their own AI project and hire data scientists, and what have you, and all these things, create their own models. No, they should have AI ready to use. So that's another piece of our philosophy.
Speaker 2 (21:44):
I look at that more as AI as a function, because if you have AI as a function, you can get the efficiencies. I think, to some of the comments from the conversations that we've had and the conversations that I've heard: you look for efficiencies so that you can do something else. People want to use the words "something else," or something that they feel is more productive, and let automation or AI or
(22:10):
robots, I use the word in quotes, do the tasks that are mundane, or that some would consider boring or repetitive. And we do use AI on a daily basis in a lot of the tools that we have. To your point, Soren, it's just embedded within the application. If you buy a vehicle, a newer vehicle now, they have
(22:31):
lane avoidance, collision avoidance, all of these AI tools that you just get in your vehicle. You either turn it on or turn it off, depending upon how you'd like to drive, and it works, and it helps the function, uh, be there for you. But to kind of take a step back from AI in that respect:
(22:54):
a couple things that I come to with AI. We talk about the vehicle. I'll admit I have a Tesla. I love the FSD, and I used it a lot, and it just seems to improve and improve and improve, to the point where I think sometimes it can see things, I use the word see, or detect things faster than
(23:15):
I can as a human, right. Now, AI may not be perfect, and AI makes mistakes. Humans make mistakes. Humans get into car crashes and have accidents, right, for some reason, and we have accepted that. But if AI has an accident, we find fault or find blame in that
(23:36):
process, instead of understanding that, you know, in essence, nothing is perfect, because humans make mistakes too, and we accept it. Why don't we accept it when AI may be a little off?
Speaker 3 (23:51):
That's such a great question, and the fact is, I think, right now, that to a point we don't accept it. Like, we don't give machines that same benefit of the doubt; if they don't work, it's crap and we throw them out. But humans, we're much more
(24:13):
forgiving; we give them a second chance. And, oh, maybe I didn't teach you well enough how to do it, or so. But that's a good point, and I love your example with the Tesla. So I also drive a Tesla, but I'm not in the US, so I can't use the full self-driving capability. So I use the, what do you call it, the semi-autonomous mode, so it can keep me within the lane.
(24:35):
It reacts in an instant if something drives out in front of me, much faster than I can. So I love that mix of me being in control but just being assisted by these great features that make me drive in a much safer way. Basically, I'm not sure I'm a proponent of sort of full
(24:55):
self-driving. I don't know, I'm still torn about that, but that could lead us into a good discussion as well.
Speaker 1 (25:05):
I think you have that trust because... I'm the same way, Brad. You know, I love it as I, you know, continue to use it. But in the very beginning, I could not trust that thing. I had my hand on the steering wheel, you know, a white knuckle on the steering wheel. But eventually I came to accept it, and I was like, oh,
(25:27):
that does a pretty good job getting me around. Am I still cautious? Absolutely. I still want to make sure that I can quickly take control if I don't believe it's doing the right thing.
Speaker 3 (25:38):
So I think, actually, my reason for not being a sort of full believer in full self-driving, like complete autonomy with cars, is not so much because I don't trust it. I mean, I actually do trust the technology to a large extent. It's more because of many of the reasons that are in that
(25:58):
book that I pitched to all of you, that Moral AI book: like, who has... if something goes wrong? And there's this example in the book where an Uber car, I think it was a Volvo, they test an Uber car, some self-driving capabilities, in some state, and it accidentally runs over a woman who's crossing the
(26:20):
street in an unexpected place, and it was dark, and things of that nature, and the driver wasn't paying attention. And there were all these questions about who has the responsibility for that at the end of the day. Was it the software? Was it the driver, who wasn't paying attention? Was it the government, who allowed that car to be on that
(26:40):
road in the first place while testing it out? All of these things. And if we can't figure that out... all those things need to be figured out first before you let a technology loose like that, right? And I wonder if we can do that. We don't have a good track record of
(27:06):
doing that. So I wonder... I'm fairly sure the technology will get us there, if we can live with it when it doesn't work well. So what happens if a self-driving car kills 20 people per year, or cars, multiple?
(27:26):
Can we live with that? What if 20 people is a lot better than 3,000 people from human drivers?
Speaker 2 (27:35):
Yeah, that is... I think in the United States there's 1.3... don't quote me on the statistics. I think I heard it again with all these conversations about self-driving and, you know, the Moral AI book, and listening to some other tools. I think in the United States it's one point three million fatalities due to automobiles a year. You know, I forget if it's a specific type, but it's a lot. So, to get to your point, you know, not to focus on the
(28:02):
driving portion, because there are a lot of topics we want to talk about: is it safer, in a sense? Because you may lose 20 individuals tragically in accidents per year, right, whereas before it was a million, because of AI. You know, I joke, and I've had conversations with Chris talking about the Tesla.
(28:22):
I trust the FSD a lot driving around here in particular; I trust the FSD a lot more than I trust other people. And to your point of someone losing their life tragically, crossing in the evening at an unusual place and having a collision with a vehicle: that could happen with a person driving
(28:42):
as well. And I've driven around, and the Tesla detected something before I saw it, so the reaction time is a little bit quicker. And it goes to a couple points I want to talk about, which I'll bring up too: you know, too much trust
(29:03):
and de-skilling. I want to make sure we get to those points. And then also, if we're looking at analytics, some, you know, harm bias as well. no-transcript
(30:02):
And then, to Christian's point, and even your point, where the humans are involved: are the humans even capable, with the de-skilling? Because you don't have to do those tasks anymore to monitor the AI. You know, if you look back, and I'm going to go on a little tear in a moment: in education, when I was growing up, we learned a lot of math, and we did not, you know, use calculators.
(30:24):
I don't even know when the calculator was invented, but we weren't allowed to. You know, they taught us how to use a slide rule. They taught us how to use, believe it or not, when I was really young, even an abacus. And back then I could do math really, really well. Now, with the, you know, ease of using calculators, ease of using your phone, or ease of even using AI to do math equations, can you even do math as quickly as you used to?
(30:47):
So how can you monitor a tool that's supposed to be calculating math, for example?
Speaker 3 (30:54):
I think you have very good points. Just coming back to the car for a second, because, I mean, the technology will speak for itself and what it's capable of. I think where we have to take some decisions that we haven't had to before is when we dial up the autonomy to 100% and the
(31:15):
car drives completely on its own, because then you need to be able to question how it makes decisions, and get insights into how it makes decisions and based on what. Who determines how large an object has to be before the car will stop, if it runs something over? So I think back in the old days in Denmark, insurance companies
(31:41):
wouldn't cover it if the object you ran over was smaller than a small dog, something like that. So who sets those rules? And the same thing for the technology too: should I just run that pheasant over, or should I stop for the pheasant? Those kinds of decisions. But if it's a human driving, in control, we can always just
(32:02):
point to the human and say, yeah, you need to follow the rules, and here they are. But if it's a machine, all kinds of things come up, and eventually, if the machine fails, or we end up in some situation where there's a dilemma, who's responsible, who's accountable? Those just become very hard questions. I don't have the answer, but I think when we dial up the
(32:23):
autonomy to that level, we need to be able to... you know, we need to talk about what level of transparency I can demand as a user or as a bystander or whatever. So there are just so many questions that open up, I think.
Speaker 4 (32:39):
And if you are allowed to turn off AI assistance, will you, at some point in time, when a failure occurs, be responsible for having turned that assistance off?
Speaker 2 (32:53):
That's a very good point.
Speaker 4 (32:55):
Someone could say so. So you have to keep in mind that with assistance you're better. Like in the podcast episode you mentioned: a human together with a machine is better than the machine. Or you could say a human with a machine is better than another human, or just a human.
(33:17):
And I think at some point in time, companies who are looking for accountability and responsibility will increase the level of "you have to turn on AI assistance." You could imagine getting into a car that recognizes
(33:39):
you as a driver, your facial expression or something like that, so that it can recognize whether you're able to drive or not. And then the question is: will it allow you to drive, or will it decide, no, don't touch the wheel, I will drive, or something like that? Or something pops up: you're not able to drive, I decide that
(34:04):
for you, and I won't start the engine. Will you override it or not? Those are the scenarios that pop up in my mind. And how will you decide as a human when you have something urgent happening? You have to drive someone to the hospital or
(34:24):
something like that. You will override it, but will the system ask: is it really an emergency? Or something like that. You say: I just want to do this. How are you reacting in this moment?
Speaker 3 (34:40):
I think that's super interesting. And coming back to the transparency thing, one of my favorite examples is: if I go to the bank and I need to borrow some money. For many years, even before AI, there's been some algorithm that the bank person probably doesn't even know how
(35:03):
it works, but they can just see a red or green light after I ask. So, okay, how much money do you want to borrow? Oh, I want to borrow 100K. No, you can't do that, sorry. The machine says no, right. And even before AI, if something is complex enough, it doesn't really matter if it's AI or not.
(35:25):
But in these sorts of life-impacting situations, do I have a right to transparency? Do I have a right to know why they say no to lending me money, for example? The same if I get rejected for a job interview based on some decision made by an algorithm or AI. These are very serious situations that will
(35:48):
impact my life. And of course, you can't claim transparency everywhere, but I think there are some of these situations where, as humans, we do have a right to transparency and to know how these things decide. And there is a problem if the person who's conveying the information to us, the bank person, doesn't even have that insight, doesn't
(36:10):
even know how it works. They just push the button, and then the light turns red or green. So that's... yeah. But again, so many questions. And that's why I'm actually happy that today, I don't know if you saw it, we released a documentation article for BC about the sales order
(36:33):
agent that, in a very detailed way, describes what this agent does, what it tries to do, what kind of data it has access to, what kind of permissions it has, all these things. I think that's a very, very transparent way of describing a piece of AI, and I'm actually very, very proud that we're doing that. Yeah, just wanted to make that segue.
Speaker 4 (36:56):
Yeah, it's filling the need of humans to know how the system works, or how the system makes decisions, to proceed to the next step. Because I think there's a need to have a view on: is what has happened before, and has an
(37:19):
influence on me as a human, judged in a way that is doing good for me or not? Like your example: what is evaluated when you ask for a bank credit or something like that. And having this transparency brings us back to: yes, I have an
(37:39):
influence on it when needed, because I can override the AI, because I can see where it makes a wrong decision or a wrong step or something like that. Like I would do when I talk to my bank account manager and say, hey, does it have the old address?
(38:01):
I moved already. Oh no, it's not in the system. Let's change that and then make another evaluation, or something like that. And I think this autonomy for us as users, to keep this in play, that we can override it or we can add new
(38:23):
information in some kind of way: we can only do that when we know where this information is taken from, how old it is, and how it is processed. So I like that approach very much. I don't think every user is looking at it, but an ERP
(38:46):
system owner, like I am in our company as well, needs to have answers to those questions from our users when we use these features.
Speaker 3 (38:58):
Yeah, just to come back to the banking example again: the bank person probably doesn't know if their AI or algorithm takes into account how many pictures they can find of me on Facebook where I'm holding a beer. Like, would that
(39:19):
be an influencing factor on whether they want to lend me money? So, all these things. We just don't have that insight, and I think that's a problem in many cases. You could argue, I don't know how the Tesla autopilot does its... you know, whatever influences it to take decisions, but that's
(39:40):
why I like the semi-autonomous piece right now.
Speaker 2 (39:45):
No, it is, I think. But listening to what you're saying, I do like the transparency, or at least the understanding. I like the agent approach because you have specific functions. I do like the transparency so that you understand what it does, so you know what it's making a decision on. So if you're going to trust it, in a sense, or you want to use
(40:06):
the information, you have to know where it came from. AI, or computers in general, can process data much faster than humans. So, to go back to your bank credit check example, it can process much more information than a person can. I mean, a person could come up with the same results, but it may
(40:28):
not be as quick as a computer, as long as that information is available to it. But I do think, for certain functions, the transparency needs to be there, because in the case of bank credit, how can you improve your credit if you don't know what's being evaluated, to maybe work on or correct it?
(40:50):
Or, to Christian's point, there may be some misinformation in there that, for whatever reason, is impacting it, so you need to correct it. And some other things, to the point that Christian also made: a human with a machine is better than a human alone, potentially, in some cases, because the machine can be the tool that helps you do something, whatever it may be.
(41:13):
You referenced the hammer before, and I use that example a lot. You have hammers, you have screwdrivers, you have air guns. Which tool do you use to do the job? Well, it depends on what you're trying to put together. Are you doing some rough work on a house, where you need to put up the frame? Maybe a hammer or an air gun will work. And if you're doing some finish work, maybe you need a screwdriver, you know, with a small screw, to do something. So there does have to be a decision made.
(41:33):
And at what point can AI make that decision versus a human? And, to your point, where do you have that human interaction? But I want to go with the human interaction of de-skilling, because we have all these tools that we rely on. To go back to the calculator, you know, we've all been
(41:54):
reading, I think we all read the same book, and I think we all listened to some of the same episodes. But you look at pilots and planes with autopilots, right? Same thing with someone driving a vehicle. Do you lose the skill? AI does so much of the work of flying a plane; I didn't even really think about that. The most difficult, or the most dangerous, parts are what?
(42:15):
The taking off and landing of a plane, and that's where the automation gets used the most. And then a human is in there to take over in the event that it fails. But if the human isn't doing it often, right, even with the reaction time, okay, well, how quickly can a human react? You know, with a defense system, same thing. If you look at the Patriot missile
(42:36):
examples, where the Patriot missile detects a threat in a moment and then will go up and try to, you know, disarm the threat. So at what point do we as humans lose a skill, because we become dependent upon these tools, and we may not
(42:56):
know what to do in a situation, because we lost that skill?
Speaker 1 (43:04):
That's a good point. Sorry, go ahead.
Speaker 3 (43:08):
No, it's a really good point.
I like that example. I think it was from the Moral AI book as well, where there's this example of some military people, you know, they sit in their bunker somewhere and handle these drones day in and day out, and because they're so autonomous, everything happens without them.
(43:30):
You know, they don't need to be involved, but then suddenly a situation occurs. They need to react in sort of a split second and take a decision. And I think one of the outcomes was, you know, their manager saying: well, who can blame them if they take a wrong decision at that point? Because it's three hours of boredom and then it's three
(43:54):
seconds of action. So they're just not feeling it. To your point, right, they're being de-skilled for two hours and 57 minutes, and now there are three minutes of action where everything happens. Who can expect them to keep up the level of, you know, skills and what have you, if they're just not involved? So it's a super interesting point.
(44:15):
Yeah, so many questions that it raises.
Speaker 2 (44:23):
It goes on and on. And it is in that Moral AI book; it was the Patriot missile example, because the Patriot missile had two failures, one with a British jet and one with an American jet shortly thereafter. And that's what they were talking about: how do you put human intervention in there, you know, to reconfirm a launch?
(44:44):
Because in the event, if it's a threat, I'll use the word threat, how much time do you have to immobilize that threat? Right, you may only have a second or two. I mean, things move quickly. In the case of the Patriot missile, again, it was intended to disarm, you know, missiles that are coming at you, that are being launched, you know, over the pond, as
(45:05):
they say, so they can take them down, and that's the point with that.
Speaker 1 (45:11):
And if I could step back for a second. You know, when we're having a conversation about the usefulness of AI, it's based upon the sources it has access to, and, you know, understanding where it's getting its sources from and what access it has.
(45:31):
If you're limiting the sources it can consume, to make it a better tool, are we potentially limiting its capabilities as well? Because we want to control it so much, in a sense, so that it's more focused, but are we also limiting its potential,
(45:55):
right? Yes. So yeah, go ahead, sorry.
Speaker 3 (46:01):
Yeah, no, I think that's very well put, and I think that's a consequence, and I think that's fine. I mean, just take the sales order agent again as an example. We have railed it very hard. We put many constraints up for it, so it can only do a certain
(46:22):
number of tasks. It can only do tasks A, B and C; D, E and F it cannot do. We had to set some guardrails for what it can do. And I think this is a misconception: sometimes people think about agents and say, here's an agent, here are the keys to my kingdom. Now, agent, you can just do anything in this business, in
(46:44):
this system, and the user will tell you what to do, or we've given you a task. That's not our approach to agents in BC. We basically said, here's an end-to-end process, or a process that has sort of a natural beginning and a natural ending. Within that process you can trigger the agent in various places, but the agent has a set instruction:
(47:08):
you receive inquiries for products, and eventually you'll create a sales order. Everything in between, there could be all kinds of, you know, human in the loop and discussions back and forth, but that's the limit of what that agent can do, and that's totally fine. It's not fully autonomous. You can't just now go and say, oh, by the way, buy more
(47:29):
inventory for our stock. That's out of scope for it, and at that point I think that's totally fine. And it's about finding those good use cases where there is a process to be automated, where the agent can play a part, and not about just creating, let's call it, a super agent that can
(47:51):
do anything. So I think it's a very natural development.
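[Editor's note: the guardrail idea described here — an agent allowed only a fixed set of tasks inside a process with a defined start and end — can be sketched roughly as below. All names are hypothetical illustrations, not the actual BC implementation.]

```python
# Sketch of a guardrailed agent: it may only perform a whitelisted set of
# tasks within one process (inquiry in, sales order out). Anything outside
# that scope is refused rather than attempted.

ALLOWED_TASKS = {"read_inquiry", "look_up_items", "draft_quote", "create_sales_order"}

def run_agent_task(task: str, payload: dict) -> str:
    """Execute a task only if it falls inside the agent's guardrails."""
    if task not in ALLOWED_TASKS:
        # Out-of-scope requests (e.g. "buy_inventory") are refused.
        return f"refused: '{task}' is outside this agent's scope"
    return f"done: {task} for {payload.get('customer', 'unknown')}"

print(run_agent_task("draft_quote", {"customer": "Contoso"}))
print(run_agent_task("buy_inventory", {"customer": "Contoso"}))
```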
Speaker 4 (47:58):
So you don't aim for a T-shaped profile agent, like in many job descriptions now, where you want a T-shaped profile employee with broad and deep knowledge. We as humans can develop this, but the agent approach is
(48:19):
different. I would rather say it's not limiting the agent, or the AI, in its input or its capabilities. It is more like going deep, having deep knowledge in the specific functionality the AI agent is assisting with. It can hold more information and go deeper than a
(48:44):
human can. For example, I was very impressed by one AI function I encountered in my future-leadership education. We had an alumni meeting in September, and the company set up an AI agent that behaves like a conventional business
(49:06):
manager. Because we learn how to set up businesses differently, and when you have something new you want to introduce to an organization, often you are hit by the cultural barriers. And just to train for that, without humans, they built an AI model where
(49:32):
you can put your ideas in, and you have a conversation with someone who has traditional, Tayloristic business thinking. So you can practice how you present your ideas to such a person, and what the reactions will be, just to train your ability to
(49:56):
be better when you bring these new ideas to a real person in a traditional organization. And it had such deep knowledge about all these methodologies and ways of thinking. I don't know who I could find who is so deep in this knowledge
(50:19):
and has exactly this profile, this deep profile, that I needed to train myself against.
Speaker 1 (50:31):
That is a really interesting use case. I think then it becomes, to continue the conversation, maybe there's a misconception or misunderstanding in the business space, because right now, you know, I've had several conversations where AI is going to solve their problems, AI is going to solve their business challenges. But,
(50:54):
you know, from a lot of people's perspective, it's just this one entity that's going to solve all my business problems, whereas for us engineers, we understand that you can have a specific AI tool that solves a specific problem or a specific process in your business. But right now a lot of people believe, I'm just going to
(51:15):
install it and it's going to solve everything for me, not realizing that there are different categories for that, you know, different areas. And I think having these kinds of conversations helps, so people know it's not just a one-size-fits-all kind of solution out there. Yeah, and indeed, when you see how industrial work developed
(51:39):
in its first phases, it's like going back to having one person just fitting in a bolt or a screw or something like that.
Speaker 4 (51:51):
That is the agent at the moment: just one single task it can do. But it can do many, many things within this task. And what I think will take some time is developing this T-shape from the bottom of the T, to have this
(52:15):
broad knowledge and broad capabilities out of one agent, or the development of a network of agents. In some sessions in Vienna, a team of agents was presented. You have a coordinator that coordinates the agents and then brings back the proposals from the agents to the user, or
(52:37):
something like that. It will look like one agent that can do all of these capabilities for the user, that is what's presented, but in the deep functionality there is a team of agents, a variety of agents doing very specific things.
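[Editor's note: the "team of agents" pattern described here can be sketched as below — to the user it looks like one agent, but a coordinator dispatches each request to a narrow specialist and returns the proposal. All names are hypothetical.]

```python
# Minimal coordinator sketch: narrow specialist agents behind one entry point.

def sales_agent(request: str) -> str:
    return f"sales proposal for: {request}"

def finance_agent(request: str) -> str:
    return f"finance proposal for: {request}"

# The coordinator routes by topic and hands back the specialist's proposal.
SPECIALISTS = {"sales": sales_agent, "finance": finance_agent}

def coordinator(topic: str, request: str) -> str:
    agent = SPECIALISTS.get(topic)
    if agent is None:
        return f"no specialist available for '{topic}'"
    return agent(request)

print(coordinator("sales", "10 bicycles for Contoso"))
```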
Speaker 2 (52:57):
I like that case. It goes, Chris, to your point that sometimes it's just a misunderstanding of what AI is, because there are so many different levels of AI, and we talked about that before. You know, what is machine learning, what are large language models? That's all AI; a lot of things can fall under AI. But to the point of the agents going into ERP software, or even,
(53:20):
Christian, to your point, maybe even on an assembly line or in manufacturing: I'd like the agents in the business aspect to be a team of agents together, so they all do specific functions. To Soren's point, where do you have some repetitive tasks, or some precision tasks, or even, in some cases, some skilled
(53:43):
tasks that need to be done? And then you can chain them together. Because even if you look at an automobile, we talked about an automobile, there isn't an automobile that just appears. You have tires, you have engines, you have batteries. The battery provides the power, the wheel provides, you know, the ability to easily move.
(54:03):
The engine gives you the force to push. So putting all of that together, see, this is how I start to look at it, putting all of that together now gives you a vehicle. It's the same thing if you're looking at ERP software. That's why, when I first heard about the agent approach, when we talked some months ago, Soren, having an agent for sales orders, or an agent for finance, or an agent for
(54:24):
purchase orders, a specific task, you can put them all together, use the ones you need, and then have somebody administer those agents. So you have, like, an agent administrator.
Speaker 4 (54:35):
That is where the human comes back into the loop, because at some point you have to put these pieces together. I think at the moment this is the user who needs to do this, but this will develop further in the future. So you have another point where you come in, or where you need
(55:01):
ideas or something like that. Because that is also what I learned and found very interesting: when you see an AI suggesting something to you, this feeling that this is a fit for my problem is inside your body, and at the moment you cannot put that into a machine.
(55:23):
So to decide whether the suggestion is right, and whether to take it and use it, you need a human, because you need the human body, the brain and everything together, seeing and perceiving this, to decide whether it is wrong or good for this use case.
Speaker 3 (55:48):
I think that depends a bit, Christian, if I may. So there are places where, let's say, you could give one AI a problem to tackle, and it will come up with some outcomes. And there could then be another AI, and now I use the term loosely, another process that is only tasked with assessing
(56:11):
the output of the first one, within some criteria, within some aspects. So it has been, say loosely, trained, but its only purpose is to say, okay, give me the outcome here, and then assess it with completely fresh eyes, like it was a different person.
(56:31):
Of course it's not a person, and we should never make it look like it's a person, but one machine can assess the other, basically.
Speaker 1 (56:38):
That's what I'd say, to a certain degree, right? If we can frame the problem right. Yeah, and you had mentioned, from the human aspect, taking over and saying, you know, that's wrong. Right? Like, oh, it's wrong, I know it's wrong, I'm going to take over. It reminds me of a story from when I did a NAV implementation a
(57:02):
while back, where we had demand forecasting, and when we introduced that to the organization... It does, like, tons of calculations, and it's going to give you a really good output of what you need, based upon the information and data that you have. And I had this individual I was working with,
(57:23):
or that person was working for this organization, saying, that's not right, that's wrong. And I would ask, can you tell me why it's wrong? I'd love to know. What made you feel like it was wrong? Do you have any calculations? No, I just know it's wrong, because typically it's this number, right? But they couldn't
(57:46):
prove it. So that's also a dangerous component: a person could take over, and wherever they think it's wrong, they can also be wrong. Right, it's just the human aspect of it. But they can.
Speaker 3 (58:07):
Yes, but they can, yeah. And I think, I mean, the first time I learned more about AI, like in these recent years, was some eight, nine years ago, when we did some of the classic machine learning stuff for some customers, and what was an eye-opener for me was that it didn't have to be a black box.
(58:27):
So back then, let's say, you had a data set. I think the specific customer wanted to predict which of their subscribers would churn, right, and there was a machine learning model for that on Azure that they could use. I don't know its specific name. And the data guy who
(58:50):
helped us, one of my colleagues from Microsoft back then, showed them the data, because they had their own ideas about what the influencing factors were that made consumers churn. These were magazines they were subscribing to. And with
(59:10):
the machine learning tools, he could show them: these are the influencing factors, actually determined from the data you see here, and he had validated it against their historic data. They were just mind-blown.
(59:31):
It turned out, I'm just paraphrasing now, that people in the western part of the country were the ones who churned the most. So geography was the predominant influencing factor for predicting churn. They were just mind-blown, because they had never seen that data. They had other ideas about what makes people churn, to your point, Chris.
(59:52):
But that was just so cool, that we could bring that kind of transparency and say: this is how the model calculates, these are the influencing factors it has found by looking at the data. So I just thought that was a great example of bringing transparency when humans, like you say, are just being stubborn and saying, no, it doesn't work, it's not right.
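[Editor's note: the transparency described here — surfacing which factor actually drives churn — can be illustrated with a toy sketch. The data below is made up, and the "importance" measure is just the spread in churn rate across a factor's values, a crude stand-in for real feature-importance tooling, not the Azure model in the story.]

```python
# Toy illustration: rank factors by how strongly they separate churners.
from collections import defaultdict

subscribers = [
    {"region": "west", "plan": "monthly", "churned": True},
    {"region": "west", "plan": "yearly",  "churned": True},
    {"region": "west", "plan": "monthly", "churned": True},
    {"region": "east", "plan": "monthly", "churned": False},
    {"region": "east", "plan": "yearly",  "churned": False},
    {"region": "east", "plan": "monthly", "churned": True},
]

def churn_spread(factor: str) -> float:
    """Max minus min churn rate across the factor's values."""
    totals, churns = defaultdict(int), defaultdict(int)
    for s in subscribers:
        totals[s[factor]] += 1
        churns[s[factor]] += s["churned"]
    rates = [churns[v] / totals[v] for v in totals]
    return max(rates) - min(rates)

ranked = sorted(["region", "plan"], key=churn_spread, reverse=True)
print(ranked[0])  # in this toy data, geography dominates, as in the story
```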
Speaker 2 (01:00:15):
That's definitely another factor, because we've all come into those situations where it just doesn't feel right, and in some cases it could be correct.
Speaker 1 (01:00:25):
But it depends on the skills. That's what I want to go back to: the skills.
Speaker 2 (01:00:31):
How, if we're going to keep creating AI tools to help us do tasks... okay, I'm going to go off on a tangent a little bit. One: how do we ensure we have the skills to monitor the AI? How do we ensure that we have the skills to perform a task?
(01:00:53):
Now, I understand. The dishwasher, Chris, that you talked about, was invented, and now we don't have to wash dishes manually all the time, which saves us time to do other things. We're always building these tools to make things easier for us and, in essence, raise the required skill to do a function, saying we need to work on more valuable things. Right, we shouldn't have to be clicking post all day long.
(01:01:16):
Let's have the system do a few checks on a sales order; if it meets those checks, let the system post it. But is there a point where we lose the ability, the skill, to progress forward? And then, with all of these tools that help us do so much: now that we have efficiency with tools,
(01:01:39):
oftentimes it takes a reduction of personnel. I'm not trying to say people are losing their jobs; it's going to take fewer people to do a task, therefore relieving the dependency on others. Humans are communal. Are we getting to the point where we're going to lose skills and not be able to do some complex tasks, because we rely on
(01:01:59):
other tools? And if the tools are going to get more complex, and we need the skill to handle that complexity, but we miss that middle layer of all the mundane building-block stuff, how do we have the skill to do anything? And two: now I see AI images and AI videos being
(01:02:23):
created all the time, and it does a great job. Before, we used to rely on artists, publishers, other individuals to create that content: the videos, brochures, pictures, images, the B-roll type stuff, we'll call it. If we don't need any of that, and we're doing it all
(01:02:44):
ourselves, what does that do to our ability to work together as a species, if I can now do all the stuff myself with fewer people? So I have many points there. One, it's the complexity of the skill, and how do we get that skill if we immediately cut out the need for it? We no longer need someone to put the screw on that bolt.
(01:03:04):
As you pointed out, Christian, we need someone to come in and be able to analyze these complex results of AI. But if nobody can learn that by doing all those tasks, what does that give us? So those are my two points.
So that's my little, so twopoints so what is?
Speaker 3 (01:03:19):
yeah, no, that's
great, great questions.
So what you're saying is how dowe determine if this car is
built right if there's nodrivers left to to to test it,
like no, no one has the skill todrive anymore.
So how?
How can they determine if thiscar is built up to a certain
quality standard and what haveyou?
Well, the other answer would beyou don't have to because it
(01:03:41):
will drive itself.
But until we get that point,like in that time in between,
you need someone to still beable to validate and probably
for some realms of our work andjobs and society, you will
always need some people tovalidate.
So what do you do?
I think those are greatquestions and I certainly don't
have the answer to it.
Speaker 1 (01:04:01):
I would say, I've had this conversation with Brad for a couple of years. Him and I, you know, we love where AI is going, and I posed the question about whether AI becomes a necessity for the survival of humanity.
(01:04:22):
Because, as you all pointed out, eventually you'll lose some of those skills, because you're so dependent. Eventually you'll lose them. And I've had tons of conversations. Right now we don't need AI. We don't need AI for the survival of humanity, but as we become more dependent, as we lose some of those skills,
(01:04:46):
because we're giving them to AI to do some tedious tasks, sometimes in the medical field or whatnot, it becomes a necessity. It will eventually become a necessity for humanity's survival, but we're forcing it. Right now we don't need it.
Speaker 2 (01:05:03):
We are forcing the dependency by losing this. I'm not saying it's right or wrong, but I'm listening to what you're saying: that we are going to be dependent on machines for the survival of the human race. I mean, humans have been around for how long?
Speaker 3 (01:05:23):
But we're already dependent on machines, right? We've been for a long time. We're forcing ourselves to be dependent upon them.
Speaker 2 (01:05:29):
That's why I use the word machine, because we force ourselves to be dependent upon it, right? We force ourselves to lose the skill, or use something so much that it becomes something we must have to continue moving forward.
Speaker 3 (01:05:47):
Yeah, my point was that that's not new. I mean, we've done that for 50 years, forced dependency on some machines, right? So without them we wouldn't even know where to begin a task. So AI is probably just accelerating that in some realms now, I think.
Speaker 1 (01:06:07):
Yeah, it is. Because, you know, humans' desire is to improve quality of life, expand our knowledge and mitigate risk.
Speaker 2 (01:06:18):
It's not improving quality of life, it's to be lazy, I hate to tell you. Humans take the path of least resistance, and I'm not trying to be... there's a little levity in that comment.
But why do we create the tools to do the things that we do? Right? We create tools to harvest fruits and vegetables from the farm so we can do it quicker and easier and require
(01:06:39):
fewer people, right? So it's not necessarily, you know, that we do it to make things better. We do it because, well, we don't want someone to have to go to the field and, you know, pick the cucumbers from the cucumber vine. Right, they shouldn't have to do that, they should do something else. We're kind of, in my opinion, forcing ourselves to go that way.
(01:07:00):
It is necessary to harvest the fruits and the vegetables and the nuts to eat, but, you know, is it necessary to have a machine do it? Well, no, we just said it would be easier, because I don't want to go out in the hot sun all day long and, you know, harvest.
Speaker 3 (01:07:16):
You can do the dishes by hand if you like, right? Yeah?
Speaker 1 (01:07:20):
If you like, yeah, if you choose to. No one wants to do the dishes.
Speaker 3 (01:07:24):
Trust me, I will never live in a place without a dishwasher. I mean, going without one is the worst.
Speaker 2 (01:07:31):
It is. And the pots and the pans, forget it, right?
Speaker 4 (01:07:35):
If you take this further: at some point in time, if you have a new colleague and you have to educate him or her, do you teach them to do the steps the sales order agent is doing by themselves, just so they have the skill and know what they're doing?
(01:07:56):
Or do you just say, push the button?
Speaker 1 (01:08:06):
Yeah, but I think, eventually, as you continue to build upon these copilots and AI, eventually you'll just have two ERPs talking to each other. And then what? Where are we then?
Speaker 3 (01:08:23):
Yeah, super interesting. I mean, who knows? I think it's so hard to predict where we'll be even just in 10 years.
Speaker 2 (01:08:34):
I don't think we'll be able to predict where we'll be in two years. Will we ever be able to just press a button? Like right now, I can create video images and still images. I'm using that example because a lot of people relate to it, but I can create content, create things. I've also worked with AI for programming, in a sense, to
(01:08:56):
create things. I was listening to a podcast the other day, and they said that within 10 years the most common programming language is going to be the human language, because it's getting to the point where you can say, create me this, it needs to do this, this and this, and an application will create it, run the tests and produce it. You wake up in the morning and now you have an app.
(01:09:17):
So it's going to get to that point, and what happens then? Let's fast-forward a little bit, because you even look at GitHub Copilot for coding, right? You look at the sales agents; to Chris's point, ERP systems can just talk to each other. What do you need to do? Is there going to be a point, that's what I was getting at, where we don't need other people, because we can do everything for ourselves? And then how do we survive, if we don't know how to work
(01:09:42):
together, because we're not going to need to?
Speaker 3 (01:09:45):
That is... yeah, sorry, go ahead.
Speaker 2 (01:09:49):
Sorry. So, to go to your point: how is AI going to help progress human civilization, right, or the species, if we're going to get to the point where we don't need to do anything? We're all just going to sit in the house, because I can say, make me a computer, and click a button, and it will be, you know, there, and
(01:10:12):
that's, you know, where I come from with it. In that other podcast show that you mentioned, they quote James Burke, who says that we will have these nanofabricators, and that in 60 years everyone will have everything they need and will just produce it from air, water and dirt.
Speaker 3 (01:10:27):
Basically, right. So that's the end of scarcity. So all the stuff we're thinking about right now are just temporary issues that we won't need to worry about in 100 years. That's just impossible to even imagine. But, as one of you said just before, we'll probably always move the needle and figure out something else to desire, something else to do.
(01:10:48):
But I think it is a good question to ask: what will we do with the productivity that we gain from AI? Where will we spend it? So now you're a company, and you have saved 20% in cost, because you're more efficient in some processes, due to AI or IT in general. What will you do with that 20%?
(01:11:09):
Do you want to give your employees more time off? Do you want to buy a new private jet? I don't know. You have choices, right? But as a humanity... I definitely, personally, my personal opinion is: I would welcome a future where we could work less, where
(01:11:31):
we could have machines do things for us. But it requires that we have a conversation and start thinking about how we will interact in such a world, where we don't have to work the same way we do today. What will our social lives look like? Why do we need each other? Do we need each other? We are social creatures, we are communal creatures. So, yes, I think we do. But what will that world look like?
(01:11:53):
I think this keeps me up at night sometimes.
Speaker 2 (01:12:04):
I can't imagine, nor did I imagine, that there'd be full self-driving vehicles within such a short period of time. I mean, as you made a great point, Soren, I don't think anyone can know what tomorrow will be, or what tomorrow will bring with this, because it's advancing so rapidly. And to go back to the points I mentioned: you talked about the podcast with James Burke, which was a great one
(01:12:28):
as well. That was the You Are Not So Smart episode, I think it was 118, on connections, which talked a lot about that. And yes, it was a great episode; that's another great podcast. A lot of this stuff is going to be building blocks, and we don't even envision what they're going to build. You know, look at the history of the engine. Look at the history of any number of inventions. They were all made of small little pieces. So we're building those pieces now.
(01:12:49):
But also, our minds are going to need to be, I'll use the word stimulated. If we're going to get to the point where we don't have to do anything, how are we going to entertain ourselves? We're always going to find something else, right, to have to do. But is there going to be a point where there is nothing else, because it's all done for us?
Speaker 3 (01:13:12):
Yeah, I just want to comment on one thing you said there. You referenced that no one just imagined the car. You know, people did stuff, invented stuff, but suddenly some other people could build on that and invent other stuff, and eventually you had a car, right? Or anything else that we know in our life.
(01:13:32):
And I think James Burke also says that innovation is what happens between the disciplines, and I really love that. I mean, just look at agents today. Four years ago, before LLMs were such a big thing, I know they existed in a very niche community, but with the level of LLMs today... no one said, let's invent LLMs so we
(01:13:58):
could do agents. No, LLMs were invented, and now, because we have LLMs, we think, oh, now we can do this thing called agents, and what else comes to mind in six months, right? So it just proves that no one has this sort of five-year plan of, oh, in five years let's do this and this. No, because in six months someone will have invented
(01:14:21):
something, and oh, we can use that, and oh, now we can build this entirely new thing. So it's both super exciting, but it's also a bit scary. I mean, I can speak as a product developer: it has definitely challenged me to rethink my whole existence as
(01:14:41):
a product person, because now I don't actually know my toolbox anymore. Two years ago, I knew what AL could do, great. I knew the confines of what we could build. I knew the page types in BC and so on. So if I had a use case, I could visualize it and see how we could probably build something. If we needed a new piece from the client, then we could talk to
(01:15:04):
them about it and figure it out. But now I don't even know if we can build it until we're very close to having built it. There's so much experimentation that, yeah, we're building the airplane while we're flying it, in that sense, right? And that also challenges our whole testing approach and testability
(01:15:25):
and frameworks. Which is super exciting in itself. So it's just a mindset change, right? But it definitely challenges product people.
Speaker 2 (01:15:36):
I I think uh ai is um
, it's definitely changing
things and it's here to stay.
I guess you could say.
I'm just wondering, you know. I think back to a movie, was it from the 80s, called Idiocracy.
(01:15:57):
If you haven't watched it, it's a mindless movie, but it's the same type of thing, where a man from the past goes into the future, and you see what happens to the human species in the future and how they are. It's pretty comical. It's funny how some of these movies are circling back. Yeah, they circle back, you know, with...
Speaker 4 (01:16:16):
You know, Star Trek, Star Wars. I'm wondering when we will be there.
Speaker 3 (01:16:25):
That already happened. I just hope we won't get to the state, like in that cartoon, that animated movie, WALL-E, where the people are just lying back all day and eating, and their bones are deteriorating because they don't use their bones and muscles anymore. So the skeleton sort of turns into something where they just
(01:16:46):
become like wobbly creatures that just lie there.
Speaker 4 (01:16:50):
As, I don't know, seals, just consuming. What was really interesting with Back to the Future is this thing here, because Doc Brown made this time machine using a banana to have
(01:17:12):
the energy of 1.21 gigawatts or something like that. You didn't have to wait for a thunderstorm to travel in time a bit. This idea was mind-blowing back then, and I'm dreaming of using my free time as a human to make these leaps.
(01:17:34):
Because we have this scarcity in resources, and even if this goes further and further and further, I assume that we don't have enough resources to make this machine computing power fulfill all that. I think there will be limitations at some point in
(01:17:54):
time, and most of what AI is freeing us up for is to have ideas on how we use our resources in a way that is sustainable.
Speaker 3 (01:18:07):
I like that. I have no way to say whether what you fear will become true or not, but I like the idea of using whatever productivity we gain for more sort of humanity-wide purposes. And I also hope that whatever we do with technology and AI will reach a far audience
(01:18:28):
and also help the people who today don't even have access to clean drinking water and things like that. So I hope AI will benefit most people, and, yeah, let's see how that goes.
Speaker 1 (01:18:41):
Yeah, I think it's going to redefine human identity. Yeah.
Speaker 2 (01:18:44):
I'd like to take it further, and I'd say the planet. You know, with AI, I hope we gain some efficiencies, to go to your point, Christian, so we can have it all sustainable, so we're not so destructive. Because, you know, the whole circle of life, as they say. It's important to have all of the species of animals,
(01:19:09):
you know, plants, water, anything else that is on the planet. It's an entire ecosystem that needs to work together. So I'm hoping, with this AI, that something we get out of it is how to become less destructive and more efficient and more sustainable, so that everything benefits, not just humans, because we are heavily dependent upon everyone else.
Speaker 4 (01:19:32):
That's the moral aspect of it. So if we use it to use up all of the resources, then in moral terms it is bad, because it is not sustainable for us as a society and as human beings on this planet.
(01:19:53):
So, as I see it, morality is a function of keeping the system alive, because we use the distinction between good and bad in that way: it is not morally good to use up all the resources. So if anything that we can do with AI ends up using all
(01:20:18):
of the resources, that is not really good, and what we can do with our brains is think ahead, when will this point in time be, and label it as bad behavior. So the discussion we are having now, and I'm very glad that you
(01:20:39):
brought this point up, Soren, is that we have this discussion now to think ahead: where will the use of AI be bad for us as a society, as human beings, and for the planet? Because now is the time we can think ahead about what we have to watch out for in the next months or years or something like that,
(01:21:02):
and that is the moral aspect I think we should keep in mind when we are going further with AI.
Speaker 3 (01:21:09):
I think there are so many aspects there to your point, Christian. So one is, of course, the whole, as we all know, energy consumption of AI in itself. But there's also the other side, I mean the flip side, where AI could maybe help us spotlight, or shine a bright light on, where we can save on energy in companies, and where AI can help
(01:21:33):
us, let's say, calibrate our moral compasses by shining a light on where we don't behave as well today as a species. So I think there's a flip side. I'm hoping we will make some good decisions along the way to
(01:21:53):
have AI help us in that.
Speaker 2 (01:21:58):
There are so many things I could talk about with AI, and I think we'll have to schedule another discussion to have you on, because I had a whole list of notes of things that I wanted to talk about when it comes to AI, not just from the ERP point of view, but from the AI point of view. Because, you know, after getting into the Moral AI book and listening to several
(01:22:19):
podcasts about AI and humanity, there's a lot of things that I wanted to jump into. You know, we talked about the de-skilling. We talked about too much trust. I'd like to get into harm bias, and also, you know, how AI can analyze data that everyone thinks is anonymous. Because, reading that
(01:22:41):
Moral AI book, some statistics they put in there, I was kind of fascinated. Just to throw it out there: 87% of the United States population can be identified by their birth date, gender and their zip code. That was mind-blowing. And then 99.98% of people can be identified with 15 data
(01:23:02):
points. So all of this anonymous data, you know, with the data sharing that's going on, it's very easy to make many pieces of anonymous data no longer anonymous, is what I got from that. So all that data sharing with those points, the birth date, gender and five-digit US zip code, here again, that's in the United States, was the one that shocked me, and
(01:23:25):
now I understand why those questions get asked the most, because it's going to give, with a high probability, 87 percent, who you are.
Speaker 3 (01:23:32):
Maybe, just for the audience watching this or listening to this, the book that we're talking about is this one, Moral AI. I don't know if you can see it. Does it get into focus? I don't know if it does.
Speaker 1 (01:23:48):
Yeah, now it does.
Speaker 3 (01:23:50):
So it's this one, Moral AI and How We Get There. It's really a great book that goes across fairness, privacy, responsibility, accountability, bias, safety, all kinds of things, and it tries to take sort of a pro-con approach. And, you know, I think maybe this is a good way to end the
(01:24:11):
discussion, because I have to go. I think one cannot just say AI is all good or AI is all bad. It depends on what you use it for, and how we use it, and how we let it be biased or not, or how we implement fairness into algorithms. And so there are just so many things
(01:24:32):
that we could talk about for an hour. But that's what this book is all about, and that's what triggered me to share it a month back. So just thank you for the chance to talk about some of these things, and I'd be happy to jump on another one.
Speaker 2 (01:24:46):
Absolutely, we'll have to schedule one up, but thank you for the book recommendation. I did start reading the Moral AI book that you just mentioned. Again, it's Pelican Books, if anyone's looking for it. It's a great book. Thank you both, Soren and Christian, for taking the time to speak with us this afternoon, this morning, this evening, whatever it may be, wherever you are. I know where we are with the time zones, and we'll definitely have
(01:25:07):
to schedule time to talk a little bit more about AI and some of the other aspects of AI. But if you would, before we depart, how can anyone get in contact with you to learn a little bit more about AI, learn a little bit more about what you do, and learn a little bit more about all the great things that you're doing?
Speaker 3 (01:25:26):
Soren, so the best place to find me is probably on LinkedIn. That is the only media I participate in these days. I deleted all the other accounts, and that's a topic for another discussion.
Speaker 2 (01:25:37):
It's so cleansing to
do that too.
Speaker 4 (01:25:38):
Yeah, and for me it's also on LinkedIn and on Bluesky. It's Curate Ideas.
Speaker 2 (01:25:48):
Excellent, great. Thank you both.
Look forward to talking withboth of you again soon.
Speaker 4 (01:25:51):
Ciao, ciao. Thanks for having us. Thank you so much. Bye. Thank you, guys.
Speaker 2 (01:25:57):
Thank you, Chris, for your time for another episode of In the Dynamics Corner Chair, and thank you to our guests for participating.
Speaker 1 (01:26:04):
Thank you, Brad, for your time. It is a wonderful episode of Dynamics Corner Chair. I would also like to thank our guests for joining us. Thank you to all of our listeners tuning in as well. You can find Brad at dvlprlife.com, that is
(01:26:27):
D-V-L-P-R-L-I-F-E.com, and you can interact with him via Twitter, D-V-L-P-R-L-I-F-E. You can also find me at matalino.io, that is M-A-T-A-L-I-N-O dot I-O, and my Twitter handle is matalino16. And you can see those links down below in the show
(01:26:49):
notes. Again, thank you everyone. Thank you, and take care.