Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:02):
Cool Zone Media.
Speaker 2 (00:04):
Hello and welcome to Better Offline. I'm your host, Ed Zitron.
Back in March of this year, MIT economist Daron Acemoglu and MIT Sloan's Simon Johnson published a paper
(00:25):
called The Urgent Need to Tax Digital Advertising, which proposes that digital advertisers be levied a flat tax of fifty percent on all revenue above five hundred million dollars. And today I'm joined by Daron to walk through it. Daron,
thank you for coming on.
Speaker 1 (00:37):
Thank you, Thanks Ed.
Speaker 2 (00:38):
It's my pleasure. So walk me through the idea. It
seems pretty simple, but maybe there's more nuance to it.
Speaker 1 (00:45):
Well, it's very simple to implement, it's very simple to explain.
The question is why, you know, why do something like this?
You know, after all, who doesn't love something free? And
we're getting a lot of free things, because our entire online ecosystem is monetized via digital ads. The issue is,
(01:07):
I think this ecosystem creates a number of problems which
are set to get worse.
Speaker 3 (01:16):
With AI, right?
Speaker 1 (01:19):
The most obvious of those is something that's been discussed
quite a bit over the last few years, which is
that monetizing via digital ads means that you are going
to encourage platforms to collect more and more data about people,
(01:43):
and this creates a much more intrusive, lower-privacy, and potentially distorted sort of system.
Speaker 3 (01:52):
Right.
Speaker 1 (01:53):
The second, perhaps corollary or separate thing, which has started receiving more attention, is that this also means you want to make sure that the eyeballs are glued to the screen, whether a large screen or a small screen, and that
itself might generate a lot of other problems, including perhaps
mental health problems, because you try to get people to
(02:16):
stay on the platform by inducing strong emotions: envy, jealousy, anger, outrage, and so on. It creates a very different type of social environment than we are used to,
with widespread negative consequences. But I think the one that
I'm most worried about is that this reduces competition. It
(02:40):
is impossible today for any service by a newcomer to
come in and say, Okay, you're going to get this
from Google, Facebook, Instagram for free, and I'm going to
provide higher quality content for which you have to pay
a subscription fee or something else. It's especially difficult because newcomers
(03:02):
are not going to have the network. There's going to
be uncertainty about their quality. So it really cements the
system where everything is going to be monetized via digital ads,
and that's going to discourage the entry of new products,
new services, and especially new technologies. For example, everybody worries
about what social media and other online platforms and AI
(03:24):
will imply for democracy. Well, something that actually creates
more pro-democratic conversations, higher quality content, that's going to
be very difficult to get off the ground today.
Speaker 2 (03:40):
Right. So how does this tax actually change the incentives, though?
Speaker 1 (03:45):
Well, I think the main way in which it changes
the incentives is that it makes it possibly more likely
that both existing platforms and new platforms will now say,
let me experiment with new products that are higher quality
for which I can build a clientele via subscription fees
(04:07):
or other things. Because if I get one hundred million
dollars via digital ads, half of it is gonna go away.
If I can get that money via subscription fees,
that's good. It's no longer so inferior to doing it
via digital ads.
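(To make the mechanics concrete, here is a minimal sketch of the levy as described in this conversation: a flat fifty percent on digital ad revenue above a five-hundred-million-dollar threshold. The function name and figures are illustrative assumptions, not taken from the paper itself.)

```python
def ad_revenue_tax(ad_revenue: float, threshold: float = 500e6, rate: float = 0.5) -> float:
    """Flat levy on digital ad revenue above the threshold, per the proposal as described."""
    return max(0.0, ad_revenue - threshold) * rate

# For a platform already above the threshold, each marginal $100M of ad revenue
# loses half to the levy, which is the trade-off being described:
print(ad_revenue_tax(10.1e9) - ad_revenue_tax(10e9))  # 50000000.0, i.e. $50M
print(ad_revenue_tax(400e6))                          # 0.0: below the threshold, no levy
```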
Speaker 2 (04:21):
But I think one of the things that concerns me
with it, or maybe it's less of a concern and more just the after effect, is that people are used to not paying for social media.
Speaker 1 (04:33):
That's why this needs to be high. You know, if
this was a ten percent tax, it wouldn't make so
much of a difference. So in some sense, when I
mentioned ecosystem, I was intending to imply by that both
what the offering side, the platforms, is doing and what
the consumers are expecting. So there's a synergistic relationship between users, consumers,
(04:56):
and the firms, and you want to change that relationship. Again,
you wouldn't want to do it if the market system
was working perfectly and everything was hunky-dory, but I
think there's a lot of evidence that's not the case
at the moment.
Speaker 2 (05:07):
So the incentives would be that they just can't make that much money off of digital ads, so they will have to find new business lines. But could this not potentially kill off the idea of the free social network?
Speaker 1 (05:20):
Well, first of all, I think that idea has become excessive.
If we scale it back, it wouldn't be so bad.
And in some sense, I think the purpose is to
scale it back because once everybody expects everything for free,
that does create a race to the bottom in terms
(05:41):
of quality, in terms of data protection, in terms of
you know, new technologies that you know, will actually change
the face of the kinds of offerings that we get, right?
Speaker 2 (05:52):
And I remember in the past, we've had companies try paid social networks, and it just hasn't worked.
Speaker 1 (05:59):
Well, it didn't work because they were up against free social networks monetized via digital ads, especially
at a time when people didn't understand what the costs
of these things were. You know, I think even today
there are consumers who do not fully recognize the amount
of data that's being collected about them. So if you
(06:22):
go back ten years ago, I think both the sort
of addictive or quasi addictive nature of social media, the
extensive data collection, and how that data can be used
both you know, to guide you towards certain products, perhaps
to behaviorally manipulate you, or perhaps to charge you higher prices.
All of these things were not completely understood.
Speaker 3 (06:45):
And I think I get what you mean.
Speaker 2 (06:48):
It's like the late-stage model of Instagram and Facebook is so different, because when everything's free, you don't really have a mechanism to control the customer other than just continually tricking them.
Speaker 1 (07:03):
I mean, 'trick' is probably the wrong word. There is tricking, but it's not everything. So, you know, when you get an ad, it's useful, you're finding out
about products. But what is the trade off between how
much of your private data you want other people to
have access to versus getting some of those products? And second,
(07:26):
once platforms have access to that data, what
is there to stop them from offering some of the
manipulative products as well as some of the useful products.
Speaker 2 (07:48):
So perhaps the biggest thing: while prepping for this episode, I went and did some math, and what you're suggesting would kill Meta. Now I'm not saying that you're suggesting you should kill Meta. I'm just saying that Meta's net income for the last four quarters was fifty-one point three five billion, off of one hundred and thirty billion dollars' worth of revenue. So
(08:11):
this would make their business model untenable.
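(As a rough check of the math referenced here, under the simplifying assumption that all of Meta's quoted revenue counts as taxable digital ad revenue, which the conversation doesn't specify:)

```python
# Back-of-the-envelope check using the figures quoted above.
revenue = 130e9        # trailing four quarters of revenue, as cited
net_income = 51.35e9   # trailing four quarters of net income, as cited
levy = 0.5 * (revenue - 500e6)          # assumes all revenue is taxable ad revenue
print(f"{levy / 1e9:.2f}B")             # ~64.75B: the levy alone would exceed net income
```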
Speaker 3 (08:13):
Is that well?
Speaker 1 (08:15):
First of all, no, I mean, you know, instead of
fifty five billion, if they make you know, thirty billion,
it's not the end of the world. I would be
happy to have a company that is worth thirty billion.
Speaker 3 (08:26):
As would I.
Speaker 1 (08:27):
But the hope is that they would also then offer
products and services that are higher quality, that are subscription based.
You know, we know people still, you know, pay for certain things, like Netflix, although Netflix is also now sort of being forced to move more in that direction.
(08:51):
You know, is there a world in which a higher quality Netflix, for which some people pay, is viable, whereas, you know, some lower quality one monetized via ads also coexists? I think those are questions,
and I think it sort of opens up the market
system to different sorts of forces. So if there
(09:14):
is a paid WhatsApp that has better security features, and you don't get these completely unexpected annoying things that have started popping up in WhatsApp, perhaps that's attractive
and some people will be happy to pay ten dollars
a year for that.
Speaker 2 (09:29):
Kind of feels like the advertising model is a lazy man's business as well, because it doesn't incentivize you to make a better product, it just entrenches a monopoly, right?
Speaker 1 (09:38):
And why, you know, how did I arrive at this idea? My shtick for the last, you know, two decades at least, and especially with regards to AI and digital technologies, is that these technologies are extremely malleable. We can develop them
in different ways. We can have higher quality technologies that
(10:03):
increase worker productivity, or we can have rote automation, lazy automation.
You can have addictive social media that makes you go
into your cocoon and you become uninterested in forming bridges
and engaging in democratic citizenry. Or we can have platforms
that actually encourage different types of communications and the active
(10:26):
citizenry that democracy requires. Which one of those will it be?
And I think since we have gone more and more in the automation direction and into the sort of toxic environment of social media, a change requires new products,
new technologies, and that means a more open system. So
(10:50):
the way that I sort of started coming towards this idea was, well, what's the barrier to this? Well, the fact that there are a few big tech companies that can acquire competitors is a barrier, but so is the fact that even if they don't acquire you, they pigeonhole you
(11:11):
into the same ecosystem in which they exist. And I
think that's a big problem.
Speaker 2 (11:16):
How do you mean they pigeonhole you, even if they haven't acquired you?
Speaker 1 (11:20):
Because to compete against them, you need to also offer your products for free, which means the only thing you can do is digital advertising. If you need quality data, you can't afford it. You're not going to have an
incentive to get it. If you need high quality, niche products,
you know, nobody's going to buy them, because everybody is used to these freebies and
Speaker 3 (11:38):
Everything, right. So it creates a kind of, yeah, race to the bottom. But also, there's just no reason they would have
Speaker 2 (11:47):
To compete. Like, you couldn't compete with them. We haven't really seen any new social networks in years other than Blue Sky and Threads. And Threads is part of the Facebook machine.
Speaker 1 (11:59):
And if we talk about TikTok, it's just an amplification of the same business model, same monetization system, same sort of weaknesses being exploited, as Instagram and Facebook.
Speaker 2 (12:09):
And they burned billions and billions of dollars to get there. It wasn't like that. People, and I don't know why people would possibly think this, frame ByteDance as some plucky upstart versus just a massively funded Chinese juggernaut.
Speaker 1 (12:23):
That's actually much of why I use the word ecosystem, because this model itself is highly synergistic with companies growing very fast. And why do they grow very fast? Because they want market share, they want to dominate the market. But they also want data. And how is that possible?
(12:45):
How can you grow so fast? Well, you get venture
capital or Microsoft to bankroll you, as in the case of OpenAI. So you burn through a lot of
cash early on in order to get data dominance and
market dominance. And again it's not a pro competitive picture
that's emerging here.
Speaker 2 (13:04):
I feel like generative AI is something separate, because with that, well, they don't have a business model yet, they do not.
Speaker 1 (13:12):
Have a business model. But you know what that does? It makes it even more likely that we're gonna just
repeat the same sins of earlier social media. Look at
what OpenAI is doing. It's burning through a lot
of cash in order to acquire market share and data.
It's not monetizing through anything. And I think the most
likely cash cow ideas that are coming up are: we're going
(13:34):
to use generative AI for Internet search and we're gonna
take over the digital ad revenues. Well, that's more of
the same.
Speaker 3 (13:41):
And I don't... that's a different conversation.
Speaker 2 (13:43):
I don't think that'll work, but nevertheless, I can see the idea.
Speaker 3 (13:47):
It's funny, these incentives really have it.
Speaker 2 (13:50):
I've never really thought about it before this conversation, but
it feels like digital advertising really did harm the tech
industry, in that they found something very profitable that got more profitable, without making the user's life better at all.
Speaker 1 (14:04):
Well, Larry Page and Sergey Brin, when they first formed their company, said we don't want ads, that's not a good model.
Speaker 3 (14:11):
Yeah, it's antithetical to a search problem.
Speaker 1 (14:14):
At the moment somebody came and gave them venture capital money and said, but we do it so that you can actually run the ads.
Speaker 3 (14:22):
Yes, what if you were super rich?
Speaker 2 (14:25):
But it doesn't sound like this would kill the digital ads industry, because...
Speaker 1 (14:28):
I don't think we should kill it either. I mean
I think again, if we created a monolith, a monoculture,
that wouldn't be good. If everything online was based on subscription, that wouldn't be a good model either.
Speaker 2 (14:44):
So you want a variety. But on a more operational level, I'm guessing that this would also have to have some way of cutting through things like people setting up subsidiaries to try and dodge the tax. You would have to make sure that there wasn't a way around it.
Speaker 1 (15:02):
You know, that's why this is not a sort of graduated tax, which is easier to game. But, you know, five hundred million, it's like a very small amount of money. So you just set a minimum like that, perhaps a little bit more, so that the really small companies are not burdened by it.
know one problem, for example, with the European GDPR is
(15:23):
that once you put a regulation like that, the burden
is heavier on smaller companies, and so it's actually a competitive advantage for the large platforms. You don't want to do that.
Speaker 2 (15:33):
But I think five hundred million would actually make for a very healthy market for smaller companies.
Speaker 1 (15:39):
Which is great. We want more smaller companies as well.
Speaker 2 (15:42):
I think that this would help with the problem a great deal. But don't we also need something to do with the algorithms themselves? Because I feel you could do this and it would solve some problems. It would begin incentivizing them in the right direction, theoretically, yeah. But you still have this problem that they wouldn't stop the algorithms. In fact, this might encourage them to be even more algorithmic.
Speaker 1 (16:03):
Absolutely, and that's why I don't think this is a silver bullet, you know. You need a range of policies. But the hope is that at least such a policy would create some push for some platforms and products to have better quality algorithms that don't trap you like that.
(16:26):
But for example, if we're talking about social media, we
should also be talking about repeal or relaxation of Section two thirty of the Communications Decency Act, so that, you know,
companies that algorithmically boost content cannot then say, well, you know,
this is not our speech, this is somebody else's thing
(16:47):
that we have nothing to do with. So there
are some details there that we have to think about.
And I think Section two thirty, which was written in
a different age, in the nineteen nineties, is definitely not up to dealing with issues like algorithmic boosting.
Speaker 2 (17:01):
And debating Section two thirty aside, because that is also a separate conversation, it does feel like we have
just kind of not given tech companies much responsibility for
what they're doing.
Speaker 1 (17:12):
No, exactly. That's the main reason why I want to have a conversation about Section two thirty, because it can't be optimal that corporations that are arguably the largest and the most powerful humanity has ever seen can then wash their hands and say we're not responsible
for anything that happens on our platforms.
Speaker 2 (17:32):
Yeah, it feels, with two thirty, and I am not as well read as I should be about it, but it feels as if that responsibility side is the real niggling issue, but also the most thorny, because on some level it makes sense that social media platforms should be able to say we're not responsible for all the posts, and if they had to be, they will...
Speaker 1 (17:54):
Absolutely. And look, I think free speech is a major issue. I am worried about erosion of free speech, but, you know, I'm also not like a one hundred percent free speech absolutist. I think we have to balance it, and the way that I would suggest is, again, and now we're changing topic a little bit, Section two thirty is, you know, subject to whatever legal requirements there are on, you know,
(18:19):
abusive content, et cetera. If you post something on Instagram
or Facebook and the company doesn't algorithmically promote it, then.
Speaker 3 (18:30):
It's completely right.
Speaker 1 (18:33):
It's there. It's my speech. If my friends find out
about it, they can go and look at it. If
somebody stumbles on it, that's fine. But algorithmic boosting is
like the New York Times putting you on their front page.
We can't say the New York Times is putting you on their front page, and you can say all sorts of lies and crazy things, and that's just freedom of speech.
Speaker 3 (18:53):
Yeah, okay, yeah, now I know what you mean.
Speaker 1 (18:55):
Though, if you have a more preferred newspaper, use that example.
Speaker 2 (19:00):
No, no, no, but I know what you mean. And it seems like the problem you're saying is that it isn't two thirty itself. It's the fact that the algorithms boost, yeah, these particular things.
Speaker 1 (19:10):
Two thirty was fine for the age it was written in, when nobody could have understood the onslaught of algorithmic boosting, promotion, and manipulation that would come.
Speaker 2 (19:20):
And it feels like it's an incentive problem. It's almost that platforms should move away from the algorithm model. Because an amendment to Section two thirty that just says it should go away kind of doesn't sit right with me. I don't think you should. It would make platforms like WordPress, or blogging platforms, impossible to use. But the idea of what they promote
(19:42):
being more intentional.
Speaker 1 (19:43):
Absolutely. And actually, realize that this rule that I suggested completely protects WordPress. WordPress doesn't promote, right? So this sort of reformed two thirty is completely fine in that respect.
Speaker 2 (19:55):
So as far as this fifty percent tax, do we have any kind of historical precedent for something like this being done?
Speaker 1 (20:02):
And that's why, you know, am I sure fifty percent is the right level? Absolutely not. Perhaps it's thirty.
Speaker 3 (20:11):
But I think fifty is great.
Speaker 1 (20:13):
Thank you.
Speaker 2 (20:14):
Well, because the thing is, the way I look at it is these companies are so adept at avoiding responsibility,
and they're so unwilling to change their ways and so
unwilling to be responsible with what they're doing, that it
needs to be like this. And I'm sure that if
this actually got anywhere near a government, they would have
the world's biggest tantrum.
Speaker 3 (20:35):
But I feel like we need stuff like this.
Speaker 1 (20:38):
It needs to be a shock to the system, absolutely.
Speaker 2 (20:41):
And I think that well, actually maybe that's the question.
How practical would this be to actually implement?
Speaker 1 (20:49):
Well, it depends on what you mean by that.
Speaker 2 (20:51):
Well, I mean, how hard would it be to actually get this into existence, with the government able to levy that tax?
Speaker 1 (21:00):
It would be extremely hard because of the lobbying, as
you pointed out, But if there was agreement, if the
Chinese government wanted to do it right, they could do
it overnight. It's all out there. The digital ad revenue is there, it's measured, you know, all of it. And the flat tax is very easy to implement. You could do it at source, when, you know, advertisers, companies, pay for advertising.
(21:26):
You could do it in many different ways, so it's very easy, you know. It's a version of a variety of VAT-like ad valorem taxes. We have them under much more complicated situations, where there is much lower quality data about what's going on. You know, many middle
income countries have a very complex sales tax or a
VAT tax.
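(One way the at-source collection mentioned here could work in practice, as a sketch under assumed rules: the collecting party tracks a platform's cumulative ad revenue and withholds the levy only on the portion above the threshold. The class and its interface are hypothetical, not from the proposal.)

```python
class AdLevyWithholder:
    """Illustrative at-source withholding for the proposed levy (hypothetical design)."""

    def __init__(self, threshold: float = 500e6, rate: float = 0.5):
        self.threshold = threshold
        self.rate = rate
        self.cumulative = 0.0  # ad revenue collected so far this tax year

    def withhold(self, payment: float) -> float:
        """Return the levy due on this ad payment, given revenue already collected."""
        taxable_before = max(0.0, self.cumulative - self.threshold)
        self.cumulative += payment
        taxable_after = max(0.0, self.cumulative - self.threshold)
        return (taxable_after - taxable_before) * self.rate

# The first $500M of ad payments passes through untaxed; everything after loses half.
w = AdLevyWithholder()
print(w.withhold(400e6))  # 0.0
print(w.withhold(200e6))  # 50000000.0: only the $100M above the threshold is taxed
```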
Speaker 2 (21:47):
Yeah, and it almost feels that you could have a little fun with it as well. Maybe you could do this fifty percent tax and feed it directly into some sort of national venture fund. It would end up funding the future. On a larger scale, it feels like governments are just twenty years behind technology. Not even putting aside Section
(22:09):
two thirty, it feels like we should have an EPA or an FDA for data. It feels like we should have ways of... Actually, we don't know how these algorithms work, and it just feels a little crazy to me.
Speaker 1 (22:21):
Yeah. Absolutely, But the way I would say it is,
I'm not sure that I would say they are twenty
years behind in terms of knowledge.
Speaker 3 (22:29):
I actually, oh, not knowledge, just legislation.
Speaker 1 (22:32):
Legislation. Yeah. So, a completely different but related topic is that we actually need an infrastructure for data markets, right,
so you know, I think everybody says data is going
to be one of the most important factors of production
(22:52):
for the future, more important than land. Imagine that I told you today that land, which is still pretty relevant for many businesses, is up for grabs. You can just go and get whatever piece of land you want and you don't have to pay for it. You know, that tragedy of the commons is well understood from history. It would be disastrous.
Speaker 3 (23:14):
Yeah, it's just the wild West.
Speaker 1 (23:16):
Yeah, it's the Wild West. Nobody produces high quality data, nobody gets compensated for the data that they have, and it encourages, you know, the monetization model that we talked about, where you actually sweep up people's data without their permission or without their understanding, and you try to monetize it via digital ads. So a complementary thing is to think
(23:50):
about how it is that we can have a system where data producers are encouraged to produce higher quality data, so that perhaps, you know, the next version of a large language model learns not from Reddit, but from high quality, domain-relevant expertise.
Speaker 3 (24:07):
I think in that case...
Speaker 2 (24:10):
I'm not sure I agree, just because large language models getting higher quality data is a problem of how you build them. Higher quality data alone wouldn't be enough; they'd still need more. But I get what you mean, that if these companies were incentivized to actually have good data and provide good services with the data, that would be better. Because that's the thing: I don't love digital targeting. I don't love any of it.
(24:32):
But man, if they have all this data, why are all their services so impersonal? They don't feel like they're for us.
Speaker 1 (24:39):
And why does Microsoft Word always crash?
Speaker 2 (24:41):
Yes, I know, but that one, that one... it doesn't matter. They have the dot doc and dot docx model. They have a tiny little kind of monopoly now, though with the various antitrust actions, that may have to change.
Speaker 1 (24:54):
Let me actually, let me actually push back a little bit on generative AI. Okay, because I think your statement makes sense if you buy that generative AI is most productively developed in the form of general purpose, human-like chatbots. They will need, like, a huge amount of
(25:19):
data, because next word prediction is very, very inefficient and you need to imitate humans in a variety of circumstances. But
imagine that we use generative AI in a very domain-specific way. You know, I want to know what drug creates what side effects in conjunction with other drugs, right? Well,
(25:40):
for that, I don't need my generative AI tool to
communicate with me in, you know, human-like fashion or write Shakespearean sonnets. I just need some very specific domain expertise. But that high quality data is actually not out there.
Speaker 3 (25:53):
And I fully agree with that.
Speaker 2 (25:55):
And smaller language models, focused language models, do make sense. But I think we are actually agreeing, because this is an incentive problem. The reason OpenAI isn't really focused on that is because it doesn't make that much money.
Speaker 1 (26:09):
Well, OpenAI itself is not making a lot of money. No, I think they are not focused on that because, A, if you want
not focused on that because A, if you want to
get big, very very fast and collect a lot of data,
domain specific models are not going to work. So creating
hype around something that sounds very intelligent is a much
(26:31):
better tool for doing that. And second, I think the industry is still, in an unhealthy way, in my opinion, preoccupied with artificial general intelligence, right, human-like intelligence, even if that's not what we need, even if that's not what's feasible, even if that's got a lot of downsides.
(26:51):
So that's why they don't, in my opinion, pay sufficient
attention to these domain-specific expertise models.
Speaker 2 (26:56):
I actually agree. But also, then, that is an economics problem; these companies are incentivized to grow at all costs, to get as big as possible. Is what you're suggesting that they shouldn't?
Speaker 1 (27:10):
It is not just an economics problem, but it has
an important economic leg. If we did not have venture
capital be so important, this model would not have gotten
off the ground.
Speaker 2 (27:21):
So how do we push back on the VC model?
How do we actually make technology work without that model,
because I agree, it's growth at all costs: build as big as possible, then IPO, and everyone gets rich other than the user.
Speaker 3 (27:35):
Well, what are the alternatives?
Speaker 1 (27:37):
I don't know. I mean, I am not so much
of an interventionist that I'm going to say, you know,
you should tax VC as well. But I think one
thing that encourages the VC grow-at-all-costs model is that we don't have any antitrust. If there was very strong antitrust enforcement, then becoming so
(28:04):
big wouldn't be so attractive, and you wouldn't be able
to acquire all your competitors in the process as well,
which is a very important part of this get very big,
very quick. So I think our big failure of, you know, upholding existing antitrust laws and introducing new antitrust laws
(28:25):
appropriate for the digital age, I think has contributed to
this problem.
Speaker 2 (28:29):
Is there a way of incentivizing VCs? Actually, this is the final question, because you might actually have an answer here. How do we incentivize venture capital and the startup industry to start investing at the earlier stage? Because a big fund just gave back a chunk of their fund because they were not finding as many opportunities in the late stage, and,
(28:53):
like, most of the money goes into that late stage. How do we incentivize that?
Speaker 1 (28:57):
That's a much harder problem, because even well qualified vcs
are not going to have an easy time recognizing a
very promising product when it's in the garage stage, right? But this brings in another, you know, set of issues.
The fact that we tax capital so lightly contributes to this. Again,
(29:25):
we're subsidizing VCs, because if you make your money via VCs, as a sort of return on your capital, you pay very little tax. You know, if you are, you know, a tech billionaire, I won't name names, you don't even
pay yourself a salary. You keep on borrowing money from
(29:46):
venture capital firms or other specialized financial vehicles, and you pay
yourself out of that. Everything is capital income. You pay
minimal taxes. So our tax system, already a big contributor
to inequality, actually also distorts the digital landscape. So one
simple thing, again, which is probably even less likely to
(30:08):
be implemented in our current polarized environment, is let's tax capital and labor at the same flat rate, or, you know, you can add whatever progressivity you want, but you do not distinguish capital and labor income. If you're going to tax labor income at thirty percent, tax capital at thirty percent.
Speaker 3 (30:25):
I agree on that one.
Speaker 2 (30:27):
It just feels like it would be great if there was a way of just, almost... because you're talking a lot about taxation and such, and I agree that we need those controls. But is there a way to incentivize them to put more money in earlier, make riskier bets? I'm not saying a deduction, because that would be crazy, but it feels like that would also provide, in part, in
(30:49):
concert with these taxes, a way of getting that money into the early ecosystem.
Speaker 1 (30:53):
There may be, but I don't know exactly. We should think more about it. But there is an alternative solution, yet another policy proposal, which is that we fund a federal agency on AI which is tasked both with communicating and developing best standards on things like privacy, data, you know, AI standards, security, safety, but also has deep enough pockets that it
(31:17):
can play that incubator role, especially for technologies that are deemed to be socially beneficial. So if we have technologies that actually protect end users' privacy and enable users to make better decisions, that's the kind of thing
that the government should put money in early on. Some
of it will go to waste, some of it will
go bankrupt. But if a few of them are successful,
(31:37):
that's great. And if the alternative is that the VCs are going to develop the technologies that are most manipulative, there is even more reason for putting in this money.
Speaker 2 (31:46):
Daron, thank you so much for joining me again. It's
been such a pleasure.
Speaker 1 (31:49):
Of course, this was my pleasure. Thank you for being interested in these issues and for having such a great conversation.
Speaker 3 (31:56):
Thanks so much. Thank you for listening to Better Offline.
Speaker 2 (32:07):
The editor and composer of the Better Offline theme song
is Matt Osowski. You can check out more of his music and audio projects at mattosowski dot com, M A T T O S O W S K I dot com. You can email me at ez at betteroffline dot com, or visit betteroffline dot com to find more podcast links and, of course, my newsletter. I also really recommend
(32:29):
you go to chat dot wheresyoured dot at to visit the Discord, and go to r slash
Speaker 3 (32:33):
Better Offline to check out our Reddit. Thank you so much for
Speaker 2 (32:37):
Listening. Better Offline is a production of Cool Zone Media. For more from Cool Zone Media, visit our website coolzonemedia dot com, or check us out on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.