Episode Transcript
Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:07):
Voices of Video.
Voices of Video.
Voices of Video.
Speaker 2 (00:12):
Voices of Video.
Speaker 1 (00:18):
Well, hello, I am Mark Donnigan, and we have a very special Voices of Video episode that we're bringing to you today. So NAB is just right around the corner now. If you're watching this after NAB, then, well, this is what you missed.
We are talking to very significant companies who are
(00:41):
doing interesting things with VPUs, and it's interesting that VPUs have now switched from one of maybe a smorgasbord of encoding options and architectures to essential. And what we're going to hear today in this interview, in this
(01:02):
special interview, is how VPUs are enabling a whole new generation, really, of video encoding infrastructure architectures that don't compromise quality but still provide tremendous density, therefore energy efficiency and, obviously, cost advantages.
(01:25):
So, with that introduction, I want you to welcome today Nacho Mileo from Cirrus 21. Nacho, thank you for joining us.
Speaker 2 (01:35):
Thank you for having me. It's a pleasure to be here.
Speaker 1 (01:39):
Absolutely, absolutely. So, you know, as I said in my preamble, there's a lot of interesting things happening right now in video, and Cirrus 21, you're going to tell us exactly what you guys do and the kinds of projects that you're in, and I know you're right in the center of it. So why don't you give an introduction for those who
(02:02):
aren't familiar with Cirrus 21?
Speaker 2 (02:05):
Yeah, absolutely. Well, Cirrus 21 is a company that pioneered, let's say, live streaming in Europe. We started 15 years ago, or even more than 15 years ago, doing encoders, on-prem physical hardware, but with the mindset that this is going to be clouded
(02:30):
at some point, this is going to the cloud at some point. So we started developing our encoders with CPU and then GPU, but when GPU jumped in, we stuck to CPU, because we knew at that point that when the cloud started it was not going to be GPU
(02:51):
based. So we stuck to having CPU and GPU for many, many years. We built on top of that.
We were born in a very, let's say, complex scenario, because our first clients were big broadcasters, big brands in sports.
(03:12):
So, you know, technically difficult clients, or situations, or schemes. And we were born into that fire. After we stuck, let's say, to CPU and GPU, we were able to do hybrid approaches, with on-prem equipment and cloud
(03:33):
equipment working together. We built our orchestrator, which is Live Control, which is a product that talks to our encoders and does some magic in terms of what's capable on-prem and in the cloud simultaneously. And in the last, I would say, year, or less than a year, we
(03:53):
started introducing VPU, accompanied by NetInt, and we have seen very, very good results and some amazing leaps forward.
Speaker 1 (04:10):
Yeah, it's great. We're excited about the work that, you know, we're doing with you, building these workflows. You mentioned that you're primarily working with broadcasters. Give us a sense.
(04:34):
Are these broadcasters that, you know, have their traditional broadcast infrastructure (maybe it's satellite distribution, over the air) and they're, you know, also adding on OTT? Or are these broadcasters who are, you know, maybe discontinuing some of those more legacy approaches and going all OTT? What's driving the work that you're doing with broadcasters and streaming?
Speaker 2 (04:50):
We are working mostly with, let's say, big fish, big broadcasters. I don't see them shutting down classical TV for now, but they have very serious approaches when it comes to OTT. So they have both at the same time, and we are working on the OTT part with them. But then what we are seeing is that, due
(05:14):
to the flexibility that the internet brings, and OTT brings, they may be doing, for instance, some special things that are only on the OTT services, or, for certain sports, for instance, there's some content that's exclusively over the internet. So I still see that there's a mixed approach where we are
(05:37):
still sticking to classic TV, let's say, but then there's a ton of new things coming up with this. We have seen FAST channels appearing with certain clients. We have seen that they are taking more advantage of how ads work in the streaming world. It's a living thing, but I don't see that we are shutting
(06:06):
down classical TV yet.
Speaker 1 (06:08):
At least not yet. Yeah, yeah, interesting. So let's talk about your solution. Why don't you tell us? You know, the company is roughly 15 years old, is that right?
Speaker 2 (06:22):
Yeah, we were founded in 2008.
Speaker 1 (06:26):
So, yeah, okay, over 15 years, the early days of streaming, really. You know, it's hard to believe. I was instrumental in, or I was a part of a team building, a video platform in the US that eventually got sold to Walmart, and, you know, 2008,
(06:48):
2007. In fact, the product debuted around October, November 2007. And boy, it's amazing how far we've come.
Speaker 2 (07:00):
Absolutely. Yeah, I still remember the days, you know, even before platforms like YouTube had live. Those things are not that old.
Speaker 1 (07:17):
Yeah, it really is. It's still a boiling industry, I would say, and there's a ton of things going on; there are always new challenges, right? You know, there's a very interesting paradox which is often talked about around AI.
But as the price of a new technology or a capability goes
(07:45):
down, it has the opposite effect. You know, oftentimes maybe someone who's not into economics would say, well, does that hurt the business? Doesn't that mean that the industry used to be able to make this much money, but the price is compressing, meaning it's not as expensive to deliver?
(08:07):
Technologies are better. Does that mean the industry gets smaller? No, it's actually completely the opposite, and that's why it's a paradox: as it gets more cost-effective to stream, and as our technologies get better and more efficient, it actually drives more consumption, and it drives more usage of
(08:28):
video, and traffic on the Internet expands even faster because there are new use cases. Right, there are new applications that are created, new entertainment experiences.
(08:49):
I still remember back in the day when a terabyte of CDN was $500.
Speaker 2 (08:55):
It was prohibitive for many brands to do streaming or to do some stuff, and, as you said, when it becomes more available, it doesn't hurt the industry; it just makes it bigger and more accessible for more people to use it, and to test, and to do some really cool stuff. What we have seen in the last few years was only possible due to these things happening.
Speaker 1 (09:16):
Yeah, that's right. So, Cirrus 21, are you primarily an engineering company, or are you an engineering company that also brings products to your clients? Or are you a product company that does a little bit of engineering? Where are you? How do you describe yourself?
Speaker 2 (09:35):
I would say that 80% of the company are engineers, so I would stick to the engineering-company definition, and we have been delivering these types of streaming products for the last, as we said, 15 years. In the last two years we are also broadening our offer to AI services, but not just generic Gen AI
(10:00):
services, but connected to video: to how we compress video, how we read video, how we, you know, extract data from video. So we are trying to leverage all the know-how that we bring from all these years working with live broadcasters and all these types of complex clients, and add AI on top of
(10:22):
that. But we are still a video streaming company and we like that. I guess we're happy with where we are and, as you said, we are an engineering team; most of the team is engineering, and we have a huge, huge R&D team too.
(10:42):
Like a third of the company is working in R&D.
Speaker 1 (10:47):
And so what are they focused on? Are they focused on codec encoding? Are they focused on streaming protocols? The software application layer?
Speaker 2 (10:57):
Yeah, we have our very own, let's say, R&D deployment platform. And, yeah, they work mostly, of course, on video. Everything is video, and they work mostly on improvements in encoding and processing and also, of course,
(11:18):
on AI and data extraction from videos. So, everything we can bring on top of our current offer. For instance, right now you can take an encoder, clip a certain part of a video inside the encoder, send it over to Media Copilot, which is the AI part, and get a reframed version for
(11:38):
TikTok. So all of that is done on the same website.
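As an aside, the geometric core of a "reframe for TikTok" operation is a crop to a vertical aspect ratio. The Python sketch below is purely illustrative: Media Copilot presumably uses AI-driven subject tracking to decide where to crop, while this static center crop only shows the arithmetic, and the function name is made up.

```python
def center_crop(src_w: int, src_h: int, aspect_w: int = 9, aspect_h: int = 16):
    """Return (x, y, w, h) of the largest centered crop with the target aspect.

    Defaults to 9:16 (vertical video, e.g. TikTok); pass 1, 1 for square.
    """
    target_w = src_h * aspect_w // aspect_h
    if target_w <= src_w:
        w, h = target_w, src_h                       # limited by source width
    else:
        w, h = src_w, src_w * aspect_h // aspect_w   # limited by source height
    w -= w % 2   # keep dimensions even, as most codecs require
    h -= h % 2
    return ((src_w - w) // 2, (src_h - h) // 2, w, h)
```

For a 1920x1080 source, this yields a 606x1080 vertical window centered horizontally; an AI reframer would move that window to follow the subject instead of pinning it to the center.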
Speaker 1 (11:42):
Yeah, yeah, interesting. And is that just a part of your core platform, or is that something someone has to buy separately?
Speaker 2 (11:52):
For now we are still splitting streaming and AI. It may converge at some point, but we are still working with streaming on one side and AI on the other. Everything is pretty interconnected, but we still work in those two worlds.
(12:13):
You can use our AI without our streaming, and you can use just our streaming without any AI features.
Speaker 1 (12:20):
I see. Interesting, okay. Okay, is that in production? Is there anybody that's using the AI clipping function?
Speaker 2:
Yes, absolutely, yeah, we are working with a couple of clients
(12:42):
already. We introduced Media Copilot at IBC last September, so it's still a pretty, you know, new thing, but we still have a ton of interest, and a couple of clients use it already.
Speaker 1:
You know, it's funny how sometimes, sort of on the surface, the simplest operations actually become quite intensive
(13:05):
when you really have to execute them, especially at scale. And publishing on social networks, they all have different formats that they like. You know, there's the vertical video, there's, you know, square video, one by one.
Speaker 2 (13:20):
Yeah, yeah, exactly.
Speaker 1 (13:22):
And, yeah, square video, one by one.
Speaker 2 (13:23):
Yeah, exactly, it's a challenge. News and stuff, we have clients that work with news. The thing is, they really need this quick approach.
Speaker 1 (13:33):
It's not just solid streaming. They can't afford to have an editor spend half a day editing a video.
Speaker 2 (13:41):
Yeah, yeah. And right now we are seeing really, really new approaches in terms of, you know, the transformation you mentioned in the beginning. One of the things that we are seeing, more and more, is this reduction of the equipment that goes out. Right, you know, in the past you needed to go with a van to a
(14:01):
certain public event, and the van has a satellite dish on top and two cameras, a ton of cables, and right now we are going out with a mobile phone that uses, maybe, SRT and goes straight to the encoder, and that's crazy.
Speaker 1 (14:19):
And there's other companies, but I'm just thinking of, like, LiveU with their backpack. And, you know, from what I understand anyway, this is a little bit outside the space that we work in, but I hear and I see them out there and know what they're doing. It's like the standard; like, every news organization in the
(14:40):
world has LiveU backpacks, you know, and the reporters, or the people who are out there at an event, or at some public thing that they want to cover, they're wearing a LiveU backpack and they're transmitting via cellular, and in some cases very high quality.
(15:05):
Yeah, it's amazing.
Speaker 2 (15:07):
It's amazing, and this is also connected to the improvements in codecs that we are seeing, the improvements in performance that we're seeing. Like, 10 years ago it was impossible to think about this. Yeah, exactly.
Speaker 1 (15:21):
Or it was really clunky and didn't always work well, and, you know, there was... yeah, I want to go back to... So we're going to get to what you're showing at NAB, but, you know, this is kind of a buildup for the listener. So, AI is hot. Everybody is doing something in AI; it doesn't
(15:46):
matter what function you are in, and it doesn't matter what industry. So, as an engineering company focused on video, maybe you can bring us in a little bit: what projects are you building on, or what solutions are you leveraging? Are there models that you're using today, or maybe you're
(16:11):
looking at? Bring us in as much as you can to how this solution is built and what you're excited about.
Speaker 2 (16:21):
The thing is, we work with a bunch of different models, and what we see is that everybody's doing AI, everybody's talking about AI. I would say more talking than doing.
Speaker 1 (16:37):
Yeah, that's true. That's why I asked if you were in production, and I'm so happy to hear that you have at least a couple of clients. Not chit-chatting, right? Yeah, yeah, yeah.
Speaker 2 (16:47):
But yeah, and what we also have seen is that, you know, things look simple, but they're not simple. And we have a really, really cool example around this, from when we started doing automatic live captioning for events.
Speaker 1 (17:05):
We have the same
experience, so I'll let you
finish, yeah.
Speaker 2 (17:10):
But the thing is, when we started talking, you know, if you go and, you know, talk about this lightly, it's like, yeah, you put up a Whisper instance and there you go. So you have HLS, Whisper and subtitles; that would be the full circle. Right now
(17:32):
I don't have the exact number, but we have at least 12 steps in order to do what we are doing. So if you don't have a huge R&D or engineering team in-house, it's really hard to leverage, not leverage ChatGPT, which is pretty obvious and simple, but leverage these
(17:54):
types of applications in which, for instance, you build subtitles for live content. So the thing is, we are taking the signal. Right now we're working, for instance, with HLS. So we read the HLS, we transcribe that, we check that we understand how lengthy the subtitle line will be.
(18:17):
We check with dictionaries whether there are some words that shouldn't be translated. We check with blacklists whether some words should be, you know, struck out or replaced with asterisks. Then we rewrite or, let's say, rebuild the playlist, and we
(18:39):
deliver it with a certain delay so that what we are listening to matches what the subtitles say. So all of that is just one example, and we have this for a ton of things in AI. Like, we do dubbing right now. We do dubbing or voiceover. We recognize speakers, we recognize objects. So for everything, you put it in, and it's like, oh, we have object
(19:03):
detection. Yes, this company does that. But then, when you need to put it all together and think about it as a product, it's way, way more complicated.
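The twelve-step pipeline Nacho describes is far more than "put up a Whisper instance." As a rough illustration, a few of those checks (the do-not-translate dictionary, blacklist masking, line-length wrapping, and re-timing cues to match the delayed playlist) might look like the minimal Python sketch below. This is a hypothetical reconstruction, not Cirrus 21's implementation; the 37-character line limit and every name and constant in it are assumptions.

```python
# Hypothetical post-processing for live-caption cues: dictionary check,
# blacklist masking, greedy line wrapping, and playlist-delay re-timing.
import re
from dataclasses import dataclass

MAX_LINE_CHARS = 37              # typical broadcast subtitle limit (assumption)
PROTECTED = {"NetInt", "NAB"}    # words a translation step should leave untouched
BLACKLIST = {"darn"}             # words to strike out with asterisks

@dataclass
class Cue:
    start: float  # seconds from stream start
    end: float
    text: str

def needs_translation(word: str) -> bool:
    """Dictionary check: protected words (brands, proper names) stay as-is."""
    return word not in PROTECTED

def mask_blacklisted(text: str) -> str:
    """Replace every blacklisted word with asterisks of the same length."""
    def repl(match: "re.Match") -> str:
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLACKLIST else word
    return re.sub(r"[A-Za-z']+", repl, text)

def wrap_line(text: str, limit: int = MAX_LINE_CHARS) -> list:
    """Greedy word wrap so no subtitle line exceeds the length limit."""
    lines, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) > limit and current:
            lines.append(current)
            current = word
        else:
            current = candidate
    if current:
        lines.append(current)
    return lines

def process_cue(cue: Cue, delay: float) -> Cue:
    """Apply masking and wrapping, then shift timing by the playlist delay."""
    wrapped = "\n".join(wrap_line(mask_blacklisted(cue.text)))
    return Cue(cue.start + delay, cue.end + delay, wrapped)
```

Each function here stands in for one of the twelve or more steps; the real pipeline would also read and rebuild the HLS playlist itself, which this sketch leaves out.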
Speaker 1 (19:11):
Yeah, that's super interesting. So I was curious what specific functions you were focused on using AI for. So, you know, we heard kind of resizing, reframing, which is clearly needed for the social networks, as we discussed. And you're doing subtitling. Is that also in production, or
(19:33):
is that coming?
Speaker 2 (19:35):
Yes, it's also in production. The only thing we have right now in staging rather than production is dubbing. That's coming too.
Speaker 1 (19:42):
Yeah, that sounds
super interesting.
Speaker 2 (19:44):
Maybe it should be out already.
Speaker 1 (19:49):
Really? And dubbing. So that would be, we're speaking English, right, and then it could be translated in real time?
Speaker 2 (19:55):
I guess, I mean, real time meaning, obviously, there's a delay due to the HLS stream, so not real time for now, but we will use a synthetic voice over my voice, for instance, to say what I'm saying in English in Spanish, Portuguese, Dutch, and
(20:18):
those things are coming. We also do video highlights, we do summarization, we classify the content using EBU or AP, depending on the region of the world, or using custom taxonomies for certain, let's say, broadcasters that have their very own tree of how they treat data. So that's also something we're putting in, and you can download
(20:40):
all of this or integrate this over API. So if you have a MAM and you want to enrich the data that's already in your MAM, that should be pretty easy to integrate.
Speaker 1 (20:51):
Yeah, amazing. Well, very cool. Well, I know you're going to be showing, at least I assume, anyway, you'll be showing all of that and more at NAB. Tell me this: when someone engages with you, so you're dealing with these very large operators, and a lot of them,
(21:13):
they're using commercial services, so they have a mix of commercial products. The engineering is sort of more integration engineering than it is, like, building, say, from the ground up on open source projects. So what does that look like for you? Are you doing that integration work as well?
(21:37):
Are you bringing, you know, a bespoke solution, and then your customer is integrating that? Are you there advising and even building the entire system? Like, I'm just trying to get a handle on what the scope is, because there are so many, you know, people in the market that say,
(21:58):
oh, we have a video platform, right, and, you know, they give all the usual feature lists that are like everybody else's, you know. But then you have to kind of dig in and see, oh, okay, well, you don't do this, you don't do that, you can only work in this environment. You know, there are all the asterisks.
Speaker 2 (22:19):
You know, a lot of small print, right? Yeah, yeah.
Speaker 1 (22:24):
So I'm asking the question, Nacho, because I'm guessing that there's probably at least one person listening who's like, hey, sounds interesting, but how are you guys different, and how do you work with your clients compared to, fill in the blank, somebody else?
Speaker 2 (22:42):
Yeah, yeah, we will not name anyone here, but yeah, it depends on the project and the scope. We work a lot on customization with our clients. Of course, we have our products, and we try not to, you know, add features that are just a feature request for one client that will not be leveraged by the rest of our clients, or at
(23:03):
least a group of them, but we get involved in general with how they integrate, for instance, with different things. I'm thinking about a project that's happening right now in which we are doing real-time streaming, and there's a feature request around sending a certain type of signaling inside the
(23:25):
streams. And that came as a requirement, and we may say, like, build it yourself, or we can point to someone that builds it, or we can do it. It depends a lot on the project. In this case, we understood that this would be useful in
(23:48):
the future for more clients, so we added it as a feature and it's already working. So I would say that the answer is: it depends, but we work a lot hand in hand with our clients.
Speaker 1 (24:07):
Yeah, yeah, understood. Well, very good. Okay, so let's end here. What are you showing at NAB, and why should someone come visit you?
Speaker 2 (24:13):
We are showing a
really, really cool encoder with
VPU right now.
Speaker 1 (24:20):
I've heard about that. Yeah, yeah, I know something about that one.
Speaker 2 (24:27):
We have done a lot of tests, and you're familiar with it, but the thing is, we are seeing a huge, huge leap forward when it comes to power efficiency, when it comes to consumption, but especially when it comes to density. We have been able to put 16 SDIs on one rack-unit encoder
(24:52):
using NetInt's VPU. So this is already in place. Our encoder does a lot of stuff, and also something that we are already working on, and it's already being tested, is that, being partners of NetInt, but also partners of Akamai, we are going to be able to deploy
(25:13):
our encoder with VPU on Linode's cloud, on Akamai's cloud. So that's something that's pretty powerful when it comes to having an encoder cloud. And also, as I said before, we have this hybrid approach, so
(25:41):
you may be able to have a VPU on the ground, but also a VPU on Akamai's cloud, all working together and connected, using different encoders, but controlled from one Live Control instance. So I think it's a pretty, pretty interesting approach that we're taking right now, with the VPU fully integrated in our encoder.
Speaker 1 (25:58):
Yeah, we're really excited about this. You know, everybody's been talking about hybrid, hybrid cloud. I mean, for years it's been talked about, theorized; people say, oh yes, we're doing it. But, you know, oftentimes it was really difficult to truly have a
(26:20):
real hybrid solution that was duplicated in a data center or on-premise somewhere, or even in another cloud, you know, and then you seamlessly just sort of moved. You know, it's like the architecture didn't matter where the service was running; it just wasn't possible.
(26:41):
It was theoretically possible, but there was always a difference in what cloud A provided and cloud B, and then what I had available in my data center, and you know this, right. So what you're building, with Akamai and the Akamai connected cloud, and now I'm speaking from a video encoding, you know,
(27:03):
video encoder perspective, but now, for really, I would argue, the first time, it is truly, it's certainly the first time it's truly possible to flex hardware from a cloud environment, as in Akamai's connected cloud, to on-prem and then back and forth, either
(27:24):
based on capacity, or based on, maybe, a certain use case or a function that requires on-prem. I know of one, you know, large project that's actually going to be featured at NAB as a case study in Europe, where, you know,
(27:45):
for this particular project, the primary infrastructure had to be, it was a requirement, it's a government project requirement, that it be on-prem, but there were other parts of it that could flex, and, you know, they're, like, super excited that now they can do this, you know, without having to, you know, spend a whole lot of money. So that's really awesome.
(28:08):
So, you know, anybody who's also interested in flexing true hybrid, make sure you come see Nacho and Cirrus 21, because they're going to be showing this, and it's real. It's not just a demo and it's not just a PowerPoint. It exists, it works, it's real.
Speaker 2 (28:33):
We can deliver the day after NAB without any problem.
Speaker 1 (28:40):
Amazing, no excuses. I love that. You guys are like we are: we avoid at all costs going out and talking about things that are wishes and dreams. We only talk about what's real.
Speaker 2 (28:55):
Let's keep it that way. Absolutely.
Speaker 1 (29:00):
It's great. Well, Nacho, thank you so much for joining us. We will, you know, maybe we'll do a wrap-up episode after NAB, sort of a, hey, what did you see? What did you learn? What was the response? But I do encourage everyone who's listening. We're now about five, six weeks before
(29:26):
NAB, so make sure you put Cirrus 21 on your must-visit list, and it's really easy, because you are going to be where? Right in the corner, at NetInt's booth. If you come to NetInt, come see us. That's right, it's super simple.
Speaker 2 (29:48):
This episode of
Voices of Video is brought to
you by NetInt Technologies.
If you are looking for cutting-edge