
April 3, 2025 24 mins

Langflow: https://www.langflow.org/ 

https://www.producthunt.com/products/langflow 

Langflow Desktop: https://www.langflow.org/desktop 

In this insightful interview, Rodrigo from Langflow discusses the evolution and future of their low-code agent building platform. Starting with his background in machine learning and data science, Rodrigo explains how Langflow began as a vision to connect specialized AI models years before ChatGPT existed.

The conversation covers Langflow's journey from open-source project to being acquired by DataStax while maintaining its commitment to open-source principles. Rodrigo announces the exciting launch of Langflow Desktop, designed to democratize AI development by eliminating technical barriers through an intuitive drag-and-drop interface.

Rodrigo details how Langflow serves both technical and non-technical users, supporting three main application types: LLM pipelines, RAG systems, and multi-agent applications. The interview highlights Langflow's integration with the new MCP protocol for more structured and efficient tool usage by AI agents.

Looking to the future, Rodrigo envisions advanced agent orchestration systems where AI agents can assign tasks to each other, with humans serving as collaborators in the process. This episode offers valuable insights for anyone interested in the rapidly evolving landscape of AI agent development and deployment.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
(01:00:00):
We have Rodrigo here from Langflow. We're
super excited. So Langflow is a low-code
agent building solution. We've been
talking at length about how 2025 is going
to be the year of agents. And we
have so many tools to be able to build
agentic applications, but it's kind of

(01:00:20):
complicated, some of them.
With LangChain, you can get stuck down a
rabbit hole and end up continually debugging
the whole application, pulling your hair
out. You have tools like CrewAI, which
are still developer-focused. They make things
a lot simpler, but you still need to
write code. But Langflow is something

(01:00:40):
that a non-technical person can drag and
drop little cubes to connect them to data
sources, to other models, to APIs, and
build a whole application out of it. Here
we have Rodrigo all the way from Brazil.
He's going to give a talk later today at
our meetup, but we kind of wanted to give

(01:01:01):
you a little snippet of what he's been
working on. Give us a bit of a background
about how you came up with Langflow,
because you mentioned that you started it
pre-ChatGPT, which is very, very early.
Yeah, so happy to be here. It's good to
have you all getting to know a little bit

(01:01:22):
more about Langflow
and it's spreading out.
I think the history behind Langflow
started a couple years ago. It's even
more than that, like three, four years
ago. Gabriel and I, we come from this
machine learning data science background,
and we come from a time where models were

(01:01:43):
emerging as a new super powerful thing.
Right. So we started in 2016. We had
TensorFlow becoming open source, so
machine learning models became a thing.
Then we had PyTorch and everything
around it, and Hugging Face started to make
those specialized models available to
everyone with the Transformers Hub, the

(01:02:04):
Hugging Face Hub. And we were sort
of looking at the idea of having these
specialized models and thinking about how
this was going to behave in a future where
you want to connect them, or have them
become some sort of a brain behind the

(01:02:24):
scenes. Maybe you're going to have,
let's say, models for image
recognition, models for image detection,
models for entity recognition, time
series, Transformers, whatever. And
how could we have a place where you're
going to connect all of these
specialists? I don't remember the right
word. I think we used either experts or

(01:02:45):
specialists at the time. And one, how we
could connect them among
themselves, but most importantly, how we
could connect them with the external world.
And I think this is what today we
would probably call agents: the agents would be
the specialists, and the
tools are the connections to the external

(01:03:07):
world. And the idea came along with
this thought: one, we need to put this
together. It needs to be simple and
democratized so that everyone can get
access to this new power that is

(01:03:28):
emerging, which is AI. And now
eventually GenAI and the power of LLMs
themselves. The entire history of machine
learning didn't differentiate LLMs from
other models; language models were just
one more model. But we ended up

(01:03:50):
figuring out that these models are, of
course, the best at interfacing with
humans. So they ended up creating a power
that we only discovered
a long time after that. So I think
Langflow came from that idea. And it
evolved in many ways. There is a big
community helping us and building

(01:04:12):
together with us. And this community has
shaped Langflow much more than we did in
the beginning. We listened to the
community. We've been listening. We made
a lot of mistakes. And we are evolving as
a product and an open source project
itself. So I think that's a little bit of
the background behind how Langflow came
to be as a startup and as a project

(01:04:33):
together. That's awesome.
So it seems like you've leveraged the
open source community a lot over the
years. It started off, I believe, as an
open source project.
Right now, how different is the product
from the open source code? So everything
that we have-- so Langflow is still an

(01:04:54):
open source project.
Langflow is fully open.
We were acquired by DataStax around
11 months ago or so. And one of the
commitments that we made was that it's
going to continue being open source
forever and agnostic, meaning that we're

(01:05:15):
going to integrate the tools regardless
of the company providers, et cetera. So
Langflow is still an open source
project. We also have at DataStax
something called DataStax Langflow,
which is a cloud version that
behaves very, very much like the open
source repository. But you

(01:05:40):
can just sign in and start using it. And
today, we're launching
something completely new.
I think a lot of folks at this point,
watching this, have already of course
seen that: it's Langflow Desktop. It's
the new kind of distribution that we're
putting together. And the idea here is,
back to what we just discussed,

(01:06:00):
democratizing even more. We're making
getting started with something like Langflow
extremely smooth and easy: no CLI,
no having to configure
environments, no Python setup or
whatever. Get started. Go within the
desktop setup. And basically, the idea is

(01:06:22):
desktop first, not desktop only. Meaning
you start over there. If you want to
publish it, we are already figuring out
and working on ways that you're going to
publish, host, serve your flows in
different ways,
outside of your own machine.
Yeah, so actually, I just started playing

(01:06:44):
with the Langflow desktop like an hour or
two ago. And I have to say that it's
really impressive. I enjoy it quite a
bit. And I love that you support so many
different models. So I saw you have
SambaNova on there. You have DataStax stuff
as well. And then also, I was really

(01:07:05):
impressed you have Ollama. So I was able
to run everything locally on my computer
and play around with some of my use
cases. So yeah, hats off to you for
helping to create such a useful tool. And
even though Shashant mentioned at the
beginning that this is for non-technical
users, I think that technical users will

(01:07:25):
also really like this. So like me, I feel
like I'm fairly technical. But using a
drag and drop thing just for quick
prototyping is really great, actually. So
yeah, thanks for helping to make such a
thing. I'm really impressed with it. So,
adding on to the Langflow discussion,

(01:07:47):
can you maybe just describe
who Langflow is for, and why
somebody would want to use something like
Langflow over some of the competition?
That's a very good question. It's such a good
question that it has been discussed
internally for a long time. And a lot of
iterations changed our own mindsets about

(01:08:09):
this. And I think I'm going to start
pushing a little bit away from the fact
that Langflow is for non-technical folks.
Langflow, first, as an IDE slash
platform, allows you to write your own
code inside the platform. Everything

(01:08:31):
that you see in Langflow is Python. And
it's Python-based, not only behind the
scenes, but also we have something called
custom components that you can literally
open up the component and write your own
function over there. And once you're
happy with what you just created over
there, you collapse that into a node, a

(01:08:51):
component, a building block, that you're
going to now connect to these other nodes
and create your entire agentic workflow
and continue with the pipeline. So the
idea is to simplify that process. And
then once you're happy with what you just
built over there, you're going to serve
it. So you're going to instantiate that
Langflow server and run it via an API, for

(01:09:14):
example. Or embed that into a
website, or use a chat experience
to deploy something like that. So I think
Langflow is trying to dig into a middle
ground between these technical folks that
want to start very quickly and the
non-technical folks that want to

(01:09:35):
experiment and prototype, et cetera. So I
think, today,
the audience, I would say, that we are focused
on is AI builders,
hackers, data scientists,
devs,
who are interested in building agentic
pipelines and are comfortable with having

(01:09:59):
that as a separate module in their
environment. And you may ask, why would
you want to use something like Langflow?
Why can't I stay in my-- maybe, as a
developer, my comfort zone of IDEs that
I already use or whatever? And the idea
here is that development shifted a

(01:10:23):
lot with the emergence of AI. Before AI,
you have your code base. You have a
function. When there is a bug,
you're going to look into a function,
into the class, the line of code that
broke, and you're going to try to fix
that. After AI, AI-based development
is a little bit different because the

(01:10:44):
main function or the main part of your
code is a black box. It's the model. And
you cannot access the model at all. And
that's a fundamental paradigm shift in
how development is made. You are limited,
constrained to what you have access to in
your own code base. That means that what

(01:11:05):
developers are doing is they are working
around that big black box. And they
need a lot of control and visibility and
ways to interact and iterate so that
they understand what's going on,
validate, evaluate, and so on. So a
little bit of the idea of it being visual

(01:11:25):
and interactive is because of that. So
devs that are interested in building
around these black boxes that are
becoming more and more powerful and then
also connecting them with other black
boxes, other models. So you're sort of
creating this swarm of agents connecting
together, connecting them to tools. Now,

(01:11:47):
a lot of stuff through MCP, we just
released an MCP integration right now.
It's a two-way integration. You connect
Langflow to tools via MCP, and you
expose your flows as MCP servers as well.
So I think it covers all of that as long
as it's an AI-first sort of application,

(01:12:08):
or it's an AI-first sort of API app that
you're building that is going to be
incorporated into maybe your main source
code for the rest of what you
have over there in your app.
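The pattern Rodrigo describes, writing an ordinary Python function and collapsing it into a node you wire to other nodes, can be sketched framework-agnostically. The names below (Node, as_node, Pipeline) are hypothetical illustrations, not Langflow's actual API:

```python
# Illustrative sketch of "write a function, collapse it into a node,
# connect nodes into a flow". Hypothetical names; NOT Langflow's API.

from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class Node:
    """A building block: a named, reusable wrapper around a function."""
    name: str
    fn: Callable[[Any], Any]

    def run(self, value: Any) -> Any:
        return self.fn(value)


def as_node(name: str):
    """Collapse a plain Python function into a connectable node."""
    def wrap(fn: Callable[[Any], Any]) -> Node:
        return Node(name, fn)
    return wrap


@dataclass
class Pipeline:
    """Connect nodes output-to-input, like wires on a canvas."""
    nodes: list = field(default_factory=list)

    def connect(self, node: Node) -> "Pipeline":
        self.nodes.append(node)
        return self

    def run(self, value: Any) -> Any:
        for node in self.nodes:
            value = node.run(value)
        return value


# "Custom components": ordinary functions, collapsed into nodes.
@as_node("lowercase")
def lowercase(text: str) -> str:
    return text.lower()

@as_node("prompt")
def build_prompt(text: str) -> str:
    return f"Summarize: {text}"

flow = Pipeline().connect(lowercase).connect(build_prompt)
print(flow.run("Hello WORLD"))  # -> Summarize: hello world
```

Serving such a flow behind an API, as described above, would just mean calling `flow.run` inside a request handler.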
Well, that's also exciting that you're
able to integrate MCP.
MCP is kind of the new hot thing, but I

(01:12:30):
think since it's so new, a lot of people
may not be familiar with it. Would you
maybe just describe what
that is really quickly?
Yeah, so MCP is a new protocol, right?
It's something like REST. We all know
what a REST API is, and MCP is basically
that, but made and tailored for LLMs.

(01:12:51):
So it's a way to get consistency
across how the model, the agent,
interprets what comes in and what
it spits out. It contains information
about the tools that agents can access.
It contains information about how and
when to access these tools and everything

(01:13:11):
altogether in a formatted and structured
way so that the LLMs make fewer mistakes,
right? There's a traditional way to call
tools, which is you pass the descriptions
and the instructions for each one of the
tools in the prompt or instructions of

(01:13:32):
the agent, and with MCP, that changes a
little bit: now the agent has
access to the server, and to get to know
what tools it has access to, the agent makes
a call itself. So it's almost like the
agent, unlike in the previous
format, just has one tool, which is the MCP
server, right? Now it's gonna call the

(01:13:52):
MCP, it's gonna get back the response of
what it has access to, how to use each
one of these abilities,
and when to use them. So it's faster
because of that communication, it's more
structured so it makes fewer errors,
right? And it's well structured so that

(01:14:14):
you can change, update the tools you have
in that server without having to reset or
change instructions of the agent. So they
live in separate places right now. It's
decoupling the tools from the agent.
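The decoupling described here can be illustrated with a toy in-process simulation of the message shapes involved. A real MCP client and server would communicate over stdio or HTTP through an MCP SDK; the tool registry and handler below are invented for illustration:

```python
# Toy sketch of the MCP idea above: the agent doesn't carry tool
# descriptions in its prompt; it asks the server what it can do.
# This simulates the request/response shapes in-process.

import json

# A toy "server" holding its own tool registry, decoupled from the agent.
SERVER_TOOLS = {
    "search_docs": {
        "description": "Search the documentation for a query.",
        "inputSchema": {"type": "object",
                        "properties": {"query": {"type": "string"}}},
    },
    "get_weather": {
        "description": "Get current weather for a city.",
        "inputSchema": {"type": "object",
                        "properties": {"city": {"type": "string"}}},
    },
}

def handle_request(raw: str) -> str:
    """Server side: answer a tools/list request with structured tool info."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        tools = [{"name": name, **meta} for name, meta in SERVER_TOOLS.items()]
        return json.dumps({"id": req["id"], "result": {"tools": tools}})
    return json.dumps({"id": req["id"], "error": "unknown method"})

# Client side: the agent discovers its abilities with one call, instead
# of having every tool description baked into its instructions.
request = json.dumps({"id": 1, "method": "tools/list"})
response = json.loads(handle_request(request))
tool_names = [t["name"] for t in response["result"]["tools"]]
print(tool_names)  # -> ['search_docs', 'get_weather']
```

Updating `SERVER_TOOLS` changes what the agent discovers on its next call, without touching the agent's instructions, which is the decoupling Rodrigo points at.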
Okay, yeah, that's pretty helpful.

(01:14:35):
So I know we don't have a lot of time
since there is gonna be an event soon,
but when it comes to what you see people
have been building with Langflow or what
you'd like to see them build with
Langflow, is there any kind of use cases
that you would like to

(01:14:56):
see people build or maybe?
I think we lost you, Mark. Yeah, I think
we lost your audio, but I kind of
understand what you were trying to say.
We're kind of curious,
what have you seen your developer
community or your users do with
Langflow? What kinds of applications
have they built? And is there any

(01:15:17):
specific kind of use case that's well
suited to using Langflow? Yeah,
absolutely, that's a very good question.
And we see a pattern so far. We have
three main types of applications that
folks are building with this kind of
tool. And the first one is LLM pipelines,
right? So this is basic workflows that in

(01:15:37):
the middle of the process, they need to
use an LLM or a heavy model that needs a
provider or something like that in the
middle. So it's logic, traditional
workflows, but that need the power of
these LLMs in the middle of them. That
will go through automations, triggers,
that will enable a pipeline to happen and

(01:16:00):
so on. Then the second one is probably,
I'd say RAG, right? So everything
where you need to access and consult
a big amount of context, right?
You have databases, you have content from
the web, you have content from external

(01:16:22):
APIs and so on. You wanna bring that as
information to a model.
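A miniature sketch of the RAG pattern being described: index a body of content as vectors once, then retrieve only the pieces relevant to a query before handing them to the model. The bag-of-words "embedding" below is a toy stand-in for a real embedding model:

```python
# Toy RAG retrieval: vectorize documents once, rank them against a
# query, and return only the most relevant chunk for the model to see.

import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in embedding: word counts (real systems use dense vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Langflow supports drag and drop flow building",
    "RAG retrieves relevant context before calling the model",
    "Agents can call tools through the MCP protocol",
]
index = [(doc, embed(doc)) for doc in documents]  # vectorize once

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

print(retrieve("retrieving relevant context for the model"))
# -> ['RAG retrieves relevant context before calling the model']
```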
Model context is still a limitation. The
model accuracy over larger contexts is
also a limitation. So it's an increasing
limitation, right? And what you're doing
with RAG is basically you are vectorizing

(01:16:43):
this huge amount of content and
retrieving that content becomes much
faster, efficient and the agent, the
model just sees what it needs to see in
order to answer to a query from a user,
right? So that's RAG types of
applications, right? And not just basic

(01:17:03):
RAG, but Langflow allows you to do all
sorts of logic with RAG around it as
well. So it goes from filtering columns
to re-ranking, graph RAG, and so on. And
then I think the final major kind of
application that we're thinking about is
multi-agent application, right? So you

(01:17:25):
have, we already know what an agent is.
An agent is basically a language model
with access to these tools, right? So
it's this brain with access to
these abilities, and it has some sort of
recursion in the middle so that it tries
and tries again, and it answers to itself
and sees the response to reason, right?

(01:17:46):
That's what causes the reasoning process
to go through. With structuring in the
middle, so we call them agents because
they feel like they have agency, right?
They have the power of making decisions.
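The agent loop described here, a model with tools plus bounded recursion, can be sketched minimally. The model below is a hard-coded stand-in so the control flow is visible; nothing here is Langflow's actual agent component:

```python
# Sketch of a single agent: a "model" with access to tools, looping
# (the recursion described above) until it decides it has an answer.

def fake_model(question: str, observations: list[str]) -> dict:
    """Stand-in for an LLM: decide to call a tool or give a final answer."""
    if not observations:
        return {"action": "tool", "tool": "lookup", "input": question}
    return {"action": "final", "answer": f"Based on: {observations[-1]}"}

# The tools the agent can choose from (here, a single toy lookup).
TOOLS = {
    "lookup": lambda q: f"notes about '{q}'",
}

def run_agent(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):          # bounded retry/recursion loop
        decision = fake_model(question, observations)
        if decision["action"] == "final":
            return decision["answer"]
        # Execute the chosen tool and feed the result back to the model.
        result = TOOLS[decision["tool"]](decision["input"])
        observations.append(result)
    return "gave up"

print(run_agent("what is Langflow?"))
# -> Based on: notes about 'what is Langflow?'
```

Wrapping `run_agent` itself as an entry in another agent's `TOOLS` dict is the "agent as a tool" composition discussed later in the conversation.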
And this is a single agent, but when it
comes to multi-agent, the reason why you
wanna connect one agent to another and

(01:18:07):
maybe create a hierarchical structure is
because a single agent with a lot of
tools starts making a
lot of mistakes, right?
The more tools it has access to, the
higher the chances that it's going to make
a mistake.
And tools that overlap in
description might be even more confusing

(01:18:28):
to it. So in the same way that humans
organize structurally to work and be
productive, organizing these agents in
that way is starting to become something
interesting, and not only
interesting, but probably state of the
art for a lot of problems that we see in
the world today. And Langflow sort of

(01:18:49):
allows that via a diversity of ways. One
of them is that, in Langflow, we
have this thing that is called tool mode,
right? So every component in Langflow
can become a tool and even the agent
component can become a tool. So you can
access agents via other agents because
you make the first agent a tool, right?

(01:19:09):
And that can go recursively, getting more
and more powerful and complex if you want.
That's awesome. Yeah, I think
we're almost out of time,
but maybe one last question. What do you
see as the future of Langflow?
That's a very, very good question. Maybe,
especially with the recent acquisition
of DataStax by IBM, you might

(01:19:30):
have access to more resources and where
every day we're one step closer to AGI.
So do you think with
models getting better,
context windows getting larger, more
tools? Yeah, so I see a lot of different
sparks, but also a sort of unified
vision. First, I see a lot of

(01:19:52):
the foreseeable stuff, like auto-generating
components via
some sort of vibe coding thing.
You can imagine assisted flows being
generated. So all of that, I think that's
sort of obvious and inevitable progress
to the platform. But as a builder, an

(01:20:15):
agentic workflow builder, I also see
better ways to orchestrate agents
together. So we have this sort of vision
where you can imagine something like a
task management table, something like a
Jira tracker or something like that,
where agents are assigning tasks to each
other. And you could have one

(01:20:36):
orchestrator that is going to coordinate
everyone, or you could have these agents
sending messages to each other and
elaborating on the task. And this could
go async, where this entire system is
accomplishing entire pipelines of tasks
that today are literally so far from
possible with any model or system of

(01:20:56):
wrappers around models. I think this is
one of the big visions around it. Another
thing that I like to think of Langflow
becoming is a little bit more towards
what retraining and fine tuning does. Not
only talking about fine tuning at a model
level, talking about fine tuning the

(01:21:20):
entire system. You have workflows, you
have agents that connect to each other
and send messages, and they make
mistakes, of course. AI, as
smart as it is, doesn't know
what humans want. Because words are
abstract. Human words are not so

(01:21:42):
deterministic, right, that the AI
understands everything. So when mistakes
happen, we want to correct them. We want
to label. So with auto labeling as
well: adversarial agents
fixing the mistakes of others. And this
being sort of a retrainable brain that

(01:22:04):
behind the scenes is improving over
multiple iterations, right? You can
imagine the system organizing tasks. And
within this task organization, there are
errors. These errors are being tagged and
labeled as correct, incorrect, or "the right
answer should be this or that." Human in
the loop, sort of like the human is one

(01:22:25):
more player in this game, where there are
a lot of actors in the scene. And humans
are sort of deciding together with the
agents if things go or stay; they review
tasks. Some agents may have access to
reviewing tasks as well. Some others
could just be very limited to basics, like
junior developers or junior

(01:22:48):
employees, right? So we can have this
variety of individuals playing this
organized orchestration. And I think this
is what Langflow is heading towards more
and more, right? And I believe that
Langflow is a good player for that
because it's backed again, it's backed by

(01:23:08):
the programming language that makes the
world turn in terms of machine learning
and AI and transfer learning and fine
tuning and so on, which is Python, right?
So I think that's a little bit, right?
If you talk to everyone at the
Langflow team, everyone is going to give
you a different idea. And I promise that

(01:23:29):
at least 30% of them are gonna be super
cool. So it's a tough decision to get
over there, but I think we're listening
to the community more than ever right
now, taking a step back. It's a big
moment for data stacks, the announcement
of the acquisition by IBM, a big moment
for all of us as well. We're taking a
step back and trying, at this moment, to

(01:23:49):
slow down a little bit to speed up in the
right direction. That makes sense. I
think that's awesome and a great note to
end on. We'll look forward to agents
entering the workforce and being able to
assign them tickets and
hand over tasks.
But I think on that note, we should get
going and continue this discussion with a

(01:24:11):
broader audience at our meetup.
Yeah, thanks so much for taking the time.
Thank you. And for those who haven't seen
that yet, we today just launched Langflow
Desktop. So head over to Product Hunt. We
have Langflow over there. Check out in
our website, langflow.org slash desktop.

(01:24:32):
You can download it from there as well
and start playing. So have fun with it.
Let us know what you think and join the
community to give us feedback. We're
always welcome. Sounds great. We'll
definitely put that in the description.