
May 22, 2025 54 mins

Send us a text

This week on Sidecar Sync, Amith Nagarajan and Mallory Mejias dive deep into two cutting-edge developments in the AI world. First up is the mind-bending concept of "sleep time compute"—how LLMs might learn and improve during their downtime, transforming into smarter, faster assistants overnight. Then, the duo breaks down OpenAI's $3 billion acquisition of Windsurf, the booming arena of AI coding tools, and what it means for developers and associations alike. From persistent memory to prototype-ready AI partners, this episode is packed with insights for both techies and the tech-curious.

🤖 Join the AI Mastermind:  https://sidecar.ai/association-ai-mastermind

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/ 

🎉 More from Today’s Sponsors:
CDS Global https://www.cds-global.com/
VideoRequest https://videorequest.io/

AI Tools and Resources Mentioned in This Episode:
Claude Code ➡ https://docs.anthropic.com/en/docs/claude-code/overview
Claude Desktop ➡ https://claude.ai/download
Windsurf ➡ https://windsurf.ai
GitHub Copilot ➡ https://github.com/features/copilot
Cursor ➡ https://www.cursor.so
OpenAI Codex ➡ https://openai.com/blog/openai-codex
ChatGPT ➡ https://chat.openai.com
Gemini by Google ➡ https://deepmind.google/technologies/gemini

🚀 Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/

👍 Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/

Amith Nagarajan is the Chairman of Blue Cypress https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory:
https://linkedin.com/mallorymejias


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:00):
You know, we don't have technical strength, we don't have a thousand developers, we're not Amazon, we're not Netflix, but the field's leveling and you now have the ability to do this if you take the time just to go and experiment with this stuff.
Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven

(00:22):
by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host.
Greetings everybody, and welcome to the Sidecar Sync,

(00:44):
your home for content at the intersection of artificial intelligence and all things associations. My name is Amith Nagarajan.

Speaker 2 (00:52):
And my name is Mallory Mejias.

Speaker 1 (00:54):
And we're your hosts, and today, hopefully, you will not be put to sleep, but we're going to be talking about some really interesting topics in the world of AI, and you'll be hearing about it shortly. So very, very excited about this particular set of topics. It's going to be, I think, quite impactful and quite exciting for everyone. How are you doing, Mallory?

Speaker 2 (01:13):
I'm doing well, Amith. I feel like AI is getting more and more like humans than we know. I mean, even AI needs to sleep. We're going to talk about that in just a bit, but I thought that was interesting to preface the episode with.

Speaker 1 (01:26):
Yeah, you know, I think AI is learning how to sleep. AI doesn't necessarily need to sleep, but when AI sleeps, interesting things start to happen, which is something we're going to be talking about in detail. I think that it's just yet another branch of opportunity and research in the world of AI, so I can't wait to talk about

(01:47):
that in more detail. And meanwhile, at Sidecar, I know we've had some very exciting activities in terms of our AI learning content for association folks, evolving, brewing, developing, not quite sleeping, over the last several months, and we're about to roll out, and by the time all of you are listening to this or

(02:09):
watching us on YouTube, by that point in time you will have all new content on the Sidecar AI Learning Hub. It is actually all AI-generated content, and I'll take a second to explain that. The content itself, the actual material, is not AI generated. It's generated with a little bit of AI assistance, but primarily by us on Team Sidecar.

(02:32):
But what we do from there is we utilize an AI-driven software system that we've built that essentially generates audio and video for the content, which allows us to much more rapidly change it, so we're super excited about it. The last time Mallory and I, along with our other colleagues on the Learning Hub, made big updates was in the fall and, as

(02:53):
you all know, AI is changing so fast that the content from the fall is largely in many ways still good, but in many ways out of date, unfortunately. So that's the reason we're shifting to this model, so that we can make incremental updates constantly and push those updates every couple of weeks to the Sidecar Learning Hub. So we're extremely excited about this.

Speaker 2 (03:13):
It's an incredible feat, Amith, and I know you mentioned when you and I worked on the content in the fall, we used AI pretty heavily to help us generate slide decks and whatnot, but even so, it was quite a tedious process. We had to sit down at our computers and record everything slide by slide, which we were happy to do. But at that time I thought, wow, we are moving as fast as we

(03:36):
possibly could. And then I think, Amith, you had the idea. When, would you say, was the first time you thought, wait a minute, we could probably use AI for this? Or did you always think that?

Speaker 1 (03:46):
Well, I mean, the idea came to mind at various points in time. To me, as AI video and AI audio generation got better and better, at some point in that curve there would be quality high enough, or even better than the average human, and it would be an opportunity, and so I started thinking about it probably a year ago. But I'm a fundamentally very lazy person.

(04:06):
I don't like doing things more than once.

Speaker 2 (04:08):
I don't know, Amith, if I would call you lazy.

Speaker 1 (04:11):
I'm extremely lazy, and I have a very specific type of laziness. I don't like repeating the same thing more than once, unless it's skiing. I do like skiing a lot of the same runs repeatedly, but at work I like doing new things all the time. So you can call me a spoiled brat, but I've had the good fortune, over quite a few years, of primarily being focused on the what's-new and what's-interesting kind of work.

(04:32):
Not all the time. We all have to do things we don't like, but the point is that when I run into a task that I do not like, which is any task that's repetitive, I try to find a way to automate it. I've always been like that. I've been doing that since the beginning of my career, and now with AI, it's like being a kid in a candy store, because we can automate things that previously were totally outside of the realm of anything other than science fiction.

(04:54):
So we are living in interesting times, and this new content has been reviewed by a bunch of people internally and externally and gotten really positive feedback, so I'm really excited about it. We're going to always have a sliver of human content recording and generation in various aspects of the Sidecar Learning Hub. We think that's an important addition.

(05:16):
We're going to focus there on things that don't go out of date quite as quickly, to add personality and to add humanity to the Learning Hub, and I think that's probably a good blend. We're going to experiment and learn and, you know, for our association friends, it's an interesting thing to be talking about because, of course, you're interested in AI learning content, but also because you yourself are probably a prolific

(05:40):
learning delivery organization. Most associations have their hands in learning. Some generate the majority or the substantial majority of their revenue from learning, and so when you have the opportunity to consider new ways of accelerating the delivery of learning from idea to reality, it's an interesting thing to be

(06:00):
thinking about. So we'll be sharing more and more about this with our mastermind group, which is a small, intimate group of very dedicated practitioners of association management who are on a learning journey together with us and with each other. This group meets once a month. We've talked about it in the past. It's an awesome group that's been together for about 18

(06:20):
months now, a little bit longer than that, and just in the last meeting that we had there was a detailed discussion about how to actually do AI at this scale with your educational content. So we'll be sharing bits and pieces of that with Sidecar's listeners and possibly building a course on the Sidecar AI Learning Hub all about how to build an automated AI education

(06:44):
pipeline. So really excited about it.

Speaker 2 (06:47):
Yep. Quite fun too for us to be the guinea pigs and, exactly like you said, Amith, see what works, see what doesn't, and then share all those insights with all of you, so hopefully you can take the next best step in that direction. Amith, I also have one more question for you. You said you don't like doing repetitive things. I would argue this podcast is quite repetitive. We're now on episode 83.

(07:08):
Are you planning to automate the Sidecar Sync podcast?

Speaker 1 (07:11):
Not at all. This is super fun to me. If we recorded the same topic over and over, I'd find that quite boring, as would, I believe, our listeners and our viewers, but I think that it's super fun. It's actually a great touch point each week for me where I know we're going to be talking about this stuff. It helps me reflect and put together thoughts on how I might

(07:33):
want to frame certain topics with the association market to make them most helpful, and there are always new ideas that come out of this too. So it's actually a routine, but it's not repetitive.

Speaker 2 (07:44):
Yes, all right. Well, you heard it here: Amith and I are here to stay. Today we have two exciting topics lined up for all of you. We're talking about sleep time compute (hopefully that doesn't make you too sleepy, it's actually quite interesting), and then we'll be talking about the OpenAI potential acquisition of Windsurf and some other coding tools that are out there.

(08:12):
So first, sleep time compute. Over the last year or so, maybe a little more than that, we've seen language models pushed to think harder by giving them extra test-time compute seconds while users waited. Giving models more time to think allowed them to craft better responses, but every extra second increased latency and inference cost, aka the cost to actually run the model. And the model still forgot things between chats.

(08:33):
So a new research paper from Letta, which is a UC Berkeley spinout best known for its MemGPT work, tackles that bottleneck with sleep time compute. The idea is pretty simple: keep the agent busy during downtime. So you have a heavyweight sleep agent that runs after hours, reorganizing knowledge and writing distilled insights into

(08:54):
a persistent memory. Because that memory survives across sessions, the live primary agent can answer almost instantly the next morning without burning fresh GPU time. This persistent state architecture shrinks real-time compute by about five-fold and still boosts accuracy by up to 18% on tough reasoning tasks, according to Letta's benchmarks.

(09:17):
The breakthrough matters because it turns an idle chatbot into a night-shift analyst that keeps learning instead of starting every conversation from scratch. So I worked with ChatGPT a bit to see how this might apply to associations, and it came up with two interesting examples. One might be a member service agent that digests a year's

(09:37):
renewal FAQs overnight and can greet your members with a confident one-shot answer at 9 am the next morning. Or perhaps a regulatory watch agent that scans new rules overnight, stores key points in this memory, and then delivers a curated briefing with your morning coffee. Sleep time compute shows that memory plus off-peak reasoning

(09:58):
unlocks lower costs, faster replies and continuously improving service, exactly the mix that would benefit associations, of course, or, frankly, any business for that matter. So, Amith, there's really a lot to unpack here. When you sent this to me I thought, oh man, that's a great topic for the pod. I want to talk about test time compute a little bit, because I

(10:21):
feel like it does relate to sleep time compute. Can you talk about both of those and how they relate to each other, or perhaps how one solves something that the other can't?
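For the technically inclined, here is a minimal sketch of the pattern Mallory just described: a heavyweight sleep agent that runs offline and distills recent conversations into a persistent memory store, and a lightweight primary agent that reads that memory at request time. The function names, the call_llm placeholder, and the JSON file used for persistence are illustrative assumptions, not Letta's actual API.

```python
import json
from pathlib import Path

MEMORY_PATH = Path("persistent_memory.json")  # survives across sessions

def call_llm(prompt: str, model: str) -> str:
    """Placeholder for a real LLM call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError("wire this to your provider of choice")

def sleep_agent(recent_conversations: list[str]) -> None:
    """Offline 'sleep' pass: distill recent chats into durable insights."""
    existing = json.loads(MEMORY_PATH.read_text()) if MEMORY_PATH.exists() else []
    prompt = (
        "Here is everything learned so far:\n" + "\n".join(existing) +
        "\n\nHere are today's conversations:\n" + "\n".join(recent_conversations) +
        "\n\nRewrite the learned insights as a short, deduplicated list, one per line."
    )
    # Heavyweight, slow model is fine here because nobody is waiting on it.
    distilled = call_llm(prompt, model="slow-but-smart-model")
    MEMORY_PATH.write_text(json.dumps(distilled.splitlines(), indent=2))

def primary_agent(user_question: str) -> str:
    """Live agent: answers instantly, grounded in the distilled memory."""
    memory = json.loads(MEMORY_PATH.read_text()) if MEMORY_PATH.exists() else []
    prompt = (
        "Known insights about this organization:\n" + "\n".join(memory) +
        f"\n\nUser question: {user_question}\nAnswer concisely."
    )
    # Fast, cheap model at request time, since the heavy lifting already happened.
    return call_llm(prompt, model="fast-cheap-model")
```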

Speaker 1 (10:31):
Sure, let's zoom out a little bit and talk about some of the history behind the process of scaling AI. Some of you may have heard the term scaling laws, and a few years ago there was a lot of conversation about how scaling laws seemed to continue to hold, meaning that as you increased the amount of computation that you threw at the training process

(10:52):
for AI, the models became smarter. That's basically what the so-called AI scaling laws were to show and, in fact, they did hold true for quite some time. They started to not hold true to the original benchmarks after a period of time, but there's still truth to the fact that if you throw more compute at training, you typically get a better model.

(11:13):
Of course, being smarter about how you train and being more efficient in how you train is definitely an opportunity. Algorithmic improvements are an opportunity. But that was the first dimension of compute scaling: training time, making the model smarter through better and more training. Now, test time compute was this concept. That is actually kind of an awkward term, which is, of

(11:35):
course, very much the domain of AI folks, and you know, we're very good as an industry at coming up with weird acronyms and strange words that might mean something to the nerds but not a whole lot to everyone else. But test time, essentially, is when you use the model. So training time is when there are these massive computers creating the model from scratch, essentially. And then test time is when you use the model.

(11:57):
So, Mallory, when you type into Claude or ChatGPT and you hit enter, as soon as you hit enter, that message is transmitted across the network and eventually gets to a computer where the model is running. We call that inference. Test time is another term that basically means the same thing. So models have historically, for the history of neural

(12:19):
networks, basically always been trained to respond as quickly as they can, meaning they infer from the input what the output should be. They're probabilistic machines, meaning that they'll say, hey, for this sequence of inputs, what should be the outputs? That's basically what they've been doing.
Now what's interesting is late last year there was the first

(12:40):
release of a so-called reasoning model, and we covered it in detail, first when it was called Strawberry from OpenAI, and then later when it was called o1. And since then lots has happened in reasoning models. We've talked about that a lot and, by the way, there's actually a new lesson on reasoning models in the Sidecar Learning Hub with this update that we were just talking about. But reasoning models, essentially what they do is they invoke a

(13:01):
new modality of thinking when you're querying them, when you're asking the model a question. So previously the models would only react, essentially as quickly as they possibly could. They would not edit their response as they went, even if they potentially found a mistake. They didn't look back at all, and they didn't really stop to think and say, hey, what is the nature of this problem?

(13:23):
How can I best solve it? Let me break it down into pieces, what's called chain of thought a lot of times today. So models were not able to do that. You could do those things in agents that sat on top of models, but models themselves didn't have the ability to do anything other than that instantaneous type of response, as responsively as possible. So with test time compute, what we were saying is, hey, if we

(13:44):
give the model the opportunity to think longer, then the model might be smarter. It kind of makes sense, right, if the model has certain fundamental capabilities through training. But if we say, hey, model, take 10 seconds to think about this, or take a minute to think about this, or take as long as you want to think about this. Like us, probably, if I asked you a question and gave you, you

(14:07):
know, zero time to respond, you'd have a harder time coming up with a great response, versus some things, you know, you'd probably want to step back and think about and say, hey, what's the best way for me to solve this problem? You'd start working on it. Then you might go edit your response.

(14:50):
No-transcript.
Extended thinking mode, and we've seen dramatic improvements in really complex reasoning across domains like math and physics and biology and a number of other domains as well. So that's what test time compute is about, and you know,

(15:10):
those are really the two dimensions of scaling that we've had thus far. And what sleep time compute is about is this new third dimension of potential scaling, where we can say, hey, what if we threw compute resources not during training and not when Mallory asks a question of ChatGPT, but when perhaps Mallory is not asking a question of ChatGPT?

Speaker 2 (15:31):
And what?

Speaker 1 (15:31):
can the model learn from that, and how can the model improve? That's essentially what this is about.
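As a concrete illustration of the test time compute idea discussed above, here is a small sketch of asking a model to spend an explicit thinking budget before answering, using Anthropic's extended thinking option. The parameter names and the model alias reflect the API as of this writing and may change, so treat this as a sketch and check the current documentation before relying on it.

```python
# "Test time compute" in practice: give the model an explicit budget to think
# before it produces its final answer.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-latest",                      # assumed alias; substitute your model
    max_tokens=4000,                                        # covers thinking plus the final answer
    thinking={"type": "enabled", "budget_tokens": 2000},    # the extra "time to think"
    messages=[{
        "role": "user",
        "content": "A member joined in 2019, lapsed in 2022, and rejoined in 2024. "
                   "How many total years of membership do they have through 2025?",
    }],
)

# The response interleaves thinking blocks and text blocks; print the answer text.
for block in response.content:
    if block.type == "text":
        print(block.text)
```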

Speaker 2 (15:36):
And what does that look like in practice, Amith? So would you provide access to some large data source initially, that it can kind of study while it sleeps, quote unquote, and then, when I ask it questions?

Speaker 1 (15:48):
You know, the best models are like, maybe originally they were high school graduates, and then they were university graduates, then they were elite university graduates, and now they're PhD graduates of the 80th percentile, right. So these models are really, really smart and really

(16:10):
well-versed in a wide array of domains, which is cool. But models today are fixed in that moment in time, meaning when OpenAI releases o3, which they just did, the full o3, that model, that particular version of the model, will never get smarter. So it's like having this amazing, you know, multi-PhD individual that knows all this great stuff, but every time you

(16:33):
interact with that model, it doesn't remember anything about what it got right and what it got wrong, and so that model will never be better than the day it was born, so to speak, which is fortunately not true for us, because we are continually rewiring our brains based on our continuous experience loop. And so model architecture right now is still

(16:54):
that way. Models are essentially fixed in time as of the end of their training processes. Now, you can do other things. You can do what's called fine-tuning, you can do additional training through something called reinforcement learning. There's a lot of cool stuff you can do at the model architecture level, but they all require significant development processes and they're not things that you do on a continuous basis.

(17:15):
So models are kind of frozen in time, like that university grad who's brilliant but is incapable of remembering what you told them the day before when the next day comes around. You'd find that quite frustrating at work if you had a team member like that, right. So then the question is, OK, well, how do we deal with that? How do we improve on it? And so a lot of different things have been done.

(17:37):
You know, people have been doing things like building scratch pads and trying to give models forms of memory. MemGPT, which you referred to, which was from the same group of folks, attempted to do that, where it was trying to basically create a scratch pad for memory to make it possible for models to quote unquote have persistent memory, in an earlier version of this concept. The idea, though, is that the model

(17:59):
actually has no memory. It's just essentially a separate component that the model has access to that has memory. That's what this is about. So now, sleep time compute, coming back to that. The key here is, what are you trying to do when you sleep? Well, first of all, just individually, not thinking about the science behind it, you want to get rest. Why do you want to get rest?

Speaker 2 (18:21):
So I can be better the next day, you know.

Speaker 1 (18:24):
Yes, you'll feel better, you'll feel refreshed, you'll get started again anew, and perhaps in that process, too, something else is happening. You know, that's kind of quiet and not really something we think about a lot, but our long-term memories are being formed, some things are being pruned. We're kind of having a cleaning process, both for emotions and for thoughts, and there's this distillation that's

(18:45):
occurring where the brain is essentially saying, oh, this thing really was important. Today Mallory learned this really important thing, or had this experience that was really emotionally positive or negative, and then kind of lodges those into your memory and actually, in some cases, rewires the way your brain is actually working

(19:07):
functionally. So it's quite interesting how that works. Now, our AI architecture is ridiculously simplistic compared to the way a biological neural network works. We try to learn from this process, right? So the idea behind sleep-time compute primarily is to emulate what happens in a biological neural network, aka the brain, when we're sleeping. So I'll give you an example.

(19:29):
We're actually implementing this concept, and have been for about a year, in one of our AI tools called Skip. So Skip, if you haven't heard me talk about it before, essentially is a data analyst agent. So what does Skip do? Skip is a conversational AI like ChatGPT or Claude or Gemini, and you talk to Skip. Skip has private access to your data, so data that you

(19:51):
consolidate into an AI data platform from your AMS, LMS, whatever your systems are, and then Skip is able to talk to you about your business and also write reports. Basically, the primary function is to create analytics and reports, and Skip needs to understand quite a bit about you as a user, your organization overall and, of course, your

(20:13):
data in order to be effective. So what have we historically done when implementing Skip for clients? We've tried to learn a lot about the organization and the data, put a bunch of information into Skip's brain, and make it possible for Skip to be quite effective, and that works pretty well. You know, that gets us 80, 90 percent of the way there, sometimes 95-plus percent of the way there.

(20:34):
But users are constantly coming up with new ideas and having new questions, right, and so Skip may not have seen a particular request, or some users might use slightly different types of terms than others, and so Skip might fail at solving the user's problem. So I might say, hey, I want to run an analysis that shows me member retention, but I want to correlate that member retention

(20:56):
with how long the member's been with us and also what their level of education is. So run a report, generate an analytical kind of view of that. That might be pretty straightforward sounding to us, but Skip might interpret it in different ways depending on how much prior experience he has had in solving problems like

(21:16):
that for you, right? So what if Skip gets it wrong? Well, Skip gets it wrong and I say, well, that's not quite right, you pulled the data from the wrong place, or it's really not what I was looking for. So I have this conversation with Skip where I'm giving feedback, and Skip's like, oh, okay, cool, and then Skip will be able to fix the problem and give you a revision, and eventually you get what you want.

(21:37):
Right, it might take two, three, four turns. And we asked the question, well, how can we make Skip just continually learn, and also have transference of knowledge from conversations with one user to another across an organization? And so, sleep time compute? We don't call it that. We call it a learning cycle, which is not nearly as cool as sleep time compute. We should have called it that. I was talking to Thomas Altman, who quite a few of our

(21:57):
listeners know, and he and I were chatting about that. We're like, yeah, we totally should have called it that.

Speaker 2 (22:03):
Dropped the ball. Done.

Speaker 1 (22:05):
Yeah, we, you know, we tried to call it something a little bit more generic, but learning cycles. Essentially what happens is this. It's very much what you described at the beginning of this segment, where, essentially, outside of when a user's asking Skip for anything, Skip on his own, every so often, and this is typically actually done every hour or so, not overnight necessarily, Skip will say,

(22:26):
hey, I had this long conversation with Mallory and I also talked to these 20 other users, and in these conversations, what did I learn? Well, let's see, Mallory really liked it when I did this, she really didn't like it when I did this. And so it's kind of like if you have ever done journaling, where maybe at the end of a day, you are in the practice of

(22:46):
saying, hey, I'm going to write down some of the things, like some of my experiences, some of my thoughts, some of my feelings from the day. That can be both therapeutic and it can also be a very helpful way of learning. That's kind of what Skip's doing. Skip has this journaling process where Skip's saying, hmm, that's interesting, what I learned, and how does this compare to everything I've ever learned before?

(23:06):
Because Skip's quote unquote journal is everything Skip's ever learned in these prior learning cycles. So Skip's saying, well, here's all the notes I've ever taken before. I've learned these things. And then in some cases, like, oh, what Mallory really means when she says ABC is what Amith means by something else, because each has a different terminology set. And so then in the future Skip becomes smarter, not only

(23:27):
dealing with Mallory, but dealing with everyone. So that's the way these learning cycles work. This happens actually quite slowly, offline. In fact, you can utilize what are called batch APIs through all the major AI providers to get much cheaper rates. You just get much slower response times, and in this process you get back your feedback. But if you get it back half an hour later or even a couple

(23:47):
hours later, it doesn't really matter that much. And so then that feedback essentially gets stored in this quote unquote journal. Right, what I'm calling a journal is basically the scratch pad, but it's a distillation of knowledge using really high-horsepower, high compute. So we're using, like, o4, and we're using Claude 3.7, and we're going to keep pushing the boundary of using really the

(24:09):
most expensive, slowest models to do the distillation of knowledge, to say, hey, what are the key elements of insight that I need to glean from these 5,000 conversations I've had in the last day, and then how do I consolidate that with everything I've ever learned before? And then what happens in the future is, every time future users come in and ask questions, that distillation of knowledge, that journal, is immediately and

(24:32):
instantly available to Skip to learn from, and so Skip will be able to utilize that to improve the quality of his responses. With Skip specifically, we're very early in testing and rolling this out. We actually have not rolled out this capability to any users yet, but we will soon, like in the next 30 days, but our testing so far shows very positive results. It's a really exciting, you know, additional dimension of

(24:54):
scaling.
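Since batch APIs come up here, a quick sketch of how a learning cycle like the one Amith describes could be run against a provider's discounted batch endpoint. The submit_batch and get_batch_results helpers below are placeholders standing in for whichever vendor's batch API you use, and the prompt and journal format are illustrative, not Skip's actual implementation.

```python
import json
import time
from pathlib import Path

JOURNAL = Path("journal.json")  # the distilled "journal" that survives across cycles

def submit_batch(prompts: list[str], model: str) -> str:
    """Placeholder: submit prompts to a provider's discounted batch endpoint, return a job id."""
    raise NotImplementedError

def get_batch_results(job_id: str) -> list[str] | None:
    """Placeholder: return completions once the batch job finishes, else None."""
    raise NotImplementedError

def learning_cycle(recent_conversations: list[str]) -> None:
    """Run one offline learning cycle: distill recent chats and merge them into the journal."""
    journal = json.loads(JOURNAL.read_text()) if JOURNAL.exists() else []
    prompt = (
        "Prior insights:\n" + "\n".join(journal) +
        "\n\nNew conversations:\n" + "\n".join(recent_conversations) +
        "\n\nReturn an updated, deduplicated list of insights, one per line."
    )
    job_id = submit_batch([prompt], model="most-capable-slow-model")

    # Latency doesn't matter offline; poll until the cheaper, slower batch job completes.
    while (results := get_batch_results(job_id)) is None:
        time.sleep(60)

    JOURNAL.write_text(json.dumps(results[0].splitlines(), indent=2))
```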

Speaker 2 (24:56):
Sleep time compute is a good name, but I will say you're on to something, Amith, with the AI journaling. I think that could definitely get people interested. What I want to ask you is, it seems like it's less about giving the model access to some repository of data to study beforehand and more about the ability to learn from previous

(25:16):
experience and previous interactions. Is that correct?

Speaker 1 (25:20):
I think you could use it in both ways. But in our use case for Skip, for Betty, for other products we develop, learning from the interactions with users is really, really important, and until this innovation came along, it was really something that required a seriously technical sysadmin or developer, even, to go in and provide that

(25:41):
additional knowledge to these tools. And now we're in this continuous learning loop, essentially, where, as you interact with these systems, they'll just feel smarter. Every day that you use them, they'll feel smarter, they'll be faster and they'll be better. So it's pretty exciting. Now, the underlying neural network, the underlying models we use, have not gotten any better in terms of their knowledge, but what we've essentially done is built an

(26:03):
engineering solution on top of that basic layer to make the system smarter. At some point, if the neural networks become more liquid or more elastic in their nature, that will be great, but ultimately that also has some risk to it, because, you know, models tend to be shared across organizations, so do you really want other organizations'

(26:26):
behavioral changes to affect the way your version works? There's a layering concept that people are working on, or what I just described won't be happening, but there are all these different things happening at the same time. This innovation, I think my main takeaway that I'd share, particularly with the non-technical leaders of associations, is, models themselves, as they get smarter,

(26:48):
it's great, it's exciting, but this innovation means things are going to happen faster. The capabilities of the system that you're using, whether it's an AI for, you know, conversational intelligence, or if it's a coding tool or whatever it is, these tools are getting smarter and smarter and smarter, and if you think that we're about to slow down because we've had so much

(27:08):
progress, I think it's quite the opposite. It's going to continue to compound and drive progress forward at an even crazier pace. So pretty nice.

Speaker 2 (27:17):
It's like can it get faster?
I know it can.
I know it can in theory, butit's crazy to me.

Speaker 1 (27:22):
Yeah, the speed of progress. I mean, the numbers are just math, right? For our brains, we're already blowing up. But you know, we'll see. But sleep time compute definitely is something I think people should at least be aware of at a minimum, because it's something that, if you're thinking about building one of the systems that you described earlier, Mallory, you know, member services agents or something else, the capability to

(27:44):
get smarter in this way is a fairly new concept. And so, being able to understand that that is possible, if you're working with your team, whether it's an in-house development team, a third party or a product company, many of them might not even know that this is a possibility. So you, as the non-technical individual in the room, can come in and say, hey, have you heard of sleep time compute?

(28:05):
You can start to weave that in. It might solve some of these problems that you're saying are unsolvable. So that'd be a fun conversation.

Speaker 2 (28:12):
I think one day this podcast is going to turn me into a technical person. I don't know exactly how yet, but I'll just start saying, you know, I'm pretty technical.

Speaker 1 (28:21):
Yeah, I mean my last question.
You're on your way.

Speaker 2 (28:23):
My last question on this topic is kind of my gut reaction when you shared this information with me, which is, this sounds like it could actually be expensive, running extra cycles while the model's quote unquote sleeping. Maybe it could have a negative impact on the environment as well. You said that wasn't exactly the case, so can you address that?

Speaker 1 (28:43):
Yeah, I actually think this is going to help in all of those areas. So, first of all, most of these sleep cycles, like yours and mine, you have the opportunity to run them offline at night, and so the power grid is less busy in the evenings. You know, that's driven by a lot of factors. Obviously, people aren't doing as much stuff at night.

(29:05):
Also, you don't need as much power to cool things with air conditioning. All that kind of stuff affects power consumption, so you tend to have both less expensive power and sometimes surplus power available in the evening, so that makes it more environmentally efficient in many cases to do what I'm describing. The other thing is that you don't really care so much about latency, so you can send your workloads basically anywhere, so you might have data centers that are too far away to be, you

(29:29):
know, effective in terms of latency for a real-time application. So that gives you another opportunity. And the other thing that's important to point back to from the research is that they found that, because of the learning from the sleep time compute, or what we call learning cycles, it actually decreases the use at inference time. So it makes the models faster, because they can refer to this

(29:51):
distillation of knowledge and solve a lot of problems that previously might have required multiple turns of a conversation, which is, of course, very expensive both in terms of GPU or LPU time and also with respect to environmental impact. So the net effect of this is: use offline resources that are less expensive and less environmentally impactful to

(30:12):
improve the efficiency of your online resources. So I think it's actually a really positive story in all those ways.

Speaker 2 (30:19):
Mm-hmm, well, perfect. It sounds like AI deserves a nap time just as much as we do, and it's beneficial for all of us, AI and humans alike. Next up, we're talking about the OpenAI acquisition of Windsurf, and we'll cover some other coding tools as well. So OpenAI has reached an agreement to acquire Windsurf, an AI-powered coding tool formerly known as Codeium, for

(30:42):
approximately, just some small change, $3 billion, making it its largest acquisition to date. Windsurf is an advanced AI integrated development environment, or IDE, that leverages large language models and agentic AI to automate and enhance the coding process. It's recognized for features like Cascade agentic AI, which

(31:02):
enables autonomous code generation and refactoring, a local codebase indexing engine, which allows efficient context-aware code suggestions, and SuperComplete, which predicts developer intent and offers inline code completions. The acquisition is widely seen as a defensive and strategic move for OpenAI, which faces rising competition from Google,

(31:24):
Anthropic, Microsoft and fast-growing startups like Cursor. Speaking of, beyond Windsurf there are several powerful AI coding tools worth considering. GitHub Copilot is a widely used assistant that integrates directly into popular IDEs, offering real-time code completion, chat-based help and multi-file editing capabilities.

(31:45):
Cursor, which I just mentioned, provides a full-featured AI-powered IDE experience with advanced multi-line autocomplete, chat-driven code edits and deep context awareness, perfect for power users who want granular control over their coding workflow. Meanwhile, Claude Code from Anthropic shines as a terminal-based agentic AI assistant designed for complex

(32:08):
multi-step coding tasks, bug fixes and codebase exploration, catering especially to developers comfortable with command line interface or CLI environments. Each of these tools brings unique strengths that complement different coding styles and project needs. And, Amith, I know you have some experience with maybe, for sure, probably all of

(32:29):
them, if I had to guess. But I feel like my first question here is more of a declaration. We can always kind of follow the dollars, right, if we want to look at trend lines, if we want to look at where we're going in the next few years, follow the money. Obviously, OpenAI making this $3 billion acquisition of Windsurf makes you probably realize that this is a direction we need

(32:49):
to focus in. Is that shocking to you, that we're going to be putting more dollars into AI-assisted code?

Speaker 1 (32:58):
Not at all, and I think there's going to be, you know, continued investment in this area. And you know, coding has been seen for some time now, for several years, as this killer app for the current generation of language models, and it continues to be the case. You know, Windsurf is one of the tools in this space, as you mentioned, and I think, by the way, what you just shared

(33:20):
shows that you are pretty technical, going back to the earlier segment. You know, with all these AI coding assistants, they all do something in common, which is they build software for you. Right? That's what you're trying to do. You're trying to say, hey, what can I do to build software without being a coder, or to be a more powerful coder? And what I would point to in terms of the trend line is the

(33:41):
ability for a so-called non-technical person to build software, to build applications, to add to existing applications, and do it in non-trivial ways. So for a long time, we've had the ability, in a variety of different ways, to build very simple things. For example, there's a product called Airtable that came out years ago that made it possible for business users to create

(34:03):
databases in the cloud. It was not like SQL Server or Postgres or these other, you know, developer-oriented databases. It was very, very simple for people to create apps. I mean, even if you rewind in time prior to that, we had Microsoft Access for the last, you know, 20-plus years. That allowed fairly non-technical people to build meaningful business applications.

(34:23):
But beyond, like, declaring what type of information you wanted to store, beyond that, you kind of needed a coder to come in and, like, build things for you. And what's changing now is the ability to talk to an AI and say, hey, I want the app to do this and this and this and this. I want my membership application to work this way when the user comes into the website. I want my pricing to work this way. I want my, you know, my functionality for abstract

(34:46):
submission to work in these other ways. I'm using association examples intentionally because these custom, you know, code-based things are way, way more accessible to everyone now. But coming back to your broader point, Mallory, about following the money, that's generally an interesting path to consider. I think it's true that it's oftentimes a line that gives you

(35:10):
insight into where things are going. At the same time, sometimes those insights might be directionally correct, but the timing and kind of the magnitude of the investments may be wrong. I actually think in this case the amount of money they're spending is trivial to them, a very, you know, it's kind of an irrelevant amount. It's more about them getting into the coding space, and that's

(35:32):
how big these dollars have gotten in the world of AI, that that represents less than 1% of OpenAI's market. Not market cap, is what I was going to say, but their latest valuation. They're still a private company. So I would point out that, you know, ultimately the model business, meaning the business of

(35:52):
building the underlying AI models like GPT-4 and Claude 3.7, that is a race to the bottom. It's going to be very, very difficult for companies to make significant money in building and selling models. There are free open source options available that are nearly as good as the commercial counterparts.

(36:14):
Some argue that the open source market will at some point overtake the commercial market. We talked about that with DeepSeek R1 back at the beginning part of the year, where that model was as good as o1 from OpenAI, which is obviously a proprietary piece of software. So if models are becoming cheaper and cheaper and cheaper,

(36:35):
and eventually close to free, how do you make money? You can't scale your way out of something that's approaching zero in terms of revenue and profit, so all these companies are heading to the application layer. So the application layer includes coding, it includes agents, it includes, you know, things that integrate into business applications. It includes research.

(36:56):
It's all the utility that you get as a business user on top of the model. So you think about people that are starting to form opinions and loyalties even to certain tools, whether it's Claude or it's OpenAI or anything else. It's not really because of the model. The model is very, very similar between Claude and ChatGPT's latest underlying model.

(37:18):
But it's about user experience, it's about simplicity, it's about low friction. It's also about connectivity. So one of the things that has been going really well for the Claude team is, I mean, they were the people who proposed the standard called MCP, which we've covered recently on the pod, and the model context protocol, or MCP, opens up AI systems to all sorts of connectivity with other tools, as we both covered and

(37:41):
Mallory demonstrated in the podcast. It's really, really exciting, and Claude was the earliest adopter of this standard, and now everyone else is following. So you know, that makes Claude more functionally valuable to me than ChatGPT. ChatGPT is likely to very soon support model context protocol, maybe even by the time you're

(38:01):
listening to this, but as of this moment in time, it does not. So it's those kinds of things that make the ecosystem better, that lower friction and improve, like, the business value, and coding is just one of those applications. In fact, just to kind of put an exclamation point on this, OpenAI, in addition to the Windsurf thing, they recently announced that they're hiring the former Instacart CEO, Fidji

(38:25):
Simo, who was and is on the OpenAI board, to be the CEO of not OpenAI but OpenAI's applications business, which I believe will include this as well as, you know, ChatGPT as a consumer product and a number of other things. So the applications business is clearly going to be where the

(38:46):
money is at, and you know, clearly our thesis is that if you focus even more specifically within apps on particular verticals or particular highly, you know, specialized use cases, you can build something deeply meaningful for people and also have a path to, you know, a sustainable business at the same time.
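For listeners who want to see what the model context protocol mentioned above looks like in practice, here is a minimal sketch of an MCP server exposing a single tool, written against the official Python SDK's FastMCP helper. The tool itself, a made-up membership lookup, is purely illustrative; treat this as a sketch of the pattern rather than a production integration.

```python
# A toy MCP server: exposes one "tool" that an MCP-aware client (for example,
# Claude Desktop) could launch and call. Install the SDK with: pip install mcp
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("association-demo")  # server name shown to the client

# Hypothetical data standing in for a real AMS lookup.
FAKE_MEMBERS = {"1001": {"name": "Pat Example", "status": "active", "joined": "2019"}}

@mcp.tool()
def lookup_member(member_id: str) -> dict:
    """Return basic membership info for a member ID (illustrative only)."""
    return FAKE_MEMBERS.get(member_id, {"error": "member not found"})

if __name__ == "__main__":
    # Runs over stdio so a local desktop client can start it and talk to it.
    mcp.run()
```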

Speaker 2 (39:02):
For our technical folks, Amith, I know you have experience with most of these tools. If there's someone technical, I don't know if this would be the case, but someone technical listening to this podcast who has not experimented with any of the tools that I mentioned, or with AI-assisted code generation, what would you say about the experience? One, using these tools, if you have any favorites, and then two,

(39:24):
kind of what that experience is like, developing software with AI versus without?

Speaker 1 (39:30):
So we have a lot of people using Claude Code. That's, hands down, our favorite coding tool. It's far more powerful than anything else we've tried, including Windsurf, Cursor, Replit, you know, Microsoft Visual Studio Code. That's not to say that it's actually used instead of those things. You still need what's called an IDE, which is this overall visual development environment where

(39:53):
you can see your code and edit it and do things. But having a command line interface for developers is super powerful, because it allows the tool to interact with your computer in ways that these other software tools really generally can't. It's also just much, much smarter. Where Claude Code shines is being able to deal with super complicated,

(40:15):
long-running processes, where you want to, you know, go through an entire code base and make certain types of changes or check for problems, look for performance optimization opportunities or, in some cases, build entirely new apps completely from scratch. So if you consider yourself kind of an intermediate to advanced developer, I would say get Claude Code. It runs on Mac.

(40:36):
It will run on Windows using something called WSL, so it's pretty easy to install. OpenAI has a competitor product they announced fairly recently called Codex. I don't find it to be nearly as good as Claude Code. So Claude Code is, in my opinion, at the moment the king of the hill as far as these tools are concerned. I know a lot of people who love Cursor. We have some team members that use Windsurf.

(40:57):
Most of our team still uses Visual Studio Code, because they actually have within Visual Studio Code something that's equivalent to Cursor and Windsurf, which is called Copilot Agent Mode, where Copilot, which is kind of the first AI that most people had experience with in the developer world, that tool now has gotten a lot more powerful, kind of quietly in the background. It doesn't have as much buzz around it as

(41:19):
Cursor. My point of view is that, absent an acquisition from a major technology player like OpenAI, products like Cursor and Windsurf are going to have a really hard time, because these products are going to, I mean, it's really a commoditization unless you have enough scale. And the cool thing about Claude Code is, because it's produced by

(41:40):
a model developer, they're closer to the metal, meaning that they are able to take advantage of the model in ways that I don't think these other vendors are able to do. Again, I'm not an expert in Cursor or Windsurf. I've used one of the two of those tools. I just think that Claude Code is worth checking out, and a lot of people still haven't even tried it out. It's maybe a little bit intimidating looking initially,

(42:00):
because it's a command line thing, but if you're a developer, check it out. I think you'll find it to be quite interesting. One thing you can do with Claude Code that's super easy after you install it is just go in there, open up Claude Code inside one of your projects and say, explain this project to me, and you'll see an interesting point of feedback. And ask Claude Code to perhaps solve an issue that you have in your backlog, and

(42:22):
give it the issue, a URL to your GitHub repository if you use GitHub, or a description of the issue, and Claude Code will go through every area of your code base necessary to solve a bug report or feature request and give you back a complete, you know, set of changes that you can review easily. So that's for the developers.
For the non-developers, and I'm talking to you, the CEOs, who

(42:47):
declare yourselves decidedly non-technical and delegate all of your technology stuff to your IT folks, I'm talking to you guys, as well as everyone else who's in the non-technical camp. Get yourself access to Claude and install the desktop version of Claude, which works both on Mac and on PC. What I'm about to describe does work in the web version as well, it's just not nearly as good.

(43:08):
So, in the desktop version of Claude, go in there and have a conversation about some kind of an app that you want to build, or maybe something you've built on your website. Like, a common thing associations do is they put their prospective members through a whole lot of pain to sign up. So it's very typical that you'll have an e-commerce process where it's like, oh, I have to go through this step and

(43:29):
this step and this step. Or maybe you have a membership application that you paid tens or even hundreds of thousands of dollars to some developer to build, and that thing has been sitting out there for years and it's, you know, really crusty old software. It maybe wasn't even great initially, and it's really not great now. And the problem is that it's not the highest thing on your

(43:49):
priority list, because it kind of works, but your members and prospective members do not like this thing. And so what I'd recommend you do is take a few screenshots of the current application on your website, paste them into Claude, the desktop app. And then what you do is you say, hey, Claude, this is my current membership application. It kind of sucks. I want your help improving it.

(44:10):
Give me a prototype of what a new membership application will look like. Hit enter. What you'll see very quickly is Claude will start, and, by the way, put Claude into extended thinking mode, which is this thing where Claude thinks more deeply, as we were talking about earlier, and Claude will very quickly come back to you with a fully interactive prototype.

(44:30):
It won't be functional yet, but it'll show you, hey, this is what I imagine you could do for your membership app. And you can go back and forth and say, well, you know, right now it's like 18 steps in order to become a member. I want to really reduce that as much as possible. Can you take a look at the flow and give me some suggestions on how to improve it? So Claude puts on his UX/UI hat and is deeply empathetic with your business.

(44:50):
And, by the way, if you tell Claude, give Claude a URL to your website and say read it, it won't be functional in terms of connected to your data, but it'll be a visually functional prototype of a new membership application, or, if that's not your problem, maybe people wanting to sign up to be speakers, or people that are

(45:16):
searching for volunteer opportunities. Prototype these member-facing things that are giant pain points for you and ask Claude Code to help you build a better way. Now, you might say, well, that's really cool, this is really exciting, but now do I still have to hire a developer to take that prototype and make it real? That's where it gets really, really exciting, because you can say, hey, Claude, thanks so much for this beautiful

(45:39):
prototype. It's exactly what I want. I love this thing. Now I really want to go implement it. And, let's say, you're a little bit technical. At this point you could say, hey, Claude, talk to your buddy, Claude Code. Claude Code has an MCP server, so your Claude desktop app can talk to Claude Code and say, hey, I want you to create this as a React project or an Angular project or a Vue project, these are just different software development frameworks, and go

(46:01):
create it here and build it until it runs. And, by the way, here, use a local database for now. Just prototype the database and we'll have someone else later on securely connect it to the data. And you could do that, and now you actually have, like, a functioning, true app, which then you can actually use Claude Code directly to rewire to connect to the real data source by API or whatever.

(46:22):
So you can go through this crazy fast iteration and build stuff, and you can also say, hey, Claude, I want three different versions of a member application, right, which might take a human like three weeks to do, you know, a week each or something, and cost a lot of money. You're never going to do that. Well, you can do that using Claude, but also go to Google Gemini or OpenAI at the same time and

(46:42):
ask for the same thing and come up with the best answer for you. So these AI tools can be used to solve business problems that are at this intersection of technology that's normally your Achilles heel as an association. You say all the time, you know, we don't have technical strength, we don't have a thousand developers, we're not Amazon, we're not Netflix, but the field's leveling and you now

(47:03):
have the ability to do this if you take the time just to go and experiment with this stuff. So for the non-techies, go check out Claude. I point to this one not because it's necessarily better at coding than the others, I mean, it's one of the best, but it's just the easiest one to use. It's just so simple to use Claude's desktop app to see what they call an interactive artifact, and it's just pretty damn cool too.

Speaker 2 (47:24):
Well, that's incredibly exciting, and I feel it could also be a bit overwhelming if we have some association leaders here who have traditionally, like you said, outsourced dev work. They just don't have that strength within. This almost makes me think that you would need to reevaluate that whole part of your business, because is it worth maybe hiring a few technical-ish people and having like a really

(47:47):
small team that can just iterate on projects like this all the time? What do you think, Amith? Or is there still value in having some of that work outsourced?

Speaker 1 (47:56):
If you have a dev team, you need to make it their mission in life to become experts at using these tools. If they tell you they don't like AI, or they use AI but they're still slow in developing responses for you or building things, they're not using AI the way I'm talking about using AI. There are ways to help your dev team come up to speed on this stuff if they're willing. I do know some folks that are deeply technical but are,

(48:18):
frankly, giant skeptics of AI and say, no, no, no, it's not going to be perfect, it's not going to be as good as my code. And the reality is, that may have been true three years ago and it's not true now. So if you have a dev team, you have to push them and pull them. If you need to, demand from them that they become truly native AI developers. It's critical, because your velocity is going to go up by probably a factor of 5X, maybe even 50X.

(48:40):
That's what we've seen in our dev teams. It's just ridiculous how much productivity you have. So if you have a dev team, get them up to speed on this stuff. By the way, we're thinking of building an entire series of software development courses specifically tuned for the association world in the Sidecar Learning Hub. If you think that's an interesting idea, we'd love to hear from you. So please drop us a line. We have a feedback loop through the pod that Mallory can

(49:02):
explain in a bit, because I forgot how that works. But we also have, obviously, email, or you can do hello at sidecar.ai, and you can also hit us on LinkedIn. We'd love to hear from you if you think that's a useful idea. We'll probably put some public-facing videos on YouTube as well that give little snippets of things like what I just described with using Claude Desktop. Now, if you don't have a dev team and you outsource this stuff, which is very, very typical, just make sure the team

(49:24):
that you outsource to is up to speed, because if you're spending tens or hundreds of thousands of dollars to get these little micro features out of people, and it takes weeks, months or forever to get them, and they aren't that great when you get them, which is unfortunately the common experience with custom software developers or people who do configuration or customizations of packaged software, go demand more. You know now that you can get more, and you should expect more.

(49:46):
And finally, if you don't fall into either of those buckets, where you don't have your own dev team and you don't traditionally outsource software development work and you just kind of don't do anything with software, you just kind of use out-of-the-box software and you kind of make your members pay the price in terms of high friction: just go try this and see what happens, and you can see that it is actually quite possible for you to do almost all of it

(50:08):
yourself. Maybe you go hire a freelancer on Upwork, or you hire a team that knows associations really well. We obviously know people who do that kind of stuff, but the point is there are ways to do this that are dramatically different than what you think is the way to do this. So it's not only lower cost, but faster, higher quality, more reliable. It's just an exciting time. We can do things for our members, for our audiences, that

(50:30):
there's no way we would have been able to do. Even the largest of associations with the largest budgets and the largest technical teams cannot do what now the very smallest association with the smallest budget can do, literally in days.

Speaker 2 (50:43):
And even all my non-techies out there, the fact that we can go to Claude and create an interactive prototype of an idea that we have, I mean, that just wasn't even possible, quite literally, a year ago. So to think that any idea you have as it pertains to your work, even if you're non-technical, you can potentially build or start to build, that's very exciting.

(51:04):
And what Amith just mentioned is in the show notes. If you're listening audio only, right at the top of the show notes there's a send-a-text button, and you can text Amith and me at the Sidecar Sync podcast and let us know if you think that software development course would be interesting.

(51:24):
I think, personally, Amith, maybe we could do a route for, like, technical folks and non-technical. I think that would be a really cool way to get everybody involved in that conversation. And, yes, let us know if that's something you'd be interested in.

Speaker 1 (51:34):
We've covered a lot of ground here, and if sleep time compute didn't put you to sleep, maybe the technical conversation did, but I hope neither did. It's a great time to be alive, it's a great time to be an association leader, it's a great time to explore and experiment.

Speaker 2 (51:50):
Absolutely. Everybody, we will see you all next week, after some good sleep, hopefully.

Speaker 1 (52:06):
Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from

(52:26):
webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.