Algorithm Integrity Matters: for Financial Services leaders, to enhance fairness and accuracy in data processing

Insights for financial services leaders who want to enhance fairness and accuracy in their use of data, algorithms, and AI. Each episode explores challenges and solutions related to algorithmic integrity, including discussions on navigating independent audits. The goal of this podcast is to give leaders the knowledge they need to ensure their data practices benefit customers and other stakeholders, reducing the potential for harm and upholding industry standards.

Episodes

December 21, 2025 4 mins

Spoken (by a human) version of this article.

TL;DR (TL;DL?)

  • Technical stakeholders need detailed explanations.
  • Non-technical stakeholders need plain language.
  • Visuals, layering, literacy, and feedback are among the techniques we can use.

To subscribe to the weekly articles: https://riskinsights.com.au/blog#subscribe

About this podcast

A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI.

Spoken (by a human) version of this article.

TL;DR (TL;DL?)

  • Algorithmic systems create challenges in balancing explainability with privacy and confidentiality.
  • Key challenges include protecting sensitive information, preserving proprietary algorithms, and securing fraud detection systems.
  • Focusing on what audiences need, with a few specific considerations, can help address these.

Spoken (by a human) version of this article.

TL;DR (TL;DL?)

  • Explainability is necessary to build trust in AI systems.
  • There is no universally accepted definition of explainability.
  • So we focus on key considerations that don't require us to select any particular definition.

Spoken (by a human) version of this article.

TL;DR (TL;DL?)

  • Algorithmic processes are often complicated by intricate data flows and transformations.
  • Data flow diagrams and documentation can help simplify these processes.

Spoken (by a human) version of this article.

TL;DR (TL;DL?)

  • Complexity must be actively managed rather than passively accepted.
  • Data relevance directly impacts both accuracy and explainability.
  • Technical “visibility” techniques can be useful.

Spoken (by a human) version of this article.

TL;DR (TL;DL?)

  • Why Explainability Matters: It builds trust, is needed to meet compliance obligations, and can help identify errors faster.
  • Key Challenges: Complex algorithms, intricate workflows, privacy concerns, and making explanations understandable for all stakeholders.
  • What’s Next: Future articles will explore practical solutions to these challenges.


Spoken (by a human) version of this article.

TL;DR (TL;DL?)

  • Testing is a foundational step for algorithmic integrity.
  • Testing involves various stages, from developer self-checks to UAT. Where these happen will depend on whether the system is built in-house or bought.
  • Testing needs to cover several integrity aspects, including accuracy, fairness, security, privacy, and performance.
  • Continuous testing is needed for AI systems, differing f...

Spoken (by a human) version of this article.

One question that comes up often is “How do we obtain assurance about third party products or services?”

Depending on the nature of the relationship, and what you need assurance for, this can vary widely.

This article attempts to lay out the options, considerations, and key steps to take.

TL;DR (TL;DL?)

  • Third-party assurance for algorithm integrity varies based on the nature of the relation...

Navigating AI Audits with Dr. Shea Brown

Dr. Shea Brown is Founder and CEO of BABL AI.
BABL specializes in auditing and certifying AI systems, consulting on responsible AI practices, and offering online education.

Shea shares his journey from astrophysics to AI auditing, the core services provided by BABL AI including compliance audits, technical testing, and risk assessments, and the importance of governance in AI.

Spoken (by a human) version of this article.

  • AI literacy is growing in importance (e.g., EU AI Act, IAIS).
  • AI literacy needs vary across roles.
  • Even "AI professionals" need AI risk training.

Navigating AI Governance and Compliance

Patrick Sullivan is Vice President of Strategy and Innovation at A-LIGN and an expert in cybersecurity and AI compliance with over 25 years of experience.

Patrick shares his career journey, discusses his passion for educating executives and directors on effective governance, and explains the critical role of management systems like ISO 42001 in AI compliance.

We discuss the...

Mitigating AI Risks

Ryan Carrier is founder and executive director of ForHumanity, a non-profit focused on mitigating the risks associated with AI, autonomous, and algorithmic systems.

With 25 years of experience in financial services, Ryan discusses ForHumanity's mission to analyze and mitigate the downside risks of AI to benefit society.

The conversation includes insights on the foundation of Fo...

Spoken (by a human) version of this article.

  • Public AI audit reports aren't universally required; they mainly apply to high-risk applications and/or specific jurisdictions.
  • The push for transparency primarily concerns independent audits, not internal reviews.
  • Prepare by implementing ethical AI practices and conducting regular reviews.

Note: High-risk AI systems in banking and insurance are subject to specific requirements...

Spoken (by a human) version of this article.

  • Knowing the basics of substantive testing vs. controls testing can help you determine if the review will meet your needs.
  • Substantive testing directly identifies errors or unfairness, while controls testing evaluates governance effectiveness. The results/conclusions are different.
  • Understanding these differences can also help you anticipate the extent of your team's involve...

Spoken (by a human) version of this article.

Ongoing education helps everyone understand their role in responsibly developing and using algorithmic systems.

Regulators and standard-setting bodies emphasise the need for AI literacy across all organisational levels.

Spoken (by a human) version of this article.

The terminology ("audit" vs "review") is important, but clarity about deliverables is more important when commissioning algorithm integrity assessments.

Audits are formal, with an opinion or conclusion that can often be shared externally. Reviews come in various forms and typically produce recommendations for internal use.

Regardless of the terminology you use, when commissioning...

Spoken (by a human) version of this article.

  • Outcome-focused accuracy reviews directly verify results, offering more robust assurance than process-focused methods.
  • This approach can catch translation errors, unintended consequences, and edge cases that process reviews might miss.
  • While more time-consuming and complex, outcome-focused reviews provide deeper insights into system reliability and accuracy.

This article e...

Spoken (by a human) version of this article.

Documentation makes it easier to consistently maintain algorithm integrity.

This is well known.

But there are lots of types of documents to prepare, and often the first hurdle is just thinking about where to start.

So this simple guide is meant to help you do exactly that: get going.

November 12, 2024 7 mins

Spoken (by a human) version of this article.

Banks and insurers are increasingly using external data; using it beyond its intended purpose can be risky (e.g., discriminatory).

Emerging regulations and regulatory guidance emphasise the need for active oversight by boards and senior management to ensure responsible use of external data.

Keeping the customer top of mind, asking the right questions, and focusing on the intended ...

Spoken (by a human) version of this article.

Banks and insurers sometimes lose sight of their customer-centric purpose when assessing AI/algorithm risks, focusing instead on regular business risks and regulatory concerns.

Regulators are noticing this disconnect.

This article aims to outline why the disconnect happens and how we can fix it.

Report mentioned in the article: ASIC, REP 798 Beware the gap: Governance arr...

© 2025 iHeartMedia, Inc.