muckrAIkers

Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, bringing much-needed contextualization, constructive critique, and even a smidge of occasional good-natured teasing to the conversation as we try to find the meaning under all of this muck.

Episodes

April 13, 2026 58 mins

In this episode, Jacob and Igor break down the DoD vs. Anthropic standoff, tracing how Claude's use in military operations led to Anthropic being designated a supply chain security risk. Perhaps more importantly, why did Anthropic choose to take a stand now, and what can that tell us about the corporation's behavior moving forward? The investigation is used as a case study in how to read the real motivations behind big inst...


This week, Jacob and Igor dissect the "mythical AI bear," the strawman version of AI criticism that gets thrown around in tech discourse. Working through a viral blog post that typifies the genre, they examine how legitimate concerns about code quality, labor displacement, intellectual property, and the erosion of craft get flattened into caricature. Plus: Sam Altman writes ten paragraphs about how unbothered he is by an a...


We're talking about developments in AI while those in power have unapologetically revealed their true fascist intentions; are we spending our time in the right way? Igor and I discuss the importance of shining a light on the techno-authoritarians who have played a very significant role in the current state of the world.

While we discuss the murders of Nicole Good and Alex Pretti during this episode, it's important that we also ...

January 12, 2026 38 mins

Igor shares a significant shift in his perspective on AI coding tools after experiencing the latest Claude Code release. While he's been the stronger AI skeptic between the two of us, recent developments have shown him genuine utility in specific coding tasks, but this doesn't validate the hype or change the fundamental critiques.

We discuss what "rote tasks" are and why they're now automatable with enough investment, the d...

December 15, 2025 45 mins

OpenAI is pivoting to porn while public sentiment turns decisively against AI. Pew Research shows Americans are now more concerned than excited by a 2:1 margin. We trace how we got here: broken promises of cancer cures replaced by addiction mechanics and expensive APIs. Meanwhile, data centers are hiding a near-recession, straining power grids, and literally breaking your household appliances. Drawing parallels to the 1970s AI ...

    October 13, 2025 49 mins

    Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms today. We discuss what actual safety would lo...

    August 21, 2025 84 mins

    We dig into how the concept of AI "safety" has been co-opted and weaponized by tech companies. Starting with examples like Mecha-Hitler Grok, we explore how real safety engineering differs from AI "alignment," the myth of the alignment tax, and why this semantic confusion matters for actual safety.

    • (00:00) - Intro
    • (00:21) - Mecha-Hitler Grok
    • (10:07) - "Safety"
    • (19:40) - Under-specification
    • (53:56) - This time isn't ...
    July 14, 2025 71 mins

    In this episode, we redefine AI's "reasoning" as mere rambling, exposing the "illusion of thinking" and "Potemkin understanding" in current models. We contrast the classical definition of reasoning (requiring logic and consistency) with Big Tech's new version, which is a generic statement about information processing. We explain how Large Rambling Models generate extensive, often irrelevant, rambling traces that appear to ...

    June 23, 2025 53 mins

    In this episode, we break down Trump's "One Big Beautiful Bill" and its dystopian AI provisions: automated fraud detection systems, centralized citizen databases, military AI integration, and a 10-year moratorium blocking all state AI regulation. We explore the historical parallels with authoritarian data consolidation and why this represents a fundamental shift away from limited government principles once held by US conse...

  • May 26, 2025 66 mins

    Jacob and Igor tackle the wild claims about AI's economic impact by examining three main clusters of arguments: automating expensive tasks like programming, removing "cost centers" like call centers and corporate art, and claims of explosive growth. They dig into the actual data, debunk the hype, and explain why most productivity claims don't hold up in practice. Plus: MIT denounces a paper with fabricated data, and Grok r...

  • April 9, 2025 91 mins

    DeepSeek has been out for over 2 months now, and things have begun to settle down. We take this opportunity to contextualize the developments that have occurred in its wake, both within the AI industry and the world economy. As systems get more "agentic" and users are willing to spend increasing amounts of time waiting for their outputs, the value of supposed "reasoning" models continues to be peddled by AI system develope...

  • February 10, 2025 15 mins

    DeepSeek R1 has taken the world by storm, causing a stock market crash and prompting further calls for export controls within the US. Since this story is still very much in development, with follow-up investigations and calls for governance being released almost daily, we thought it best to hold off for a little while longer to be able to tell the whole story. Nonetheless, it's a big story, so we provide a brief overview of...


    Chris Canal, co-founder of EquiStamp, joins muckrAIkers as our first ever podcast guest! In this ~3.5 hour interview, we discuss intelligence vs. competencies, the importance of test-time compute, moving goalposts, the orthogonality thesis, and much more.

    A seasoned software developer, Chris started EquiStamp as a way to improve our current understanding of model failure modes and capabilities in late 2023. Now a key contra...

    December 30, 2024 86 mins

    What happens when you bring over 15,000 machine learning nerds to one city? If your guess didn't include racism, sabotage and scandal, belated epiphanies, a spicy SoLaR panel, and many fantastic research papers, you wouldn't have captured my experience. In this episode we discuss the drama and takeaways from NeurIPS 2024.

    Posters available at time of episode preparation can be found on the episode webpage.

    EPISODE RECORDED 2...


    The idea of model cards, introduced as a measure to increase transparency and understanding of LLMs, has been perverted into a marketing gimmick, exemplified by OpenAI's o1 system card. To demonstrate the adversarial stance we believe is necessary to draw meaning from these press-releases-in-disguise, we conduct a close read of the system card. Be warned, there's a lot of muck in this one.

    Note: All figures/tab...

    December 2, 2024 58 mins

    While on the campaign trail, Trump made claims about repealing Biden's Executive Order on AI, but what will actually be changed when he gets into office? We take this opportunity to examine policies being discussed or implemented by leading governments around the world.


    • (00:00) - Intro
    • (00:29) - Hot off the press
    • (02:59) - Repealing the AI executive order?
    • (11:16) - "Manhattan" for AI
    • (24:33) - EU
    • (30:47) - UK
    • ...
    November 19, 2024 67 mins

    Multiple news outlets, including The Information, Bloomberg, and Reuters [see sources], are reporting an "end of scaling" for the current AI paradigm. In this episode we look into these articles, as well as a wide variety of economic forecasting, empirical analysis, and technical papers, to understand the validity and impact of these reports. We also use this as an opportunity to contextualize the realized versus promised f...


    October 2024 saw a National Security Memorandum and US framework for using AI in national security contexts. We go through the content so you don't have to, pull out the important bits, and summarize our main takeaways.

    • (00:48) - The memorandum
    • (06:28) - What the press is saying
    • (10:39) - What's in the text
    • (13:48) - Potential harms
    • (17:32) - Miscellaneous notable stuff
    • (31:11) - What's the US government's take on A...
    October 30, 2024 60 mins

    Frontier developers continue their war on sane versioning schema to bring us Claude 3.5 Sonnet (New), along with "computer use" capabilities. We discuss not only the new model, but also why Anthropic may have released this model and tool combination now.


    • (00:00) - Intro
    • (00:22) - Hot off the press
    • (05:03) - Claude 3.5 Sonnet (New) Two 'o' 3000
    • (09:23) - Breaking down "computer use"
    • (13:16) - Our understanding
    • (...
    October 22, 2024 82 mins

    Brace yourselves, winter is coming for OpenAI - at least, that's what we think. In this episode we look at OpenAI's recent massive funding round and ask "why would anyone want to fund a company that is set to lose a net 5 billion USD in 2024?" We scrape through a whole lot of muck to find the meaningful signals in all this news, and there is a lot of it, so get ready!


    • (00:00) - Intro
    • (00:28) - Hot off the press
    • (02:...
