
The MAD Podcast with Matt Turck

Matt Turck
Latest episodes

114 episodes

  • Anthropic’s Felix Rieseberg: Claude Cowork, Mythos, and the SaaS Extinction

    04/10/2026 | 58 mins.
    Felix Rieseberg leads engineering for Claude Cowork at Anthropic, one of the most important new agentic AI products in the market today. In this episode of The MAD Podcast, Matt Turck sits down with Felix to discuss Anthropic’s newly announced Claude Mythos Preview, why Felix sees it as a genuine step-function change, and what it means when frontier AI starts showing outsized cybersecurity capabilities.

    The conversation then goes deep on Claude Cowork: how it emerged from Claude Code, what the famous “10-day” story really means, why Anthropic believes AI needs access to the local computer, and how Cowork actually works under the hood. Felix explains why skills are just text files, why memory is often just text files too, and how Anthropic thinks about building trust in AI agents.

    They also explore some of the biggest questions in AI product design and the future of software: why UX may matter as much as the model itself, why execution is becoming dramatically cheaper, what that means for product management and startups, and why Felix believes taste, alignment, and understanding humans may matter more than ever.

    (00:00) Intro
    (01:53) Claude Mythos Preview and the “step-function change”
    (06:16) Why Anthropic is treating Mythos differently
    (11:19) The real story behind Claude Cowork’s “10-day” build
    (12:42) Why Anthropic realized Claude Code needed a non-technical version
    (15:44) What Claude Cowork actually is
    (17:03) Under the hood: virtual machines, tools, skills
    (18:36) Where Cowork’s memory actually lives
    (19:26) How Cowork connects to files, apps, and the internet
    (20:45) Why Felix thinks the local computer is under-appreciated
    (24:49) Trust: how do you get users comfortable with AI agents?
    (28:45) What UX actually means for AI agents
    (31:27) Why Claude Cowork's roadmap is only one month long
    (34:12) Building 100 prototypes
    (35:10) If execution is free, what becomes the bottleneck?
    (37:25) Does it come down to taste?
    (40:12) The hardest part of building Claude Cowork
    (41:43) Advice for founders building AI agents
    (44:21) SaaSpocalypse: what’s left for software startups?
    (49:30) Where AI agents are going next
    (51:20) Regulated industries and enterprise adoption
    (54:15) Hot takes: what's underrated, overrated, and what Felix would build today
  • AI is Already Building AI | Google DeepMind’s Mostafa Dehghani

    04/02/2026 | 1h 4 mins.
    Are we truly on the verge of AI automating its own research and development? In this deep-dive episode of the MAD Podcast, Matt Turck sits down with Mostafa Dehghani, a pioneering AI researcher at Google DeepMind whose work on Universal Transformers and Vision Transformers (ViT) helped lay the groundwork for today's frontier models.

    Moving past the hype, Mostafa breaks down the actual mechanics of "thinking in loops" and Recursive Self-Improvement (RSI). He explores the critical bottlenecks holding back true AGI—from evaluation limits and formal verification to the brutal math of long-horizon reliability.

    Mostafa and Matt also discuss the shift from pre-training to post-training, how Gemini's Nano Banana 2 processes pixels and text simultaneously, and why the "frozen" nature of today's models means Continual Learning is the next massive frontier for enterprise AI and data pipelines.

    (00:00) Intro
    (01:17) What “loops” in AI actually mean
    (05:04) Self-improvement as the next chapter of machine learning
    (07:32) Are Karpathy’s autoresearch agents an early form of AI self-improvement?
    (08:56) AI building AI: how close are we?
    (10:02) The biggest bottlenecks: evals, automation, and long horizons
    (12:36) Can formal verification unlock recursive self-improvement?
    (14:06) What is model collapse?
    (15:33) Generalization vs specialization in AI
    (18:04) What is a specialized model today?
    (20:57) Could top AI researchers themselves be automated?
    (24:02) If AI builds AI, does data matter less than compute?
    (26:22) Post-training vs pre-training: where will progress come from?
    (28:14) Why pre-training is not dead
    (29:45) What is continual learning?
    (31:53) How real is continual learning today?
    (33:43) Mostafa Dehghani’s background and path into AI
    (36:13) The story behind Universal Transformers
    (39:56) How Vision Transformers changed AI
    (43:47) Gemini, multimodality, and Nano Banana
    (47:46) Why multimodality helps build a world model
    (52:44) Why image generation is getting faster and more efficient
    (54:44) Hot takes
    (54:53) What the AI field is getting wrong
    (56:17) Why continual learning is underrated
    (57:26) Does RAG go away over time?
    (58:21) What people are too confident about in AI
    (59:56) If he were starting from scratch today
  • Benedict Evans: OpenAI’s Moat Problem & the Future of Software

    03/19/2026 | 1h 1 mins.
    Is OpenAI trapped without a defensible moat? World-renowned independent tech analyst Benedict Evans returns to the MAD Podcast and argues that foundation models have zero network effects, making them closer to commodity infrastructure than the next iOS. We unpack OpenAI’s "mile wide, inch deep" usage problem, why simply having a "better model" does not solve the core UX challenge, and whether the hyperscalers' massive CapEx spending is a sustainable strategy or a fast track to financial gravity.

    We also explore the reality behind the recent "SaaSpocalypse", the structural shift from traditional enterprise systems to "improvised" and "ephemeral" software, and where the actual white space lies for founders and investors navigating the artificial intelligence hype cycle.

    (00:00) Intro
    (01:06) OpenAI's Focus Shift
    (03:12) ChatGPT usage: a "mile wide, inch deep"
    (09:03) Why better models do not solve the real problem
    (13:58) Why AI product teams are strategy takers, not strategy setters
    (15:38) Do agents help create defensibility?
    (20:06) OpenClaw and the "Desktop Linux" moment for AI
    (25:52) Why "everyone will build their own software" is completely wrong
    (28:09) Improvised software vs. institutionalized software
    (29:23) The Jevons Paradox: Why there will be more software, not less
    (36:15) Are we heading toward value destruction before value creation?
    (38:03) Circular revenue, leverage, and AI bubble dynamics
    (38:53) Big Tech's Trillion-Dollar CapEx Crisis & Financial Gravity
    (45:23) Why AI job exposure charts can be misleading
    (52:15) How Fortune 500 Execs are actually deploying AI today
    (56:45) The White Space: What this means for founders and investors
  • Everything Gets Rebuilt: The New AI Agent Stack | Harrison Chase, LangChain

    03/12/2026 | 46 mins.
    Harrison Chase, co-founder and CEO of LangChain, joins the MAD Podcast to explain why everything in AI is getting rebuilt. As agents evolve from simple prompt-based systems into software that can plan, use tools, write code, manage files, and remember things over time, the real frontier is shifting from the model itself to the stack around the model. In this conversation, we go deep on harnesses, subagents, filesystems, sandboxes, observability, memory, and the new infrastructure required to make AI agents actually work in the real world.

    (00:00) Intro - meet Harrison Chase
    (01:32) What changed in agents over the last year
    (03:57) Why coding agents are ahead
    (06:26) Do models commoditize the framework layer?
    (08:27) Harnesses, in plain English
    (10:11) Why system prompts matter so much
    (13:11) The upside — and downside — of subagents
    (15:31) Why a useful agent needs a filesystem
    (18:13) The core primitives of modern agents
    (19:12) Skills: the new primitive
    (20:19) What context compaction actually means
    (23:02) How memory works in agents
    (25:16) One mega-agent or many specialized agents?
    (27:46) Has MCP won?
    (29:38) Why agents need sandboxes
    (32:35) How sandboxes help with security
    (33:32) How Harrison Chase started LangChain
    (37:24) LangChain vs LangGraph vs Deep Agents
    (40:17) Why observability matters more for agents
    (41:48) Evals, no-code, and continuous improvement
    (44:41) What LangChain is building next
    (45:29) Where the real moat in AI lives
  • AI That Can Prove It’s Right: Verification as the Missing Layer in AI — Carina Hong

    02/26/2026 | 1h 3 mins.
    What if AI didn’t just sound right — but could prove it? In this episode of the MAD Podcast, Matt Turck sits down with Carina Hong, a 24-year-old former math olympiad competitor and Rhodes Scholar, and the founder/CEO of Axiom Math, to unpack how AxiomProver earned a perfect 12/12 on the Putnam 2025 and why formal verification (via Lean) may be the missing layer for reliable reasoning. Carina argues we’re entering a “math renaissance” where verified reasoning systems can tackle problems that currently take researchers months — and potentially push beyond math into verified code, hardware, and high-stakes software. They go inside the “generation + verification” loop, what it means to build AI that can be trusted, and what this approach could unlock on the road to superintelligent reasoning.

    (00:00) Intro
    (01:25) Why the World Needs an AI Mathematician
    (02:57) Scoring 12/12 on the World's Hardest Math Test (Putnam)
    (04:05) The First AI to Solve Open Research Conjectures
    (06:59) Does AI Solve Math in "Alien" Ways? (The Move 37 Effect)
    (08:59) "Lean": The Programming Language of Proofs Explained
    (10:51) How Axiom's Approach Differs from DeepMind & OpenAI
    (16:06) Formal vs. Informal Reasoning (And Auto-Formalization)
    (17:37) The AI "Reward Hacking" Problem
    (20:18) Building an AI That is 100% Correct, 100% of the Time
    (23:23) Beyond Math: Verified Code & Hardware Verification
    (25:12) The Brutal Reality of Competitive Math Olympiads
    (29:30) From Neuroscience to Stanford Law to Dropout Founder
    (33:57) How Axiom Actually Works Under the Hood (The Architecture)
    (37:51) The Secret to Generating Perfect Synthetic Data
    (40:14) Tokens, Proof Length, and Inference Cost
    (42:58) The "Everest" of Mathematics: Scaling Reasoning Trees
    (46:32) Can an AI Win a Fields Medal?
    (47:25) "Math Renaissance": What Changes if This Works
    (55:47) How Mathematicians React to AI (And Why Proof Certificates Matter)
    (57:30) Becoming a CEO: Dropping Ego and Building Culture
    (1:00:42) Recruiting World-Class Talent & Building the Axiom "Tribe"


About The MAD Podcast with Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the machine learning, AI, and data landscape, hosted by Matt Turck, a leading AI and data investor and Partner at FirstMark Capital.


© 2007-2026 radio.de GmbH