
The Road to Accountable AI

Kevin Werbach

62 episodes

  • Walter Haydock, StackAware: In Search Of AI Governance Certification

    04/09/2026 | 32 mins.
    Walter Haydock draws a direct line from military risk management to the enterprise AI challenge. His argues that organizations need to stop doing "math with colors," and move toward quantitative assessment that assigns dollar values to potential AI failures. Much of the conversation in this episode focuses on ISO 42001, the global standard for AI management systems, which Haydock has championed and which his own firm has gone through. He draws a three-part taxonomy of AI governance frameworks: legislation you either comply with or don't, voluntary self-attestable frameworks like the NIST AI RMF, and externally certifiable standards like ISO 42001 that bring independent verification.

    Haydock outlines a forward-looking vision in which certification, insurance, and legal safe harbors reinforce one another. Machine-readable audit data will eventually allow insurers to make informed underwriting decisions about AI risk, reducing uncertainty for both enterprises and their customers. As he acknowledges, however, we are still far from that environment: AI audits today remain roughly 90% manual.
    Walter Haydock is the founder of StackAware, which helps AI-powered companies manage security, compliance, and privacy risk. Before entering the private sector, he served as a reconnaissance and intelligence officer in the U.S. Marine Corps, as a professional staff member for the Homeland Security Committee of the U.S. House of Representatives, and as an analyst at the National Counterterrorism Center. He is a graduate of the United States Naval Academy, Georgetown University's School of Foreign Service, and Harvard Business School.
    Transcript


    Deploy Securely (Haydock's Substack)
  • Richa Kaul, Complyance: Asking the Right Questions

    04/02/2026 | 33 mins.
    Richa Kaul breaks down the AI risk landscape for enterprises, and argues that the key to managing all of them is resisting the urge to sensationalize. Kaul offers a candid assessment of where enterprise AI governance committees are falling short, noting that many  lack the technical fluency to ask vendors the right questions, such as where customer data goes, whether it trains other clients' models, and what specific steps reduce hallucination. She suggests that market-driven security standards like SOC-2 and ISO 27001 often matter more in practice than government regulation, creating a "beautiful ecosystem" where risk management runs ahead of the law. Looking forward, she addresses the growing challenge of agentic AI systems that make decisions autonomously, offering a deceptively simple prescription: Map every action an agent can take, know where your highest risk sits, identify the critical decision points, and demand human sign-off at each one/
    Richa Kaul is the founder and CEO of Complyance, an AI-native enterprise governance, risk, and compliance (GRC) platform. Before founding Complyance, she was Chief Strategy Officer at ContractPodAi, a legal technology company, and previously served as Managing Director at the Virginia Economic Development Partnership and as a management consultant at McKinsey.
    Transcript


    Complyance Raises $20M to Help Companies Manage Risk and Compliance (TechCrunch, February 11, 2026)
  • Michael Horowitz, UPenn: Governing AI That's Designed to Kill

    03/26/2026 | 33 mins.
    How AI is, could be, and shouldn't be used in military and other national security contexts is a topic of growing importance. Recent conflicts on the battlefield, and between the U.S. military and a major AI lab, are forcing conversations about legal, ethical, and appropriate business limitations for increasingly powerful AI tools. Michael Horowitz, a Political Science professor and Director of Perry World House at the University of Pennsylvania, is one of the world's leading experts on military AI and autonomous weapons. In this episode, drawing on his two stints in the U.S. Department of Defense, Horowitz walks through the major buckets of military AI use. He explains why militaries are, in some ways, more incentivized than any other institution to get AI governance right, but also why genuine tensions among speed, effectiveness, and meaningful human control can make responsible military AI difficult in practice. We cover Anthropic's recent dispute with the Pentagon as a case study in the fragile and increasingly consequential relationship between Silicon Valley and the defense establishment.
    Michael C. Horowitz is the Richard Perry Professor of Political Science and Director of Perry World House at the University of Pennsylvania, and a Senior Fellow for Technology and Innovation at the Council on Foreign Relations. From 2022 to 2024, he served as U.S. Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities, where he was the principal author of the U.S. Political Declaration on Responsible Military Use of AI and Autonomy. He is the author of The Diffusion of Military Power: Causes and Consequences for International Politics and co-author of Why Leaders Fight.
    Transcript


    Battles of Precise Mass: Technology Is Remaking War — and America Must Adapt (Foreign Affairs, 2024)
    The Ethics & Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons (Daedalus, 2016)
    Rules of Engagement (Penn Gazette, 2025)
  • Tanvi Singh, Ekta AI: The Case for Sovereign AI

    03/19/2026 | 33 mins.
    Tanvi Singh draws on over two decades of building and governing AI systems inside global banks to make a provocative case: you cannot be accountable for decisions you do not control. Enterprises are consuming intelligence through models they don't own, can't explain, and didn't train. Singh reframes sovereignty beyond data center locations and infrastructure, to control across the entire stack, so that an organization's AI reflects its own values, laws, and culture. Whlile frontier LLMs will continue to dominate the consumer and retail market, she argues that domain-specific models will be important for enterprise and regulated use cases, offering better accuracy at dramatically lower cost. The conversation also touches on Singh's engagement with the Vatican's Pontifical Academy of Sciences around AI ethics, which has worked on benchmarks that reflect institutional values rather than defaulting to the cultural norms baked into large internet-trained models.
    Tanvi Singh is the Co-Founder and CEO of Ekta Inc., a sovereign AI platform company building domain-specific foundation models for governments and regulated industries. She previously served as Group Head of AI, Data & Analytics at UBS and held senior technology leadership roles at Credit Suisse, GE, and Monsanto. She is the founder and managing partner of Nirmata-ai Ventures, a Zurich-based deep-tech venture fund, and serves as a board member of the Global Blockchain Business Council and GirlsCanCode. 

    Transcript


    Sovereign AI: Why States and Institutions Have to Take Back Their Digital Intelligence (HSToday, co-authored with Thomas Cellucci)
    Ekta AI
  • Ray Eitel-Porter, Co-Author of Governing the Machine: The Confidence to Use AI

    03/12/2026 | 32 mins.
    Ray Eitel-Porter, former Global Lead for Responsible AI at Accenture and co-author of the new book, Governing the Machine, discusses how enterprises can move from abstract AI principles to practical governance. He emphasizes that organizations can only realize AI's benefits if responsibility is embedded into everyday business processes rather than treated as a standalone compliance exercise. Drawing on his experience leading global data and AI programs, Eitel-Porter explains how the release of ChatGPT transformed enterprise attitudes toward AI, accelerating adoption while exposing risks such as hallucinations, reliability failures, and reputational harm. Effective governance has evolved from static principles to operational controls, including workflow checkpoints, red teaming, and technical guardrails, particularly for generative AI systems with inherently probabilistic outputs. On risk, he stresses that not all AI use cases require the same level of scrutiny; governance should scale with potential impact and harm, focusing on what an AI system is intended to do so that non-technical teams can surface high-risk use cases without incentives to downplay risk.
    On regulation, Eitel-Porter notes that despite uncertainty around the EU AI Act, many multinational companies are treating it as a global baseline, similar to GDPR, while contrasting this with more deregulatory signals from the United States and questioning the global influence of the UK's middle-ground approach. He also shares insights from Governing the Machine, co-authored with Miriam Bogle and Paul Donkhan, emphasizing that AI governance is not a barrier to innovation but the foundation that allows organizations to deploy AI at scale with confidence and control.
    Ray Eitel-Porter is a Senior Advisor at Accenture and the former Global Lead for Responsible AI, where he designed and scaled AI governance programs for multinational organizations. He previously led Accenture's data and AI practice in the UK and has over a decade of experience advising companies on responsible AI, data governance, and emerging technology risk. Eitel-Porter is the co-author of Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential (Bloomsbury, 2025) and has led multi-year programs across public and private sectors, including global banks, retailers, and health brands.
    Transcript

    Governing the Machine (Bloomsbury 2025)
    Lessons from the Frontline – Designing and Implementing AI Governance (AI Journal)


About The Road to Accountable AI

Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world, today.