
Doom Debates

Hosted by Liron Shapira
It's time to talk about the end of the world! lironshapira.substack.com

Available Episodes

Showing 5 of 65 episodes
  • How an AI Doomer Sees The World — Liron on The Human Podcast
    In this special cross-posted episode of Doom Debates, originally posted on The Human Podcast, we cover a wide range of topics including the definition of “doom”, P(Doom), various existential risks like pandemics and nuclear threats, and the comparison of rogue AI risks versus AI misuse risks.

    Chapters:
    00:00 Introduction
    01:47 Defining Doom and AI Risks
    05:53 P(Doom)
    10:04 Doom Debates’ Mission
    16:17 Personal Reflections and Life Choices
    24:57 The Importance of Debate
    27:07 Personal Reflections on AI Doom
    30:46 Comparing AI Doom to Other Existential Risks
    33:42 Strategies to Mitigate AI Risks
    39:31 The Global AI Race and Game Theory
    43:06 Philosophical Reflections on a Good Life
    45:21 Final Thoughts

    Show Notes:
    The Human Podcast with Joe Murray: https://www.youtube.com/@thehumanpodcastofficial
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Don’t miss the other great AI doom show, For Humanity: https://youtube.com/@ForHumanityAIRisk

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 45:50
  • Gödel's Theorem Says Intelligence ≠ Power? AI Doom Debate with Alexander Campbell
    Alexander Campbell claims that having superhuman intelligence doesn’t necessarily translate into having vast power, and that Gödel's Incompleteness Theorem ensures AI can’t get too powerful. I strongly disagree.

    Alex has a Master of Philosophy in Economics from the University of Oxford and an MBA from the Stanford Graduate School of Business, has worked as a quant trader at Lehman Brothers and Bridgewater Associates, and is the founder of Rose AI, a cloud data platform that leverages generative AI to help visualize data.

    This debate was recorded in August 2023.

    Chapters:
    00:00 Intro and Alex’s Background
    05:29 Alex's Views on AI and Technology
    06:45 Alex’s Non-Doomer Position
    11:20 Goal-to-Action Mapping
    15:20 Outcome Pump Thought Experiment
    21:07 Liron’s Doom Argument
    29:10 The Dangers of Goal-to-Action Mappers
    34:39 The China Argument and Existential Risks
    45:18 Ideological Turing Test
    48:38 Final Thoughts

    Show Notes:
    Alexander Campbell’s Twitter: https://x.com/abcampbell
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 50:08
  • Alignment is EASY and Roko's Basilisk is GOOD?!
    Roko Mijic has been an active member of the LessWrong and AI safety community since 2008. He’s best known for “Roko’s Basilisk”, a thought experiment he posted on LessWrong that made Eliezer Yudkowsky freak out, and years later became the topic that helped Elon Musk get interested in Grimes.

    His view on AI doom is that:
    * AI alignment is an easy problem
    * But the chaos and fighting from building superintelligence poses a high near-term existential risk
    * But humanity’s course without AI has an even higher near-term existential risk

    While my own view is very different, I’m interested to learn more about Roko’s views and nail down our cruxes of disagreement.

    Chapters:
    00:00 Introducing Roko
    03:33 Realizing that AI is the only thing that matters
    06:51 Cyc: AI with “common sense”
    15:15 Is alignment easy?
    21:19 What’s Your P(Doom)™
    25:14 Why civilization is doomed anyway
    37:07 Roko’s AI nightmare scenario
    47:00 AI risk mitigation
    52:07 Market Incentives and AI Safety
    57:13 Are RL and GANs good enough for superalignment?
    01:00:54 If humans learned to be honest, why can’t AIs?
    01:10:29 Is our test environment sufficiently similar to production?
    01:23:56 AGI Timelines
    01:26:35 Headroom above human intelligence
    01:42:22 Roko’s Basilisk
    01:54:01 Post-Debate Monologue

    Show Notes:
    Roko’s Twitter: https://x.com/RokoMijic
    Explanation of Roko’s Basilisk on LessWrong: https://www.lesswrong.com/w/rokos-basilisk
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 1:59:10
  • Roger Penrose is WRONG about Gödel's Theorem and AI Consciousness
    Sir Roger Penrose is a mathematician, mathematical physicist, philosopher of science, and Nobel Laureate in Physics. His famous body of work includes Penrose diagrams, twistor theory, Penrose tilings, and the incredibly bold claim that intelligence and consciousness are uncomputable physical phenomena related to quantum wave function collapse.

    Dr. Penrose is such a genius that it's just interesting to unpack his worldview, even if it’s totally implausible. How can someone like him be so wrong? What exactly is it that he's wrong about? It's interesting to try to see the world through his eyes, before recoiling from how nonsensical it looks.

    Chapters:
    00:00 Episode Highlights
    01:29 Introduction to Roger Penrose
    11:56 Uncomputability
    16:52 Penrose on Gödel's Incompleteness Theorem
    19:57 Liron Explains Gödel's Incompleteness Theorem
    27:05 Why Penrose Gets Gödel Wrong
    40:53 Scott Aaronson's Gödel CAPTCHA
    46:28 Penrose's Critique of the Turing Test
    48:01 Searle's Chinese Room Argument
    52:07 Penrose's Views on AI and Consciousness
    57:47 AI's Computational Power vs. Human Intelligence
    01:21:08 Penrose's Perspective on AI Risk
    01:22:20 Consciousness = Quantum Wave Function Collapse?
    01:26:25 Final Thoughts

    Show Notes:
    Source video — Feb 22, 2025 Interview with Roger Penrose on “This Is World”: https://www.youtube.com/watch?v=biUfMZ2dts8
    Scott Aaronson’s “Gödel CAPTCHA”: https://www.scottaaronson.com/writings/captcha.html
    My recent Scott Aaronson episode: https://www.youtube.com/watch?v=xsGqWeqKjEg
    My explanation of what’s wrong with arguing “by definition”: https://www.youtube.com/watch?v=ueam4fq8k8I
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 1:31:38
  • We Found AI's Preferences — What David Shapiro MISSED in this bombshell Center for AI Safety paper
    The Center for AI Safety just dropped a fascinating paper — they discovered that today’s AIs like GPT-4 and Claude have preferences! As in, coherent utility functions. We knew this was inevitable, but we didn’t know it was already happening.

    This episode has two parts:
    In Part I (48 minutes), I react to David Shapiro’s coverage of the paper and push back on many of his points.
    In Part II (60 minutes), I explain the paper myself.

    Chapters:
    00:00 Episode Introduction
    05:25 PART I: REACTING TO DAVID SHAPIRO
    10:06 Critique of David Shapiro's Analysis
    19:19 Reproducing the Experiment
    35:50 David's Definition of Coherence
    37:14 Does AI have “Temporal Urgency”?
    40:32 Universal Values and AI Alignment
    49:13 PART II: EXPLAINING THE PAPER
    51:37 How The Experiment Works
    01:11:33 Instrumental Values and Coherence in AI
    01:13:04 Exchange Rates and AI Biases
    01:17:10 Temporal Discounting in AI Models
    01:19:55 Power Seeking, Fitness Maximization, and Corrigibility
    01:20:20 Utility Control and Bias Mitigation
    01:21:17 Implicit Association Test
    01:28:01 Emailing with the Paper’s Authors
    01:43:23 My Takeaway

    Show Notes:
    David’s source video: https://www.youtube.com/watch?v=XGu6ejtRz-0
    The research paper: http://emergent-values.ai
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    Duration: 1:48:25


