
Doom Debates
Liron Shapira
Urgent disagreements that must be resolved before the world ends, hosted by Liron Shapira. lironshapira.substack.com

Available Episodes

5 of 48
  • Liron Reacts to Subbarao Kambhampati on Machine Learning Street Talk
    Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk. Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason. The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.

    00:00 Introduction
    02:54 Essentially N-Gram Models?
    10:31 The Manhole Cover Question
    20:54 Reasoning vs. Approximate Retrieval
    47:03 Explaining Jokes
    53:21 Caesar Cipher Performance
    01:10:44 Creativity vs. Reasoning
    01:33:37 Reasoning By Analogy
    01:48:49 Synthetic Data
    01:53:54 The ARC Challenge
    02:11:47 Correctness vs. Style
    02:17:55 AIs Becoming More Robust
    02:20:11 Block Stacking Problems
    02:48:12 PlanBench and Future Predictions
    02:58:59 Final Thoughts

    Show Notes
    Rao’s interview on Machine Learning Street Talk: https://www.youtube.com/watch?v=y1WnHpedi2A
    Rao’s Twitter: https://x.com/rao2z
    PauseAI Website: https://pauseai.info
    PauseAI Discord: https://discord.gg/2XXWXvErfA

    Watch the Lethal Intelligence video and check out LethalIntelligence.ai! It’s an AWESOME new animated intro to AI risk.
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    2:59:34
  • This Yudkowskian Has A 99.999% P(Doom)
    In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys. Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in 99.999% P(Doom).

    00:00 Nethys Introduction
    04:47 The Vulnerable World Hypothesis
    10:01 What’s Your P(Doom)™
    14:04 Nethys’s Banger YouTube Comment
    26:53 Living with High P(Doom)
    31:06 Losing Access to Distant Stars
    36:51 Defining AGI
    39:09 The Convergence of AI Models
    47:32 The Role of “Unlicensed” Thinkers
    52:07 The PauseAI Movement
    58:20 Lethal Intelligence Video Clip

    Show Notes
    Eliezer Yudkowsky’s post on “Death with Dignity”: https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy
    PauseAI Website: https://pauseai.info
    PauseAI Discord: https://discord.gg/2XXWXvErfA
    --------  
    1:04:11
  • Cosmology, AI Doom, and the Future of Humanity with Fraser Cain
    Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(Doom)! That’s why he’s joining me on Doom Debates for a very special AI + space crossover episode.

    00:00 Fraser Cain’s Background and Interests
    05:03 What’s Your P(Doom)™
    07:05 Our Vulnerable World
    15:11 Don’t Look Up
    22:18 Cosmology and the Search for Alien Life
    31:33 Stars = Terrorists
    39:03 The Great Filter and the Fermi Paradox
    55:12 Grabby Aliens Hypothesis
    01:19:40 Life Around Red Dwarf Stars?
    01:22:23 Epistemology of Grabby Aliens
    01:29:04 Multiverses
    01:33:51 Quantum Many Worlds vs. Copenhagen Interpretation
    01:47:25 Simulation Hypothesis
    01:51:25 Final Thoughts

    Show Notes
    Fraser’s YouTube channel: https://www.youtube.com/@frasercain
    Universe Today (space and astronomy news): https://www.universetoday.com/
    Max Tegmark’s book that explains 4 levels of multiverses: https://www.amazon.com/Our-Mathematical-Universe-Ultimate-Reality/dp/0307744256
    Robin Hanson’s ideas:
    * Grabby Aliens: https://grabbyaliens.com
    * The Great Filter: https://en.wikipedia.org/wiki/Great_Filter
    * Life in a high-dimensional space: https://www.overcomingbias.com/p/life-in-1kdhtml
    --------  
    1:57:45
  • AI Doom Debate: Vaden Masrani & Ben Chugg vs. Liron Shapira
    Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for a Part II! This time we’re going straight to debating my favorite topic, AI doom.

    00:00 Introduction
    02:23 High-Level AI Doom Argument
    17:06 How Powerful Could Intelligence Be?
    22:34 “Knowledge Creation”
    48:33 “Creativity”
    54:57 Stand-Up Comedy as a Test for AI
    01:12:53 Vaden & Ben’s Goalposts
    01:15:00 How to Change Liron’s Mind
    01:20:02 LLMs are Stochastic Parrots?
    01:34:06 Tools vs. Agents
    01:39:51 Instrumental Convergence and AI Goals
    01:45:51 Intelligence vs. Morality
    01:53:57 Mainline Futures
    02:16:50 Lethal Intelligence Video

    Show Notes
    Vaden & Ben’s Podcast: https://www.youtube.com/@incrementspod
    Recommended playlists from their podcast:
    * The Bayesian vs Popperian Epistemology Series
    * The Conjectures and Refutations Series
    Vaden’s Twitter: https://x.com/vadenmasrani
    Ben’s Twitter: https://x.com/BennyChugg
    --------  
    2:21:22
  • Andrew Critch vs. Liron Shapira: Will AI Extinction Be Fast Or Slow?
    Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human Compatible AI, and the co-founder of a new startup called Healthcare Agents. Dr. Critch’s P(Doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to still go extinct via the slower process of “industrial dehumanization”.

    00:00 Introduction
    01:43 Dr. Critch’s Perspective on LessWrong Sequences
    06:45 Bayesian Epistemology
    15:34 Dr. Critch’s Time at MIRI
    18:33 What’s Your P(Doom)™
    26:35 Doom Scenarios
    40:38 AI Timelines
    43:09 Defining “AGI”
    48:27 Superintelligence
    53:04 The Speed Limit of Intelligence
    01:12:03 The Obedience Problem in AI
    01:21:22 Artificial Superintelligence and Human Extinction
    01:24:36 Global AI Race and Geopolitics
    01:34:28 Future Scenarios and Human Relevance
    01:48:13 Extinction by Industrial Dehumanization
    01:58:50 Automated Factories and Human Control
    02:02:35 Global Coordination Challenges
    02:27:00 Healthcare Agents
    02:35:30 Final Thoughts

    Show Notes
    Dr. Critch’s LessWrong post explaining his P(Doom) and most likely doom scenarios: https://www.lesswrong.com/posts/Kobbt3nQgv3yn29pr/my-motivation-and-theory-of-change-for-working-in-ai
    Dr. Critch’s Website: https://acritch.com/
    Dr. Critch’s Twitter: https://twitter.com/AndrewCritchPhD
    --------  
    2:37:22

About Doom Debates

Podcast website: lironshapira.substack.com