Future of Life Institute Podcast

Future of Life Institute

Available Episodes

Showing 5 of 174 episodes
  • Roman Yampolskiy on Objections to AI Safety
    Roman Yampolskiy joins the podcast to discuss various objections to AI safety, impossibility results for AI, and how much risk civilization should accept from emerging technologies. You can read more about Roman's work at http://cecs.louisville.edu/ry/
    Timestamps:
    00:00 Objections to AI safety
    15:06 Will robots make AI risks salient?
    27:51 Was early AI safety research useful?
    37:28 Impossibility results for AI
    47:25 How much risk should we accept?
    1:01:21 Exponential or S-curve?
    1:12:27 Will AI accidents increase?
    1:23:56 Will we know who was right about AI?
    1:33:33 Difference between AI output and AI model
    Social Media Links:
    ➡️ WEBSITE: https://futureoflife.org
    ➡️ TWITTER: https://twitter.com/FLIxrisk
    ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
    ➡️ META: https://www.facebook.com/futureoflifeinstitute
    ➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
    5/26/2023
    1:42:13
  • Nathan Labenz on How AI Will Transform the Economy
    Nathan Labenz joins the podcast to discuss the economic effects of AI on growth, productivity, and employment. We also talk about whether AI might have catastrophic effects on the world. You can read more about Nathan's work at https://www.cognitiverevolution.ai
    Timestamps:
    00:00 Economic transformation from AI
    11:15 Productivity increases from technology
    17:44 AI effects on employment
    28:43 Life without jobs
    38:42 Losing contact with reality
    42:31 Catastrophic risks from AI
    53:52 Scaling AI training runs
    1:02:39 Stable opinions on AI?
    Social Media Links:
    ➡️ WEBSITE: https://futureoflife.org
    ➡️ TWITTER: https://twitter.com/FLIxrisk
    ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
    ➡️ META: https://www.facebook.com/futureoflifeinstitute
    ➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
    5/11/2023
    1:06:54
  • Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI
    Nathan Labenz joins the podcast to discuss the cognitive revolution, his experience red teaming GPT-4, and the potential near-term dangers of AI. You can read more about Nathan's work at https://www.cognitiverevolution.ai
    Timestamps:
    00:00 The cognitive revolution
    07:47 Red teaming GPT-4
    24:00 Coming to believe in transformative AI
    30:14 Is AI depth or breadth most impressive?
    42:52 Potential near-term dangers from AI
    Social Media Links:
    ➡️ WEBSITE: https://futureoflife.org
    ➡️ TWITTER: https://twitter.com/FLIxrisk
    ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
    ➡️ META: https://www.facebook.com/futureoflifeinstitute
    ➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
    5/4/2023
    59:43
  • Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology
    Maryanna Saenko joins the podcast to discuss how venture capital works, how to fund innovation, and what the fields of investing and philanthropy could learn from each other. You can read more about Maryanna's work at https://future.ventures
    Timestamps:
    00:00 How does venture capital work?
    09:01 Failure and success for startups
    13:22 Is overconfidence necessary?
    19:20 Repeat entrepreneurs
    24:38 Long-term investing
    30:36 Feedback loops from investments
    35:05 Timing investments
    38:35 The hardware-software dichotomy
    42:19 Innovation prizes
    45:43 VC lessons for philanthropy
    51:03 Creating new markets
    54:01 Investing versus philanthropy
    56:14 Technology preying on human frailty
    1:00:55 Are good ideas getting harder to find?
    1:06:17 Artificial intelligence
    1:12:41 Funding ethics research
    1:14:25 Is philosophy useful?
    Social Media Links:
    ➡️ WEBSITE: https://futureoflife.org
    ➡️ TWITTER: https://twitter.com/FLIxrisk
    ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
    ➡️ META: https://www.facebook.com/futureoflifeinstitute
    ➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
    4/27/2023
    1:17:46
  • Connor Leahy on the State of AI and Alignment Research
    Connor Leahy joins the podcast to discuss the state of AI. Which labs are in front? Which alignment solutions might work? How will the public react to more capable AI? You can read more about Connor's work at https://conjecture.dev
    Timestamps:
    00:00 Landscape of AI research labs
    10:13 Is AGI a useful term?
    13:31 AI predictions
    17:56 Reinforcement learning from human feedback
    29:53 Mechanistic interpretability
    33:37 Yudkowsky and Christiano
    41:39 Cognitive Emulations
    43:11 Public reactions to AI
    Social Media Links:
    ➡️ WEBSITE: https://futureoflife.org
    ➡️ TWITTER: https://twitter.com/FLIxrisk
    ➡️ INSTAGRAM: https://www.instagram.com/futureoflifeinstitute/
    ➡️ META: https://www.facebook.com/futureoflifeinstitute
    ➡️ LINKEDIN: https://www.linkedin.com/company/future-of-life-institute/
    4/20/2023
    52:07

About Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.