The Gradient: Perspectives on AI

The Gradient
Interviews with various people who research, build, or use AI, including academics, engineers, artists, entrepreneurs, and more. thegradientpub.substack.com

Available Episodes

Showing 5 of 75
  • Talia Ringer: Formal Verification and Deep Learning
    In episode 74 of The Gradient Podcast, Daniel Bashir speaks to Professor Talia Ringer. Professor Ringer is an Assistant Professor with the Programming Languages, Formal Methods, and Software Engineering group at the University of Illinois at Urbana-Champaign. Their research leverages proof engineering to allow programmers to more easily build formally verified software systems.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected].
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Daniel’s long annoying intro
    * (02:15) Origin Story
    * (04:30) Why / when formal verification is important
    * (06:40) Concerns about ChatGPT/AutoGPT et al. failures, systems for accountability
    * (08:20) Difficulties in making formal verification accessible
    * (11:45) Tactics and interactive theorem provers, interface issues
    * (13:25) How Prof Ringer’s research first crossed paths with ML
    * (16:00) Concrete problems in proof automation
    * (16:15) How ML can help people verifying software systems
    * (20:05) Using LLMs for understanding / reasoning about code
    * (23:05) Going from tests / formal properties to code
    * (31:30) Is deep learning the right paradigm for dealing with relations for theorem proving?
    * (36:50) Architectural innovations, neuro-symbolic systems
    * (40:00) Hazy definitions in ML
    * (41:50) Baldur: Proof Generation & Repair with LLMs
    * (45:55) In-context learning’s effectiveness for LLM-based theorem proving
    * (47:12) LLMs without fine-tuning for proofs
    * (48:45) Something ~surprising~ about Baldur results (maybe clickbait or maybe not)
    * (49:32) Asking models to construct proofs with restrictions, translating proofs to formal proofs
    * (52:07) Methods of proofs and relative difficulties
    * (57:45) Verifying / providing formal guarantees on ML systems
    * (1:01:15) Verifying input-output behavior and basic considerations, nature of guarantees
    * (1:05:20) Certified/verified systems vs. certifying/verifying systems: getting LLMs to spit out proofs along with code
    * (1:07:15) Interpretability and how much model internals matter, RLHF, mechanistic interpretability
    * (1:13:50) Levels of verification for deploying ML systems, HCI problems
    * (1:17:30) People (Talia) actually use Bard
    * (1:20:00) Dual-use and “correct behavior”
    * (1:24:30) Good uses of jailbreaking
    * (1:26:30) Talia’s views on evil AI / AI safety concerns
    * (1:32:00) Issues with talking about “intelligence,” assumptions about what “general intelligence” means
    * (1:34:20) Difficulty in having grounded conversations about capabilities, transparency
    * (1:39:20) Great quotation to steal for your next thinkpiece + intelligence as socially defined
    * (1:42:45) Exciting research directions
    * (1:44:48) Outro
    Links:
    * Talia’s Twitter and homepage
    * Research
      * Concrete Problems in Proof Automation
      * Baldur: Whole-Proof Generation and Repair with LLMs
      * Research ideas
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    5/25/2023
    1:45:35
  • Brigham Hyde: AI for Clinical Decision-Making
    In episode 73 of The Gradient Podcast, Daniel Bashir speaks to Brigham Hyde. Brigham is Co-Founder and CEO of Atropos Health. Prior to Atropos, he served as President of Data and Analytics at Eversana, a life sciences commercialization service provider. He led the investment in Concert AI in the oncology real-world data space at Symphony AI. Brigham has also held research faculty positions at Tufts University and the MIT Media Lab.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected].
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:55) Brigham’s background
    * (06:00) Current challenges in healthcare
    * (12:33) Interpretability and delivering positive patient outcomes
    * (17:10) How Atropos surfaces relevant data for patient interventions, on personalized observational research studies
    * (22:10) Quality and quantity of data for patient interventions
    * (27:25) Challenges and opportunities for generative AI in healthcare
    * (35:17) Database augmentation for generative models
    * (36:25) Future work for Atropos
    * (39:15) Future directions for AI + healthcare
    * (40:56) Outro
    Links:
    * Atropos Health homepage
    * Brigham’s Twitter and LinkedIn
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    5/18/2023
    41:43
  • Scott Aaronson: Against AI Doomerism
    In episode 72 of The Gradient Podcast, Daniel Bashir speaks to Professor Scott Aaronson. Scott is the Schlumberger Centennial Chair of Computer Science at the University of Texas at Austin and director of its Quantum Information Center. His research interests focus on the capabilities and limits of quantum computers and computational complexity theory more broadly. He has recently been on leave to work at OpenAI, where he is researching theoretical foundations of AI safety.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected].
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:45) Scott’s background
    * (02:50) Starting grad school in AI, transitioning to quantum computing and the AI / quantum computing intersection
    * (05:30) Where quantum computers can give us exponential speedups, simulation overhead, Grover’s algorithm
    * (10:50) Overselling of quantum computing applied to AI, Scott’s analysis of quantum machine learning
    * (18:45) ML problems that involve quantum mechanics and Scott’s work
    * (21:50) Scott’s recent work at OpenAI
    * (22:30) Why Scott was skeptical of AI alignment work early on
    * (26:30) Unexpected improvements in modern AI and Scott’s belief update
    * (32:30) Preliminary Analysis of DALL-E 2 (Marcus & Davis)
    * (34:15) Watermarking GPT outputs
    * (41:00) Motivations for watermarking and language model detection
    * (45:00) Ways around watermarking
    * (46:40) Other aspects of Scott’s experience with OpenAI, theoretical problems
    * (49:10) Thoughts on definitions for humanistic concepts in AI
    * (58:45) Scott’s “reform AI alignment” stance and Eliezer Yudkowsky’s recent comments (+ Daniel pronounces Eliezer wrong), orthogonality thesis, cases for stopping scaling
    * (1:08:45) Outro
    Links:
    * Scott’s blog
    * AI-related work
      * Quantum Machine Learning Algorithms: Read the Fine Print
      * A very preliminary analysis of DALL-E 2 (w/ Marcus and Davis)
      * New AI classifier for indicating AI-written text and Watermarking GPT Outputs
    * Writing
      * Should GPT exist?
      * AI Safety Lecture
      * Why I’m not terrified of AI
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    5/11/2023
    1:09:32
  • Ted Underwood: Machine Learning and the Literary Imagination
    In episode 71 of The Gradient Podcast, Daniel Bashir speaks to Ted Underwood. Ted is a professor in the School of Information Sciences with an appointment in the Department of English at the University of Illinois at Urbana-Champaign. Trained in English literary history, he turned his research focus to applying machine learning to large digital collections. His work explores literary patterns that become visible across long timelines when we consider many works at once; often, his work involves correcting and enriching digital collections to make them more amenable to interesting literary research.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected].
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (01:42) Ted’s background / origin story
    * (04:35) Context in interpreting statistics, “you need a model,” the need for data about human responses to literature and how that manifested in Ted’s work
    * (07:25) The recognition that we can model literary prestige/genre because of ML
    * (08:30) Distant reading and the import of statistics over large digital libraries
    * (12:00) Literary prestige
    * (12:45) How predictable is fiction? Scales of predictability in texts
    * (13:55) Degrees of autocorrelation in biography and fiction and the structure of narrative, how LMs might offer more sophisticated analysis
    * (15:15) Braided suspense / suspense at different scales of a story
    * (17:05) The Literary Uses of High-Dimensional Space: how “big data” came to impact the humanities, skepticism from humanists and responses, what you can do with word count
    * (20:50) Why we could use more time to digest statistical ML: how acceleration in AI advances might impact pedagogy
    * (22:30) The value in explicit models
    * (23:30) Poetic “revolutions” and literary prestige
    * (25:53) Distant vs. close reading in poetry: follow-up work for “The Longue Durée”
    * (28:20) Sophistication of NLP and approaching the human experience
    * (29:20) What about poetry renders it prestigious?
    * (32:20) Individualism/liberalism and evolution of poetic taste
    * (33:20) Why there is resistance to quantitative approaches to literature
    * (34:00) Fiction in other languages
    * (37:33) The Life Cycles of Genres
    * (38:00) The concept of “genre”
    * (41:00) Inflationary/deflationary views on natural kinds and genre
    * (44:20) Genre as a social and not a linguistic phenomenon
    * (46:10) Will causal models impact the humanities?
    * (48:30) (Ir)reducibility of cultural influences on authors
    * (50:00) Machine Learning and Human Perspective
    * (50:20) Fluent and perspectival categories: Miriam Posner on “the radical, unrealized potential of digital humanities”
    * (52:52) How ML’s vices can become virtues for humanists
    * (56:05) Can We Map Culture? and The Historical Significance of Textual Distances
    * (56:50) Are cultures and other social phenomena related to one another in a way we can “map”?
    * (59:00) Is cultural distance Euclidean?
    * (59:45) The KL divergence’s use for humanists
    * (1:03:32) We don’t already understand the broad outlines of literary history
    * (1:06:55) Science Fiction Hasn’t Prepared Us to Imagine Machine Learning
    * (1:08:45) The latent space of language and what intelligence could mean
    * (1:09:30) LLMs as models of culture
    * (1:10:00) What it is to be a human in “the age of AI” and Ezra Klein’s framing
    * (1:12:45) Mapping the Latent Spaces of Culture
    * (1:13:10) Ted on Stochastic Parrots
    * (1:15:55) The risk of AI enabling hermetically sealed cultures
    * (1:17:55) “Postcards from an unmapped latent space,” more on AI systems’ limitations as virtues
    * (1:20:40) Obligatory GPT-4 section
    * (1:21:00) Using GPT-4 to estimate passage of time in fiction
    * (1:23:39) Is deep learning more interpretable than statistical NLP?
    * (1:25:17) The “self-reports” of language models: should we trust them?
    * (1:26:50) University dependence on tech giants, open-source models
    * (1:31:55) Reclaiming Ground for the Humanities
    * (1:32:25) What scientists, alone, can contribute to the humanities
    * (1:34:45) On the future of the humanities
    * (1:35:55) How computing can enable humanists as humanists
    * (1:37:05) Human self-understanding as a collaborative project
    * (1:39:30) Is anything ineffable? On what AI systems can “grasp”
    * (1:43:12) Outro
    Links:
    * Ted’s blog and Twitter
    * Research
      * The literary uses of high-dimensional space
      * The Longue Durée of literary prestige
      * The Historical Significance of Textual Distances
      * Machine Learning and Human Perspective
      * The life cycles of genres
      * Can We Map Culture?
      * Cohort Succession Explains Most Change in Literary Culture
    * Other Writing
      * Reclaiming Ground for the Humanities
      * We don’t already understand the broad outlines of literary history
      * Science fiction hasn’t prepared us to imagine machine learning
      * How predictable is fiction?
      * Mapping the latent spaces of culture
      * Using GPT-4 to measure the passage of time in fiction
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    5/4/2023
    1:43:59
  • Irene Solaiman: AI Policy and Social Impact
    In episode 70 of The Gradient Podcast, Daniel Bashir speaks to Irene Solaiman. Irene is an expert in AI safety and policy and the Policy Director at Hugging Face, where she conducts social impact research and develops public policy. In her former role at OpenAI, she initiated and led bias and social impact research in addition to leading public policy. She built AI policy at Zillow Group and advised policymakers on responsible autonomous decision-making and privacy as a fellow at Harvard’s Berkman Klein Center.
    Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at [email protected].
    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter
    Outline:
    * (00:00) Intro
    * (02:00) Intro to Irene and her work
    * (03:45) What tech people need to learn about policy, and vice versa
    * (06:35) Societal impact: words and reality, Irene’s experience
    * (08:30) OpenAI work on GPT-2 and release strategies (yes, this was recorded on Pi Day)
    * (11:00) Open-source proponents and release
    * (14:00) What does a multidisciplinary approach to working on AI look like?
    * (16:30) Thinking about end users and enabling contributors with different sets of expertise
    * (18:00) “Preparing for AGI” and current approaches to release
    * (21:00) Who constitutes a researcher? What constitutes safety and who gets resourced? Limitations in red-teaming potentially dangerous systems
    * (22:35) PALMS and Values-Targeted Datasets
    * (25:52) PALMS and RLHF
    * (27:00) Homogenization in foundation models, cultural contexts
    * (29:45) Anthropic’s moral self-correction paper and Irene’s concerns about marketing “de-biasing” and oversimplification
    * (31:50) Data work, human systemic problems → AI bias
    * (33:55) Why do language models get more toxic as they get larger? (if you have ideas, let us know!)
    * (35:45) The gradient of generative AI release, Irene’s experience with the open-source world, tradeoffs along the release gradient
    * (38:40) More on Irene’s orientation towards release
    * (39:40) Pragmatics of keeping models closed, dealing with open-source by force
    * (42:22) Norm setting for release and use, normalization of documentation on social impacts
    * (46:30) Race dynamics :(
    * (49:45) Resource allocation and advances in ethics/policy, conversations on integrity and disinformation
    * (53:10) Organizational goals, balancing technical research with policy work
    * (58:10) Thoughts on governments’ AI policies, impact of structural assumptions
    * (1:04:00) Approaches to AI-generated sexual content, need for more voices represented in conversations about AI
    * (1:08:25) Irene’s suggestions for AI practitioners / technologists
    * (1:11:24) Outro
    Links:
    * Irene’s homepage and Twitter
    * Papers
      * Release Strategies and the Social Impacts of Language Models
      * Hugh Zhang’s open letter in The Gradient from 2019
      * Process for Adapting Large Models to Society (PALMS) with Values-Targeted Datasets
      * The Gradient of Generative AI Release: Methods and Considerations
    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    4/27/2023
    1:12:11
