"The Emergent Mind: How Intelligence Arises in People and Machines" | Jay McClelland
The AI revolution of the past few years is built on brain-inspired neural network models originally developed to study our own minds. The question is: what should we make of the fact that our own rich mental lives are built on the same foundations as the seemingly soulless chatbots we now interact with on a daily basis?

Our guest this week is Stanford cognitive scientist Jay McClelland, who has been a leading figure in this field since the 1980s, when he developed some of the first of these artificial neural network models. Now McClelland has a new book, co-authored with San Francisco State University computational neuroscientist Gaurav Suri, called "The Emergent Mind: How Intelligence Arises in People and Machines." We spoke with McClelland about the entangled history of neuroscience and AI, and whether the theory of the emergent mind described in the book can help us better understand ourselves and our relationship with the technology we've created.

Learn More

New book sheds light on human and machine intelligence | Stanford Report
How Intelligence – Both Human and Artificial – Happens | KQED Forum
From Brain to Machine: The Unexpected Journey of Neural Networks | Stanford HAI
Wu Tsai Neuro's Center for Mind, Brain, Computation and Technology

McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88, 375-407. [PDF]
Rumelhart, D. E., McClelland, J. L., & the PDP Research Group. (1986). Parallel distributed processing: Explorations in the microstructure of cognition. Volumes I & II. Cambridge, MA: MIT Press.
McClelland, J. L., & Rogers, T. T. (2003). The parallel distributed processing approach to semantic cognition. Nature Reviews Neuroscience, 4, 310-322. [PDF]
McClelland, J. L., Hill, F., Rudolph, M., Baldridge, J., & Schuetze, H. (2020). Placing language in an integrated understanding system: Next steps toward human-level performance in neural language models. Proceedings of the National Academy of Sciences, 117(42), 25966-25974.

Send us a text!

Thanks for listening! If you're enjoying our show, please take a moment to give us a review on your podcast app of choice and share this episode with your friends. That's how we grow as a show and bring the stories of the frontiers of neuroscience to a wider audience. We want to hear from your neurons! Email us at [email protected]

Learn more about the Wu Tsai Neurosciences Institute at Stanford and follow us on Twitter, Facebook, and LinkedIn.