Stephen Grossberg

Wang Professor of Cognitive and Neural Systems
Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering
Founding Chairman, Department of Cognitive and Neural Systems (http://cns.bu.edu)
Founder and Director, Center for Adaptive Systems (http://cns.bu.edu/about/cas.html)
Founding Director, Center of Excellence for Learning in Education, Science, and Technology (CELEST)
Founding President, International Neural Network Society (http://www.inns.org/)
Founding Editor-In-Chief, Neural Networks (http://www.journals.elsevier.com/neural-networks/)
Founder and General Chairman, International Conference on Cognitive and Neural Systems (ICCNS), 1997-2013 (http://cns.bu.edu/cns-meeting/conference.html)

PhD, Mathematics, Rockefeller University
Departments of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering
Boston University, Boston, Massachusetts, USA
Title: Explainable and Reliable AI and Autonomous Adaptive Intelligence: Deep Learning, Adaptive Resonance, and Models of Perception, Emotion, and Action

Abstract: Biological neural network models of how brains make minds provide a blueprint for autonomous adaptive intelligence. This lecture summarizes why the dynamics and emergent properties of such models for perception, cognition, emotion, and action are explainable, and thus can be implemented with confidence in large-scale applications. Key to their explainability is how they combine fast activations, or short-term memory (STM) traces, with learned weights, or long-term memory (LTM) traces. Visual and auditory perceptual models have explainable conscious STM representations of visual surfaces and auditory streams in surface-shroud resonances and stream-shroud resonances, respectively. In contrast, the learned predictions of Deep Learning are not explainable and thus cannot be used with confidence. Deep Learning can also experience catastrophic forgetting, which makes it unreliable. Adaptive Resonance Theory, or ART, overcomes both of these computational problems. ART is a self-organizing production system that learns incrementally, using arbitrary combinations of unsupervised and supervised learning and only locally computable quantities, to rapidly classify large non-stationary databases without catastrophic forgetting. ART classifications and predictions are explainable in terms of the attended critical feature patterns that they learn in STM. The LTM adaptive weights of the fuzzy ARTMAP algorithm induce fuzzy IF-THEN rules that explain which feature combinations predict successful outcomes. ART has been successfully used in many large-scale applications, including remote sensing, medical database prediction, and social media data clustering. Also explainable are the MOTIVATOR model of reinforcement learning and cognitive-emotional interactions, and the VITE, DIRECT, DIVA, and SOVEREIGN models of reaching, speech production, spatial navigation, and autonomous adaptive intelligence, respectively.
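To make the abstract's explainability claim concrete, here is a minimal, illustrative sketch of the unsupervised fuzzy ART module that underlies the fuzzy ARTMAP algorithm mentioned above (after Carpenter, Grossberg, and Rosen, 1991). The parameters alpha (choice), beta (learning rate), and rho (vigilance) follow the published algorithm; the class and method names are illustrative, not from any released codebase.

    # A minimal sketch of unsupervised fuzzy ART; names are illustrative.
    import numpy as np

    class FuzzyART:
        def __init__(self, alpha=0.001, beta=1.0, rho=0.75):
            self.alpha = alpha   # choice parameter (small, > 0)
            self.beta = beta     # learning rate (1.0 = fast learning)
            self.rho = rho       # vigilance: higher -> finer categories
            self.w = []          # LTM weights, one vector per category

        def _complement_code(self, a):
            # Complement coding represents both feature presence and
            # absence, so |I| is constant and category proliferation
            # is avoided.
            a = np.clip(np.asarray(a, dtype=float), 0.0, 1.0)
            return np.concatenate([a, 1.0 - a])

        def train(self, a):
            I = self._complement_code(a)
            # Category choice: rank committed nodes by the fuzzy
            # choice function T_j = |I ^ w_j| / (alpha + |w_j|),
            # where ^ is the componentwise minimum.
            T = [np.minimum(I, w).sum() / (self.alpha + w.sum())
                 for w in self.w]
            for j in np.argsort(T)[::-1]:
                match = np.minimum(I, self.w[j]).sum() / I.sum()
                if match >= self.rho:
                    # Resonance: learn the attended critical feature
                    # pattern I ^ w_j, which is what makes the learned
                    # category explainable.
                    self.w[j] = (self.beta * np.minimum(I, self.w[j])
                                 + (1.0 - self.beta) * self.w[j])
                    return j
                # Mismatch reset: search the next most active node.
            # No committed node satisfies vigilance: recruit a new one.
            self.w.append(I.copy())
            return len(self.w) - 1

With fast learning (beta = 1), each weight vector equals the fuzzy intersection of all inputs its category has coded; in complement-coded form it defines a hyper-rectangle of feature ranges, which reads out directly as a fuzzy IF-THEN rule of the kind the abstract describes.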

Biography: Stephen Grossberg is Wang Professor of Cognitive and Neural Systems; Professor Emeritus of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering; and Director of the Center for Adaptive Systems at Boston University. He is a principal founder of, and a current research leader in, computational neuroscience, theoretical psychology and cognitive science, and neuromorphic technology and AI. In 1957-1958, he introduced the paradigm of using systems of nonlinear differential equations to develop models that link brain mechanisms to mental functions, including widely used equations for short-term memory (STM), or neuronal activation; medium-term memory (MTM), or activity-dependent habituation; and long-term memory (LTM), or neuronal learning. His work focuses on how individuals, algorithms, and machines adapt autonomously in real time to unexpected environmental challenges. The neural network models discovered in this way together provide a blueprint for designing autonomous adaptive intelligent agents. Grossberg founded key infrastructure of the field of neural networks, including the International Neural Network Society (INNS) and the journal Neural Networks, and has served on the editorial boards of 30 journals. He was General Chairman of the first IEEE International Conference on Neural Networks in 1987 and, as the first INNS President, played a key role in organizing the first INNS Annual Meeting in 1988. These two meetings subsequently fused to become the International Joint Conference on Neural Networks (IJCNN). His lecture series at MIT Lincoln Laboratory led to the national DARPA Study of Neural Networks. He is a fellow of AERA, APA, APS, IEEE, INNS, MDRS, and SEP. He has published 17 books or journal special issues and over 550 research articles, and holds 7 patents. His most recent awards include the 2015 Norman Anderson Lifetime Achievement Award of SEP, the 2017 IEEE Frank Rosenblatt Award for computational neuroscience, and the 2019 INNS Donald O. Hebb Award for biological learning.
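For reference, representative forms of the three laws mentioned above, written schematically in the style of Grossberg's publications (exact symbols vary across papers, and constants are specific to each equation), are:

    % STM: shunting (membrane) equation for the activity x_i of cell i,
    % driven by excitatory input I_i and inhibitory input J_i, which
    % keeps x_i bounded within the interval [-C, B]
    \frac{dx_i}{dt} = -A x_i + (B - x_i) I_i - (C + x_i) J_i

    % MTM: habituative transmitter gate y_i, which accumulates toward E
    % at rate D and is inactivated by the gated signal f(x_i)
    \frac{dy_i}{dt} = D (E - y_i) - F f(x_i)\, y_i

    % LTM: gated steepest-descent learning of the adaptive weight z_{ij},
    % which tracks x_j only while the sampling signal f(x_i) is active
    \frac{dz_{ij}}{dt} = \epsilon\, f(x_i) \left( x_j - z_{ij} \right)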