Psychophysics and Statistical Physics of Learning in Higher Level Cognition
Understanding how our thoughts and actions arise from the interactions of billions of neurons is one of the great scientific challenges of our time. Remarkably, most of the objects, concepts, and plans that populate the inner world of our mind are learned. Unravelling the influence of learning on mental representations is a fundamental goal in psychology, because learning underpins a great diversity of behaviours; and understanding representation learning dynamics is a fundamental goal in theoretical machine learning, due to its significance for applications like computer vision and natural language processing. Indeed, the capabilities of deep learning systems have increasingly transcended perceptual tasks to include aspects of complex cognition, such as certain linguistic abilities. This program seeks to bring together surprisingly separate subcommunities in psychology, machine learning, and neuroscience that nevertheless share core concerns, with the goal of developing quantitative theories of learned cognition. Given recent advances in these areas, we believe a focused program intermixing them can make substantial progress in furthering our theories of the mind.
In contrast to many programs that have sought to link neuroscience to artificial neural networks and deep learning models, our goal is instead to focus on psychology: the systematic study of complex behaviour. To achieve this goal we have invited participants from four subcommunities with strongly overlapping objectives and methods. First, the rich tradition of connectionist modelling (with contributions from researchers with physics backgrounds like Geoffrey Hinton, Terrence Sejnowski, and Paul Smolensky) has developed accounts of complex cognition spanning phenomena in vision, memory, semantic cognition, linguistic abilities, and cognitive control. However, it has mainly relied on computer simulations of artificial neural network models. Second, deep learning theory has developed new theoretical tools for mathematically understanding aspects of learning in artificial neural networks, drawing on methods from statistical physics and providing tools to formalise connectionist insights into exactly solvable theories. Third, the tradition of psychophysics, stretching back to Helmholtz, has conducted controlled experiments investigating aspects of learning behaviour that constrain theoretical accounts. Recent efforts have scaled up these approaches to obtain datasets of thousands of human subjects learning perceptual and cognitive tasks over several weeks. Fourth, a paradigmatic example of learned complex cognition is cognitive development. We have invited key figures in developmental psychology who have collected large-scale longitudinal datasets of the raw experience of children over development. At present these four subcommunities are distinct, despite a shared commitment to understanding the role of learning in generating complex behaviour and a partial convergence on artificial neural networks as a modelling framework.
With recent progress in scaling artificial networks to complex cognitive phenomena, rapidly improving theoretical tools for their analysis, and large-scale, well-controlled behavioural experiments studying learning over long time scales, this is the right time to integrate these communities. The proposed program will ideally forge a new shared community and framework to drive the development of quantitative theories of learning in higher-level cognition and psychology. KITP programs provide the ideal venue and time span for this concerted effort.
The specific aims of the workshop will be to formalise theories of learning in higher-level cognition, drawing on statistical physics-based analyses of learning systems, and to place the resulting theories in contact with experimental psychophysics and developmental psychology probing higher-level cognition. In particular, the success of diverse deep learning models with a variety of architectural choices across a range of tasks suggests that there may be interesting regularities in learning behaviour that are insensitive to details of neural implementation. This workshop will provide a forum for exploring theories that are less closely tied to neuroscience but very tightly tied to empirical studies of behaviour, allowing the identification of shared constraints and differences between learning systems. Further, as artificial neural networks are themselves abstractions of certain neural processes, it may be unnecessary to look for close neural links before obtaining phenomenological models of aspects of behaviour. By remaining somewhat agnostic to the relation between deep learning systems and neural processes, connectionism has made progress in addressing aspects of higher-level cognition, and in time these models have often been shown to have plausible neural instantiations.
Our goal is to create a focused workshop consisting of participants who already see the value of combining these approaches and are seeking ways to do so.
More details to follow in 2024.
Co-Organizers
University College London
Hebrew University of Jerusalem
University College London