2023 School on Analytical Connectionism

August 28 to September 7, 2023

A 2-week summer course hosted at University College London on analytical tools for probing neural networks and higher-level cognition.

Overview

Analytical Connectionism is a 2-week summer course on analytical tools, including methods from statistical physics and probability theory, for probing neural networks and higher-level cognition. The course brings together the neuroscience, psychology, and machine-learning communities, and introduces attendees to analytical methods for neural-network analysis and to connectionist theories of higher-level cognition and psychology.

Connectionism, a key theoretical approach in psychology, uses neural-network models to simulate a wide range of phenomena, including perception, memory, decision-making, language, and cognitive control. However, most connectionist models remain, to a certain extent, black boxes, and we lack a mathematical understanding of their behaviors. Recent progress in theoretical neuroscience and machine learning has provided novel analytical tools that have advanced our mathematical understanding of deep neural networks, and have the potential to help make these “black boxes” more transparent.

This course will introduce:

  • mathematical methods for neural-network analysis, providing a solid overview of the analytical tools available to understand neural-network models (a worked example follows this list);
  • key connectionist models with links to experimental observations, which provide targets for analytical results.
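
To give a flavor of the first item, consider one classic result of the kind the course covers (an illustrative example chosen for this page, not an excerpt from the syllabus): for a two-layer linear network trained by gradient flow on whitened inputs, Saxe, McClelland & Ganguli (2014) showed that each singular mode of the input-output correlation matrix is learned along a closed-form sigmoidal trajectory. Writing $s$ for the mode's singular value, $a_0 \ll s$ for its small initial strength, and $\tau$ for the learning time constant, the mode strength evolves as

$$a(t) = \frac{s\, e^{2 s t/\tau}}{e^{2 s t/\tau} - 1 + s/a_0},$$

rising from $a_0$ at $t = 0$ to its asymptote $s$ on a timescale of order $\tau/s$, so stronger modes are learned first. Exact results of this type are what turn a network's learning dynamics from a black box into something mathematically transparent.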

During the course, you will:

  • attend lectures given by leading researchers on theoretical methods and applications, key connectionist models, and experimental observations;
  • participate in tutorials, Q&A sessions, and panel discussions;
  • present to and engage with lecturers, organizers, and other participants during a poster session;
  • work in a group with other participants on a novel research project, mentored by the course organizers and lecturers.

The course will run full days, Monday to Friday, and end with a 1.5-day workshop, during which you will hear about the current state of the art and the limits of our understanding.

Important dates

Applications open: April 4, 2023
Application deadline: May 15, 2023
Outcome communicated: July 7, 2023

Application details

Applications to participate in the 2023 School on Analytical Connectionism are now closed.

Target audience

This course is appropriate for graduate students, postdoctoral fellows, and early-career faculty in a number of fields, including psychology, neuroscience, physics, computer science, and mathematics. Attendees are expected to have a strong background in one of these disciplines and to have made some effort to familiarize themselves with a complementary discipline.

The course is limited to 40 attendees, who will be chosen to balance the representation of different fields. All other things being equal, priority will be given to applicants from groups underrepresented in STEM fields, using positive action under the UK Equality Act 2010 where appropriate.

Course fees

There are no course fees, but attendees are expected to cover their own travel, accommodation and subsistence expenses.

Financial assistance may be available for successful applicants who would otherwise find it difficult to take up a place. If funding becomes available, successful applicants who need assistance will be asked to complete a financial-aid request form. The amount of financial aid available will depend on the course funding from grants and sponsors.

Lecturers

Lectures and panels are given by Akrami, Chung, Eckstein, Krzakala, Lambon-Ralph, McClelland, Musslick, Rogers, Saxe, Sompolinsky, and Summerfield, with tutorials led by Satchel; see the schedule below for each speaker's sessions.

Course Content

This school offers an in-depth exploration of both theoretical and practical aspects of cognition, neural networks, and machine learning. It focuses on analytical models for understanding neural networks, with connections to real-world machine-learning challenges. Key topics include connectionism and how neural networks simulate human cognition, memory, and learning processes, as well as the intersection of computational neuroscience and decision-making.

Topic lectures will cover the role of memory and learning in the brain, insights into large language models (LLMs) and their application to language tasks, and the cognitive mechanisms behind multitasking and decision-making. The course will also examine the neural mechanisms of decision-making and the cognitive neuroscience of language and semantic memory, offering a comprehensive overview of foundational theories and cutting-edge research in cognition and machine learning.
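
Since the flavor of these analytical methods is easiest to convey by example, here is a minimal numerical sketch (an illustration under our own assumptions, not course material; the setup and variable names are invented for this example) that trains a two-layer linear network by gradient descent and checks that the singular modes of the learned map converge to their analytically predicted fixed points, namely the singular values of the target map:

  import numpy as np

  rng = np.random.default_rng(0)

  # Target linear map with known singular values: these are the analytical
  # fixed points that each learned mode should approach during training.
  s_true = np.array([3.0, 2.0, 1.0])
  d = len(s_true)
  U, _ = np.linalg.qr(rng.normal(size=(d, d)))
  V, _ = np.linalg.qr(rng.normal(size=(d, d)))
  target = U @ np.diag(s_true) @ V.T

  # Two-layer linear network y = W2 @ W1 @ x with a small random start,
  # trained on the whitened-input loss 0.5 * ||target - W2 @ W1||_F^2.
  init_scale = 1e-2
  W1 = init_scale * rng.normal(size=(d, d))
  W2 = init_scale * rng.normal(size=(d, d))

  lr = 0.01
  for _ in range(5000):
      err = target - W2 @ W1          # residual of the current map
      W1 += lr * W2.T @ err           # gradient step for the first layer
      W2 += lr * err @ W1.T           # gradient step for the second layer

  # The singular values of the learned map should match s_true.
  print("target modes :", s_true)
  print("learned modes:", np.round(np.linalg.svd(W2 @ W1, compute_uv=False), 3))

Each mode approaches its target along the sigmoidal trajectory quoted in the Overview, with the strongest mode learned first; in nonlinear networks no such closed form exists, which is where the analytical machinery taught in the course comes in.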

Schedule

All times are BST.

Monday, August 28
  9:00 am   Registration
  9:20 am   Welcome
  9:30 am   Lecture (Sompolinsky): Neural networks as a model for neuroscience (I)
  11:00 am  Lecture (Sompolinsky): Neural networks as a model for neuroscience (II)
  1:30 pm   Lecture (McClelland): Neural network models of human cognition (I)
  2:45 pm   Lecture (McClelland): Neural network models of human cognition (II)
  4:00 pm   Tutorial (Satchel): McClelland's lectures

Tuesday, August 29
  9:30 am   Lecture (Sompolinsky): Neural networks as a model for neuroscience (III)
  11:00 am  Lecture (Sompolinsky): Neural networks as a model for neuroscience (IV)
  1:30 pm   Lecture (Krzakala): Exact methods for the study of neural networks (I)
  2:45 pm   Lecture (McClelland): Neural network models of human cognition (III)
  4:00 pm   Poster session with blitz presentations

Wednesday, August 30
  9:30 am   Lecture (Sompolinsky): Neural networks as a model for neuroscience (V)
  11:00 am  Synthesis (Saxe)
  1:30 pm   Lecture (Krzakala): Exact methods for the study of neural networks (II)
  2:45 pm   Lecture (Krzakala): Exact methods for the study of neural networks (III)
  4:00 pm   Tutorial (Satchel): McClelland's lectures

Thursday, August 31
  9:30 am   Lecture (Sompolinsky): Neural networks as a model for neuroscience (VI)
  11:00 am  Lecture (Krzakala): Exact methods for the study of neural networks (IV)
  1:30 pm   Lecture (Krzakala): Exact methods for the study of neural networks (V)
  2:45 pm   Lecture (McClelland): Neural network models of human cognition (IV)
  4:00 pm   Tutorial (Satchel): McClelland's lectures

Friday, September 1
  9:30 am   Lecture (Krzakala): Exact methods for the study of neural networks (VI)
  11:00 am  Lecture (Krzakala): Exact methods for the study of neural networks (VII)
  1:30 pm   Lecture (McClelland): Neural network models of human cognition (V)
  2:45 pm   Lecture (McClelland): Neural network models of human cognition (VI)
  4:00 pm   Group project work

Monday, September 4
  9:30 am   Panel discussion (Chung, Lambon-Ralph, Summerfield)
  11:00 am  Panel discussion (Chung, Lambon-Ralph, Summerfield)
  1:30 pm   Group project work
  2:45 pm   Group project work

Tuesday, September 5
  9:30 am   Lecture (Musslick): A graph-theoretic analysis of parallel processing in neural network architectures
  11:00 am  Lecture (Akrami): Understanding memory at a neuroscientific level
  1:30 pm   Lecture (Akrami): Understanding memory at a neuroscientific level
  2:45 pm   Group project work

Wednesday, September 6
  9:30 am   Lecture (Musslick): A graph-theoretic analysis of parallel processing in neural network architectures
  11:00 am  Lecture (Eckstein): Computational cognitive modeling, reinforcement learning, and neural networks
  1:30 pm   Lecture (Eckstein): Computational cognitive modeling, reinforcement learning, and neural networks
  2:45 pm   Group project work
  4:00 pm   Group project presentations

Thursday, September 7
  9:30 am   Lecture (Rogers): Natural language processing
  11:00 am  Lecture (Rogers): Natural language processing

Participants

Contributed posters

  1. Máté Aller, “Efficiency and (lack of) flexibility in a deep learning model of human spoken word recognition”
  2. Jan Philipp Bauer, “Quantifying rich and robust inductive biases in chaotic recurrent neural networks”
  3. Anna-Lea Beyer, “The relationship between behavioural tasks and brain space”
  4. Victoria Bosch, “The brain can’t copy-paste: End-to-end topographic neural networks as a way forward for modelling cortical map formation and behaviour”
  5. Chi-Ning Chou, “Probing biological and artificial neural networks with task-dependent neural manifolds”
  6. Marianne de Heer Kloots, “What components of NLP models drive similarity to brain activity in language processing? Layer- and head-level analyses”
  7. Mani Hamidi, “Using representation-learning to guide efficient exploration”
  8. Michael Hanna, “Understanding subject-verb agreement in pre-trained language models: A circuits approach”
  9. Eghbal Hosseini, “Teasing apart the representational spaces of ANN language models to discover key axes of model-to-brain alignment”
  10. Jaedong Hwang, “Efficient exploration via fragmentation and recall”
  11. Akshay Kumar Jagadish, “Using large-language models to meta-learn human inductive biases”
  12. Maximilian Mittenbühler, “Human resource-rational planning: A neural network approach”
  13. Turan Orujlu, “VividDreamer: Tokenized world model with stochastic attention”
  14. Mitchell Ostrow, “Beyond geometry: Comparing the temporal structure of computation in neural circuits with dynamic mode representational similarity analysis”
  15. Alexandra Proca, “Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks”
  16. Safura Rashid Shomali, “Revealing hidden neuronal microcircuits from correlations among spiking neurons”
  17. Jirko Rubruck, “Learning dynamics of semantic knowledge in humans and neural networks”
  18. Ábel Ságodi, “An interpretable language for robust neural computation”
  19. Quilee Simeon, “Dimensionality and dynamics of abstract representations”
  20. Sushrut Thorat, “Characterising representation dynamics in recurrent neural networks for object recognition”
  21. Elia Turner, “The simplicity bias in multi-task RNNs: Shared attractors, reuse of dynamics, and geometric representation”
  22. Sven Wientjes, “Strategic cognitive control is bound to representations of temporal context”

Participant list

  1. Máté Aller
  2. Jan Philipp Bauer
  3. Ari Benjamin
  4. Anna-Lea Beyer
  5. Victoria Bosch
  6. Abdulkadir Canatar
  7. Elise Chang
  8. Brandon Chen
  9. Chi-Ning Chou
  10. Zach Cohen
  11. Marianne de Heer Kloots
  12. Tala Fakhoury
  13. Dirk Goldschmitt
  14. Mani Hamidi
  15. Jerome Han
  16. Michael Hanna
  17. Eghbal Hosseini
  18. Jaedong Hwang
  19. Akshay Kumar Jagadish
  20. Hajer Karoui
  21. Jin Lee
  22. Xiaoxuan Lei
  23. Huidi Li
  24. Maximilian Mittenbühler
  25. Turan Orujlu
  26. Mitchell Ostrow
  27. Alexandra Proca
  28. Safura Rashid Shomali
  29. Joséphine Raugel
  30. Jirko Rubruck
  31. Ábel Ságodi
  32. Kai Sandbrink
  33. Quilee Simeon
  34. Sushrut Thorat
  35. Elia Turner
  36. Sven Wientjes

Organizers

Sponsors

This summer course is made possible by the generous support of the Gatsby Computational Neuroscience Unit (funded by the Gatsby Charitable Foundation), the Flatiron Institute (funded by the Simons Foundation), and Guarantors of Brain.