Core Projects

In its first year (2024-2025), NAM will focus on launching two research efforts. The first, led by NAM co-director Sarah-Jane Leslie, focuses on developing and testing AI models of human cognitive function. The second, led by NAM co-director Tania Lombrozo, focuses on explanation and intelligibility in humans and machines. 

Developing and Testing AI Models of Human Cognitive Function

Project Lead: Sarah-Jane Leslie

One of the largest gaps that remain between natural and artificial minds lies in the efficiency with which natural minds (or at least human ones) learn, and the flexibility with which they generalize what they have learned to novel circumstances. These capabilities reflect the ability of humans to efficiently learn low-dimensional, abstract representations of task-relevant structure, and to apply and recombine such representations in new settings that share similar elements of structure. Contemporary artificial systems have yet to exhibit these capabilities: they require massive amounts of training data (several orders of magnitude more than humans), and the proficiency they achieve in focused domains of function (e.g., language versus motor skills) does not generalize to others. This project will directly address this gap, under the assumption that natural minds are imbued with inductive biases toward the efficient learning of task-relevant abstract representations. It will draw on insights from psychologists about the functional components of human cognition, and from neuroscientists about principles of neural-network computation gleaned from the architecture of the brain.
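
To make the idea of an architectural inductive bias concrete, the sketch below shows one common way such a bias can be expressed in an artificial system: a shared low-dimensional bottleneck whose output is reused by task-specific heads. This is an illustrative toy example, not a description of the project's actual models; the layer sizes, task names, and use of PyTorch are assumptions made for illustration.

import torch
import torch.nn as nn

class BottleneckEncoder(nn.Module):
    """Maps high-dimensional inputs to a small, abstract representation."""
    def __init__(self, input_dim=128, latent_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),   # low-dimensional bottleneck (the inductive bias)
        )

    def forward(self, x):
        return self.net(x)

# Task-specific heads reuse the same abstract representation.
encoder = BottleneckEncoder()
head_task_a = nn.Linear(4, 10)   # hypothetical task A (e.g., categorization)
head_task_b = nn.Linear(4, 2)    # hypothetical novel task B sharing the same structure

x = torch.randn(32, 128)         # a batch of synthetic inputs
z = encoder(x)                   # shared, low-dimensional representation
logits_a = head_task_a(z)
logits_b = head_task_b(z)        # transfer = reusing z, training only the new head
print(z.shape, logits_a.shape, logits_b.shape)

In this toy setting, generalization to a new task amounts to reusing the learned low-dimensional representation and fitting only a small new head, rather than learning from scratch.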

Explanation and Intelligibility

Project Lead: Tania Lombrozo

Deep learning systems and other advances in AI have raised questions about “explainability”: How can an engineer or end-user understand the basis for some algorithmic judgment or decision when it comes from a largely opaque process? While research on explainability within computer science has made important advances, it has proceeded largely independently of existing work on the nature of explanation and understanding in educational, cognitive, and social psychology, and in philosophy of science and epistemology. This disconnect is unfortunate, as these fields have a great deal to learn from each other. This project will bring an interdisciplinary team together to tackle these questions and to generate a taxonomy of forms of understanding that can help human minds better understand artificial systems, and help artificial systems better mimic human explanation and understanding.
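
For readers unfamiliar with what explainability research in computer science typically produces, the sketch below shows one widely used style of technique: perturbation-based attribution, which scores each input feature by how much a model's output changes when that feature is masked. This is only an illustrative example of the kind of work the paragraph refers to, not this project's method; the model and data are hypothetical stand-ins.

import numpy as np

def opaque_model(x):
    """Stand-in for an opaque predictor (here, a fixed random linear model)."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[-1])
    return x @ w

def occlusion_attribution(model, x, baseline=0.0):
    """Score each feature by how much the output changes when it is masked."""
    base_pred = model(x)
    scores = np.zeros_like(x)
    for i in range(x.shape[-1]):
        x_masked = x.copy()
        x_masked[i] = baseline   # mask one feature at a time
        scores[i] = base_pred - model(x_masked)
    return scores

x = np.array([1.0, -2.0, 0.5, 3.0])
print(occlusion_attribution(opaque_model, x))

Whether such feature-level scores actually constitute an explanation that people find intelligible is precisely the kind of question this project's interdisciplinary framing is meant to address.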