The Challenge of Compositionality for AI

June 29-30, 2022

A two-day online workshop on compositionality and artificial intelligence organized by Gary Marcus and Raphaël Millière.

Day 1: Why Compositionality Matters for AI

Gary Marcus (New York University, Emeritus)

“Compositionality and Natural Language Understanding” [Slides]

Allyson Ettinger (University of Chicago)

“Shades of Meaning Composition: Defining Compositionality Goals in NLU” [Slides]

Paul Smolensky (Johns Hopkins University/Microsoft Research Redmond)

“Human-Level Intelligence Requires Continuous, Robustly Compositional Representations: Neurocompositional Computing for NECST-Generation AI” [Slides]

Raphaël Millière (Columbia University)

“Compositionality Without Classical Constituency” [Slides]

Day 2: Can Language Models Handle Compositionality?

Dieuwke Hupkes (European Laboratory for Learning and Intelligent Systems / Meta AI)

“Are Neural Networks Compositional, and How Do We Even Know?” [Slides]

Tal Linzen (New York University / Google AI)

“Successes and Failures of Compositionality in Neural Networks for Language” [Slides]

Stephanie Chan (DeepMind)

“Data Distributions Drive Emergent In-Context Learning in Transformers” [Slides]

Ellie Pavlick (Brown University / Google AI)

“No One Metric is Enough! Combining Evaluation Techniques to Uncover Latent Structure” [Slides]

Brenden Lake (New York University / Meta AI)

“Human-Like Compositional Generalization Through Meta-Learning” [Slides]

References

We have listed some relevant papers discussed by each speaker below.

Gary Marcus [Slides]

Allyson Ettinger [Slides]

Paul Smolensky [Slides]

Raphaël Millière [Slides]

Dieuwke Hupkes [Slides]

Tal Linzen [Slides]

Stephanie Chan [Slides]

Ellie Pavlick [Slides]

Brenden Lake [Slides]