João Loula

I am a second year PhD student in Brain and Cognitive Sciences at MIT advised by Josh Tenenbaum, working at the intersection of robotics and developmental psychology.

Previously, I was a research intern at Facebook AI Research, working with Brenden Lake and Marco Baroni, at Harvard with Sam Gershman, and at the Inria Parietal team with Bertrand Thirion and Gaël Varoquaux. I've studied at École Normale Supérieure Paris-Saclay, École Polytechnique and Universidade de São Paulo.

Email  /  CV  /  Google Scholar  /  Github


My work focuses on how children's rich theories of the world and sophisticated mental simulations are used to support action. Key areas of interest include planning, tool use, and spatial reasoning.

Inference and Planning with Virtual and Physical Constraints for Object Manipulation
João Loula, Kelsey Allen, Alberto Rodriguez, Josh Tenenbaum, Nima Fazeli
Under review

We propose a framework for manipulation that decomposes tasks into kinematic graphs comprised of virtual and physical kinematic constraints. To this end, we first infer a set of producible constraints during an exploration phase. Next, we demonstrate an efficient planning procedure that uses kinematic graphs built from these constraints for object manipulation.

A Task and Motion Approach to the Development of Planning
João Loula, Kelsey Allen, Josh Tenenbaum
CogSci, 2020

Developmental psychology presents us with a puzzle: though children are remarkably apt at planning their actions, they suffer from surprising yet consistent shortcomings. We argue that these patterns of triumph and failure can be broadly captured by the framework of task and motion planning, where plans are hybrid entities consisting of both a structured, symbolic skeleton and a continuous, low-level trajectory.

Learning constraint-based planning models from demonstrations
João Loula, Kelsey Allen, Tom Silver, Josh Tenenbaum
IROS, 2020

We present a framework for learning constraint-based task and motion planning models using gradient descent. Our model observes expert demonstrations of a task and decomposes them into modes—segments which specify a set of constraints on a trajectory optimization problem.

Discovering a symbolic planning language from continuous experience
João Loula, Tom Silver, Kelsey Allen, Josh Tenenbaum
CogSci, 2019

We present a model that starts out with a language of low-level physical constraints and, by observing expert demonstrations, builds up a library of high-level concepts that afford planning and action understanding.

Rearranging the Familiar: Testing Compositional Generalization in Recurrent Networks
João Loula, Marco Baroni, Brenden Lake
EMNLP BlackboxNLP Workshop, 2018

We extend the study of systematic compositionality in seq2seq models to settings where the model needs only to recombine well-trained functional words. Our findings confirm and strengthen the earlier ones: seq2seq models can be impressively good at generalizing to novel combinations of previously seen input, but only when they receive extensive training on the specific pattern to be generalized.

Human Learning of Video Games
Pedro Tsividis, João Loula, Jake Burga, Thomas Pouncy, Sam Gershman, Josh Tenenbaum
NIPS Workshop on Cognitively Informed Artificial Intelligence (Spotlight Talk), 2017

We study human-level learning in Atari-like games, where players learn theories from gameplay and use them to plan in a model-based manner.

Decoding fMRI activity in the time domain improves classification performance
João Loula, Gaël Varoquaux, Bertrand Thirion
NeuroImage, 2017

We show that fMRI decoding can be cast as a regression problem: a design matrix is fit to BOLD activation, and event classification is then easily obtained from the predicted design matrices. Our experiments show this approach outperforms state-of-the-art solutions, especially for designs with low inter-stimulus intervals, and the two-step nature of the model brings time-domain interpretability.

Loading and plotting of cortical surface representations in Nilearn
Julia Huntenburg, Alexandre Abraham, João Loula, Franziskus Liem, Kamalaker Dadi, Gaël Varoquaux
Research Ideas and Outcomes, 2017

We present initial support for cortical surfaces in Python within the neuroimaging data processing toolbox Nilearn. We provide loading and plotting functions for different surface data formats with minimal dependencies, along with examples of their application. Limitations of the current implementation and potential next steps are discussed.

website template credit