Thoughts from AAAI 2019

Added: 3rd June 2019 by Jane Street

Over the last several years at Jane Street, we have become increasingly interested in machine learning and its many use cases.

This is why it was exciting when, earlier this year, a few of my colleagues and I had the opportunity to attend the AAAI 2019 conference.

We’d like to take this space to share with you some of the interesting projects and themes we saw at the conference.

Interpreting Neural Networks

Neural networks can achieve superhuman results in a number of problem domains, such as image classification and game-playing. However, the learned network is often a black box that “magically” computes a good answer. Understanding what networks are doing, and using that knowledge to build interpretable AI, is therefore an active area of research.

Some recent results (gradient visualisation, DeepLIFT, InfoGAN) show it is possible to gain some insight into neural networks by looking at layer activations. But at AAAI we saw Ghorbani et al. show that these methods are fragile: adversarial perturbations can generate inputs that arbitrarily move layer activations while still producing the correct classification result. We also saw several papers, such as this one on climate, discussing the benefits of classical AI over deep networks with respect to interpretability.
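
To make the first of these ideas concrete, here is a minimal sketch of vanilla gradient saliency in PyTorch. The tiny untrained model and random input below are stand-ins for a real trained classifier and image; the point is only the shape of the computation, not a reproduction of any particular paper's method.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained image classifier; any
# differentiable model would do here.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# A dummy 28x28 "image"; requires_grad lets us differentiate
# the network's output with respect to the input pixels.
x = torch.rand(1, 1, 28, 28, requires_grad=True)

logits = model(x)
top_class = logits.argmax(dim=1).item()

# Vanilla gradient saliency: the gradient of the top-class score
# with respect to the input. High-magnitude pixels are the ones
# the "explanation" highlights.
logits[0, top_class].backward()
saliency = x.grad.abs().squeeze()

# The fragility result: one can search for a tiny perturbation of x
# that keeps the predicted class fixed while making `saliency` look
# arbitrarily different, so the explanation alone should not be
# over-trusted.
print(saliency.shape)  # torch.Size([28, 28])
```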

At the same time, many presentations showed improvements in our ability to interpret more types of models, such as deep Q-networks, and in building interpretability into training itself, for example by using specialized network structures that encourage the learned model to take on semantically meaningful, grammar-based behavior at each layer.

AI for Social Good

One exciting topic that generated much discussion at AAAI was the use of artificial intelligence and machine learning for social good. There were interesting papers on a number of applications, including fake news detection and filtering; using image classification to detect and curtail human trafficking; and statistical methods for feature engineering of data collected by citizen scientists.

Two larger areas that I found fascinating were social AI and fairness in machine learning. Social AI is concerned with building robots and conversational agents that exhibit social characteristics (e.g. small talk, facial expressions, give-and-take conversation, …). Research has shown that these social agents outperform agents without social cues in early literacy education, and applications in this field were discussed at length in Cynthia Breazeal’s keynote.

Fairness in machine learning attempts to address the growing concern about automated decision models, and the policies built on them, with respect to protected classes like race, gender, or other axes. S. Ghili et al. and C. Dimitrakakis both described models that use latent variables in a Bayesian setting to characterize fairness for supervised tasks, and M. Olfat et al. showed how to enforce fairness in unsupervised problems (such as clustering insurance actuarial data).
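
As a rough illustration of the kinds of quantities these papers reason about, here is a small sketch with synthetic data. It is not an implementation of any of the cited models; it just computes two simple fairness diagnostics, one supervised and one unsupervised.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a binary protected attribute, a model's binary
# decisions, and cluster assignments from some unsupervised method.
group = rng.integers(0, 2, size=1000)
decisions = rng.integers(0, 2, size=1000)
clusters = rng.integers(0, 3, size=1000)

# Supervised notion (demographic parity): the positive-decision rate
# should not differ much between the protected groups.
rate_0 = decisions[group == 0].mean()
rate_1 = decisions[group == 1].mean()
print(f"demographic parity gap: {abs(rate_0 - rate_1):.3f}")

# Unsupervised notion: within each cluster, neither group should be
# severely underrepresented relative to a 50/50 split.
for c in range(3):
    members = group[clusters == c]
    minority_share = min(members.mean(), 1 - members.mean())
    print(f"cluster {c}: minority share = {minority_share:.3f}")
```

On random data both diagnostics come out close to their "fair" values; the research question is how to constrain a real model or clustering so that they stay there.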