My future in AI


How can I help advance AI? This blog post summarises some key areas I’m passionate about, as a kind of personal roadmap for the next few years of my effort and attention.

Human-level AI probably requires the fusion of multiple disparate techniques, theories, and forms of human capital. I gather these strands under three rubrics: “other voices”, “other tools”, and “other methods”.


Other voices

Why should we care about diversity in AI research? Pragmatically, diverse groups are likely[^1] to think[^2] better[^3]. The second reason is ethical: the beneficiaries of historical advantages and imbalances have a social responsibility to those on the other end of those historical forces. Third, that many groups remain deprived of opportunity represents an unclaimed dividend of talent, setting the field back from where it could be.

The fourth and perhaps most important reason relates to the basic principle of democracy. To ensure the benefits of AI accrue to all, we need researchers from diverse socio-economic groups to act as stakeholders, advocates, and leaders – rather than depending solely on the developed world’s researchers and educators to interpret the needs and concerns of all humans.

These values motivate my involvement in the Deep Learning Indaba (deeplearningindaba.com), where I help run the practical sessions.


Other tools

Swift and differentiable programming

Swift is an attractive host for scientific code. It approaches the performance of C++ and the clarity of Python. Its type system can capture high-level abstractions that would otherwise remain latent, and it supports a more fluent and flexible composition style than inheritance-based programming. Capitalising on these advantages, the Swift for TensorFlow project offers to inject the use of gradients into a much wider set of applications than earlier DL frameworks anticipated. How to exploit gradients on arbitrary code remains to be seen, though there are fascinating hints [^7]. This is a frontier I would like to explore deeply over the coming years.
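
To make that concrete, here is a minimal sketch of differentiable programming in Swift, assuming a Swift for TensorFlow-era toolchain (more recent toolchains spell the same idea as `import _Differentiation` with `gradient(at:of:)`):

```swift
import TensorFlow  // Swift for TensorFlow toolchain

// An ordinary function, marked differentiable: the compiler synthesises
// its derivative; no tape, graph, or wrapper types required.
@differentiable
func loss(_ w: Double) -> Double {
    (3.0 * w - 1.0) * (3.0 * w - 1.0)
}

// Gradients are queried like any other function call.
let g = gradient(at: 0.5, in: loss)  // d/dw (3w - 1)^2 = 6(3w - 1); here, 3.0
print(g)
```

The point is not the toy arithmetic, but that `loss` could be arbitrary Swift code, control flow and all; that is what opens gradients up to applications beyond conventional DL graphs.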

AlphaZero

Software engineers place great value on clear abstractions, because they facilitate and enrich thought. AlphaZero is an attractive test case for abstraction, because it lies at the intersection of self-play, policy iteration, tree search, and gradient descent; all have elegant and (mostly) orthogonal representations. The ideal “reference implementation” is hence a clarification and embodiment of how best to think about an algorithm — and new technologies like Swift offer this boon without any crippling penalty in performance. I’ve approached my implementation of AlphaZero for OpenSpiel with these points in mind.
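
As a sketch of what those orthogonal abstractions can look like, here is a hypothetical set of Swift protocol boundaries (my own illustration, not the actual OpenSpiel API) separating the game, the network, and the search, with policy iteration emerging from their composition:

```swift
// Hypothetical interfaces, for illustration only.
protocol Game {
    associatedtype State
    associatedtype Move: Hashable
    func initialState() -> State
    func legalMoves(in state: State) -> [Move]
    func isTerminal(_ state: State) -> Bool
    func applying(_ move: Move, to state: State) -> State
}

protocol PolicyValueNet {
    associatedtype G: Game
    // Move priors and a value estimate, as in AlphaZero's two-headed network.
    func evaluate(_ state: G.State) -> (priors: [G.Move: Double], value: Double)
    // Gradient descent towards search-improved targets.
    mutating func train(on targets: [(G.State, [G.Move: Double], Double)])
}

// Tree search improves on the raw network policy...
func searchPolicy<G: Game, N: PolicyValueNet>(
    game: G, net: N, state: G.State, simulations: Int
) -> [G.Move: Double] where N.G == G {
    // MCTS guided by net.evaluate; elided in this sketch.
    net.evaluate(state).priors
}

// ...and the network distils the improved policy back: policy iteration.
func alphaZeroIteration<G: Game, N: PolicyValueNet>(game: G, net: inout N) where N.G == G {
    var state = game.initialState()
    var targets: [(G.State, [G.Move: Double], Double)] = []
    while !game.isTerminal(state) {
        let improved = searchPolicy(game: game, net: net, state: state, simulations: 800)
        targets.append((state, improved, 0.0))  // value target comes from the final outcome
        guard let move = improved.max(by: { $0.value < $1.value })?.key else { break }
        state = game.applying(move, to: state)
    }
    net.train(on: targets)
}
```

Self-play, search, and learning touch each other only through these narrow interfaces, which is exactly the property that makes the algorithm pleasant to reason about.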


Other methods

Bayesian Inference

Ideal probabilistic reasoning is Bayesian, and most such reasoning is intractable. Yet we are confronted by stochastic worlds, illuminated by unreliable and partial observations, and populated by unpredictable agents. The role approximate Bayesian inference plays in coping with this uncertainty is currently a rich field of neuroscientific research, and a key challenge for AI research. The picture is murky, however: MuZero showed that learned models can facilitate planning without explicit stochasticity, distributional RL helps but for reasons that remain unclear, and VAEs have lagged behind other methods for generative modelling. Intriguingly, though, distributional RL appears to explain dopaminergic activity[^4].

Meanwhile, technology for probabilistic programming continues to advance, and techniques like HMC offer ways to exploit differentiable programming for faster inference. A confluence of these various methods seems likely, with VI and MCMC judiciously combined within complex hierarchical models. Automatic tuning of inference and amortisation recipes on such models could be a way to empower “probabilistic programmers” to tackle new domains with powerful and sample-efficient models.
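
As a toy illustration of why gradients matter for inference, here is a minimal HMC sampler in Swift for a 1-D standard normal target. The gradient is hand-coded, but under differentiable programming `gradLogP` would fall out of `logP` automatically; the names and single-chain setup are mine, not any library’s API:

```swift
import Foundation

// Target: log p(x) = -x²/2, so grad log p(x) = -x.
func logP(_ x: Double) -> Double { -0.5 * x * x }
func gradLogP(_ x: Double) -> Double { -x }

// Standard normal draw via Box-Muller, for the momentum refresh.
func gaussian() -> Double {
    let u1 = Double.random(in: Double.ulpOfOne..<1)
    let u2 = Double.random(in: 0..<1)
    return (-2 * log(u1)).squareRoot() * cos(2 * Double.pi * u2)
}

// One HMC transition: momentum refresh, leapfrog integration, Metropolis test.
func hmcStep(_ x0: Double, stepSize: Double, leapfrogSteps: Int) -> Double {
    var x = x0
    var p = gaussian()
    let h0 = -logP(x) + 0.5 * p * p            // initial Hamiltonian
    p += 0.5 * stepSize * gradLogP(x)          // half step for momentum
    for i in 1...leapfrogSteps {
        x += stepSize * p                      // full step for position
        if i < leapfrogSteps { p += stepSize * gradLogP(x) }
    }
    p += 0.5 * stepSize * gradLogP(x)          // final half step
    let h1 = -logP(x) + 0.5 * p * p
    return Double.random(in: 0..<1) < exp(h0 - h1) ? x : x0  // accept or reject
}

// A short chain; the sample mean should hover near the target's mean of 0.
var x = 0.0
var total = 0.0
for _ in 0..<5000 {
    x = hmcStep(x, stepSize: 0.2, leapfrogSteps: 10)
    total += x
}
print(total / 5000)
```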

Multi-Agent Methods

Another powerful and complementary tool is game theory and its application to multi-agent learning. The theory is less developed here, but experience from the natural world testifies to the role of multi-agent dynamics at many scales: social learning within groups, intra-group collaboration and inter-group competition, complex ecologies that support niching and “natural curricula”, the ratcheting of complexity via red-queen dynamics, and more. GAN equilibria provided the first hint of these dynamics, which subsequent work has so elegantly elaborated. Exploitability descent, Alpha-Rank, and PSRO-RN give us more tools that we can wield in quite abstract ways.
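
To pin down the kind of quantity these methods manipulate, here is a toy Swift computation (my own illustration, not the OpenSpiel API) of exploitability in rock-paper-scissors: the expected gain a best response earns against a fixed mixed strategy, which is exactly what exploitability descent drives towards zero:

```swift
// Row player's payoffs for Rock, Paper, Scissors vs. Rock, Paper, Scissors.
let payoff: [[Double]] = [
    [ 0, -1,  1],
    [ 1,  0, -1],
    [-1,  1,  0],
]

// Expected payoff of each pure action against a mixed opponent strategy.
func actionValues(against opponent: [Double]) -> [Double] {
    payoff.map { row in zip(row, opponent).map { $0.0 * $0.1 }.reduce(0, +) }
}

// In a symmetric zero-sum game the equilibrium value is 0, so under that
// convention a strategy's exploitability is just the best response's gain.
func exploitability(of strategy: [Double]) -> Double {
    actionValues(against: strategy).max()!
}

print(exploitability(of: [1/3.0, 1/3.0, 1/3.0]))  // 0.0: the Nash equilibrium
print(exploitability(of: [0.5, 0.5, 0.0]))        // 0.5: Paper exploits this
```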

Structured Representations

A seemingly settled wisdom is the shift away from task-specific engineering and representation to flexible, task-agnostic architectures trained via gradient descent. However, inducing disentangled, interpretable representations could be a powerful way to promote or even guarantee the safety, fairness, transfer efficiency, generalisation, and explainability of our models.

How do we reconcile these? It’s a delicate balancing act: on one hand, choosing the right inductive priors, representations, and architectures to support explainable, high-level cognition; on the other, matching and exceeding the flexibility and efficiency of today’s models without excessive hand-holding. New paradigms of programming and architectural search might be required to walk this tightrope within the bounds of a reasonable computational budget.

Neuroscience

AI and neuroscience have long courted each other. The neocognitron inspired convnets; conversely, convnets predict human visual cortex activity. The hippocampus inspired experience replay, and TD error seemed to match up with dopamine. Distributional RL inspired a new interpretation of dopaminergic neurons. The connection between backprop and biological learning is heating up. DL is capable of recapitulating mammalian grid-cell navigation. Transfer learning connects to theories of plasticity. In the background, a new philosophy of neuroscience beckons from the apex of Marr’s levels, cast as the study of architectures and learning rules. The traffic between these fields grows richer and more exciting by the year.

It’s not clear what form future cross-pollination will take, but the value of basic literacy in neuroscience seems clear. The next productive frontier might even advance into the territory of cognitive neuroscience, following research like that of Alison Gopnik in the field of baby cognition.


[^1]: Ethnic diversity deflates price bubbles (www.pnas.org)

[^2]: Is the Pain Worth the Gain? The Advantages and Liabilities of Agreeing With Socially Distinct Newcomers (journals.sagepub.com)

[^3]: How Diversity Makes Us Smarter (www.scientificamerican.com)

[^4]: A distributional code for value in dopamine-based reinforcement learning (www.nature.com)