# Projects

2023

- Rainbow Arrays (blog post series). After a lot of work (and software engineering), I finished the first two episodes in what is hopefully a longer series of illustrated blog posts. The theme is multidimensional arrays and how we manipulate them conceptually. I know the deep learning crowd likes to call them tensors, but this brazen theft of terminology from mathematical physics has some big downsides. First, it makes everyone assume that array programming is just linear algebra, which it definitely is not. Second, it means that …
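That first point can be made concrete: everyday array operations such as axis permutation and broadcasting have no natural expression as matrix products. A quick NumPy illustration (the example arrays here are invented, not from the blog posts):

```python
import numpy as np

# A hypothetical 3-axis array: (batch, height, width) of pixel intensities.
images = np.arange(24, dtype=float).reshape(2, 3, 4)

# Axis permutation: swap height and width. This is a structural
# rearrangement of the data, not a matrix product.
transposed = np.transpose(images, (0, 2, 1))

# Broadcasting: subtract a per-image mean from every pixel, with the
# (2, 1, 1) array of means stretched across the trailing axes.
centered = images - images.mean(axis=(1, 2), keepdims=True)

print(transposed.shape)  # (2, 4, 3)
```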
- Neuroevolution Paper (gene expression, connectomes, and metalearning). A few years into my mid-career pivot into science, I just landed my first published paper! What started as an idea I wrote on a napkin at the Cosyne neuroscience conference is finally out in Nature Communications: “Complex computation from developmental priors”. […] In the paper, my neuroscience collaborator Dániel and I describe how to cast neuroevolution as a metalearning problem, by treating his XOX model of connectome generation as a differentiable step in the two-stage …
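As I understand the published rule (the paper's exact formulation may differ from this paraphrase), the XOX idea is that a full connectome is generated from a much smaller "genome": each synaptic weight is a bilinear function of the two neurons' gene-expression vectors, W = X O Xᵀ. A minimal NumPy sketch of that developmental step, with all sizes invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_genes = 6, 3  # far fewer genes than synapses

# The compact "genome" of the model:
#   X -- per-neuron gene-expression levels
#   O -- gene-gene interaction rule
X = rng.normal(size=(n_neurons, n_genes))
O = rng.normal(size=(n_genes, n_genes))

# Developmental step: expand the genome into a full connectome.
# Because this is just matrix products, it is differentiable, so the
# genome itself can be optimized by gradient-based metalearning.
W = X @ O @ X.T

# The network then simply runs with the generated weights.
def forward(activity):
    return np.tanh(W @ activity)

out = forward(rng.normal(size=n_neurons))
print(W.shape, out.shape)  # (6, 6) (6,)
```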

2019

- DLI: "Build Your Own TensorFlow" (TensorFlow practicals for the Deep Learning Indaba in Kenya). For the 2019 Deep Learning Indaba (held in Nairobi, Kenya), I was privileged to co-organize the tutorial sessions with Jamie Allingham, with the extensive help and support of Avishkar Boopchand, Stephan Gouws, and Ulrich Paquet of DeepMind. We had more than 50 tutors this year, covering more than 500 students across 2 parallel sessions each day. Our goal for this year was to expand the range of material that the tutorials covered, to address pain points we saw in previous years, and to further …
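The title refers to building automatic differentiation from scratch. The tutorials' actual code is not reproduced here, but the core idea can be sketched as a toy scalar reverse-mode autodiff class (every name below is invented for illustration):

```python
class Var:
    """A scalar that records how it was computed, so gradients can
    flow backwards through the computation graph -- the core idea
    behind TensorFlow and PyTorch."""

    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents  # (parent Var, local gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, upstream=1.0):
        # Accumulate the chain rule along every path to this node.
        # (Naive recursion: fine for tiny graphs like this one.)
        self.grad += upstream
        for parent, local in self.parents:
            parent.backward(upstream * local)

x, y = Var(2.0), Var(3.0)
z = x * y + x          # z = xy + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```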
- Spieeltjie (experiment with multi-agent RL on zero-sum differentiable games). Spieeltjie is a single-file package for doing simple experiments with multi-agent reinforcement learning on symmetric zero-sum games. For more information see “Open-ended learning in Symmetric Zero-Sum Games” and “A Unified Game-Theoretic Approach to Multiagent Reinforcement Learning”. The name “spieeltjie” comes from the Afrikaans word for “tournament”. […] This first set of images shows trajectories when starting from a set of random …
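Spieeltjie's own API isn't shown here; as a stand-in, here is a minimal sketch of the kind of experiment this setting invites, two players running simultaneous gradient ascent on rock-paper-scissors (the projection step below is a crude renormalization for illustration, not a claim about Spieeltjie's method):

```python
import numpy as np

# Rock-paper-scissors payoff for player 1; player 2 gets the negative.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def project_simplex(p):
    # Crude clip-and-renormalize to keep strategies valid distributions
    # (a placeholder for a proper simplex projection).
    p = np.clip(p, 1e-6, None)
    return p / p.sum()

rng = np.random.default_rng(0)
p = project_simplex(rng.random(3))  # player 1 mixed strategy
q = project_simplex(rng.random(3))  # player 2 mixed strategy

lr = 0.05
trajectory = []
for _ in range(200):
    # Each player ascends the gradient of its own expected payoff,
    # holding the opponent fixed -- updates are applied simultaneously.
    p, q = (project_simplex(p + lr * A @ q),
            project_simplex(q - lr * A.T @ p))
    trajectory.append((p.copy(), q.copy()))

print(np.round(p, 3), np.round(q, 3))
```

Plotting the recorded trajectory is what produces the kinds of images the entry mentions: in zero-sum games like this, naive gradient play tends to circle rather than settle.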
- Wasserstein GAN DFL (contributed to Depth First Learning course on Wasserstein GANs). While tutoring at the 2019 Deep Learning Indaba, I got to know the multi-talented Cinjon Resnik, who is currently doing his PhD with Kyunghyun Cho at NYU. After the Indaba, Cinjon invited me to join an experiment he is running in distributed teaching and learning called Depth First Learning. One innovation that particularly resonated with me was the effort DFL makes to plot a path through “paper space” as a way to explain a core idea or story. The story we chose as the backbone for …
- Biologically Plausible Backprop (feedback alignment & activity perturbation in PyTorch). SOAP (Second Order Activity Perturbation) is a package for experimenting in PyTorch with a computational neuroscience phenomenon called feedback alignment. It formed my project for the 3-week IBRO-Simons Computational Neuroscience Summer School in 2019. It relates to an important challenge in reconciling how learning works in artificial neural networks with what we know about how real neurons behave, a topic called biologically plausible back-propagation. […] When a neuron fires, its axon …
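SOAP itself is a PyTorch package; as a self-contained illustration of the underlying phenomenon (not SOAP's code), here is feedback alignment in plain NumPy on an invented toy task. Exact backprop would propagate the error through the transpose of the forward weights; feedback alignment replaces that transpose with a fixed random matrix, sidestepping the biologically implausible "weight transport" requirement:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task for the sketch: learn the identity map y = x.
X = rng.normal(size=(100, 4))
Y = X.copy()

W1 = rng.normal(scale=0.1, size=(4, 8))  # forward weights, layer 1
W2 = rng.normal(scale=0.1, size=(8, 4))  # forward weights, layer 2
B = rng.normal(scale=0.1, size=(4, 8))   # FIXED random feedback matrix

def loss():
    return np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)

init_loss = loss()
lr = 0.05
for _ in range(500):
    H = np.tanh(X @ W1)
    err = H @ W2 - Y                 # output error for the MSE loss
    # Backprop would compute err @ W2.T here; feedback alignment
    # uses the fixed random matrix B instead of the transposed weights.
    dH = (err @ B) * (1 - H ** 2)
    W2 -= lr * H.T @ err / len(X)
    W1 -= lr * X.T @ dH / len(X)

print(init_loss, loss())
```

The striking result in the literature is that the forward weights "align" with the random feedback over training, so learning still works; the final loss printed above comes out well below the initial one.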

2015

2013

- Facebook Data Analysis (analysis and visualization for a popular blog by Stephen Wolfram). For this project, I helped Stephen Wolfram analyze a dataset of Facebook profiles we collected from volunteers who engaged with a previous project I worked on. We examined the influence of the friendship paradox, computed correlations between various demographic variables, performed topic modeling on people’s Facebook posts, and ran cluster analysis on their friend graphs. The friendship paradox states that, on average, your friends have more friends than you do, because popular people are …
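The friendship paradox is easy to verify numerically. A small NumPy sketch on a random graph (not the original Facebook data): averaging degree over *edges* weights popular people by how often they appear as someone's friend, which pulls the average up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Random undirected graph (Erdos-Renyi style) as an adjacency matrix.
A = rng.random((n, n)) < 0.05
A = np.triu(A, 1)
A = A | A.T

degrees = A.sum(axis=1)
mean_degree = degrees.mean()  # average person's friend count

# Average, over all friendship edges, of the friend's degree:
# a person with d friends is counted d times here.
i, j = np.nonzero(A)
mean_friends_degree = degrees[j].mean()

print(mean_degree, mean_friends_degree)
```

Unless everyone has exactly the same number of friends, the second average is strictly larger: it equals E[d²]/E[d] versus E[d].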
- Wikispider (tool for breadth-first crawling of matching Wikipedia articles).
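Wikispider's own interface isn't shown here; a minimal sketch of the idea, breadth-first traversal with a match predicate, using a stubbed link graph in place of live Wikipedia fetches (all titles and names below are invented):

```python
from collections import deque

# Stubbed link graph standing in for live Wikipedia requests; the real
# tool would fetch each article and parse its outgoing links.
LINKS = {
    "Graph theory": ["Tree (graph theory)", "Leonhard Euler"],
    "Tree (graph theory)": ["Forest", "Graph theory"],
    "Leonhard Euler": ["Basel"],
}

def crawl(start, matches, max_depth=2):
    """Breadth-first crawl from `start`, collecting matching titles."""
    seen, found = {start}, []
    queue = deque([(start, 0)])
    while queue:
        title, depth = queue.popleft()
        if matches(title):
            found.append(title)
        if depth < max_depth:
            for link in LINKS.get(title, []):
                if link not in seen:      # never revisit an article
                    seen.add(link)
                    queue.append((link, depth + 1))
    return found

result = crawl("Graph theory", lambda t: "graph" in t.lower())
print(result)  # ['Graph theory', 'Tree (graph theory)']
```

Breadth-first order means matches are found closest-first, and the depth cap bounds how far the crawl wanders from the seed article.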

2012