Projects / Wasserstein GAN DFL

While tutoring for the Deep Learning Indaba I was fortunate to get to know Cinjon Resnick, who is currently doing his PhD with Kyunghyun Cho at NYU.

After the Indaba, Cinjon invited me to join an experiment in distributed teaching and learning that he founded, known as Depth First Learning (www.depthfirstlearning.com). One thing I particularly like about DFL is its emphasis on navigating a path through “paper space” to explain a core idea or story. The story we chose as the backbone for our DFL was that of the Wasserstein GAN, which, in case you don’t know it, is an insightful twist on the ordinary Generative Adversarial Net, involving many interesting ideas from probability theory and optimal transport.
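To give a flavour of the optimal-transport connection: the WGAN’s central quantity is the Wasserstein-1 (“earth mover’s”) distance between the data and generator distributions. In one dimension, for two equal-size samples, it has a neat closed form: sort both samples and average the absolute gaps. Here is a minimal NumPy sketch (the function name `wasserstein_1d` is mine, and this toy is not part of the DFL curriculum itself):

```python
import numpy as np

def wasserstein_1d(x, y):
    """Wasserstein-1 distance between two 1-D empirical distributions
    with the same number of samples: pair sorted samples and average
    the absolute differences (the optimal transport plan in 1-D)."""
    return float(np.mean(np.abs(np.sort(x) - np.sort(y))))

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=1000)  # same mean, different spread
b = rng.normal(0.0, 2.0, size=1000)

print(wasserstein_1d(a, a))  # 0.0 — a distribution is at zero distance from itself
print(wasserstein_1d(a, b))  # grows with how far mass must be moved
```

Unlike the Jensen–Shannon divergence underlying the original GAN objective, this distance varies smoothly as one distribution is shifted relative to the other, which is the intuition the WGAN papers build on.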

Depth First Learning, which is as much about generating learning materials as it is about actually teaching the participants, works in a most interesting way. The recipe is still evolving, but in our case, a distributed team of about seven of us, led by James Allingham, met regularly over the course of a month. We focused on one paper each week, preparing beforehand so we could review and discuss in an hour-long Google Hangouts session, chaired and coordinated by James.

Each paper gave us the foundation and intuition to understand the next, until we were ready to tackle the final WGAN-GP paper. It amounted to a kind of protracted, Socratic journal club, spiced up with the appearance of mystery guests who joined us in the Hangout session to explain finer details, including Martin Arjovsky (a first author on the WGAN papers), Tim Salimans, and Ishaan Gulrajani. A real treat!

You can find the finished product at www.depthfirstlearning.com. James did all the work — I merely asked the occasional foolish question and helped transcribe some of the video we recorded.