An Addendum to Alchemy

This post is an addendum to our “test of time” talk at NIPS 2017. We’d like to expand on a few points from the talk we gave last week. The talk highlighted the...

Reflections on Random Kitchen Sinks

Ed. Note: Ali Rahimi and I won the test of time award at NIPS 2017 for our paper “Random Features for Large-Scale Kernel Machines”. This post is the text of the acceptance speech we wrote...

Nesterov's Punctuated Equilibrium

Ed. Note: this post is co-written with Roy Frostig. Following the remarkable success of AlphaGo, there has been a groundswell of interest in reinforcement learning for games, robotics, parameter tuning, and even computer networking. In...

The Fall of BIG DATA

I’m still in total shock from the decision my country made last Tuesday. We elected a hateful, bigoted, misogynistic, incompetent demagogue to lead us into a dark and foreboding future. While the internet has been...

Embracing the Random

Ed. Note: this post is again in my voice, but co-written with Kevin Jamieson. Kevin provided all of the awesome plots, and has a great tutorial for implementing the algorithm I’ll describe in this post...

The News on Auto-tuning

Ed. Note: this post is in my voice, but it was co-written with Kevin Jamieson. Kevin provided the awesome plots too. It’s all the rage in machine learning these days to build complex, deep pipelines...

The Role of Convergence Analysis

This year marks the retirement of Dimitri Bertsekas from MIT. Dimitri is an idol of mine, having literally written the book on every facet of optimization. His seminal works on distributed optimization, dynamic programming, and...

Mechanics of Lagrangians

In my last post, I used a Lagrangian to compute derivatives of constrained optimization problems in neural nets and control. I took it for granted that the procedure was correct. But why is it correct?...
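
For readers skimming the index, the procedure that teaser refers to is the classic adjoint-method calculation. A minimal sketch, in notation of my own choosing rather than necessarily the post’s: suppose $x(\theta)$ is defined implicitly by a constraint $g(x, \theta) = 0$ and we want the derivative of $h(\theta) = f(x(\theta))$. Form the Lagrangian

$$ L(x, \lambda, \theta) = f(x) + \lambda^\top g(x, \theta). $$

Choosing the multiplier so that $L$ is stationary in $x$, i.e. $\left(\tfrac{\partial g}{\partial x}\right)^{\!\top} \lambda = -\left(\tfrac{\partial f}{\partial x}\right)^{\!\top}$, makes the total derivative collapse to a partial one:

$$ \frac{dh}{d\theta} = \frac{\partial L}{\partial \theta} = \lambda^\top \frac{\partial g}{\partial \theta}. $$

The point is that this identity follows from the implicit function theorem applied to the constraint, which is exactly the “why is it correct?” question the post takes up.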