About

I’m a computer science PhD student at MIT. I make fast machine learning algorithms.

I do this because 1) I’ve spent most of my waking hours on it since 2013 and now have a comparative advantage, and 2) I claim that it’s extremely important. It’s important because speed helps us get all the other qualities we want in machine learning, such as accuracy, privacy, fairness, and safety. It does this in three ways:

  1. It directly buys you more of these qualities: you can use larger models, explore more architectures and hyperparameters, and afford slower-but-privacy-preserving training or slower-but-adversarially-robust inference, and so on.
  2. A corollary is that it improves incentives. The history of technology shows that desirable features take a back seat to essential features. In machine learning today, we’re seeing model accuracy and inference time dominate desirable traits like safety and privacy. By making it easy to obtain high-quality models in the essential respects, we make it worthwhile to consider the “non-essential” respects as well.
  3. It facilitates research, especially in academia. The bottleneck in a great deal of machine learning research, particularly for those of us without tech-giant-level resources, is experiment time. Faster machine learning means faster research progress. And because academics benefit the most, it also (at least in theory) disproportionately helps research on socially desirable aspects of machine learning that private companies may have less incentive to pursue, such as privacy, safety, and fairness.

There’s more specific reasoning behind my individual projects, but these points hopefully give you a taste of why I think speed is important.

I’ve had a lot of failures in pursuing this, but also some successes. My favorite successes so far include:

  • Learning to recognize spoken words from five unlabeled examples in under two seconds [1]
  • Training on data at 5 GB/s in a single thread [2]
  • Nearest-neighbor searching through billions of images per second in one thread with no indexing [3]
  • Multiplying matrices 10-100x faster than an exact matrix multiply (with some approximation error) [4]

[1] https://arxiv.org/abs/1609.09196
[2] https://arxiv.org/abs/1808.02515
[3] https://arxiv.org/abs/1706.10283
[4] https://arxiv.org/abs/2106.10860

More Info

  • My CV is here.
  • I was profiled in UVA Today after winning a Goldwater Scholarship.
  • If you need something official-looking to say about me, here’s a bio:

    “Davis Blalock is a PhD student at MIT, advised by Professor John Guttag. His primary work is designing high-performance machine learning algorithms, with the goal of eliminating tradeoffs between speed, accuracy, privacy, and safety in machine learning. He received his M.S. from MIT in 2016 and his B.S. from the University of Virginia in 2014. He is a Qualcomm Innovation Fellow, NSF Graduate Research Fellow, and Barry M. Goldwater Scholar.”