Disclaimer: this site focuses mostly on my academic / pre-industry work, so it is somewhat dated. I still update it occasionally.

I was VP of AI Infrastructure at NVIDIA for five years. In that time I led two major projects: (1) MagLev, NVIDIA's internal AI infrastructure, which powers Drive, NVIDIA's autonomous vehicle software development; and (2) RAPIDS, NVIDIA's open-source data science platform. I talk about MagLev live here, and share some thoughts on the general challenge of building infrastructure for AI development here.

Before joining NVIDIA, I ran a team at Twitter called Cortex Core. The team focused on building a high-leverage machine learning / deep learning platform to power every aspect of the Twitter product (recommendation systems, search, timeline ranking, etc.). My work there centered on (1) developing models of text, images, video, and users, and (2) making these models seamlessly importable as components of user-facing ML systems. Some interesting press on our earlier efforts around media analysis: NSFW content and Live Video classification.

Before working on large-scale ML problems at Twitter, I co-founded a company called Madbits, which focused on image/video classification to enable searchable media collections (whether end users' photo libraries, stock photography, social media, or online media). We sold Madbits to Twitter to kickstart Twitter's ability to understand its media content.

Before that I was a research scientist at the Courant Institute, New York University, for 5 years. During that time I worked towards my PhD (Université Paris-Est), under the supervision of Yann LeCun and Laurent Najman. The title of my thesis is: "Towards Real-Time Image Understanding with Convolutional Networks".

My research focused on artificial vision in general: from the design and understanding of trainable vision systems, to their computation on efficient, low-power hardware. Today, I focus almost entirely on turning that type of research into high-leverage platforms that can transform our industry and create value.

I am @clmt on Twitter. I also occasionally post pictures here.


+ Some press on our work to classify live video content in real-time.

+ Some press on our work to catch bad content on Twitter.

+ We sold Madbits to Twitter!

+ Here's our latest journal paper on scene labeling: PDF, to appear in PAMI.

+ New paper from Martial Hebert's group that implements temporal consistency for video analysis. It uses some results from our scene parsing paper.

+ I gave a talk at the Rowland Institute at Harvard while visiting David Cox's lab. Pretty crazy setup; I was impressed by the parallel rat-based computers :-).

+ I gave a series of tutorials on Torch7 at the IPAM Graduate Summer School (Deep Learning, Feature Learning), jointly with James Bergstra, who talked about Theano. The final set of tutorials has been moved to this page.

+ Our new paper on efficient scene parsing, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, was presented at ICML this year: poster and slides.

+ I gave a talk at Gatsby, London, here are the slides: Real-time Scene Parsing, with multi-scale feature learning and a touch of custom hardware.

+ We won the Optimization Challenge, held at the Transfer Learning and Optimization Workshop (in conjunction with NIPS 2011). Sixin Zhang presented some cool results on SGD, ASGD, weight initialization and weight tying for multi-layer auto-encoders.

+ Yann gave a cool 'discussant' talk at the Non-Parametric Bayesian (NPB) workshop. It includes a concise summary of my work here (my stuff starts getting described at 10:00).

+ Yann and I gave an invited talk at the Big Learn Workshop (held in conjunction with NIPS 2011). The talk was about our latest work on neuFlow, convolutional networks, and some real-time scene parsing/understanding results. Here are my slides.

+ Scaling Up Machine Learning is ready for preorder. Our chapter, Large-Scale FPGA-Based Convolutional Networks, is available here.

+ Press release about neuFlow.