Unsupervised learning of spatiotemporally coherent metrics

2015

Abstract

Current state-of-the-art classification and detection algorithms train deep convolutional networks using labeled data. In this work we study unsupervised feature learning with convolutional networks in the context of temporally coherent unlabeled data. We focus on feature learning from unlabeled video data, using the assumption that adjacent video frames contain semantically similar information. This assumption is exploited to train a convolutional pooling auto-encoder regularized by slowness and sparsity. We establish a connection between slow feature learning and metric learning. Using this connection we define "temporal coherence", a criterion which can be used to select hyper-parameters automatically. In a transfer learning experiment, we show that the resulting encoder can be used to define a more semantically coherent metric without the use of labeled data.
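To make the training objective described above more concrete, the following is a minimal PyTorch sketch of an auto-encoder loss combining reconstruction, a slowness penalty on pooled codes of adjacent frames, and an L1 sparsity penalty on the codes. The ConvAE architecture, the single-layer encoder/decoder, and the weights alpha and beta are illustrative assumptions, not the paper's actual model or hyper-parameters.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAE(nn.Module):
    """Illustrative convolutional pooling auto-encoder (layer sizes are placeholders)."""
    def __init__(self, code_channels=64):
        super().__init__()
        self.encoder = nn.Conv2d(1, code_channels, kernel_size=9, padding=4)
        self.decoder = nn.ConvTranspose2d(code_channels, 1, kernel_size=9, padding=4)

    def forward(self, x):
        z = F.relu(self.encoder(x))      # feature maps (the code)
        p = F.max_pool2d(z, 2)           # pooled code used for the slowness term
        x_hat = self.decoder(z)          # reconstruction of the input frame
        return z, p, x_hat

def coherence_loss(model, x_t, x_t1, alpha=0.5, beta=0.1):
    """Reconstruction of both frames + slowness (pooled codes of adjacent
    frames should stay close) + L1 sparsity on the codes.
    alpha and beta are placeholder regularization weights."""
    z_t, p_t, rec_t = model(x_t)
    z_t1, p_t1, rec_t1 = model(x_t1)
    recon = F.mse_loss(rec_t, x_t) + F.mse_loss(rec_t1, x_t1)
    slowness = (p_t - p_t1).abs().mean()
    sparsity = z_t.abs().mean() + z_t1.abs().mean()
    return recon + alpha * slowness + beta * sparsity

# Usage: x_t and x_t1 are adjacent grayscale video frames of shape (N, 1, H, W);
# the loss is minimized with a standard optimizer over batches of frame pairs.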

Authors

Ross Goroshin
Joan Bruna
Jonathan Tompson
David Eigen
Yann LeCun