COLT 2009 Invited Speakers

Piotr Indyk, MIT.

Sparse recovery using sparse matrices

In recent years, a new *linear* method for compressing high-dimensional data has been discovered. For any high-dimensional vector x, its *sketch* is equal to Ax, where A is a carefully designed m x n matrix (possibly chosen at random). Although typically the sketch length m is much smaller than the number of dimensions n, the sketch often contains enough information to recover a *sparse approximation* to x. At the same time, the linearity of the sketching method is very convenient for many applications, such as data stream computing and compressed sensing.

The major sketching approaches can be classified as either combinatorial (using sparse sketching matrices) or geometric (using dense sketching matrices). Traditionally they have achieved different trade-offs, notably between the compression rate and the running time. In this talk we show that, in a sense, the combinatorial and geometric approaches are based on different manifestations of the same phenomenon. This connection will enable us to obtain several novel algorithms and constructions, including the first known algorithm that provably achieves the optimal measurement bound and near-linear recovery time.
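A minimal illustration of the combinatorial (sparse-matrix) approach described above is a count-sketch-style scheme: each column of the sketching matrix A has a few random ±1 entries, so computing Ax is fast, and each coordinate of a sparse x can be estimated by a median over its counters. The specific parameters, hash construction, and recovery rule below are illustrative assumptions for a small sketch, not the algorithm from the talk:

```python
import random
from statistics import median

random.seed(0)

n, k = 10_000, 5   # ambient dimension and sparsity of the signal
d, w = 9, 200      # repetitions and table width; sketch length m = d*w << n

# Random hash h[r][j] and sign s[r][j] per repetition r (fully random here;
# pairwise-independent hash families would suffice in practice).
h = [[random.randrange(w) for _ in range(n)] for _ in range(d)]
s = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(d)]

# An exactly k-sparse signal x with integer-valued heavy entries.
x = [0] * n
support = random.sample(range(n), k)
for j in support:
    x[j] = random.randrange(10, 100)

# Linear sketch y = Ax: column j of A has d nonzero entries s[r][j] in
# positions h[r][j], so each repetition hashes x into w signed counters.
y = [[0] * w for _ in range(d)]
for j in range(n):
    if x[j]:
        for r in range(d):
            y[r][h[r][j]] += s[r][j] * x[j]

# Recovery: estimate x_j as the median of its d signed counters; collisions
# with other heavy entries corrupt a few counters, and the median rejects them.
est = [median(s[r][j] * y[r][h[r][j]] for r in range(d)) for j in range(n)]
top_k = sorted(range(n), key=lambda j: abs(est[j]), reverse=True)[:k]
```

The sketch uses d*w = 1,800 measurements for a 10,000-dimensional vector, and recovery touches each coordinate's d counters rather than solving a dense linear system, which is the source of the combinatorial approach's speed advantage.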

Piotr Indyk joined MIT in September 2000, after earning a PhD from Stanford University. Earlier, he received a Magister degree from Uniwersytet Warszawski in 1995. As of July 2007, he holds the title of Associate Professor with Tenure in the Department of Electrical Engineering and Computer Science.

Piotr's research interests include computational geometry in high-dimensional spaces; sublinear-time, sublinear-space, and streaming algorithms; and algorithmic coding theory. He is a recipient of the NSF CAREER Award, a Sloan Fellowship, and a Packard Research Fellowship.

Adam Tauman Kalai, Microsoft

Towards Agnostic and Interactive Machine Learning

Agnostic Learning (Kearns, Schapire and Sellie '92; Haussler '90) is a branch of Computational Learning Theory in which few or no assumptions are made about the true function being learned. The results are classification algorithms that tolerate realistic noise and, furthermore, have the asymptotic sample-complexity and computational-efficiency guarantees of Valiant's PAC model. The agnostic perspective sheds light on several traditional learning notions, such as Fourier learning, SVMs, Decision Trees, and Boosting.
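The agnostic setting can be made concrete with a toy example: empirical risk minimization over one-dimensional threshold classifiers, on labels that no threshold fits perfectly. The learner makes no assumption about the labeling process; it simply competes with the best hypothesis in its class. The data distribution, noise rate, and candidate grid below are illustrative assumptions:

```python
import random

random.seed(1)

# Labels are a noisy, unknown function of x: a threshold at 0.5 with 20% of
# labels flipped, so no threshold classifier achieves zero error.
m = 500
xs = [random.random() for _ in range(m)]
ys = [(1 if x > 0.5 else -1) * (-1 if random.random() < 0.2 else 1)
      for x in xs]

def err(theta):
    """Empirical error of the threshold classifier h(x) = sign(x - theta)."""
    return sum((1 if x > theta else -1) != y for x, y in zip(xs, ys)) / m

# Agnostic ERM: over a grid of candidate thresholds, return the hypothesis
# with the fewest mistakes -- whatever the data look like.
candidates = [i / 100 for i in range(101)]
best = min(candidates, key=err)
```

By construction, the returned hypothesis is at least as accurate on the sample as any candidate threshold, including the "true" one at 0.5; the agnostic guarantee is exactly this kind of relative statement, rather than a promise of small absolute error.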

On many "hard" classification problems, machine learning is stuck, whether it be for computational or information-theoretic reasons. Agnostic learning models both of these problems. Furthermore, we argue that agnostic learning is a good model for going beyond the current reaches of machine learning accuracy by using interaction with humans as a means to sidestep computationally intractable problems.

Adam Tauman Kalai is a Senior Researcher and Founding Member of Microsoft Research New England. Kalai received his BA (1996) from Harvard, and his MA (1998) and PhD (2001) from CMU under the supervision of Avrim Blum. After an NSF postdoctoral fellowship at MIT with Santosh Vempala, he served as an assistant professor at the Toyota Technological Institute at Chicago and at Georgia Tech. His honors include an NSF CAREER award and an Alfred P. Sloan fellowship. His research focuses on computational learning theory, game theory, algorithms, and online optimization.