
I-X Research Presentations: Ryan Cory-Wright

Key Details:

Time: 15:30–16:30
Date: Thursday, 12 December 2024
Location: In Person | I-X Conference Room | Level 5
Translation and Innovation Hub (I-HUB)
Imperial White City Campus
84 Wood Lane
London W12 0BZ


Speaker

Ryan Cory-Wright

Dr Ryan Cory-Wright is an Assistant Professor of Analytics and Operations at Imperial College Business School, affiliated with Imperial-X. His research focuses on optimization, machine learning, and statistics, and their applications in business analytics and renewable energy. Ryan was previously a Herman Goldstine postdoctoral fellow at IBM Research, and obtained his PhD from MIT’s Operations Research Center. He has published in journals including Operations Research, Mathematical Programming, M&SOM, and the Journal of Machine Learning Research, and is a recipient of awards including the M&SOM practice-based research competition, the INFORMS Nicholson Prize, and the INFORMS Pierskalla Award.

Talk Title

Sparse PCA with Multiple Components

Talk Summary

Sparse Principal Component Analysis (sPCA) is a cardinal technique for obtaining combinations of features, or principal components (PCs), that explain the variance of high-dimensional datasets in an interpretable manner. This involves solving a sparsity- and orthogonality-constrained convex maximization problem, which is extremely computationally challenging. Most existing works address sparse PCA via methods, such as iteratively computing one sparse PC and deflating the covariance matrix, that do not guarantee the orthogonality, let alone the optimality, of the resulting solution when we seek multiple mutually orthogonal PCs.

We challenge this status quo by reformulating the orthogonality conditions as rank constraints and optimizing over the sparsity and rank constraints simultaneously. We design tight semidefinite relaxations to supply high-quality upper bounds. We exploit these relaxations and bounds to propose exact methods and rounding mechanisms that, together, obtain solutions with a bound gap on the order of 0%–15% for real-world datasets with p in the hundreds or thousands of features and r ∈ {2, 3} components.

Numerically, our algorithms match (and sometimes surpass) the best-performing methods in terms of the fraction of variance explained, and systematically return PCs that are sparse and orthogonal. In contrast, we find that existing methods like deflation return solutions that systematically violate the orthogonality constraints, even when the data is generated according to sparse orthogonal PCs. Altogether, our approach solves sparse PCA problems with multiple components to certifiable (near) optimality in a practically tractable fashion. This is joint work with Jean Pauphilet (London Business School).
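In standard notation (which may differ from the speakers' exact formulation), the multi-component problem amounts to maximizing trace(UᵀΣU) over matrices U ∈ ℝ^{p×r}, subject to UᵀU = I_r and a cardinality limit on each column of U. To make the orthogonality issue concrete, the short Python sketch below, which is purely illustrative and not the speakers' algorithm, follows the route the abstract critiques: it computes r sparse PCs one at a time with a truncated power method and Hotelling's deflation, then measures how far the resulting loadings are from orthogonal. The function names, truncation rule, and synthetic data are assumptions made for this example.

import numpy as np

def truncated_power_pc(sigma, k, n_iter=200, seed=0):
    # One sparse loading vector via a simple truncated power method (illustrative only).
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(sigma.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = sigma @ x
        # Keep the k largest-magnitude entries, zero out the rest, then renormalise.
        y[np.argsort(np.abs(y))[:-k]] = 0.0
        x = y / np.linalg.norm(y)
    return x

def deflation_sparse_pca(sigma, r, k):
    # r sparse PCs computed one at a time with Hotelling's deflation:
    # the multi-component heuristic the abstract argues against.
    pcs, s = [], sigma.copy()
    for _ in range(r):
        x = truncated_power_pc(s, k)
        pcs.append(x)
        s = s - (x @ s @ x) * np.outer(x, x)
    return np.column_stack(pcs)

# Synthetic covariance with p = 100 features (hypothetical data, for illustration only).
rng = np.random.default_rng(1)
data = rng.standard_normal((500, 100))
sigma = np.cov(data, rowvar=False)

U = deflation_sparse_pca(sigma, r=3, k=10)
# Deflation does not enforce UᵀU = I, so the off-diagonal entries are generally nonzero.
print("orthogonality violation ||UᵀU - I||_F =", np.linalg.norm(U.T @ U - np.eye(3)))

The printed deviation of UᵀU from the identity is generally nonzero, which is the failure mode the talk's rank-constrained formulation and semidefinite relaxations are designed to rule out.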

More Events

5 December: In this talk, Dr Kanta Dihal explores differences in cultural approaches towards AI.

8 January: In his Inaugural Lecture, Professor Hamed Haddadi discusses his academic journey towards building networked systems.