Tuesday, June 12, 2007

Compressed Sensing: Universal Dimensionality Reduction, Pattern Matching and a Biology Inspired Faster Reconstruction Algorithm

[March 2, 2010: Wired/Slashdot readers, you may want to read this dedicated blog entry first!]

Mike Wakin and Richard Baraniuk have improved the initial draft of Random Projections of Smooth Manifolds, and they now make explicit the point that Compressed Sensing should be seen as a Universal Dimensionality Reduction tool. While they write about how the technique can help in the rapid computation of other schemes such as ISOMAP or MVU, it also propels the technique into the world of Machine Learning. There are several advantages to using this technique for manifold discovery, not the least of which is the ability to avoid computing pairwise distances in the ambient space first. Another aspect that will certainly be investigated is whether the approach of Shihao Ji, Ya Xue, and Lawrence Carin on Bayesian compressive sensing could be used to discover the dimension of the manifold. In effect, most current dimensionality reduction techniques cannot do this, since the dimension of the manifold is left as a parameter to be estimated through fitting. One of the most significant ideas to take from the paper is that the manifold itself dictates how few elements are needed to describe the data. This is a new and tighter result in the world of randomized dimensionality reduction:

Additionally, our results suggest that, for processing large volumes of data concentrated on manifolds, the number of requisite dimensions for a structure-preserving mapping should derive from the properties of the manifold itself, rather than the number of data points (in contrast to the JL lemma).
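To make the idea concrete, here is a minimal sketch (my own, not the authors' code) of the pipeline this suggests: randomly project data lying on a low-dimensional manifold, then run ISOMAP in the projected space rather than in the ambient one. It uses scikit-learn; the ambient and projected dimensions are illustrative assumptions, not values from the paper.

import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)

# A 2-D swiss roll in R^3, linearly padded into a 100-dimensional
# ambient space so the random projection has something to compress.
X3, t = make_swiss_roll(n_samples=1000, random_state=0)
ambient_dim = 100
X = X3 @ rng.standard_normal((3, ambient_dim))

# Random Gaussian projection: the target dimension m is set by the
# manifold's properties (dimension, volume, curvature), not by n.
m = 15  # assumed sufficient for this toy manifold
Phi = rng.standard_normal((ambient_dim, m)) / np.sqrt(m)
Y = X @ Phi

# ISOMAP on the m-dimensional projections recovers the 2-D structure,
# and its pairwise-distance computations now run in R^15, not R^100.
embedding = Isomap(n_components=2).fit_transform(Y)
print(embedding.shape)  # (1000, 2)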

Indeed, the Johnson-Lindenstrauss (JL) Lemma has been known since 1984, and while a Fast Transform has been developed for it [2], its bound depends on the number of samples rather than on the manifold from which those samples are drawn. To put it in a different light: ever since JL, random projections have been a subject of much interest, but because the bound on the minimum number of dimensions grows with the number of points, one still faces what people call the curse of dimensionality (albeit a small version of it). What the compressed sensing framework and this paper bring to the table is that there is a limit on the number of random projections you need to compute (or acquire) in order to capture all the information about the object you are studying, and that limit is not tied to the size of the sample set. This is big.
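Here is a quick numerical check (my own sketch) of the JL side of that contrast, with the k = O(log n / eps^2) target dimension that grows with the number of points; the constant 4 is a common, if loose, choice:

import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
n, d, eps = 500, 10_000, 0.25
k = int(np.ceil(4 * np.log(n) / eps**2))  # JL-style bound: depends on n, not d

X = rng.standard_normal((n, d))
Phi = rng.standard_normal((d, k)) / np.sqrt(k)  # scaled Gaussian projection
Y = X @ Phi

# All pairwise distances should be preserved to within roughly (1 +/- eps).
ratios = pdist(Y) / pdist(X)
print(k, ratios.min(), ratios.max())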

Another item caught my limited attention on the Rice Compressed Sensing page [1]. It is an article by Jong Chul Ye on Compressed sensing shape estimation of star-shaped objects in Fourier imaging. The reason it caught my eye is that one could begin to use Compressed Sensing as part of a pattern matching algorithm for the rapid detection of extraterrestrial objects or flower constellations.
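As a reminder of what the underlying machinery looks like, here is a generic sketch (not Jong Chul Ye's star-shaped method) that recovers a sparse signal from a random subset of its Fourier coefficients with iterative soft thresholding; all sizes and the regularization weight lam are my own illustrative choices.

import numpy as np

rng = np.random.default_rng(2)
n, s, m = 256, 8, 64  # signal length, sparsity, number of Fourier samples

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

idx = rng.choice(n, m, replace=False)      # randomly sampled frequencies
y = np.fft.fft(x_true, norm="ortho")[idx]  # partial Fourier measurements

def A(x):  # measurement operator: FFT restricted to the sampled frequencies
    return np.fft.fft(x, norm="ortho")[idx]

def At(z):  # its adjoint: zero-fill the spectrum, then inverse FFT
    full = np.zeros(n, dtype=complex)
    full[idx] = z
    return np.fft.ifft(full, norm="ortho")

x, lam = np.zeros(n), 0.02
for _ in range(300):  # ISTA: gradient step on the data fit, then shrink
    g = x - At(A(x) - y).real
    x = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small relative error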

Compressed Sensing seems to be getting more traction in the MRI field, with an entire session at a meeting dedicated to it. For more information, check out HYPR (HighlY constrained back PRojection) and Compressed Sensing.

My attention and writings are limited these days because of the upcoming DARPA site visit for our vehicle, but I am sure I will come back and write something more in-depth about the very interesting paper by Christopher Rozell, Don Johnson, Richard Baraniuk and Bruno Olshausen on Locally Competitive Algorithms for Sparse Approximation. It is interesting because, for the first time, a scheme is devised that solves the sparse approximation problem through dynamics evolving in time, and it has the potential to imitate the visual cortex and produce new codecs.
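In the meantime, here is my own Euler-integrated sketch of the locally competitive dynamics the paper describes: each coefficient node charges toward its driving input, actively coding nodes inhibit the neighbors whose dictionary atoms overlap with theirs, and thresholding the internal states yields the sparse code. The constants below are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(3)
n, m, s = 50, 200, 5  # signal dimension, dictionary size, sparsity

Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)  # unit-norm dictionary atoms
a_true = np.zeros(m)
a_true[rng.choice(m, s, replace=False)] = 2.0 + rng.standard_normal(s)
y = Phi @ a_true

lam, tau, dt = 0.1, 10.0, 1.0  # threshold, time constant, Euler step
b = Phi.T @ y                  # constant driving input
G = Phi.T @ Phi - np.eye(m)    # lateral inhibition weights

u = np.zeros(m)  # internal (membrane-like) states
for _ in range(500):  # integrate the LCA ODE with forward Euler
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # thresholded outputs
    u += (dt / tau) * (b - u - G @ a)  # leaky charging plus competition
print(np.linalg.norm(a - a_true) / np.linalg.norm(a_true))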

References:
[1] Rice Compressed Sensing page.
[2] Nir Ailon and Bernard Chazelle, Approximate Nearest Neighbors and the Fast Johnson-Lindenstrauss Transform.
