Wednesday, October 06, 2010

CS: Clouds, Concentration of Measure for Block Diagonal Matrices, Deterministic Compressed Sensing Matrices, some meetings

To answer yesterday's question: clouds. The probability of flying 100,000 feet up in the air and seeing only clouds is pretty high. This is the reason why I ain't talking about using NIR or SWIR to see through clouds, but rather asking: what if clouds could be used as lenses?

Oh well, today we have several papers and some announcements about meetings. First, I thought I had seen the following paper back in March, but Michael Wakin tells me how it differs from the previous installment:
This paper significantly expands upon our previous ICASSP and CISS publications. Section 3 contains a more general analysis for Distinct Block Diagonal (DBD) matrices and explains a connection between our concentration bounds for DBD matrices and Repeated Block Diagonal (RBD) matrices. Sections 4 and 5 of this paper contain almost all new material, including a list of signal families that have favorable properties for measurement via block diagonal matrices, and a discussion of connections with the compressive channel sensing problem.
Theoretical analysis of randomized, compressive operators often depends on a concentration of measure inequality for the operator in question. Typically, such inequalities quantify the likelihood that a random matrix will preserve the norm of a signal after multiplication. When this likelihood is very high for any signal, the random matrices have a variety of known uses in dimensionality reduction and Compressive Sensing. Concentration of measure results are well-established for unstructured compressive matrices, populated with independent and identically distributed (i.i.d.) random entries. Many real-world acquisition systems, however, are subject to architectural constraints that make such matrices impractical. In this paper we derive concentration of measure bounds for two types of block diagonal compressive matrices, one in which the blocks along the main diagonal are random and independent, and one in which the blocks are random but equal. For both types of matrices, we show that the likelihood of norm preservation depends on certain properties of the signal being measured, but that for the best case signals, both types of block diagonal matrices can offer concentration performance on par with their unstructured, i.i.d. counterparts. We support our theoretical results with illustrative simulations as well as (analytical and empirical) investigations of several signal classes that are highly amenable to measurement using block diagonal matrices. Finally, we discuss applications of these results in establishing performance guarantees for solving signal processing tasks in the compressed domain (e.g., signal detection), and in establishing the Restricted Isometry Property for the Toeplitz matrices that arise in compressive channel sensing.
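
For readers who want a feel for the two ensembles, here is a minimal numpy sketch of my own (not the authors' code; the block sizes are arbitrary choices) of a Distinct Block Diagonal (DBD) matrix, where each diagonal block is drawn independently, and a Repeated Block Diagonal (RBD) matrix, where one random block is repeated. It simply checks empirically how well ||Phi x||^2 concentrates around ||x||^2 for a fixed unit-norm signal.

```python
# Minimal sketch (not the authors' code): concentration of ||Phi x||^2 around
# ||x||^2 for Distinct Block Diagonal (DBD) vs Repeated Block Diagonal (RBD)
# Gaussian matrices. Block sizes below are arbitrary choices of mine.
import numpy as np

rng = np.random.default_rng(0)
J, m, n = 16, 8, 32                      # J diagonal blocks, each of size m x n
x = rng.standard_normal(J * n)
x /= np.linalg.norm(x)                   # fixed unit-norm test signal

def dbd_blocks():
    # independent Gaussian blocks, entries N(0, 1/m) so that E||Phi x||^2 = ||x||^2
    return [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(J)]

def rbd_blocks():
    # one Gaussian block repeated along the diagonal
    B = rng.standard_normal((m, n)) / np.sqrt(m)
    return [B] * J

def apply_blockdiag(blocks, x):
    # multiply by the block diagonal matrix without ever forming it
    return np.concatenate([B @ x[j * n:(j + 1) * n] for j, B in enumerate(blocks)])

for name, make in [("DBD", dbd_blocks), ("RBD", rbd_blocks)]:
    sq_norms = [np.linalg.norm(apply_blockdiag(make(), x)) ** 2 for _ in range(2000)]
    print(name, "mean:", np.mean(sq_norms), "std:", np.std(sq_norms))
```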

Compressed sensing is a novel technique where one can recover sparse signals from undersampled measurements. In this correspondence, a $K \times N$ measurement matrix for compressed sensing is deterministically constructed via additive character sequences. The Weil bound is then used to show that the matrix has asymptotically optimal coherence for $N=K^2$, and to present a sufficient condition on the sparsity level for unique sparse recovery. Also, the restricted isometry property (RIP) is statistically studied for the deterministic matrix. Using additive character sequences with small alphabets, the compressed sensing matrix can be efficiently implemented by linear feedback shift registers. Numerical results show that the deterministic compressed sensing matrix guarantees reliable matching pursuit recovery performance for both noiseless and noisy measurements.
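
As a reminder of the figure of merit involved, here is a tiny numpy sketch (illustrative only: it uses a random matrix as a stand-in, not the paper's additive character construction) that computes the mutual coherence of a K x N matrix with unit-norm columns and compares it with the Welch lower bound sqrt((N-K)/(K(N-1))), which the deterministic construction approaches when N = K^2.

```python
# Illustrative sketch only: coherence of a random stand-in matrix vs. the Welch
# bound. The paper's matrices are deterministic; this just shows the quantity
# that the construction optimizes.
import numpy as np

def mutual_coherence(A):
    A = A / np.linalg.norm(A, axis=0)    # normalize columns
    G = np.abs(A.T @ A)                  # absolute inner products between columns
    np.fill_diagonal(G, 0.0)
    return G.max()

K = 32
N = K ** 2
A = np.random.default_rng(1).standard_normal((K, N))
welch = np.sqrt((N - K) / (K * (N - 1)))
print("coherence of random matrix:", mutual_coherence(A))
print("Welch lower bound         :", welch)
```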

Compressed sensing is an emerging field dealing with the reconstruction of a sparse or, more precisely, a compressed representation of a signal from a relatively small number of observations, typically less than the signal dimension. In our previous work we have shown how the Kalman filter can be naturally applied for obtaining an approximate Bayesian solution for the compressed sensing problem. The resulting algorithm, which was termed CSKF, relies on a pseudo-measurement technique for enforcing the sparseness constraint. Our approach raises two concerns which are addressed in this paper. The first one refers to the validity of our approximation technique. In this regard, we provide a rigorous treatment of the CSKF algorithm which is concluded with an upper bound on the discrepancy between the exact (in the Bayesian sense) and the approximate solutions. The second concern refers to the computational overhead associated with the CSKF in large scale settings. This problem is alleviated here using an efficient measurement update scheme based on a Krylov subspace method.
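
For those who have not seen the pseudo-measurement trick, here is a rough numpy sketch of my own simplified reading of it (not the authors' CSKF code; noise levels and iteration counts are arbitrary): after the usual Kalman update with the real measurements, the sparseness constraint is enforced by iterating a fictitious scalar measurement 0 = sign(x)^T x + noise. It is only meant to show the mechanics, not the tuning.

```python
# Rough sketch of the pseudo-measurement idea (my simplified reading, arbitrary
# tuning): a standard Kalman update with the real measurements, followed by
# repeated updates with the fictitious measurement 0 = sign(x)^T x + noise.
import numpy as np

def kalman_update(x, P, H, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

rng = np.random.default_rng(2)
n, m, k = 50, 20, 3
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
H = rng.standard_normal((m, n))
y = H @ x_true + 0.01 * rng.standard_normal(m)

x, P = np.zeros(n), np.eye(n)
x, P = kalman_update(x, P, H, y, 1e-4 * np.eye(m))            # real measurement update
print("error after measurement update :", np.linalg.norm(x - x_true))
for _ in range(50):                                           # pseudo-measurement iterations
    Hpm = np.sign(x).reshape(1, -1)
    x, P = kalman_update(x, P, Hpm, np.zeros(1), np.array([[0.1]]))
print("error after pseudo-measurements:", np.linalg.norm(x - x_true))
```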

Derandomization and Group Testing by Mahdi Cheraghchi. The abstract reads:
The rapid development of derandomization theory, which is a fundamental area in theoretical computer science, has recently led to many surprising applications outside its initial intention. We will review some recent such developments related to combinatorial group testing. In its most basic setting, the aim of group testing is to identify a set of "positive" individuals in a population of items by taking groups of items and asking whether there is a positive in each group. In particular, we will discuss explicit constructions of optimal or nearly-optimal group testing schemes using "randomness-conducting" functions. Among such developments are constructions of error-correcting group testing schemes using randomness extractors and condensers, as well as threshold group testing schemes from lossless condensers.
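
To make the basic setting concrete, here is a toy numpy sketch (a generic random pooling design with the naive decoder, nothing from the paper itself): a pool tests positive iff it contains at least one positive item, and an item is declared positive only if every pool containing it tested positive.

```python
# Toy non-adaptive group testing sketch (generic, not from the paper): random
# pooling design, noiseless outcomes, and the naive "clear anything that shows
# up in a negative pool" decoder.
import numpy as np

rng = np.random.default_rng(3)
n_items, n_tests, n_positive = 200, 60, 3
pools = (rng.random((n_tests, n_items)) < 0.05).astype(int)   # random pooling design
truth = np.zeros(n_items, dtype=bool)
truth[rng.choice(n_items, n_positive, replace=False)] = True

outcomes = (pools @ truth) > 0                 # a pool is positive iff it hits a positive
candidates = np.ones(n_items, dtype=bool)      # start by suspecting everyone
for t in range(n_tests):
    if not outcomes[t]:
        candidates[pools[t] == 1] = False      # items in a negative pool are cleared
print("declared positives:", np.flatnonzero(candidates))
print("true positives    :", np.flatnonzero(truth))
```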


Interference Alignment as a Rank Constrained Rank Minimization by Dimitris S. Papailiopoulos, Alexandros G. Dimakis. The abstract reads:
We show that the maximization of the sum degrees-of-freedom for the static flat-fading multiple-input multiple-output (MIMO) interference channel is equivalent to a rank constrained rank minimization (RCRM) problem, when the signal spaces span all available dimensions. The rank minimization corresponds to maximizing interference alignment (IA) such that interference spans the lowest dimensional subspace possible. The rank constraints account for the useful signal spaces spanning all available spatial dimensions. In this way, we reformulate all IA requirements as requirements involving ranks. Then, we present a convex relaxation of the RCRM problem inspired by recent results in compressed sensing and low-rank matrix completion theory that rely on approximating rank with the nuclear norm. We show that the convex envelope of the sum of ranks of the interference matrices is the sum of their corresponding nuclear norms and introduce tractable constraints that are asymptotically equivalent to the rank constraints for the initial problem. We also show that our heuristic relaxation can be tuned to the multi-cell interference channel. Furthermore, we experimentally show that the proposed algorithm outperforms previous approaches for finding precoding and zero-forcing matrices for interference alignment.
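
The workhorse behind such relaxations is the nuclear norm as the convex surrogate for rank. The numpy fragment below (a generic illustration with made-up sizes, not the paper's IA precoder design) shows the corresponding proximal step, soft-thresholding of singular values, which drives a noisy matrix back toward low rank.

```python
# Generic illustration of the nuclear norm acting as a convex surrogate for
# rank: its proximal operator soft-thresholds the singular values.
import numpy as np

def nuclear_prox(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(4)
L = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 8))   # rank-3 matrix
M = L + 0.05 * rng.standard_normal((8, 8))                      # noisy observation
X = nuclear_prox(M, tau=0.5)
print("rank of prox output:", np.linalg.matrix_rank(X, tol=1e-6))
print("nuclear norm       :", np.linalg.norm(X, "nuc"))
```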

Regularizers for Structured Sparsity by: Charles A. Micchelli, Jean M. Morales, Massimiliano Pontil. The abstract reads:
We study the problem of learning a sparse linear regression vector under additional conditions on the structure of its sparsity pattern. This problem is relevant in machine learning, statistics and signal processing. It is well known that a linear regression can benefit from the knowledge that the underlying regression vector is sparse. The combinatorial problem of selecting the nonzero components of this vector can be ``relaxed'' by regularizing the squared error with a convex penalty function like the $\ell_1$ norm. However, in many applications, additional conditions on the structure of the regression vector and its sparsity pattern are available. Incorporating this information into the learning method may lead to a significant decrease of the estimation error. In this paper, we present a family of convex penalty functions, which encode prior knowledge on the structure of the vector formed by the absolute values of the regression coefficients. This family subsumes the $\ell_1$ norm and is flexible enough to include different models of sparsity patterns, which are of practical and theoretical importance. We establish the basic properties of these penalty functions and discuss some examples where they can be computed explicitly. Moreover, we present a convergent optimization algorithm for solving regularized least squares with these penalty functions. Numerical simulations highlight the benefit of structured sparsity and the advantage offered by our approach over the Lasso method and other related methods.
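
A familiar concrete instance of such structured penalties is the group-lasso penalty, the sum over groups of the Euclidean norms of the corresponding sub-vectors; it reduces to the $\ell_1$ norm for singleton groups. The sketch below (a generic example of mine, not the specific family constructed in the paper) evaluates this penalty and its proximal operator, which zeroes out whole groups at once.

```python
# Generic structured-sparsity example (not the paper's penalty family): the
# group-lasso penalty sum_g ||x_g||_2 and its proximal operator.
import numpy as np

def group_penalty(x, groups):
    return sum(np.linalg.norm(x[g]) for g in groups)

def group_prox(x, groups, tau):
    out = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= tau else (1 - tau / nrm) * x[g]   # shrink or kill the group
    return out

x = np.array([3.0, -0.1, 0.2, 2.0, 2.0, 0.05])
groups = [np.array([0]), np.array([1, 2]), np.array([3, 4, 5])]
print("penalty:", group_penalty(x, groups))
print("prox   :", group_prox(x, groups, tau=0.5))
```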

Real-time Robust Principal Components' Pursuit by Chenlu Qiu, Namrata Vaswani. The abstract reads:
In the recent work of Candes et al., the problem of recovering a low rank matrix corrupted by i.i.d. sparse outliers is studied and a very elegant solution, principal component pursuit, is proposed. It is motivated as a tool for video surveillance applications with the background image sequence forming the low rank part and the moving objects/persons/abnormalities forming the sparse part. Each image frame is treated as a column vector of the data matrix made up of a low rank matrix and a sparse corruption matrix. Principal component pursuit solves the problem under the assumptions that the singular vectors of the low rank matrix are spread out and the sparsity pattern of the sparse matrix is uniformly random. However, in practice, usually the sparsity pattern and the signal values of the sparse part (moving persons/objects) change in a correlated fashion over time, e.g., the object moves slowly and/or with roughly constant velocity. This will often result in a low rank sparse matrix.
For video surveillance applications, it would be much more useful to have a real-time solution. In this work, we study the online version of the above problem and propose a solution that automatically handles correlated sparse outliers. The key idea of this work is as follows. Given an initial estimate of the principal directions of the low rank part, we causally keep estimating the sparse part at each time by solving a noisy compressive sensing type problem. The principal directions of the low rank part are updated every-so-often. In between two update times, if new Principal Components' directions appear, the "noise" seen by the Compressive Sensing step may increase. This problem is solved, in part, by utilizing the time correlation model of the low rank part. We call the proposed solution "Real-time Robust Principal Components' Pursuit".
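
Here is a bare-bones numpy sketch of my reading of that idea (not the authors' algorithm; subspace dimension, sparsity level and step sizes are made up): given an estimate U of the background's principal directions, each new frame is projected onto the orthogonal complement of U and the sparse foreground is recovered with a few iterations of iterative soft thresholding on the resulting compressive-sensing-type problem.

```python
# Bare-bones sketch (my simplified reading, made-up sizes): recover the sparse
# part of a frame y = U a + s + noise given an estimated background subspace U,
# by projecting out U and running iterative soft thresholding (ISTA).
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(5)
n, r = 400, 5
U, _ = np.linalg.qr(rng.standard_normal((n, r)))     # estimated background subspace
s_true = np.zeros(n)
s_true[rng.choice(n, 10, replace=False)] = 5.0        # sparse foreground
y = U @ rng.standard_normal(r) + s_true + 0.01 * rng.standard_normal(n)

P = np.eye(n) - U @ U.T                               # projector onto U's complement
b = P @ y                                             # "compressive" measurement of s
s = np.zeros(n)
for _ in range(200):                                  # ISTA on 0.5||P s - b||^2 + lam ||s||_1
    s = soft(s - P.T @ (P @ s - b), 0.05)
print("foreground support recovered:",
      np.array_equal(np.flatnonzero(np.abs(s) > 1.0), np.flatnonzero(s_true)))
```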

Sparse data models, where data is assumed to be well represented as a linear combination of a few elements from a dictionary, have gained considerable attention in recent years, and their use has led to state-of-the-art results in many signal and image processing tasks. It is now well understood that the choice of the sparsity regularization term is critical in the success of such models. Based on a codelength minimization interpretation of sparse coding, and using tools from universal coding theory, we propose a framework for designing sparsity regularization terms which have theoretical and practical advantages when compared to the more standard $\ell_0$ or $\ell_1$ ones. The presentation of the framework and theoretical foundations is complemented with examples that show its practical advantages in image denoising, zooming and classification.
Structured Sparse Principal Component Analysis by Rodolphe Jenatton, Guillaume Obozinski, Francis Bach. The abstract reads:
We present an extension of sparse PCA, or sparse dictionary learning, where the sparsity patterns of all dictionary elements are structured and constrained to belong to a prespecified set of shapes. This structured sparse PCA is based on a structured regularization recently introduced by Jenatton et al. (2009). While classical sparse priors only deal with cardinality, the regularization we use encodes higher-order information about the data. We propose an efficient and simple optimization procedure to solve this problem. Experiments  with two practical tasks, the denoising of sparse structured signals and face recognition, demonstrate the benefits of the proposed structured approach over unstructured approaches.

Network Flow Algorithms for Structured Sparsity by Julien Mairal, Rodolphe Jenatton, Guillaume Obozinski, Francis Bach. The abstract reads:
We consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of $\ell_\infty$-norms over groups of variables. Whereas a lot of effort has been put in developing fast optimization methods when the groups are disjoint or embedded in a specific hierarchical structure, we address here the case of general overlapping groups. To this end, we show that the corresponding optimization problem is related to network flow optimization. More precisely, the proximal problem associated with the norm we consider is dual to a quadratic min-cost flow problem. We propose an efficient procedure which computes its solution exactly in polynomial time. Our algorithm scales up to millions of variables, and opens up a whole new range of applications for structured sparse models. We present several experiments on image and video data, demonstrating the applicability and scalability of our approach for various problems.
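
The elementary building block here is the proximal operator of a single $\ell_\infty$ norm, which by Moreau decomposition is the residual of a Euclidean projection onto an $\ell_1$ ball; the contribution of the paper is to handle sums of such terms over overlapping groups through a quadratic min-cost flow. Below is a small numpy sketch of that single-group building block (generic, not the network-flow algorithm itself).

```python
# Generic single-group building block (not the paper's network-flow algorithm):
# prox of tau * ||.||_inf via Moreau decomposition, i.e. v minus its Euclidean
# projection onto the l1 ball of radius tau.
import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection of v onto {x : ||x||_1 <= radius}
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - radius) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, tau):
    # prox_{tau ||.||_inf}(v) = v - projection of v onto the l1 ball of radius tau
    return v - project_l1_ball(v, tau)

v = np.array([3.0, -2.5, 0.4, 1.0])
print(prox_linf(v, tau=2.0))
```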

While waiting for the videos of the LVA meeting, I found the following list of meetings related to sparsity here.
And here is another one:
Spring-Summer School
June 20-21-22, 2011
Random matrices - Stochastic geometry - Compressed sensing


Speakers

* Jared Tanner (U. Edinburgh, UK)
Stochastic Geometry and Random Matrix Theory in Compressed Sensing
Stochastic geometry and random matrix theory can be used to model and analyse the efficacy of a new paradigm in signal processing, compressed sensing. This new perspective shows that if a signal is sufficiently simple, it can be acquired at a rate proportional to its information content rather than the worst case rate dictated by a larger ambient space containing it. These lectures will show how this phenomenon can be modelled using stochastic geometry, and will also show how standard eigen-analysis in random matrix theory can give similar results.
* Roman Vershynin (U. Michigan, USA)
Introduction to the non-asymptotic analysis of random matrices
This is a mini-course on basic non-asymptotic methods and concepts in random matrix theory. We will develop several tools for the analysis of the extreme singular values of random matrices with independent rows or columns. Many of these methods sprung off from the development of geometric functional analysis in the 1970-2000's. They have applications in several fields, most notably in theoretical computer science, statistics and signal processing. Two applications will be discussed: for the problem of estimating covariance matrices in statistics, and for validating probabilistic constructions of measurement matrices in compressed sensing.
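
As a quick numerical sanity check of the objects in this course (mine, with arbitrary dimensions): for an N x n Gaussian matrix with i.i.d. entries, the extreme singular values concentrate around sqrt(N) - sqrt(n) and sqrt(N) + sqrt(n).

```python
# Quick check (arbitrary dimensions) of the non-asymptotic rule of thumb for
# the extreme singular values of an N x n Gaussian matrix.
import numpy as np

rng = np.random.default_rng(6)
N, n = 2000, 200
A = rng.standard_normal((N, n))
s = np.linalg.svd(A, compute_uv=False)
print("observed  smallest/largest:", s[-1], s[0])
print("predicted smallest/largest:", np.sqrt(N) - np.sqrt(n), np.sqrt(N) + np.sqrt(n))
```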

Location and period

Institut Henri Poincaré, Paris, from June 20 to June 22, 2011. Room 201, 9:00am to 5:30pm.

Audience

This school is addressed to non-specialists: participation of postdocs and PhD students is strongly encouraged.

Registration

The number of participants is limited to 30.

* Registration form
* List of participants

Organizers

D. Chafaï & A. Pajor (U. Paris-Est Marne-la-Vallée), A. Tsybakov (U. Pierre et Marie Curie Paris VI & CREST)
