Monday, October 18, 2010

CS: The long entry of the week.

 Here are the papers and preprints that popped up in the past few days; some are connected directly to compressive sensing, others... not so much.

Compressed sensing (CS) samples signals at a much lower rate than the Nyquist rate if they are sparse in some basis. In this paper, the CS methodology is applied to sinusoidally modeled audio signals. As this model is sparse by definition in the frequency domain (being equal to the sum of a small number of sinusoids), we investigate whether CS can be used to encode audio signals at low bitrates. In contrast to encoding the sinusoidal parameters (amplitude, frequency, phase) as current state-of-the-art methods do, we propose encoding a few randomly selected samples of the time-domain description of the sinusoidal component (per signal segment). The potential of applying compressed sensing both to single-channel and multichannel audio coding is examined. The listening test results are encouraging, indicating that the proposed approach can achieve comparable performance to that of state-of-the-art methods. Given that CS can lead to novel coding systems where the sampling and compression operations are combined into one low complexity step, the proposed methodology can be considered as an important step towards applying the CS framework to audio coding applications.
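
For the curious, here is a minimal sketch of the encode/decode idea in Python/NumPy (all parameter values, the DCT basis, and the ISTA decoder are illustrative choices, not the paper's actual codec): a segment that is sparse in a frequency basis is encoded by keeping a few random time-domain samples, and decoded with a basic l1 solver.

import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5          # segment length, kept samples, number of sinusoids

# Sparse DCT coefficients -> time-domain "sinusoidal component"
c_true = np.zeros(n)
c_true[rng.choice(n, k, replace=False)] = rng.normal(0, 10, k)
Psi = idct(np.eye(n), axis=0, norm='ortho')   # DCT synthesis basis
x = Psi @ c_true

# Encode: keep m randomly selected time-domain samples (per segment)
idx = np.sort(rng.choice(n, m, replace=False))
y, Phi = x[idx], Psi[idx, :]

# Decode with ISTA, a basic iterative soft-thresholding l1 solver
lam, L = 0.1, np.linalg.norm(Phi, 2) ** 2
c = np.zeros(n)
for _ in range(500):
    g = c - (Phi.T @ (Phi @ c - y)) / L
    c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)

print("coefficient error:", np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
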
Compressed Sensing for OFDM UWB Systems by Tanish Agrawal, Vishwas Lakkundi, Anthony Griffin, and Panagiotis Tsakalides. The abstract reads:
This paper considers compressed sensing (CS) techniques for signal reconstruction and channel estimation in OFDM-based high-rate ultra wideband (UWB) communication systems. We employ a parallel CS structure that exploits frequency domain sparsity. We also consider multipath UWB channels in both the line-of-sight and non line-of-sight environments. UWB signal detection and channel estimation from sub-Nyquist analog projections are carried out using an optimized orthogonal matching pursuit algorithm and the smoothed l_0 algorithm. Simulation results demonstrate significant gains in the form of reliable signal recovery and channel estimation, as well as dramatically sub-Nyquist sampling rates for the analog-to-digital converters, while maintaining high data rates.
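
As a reminder of what one of the two recovery routines looks like, here is a textbook orthogonal matching pursuit in Python/NumPy (the paper's "optimized" OMP and the smoothed l_0 algorithm differ in details; this is just the greedy baseline):

import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms of A to explain y."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated atom
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s                        # update the residual
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 40, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
print(np.allclose(omp(A, A @ x_true, k), x_true, atol=1e-6))
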

This paper exploits recent developments in sparse approximation and compressive sensing to efficiently perform localization in wireless networks. In particular, we reformulate the localization problem as a sparse approximation problem using compressive sensing theory, which provides a new paradigm for recovering a sparse signal by solving an l_1 minimization problem. The proposed received signal strength-based method does not require any time-specific/proprietary hardware since the location estimation is performed at the Access Points (APs). The experimental results show that our proposed method, when compared with traditional localization schemes, achieves better accuracy in terms of the mean localization error.
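
The sparse formulation is easy to picture: the target sits at one of G candidate grid points, so its indicator vector is 1-sparse against a dictionary of predicted RSS fingerprints. A toy sketch in Python/NumPy (the grid, path-loss model, and correlation decoder are all illustrative assumptions, not the paper's setup):

import numpy as np

rng = np.random.default_rng(2)
G, n_ap = 100, 8                       # candidate grid points, access points
grid = rng.uniform(0, 50, (G, 2))      # hypothetical room layout
aps  = rng.uniform(0, 50, (n_ap, 2))

# RSS dictionary: column j = predicted RSS at every AP for a target at grid[j]
# (log-distance path-loss model with exponent 3 -- an assumption for this toy)
d = np.linalg.norm(aps[:, None, :] - grid[None, :, :], axis=2)
D = -10 * 3.0 * np.log10(d + 1e-9)

j_true = 37
y = D[:, j_true] + rng.normal(0, 0.5, n_ap)   # noisy RSS readings at the APs

# 1-sparse recovery reduces to picking the best-matching column (OMP, k = 1)
j_hat = int(np.argmax([np.abs(D[:, j] @ y) / np.linalg.norm(D[:, j])
                       for j in range(G)]))
print(j_true, j_hat, np.linalg.norm(grid[j_true] - grid[j_hat]))
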

Recovering or estimating the initial state of a high-dimensional system can require a potentially large number of measurements. In this paper, we explain how this burden can be significantly reduced for certain linear systems when randomized measurement operators are employed. Our work builds upon recent results from the field of Compressive Sensing (CS), in which a high-dimensional signal containing few nonzero entries can be efficiently recovered from a small number of random measurements. In particular, we develop concentration of measure bounds for the observability matrix and explain circumstances under which this matrix can satisfy the Restricted Isometry Property (RIP), which is central to much analysis in CS. We also illustrate our results with a simple case study of a diffusion system. Aside from permitting recovery of sparse initial states, our analysis has potential applications in solving inference problems such as detection and classification of more general initial states.
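
A small numerical sketch of the setup (Python/NumPy; the 1-D diffusion system, randomized C_t matrices, and greedy decoder are illustrative assumptions): stack randomized observations of a few steps into an observability matrix, then recover a sparse initial state from it.

import numpy as np

rng = np.random.default_rng(3)
n, p, T, k = 100, 10, 5, 3     # state dim, measurements/step, steps, sparsity

# A simple 1-D diffusion system: x_{t+1} = (I + dt * Laplacian) x_t
Lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1))
A = np.eye(n) + 0.1 * Lap
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = 1.0      # sparse initial state

# Stack randomized observations y_t = C_t A^t x0 into one observability matrix
C = [rng.normal(size=(p, n)) / np.sqrt(p) for _ in range(T)]
Obs = np.vstack([C[t] @ np.linalg.matrix_power(A, t) for t in range(T)])
y = Obs @ x0

# Recover the sparse initial state greedily (OMP; any sparse solver would do)
r, S = y.copy(), []
for _ in range(k):
    S.append(int(np.argmax(np.abs(Obs.T @ r))))
    xs, *_ = np.linalg.lstsq(Obs[:, S], y, rcond=None)
    r = y - Obs[:, S] @ xs
x0_hat = np.zeros(n)
x0_hat[S] = xs
print("recovery error:", np.linalg.norm(x0_hat - x0))
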

In the theory of compressed sensing, restricted isometry analysis has become a standard tool for studying how efficiently a measurement matrix acquires information about sparse and compressible signals. Many recovery algorithms are known to succeed when the restricted isometry constants of the sampling matrix are small. Many potential applications of compressed sensing involve a data-acquisition process that proceeds by convolution with a random pulse followed by (nonrandom) subsampling. At present, the theoretical analysis of this measurement technique is lacking. This paper demonstrates that the $s$th order restricted isometry constant is small when the number $m$ of samples satisfies $m \gtrsim (s \log n)^{3/2}$, where $n$ is the length of the pulse. This bound improves on previous estimates, which exhibit quadratic scaling.
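
One can probe this restricted isometry behavior empirically. Below, a random pulse with unit-modulus spectrum makes circular convolution an orthogonal map; subsampling it and rescaling gives a measurement operator whose norm ratios on sparse vectors concentrate near 1 (a Monte Carlo sketch in Python/NumPy with illustrative parameters, not the paper's proof):

import numpy as np

rng = np.random.default_rng(4)
n, m, s = 512, 128, 8     # pulse length, kept samples, sparsity order

# Random pulse with unit-modulus, Hermitian spectrum => real, orthogonal convolution
sigma = np.exp(2j * np.pi * rng.uniform(size=n // 2 - 1))
spec = np.concatenate(([1.0], sigma, [1.0], np.conj(sigma[::-1])))
C = np.real(np.fft.ifft(spec[:, None] * np.fft.fft(np.eye(n), axis=0), axis=0))

# Measurement operator: convolve with the pulse, then keep m samples
idx = np.sort(rng.choice(n, m, replace=False))
Phi = np.sqrt(n / m) * C[idx]

# ||Phi x|| / ||x|| concentrating near 1 on s-sparse x means a small
# restricted isometry constant
ratios = []
for _ in range(200):
    x = np.zeros(n)
    x[rng.choice(n, s, replace=False)] = rng.normal(size=s)
    ratios.append(np.linalg.norm(Phi @ x) / np.linalg.norm(x))
print(min(ratios), max(ratios))
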


Spectrum sensing is a fundamental component in cognitive radio. A major challenge in this area is the requirement of a high sampling rate in the sensing of a wideband signal. In this paper a wideband spectrum sensing model is presented that utilizes a sub-Nyquist sampling scheme to bring substantial savings in terms of the sampling rate. The correlation matrix of a finite number of noisy samples is computed and used by a subspace estimator to detect the occupied and vacant channels of the spectrum. In contrast with common methods, the proposed method does not need knowledge of the signal properties, which mitigates the uncertainty problem. We evaluate the performance of this method by computing the probability of detecting signal occupancy in terms of the number of samples and the SNR of randomly generated signals. The results show reliable detection even at low SNR and with a small number of samples.
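
The subspace idea in a nutshell: occupied narrowband channels make the sample correlation matrix nearly low rank, so its large eigenvalues reveal occupancy. A toy sketch in Python/NumPy (Nyquist-rate windows and a crude threshold for illustration; the paper works from sub-Nyquist samples):

import numpy as np

rng = np.random.default_rng(5)
N, L, snr = 64, 500, 5.0        # window length, number of windows, linear SNR

# Two busy narrowband channels plus white noise
f = np.array([0.12, 0.31])      # normalized center frequencies
t = np.arange(N)
windows = sum(np.sqrt(snr) * np.cos(2 * np.pi * fc * t
                                    + rng.uniform(0, 2 * np.pi, (L, 1)))
              for fc in f) + rng.normal(size=(L, N))

R = windows.T @ windows / L                 # sample correlation matrix
w = np.linalg.eigvalsh(R)[::-1]             # eigenvalues, largest first
# Subspace test: eigenvalues well above the noise floor indicate occupancy
# (each real sinusoid contributes 2 signal eigenvalues, so expect ~4 here)
n_sig = int(np.sum(w > 2 * np.median(w)))   # crude threshold, for illustration
print("estimated signal-subspace dimension:", n_sig)
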

Sampling theories lie at the heart of signal processing devices and communication systems. To accommodate high operating rates while retaining low computational cost, efficient analog-to-digital converters (ADCs) must be developed. Many of the limitations encountered in current converters are due to the traditional assumption that the sampling stage needs to acquire the data at the Nyquist rate, corresponding to twice the signal bandwidth. In this thesis a method of sampling far below the Nyquist rate for sparse-spectrum multiband signals is investigated. The method is called periodic non-uniform sampling, and it is useful in a variety of applications such as data converters, sensor array imaging and image compression. Firstly, a model for the sampling system in the frequency domain is prepared. It relates the Fourier transform of observed compressed samples with the unknown spectrum of the signal. Next, the reconstruction process based on compressed sensing is provided. We show that the sampling parameters play an important role in the average sample ratio and the quality of the reconstructed signal. The concept of condition number and its effect on the reconstructed signal in the presence of noise is introduced, and a feasible approach for choosing a sample pattern with a low condition number is given. We distinguish between the cases of known-spectrum and unknown-spectrum signals. One of the model parameters is determined by the signal band locations, which in the case of unknown-spectrum signals must be estimated from the sampled data. Therefore, we applied both subspace methods and non-linear least squares methods for the estimation of this parameter. We also used information theoretic criteria (Akaike and MDL) and the exponential fitting test for model order selection in this case.
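
The condition-number criterion is concrete enough to sketch. In multicoset (periodic non-uniform) sampling, reconstruction of known active spectral cells inverts a submatrix of a DFT matrix whose rows are the chosen cosets; a brute-force search for a well-conditioned pattern looks like this (Python/NumPy, illustrative parameters):

import numpy as np
from itertools import combinations

# From each block of T Nyquist slots keep n_cosets samples; for known active
# spectral cells, reconstruction inverts F[cosets, active], whose condition
# number governs the noise amplification.
T, n_cosets, active = 12, 4, [1, 3, 7]
F = np.exp(2j * np.pi * np.outer(np.arange(T), np.arange(T)) / T)

best = min(combinations(range(T), n_cosets),
           key=lambda c: np.linalg.cond(F[np.array(c)][:, active]))
print("best pattern:", best,
      "cond:", np.linalg.cond(F[np.array(best)][:, active]))
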

$\ell_1$ minimization can be used to recover sufficiently sparse unknown signals from compressed linear measurements. In fact, exact thresholds on the sparsity (as a function of the ratio between the system dimensions) under which, with high probability, almost all sparse signals can be recovered from i.i.d. Gaussian measurements have been computed; these are referred to as "weak thresholds" \cite{D}. It has also been known that there is a tradeoff between the sparsity and the $\ell_1$ minimization recovery stability in terms of the signal vector tail component. In this paper, we give a \emph{closed-form} characterization for this tradeoff, which we call the scaling law for compressive sensing recovery stability. In a nutshell, we are able to show that as the sparsity backs off to a fraction $\varpi$ ($0<\varpi<1$) of the weak threshold of $\ell_1$ recovery, the parameter for the recovery stability will scale as $\frac{1}{\sqrt{1-\varpi}}$. Our result is based on a careful analysis through the Grassmann angle framework for the Gaussian measurement matrix. We will further discuss how this scaling law helps in analyzing the iterative reweighted $\ell_1$ minimization algorithms. If the nonzero elements over the signal support follow a probability density function (pdf) $f(\cdot)$ whose $t$-th derivative $f^{(t)}(0) \neq 0$ for some $t \geq 0$, then a certain iterative reweighted $\ell_1$ minimization algorithm can be analytically shown to lift the phase transition thresholds (weak thresholds) of the plain $\ell_1$ minimization algorithm. Using the scaling law for the sparse recovery stability, this extends our earlier results of weak threshold improvements for sparse vectors with nonzero elements following the Gaussian distribution, whose pdf is itself nonzero at the origin (namely its $0$-th derivative is nonzero).
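
For readers who have not seen iterative reweighted $\ell_1$ in action, here is the standard loop (in the spirit of Candes-Wakin-Boyd, which is what such analyses target; the solver, weights, and parameters below are illustrative, in Python/NumPy):

import numpy as np

def wista(A, y, w, lam=1e-3, iters=300):
    """Weighted-l1 solver (ISTA): min ||Ax - y||^2 / 2 + lam * sum_i w_i |x_i|."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0)
    return x

rng = np.random.default_rng(6)
n, m, k = 200, 80, 30
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x0

# Reweighting: small coefficients get large weights and are pushed toward
# zero, which is what lifts the plain-l1 thresholds (eps avoids division by 0)
x, eps = wista(A, y, np.ones(n)), 0.1
for _ in range(4):
    x = wista(A, y, 1.0 / (np.abs(x) + eps))
print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))
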

An information theoretic perspective on group testing problems has recently been proposed by Atia and Saligrama, in order to characterise the optimal number of tests. Their results hold in the noiseless case, in the case where only false positives occur, and in the case where only false negatives occur. We extend their results to a model containing both false positives and false negatives, developing simple information theoretic bounds on the number of tests required. Based on these bounds, we obtain an improved order of convergence in the case of false negatives only. Since these results are based on (computationally infeasible) joint typicality decoding, we propose a belief propagation algorithm for the detection of defective items and compare its actual performance to the theoretical bounds.
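
To fix ideas, here is a tiny simulation of noisy group testing with a simple column-scoring decoder (not the paper's belief propagation; the design, noise levels, and the hand-tuned score are all illustrative, in Python/NumPy):

import numpy as np

rng = np.random.default_rng(7)
n, k, T = 200, 5, 60            # items, defectives, tests
q_fp, q_fn = 0.02, 0.05         # false-positive / false-negative test noise

X = rng.random((T, n)) < (1.0 / k)          # Bernoulli pooling design
defective = rng.choice(n, k, replace=False)
clean = X[:, defective].any(axis=1)         # a pool is positive if it hits one
flips = rng.random(T) < np.where(clean, q_fn, q_fp)
y = clean ^ flips                           # noisy test outcomes

# Score each item by consistency of its column with the outcomes, keep the
# k best (the factor 3 penalizing negative-pool appearances is hand-tuned)
score = (X & y[:, None]).sum(0) - 3 * (X & ~y[:, None]).sum(0)
guess = np.sort(np.argsort(score)[-k:])
print(np.sort(defective), guess)
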

Error Prediction and Model Selection via Unbalanced Expander Graphs by Yohann de Castro. The abstract reads:
We investigate deterministic design matrices for the fundamental problems of error prediction and model selection. Our deterministic design matrices are constructed from unbalanced expander graphs, and we ask whether it is possible to accurately estimate the response and the support of our target vector using computationally tractable algorithms. We show that for any adjacency matrix of an unbalanced expander graph and any target vector, the lasso and the Dantzig selector satisfy oracle inequalities in error prediction and model selection involving the largest (in magnitude) coefficients of the target, i.e. upper bounds in terms of the best sparse approximation. Our oracle inequalities allow error prediction with an accuracy that is, up to a logarithmic factor, the best one could expect knowing the support of the target. From a practical standpoint, these estimators can be computed by solving either a simple quadratic program (for the lasso) or a linear program (for the Dantzig selector). Our results are non-asymptotic and describe the performance one can expect in all cases.
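
A quick numerical flavor of the setup (Python/NumPy; a random left-regular bipartite graph stands in for the paper's deterministic expanders, since such random graphs are expanders with high probability, and ISTA stands in for a quadratic-program lasso solver):

import numpy as np

rng = np.random.default_rng(8)
n, m, d, k = 400, 100, 8, 10    # variables, rows, ones per column, sparsity

# Adjacency matrix of a random left-d-regular bipartite graph
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, d, replace=False), j] = 1.0

x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x0

# Lasso via ISTA (a generic QP solver would do equally well)
L = np.linalg.norm(A, 2) ** 2
x, lam = np.zeros(n), 1e-3
for _ in range(1000):
    g = x - A.T @ (A @ x - y) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)
print("support recovered:",
      set(np.flatnonzero(x0)) <= set(np.flatnonzero(np.abs(x) > 0.05)))
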
Low-rank matrix recovery via iteratively reweighted least squares minimization by Massimo Fornasier, Holger Rauhut, Rachel Ward. The abstract reads:
We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to iteratively recover any matrix with an error on the order of the best rank-k approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which allows us to expedite the solution of the least squares problems required at each iteration. We present numerical experiments that confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniques proposed recently in the literature.
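
A simplified rendition of the IRLS idea for matrix completion (Python/NumPy-SciPy; this follows the reweighting scheme loosely and omits the Woodbury speed-up and the paper's safeguards, so treat it as a sketch with illustrative parameters):

import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(9)
d, r, frac = 30, 2, 0.5
M = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))   # rank-2 target
mask = rng.random((d, d)) < frac                        # observed entries

X, eps = np.where(mask, M, 0.0), 1.0
for it in range(30):
    # Reweighting: W ~ (X X^T + eps^2 I)^(-1/2), so tr(X^T W X) smooths the
    # nuclear norm and promotes low rank
    W = np.real(sqrtm(np.linalg.inv(X @ X.T + eps ** 2 * np.eye(d))))
    # Weighted least squares with observed entries held fixed, column by column
    for j in range(d):
        S = mask[:, j]
        if S.all():
            continue
        Wcc, Wcs = W[~S][:, ~S], W[~S][:, S]
        X[~S, j] = -np.linalg.solve(Wcc, Wcs @ M[S, j])
        X[S, j] = M[S, j]
    eps = max(eps / 2, 1e-8)
print("relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
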

Despite the recent progress towards efficient multiple kernel learning (MKL), the structured output case remains an open research front. Current approaches involve repeatedly solving a batch learning problem, which makes them inadequate for large-scale scenarios. We propose a new family of online proximal algorithms for MKL (as well as for group-lasso and variants thereof), which overcomes that drawback. We show regret, convergence, and generalization bounds for the proposed method. Experiments on handwriting recognition and dependency parsing demonstrate the effectiveness of the approach.
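
The workhorse of such online proximal methods is the proximal operator of the group-lasso penalty, which is just block soft-thresholding. A minimal sketch of one online round (Python/NumPy; the squared loss, step sizes, and three-group layout are illustrative, not the paper's structured-output setting):

import numpy as np

def prox_group_l2(w, groups, tau):
    """Proximal step for the group-lasso penalty: block soft-thresholding.
    Each group's sub-vector is shrunk toward zero and dropped entirely when
    its norm falls below tau -- this is what zeroes out whole kernels."""
    w = w.copy()
    for g in groups:
        nrm = np.linalg.norm(w[g])
        w[g] = 0.0 if nrm <= tau else w[g] * (1 - tau / nrm)
    return w

# One online round: gradient step on the current example, then the prox step
rng = np.random.default_rng(10)
w = rng.normal(size=9)
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]  # 3 "kernels"
x, y, eta, tau = rng.normal(size=9), 1.0, 0.1, 0.05
grad = (w @ x - y) * x                 # squared-loss gradient on one sample
w = prox_group_l2(w - eta * grad, groups, eta * tau)
print(w)
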

On the Construction of Finite Oscillator Dictionary by Rongquan Feng, Zhenhua Gu, Zilong Wang, Hongfeng Wu, Kai Zhou. The abstract reads:
A finite oscillator dictionary, which has important applications in sequence design and compressive sensing, was introduced by Gurevich, Hadani and Sochen. In this paper, we first revisit closed formulae of the finite split oscillator dictionary $\mathfrak{S}^s$ by a simple proof. Then we study the non-split tori of the group $SL(2,\mathbb{F}_p)$. Finally, an explicit algorithm for computing the finite non-split oscillator dictionary $\mathfrak{S}^{ns}$ is described.

An Almost Optimal Unrestricted Fast Johnson-Lindenstrauss Transform by Nir Ailon, Edo Liberty. The abstract reads:
The problems of random projections and sparse reconstruction have much in common and have individually received much attention. Surprisingly, until now they progressed in parallel and remained mostly separate. Here, we employ new tools from probability in Banach spaces that were successfully used in the context of sparse reconstruction to advance on an open problem in random projection. In particular, we generalize and use an intricate result by Rudelson and Vershynin for sparse reconstruction which uses Dudley's theorem for bounding Gaussian processes. Our main result states that any set of $N = \exp(\tilde{O}(n))$ real vectors in $n$ dimensional space can be linearly mapped to a space of dimension $k=O(\log N \polylog(n))$, while (1) preserving the pairwise distances among the vectors to within any constant distortion and (2) being able to apply the transformation in time $O(n\log n)$ on each vector. This improves on the best known $N = \exp(\tilde{O}(n^{1/2}))$ achieved by Ailon and Liberty and $N = \exp(\tilde{O}(n^{1/3}))$ by Ailon and Chazelle.
The dependence on the distortion constant, however, is believed to be suboptimal and subject to further investigation. For constant distortion, this settles the open question posed by these authors up to a $\polylog(n)$ factor while considerably simplifying their constructions.
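
For intuition, here is the generic fast-JL recipe that this line of work refines: random sign flips, a fast unitary mixing transform, then coordinate subsampling, all in O(n log n) per vector (Python/NumPy sketch of the construction family, not the exact Ailon-Liberty transform, which adds further structured steps):

import numpy as np

rng = np.random.default_rng(11)
n, k = 1024, 64

def fjlt(x, signs, idx):
    """Fast JL-style map: sign flip, unitary FFT mixing, subsampling."""
    z = np.fft.fft(signs * x) / np.sqrt(n)      # unitary mixing, O(n log n)
    return np.sqrt(n / k) * z[idx]              # keep k random coordinates

signs = rng.choice([-1.0, 1.0], n)
idx = rng.choice(n, k, replace=False)
x, y = rng.normal(size=n), rng.normal(size=n)
dx = np.linalg.norm(fjlt(x, signs, idx) - fjlt(y, signs, idx))
print(dx / np.linalg.norm(x - y))               # ratio should be close to 1
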

Random projection methods give distributions over $k \times d$ matrices such that if a matrix $A$ (chosen according to the distribution) is applied to a finite set of vectors $x_i \in \mathbb{R}^d$, the resulting vectors $Ax_i \in \mathbb{R}^k$ approximately preserve the original metric with constant probability. First, we show that any matrix (composed with a random $\pm 1$ diagonal matrix) is a good random projector for a subset of vectors in $\mathbb{R}^d$. Second, we describe a family of tensor product matrices which we term Lean Walsh. We show that using Lean Walsh matrices as random projections outperforms, in terms of running time, the best known current result (due to Matousek) under comparable assumptions.
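
A toy rendition of the two ingredients (Python/NumPy; the seed matrix and normalization below are illustrative, the paper's Lean Walsh construction fixes the details): Kronecker powers of a small wide seed with orthogonal rows and unit-norm columns, composed with a random $\pm 1$ diagonal, preserve norms in expectation.

import numpy as np

rng = np.random.default_rng(12)
# A "lean" seed: more columns than rows, orthogonal rows, unit-norm columns
seed = np.array([[1,  1,  1,  1],
                 [1, -1,  1, -1],
                 [1,  1, -1, -1]]) / np.sqrt(3)
A = np.kron(np.kron(seed, seed), seed)          # 27 x 64 Lean-Walsh-like matrix

d = A.shape[1]
D = np.diag(rng.choice([-1.0, 1.0], d))         # random sign preconditioner
x = rng.normal(size=d)
print(np.linalg.norm(A @ D @ x) / np.linalg.norm(x))  # roughly norm-preserving
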
Riemannian geometry applied to BCI classification by Alexandre Barachant, Stéphane Bonnet, Marco Congedo, and Christian Jutten. The abstract reads:
In brain-computer interfaces based on motor imagery, covariance matrices are widely used in spatial filter computation and other signal processing methods. Covariance matrices lie in the space of Symmetric Positive-Definite (SPD) matrices and therefore fall within the domain of Riemannian geometry. Using a differential geometry framework, we propose different algorithms to classify covariance matrices in their native space.
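
The key primitive is the affine-invariant Riemannian distance between SPD matrices, which reduces to the generalized eigenvalues of the pair. A minimal sketch of a distance-based classification of covariance matrices (Python/SciPy; the toy "classes" and noise model are illustrative, not the paper's BCI pipeline):

import numpy as np
from scipy.linalg import eigh

def riemann_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, via generalized eigenvalues."""
    w = eigh(A, B, eigvals_only=True)
    return np.sqrt(np.sum(np.log(w) ** 2))

def random_spd(n, rng):
    X = rng.normal(size=(n, n + 10))
    return X @ X.T / X.shape[1]         # sample covariance, SPD a.s.

rng = np.random.default_rng(13)
# Toy "minimum distance" classification: two classes of covariance matrices
base = [random_spd(6, rng) for _ in range(2)]
trial = base[1] + 0.05 * random_spd(6, rng)     # noisy member of class 1
print([riemann_dist(trial, b) for b in base])   # smaller distance -> class 1
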

Image Credit: NASA/JPL/Space Science Institute , W00065756.jpg was taken on October 14, 2010 and received on Earth October 15, 2010. The camera was pointing toward TITAN at approximately 207,643 kilometers away, and the image was taken using the CB3 and CL2 filters.
