Traditional Nyquist-Shannon sampling dictates that a continuous-time signal be sampled at twice its bandwidth to achieve perfect recovery. However, it has recently been demonstrated that, by exploiting the structure of the signal, it is possible to sample below the Nyquist rate and achieve perfect reconstruction using a random projection, a sparse representation and l_1-norm minimisation. These methods constitute a new and emerging theory known as Compressive Sampling (or Compressed Sensing). Here, we apply Compressive Sampling to non-negative signals, and propose an algorithm—Non-negative Underdetermined Iteratively Reweighted Least Squares (NUIRLS)—for signal recovery. NUIRLS is derived within the framework of Non-negative Matrix Factorisation (NMF) and utilises Iteratively Reweighted Least Squares as its objective, recovering non-negative minimum l_p-norm solutions, 0 ≤ p ≤ 1. We demonstrate that—for sufficiently sparse non-negative signals—the signals recovered by NUIRLS and NMF are essentially the same, which suggests that a non-negativity constraint alone is enough to recover sufficiently sparse signals.
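To make the IRLS objective behind NUIRLS concrete, here is a minimal sketch of plain (unconstrained) iteratively reweighted least squares for minimum l_p-norm recovery from compressive measurements. This is my own illustration of the generic technique the abstract builds on, not the paper's NUIRLS updates: the function name `irls_recover` and all parameters are assumptions, and the usual epsilon-smoothing of the weights is used to avoid division by zero.

```python
import numpy as np

def irls_recover(Phi, y, p=1.0, n_iter=100, eps0=1.0):
    """Approximate minimum l_p-norm solution of y = Phi @ x via
    epsilon-smoothed iteratively reweighted least squares (IRLS).
    Illustrative sketch only -- not the NUIRLS updates from the paper."""
    # Start from the minimum l_2-norm (pseudo-inverse) solution.
    x = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)
    eps = eps0
    for _ in range(n_iter):
        # q approximates |x_i|^(2-p); Q = diag(q) is the IRLS weight matrix.
        q = (x**2 + eps) ** (1.0 - p / 2.0)
        # Weighted least-squares solution still consistent with y = Phi @ x:
        #   x = Q Phi^T (Phi Q Phi^T)^{-1} y
        x = q * (Phi.T @ np.linalg.solve((Phi * q) @ Phi.T, y))
        eps = max(eps * 0.5, 1e-10)  # anneal the smoothing toward zero
    return x

# Small demo: recover a 5-sparse non-negative signal from 40 random projections.
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
Phi = rng.standard_normal((m, n))  # random projection
y = Phi @ x0                       # compressive measurements
x_hat = irls_recover(Phi, y, p=1.0)
err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
```

With p = 1 this behaves like l_1 minimisation; annealing `eps` toward zero is what lets the reweighting sharpen the solution onto the sparse support.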

...I have placed the code for NUIRLS on my website at:

The main aim of my MLSP08 paper is to demonstrate that sparse norms are not always necessary for the special case where the compressively sampled signal is entirely non-negative. Furthermore, in December I will present a paper [1] at a workshop in England where NMF is performed in the CS domain---recovering sparse coefficients AND a sparse basis from a compressively sampled matrix.

I can't wait to see the paper [1]. I then replied asking him:

Is it your experience that NUIRLS is fast?

My experience with standard NMF is that it is fast. Further extensions, such as adding sparseness constraints to convolutive models, make it slower, and when you include reweighting in the objective---as is the case for NUIRLS---it becomes very slow. The graphs in the MLSP paper took a number of weeks to produce. Luckily, though, we can use standard NMF updates if the signal is sufficiently sparse.
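The "standard NMF updates" referred to here can be sketched as follows: the Lee-Seung multiplicative update for the least-squares objective, specialised to recovering a single non-negative vector from non-negative measurements. This is my own illustration under assumed conditions (a non-negative sensing matrix and a strictly positive initialisation, so every factor in the update stays non-negative), not the exact experimental setup of the paper; all variable names are mine.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 40, 100, 5

# Sparse non-negative ground truth and a non-negative sensing matrix
# (assumed non-negative so the multiplicative update is well defined).
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
Phi = rng.uniform(0.0, 1.0, (m, n))
y = Phi @ x0

# Standard NMF multiplicative update for min ||y - Phi x||^2 with x >= 0:
#   x <- x * (Phi^T y) / (Phi^T Phi x)
# The multiplicative form preserves non-negativity from a positive start.
G = Phi.T @ Phi          # precompute the Gram matrix
b = Phi.T @ y
x = np.ones(n)           # strictly positive initialisation
for _ in range(20000):
    x *= b / (G @ x + 1e-12)  # small constant guards against division by zero

residual = np.linalg.norm(y - Phi @ x) / np.linalg.norm(y)
err = np.linalg.norm(x - x0) / np.linalg.norm(x0)
```

No sparse norm appears anywhere in the update; for a sufficiently sparse non-negative signal, driving the residual to zero under the non-negativity constraint is enough to land on the true solution, which is the point made above. The update is cheap per iteration but, as noted, convergence can require many iterations.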

A quick comment: Anna's table actually contains *all* best known (to us) bounds for sparse recovery that work for arbitrary signals. I.e., it is not limited to schemes based on RIP-1 principle, but it also contains schemes based on usual RIP and Euclidean width properties. You can tell that by the last column (RIP-1 property leads to approximation bounds in the l1 norm, while other properties yield mixed norm or l2 norm bounds).

The license allows for it, so I've posted a copy of v0.4b on my site:

The license allows for redistribution non-commercially, as you can see in the file's included Readme.txt. If you have a newer version or his site comes back up, please let me know.

*Proceedings of the 8th IMA International Conference on Mathematics in Signal Processing*, December 2008, Cirencester UK.