Tuesday, September 21, 2010

CS: The long post of the week (Remarks, Noisy Group Testing, Streaming Compressive Sensing and more)

Echoing an earlier entry on why we should not care about P vs NP when discussing compressed sensing issues, and should rather focus on the Donoho-Tanner phase transition, I was reminded (after re-reading one of Suresh's entries: "...Remember that exponential might be better than n^100 for many values of n.") that Dick Lipton made the same point, albeit more eloquently, in Is P=NP an Ill Posed Problem? If last week's scammer (P = NP, "It's all about you, isn't it?") had even bothered to answer the question half seriously, I think I would have sent some money, really... So maybe it's not such a good idea to use the same subject to fight off the next scammer: they might get good at answering P vs NP questions :-)

Dick Lipton mentioned on his blog that he has a new book out. Giuseppe also mentioned his intention to buy Understanding Complex Datasets: Data Mining with Matrix Decompositions. Both books are in the Nuit Blanche bookstore.

Talking about the bookstore, a commenter summarized some of my thoughts in the entry introducing the store:
The Nikon camera having a projector built in makes it possible to project various pseudo-random matrix masks on the object, for 3D reconstruction, dual photography or various CS uses. Does the software allow the projector to be used at the same time as the main camera module? Some hacking might be needed to open up these cool possibilities...
You bet! I might add that a similar stance should be taken for this FujiFilm FinePix camera, the 3D AIPTEK camcorder or the Minoru 3D webcam, albeit more along the lines of MIT's random lens imager.
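Just to make the idea concrete, here is a toy sketch of mask-based compressive acquisition (entirely my own made-up model, not code for any of these devices): each projected pseudo-random binary mask yields one inner product with the scene, and a stack of such measurements is what a sparse solver would later invert.

```python
import numpy as np

# Toy model of mask-based compressive acquisition (hypothetical, not tied to
# any of the cameras above): the scene is a small 2D image, and each projected
# pseudo-random binary mask yields one scalar measurement.
rng = np.random.default_rng(0)
h, w = 32, 32                      # scene resolution
scene = np.zeros((h, w))
scene[12:20, 10:22] = 1.0          # a simple bright patch

n_masks = 300                      # number of projected masks (measurements)
masks = rng.integers(0, 2, size=(n_masks, h * w)).astype(float)  # 0/1 patterns

# Each measurement is the total light collected while one mask is projected.
y = masks @ scene.ravel()

print(y.shape)   # (300,) -- far fewer numbers than the 1024 pixels of the scene
```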

In a different direction, Kerkil Choi has a post on his website where he tries to clear up some statements about his recent paper:

Compressive holography and the restricted isometry property

posted Sep 8, 2010 4:34 PM by Kerkil Choi   [ updated Sep 13, 2010 11:21 AM ]
I have received a surge of emails asking why our compressive holography necessarily works in the context of compressive sensing.
 
One intuitive explanation is that the hologram (after removing the signal autocorrelation and the twin image terms) may be interpreted as a slice of the 3D Fourier transform, according to the Fourier diffraction slice theorem developed in diffraction tomography. A Fourier transform measurement matrix is called an incoherent measurement matrix in the compressive sensing literature. Our measurement is quite structured, but it may be thought of as an instance or realization of such a random set of samples.
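Schematically (my notation, not Kerkil's), the Fourier diffraction slice theorem says that the 2D Fourier transform of the field recorded in the hologram plane samples the object's 3D Fourier transform on an Ewald-type surface:

$$\hat{E}_s(k_x,k_y)\;\propto\;\hat{f}\!\left(k_x,\;k_y,\;\sqrt{k_0^2-k_x^2-k_y^2}-k_0\right),$$

so a hologram amounts to a structured set of Fourier-domain samples of the object, which is exactly the kind of incoherent measurement compressive sensing is built around.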
 
A more rigorous argument relies on the work of Justin Romberg at Georgia Tech. Justin proved that a random convolution matrix, built by multiplying an inverse Fourier transform matrix, a diagonal matrix with random unit-magnitude phase entries, and a Fourier transform matrix, satisfies the restricted isometry property (RIP), which is a sufficient (though not necessary) condition for compressive sensing to work. Amusingly, such a matrix can be interpreted as a free-space propagation matrix in the context of physics, which is often used in holography and interferometry: a free-space propagation matrix has an identical form to the one analyzed by Romberg. Again, the matrix is structured, but it may be viewed as a realization of such a class of random convolution matrices.
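As a quick numerical illustration (my own sketch, not Romberg's or Kerkil's code), one can build such a random convolution operator as A = F* D F, with D a diagonal of unit-magnitude random phases, subsample its rows, and watch it roughly preserve the norm of sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Random convolution matrix A = F^{-1} D F, with D diagonal, unit-modulus random phases.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)           # unitary DFT matrix
phases = np.exp(1j * 2 * np.pi * rng.random(n))  # random phases on the unit circle
A = F.conj().T @ np.diag(phases) @ F             # unitary random convolution

# Keep only m random rows (random subsampling of the convolution output).
m = 64
rows = rng.choice(n, size=m, replace=False)
Phi = np.sqrt(n / m) * A[rows, :]

# Check approximate norm preservation on a few random sparse vectors.
k = 5
for _ in range(3):
    x = np.zeros(n, dtype=complex)
    support = rng.choice(n, size=k, replace=False)
    x[support] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    print(np.linalg.norm(Phi @ x) / np.linalg.norm(x))  # should hover around 1
```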
 
Here is Justin's paper:
Compressive Sensing by Random Convolution, Justin Romberg, SIAM J. Imaging Sci. 2, 1098 (2009), DOI:10.1137/08072975X 
He proved that a random convolution matrix constructed as he suggests satisfies a concentration inequality similar to the one behind the Johnson-Lindenstrauss lemma (JL lemma). This fascinating link between the JL lemma and the RIP, which now shows up throughout the CS literature, has been beautifully elucidated by Baraniuk et al.; here is the paper:
R. G. Baraniuk, M. A. Davenport, R. A. DeVore and M. B. Wakin, "A simple proof of the restricted isometry property for random matrices," Constructive Approximation, 2007.
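For reference, the JL-type concentration inequality at the heart of that proof states that, for a suitably normalized random m x n matrix $\Phi$ and any fixed vector $x$,

$$\mathbb{P}\Big(\big|\,\|\Phi x\|_2^2-\|x\|_2^2\,\big|\ge \varepsilon\|x\|_2^2\Big)\;\le\;2\,e^{-m\,c_0(\varepsilon)},$$

and a union bound over all k-sparse supports then yields the RIP of order k with high probability once m is on the order of k log(n/k).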
If this response is not enough to convince y'all that, when you have a physical system, your main job is to focus on obtaining a Donoho-Tanner phase transition diagram, then I don't know what can sway you. I mean, making a statement about the RIP and then never broaching the subject again in the paper when it comes to the actual realization of this property by the hardware is simply asking the reviewers to beat you up with a large and mean stick!
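Since I keep harping on the Donoho-Tanner diagram, here is a minimal sketch of how to build an empirical one (my own code using off-the-shelf NumPy/SciPy, with arbitrary parameters): sweep the undersampling ratio delta = m/n and the sparsity ratio rho = k/m, solve the l1 problem on random instances, and record the fraction of exact recoveries. The 50% success contour is the phase transition.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Basis pursuit min ||x||_1 s.t. Ax = y, written as an LP in (x+, x-) >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

def empirical_success(n=100, delta=0.5, rho=0.2, trials=10, tol=1e-4, seed=0):
    """Fraction of exact l1 recoveries at one point (delta, rho) of the DT diagram."""
    rng = np.random.default_rng(seed)
    m = max(1, int(delta * n))
    k = max(1, int(rho * m))
    successes = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)
        x_hat = l1_recover(A, A @ x)
        successes += np.linalg.norm(x_hat - x) < tol * max(1.0, np.linalg.norm(x))
    return successes / trials

# Sweep a coarse grid; the 50% success contour traces the phase transition.
for delta in (0.2, 0.4, 0.6, 0.8):
    row = [empirical_success(delta=delta, rho=rho) for rho in (0.1, 0.3, 0.5)]
    print(delta, row)
```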

Of interest in this week's long post are the following papers:

Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool.
In a classical noiseless group testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the existence of a defective member in a pool results in the test outcome of that pool being positive. However, this may not always be a valid assumption in some cases of interest. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability. Hence, one may obtain a negative test result in a pool despite it containing some defective items. As a result, any sampling and reconstruction method should be able to cope with two different types of uncertainty, i.e., the unknown set of defective items and the partially unknown, probabilistic testing procedure. In this work, motivated by the application of detecting infected people in viral epidemics, we design non-adaptive sampling procedures that allow successful identification of the defective items through a set of probabilistic tests. Our design requires only a small number of tests to single out the defective items. In particular, for a population of size N and at most K defective items with activation probability p, our results show that M = O(K^2 log(N/K)/p^2) tests are sufficient if the sampling procedure should work for all possible sets of defective items, while M = O(K log(N)/p^2) tests are enough to be successful for any single set of defective items. Moreover, we show that the defective members can be recovered using a simple reconstruction algorithm with complexity O(MN).
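To make the measurement model concrete, here is a toy simulation of the noisy tests (my own sketch with made-up parameters, not the authors' construction): each item joins each pool at random, and a defective item that lands in a pool only activates the test with probability p.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, p = 1000, 5, 0.7      # population size, number of defectives, activation prob.
M = 200                     # number of pools (tests)

defectives = rng.choice(N, size=K, replace=False)

# Non-adaptive design: each item joins each pool independently (Bernoulli pooling).
pools = rng.random((M, N)) < 0.1

# Noisy test model: a defective item in a pool registers only with probability p;
# a pool is positive if at least one of its defective members activates.
activation = rng.random((M, N)) < p
results = np.any(pools[:, defectives] & activation[:, defectives], axis=1)

print("positive pools:", results.sum(), "out of", M)
```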

Extending group testing to include noisy measurements: I like it a lot!

The ability of Compressive Sensing (CS) to recover sparse signals from limited measurements has been recently exploited in computational imaging to acquire high-speed periodic and near periodic videos using only a low-speed camera with coded exposure and intensive off-line processing. Each low-speed frame integrates a coded sequence of high-speed frames during its exposure time. The high-speed video can be reconstructed from the low-speed coded frames using a sparse recovery algorithm. This paper presents a new streaming CS algorithm specifically tailored to this application. Our streaming approach allows causal on-line acquisition and reconstruction of the video, with a small, controllable, and guaranteed buffer delay and low computational cost. The algorithm adapts to changes in the signal structure and, thus, outperforms the off-line algorithm on realistic signals.
The attendant videos are here. I note from the introduction:
In this paper we significantly enhance this work in several ways:
  • We develop a streaming reconstruction algorithm, the Streaming Greedy Pursuit (SGP) [9], which enables on-line reconstruction of the high-speed video. The CoSaMP-based SGP is specifically designed for streaming CS scenarios, with explicit guarantees on the computational cost per sample and on the input-output delay. 
  • We formulate a signal model to incorporate the similarities in the sparsity structure of nearby pixels in the reconstruction algorithm using the principles of joint sparsity and model-based CS [10, 11]. 
  • We combat matrix-conditioning problems that arise due to the non-negative nature of imaging measurements by revisiting the minimum variance (or Capon) beamformer from the array processing literature [12] and re-introducing it in the context of CS. Our work significantly improves the reconstruction performance of coded strobing video and, more importantly, enables on-line reconstruction of the acquired video.
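The acquisition model itself is easy to simulate; here is a toy version (my own sketch, not the authors' code) of how one low-speed coded frame integrates a binary-coded burst of high-speed frames:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 32                         # high-speed frames per low-speed exposure
h, w = 64, 64                  # spatial resolution

high_speed = rng.random((T, h, w))          # unknown high-speed video chunk
code = rng.integers(0, 2, size=T)           # per-exposure binary shutter code

# One coded low-speed frame: code-weighted sum of the high-speed frames.
low_speed_frame = np.tensordot(code, high_speed, axes=1)

# Per pixel this is a single compressive measurement y = <code, x> of a
# length-T signal x that is assumed sparse in, e.g., a Fourier basis for
# (near-)periodic scenes; the paper's streaming solver recovers x on-line.
print(low_speed_frame.shape)   # (64, 64)
```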

This paper develops an optimal decentralized algorithm for sparse signal recovery and demonstrates its application in monitoring localized phenomena using energy-constrained large-scale wireless sensor networks. Capitalizing on the spatial sparsity of localized phenomena, compressive data collection is enforced by turning off a fraction of sensors using a simple random node sleeping strategy, which conserves sensing energy and prolongs network lifetime. In the absence of a fusion center, sparse signal recovery via decentralized in-network processing is developed, based on a consensus optimization formulation and the alternating direction method of multipliers. In the proposed algorithm, each active sensor monitors and recovers its local region only, collaborates with its neighboring active sensors through low-power one-hop communication, and iteratively improves the local estimates until reaching the global optimum. Because each sensor monitors the local region rather than the entire large field, the iterative algorithm converges fast, in addition to being scalable in terms of transmission and computation costs. Further, through collaboration, the sensing performance is globally optimal and attains a high spatial resolution commensurate with the node density of the original network containing both active and inactive sensors. Simulations demonstrate the performance of the proposed approach.
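For readers who want to see the workhorse behind such schemes, here is a bare-bones centralized ADMM solver for the lasso (my own sketch; the paper's contribution is the decentralized, consensus version of this kind of iteration, which this toy code does not attempt):

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM (centralized sketch)."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                       # scaled dual variable
    # Cache the factorization used by the x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: soft-thresholding (the l1 proximal step)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u = u + x - z
    return z

# Tiny demo on a random sparse recovery instance.
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
print(np.linalg.norm(admm_lasso(A, A @ x_true, lam=0.01) - x_true))
```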

We introduce several new formulations for sparse nonnegative matrix approximation. Subsequently, we solve these formulations by developing generic algorithms. Further, to help selecting a particular sparse formulation, we briefly discuss the interpretation of each formulation. Finally, preliminary experiments are presented to illustrate the behavior of our formulations and algorithms.
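For comparison, the usual baseline here is multiplicative-update NMF with an l1 penalty on the coefficients; a minimal sketch (mine, and only one of many possible sparse formulations, not necessarily those of the paper) looks like this:

```python
import numpy as np

def sparse_nmf(V, r=10, lam=0.1, n_iter=300, eps=1e-9, seed=0):
    """Approximate V >= 0 as W @ H, encouraging sparsity in H via an l1 penalty.
    Multiplicative updates for 0.5*||V - WH||_F^2 + lam*sum(H)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # The l1 penalty on H appears as the extra +lam in the denominator.
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # In practice the columns of W are often normalized as well, so the
        # penalty on H cannot be dodged by rescaling; omitted here for brevity.
    return W, H

# Toy usage on a random nonnegative matrix.
V = np.random.default_rng(5).random((40, 60))
W, H = sparse_nmf(V, r=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V), (H < 1e-3).mean())
```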
Also, in a somewhat different direction, there is the presentation Scalable Tensor Factorizations with Incomplete Data by Tamara G. Kolda, Daniel M. Dunlavy, Evrim Acar, and Morten Mørup.

As stated earlier (CS: Compressive Sensing on Cleve's Corner), Cleve Moler, the co-founder of MathWorks and creator of MATLAB, has an article on compressed sensing. An interesting tidbit shows up at the very end:
Compressed Sensing
Compressed sensing promises, in theory, to reconstruct a signal or image from surprisingly few samples. Discovered just five years ago by Candès and Tao and by Donoho, the subject is a very active research area. Practical devices that implement the theory are just now being developed. It is important to realize that compressed sensing can be done only by a compressing sensor, and that it requires new recording technology and file formats. The MP3 and JPEG files used by today's audio systems and digital cameras are already compressed in such a way that exact reconstruction of the original signals and images is impossible. Some of the Web postings and magazine articles about compressed sensing fail to acknowledge this fact.
Looks like somebody has been, at the very least, reading the Wikipedia page on the subject. Good!

Finally, we have behind a paywall:


A compressive sensing approach to perceptual image coding by Mark R. Pickering, Junyong You, Touradj Ebrahimi. The abstract reads:
There exist limitations in the human visual system (HVS) which allow images and video to be reconstructed using fewer bits for the same perceived image quality. In this paper we will review the basis of spatial masking at edges and show a new method for generating a just-noticeable distortion (JND) threshold. This JND threshold is then used in a spatial noise shaping algorithm using a compressive sensing technique to provide a perceptual coding approach for JPEG2000 coding of images. Results of subjective tests show that the new spatial noise shaping framework can provide significant savings in bit-rate compared to the standard approach. The algorithm also allows much more precise control of distortion than existing spatial domain techniques and is fully compliant with part 1 of the JPEG2000 standard.

If you think this blog provides a service, please support it by ordering through the Amazon - Nuit Blanche Reference Store

Credit: NASA/JPL/Space Science Institute

W00065463.jpg was taken on September 18, 2010 and received on Earth September 20, 2010. The camera was pointing toward SATURN at approximately 1,963,182 kilometers away, and the image was taken using the MT3 and IRP90 filters.

