Thursday, December 13, 2007

Compressed Sensing: Single-Pixel Imaging via Compressive Sampling, Stable Manifold Embedding

In an upcoming issue of the IEEE Signal Processing Magazine, we will be treated to a Special Issue on Compressive Sampling, scheduled to come out in March 2008. In it is a well-written review of the trade studies performed at Rice on the single-pixel camera built around Compressed Sensing. The review, Single-Pixel Imaging via Compressive Sampling, was written by Marco Duarte, Mark Davenport, Dharmpal Takhar, Jason Laska, Ting Sun, Kevin Kelly, and Richard Baraniuk. The abstract reads:
Humans are visual animals, and imaging sensors that extend our reach – cameras – have improved dramatically in recent times thanks to the introduction of CCD and CMOS digital technology. Consumer digital cameras in the mega-pixel range are now ubiquitous thanks to the happy coincidence that the semiconductor material of choice for large-scale electronics integration (silicon) also happens to readily convert photons at visual wavelengths into electrons. On the contrary, imaging at wavelengths where silicon is blind is considerably more complicated, bulky, and expensive. Thus, for comparable resolution, a $500 digital camera for the visible becomes a $50,000 camera for the infrared. In this paper, we present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a much broader spectral range than conventional silicon-based cameras. Our approach fuses a new camera architecture based on a digital micromirror device with the new mathematical theory and algorithms of compressive sampling (CS). CS combines sampling and compression into a single nonadaptive linear measurement process [1–4]. Rather than measuring pixel samples of the scene under view, we measure inner products between the scene and a set of test functions. Interestingly, random test functions play a key role, making each measurement a random sum of pixel values taken across the entire image. When the scene under view is compressible by an algorithm like JPEG or JPEG2000, the CS theory enables us to stably reconstruct an image of the scene from fewer measurements than the number of reconstructed pixels. In this manner we achieve sub-Nyquist image acquisition. Our “single-pixel” CS camera architecture is basically an optical computer (comprising a DMD, two lenses, a single photon detector, and an analog-to-digital (A/D) converter) that computes random linear measurements of the scene under view.
The image is then recovered or processed from the measurements by a digital computer. The camera design reduces the required size, complexity, and cost of the photon detector array down to a single unit, which enables the use of exotic detectors that would be impossible in a conventional digital camera. The random CS measurements also enable a tradeoff between space and time during image acquisition. Finally, since the camera compresses as it images, it has the capability to efficiently and scalably handle high-dimensional data sets from applications like video and hyperspectral imaging. This article is organized as follows. After describing the hardware, theory, and algorithms of the single-pixel camera in detail, we analyze its theoretical and practical performance and compare it to more conventional cameras based on pixel arrays and raster scanning. We also explain how the camera is information scalable in that its random measurements can be used to directly perform simple image processing tasks, such as target classification, without first reconstructing the underlying imagery. We conclude with a review of related camera architectures and a discussion of ongoing and future work.
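The measurement model described in the abstract is easy to sketch numerically. Below is a minimal, hypothetical simulation (not the authors' code, and with made-up sizes N, M, K): a sparse scene is measured with random ±1 patterns, mimicking the DMD mirror flips, and then reconstructed from M &lt; N measurements. The paper's reconstruction uses convex l1 solvers; here a greedy orthogonal matching pursuit stands in, just to keep the sketch dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy scene: N "pixels", only K of them nonzero. A real scene
# would be sparse in a wavelet or DCT basis rather than the pixel basis.
N, K, M = 256, 5, 120
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.choice([-1.0, 1.0], K) * (1 + rng.random(K))

# Each CS measurement is an inner product of the scene with a random test
# function -- here a +/-1 Bernoulli pattern, one per DMD configuration.
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
y = Phi @ x  # M < N measurements: sub-Nyquist acquisition

def omp(Phi, y, K):
    """Orthogonal matching pursuit: greedily pick the K best-matching columns."""
    residual, idx = y.copy(), []
    for _ in range(K):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, K)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(rel_err)
```

With these (generous) measurement counts the sparse scene is recovered essentially exactly, which is the "stable reconstruction from fewer measurements than pixels" claim in miniature.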
When I last explained the single-pixel camera, I didn't realize the first option is also called a raster scan. The noteworthy result of the trade study is the comparison between current scanning capabilities and the one provided by compressed sensing. They look at dynamic range, quantization error, and photon counting noise, and remind us that with CS one can collect much more light on the sensitive detector (pixel) than with conventional cameras, which should help in low-light conditions, as presented in the IMA presentation (Multiscale Geometric Analysis).
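The light-collection advantage is worth a back-of-the-envelope check. In a raster scan, each measurement sees one pixel's worth of light; with a random 0/1 DMD pattern, roughly half the mirrors steer light onto the photodetector at once. A tiny sketch (scene size is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64 * 64  # pixels in the scene (illustrative size)

# Raster scan: each measurement collects exactly one pixel's worth of light.
raster_light_per_meas = 1

# Single-pixel CS: a random 0/1 pattern steers about half the mirrors
# toward the photodetector, so each measurement gathers ~N/2 pixels of light.
pattern = rng.integers(0, 2, size=N)
cs_light_per_meas = int(pattern.sum())

print(raster_light_per_meas, cs_light_per_meas)
```

So each CS measurement enjoys roughly an N/2-fold light advantage over a raster-scan measurement, which is exactly why photon counting noise hurts it far less in low light.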



X-rays could be handled the same way if the DMD were able to reflect at these wavelengths, for instance by using a well-designed multilayer system. They also give a good summary of the other hardware technologies implementing Compressed Sensing. It looks like I have not missed any (part 1, part 2, part 3, part 4).

In another good summary of the current investigations of Richard Baraniuk's group, I found the following slide most illuminating (as presented in Single-Pixel Imaging via Compressive Sampling), as it explains the geometric reason why L1 minimization works.
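The geometry in that slide can be seen numerically: on an underdetermined system, the minimum-L2 solution sits where the solution plane touches a round ball, so it spreads energy over all coordinates, while the minimum-L1 solution lands on a vertex or low-dimensional face of the L1 ball and is sparse. A small sketch (sizes are arbitrary; assumes scipy is available, and L1 minimization is cast as the standard linear program):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)

# Underdetermined system y = A x with a sparse ground truth.
N, M, K = 40, 20, 3
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.choice([-1.0, 1.0], K)
A = rng.standard_normal((M, N))
y = A @ x_true

# Minimum-L2 solution: the least-norm point on the solution plane.
x_l2 = np.linalg.pinv(A) @ y

# Minimum-L1 solution via linear programming: split x = u - v with u, v >= 0
# and minimize sum(u + v) subject to A(u - v) = y.
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_l1 = res.x[:N] - res.x[N:]

nnz = lambda v: int(np.sum(np.abs(v) > 1e-6))
print(nnz(x_l1), nnz(x_l2))  # L1 lands on a sparse face; L2 spreads energy
```

The L2 solution is generically dense, while the L1 solution concentrates on a handful of coordinates, which is the picture in the slide translated into code.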

The same figure could be used to explain why non-convex Lp regularization techniques do even better than L1. Because the shape of the Lp ball (for p less than 1) tends to be squashed compared to the L1 ball, it is easier to see that there are more randomly oriented (N-M)-planes anchored on a K-face of the Lp ball that do not intersect the ball. [Update: Muthu Muthukrishnan just posted a nice picture of an Lp ball; the picture is due to Petros Boufounos.]
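The squashing has a simple numerical consequence: for p = 1 the penalty cannot distinguish a sparse vector from one with the same mass spread over several coordinates, while for p &lt; 1 spreading mass strictly raises the cost. A two-line illustration (toy vectors of my choosing):

```python
import numpy as np

def lp_cost(v, p):
    # The usual l_p penalty for p <= 1: sum of |v_i|^p.
    return float(np.sum(np.abs(v) ** p))

sparse = np.array([1.0, 0.0])   # all mass on one coordinate
spread = np.array([0.5, 0.5])   # same l1 norm, mass spread out

print(lp_cost(sparse, 1.0), lp_cost(spread, 1.0))  # l1 ties: 1.0 vs 1.0
print(lp_cost(sparse, 0.5), lp_cost(spread, 0.5))  # l_1/2 strictly prefers sparse
```

That strict preference is the analytic counterpart of the squashed ball: fewer random planes through a K-face re-enter the Lp ball, so sparse solutions are favored even more strongly.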




With regard to extensions of compressed sensing ideas, Richard Baraniuk and Mike Wakin [1] have three illuminating slides showing how random projections can embed manifolds of low dimension living in high-dimensional space, and the conditions under which random projections can be used as a dimensionality reduction technique. They show that CS provides better bounds than the older Whitney embedding theorem.
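The embedding result is easy to poke at experimentally: project points from a low-dimensional manifold sitting in a high-dimensional space through a random matrix and check how well pairwise distances survive. A hypothetical example (a circle, i.e. a 1-D manifold, embedded in R^1000 and projected to 20 dimensions; all sizes are my choice):

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1-D manifold (a circle) embedded in a random 2-D plane of R^1000.
D, M, n_pts = 1000, 20, 50
t = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
basis = np.linalg.qr(rng.standard_normal((D, 2)))[0]  # orthonormal 2-D plane
X = basis @ np.vstack([np.cos(t), np.sin(t)])         # shape (D, n_pts)

# Random projection to M << D dimensions, scaled to preserve lengths on average.
Phi = rng.standard_normal((M, D)) / np.sqrt(M)
Y = Phi @ X

def pdist(Z):
    """Pairwise Euclidean distances between the columns of Z."""
    g = Z.T @ Z
    sq = np.diag(g)
    return np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * g, 0.0))

dX, dY = pdist(X), pdist(Y)
mask = dX > 1e-9
ratios = dY[mask] / dX[mask]
print(ratios.min(), ratios.max())
```

The distance ratios cluster around 1 even though the ambient dimension dropped from 1000 to 20: the random projection acts as a near-isometry on the manifold, which is the content of the Baraniuk-Wakin bound.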




The current drawback of this approach (random projections) is the inability to provide some type of parametrization of the manifold. Never say never.

Source: Rice Compressed Sensing Repository.
Reference: [1] R. G. Baraniuk and M. B. Wakin, Random Projections of Smooth Manifolds, to appear in Foundations of Computational Mathematics.
