Thursday, April 30, 2009

CS: Video rate spectral imaging, Photon-limited hyperspectral, MRI Bregman l_p, TVL1, Irregular sampling, Data streams, meetings and talks.

Today we have some interesting hyperspectral work from Duke as well as other contributions found through the interwebs:


We have previously reported on coded aperture snapshot spectral imagers (CASSI) that can capture a full frame spectral image in a snapshot. Here we describe the use of CASSI for spectral imaging of a dynamic scene at video rate. We describe significant advances in the design of the optical system, system calibration procedures and reconstruction method. The new optical system uses a double Amici prism to achieve an in-line, direct view configuration, resulting in a substantial improvement in image quality. We describe NeAREst, an algorithm for estimating the instantaneous three-dimensional spatio-spectral data cube from CASSI’s two-dimensional array of encoded and compressed measurements. We utilize CASSI’s snapshot ability to demonstrate a spectral image video of multi-colored candles with live flames captured at 30 frames per second.

A nice figure shows how the transfer function between a point of light and the camera is not translation invariant, which I think complicates many issues when one is performing this kind of light multiplexing.
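
For readers who want a feel for what CASSI actually measures, here is a minimal sketch of a single-disperser CASSI-style forward model: a coded aperture masks each spectral band, the dispersive element shears each band by a wavelength-dependent shift, and the detector sums everything into one 2D frame. The geometry below (a one-pixel shift per band and a random binary mask) is a simplifying assumption for illustration, not the double Amici prism system described in the paper.

import numpy as np

def cassi_forward(cube, mask):
    """cube: (ny, nx, nbands) spatio-spectral data; mask: (ny, nx) coded aperture."""
    ny, nx, nbands = cube.shape
    detector = np.zeros((ny, nx + nbands - 1))      # wider to hold the sheared bands
    for k in range(nbands):
        coded = cube[:, :, k] * mask                # per-band coded-aperture modulation
        detector[:, k:k + nx] += coded              # shear: band k shifted by k pixels
    return detector

# toy usage: a random cube observed through a random binary aperture
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 8))
mask = (rng.random((64, 64)) > 0.5).astype(float)
snapshot = cassi_forward(cube, mask)                # one 2D compressed measurement
print(snapshot.shape)                               # (64, 71)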

This paper studies photon-limited, hyperspectral intensity estimation and proposes a spatially and spectrally adaptive, nonparametric method to estimate hyperspectral intensities from Poisson observations. Specifically, our method searches over estimates defined over a family of recursive dyadic partitions in both the spatial and spectral domains, and finds the one that maximizes a penalized log likelihood criterion. The key feature of this approach is that the partition cells are anisotropic across the spatial and spectral dimensions so that the method adapts to varying degrees of spatial and spectral smoothness, even when the respective degrees of smoothness are not known a priori. The proposed approach is based on the key insight that the spatial boundaries and singularities exist in the same locations in every spectral band, even though the contrast or perceptibility of these features may be very low in some bands. The incorporation of this model into the reconstruction results in significant performance gains. Furthermore, for hyperspectral intensities that belong to the anisotropic Hölder-Besov function class, the proposed approach is shown to be near-minimax optimal. The upper bounds on the risk function, which is the expected squared Hellinger distance between the true intensity and the estimate obtained using the proposed approach, match the best possible lower bound up to a log factor for certain degrees of spatial and spectral smoothness. Experiments conducted on realistic data sets show that the proposed method can reconstruct the spatial and spectral inhomogeneities very well even when the observations are extremely photon-limited (i.e., less than 0.1 photon per voxel).

Wow, "less than 0.1 photon per voxel".

We now also have a Bregman solver for l_p, as introduced in "Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data" by Rick Chartrand. The abstract reads:

Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
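
As a rough illustration of the nonconvex ingredient, here is a sketch that recovers a sparse signal from a few random k-space (Fourier) samples by iterating a gradient step with a p-shrinkage operator, a common nonconvex surrogate for l_p thresholding with p < 1. This is a plain iterative-shrinkage stand-in for illustration, not the paper's Fourier-based Bregman iteration, and the threshold, p, and iteration count are arbitrary choices.

import numpy as np

def p_shrink(x, t, p):
    """Generalized shrinkage; reduces to soft thresholding when p = 1."""
    mag = np.maximum(np.abs(x), 1e-12)
    return x * np.maximum(mag - t * mag ** (p - 1.0), 0.0) / mag

n, m = 256, 64
rng = np.random.default_rng(2)
x_true = np.zeros(n)
support = rng.choice(n, 8, replace=False)
x_true[support] = rng.choice((-1.0, 1.0), size=8) * (1 + rng.random(8))

rows = np.sort(rng.choice(n, m, replace=False))       # random k-space sample locations
y = np.fft.fft(x_true, norm="ortho")[rows]            # the few Fourier-domain measurements

x = np.zeros(n, dtype=complex)
for _ in range(300):
    residual = np.fft.fft(x, norm="ortho")[rows] - y  # A x - y for the undersampled DFT A
    grad = np.zeros(n, dtype=complex)
    grad[rows] = residual
    x = p_shrink(x - np.fft.ifft(grad, norm="ortho"), t=0.02, p=0.5)

print(np.linalg.norm(x.real - x_true))                # small if recovery succeeded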

Here is an interesting geometric characterization of the TV solver using L1: The TVL1 Model: a Geometric Point of View by Vincent Duval, Jean-François Aujol, Yann Gousseau. The abstract reads:

The aim of this paper is to investigate the geometrical behavior of the TVL1 model used in image processing, by making use of the notion of Cheeger sets. This mathematical concept was recently related to the celebrated Rudin-Osher-Fatemi image restoration model, yielding important advances in both fields. We provide the reader with a geometrical characterization of the TVL1 model. We show that, in the convex case, exact solutions of the TVL1 problem are given by an opening followed by a simple test over the ratio perimeter/area. Shapes remain or suddenly vanish depending on this test. As a result of our theoretical study, we suggest a new and efficient numerical scheme to apply the model to digital images. As a by-product, we justify the use of the TVL1 model for image decomposition, by establishing a connection between the model and morphological granulometry. Eventually, we propose an extension of TVL1 into an adaptive framework, in which we derive some theoretical results.
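
The perimeter/area test is easy to illustrate numerically. Below is a toy check on disks: a disk of radius r has P/A = 2/r, so small disks (large ratio) are the ones that suddenly vanish. The cutoff value and the direction of the inequality are assumptions chosen for illustration; the paper pins down the exact threshold in terms of the TVL1 parameter.

import math

def survives_tvl1(perimeter, area, cutoff):
    """A convex shape is kept when its perimeter/area ratio stays below the cutoff."""
    return perimeter / area < cutoff

cutoff = 0.5
for r in (1.0, 3.0, 5.0, 10.0):
    perimeter, area = 2 * math.pi * r, math.pi * r ** 2   # P/A = 2 / r for a disk of radius r
    print(f"radius {r:>4}: P/A = {perimeter / area:.2f}, kept = {survives_tvl1(perimeter, area, cutoff)}")
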
Finally, of related interest to CS, we have: Irregular and multi-channel sampling of operators by Yoon Mi Hong, Götz E. Pfander. The abstract reads:

The classical sampling theorem for bandlimited functions has recently been generalized to apply to so-called bandlimited operators, that is, to operators with band-limited Kohn-Nirenberg symbols. Here, we discuss operator sampling versions of two of the most central extensions to the classical sampling theorem. In irregular operator sampling, the sampling set is not periodic with uniform distance. In multi-channel operator sampling, we obtain complete information on an operator by multiple operator sampling outputs.
and Revisiting Norm Estimation in Data Streams by Daniel M. Kane, Jelani Nelson, David P. Woodruff. The abstract reads:

The problem of estimating the pth moment F_p (p nonnegative and real) in data streams is as follows. There is a vector x which starts at 0, and many updates of the form x_i <- x_i + v come sequentially in a stream. The algorithm also receives an error parameter 0 < eps < 1. The goal is then to output an approximation with relative error at most eps to F_p = ||x||_p^p. Previously, it was known that polylogarithmic space (in the vector length n) was achievable if and only if p <= 2. We make several new contributions in this regime, including:
(*) An optimal space algorithm for 0 < p < 2, which, unlike previous algorithms which had optimal dependence on 1/eps but sub-optimal dependence on n, does not rely on a generic pseudorandom generator.
(*) A near-optimal space algorithm for p = 0 with optimal update and query time.
(*) A near-optimal space algorithm for the "distinct elements" problem (p = 0 and all updates have v = 1) with optimal update and query time.
(*) Improved L_2 to L_2 dimensionality reduction in a stream.
(*) New 1-pass lower bounds to show optimality and near-optimality of our algorithms, as well as of some previous algorithms (the "AMS sketch" for p = 2, and the L_1-difference algorithm of Feigenbaum et al.). As corollaries of our work, we also obtain a few separations in the complexity of moment estimation problems: F_0 in 1 pass vs. 2 passes, p = 0 vs. p > 0, and F_0 with strictly positive updates vs. arbitrary updates.
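
Since the abstract mentions the classic "AMS sketch" for p = 2, here is a minimal toy version: each counter accumulates the stream with random +/-1 signs, and the average of the squared counters estimates F_2 = ||x||_2^2. Note that this toy stores a sign per (counter, index) pair for clarity; the real sketch uses 4-wise independent hash functions to keep the space small, and the number of counters here is an arbitrary choice.

import random

class AMSSketch:
    def __init__(self, num_counters=200, seed=0):
        self.rng = random.Random(seed)
        self.signs = {}                           # lazily drawn +/-1 sign per (counter, index)
        self.z = [0.0] * num_counters

    def _sign(self, j, i):
        key = (j, i)
        if key not in self.signs:
            self.signs[key] = self.rng.choice((-1, 1))
        return self.signs[key]

    def update(self, i, v):
        """Process a stream update x_i <- x_i + v."""
        for j in range(len(self.z)):
            self.z[j] += self._sign(j, i) * v

    def estimate(self):
        """Each z_j^2 is an unbiased estimate of F_2; averaging reduces the variance."""
        return sum(zj * zj for zj in self.z) / len(self.z)

# toy usage: a short stream of (index, increment) updates
sketch = AMSSketch()
for i, v in [(3, 5), (7, -2), (3, 1), (9, 4)]:
    sketch.update(i, v)
print(sketch.estimate())        # should be near 6^2 + 2^2 + 4^2 = 56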


Laurent Jacques mentions that the poster presentation at ICASSP'09 of the work on CMOS Compressed Imaging by Random Convolution, by Laurent Jacques, Pierre Vandergheynst, Alexandre Bibet, Vahid Majidzadeh, Alexandre Schmid, and Yusuf Leblebici, is now available.

Holger Rauhut is organizing a special session on mathematical aspects of compressed sensing at SampTA09, May 18 - 22, 2009. He is also giving a short course on "Compressive sensing and structured random matrices" at the Summer School on "Theoretical Foundations and Numerical Methods for Sparse Recovery", Linz, August 31 - September 4, 2009.

At Rice, James H. McClellan will talk about 

Compressive Sensing Data Acquisition and Imaging for GPRs

on Monday, May 11, 2009
from 3:00 PM to 4:00 PM
1070 Duncan Hall
Rice University
6100 Main St
Houston, Texas, USA


New data acquisition and imaging strategies are presented for ground penetrating radars (GPRs) used in subsurface imaging. Both stepped-frequency continuous-wave (SFCW) and impulse GPRs are considered. When the target space is sparse, i.e., it contains a small number of point-like targets, the theory of compressive sensing (CS) tells us that making measurements at only a small number of random frequencies (or times) is sufficient to construct an image of the target space by solving a convex optimization problem which enforces sparsity through L1 minimization. Random spatial sampling of the synthetic aperture can also be performed as part of this data acquisition method, which greatly reduces the GPR acquisition time at the expense of higher computational costs. Imaging results for both simulated and experimental GPR data exhibit less clutter and better resolution than the standard migration and backprojection methods.
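
To give a feel for the SFCW version of this idea, here is a minimal sketch: a sparse vector of point targets on a delay grid is measured at a handful of random frequencies, then reconstructed by l1-regularized least squares (plain ISTA here). The grid, frequency band, regularization weight and solver are all illustrative assumptions rather than the setup of the talk.

import numpy as np

rng = np.random.default_rng(3)
n_delays, n_freqs = 200, 30                       # target-space grid vs. number of random frequencies
delays = np.linspace(0, 1e-7, n_delays)           # candidate two-way travel times (seconds)
freqs = rng.uniform(0.5e9, 2.0e9, n_freqs)        # a few random frequencies in a 0.5-2 GHz band

A = np.exp(-2j * np.pi * np.outer(freqs, delays)) # SFCW measurement matrix (frequency x delay)
x_true = np.zeros(n_delays)
x_true[[40, 90, 150]] = [1.0, 0.6, 0.8]           # three point-like targets
y = A @ x_true                                    # the compressed frequency-domain data

# plain ISTA: gradient step on ||Ax - y||^2, then soft thresholding (the l1 proximal step)
lam = 0.01 * np.max(np.abs(A.conj().T @ y))       # small l1 weight relative to the data
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n_delays, dtype=complex)
for _ in range(500):
    x = x - step * (A.conj().T @ (A @ x - y))
    mag = np.maximum(np.abs(x), 1e-12)
    x = x * np.maximum(1 - step * lam / mag, 0)   # complex-valued soft thresholding

print(np.argsort(np.abs(x))[-3:])                 # the largest entries should sit near the true delays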


