Thursday, December 18, 2008

CS: Ain't the Power of Interwebs Great or What?


Five examples of why the interwebs is great: a reader provides a CS solution to the 12-balls problem, a commenter on a blog pinpoints an obvious fact, another reader shares his paper before it is hosted on its home site, and authors provide access to a free book and a recent presentation.

* Marc Bernot, a reader of this blog, who works in the Observation Systems Research Division of the Thales Alenia Space Company, wrote the following:
I was not completely surprised by the title of your last post (the one about the 12 balls problem) because I recently came up with a solution/explanation with a small taste of CS. Maybe this will be of interest to you (or your readers if you think this is sufficiently understandable). Let us consider the following matrix:

    M = [ 1  0  0 -1  0 -1  1 -1  0  1 -1  1;
          0  1  0 -1 -1  0 -1  0  1  1  1 -1;
          0  0  1  0 -1 -1  0  1 -1 -1  1  1]

The three rows correspond to the 3 weighings and the 12 columns to the 12 balls. A 1 means putting the corresponding ball on the right plate of the balance, and a -1 on the left. For example, the first row means that balls (1,7,10,12) are put on the right and (4,6,8,11) on the left. The outputs of these three weighings are enough to determine which ball is different and whether it is lighter or heavier than the rest. Here is why: let x be the vector of masses of the 12 balls, all weighing m but one that weighs m'. We have x = m*(1 ... 1)' + (m'-m)*u, where u is a vector with only one nonzero coordinate, with value 1 (it is 1-sparse). Because each row of M has as many 1s as -1s, we have:

                                     Mx = (m'-m) Mu

What we have access to by weighing masses according to M is not exactly Mx, but the sign of Mx. For example, if ball 1 is heavier, the left plate is up during the first weighing and the sign of the first coordinate of Mx is +1. So the outputs of the weighings are converted this way: left plate up = 1, left plate down = -1, balanced = 0. The output is thus

                             sign(Mx)=sign[(m'-m)Mu].  

Since u is of the form (0 .. 0 1 0 .. 0), sign(Mx) is either one of the columns of M or the opposite of one of the columns of M.

Because of the way M was constructed, there is one and only one column that corresponds to sign(Mx) (or its opposite). The index of this column is the index of the faulty ball. If sign(Mx) has the same sign as Mu, the ball is heavier; it is lighter otherwise. About the particular construction of M:

Excluding (0,0,0)', (1,1,1)' and (-1,-1,-1)', there are 24 vectors left in {-1,0,1}^3. If we consider two vectors with opposite signs to be equivalent, then there are only 12 classes left. M was constructed so that its 12 columns represent the 12 equivalence classes (and the sum of the elements of each row is 0). Conclusion: we were able to identify any 1-sparse vector in {-1,0,1}^12 with 3 "sign measurements".
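Marc's decoding can be checked exhaustively in a few lines (a minimal sketch; the `decode` helper and variable names are mine, not Marc's):

```python
import numpy as np

# Marc's 3x12 weighing matrix: row = weighing, column = ball,
# +1 = ball on the right plate, -1 = ball on the left plate.
M = np.array([[1, 0, 0, -1,  0, -1,  1, -1,  0,  1, -1,  1],
              [0, 1, 0, -1, -1,  0, -1,  0,  1,  1,  1, -1],
              [0, 0, 1,  0, -1, -1,  0,  1, -1, -1,  1,  1]])

def decode(signs):
    """Map the 3 weighing outcomes sign(Mx) to (ball index, 'heavy'/'light')."""
    for j in range(12):
        if np.array_equal(signs, np.sign(M[:, j])):
            return j, "heavy"          # same sign as the column: heavier ball
        if np.array_equal(signs, -np.sign(M[:, j])):
            return j, "light"          # opposite sign: lighter ball
    raise ValueError("no matching column")

# Check all 24 cases: each of the 12 balls, heavier or lighter than the rest.
for ball in range(12):
    for delta in (+1, -1):
        x = np.ones(12)
        x[ball] += delta               # the faulty ball's mass offset m' - m
        signs = np.sign(M @ x).astype(int)
        assert decode(signs) == (ball, "heavy" if delta > 0 else "light")
print("all 24 faulty-ball scenarios decoded correctly")
```

Note that the rows of M summing to zero is what makes M @ x reduce to (m'-m) * M[:, ball], exactly as in Marc's derivation.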
Kudos to Marc. If I am following his construction correctly, the number of balls that can be checked with n weighings is (3^n - 3)/2, i.e. 1,743,392,199 balls with 20 weighings. Twenty measurements to find one unfit ball: it's like finding a needle in a haystack. The resolution of this problem reminds me of the 1-Bit Compressive Sensing paper by Petros Boufounos and Richard Baraniuk (related talks are "L1 Minimization Without Amplitude Information" and 1-Bit Compressive Sensing). Of related interest is the Group Testing in the Life Sciences document, of which one of the authors, Nicolas Thierry-Mieg, is the author of the STD algorithm mentioned in the Shifted Transversal Designs and Expander Graphs entry of Raghu Kainkaryam's blog. To give you an idea of the power of the web, look at the exchange in the comment section between Raghu and Radu Berinde. Radu is one of the authors of the Sparse Recovery Experiments with Sparse Matrices wiki.
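The (3^n - 3)/2 scaling can be checked by brute force for small n, counting the sign-equivalence classes exactly as in Marc's construction (a quick sketch; `num_balls` is my name for the count):

```python
from itertools import product

def num_balls(n):
    """Number of distinguishable balls with n weighings: sign-classes of
    {-1,0,1}^n after removing the zero vector and the two constant vectors."""
    return (3**n - 3) // 2

# Brute-force verification for small n: enumerate the vectors, pair each
# one with its negative, and count the resulting equivalence classes.
for n in range(2, 8):
    vecs = [v for v in product((-1, 0, 1), repeat=n)
            if any(v) and v != (1,) * n and v != (-1,) * n]
    classes = {frozenset((v, tuple(-c for c in v))) for v in vecs}
    assert len(classes) == num_balls(n)

print(num_balls(3))    # 12 balls with 3 weighings
print(num_balls(20))   # 1743392199 balls with 20 weighings
```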

* In the category "...if it walks like a duck and quacks like a duck...", I found an entry in the arxivblog that features the following paper: Ghost imaging with a single detector by Yaron Bromberg, Ori Katz and Yaron Silberberg (also here). The abstract reads:
We experimentally demonstrate pseudothermal ghost-imaging and ghost-diffraction using only a single single-pixel detector. We achieve this by replacing the high resolution detector of the reference beam with a computation of the propagating field, following a recent proposal by Shapiro [J. H. Shapiro, arXiv:0807.2614 (2008)]. Since only a single detector is used, this provides an experimental evidence that pseudothermal ghost imaging does not rely on non-local quantum correlations. In addition, we show the depth-resolving capability of this ghost-imaging technique.
If you look at the comment section, you'll notice a commentator named Wim making the connection to CS. As I replied in the comment section, if you:

  • reconstruct a 300 × 300 image from 16,000 measurements (about 18% of the original number of pixels),
  • obtain each measurement by the application of random masks,
  • need a specific nonlinear algorithm to do the reconstruction, and
  • use a single pixel (this is not an actual condition, but it clearly points to a similarity to the Rice single-pixel camera),

then one can safely say that what you are doing is Compressed Sensing. It would be fascinating to see how the Gerchberg-Saxton phase-retrieval algorithm mentioned in the paper compares to the recently featured Compressed Sensing Phase Retrieval of Matthew Moravec, Justin Romberg and Richard Baraniuk. Let us also note the authors' interest in detecting depth as well as in investigating nonlinear adaptive measurements, as witnessed in the conclusion section of the paper:

Furthermore, the use of a SLM enables the implementation of closed-loop feedback schemes, potentially reducing the number of realizations needed for image reconstruction.
More information and the movie of the reconstruction can be found here.
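The measurement scheme in the bullet points above can be mimicked in a few lines: random ±1 masks stand in for the SLM patterns, each detector reading is one inner product, and a sparse-recovery solver rebuilds the signal. This is a toy sketch only: a 1-D sparse vector stands in for the image, the dimensions are made up, and I use a generic orthogonal matching pursuit rather than the Gerchberg-Saxton reconstruction the paper actually employs.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 5             # "pixels", number of masks, sparsity of the scene

# Each measurement: project the scene onto a random +/-1 mask and record
# a single number, as a single-pixel detector would.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

x = np.zeros(n)                  # k-sparse stand-in for the image
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 3.0, k) * rng.choice([-1.0, 1.0], k)
y = A @ x                        # the m detector readings

# Generic orthogonal matching pursuit: greedily pick the mask column most
# correlated with the residual, then refit by least squares on that support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

With many more pixels than measurements, it is the nonlinear sparse solver, not the masks alone, that makes the reconstruction possible, which is the point of the duck analogy.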

* The papers accepted at ICASSP'09 are listed here; we have covered some of them before. Hadi Zayyani forwarded me this one: Bayesian Pursuit Algorithm for Sparse Representation by Hadi Zayyani, Massoud Babaie-Zadeh and Christian Jutten. The abstract reads:
In this paper, we propose a Bayesian Pursuit algorithm for sparse representation. It uses both the simplicity of the pursuit algorithms and optimal Bayesian framework to determine the active atoms in the sparse representation of the signal. We show that using the Bayesian Hypothesis testing to determine the active atoms from the correlations, leads to an efficient activity measure. Simulation results show that our suggested algorithm has the best performance among the algorithms which are implemented in our simulations.




If one were to decompose the signal into wavelets, it looks as though this approach would be very similar to the recent BOMP algorithm featured in Block-Sparsity: Coherence and Efficient Recovery by Yonina Eldar and Helmut Bolcskei. Hadi tells me that he is going to post an implementation of his algorithm on his page a little later. I will add the algorithm to the Big Picture page as soon as it is available.

* The book Natural Image Statistics by Aapo Hyvärinen, Jarmo Hurri, and Patrik Hoyer is released for free for the time being.

* Piotr Indyk and Milan Ruzic just released the slides of their FOCS presentation entitled Near-Optimal Sparse Recovery in the L1 norm, featuring some of the results of the Sparse Recovery Experiments with Sparse Matrices.



Credit Photo: Igor Carron. Coast of Greenland.
