Wednesday, March 11, 2009

CS: Terry Tao's blog, Spectral multiplexing method for digital snapshot spectral imaging, Tinkering with cheap and large SSDs/RAM and CS

Terry Tao has a new blog entry that summarizes the gist of a paper he co-wrote with Emmanuel Candes, mentioned yesterday in The Long Post: "The power of convex relaxation: Near-optimal matrix completion".


How about using a Canon EOS 30D digital camera and making it a hyperspectral camera with 32 bands? This is what this new paper is aiming for: Spectral multiplexing method for digital snapshot spectral imaging by Michael A. Golub, Menachem Nathan, Amir Averbuch, Eitan Lavi, Valery Zheludev, and Alon Schclar. The abstract reads:
We propose a spectral imaging method for piecewise uniform "macropixel" objects, which allows a regular digital camera to be converted into a digital snapshot spectral imager by equipping the camera with only a disperser and de-multiplexing algorithm. The method exploits a "multiplexed spectrum" intensity pattern, i.e. the superposition of spectra from adjacent different image points, formed on the image sensor of the digital camera. The spatial image resolution is restricted to a "macropixel" level in order to acquire both spectral and spatial data (i.e. an entire spectral cube) in a single snapshot. Results of lab experiments with a special macropixel object image, composed of small, spatially uniform squares, provide a first verification of the proposed spectral imaging method.
I am adding this hardware to the Compressive Sensing Hardware list.
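As I read the abstract, each sensor pixel records a superposition of the spectra of adjacent points within a macropixel, so the de-multiplexing step amounts to inverting a small linear system per macropixel. Here is a minimal sketch of that idea, assuming a known mixing matrix; the dimensions, the random matrix and the least-squares solve are my own illustrative placeholders, not the paper's calibration or algorithm.

import numpy as np

# Hypothetical demultiplexing sketch: the sensor readings y for one macropixel
# are modeled as a known mixing matrix A (set by the disperser) applied to the
# unknown spectrum x of that macropixel, y = A @ x. Sizes are assumptions.
n_bands = 32        # spectral bands sought (32 as mentioned above)
n_readings = 48     # sensor samples collected per macropixel (assumed)

rng = np.random.default_rng(0)
A = rng.uniform(size=(n_readings, n_bands))      # stand-in disperser response
x_true = rng.uniform(size=n_bands)               # unknown spectrum
y = A @ x_true + 1e-3 * rng.standard_normal(n_readings)  # noisy readings

# Least-squares recovery of one macropixel's spectrum.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.max(np.abs(x_hat - x_true)))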

When implementing a similar system called HyperGeoCam, a random lens imager with a prism-like diffuser enabling the mixing of multiple wavelengths, my main concern was how to calibrate the thing, i.e. find the PSF of the instrument. I am not sure I have the right answer yet, as I am aiming for an elegant solution, but I should probably continue tinkering around. For instance, as I was reading the article "24 Solid State Drives Open ALL of Microsoft Office In .5 Seconds", I could not help but ask a question I eventually posted on Twitter:
Does anybody know what sort of computation would be sped up by using an SSD as opposed to your average HD ? Large matrix-vector multiply?
SSDs are now used in netbooks as hard drive replacements, and we also have access to 32 GB RAM sticks. Could these technical improvements be on a par with GPU developments in compressive sensing and change some of the assumptions made early in CS? Is there a specific application in CS where reads are needed often but writes are not? Encoding with very large Gaussian matrices?
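To make that last question concrete, here is a small sketch of a read-heavy encoding step y = Phi x with a Gaussian measurement matrix too large for RAM: the matrix is written to disk once and then only streamed back in blocks, which is exactly the read-dominated pattern where an SSD should shine. The file name, sizes and block size below are my own assumptions for illustration.

import numpy as np

# Sketch of out-of-core CS encoding with a large Gaussian matrix on disk.
m, n, block = 4096, 65536, 512          # ~1 GB of float32 entries (assumed)
phi_path = "phi_gaussian.dat"           # hypothetical file name

# One-time write of the measurement matrix (the only write; reads dominate).
phi = np.memmap(phi_path, dtype=np.float32, mode="w+", shape=(m, n))
rng = np.random.default_rng(0)
for i in range(0, m, block):
    rows = min(block, m - i)
    phi[i:i + rows] = rng.standard_normal((rows, n)).astype(np.float32)
phi.flush()

# Encoding y = Phi @ x: stream the matrix from disk block by block, read-only.
x = rng.standard_normal(n).astype(np.float32)
phi_ro = np.memmap(phi_path, dtype=np.float32, mode="r", shape=(m, n))
y = np.empty(m, dtype=np.float32)
for i in range(0, m, block):
    y[i:i + block] = phi_ro[i:i + block] @ x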

Thanks to Zachary Harmany and Eric Tramel for their feedback on Twitter!
