Wednesday, May 05, 2010

CS: 54168.4 is the latest score for the MATLAB contest? Can you do better?

Robert, formerly anonymous in the previous thread, just wrote the following:

I have run some Compressed Sensing code on the MATLAB contest.

Basically, it is soft shrinkage with continuation (a decreasing threshold) in a simply implemented DWT, plus application of the pixel range constraint.

I use image blocks (e.g. 32x32 or 16x16) so that the SVD for inverting the imaging matrix can be handled in reasonable time.

Each block uses one DC measurement plus random pixel measurements covering 50% of the pixels.

The code is pretty straightforward.
There is a lot of "fail-safe" stuff in it to handle non-power-of-2 cases. There must be a lot of room for improvement.
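
To give a feel for the loop, here is a minimal sketch of that kind of iteration (illustrative only, not my exact entry; Phi is one block's measurement matrix, y its measurements, and haar_fwd/haar_inv, lambda0, decay, nIter are placeholder names):

x = zeros(size(Phi, 2), 1);                 % current estimate for one block
lambda = lambda0;                           % initial (large) threshold
for it = 1:nIter
    x = x + Phi' * (y - Phi * x);           % backproject the residual
    w = haar_fwd(x);                        % simple DWT (placeholder)
    w = sign(w) .* max(abs(w) - lambda, 0); % soft shrinkage
    x = haar_inv(w);                        % back to the pixel domain
    x = min(max(x, 0), 1);                  % pixel range constraint (pixels in [0,1] assumed)
    lambda = lambda * decay;                % continuation: decrease the threshold
end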


My current best score of 54168.4 is here:
http://www.mathworks.com/matlabcentral/contest/contests/2/submissions/4338

I made it pretty easy to vary parameters. I know time is short, but if you are curious feel free to mess around with the code.

I can only agree with "R" on the scoring and measuring process... I would have loved to see a real compressed sensing challenge. Plus, the penalty on computation time is huge; in the CS case, this encourages trading reconstruction quality for computation time.

Robert
Thanks Robert! All of Robert's entries are here. Changing the parameters and cloning his entries is very easy, as MathWorks did a great job at enabling the cloning of entries. It would be a stunning turn of events if any of the l1 algorithms could do well compared to the best score, which has been hovering around 28,000 for the past few days. The contest ends at 16:00 UTC, in about three hours and fifteen minutes (9 am PDT, 10 am MDT, 11 am CDT, 12 pm EDT). If you do have a better score, please let me know.

6 comments:

Eric Tramel said...

I was taking a look at Robert's code and one thing that I noticed was that he was using blocking to speed up the recovery process. This is a good approach, but I have a comment on the method.

The current implementation recovers every block independently, which does not take inter-block correlations into account. Taking a look at Lu Gan's work (Block Compressed Sensing) and Mun & Fowler's directional-transform version, you can see a marked improvement when you:

A. Threshold over the entire signal rather than each block independently.

B. Apply a smoothing step (e.g., a Wiener filter) at each iteration to smooth out blocking artifacts; see the sketch after this list.
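
For instance, one iteration with a global threshold plus smoothing might look like this (a rough sketch, not contest code; X is the full-image estimate, haar_fwd/haar_inv are placeholder transforms, and wiener2 is from the Image Processing Toolbox):

W = haar_fwd(X);                            % transform the whole image at once
W = sign(W) .* max(abs(W) - lambda, 0);     % one threshold over the entire signal
X = haar_inv(W);                            % back to the pixel domain
X = wiener2(X, [3 3]);                      % Wiener smoothing to tame blocking artifacts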

I'm working on an entry that takes this into consideration :P But I lack any real skill in optimization, so I don't know if it will turn out to be very competitive.

Anonymous said...

So, here is what I just typed into my text editor before reading Eric's comment :-)

(Summary: Given more time and effort CS solutions could have done better.)

"While I'm waiting for my final score (which I expect to be around 53500), I'd like to add a few comments on my algorithm to give an idea of the level of heuristic.

As mentioned before, I processed the data blockwise, mainly because of the lack of a fast transform for the given measurement setup (non-weighted sums of pixels).
The algorithm is basic soft thresholding plus backprojection with a decreasing threshold parameter. The degrees of freedom in this setup are the number of iterations, the sequence of decreasing thresholds, and a regularization parameter for the backprojection; quite simple.
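
Concretely, one regularized backprojection step could look like this (a sketch of the idea, not my entry; it assumes the per-block SVD [U, S, V] = svd(Phi, 'econ') is precomputed and alpha is the regularization parameter):

s = diag(S);                                % singular values of the block matrix
f = s ./ (s.^2 + alpha);                    % Tikhonov-filtered inverse spectrum
x = x + V * (f .* (U' * (y - Phi * x)));    % regularized step toward Phi*x = y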

The main drawbacks of this approach are:
A. Haar wavelet basis (an overcomplete wavelet frame would probably have performed better)

B. no regularization across the block borders (i.e. assembling transform matrices for bigger areas)

C. no heuristic intermediate or post-processing (smoothing, edge detection, ...)

So, as "R" mentioned, even with non-adaptive sampling one could probably have reached the performance of adaptive solvers, using more advanced algorithms (SPGL1, SL0, ..?), fast transformations, additional heuristics tuned to the problemset and/or additional computation time.

Not quite as devastating as the other anonymous poster would make us believe."

Robert

Eric Tramel said...

Well, another downside to my implementation was that it was written some time *after* the close of the contest, hah :)

The Vlad said...

Well, my score for 'vladirator10' (solving Laplace's equation with Dirichlet boundary conditions supplied by random pixel sampling) was around 41874, still quite a bit better than the 53500 Anonymous was expecting. So, it would have been interesting if a CS algorithm could beat that. (Of course, we can still make comparisons by running the 'runcontest(1)' command that came with the 'sensor' package.) As my algorithm didn't involve any adaptive sampling, I think it's not unreasonable to compare these reconstruction methods (L1 versus Laplace's equation).
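
For the curious, the idea fits in a few lines (a rough sketch, not the actual 'vladirator10' entry; img holds the sampled pixel values, mask is true at sampled locations, and padarray is from the Image Processing Toolbox):

X = img;                                    % initial guess
K = [0 1 0; 1 0 1; 0 1 0] / 4;              % 4-neighbour averaging kernel
for it = 1:2000                             % Jacobi-style relaxation toward a harmonic fill
    Xp = padarray(X, [1 1], 'replicate');   % replicate image borders
    X = conv2(Xp, K, 'valid');              % each pixel becomes the mean of its neighbours
    X(mask) = img(mask);                    % re-impose the samples (Dirichlet data)
end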

Igor said...

Yes, Vlad, you are right, we ought to check that.

Robert said...

Just for the record, my last attempt went somewhat off, and so my best score remains 53556.2.

It's a bit embarrassing (and I guess that was the Anonymous poster's point) that simple (non-adaptive) linear interpolation like this early code gets 36k-ish scores so effortlessly.
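
For reference, that kind of baseline takes only a few lines (a sketch with hypothetical names: xs, ys, vs are the sample coordinates and values, h and w the image size):

[Xq, Yq] = meshgrid(1:w, 1:h);              % full pixel grid
Z = griddata(xs, ys, vs, Xq, Yq, 'linear'); % linear interpolation of the scattered samples
Z(isnan(Z)) = mean(vs);                     % crude fill outside the convex hull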

But then, the "world of compressed sensing" doesn't end at 8 iterations of naïve IST (like in my code).
