Thursday, January 07, 2010

CS: ART and CS in CT, Regularization for Matrix Completion, Estimation of High-Dimensional Low-Rank Matrices


With regards to nonlinear solvers in CT, I got the following e-mail (reproduced here with permission as always) from Emil Sidky:
Hi Igor,

I saw your entry last week on the use of ART in CT. There's no contradiction there with what we wrote in our IP review [1]. While it may be true that there are specific protocols that use something other than FBP in CT, these are but a drop in the bucket. No one can really argue with the fact that FBP is the workhorse algorithm of medical CT, and considering the number of people who get scanned there is certainly room for substantial exposure reduction to the general population by engineering a general-purpose CT scanner with advanced image-reconstruction algorithms.

Cheers,
Emil

Emil Sidky is right: we ought to be looking at what constitutes the largest market share, and it looks like there is room for improvement there. However, if radiation dose is the only issue, I fear that the problem Richard Gordon articulated about ART's inability to topple FBP is the same problem a CS implementation would face. A CS system would therefore have to provide something else.
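For readers who want to see what ART actually does numerically, here is a minimal Kaczmarz-style sketch in Python. This is a textbook formulation, not any scanner vendor's implementation; the function name, parameters, and defaults are mine.

```python
import numpy as np

def art(A, b, n_sweeps=10, relax=1.0, x0=None):
    """Algebraic Reconstruction Technique (Kaczmarz sweeps) for A x = b.

    A     : (n_rays, n_pixels) system matrix of ray/pixel intersection lengths
    b     : (n_rays,) measured projection data
    relax : relaxation parameter, typically in (0, 2)
    Returns the reconstructed image as a flat vector x.
    """
    n_rays, n_pixels = A.shape
    x = np.zeros(n_pixels) if x0 is None else x0.astype(float).copy()
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(n_rays):
            if row_norms[i] == 0:
                continue
            # Project the current estimate onto the hyperplane of the i-th ray.
            residual = b[i] - A[i] @ x
            x += relax * (residual / row_norms[i]) * A[i]
    return x
```

The appeal relative to FBP is that the same loop works for incomplete or irregular ray sampling; the drawback, as Gordon's experience suggests, is the extra compute per reconstruction.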

In the previous post, Larry Wasserman commented that I had missed a paper related to the themes of this blog. Indeed, my arXiv search did not include the term "matrix completion" and therefore missed the two items below. This has been fixed: if you come to the blog, you can click on the Arxiv link on the right-hand side of the menu to see all the papers related to compressive sensing and attendant problems. One thing that is clear from this search is the large variety of fields involved. The first 25 entries of the search carry the following arXiv field labels:

physics.optics
cs.IT
cs.NA
quant-ph
cs.DS
math.OC
nlin.CG
math.MG
physics.med-ph
stat.ML
math.ST
Wow, talk about a diverse crowd! And these are just the fields that regularly post on arXiv.

Here are the two preprints I missed earlier:
Regularization for Matrix Completion by Raghunandan H. Keshavan, Andrea Montanari. The abstract reads:
We consider the problem of reconstructing a low rank matrix from noisy observations of a subset of its entries. This task has applications in statistical learning, computer vision, and signal processing. In these contexts, "noise" generically refers to any contribution to the data that is not captured by the low-rank model. In most applications, the noise level is large compared to the underlying signal and it is important to avoid overfitting. In order to tackle this problem, we define a regularized cost function well suited for spectral reconstruction methods. Within a random noise model, and in the large system limit, we prove that the resulting accuracy undergoes a phase transition depending on the noise level and on the fraction of observed entries. The cost function can be minimized using OPTSPACE (a manifold gradient descent algorithm). Numerical simulations show that this approach is competitive with state-of-the-art alternatives.
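The regularized cost in the abstract is tailored to the authors' OptSpace algorithm; as a rough illustration of the general idea (spectral initialization followed by gradient descent on a regularized, factored objective), here is a toy Python sketch. Everything in it, from the function name to the step size and regularizer, is my own simplification and not the paper's algorithm.

```python
import numpy as np

def complete_matrix(M_obs, mask, rank, lam=1e-2, n_iters=500, step=1e-3):
    """Toy regularized matrix completion via factored gradient descent.

    M_obs : (m, n) matrix holding the observed entries (zeros elsewhere)
    mask  : (m, n) boolean array, True where an entry is observed
    rank  : target rank r
    lam   : regularization weight, discourages overfitting the noise

    Minimizes ||P_mask(M_obs - X @ Y.T)||_F^2 + lam * (||X||_F^2 + ||Y||_F^2).
    """
    # Spectral initialization: top-r SVD of the zero-filled observed matrix.
    U, s, Vt = np.linalg.svd(M_obs, full_matrices=False)
    X = U[:, :rank] * np.sqrt(s[:rank])
    Y = Vt[:rank, :].T * np.sqrt(s[:rank])
    for _ in range(n_iters):
        R = mask * (X @ Y.T - M_obs)          # residual on observed entries only
        grad_X = 2 * (R @ Y) + 2 * lam * X
        grad_Y = 2 * (R.T @ X) + 2 * lam * Y
        X -= step * grad_X
        Y -= step * grad_Y
    return X @ Y.T
```

The regularization term is what keeps the factors from chasing the noise when only a small fraction of entries is observed, which is exactly the regime the abstract is concerned with.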

Estimation of High-Dimensional Low-Rank Matrices by Angelika Rohde, Alexandre Tsybakov. The abstract reads:
Suppose that we observe entries or, more generally, linear combinations of entries of an unknown $m\times T$ matrix $A$ corrupted by noise. We are particularly interested in the high-dimensional setting where the number $mT$ of unknown entries can be much larger than the sample size $N$. Motivated by several applications, we consider estimation of matrix $A$ under the assumption that it has small rank. This can be viewed as dimension reduction or sparsity assumption. In order to shrink towards a low-rank representation, we investigate penalized least squares estimators with a Schatten-$p$ quasi-norm penalty term, $p\leq 1$. We study these estimators under two possible assumptions -- a modified version of the restricted isometry condition and a uniform bound on the ratio "empirical norm induced by the sampling operator/Frobenius norm". The main results are stated as non-asymptotic upper bounds on the prediction risk and on the Schatten-$q$ risk of the estimators, where $q\in[p,2]$. The rates that we obtain for the prediction risk are of the form $rm/N$ (for $m=T$), up to logarithmic factors, where $r$ is the rank of $A$. The particular examples of multi-task learning and matrix completion are worked out in detail. The proofs are based on tools from the theory of empirical processes. As a by-product we derive bounds for the $k$th entropy numbers of the quasi-convex Schatten class embeddings $S_p^M\hookrightarrow S_2^M$, $p<1$, which are of independent interest.
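For the special case $p=1$ the Schatten penalty is the nuclear norm, and the penalized least squares estimator for the matrix completion design can be computed with a standard proximal-gradient (singular value soft-thresholding) loop. The sketch below covers only that $p=1$ case under an entry-sampling mask; it is not the paper's estimator for general $p\leq 1$, and the names and defaults are mine.

```python
import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: prox of tau * nuclear norm at Z."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def nuclear_pls(Y_obs, mask, lam, n_iters=200):
    """Proximal gradient for  min_A 0.5*||P_mask(Y_obs - A)||_F^2 + lam*||A||_*.

    Because the sampling operator is a projection, the smooth part has
    Lipschitz constant 1 and a unit step size is safe.
    """
    A = np.zeros_like(Y_obs, dtype=float)
    for _ in range(n_iters):
        grad = mask * (A - Y_obs)        # gradient of the data-fit term
        A = svt(A - grad, lam)           # proximal step on the nuclear norm
    return A
```

The choice of the penalty weight `lam` plays the same role as the noise-level-dependent tuning discussed in both abstracts: too small and the estimate overfits the noise, too large and it shrinks away the signal's rank.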

Thanks, Larry and Emil!
