Thursday, January 31, 2013

ROKS 2013 International Workshop on Advances in Regularization, Optimization, Kernel Methods and Support Vector Machines:

Marco Signoretto sent me the following: 

Dear Igor,
we are organising ROKS 2013, a workshop that includes compressed sensing in its scope (see http://www.esat.kuleuven.be/sista/ROKS2013/ for details).
Can you please post the announcement below in Nuit Blanche?
Best Regards
Marco
dr. Marco Signoretto
ESAT - SCD - SISTA,
Systems, Models and Control
Katholieke Universiteit Leuven,
Kasteelpark Arenberg 10, B-3001 LEUVEN - HEVERLEE (BELGIUM)


Absolutely Marco ! I presume the hashtag for the meeting will be #roks2013. From the page, here is the scope of the meeting:

Welcome to ROKS-2013
One area of high impact both in theory and applications is kernel methods and support vector machines. Optimization problems, learning and representations of models are key ingredients in these methods. On the other hand considerable progress has also been made on regularization of parametric models, including methods for compressed sensing and sparsity, where convex optimization plays a prominent role. The aim of ROKS-2013 is to provide a multi-disciplinary forum where researchers of different communities can meet, to find new synergies along these areas, both at the level of theory and applications.
The scope includes but is not limited to:
  • Regularization: L2, L1, Lp, lasso, group lasso, elastic net, spectral regularization, nuclear norm, others
  • Support vector machines, least squares support vector machines, kernel methods, Gaussian processes and graphical models
  • Lagrange duality, Fenchel duality, estimation in Hilbert spaces, reproducing kernel Hilbert spaces, Banach spaces, operator splitting
  • Optimization formulations, optimization algorithms
  • Supervised, unsupervised, semi-supervised learning, inductive and transductive learning
  • Multi-task learning, multiple kernel learning, choice of kernel functions, manifold learning
  • Prior knowledge incorporation
  • Approximation theory, learning theory, statistics
  • Matrix and tensor completion, learning with tensors
  • Feature selection, structure detection, regularization paths, model selection
  • Sparsity and interpretability
  • On-line learning and optimization
  • Applications in machine learning, computational intelligence, pattern analysis, system identification, signal processing, networks, data mining, others
Co-sponsored by ERC Advanced Grant and KU Leuven



Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Three-dimensional ghost imaging ladar

Looks like we have an improvement over A 2cm resolution at 1km range Ghost Imaging LIDAR: a third dimension. I note that they have not yet used a compressive sensing approach to further reduce the number of measurements. It would be very handy, since TV minimization would be ideal for that purpose. I am sure it is just a question of time.
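To make the TV point concrete, here is a minimal sketch of TV-regularized recovery from random projections: a smoothed 1-D total-variation penalty minimized by plain gradient descent. Everything here (the function names, the choice of lam, the toy piecewise-constant signal) is illustrative and not from the paper.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed 1-D total-variation penalty sum_i |x_{i+1} - x_i|."""
    d = np.diff(x)
    s = d / np.sqrt(d * d + eps)   # smooth surrogate for sign(d)
    g = np.zeros_like(x)
    g[:-1] -= s                    # each difference pulls on its two endpoints
    g[1:] += s
    return g

def tv_reconstruct(A, y, lam=0.05, iters=2000):
    """Minimize ||A x - y||^2 + lam * TV(x) by gradient descent (illustrative)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * (A.T @ (A @ x - y) + lam * tv_grad(x))
    return x

# Toy demo: recover a piecewise-constant signal from m < n random projections.
rng = np.random.default_rng(0)
n, m = 100, 40
x_true = np.zeros(n)
x_true[20:50] = 1.0
x_true[70:90] = -0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = tv_reconstruct(A, y)
```

A proximal solver (e.g. Chambolle-Pock) would be the serious choice; gradient descent on a smoothed TV keeps the sketch short.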




Compared with two-dimensional imaging, three-dimensional imaging is much more advantageous to catch the characteristic information of the target for remote sensing. We report a range-resolving ghost imaging ladar system together with the experimental demonstration of three-dimensional remote sensing with a large field of view. The experiments show that, by measuring the correlation function of intensity fluctuations between two light fields, a three-dimensional map at about 1.0 km range with 25 cm resolution in lateral direction and 60 cm resolution in axial direction has been achieved by time-resolved measurements of the reflection signals.



Wednesday, January 30, 2013

When Does Computational Imaging Improve Performance?

If there is one thing to be said about compressive sensing, it is that it provides a new landscape for exploring parameters in imaging and other areas of signal processing. Imaging is the technology getting the most press, so we are really in need of a study that maps out the regions of phase space in which compressive sensing is interesting and those where it is .... less interesting compared to direct imaging. To help in this exploration, here is When Does Computational Imaging Improve Performance? by Oliver Cossairt, Mohit Gupta, and Shree K. Nayar (the pdf is here)
The abstract reads:
A number of computational imaging techniques have been introduced to improve image quality by increasing light throughput. These techniques use optical coding to measure a stronger signal level. However, the performance of these techniques is limited by the decoding step, which amplifies noise. While it is well understood that optical coding can increase performance at low light levels, little is known about the quantitative performance advantage of computational imaging in general settings. In this paper, we derive the performance bounds for various computational imaging techniques. We then discuss the implications of these bounds for several real-world scenarios (illumination conditions, scene properties and sensor noise characteristics). Our results show that computational imaging techniques do not provide a significant performance advantage when imaging with illumination brighter than typical daylight. These results can be readily used by practitioners to design the most suitable imaging systems given the application at hand.
and the magic formula is


There are obvious caveats to this but this is a very nice work and much appreciated contribution.
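To see where the multiplexing advantage comes from (and why bright-light photon noise can erase it), here is a small simulation, not from the paper, comparing direct one-at-a-time measurement of a scene with Hadamard S-matrix multiplexing under purely additive read noise. Under these assumptions the classic (n+1)^2/(4n) multiplex advantage appears; it is exactly this gain that signal-dependent photon noise erodes.

```python
import numpy as np

def hadamard(k):
    """Sylvester construction of a 2^k x 2^k Hadamard matrix."""
    H = np.array([[1.0]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

# S-matrix: 0/1 masks derived from the Hadamard matrix (first row/col dropped).
S = (1 - hadamard(5)[1:, 1:]) / 2
n = S.shape[0]                        # n = 31 scene elements
Sinv = np.linalg.inv(S)

rng = np.random.default_rng(1)
x = rng.uniform(1.0, 2.0, n)          # scene intensities
sigma = 0.2                           # additive read-noise std per measurement
trials = 2000

mse_direct = mse_mux = 0.0
for _ in range(trials):
    y_direct = x + sigma * rng.standard_normal(n)     # measure one element at a time
    y_mux = S @ x + sigma * rng.standard_normal(n)    # measure multiplexed half-sums
    mse_direct += np.mean((y_direct - x) ** 2)
    mse_mux += np.mean((Sinv @ y_mux - x) ** 2)

gain = mse_direct / mse_mux           # approaches (n+1)^2 / (4n), about 8.3 for n = 31
```

Replacing the fixed sigma with Poisson noise on each measurement (whose variance grows with the measured sum) is the quickest way to reproduce the paper's qualitative conclusion.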





A Randomized Parallel Algorithm with Run Time $O(n^2)$ for Solving an $n \times n$ System of Linear Equations - implementation -



Around the time Curiosity landed on Mars, an Around the blogs in 80 summer hours entry pointed to Dick Lipton's blog entry entitled A New Way To Solve Linear Equations, where he mentioned a stunning preprint by Prasad Raghavendra. Yesterday, Riccardo Murri commented the following: 

Benjamin Jonen and I have implemented some example code in Python for this solver; you can read the source files at:
https://code.google.com/p/gc3pie/source/browse/#svn%2Ftrunk%2Fgc3pie%2Fexamples%2Foptimizer%2Flinear-systems-solver
There’s also an implementation of the algorithm as modified by Fliege (arXiv 1209.3995v1), which works over the reals, and some variants with which we are experimenting.

In this note, following suggestions by Tao, we extend the randomized algorithm for linear equations over prime fields by Raghavendra to a randomized algorithm for linear equations over the reals. We also show that the algorithm can be parallelized to solve a system of linear equations $A x = b$ with a regular $n \times n$ matrix $A$ in time $O(n^2)$, with probability one. Note that we do not assume that $A$ is symmetric.
Several comments here. This is the second time I have seen a preprint born out of a comment on a blog; I wonder how many there are. Second, while the randomized algorithm works for square matrices, I wonder if it still works for short and fat matrices (underdetermined linear systems of equations), and how easy it is to obtain a solution from a certain family of vectors. In other words, does it still work if one chooses only vectors that are 5-sparse, or that have a Bernoulli distribution? I am sure y'all get my drift. 
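For readers who want to play with randomized solvers, here is a sketch of a different but classic one: the Strohmer-Vershynin randomized Kaczmarz method. It is not the Raghavendra/Fliege algorithm, but it gives a feel for row-sampling approaches and is easy to modify for the fat-matrix experiments wondered about above.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=20000, seed=0):
    """Randomized Kaczmarz: at each step, project the iterate onto the
    hyperplane of one row, sampled with probability proportional to ||a_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum('ij,ij->i', A, A)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

# Toy demo: a well-conditioned, consistent overdetermined system.
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
```

Note the per-iteration cost is O(n), so a full solve is roughly O(n^2) per "sweep"; the expected convergence rate depends on the scaled condition number of A.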



Image Credit: NASA/JPL-Caltech
This image was taken by Navcam: Right A (NAV_RIGHT_A) onboard NASA's Mars rover Curiosity on Sol 172 (2013-01-29 21:21:57 UTC).
Full Resolution

Welcome back to the Jungle



Derin Babacan sent me the following:

Hello Igor,
Hope everything is well with you. 
A researcher contacted me looking for the code for our paper "Sparse Bayesian Methods for Low-Rank Matrix Estimation". Apparently he could not find it in your matrix factorization jungle page. I remember that you featured it in Nuit Blanche. Can you please add it to the jungle page as well? 
Here is the info again:
Thanks!
The first time this algorithm was featured was in June of last year [1]; I forgot to add it, and then Derin graduated. The institution from which he graduated removed the page altogether. Several lessons can be learned from this situation. As a student or postdoc, if you are building a webpage, you are building a brand (yours): why would you let that brand be hostage to future departmental IT decisions? You can use your university-owned page to point to an external page, and then jump off the university page when you leave that institution. Second, if you have an implementation of an advanced matrix factorization, please remind me kindly that it ought to be listed on the Matrix Factorization Jungle page (the same goes for the Big Picture in Compressive Sensing). Third, if you happen to change webpages for specific solvers, please let me know and I'll update accordingly.

[1] Sparse Bayesian Methods for Low-Rank Matrix Estimation and Bayesian Group-Sparse Modeling and Variational Inference - implementation -

Credit: Wikipedia


Tuesday, January 29, 2013

Phase Diagram and Approximate Message Passing for Blind Calibration and Dictionary Learning

Lenka Zdeborová sent me the following:

Dear Igor, 
Surely you will find this paper yourself as you always do ... but let me send a link anyway: http://arxiv.org/abs/1301.5898 . We think that this is a nice contribution to the matrix factorization jungle ...! For calibration (and others, completion etc.) the algorithm works for a number of samples just a bit larger than the trivial counting bound, which is much much lower than anything else we have seen. It needs some tuning to work really well in the dictionary learning case, but we think this is a very promising track.
At the same time, and again, if we missed some crucial references on the topic, please let us know.

Best!

We consider dictionary learning and blind calibration for signals and matrices created from a random ensemble. We study the mean-squared error in the limit of large signal dimension using the replica method and unveil the appearance of phase transitions delimiting impossible, possible-but-hard and possible inference regions. We also introduce an approximate message passing algorithm that asymptotically matches the theoretical performance, and show through numerical tests that it performs very well, for the calibration problem, for tractable system sizes.



Monday, January 28, 2013

Tensor completion based on nuclear norm minimization for 5D seismic data reconstruction

Nadia Kreimer sent me the following:

Hello Igor,

I wanted to know if you could post this paper on your blog http://nuit-blanche.blogspot.com/.
It has been recently submitted to the Geophysics journal. It is about tensor completion using nuclear norm for the reconstruction of seismic data. It is the first time this technique is tried in seismic data, so we thought it would be interesting if it appeared on your Matrix Factorization group.
Me and the other authors belong to the SAIG Consortium at the University of Alberta (http://saig.physics.ualberta.ca/).

Thanks, regards,
Nadia Kreimer
Thanks Nadia !

Nadia also tells me that they will probably release their code later. At which point, I will list it in the Matrix Factorization Jungle page. Here is the paper:




Prestack seismic data are multidimensional signals that can be described as a low-rank fourth-order tensor in the frequency-space domain. Tensor completion strategies can be used to recover unrecorded observations and to improve the signal-to-noise ratio of prestack volumes. Additionally, tensor completion can be posed as an inverse problem and solved using convex optimization algorithms. The objective function for this problem contains a data misfit term and a term that serves to minimize the rank of the tensor. The alternating direction method of multipliers offers automatic rank determination and it is used to obtain a reconstructed seismic volume. The proposed method converges to a good approximation of the rank of the tensor given the input data. We present synthetic examples to illustrate the behaviour of the algorithm in terms of trade-off parameters that control the quality of the reconstruction. We further illustrate the performance of the algorithm in a land data survey from Alberta, Canada.
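For readers who want to experiment with the nuclear-norm ingredient, here is a hedged matrix (not tensor) analogue: soft-impute-style completion via singular value soft-thresholding, in the spirit of Mazumder et al. and the Cai-Candès-Shen SVT algorithm. All names and parameter choices (lam, the rank-2 toy matrix) are illustrative only, not the authors' method.

```python
import numpy as np

def soft_impute(M_obs, mask, lam=1.0, iters=300):
    """Nuclear-norm matrix completion by iterated singular value
    soft-thresholding: solves min 0.5*||P_mask(X - M)||_F^2 + lam*||X||_*."""
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        Y = np.where(mask, M_obs, X)              # fill unobserved entries with current X
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - lam, 0)) @ Vt     # shrink singular values toward zero
    return X

# Toy demo: complete a rank-2 matrix from about half of its entries.
rng = np.random.default_rng(3)
n, r = 40, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.5
X_hat = soft_impute(M * mask, mask)
```

The soft-thresholding step is the proximal operator of the nuclear norm, which is why rank determination falls out automatically, the point the abstract makes for the ADMM tensor version.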



Friday, January 25, 2013

Linear Bandits in High Dimension and Recommendation Systems

I recently stumbled on a presentation by Andrea Montanari on Collaborative Filtering: Models and Algorithms, where I learned that his team won the CAMRa 2011 challenge 


If an AMP beat the Netflix winner, I wonder if another AMP instance could beat SVDFeature at the latest KDD Cup? The second part of the presentation is provided in more detail in Linear Bandits in High Dimension and Recommendation Systems by Yash Deshpande and Andrea Montanari

A large number of online services provide automated recommendations to help users to navigate through a large collection of items. New items (products, videos, songs, advertisements) are suggested on the basis of the user’s past history and –when available– her demographic profile. Recommendations have to satisfy the dual goal of helping the user to explore the space of available items, while allowing the system to probe the user’s preferences.We model this trade-off using linearly parametrized multi-armed bandits, propose a policy and prove upper and lower bounds on the cumulative “reward” that coincide up to constants in the data poor(high-dimensional) regime. Prior work on linear bandits has focused on the data rich (low-dimensional)regime and used cumulative “risk” as the figure of merit. For this data rich regime, we provide a simple modification for our policy that achieves near-optimal risk performance under more restrictive assumptions on the geometry of the problem. We test (a variation of) the scheme used for establishing achievability on the Netflix and MovieLens datasets and obtain good agreement with the qualitative predictions of the theory we develop.
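The explore/probe trade-off above can be sketched with a linearly parametrized bandit policy in the LinUCB style; this is a generic optimistic policy, not the specific policy of the paper, and all parameter choices (alpha, the unit-norm arms, the noise level) are assumptions for the toy example.

```python
import numpy as np

def linucb(arms, theta, noise=0.1, alpha=1.0, T=2000, seed=4):
    """LinUCB for a linear bandit: expected reward of arm x is <theta, x>.
    Pick the arm maximizing (estimated reward + ridge-regression confidence bonus)."""
    rng = np.random.default_rng(seed)
    d = arms.shape[1]
    A = np.eye(d)                 # ridge-regularized Gram matrix
    b = np.zeros(d)
    total = 0.0
    for _ in range(T):
        Ainv = np.linalg.inv(A)
        th_hat = Ainv @ b         # ridge estimate of theta
        bonus = np.sqrt(np.einsum('ij,jk,ik->i', arms, Ainv, arms))
        x = arms[np.argmax(arms @ th_hat + alpha * bonus)]
        r = x @ theta + noise * rng.standard_normal()
        A += np.outer(x, x)       # update sufficient statistics
        b += r * x
        total += r
    return total / T

# Toy demo: 20 unit-norm arms in 5 dimensions.
rng = np.random.default_rng(5)
d, K = 5, 20
arms = rng.standard_normal((K, d))
arms /= np.linalg.norm(arms, axis=1, keepdims=True)
theta = rng.standard_normal(d)
theta /= np.linalg.norm(theta)
avg_reward = linucb(arms, theta)
best = np.max(arms @ theta)
```

The data-poor regime the paper studies corresponds to d large relative to the horizon, where this kind of confidence-ellipsoid bonus must be paired with structural assumptions to remain useful.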



Spread spectrum compressed sensing MRI using chirp radio frequency pulses




Xiaobo Qu sent me the following:


Dear Igor, 
Thanks for maintaining your blog to share the progress on sparse signal processing and related fields. I really got lots of meaningful information from this blog.
Recently, we did some work on spread spectrum compressed sensing MRI using chirp pulses. The original spread spectrum compressed sensing MRI was previously published in "G. Puy, J. Marques, R. Gruetter, J. Thiran, D. Van De Ville, P. Vandergheynst, and Y. Wiaux, Spread spectrum magnetic resonance imaging, IEEE Trans. Med. Imaging, vol. 31, pp. 586-598, 2012.", which reduces the mutual coherence between the sensing and sparsity bases. A second order shim coil is used to produce the spread spectrum effect. In our work, we produce the spread spectrum using chirp pulses. This controls the energy distribution of k-space in MRI more easily by setting the chirp pulse bandwidth.
Here is the information of this paper.
Authors: Xiaobo Qu, Ying Chen, Xiaoxing Zhuang, Zhiyu Yan, Di Guo, Zhong Chen
Abstract: Compressed sensing has shown great potential in reducing data acquisition time in magnetic resonance imaging (MRI). Recently, a spread spectrum compressed sensing MRI method modulates an image with a quadratic phase. It performs better than the conventional compressed sensing MRI with variable density sampling, since the coherence between the sensing and sparsity bases are reduced. However, spread spectrum in that method is implemented via a shim coil which limits its modulation intensity and is not convenient to operate. In this letter, we propose to apply chirp (linear frequency-swept) radio frequency pulses to easily control the spread spectrum.
To accelerate the image reconstruction, an alternating direction algorithm is modified by exploiting the complex orthogonality of the quadratic phase encoding. Reconstruction on the acquired data demonstrates that more image features are preserved using the proposed approach than those of conventional CS-MRI.

Thanks for sharing.
Best regards,
Xiaobo
Ph.D., Assistant Professor
Dept. of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, China 
Postal Address: P.O. Box 979, Xiamen University, Xiamen, Fujian, China
Office room: Jiageng Building #4-610
Postal Code: 361005

Thursday, January 24, 2013

Signal reconstruction in linear mixing systems with additive error metrics (video introduction)

ITA 2013 asks its presenters for a video introduction of their talks ahead of time. Here is Dror Baron and Jin Tan's video (see also All hopes may not be crushed all at once and It's not a bad reconstruction, just the end of an illusion...)





Structure-Based Bayesian Sparse Reconstruction - implementation -




Structure-Based Bayesian Sparse Reconstruction by Ahmed A. Quadeer and Tareq Y. Al-Naffouri. The abstract reads:
Sparse signal reconstruction algorithms have attracted research attention due to their wide applications in various fields. In this paper, we present a simple Bayesian approach that utilizes the sparsity constraint and a priori statistical information (Gaussian or otherwise) to obtain near optimal estimates. In addition, we make use of the rich structure of the sensing matrix encountered in many signal processing applications to develop a fast sparse recovery algorithm. The computational complexity of the proposed algorithm is relatively low compared with the widely used convex relaxation methods as well as greedy matching pursuit techniques, especially at a low sparsity rate.
The attendant code is here.



Wednesday, January 23, 2013

Around the blogs in 80 summer hours

The LinkedIn Compressive Sensing group has now more than 2000 members!

 The counts for communities gravitating around Nuit Blanche are:

Please, join the conversations !

Suresh tells us about an unexpected sampling gem: sampling from ℓp balls. Vladimir lets us know that Analog Devices Applies for Image Sensor Patents. Yes, Analog Devices. It's always CMOS, baby.

we also have: 

Danny: Graph Database Resources. Danny also let us know that GraphChi 2.0 is out.

Ever since the last Around the blogs in 80 summer hours, Nuit Blanche featured:


Image Credit: NASA/JPL/Space Science Institute
N00200866.jpg was taken on January 20, 2013 and received on Earth January 21, 2013. The camera was pointing toward SATURN-RINGS at approximately 552,510 miles (889,179 kilometers) away, and the image was taken using the CL1 and GRN filters.



Tuesday, January 22, 2013

Correspondence Differential Ghost Imaging

Another day, another compressive imaging instance:

Correspondence Differential Ghost Imaging by Ming-Fei Li, Yu-Ran Zhang, Kai-Hong Luo, Ling-An Wu and Heng Fan. The abstract reads:

Experimental data with digital masks and a theoretical analysis are presented for a nonlocal imaging scheme that we name correspondence differential ghost imaging (CDGI). It is shown that by conditional averaging of the information from the reference detector but with the negative signals inverted, the quality of the reconstructed images is, in general, superior to all other ghost imaging (GI) methods to date. The advantages of both differential GI and correspondence GI are combined, plus less data and shorter computation time are required to obtain equivalent quality images under the same conditions. This CDGI method offers a general approach applicable to all GI techniques, especially when objects with continuous gray tones are involved.
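For intuition, here is a toy computational ghost imaging reconstruction, not the CDGI algorithm itself: correlate the fluctuations of a single-pixel (bucket) signal with the known random illumination patterns, G(x) = <(B - <B>) I(x)>. All sizes and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 16                                            # image is n x n
obj = np.zeros((n, n))
obj[4:12, 6:10] = 1.0                             # simple transmissive object
T = obj.ravel()

M = 20000                                         # number of random illumination patterns
I = rng.random((M, n * n))                        # known speckle intensity patterns
B = I @ T                                         # bucket (single-pixel) detector signal

# Standard GI estimator: correlate bucket fluctuations with the patterns.
G = (B - B.mean()) @ (I - I.mean(axis=0)) / M
img = G.reshape(n, n)                             # bright where the object transmits
```

Differential GI normalizes B by the total reference intensity before correlating, and correspondence GI averages only a selected subset of patterns; the CDGI paper combines both ideas on top of this basic estimator.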



Monday, January 21, 2013

Metamaterial Apertures for Computational Imaging



You probably recall the use of metamaterials in the construction of flat lenses in Plenoptic Function Sensing Hacks? Well, Sylvain Gigan spotted this one on Friday: Metamaterial Apertures for Computational Imaging by John Hunt, Tom Driscoll, Alex Mrozac, Guy Lipworth, Matthew Reynolds, David Brady, and David R. Smith. The abstract reads:
By leveraging metamaterials and compressive imaging, a low-profile aperture capable of microwave imaging without lenses, moving parts, or phase shifters is demonstrated. This designer aperture allows image compression to be performed on the physical hardware layer rather than in the postprocessing stage, thus averting the detector, storage, and transmission costs associated with full diffraction-limited sampling of a scene. A guided-wave metamaterial aperture is used to perform compressive image reconstruction at 10 frames per second of two-dimensional (range and angle) sparse still and video scenes at K-band (18 to 26 gigahertz) frequencies, using frequency diversity to avoid mechanical scanning. Image acquisition is accomplished with a 40:1 compression ratio.

The thing to keep in mind is that even our current "conventional" imaging systems are already compressive of the plenoptic function: they take a 3-D scene with an infinite set of colors and turn it into a 2-D scene made up of a combination of three colors.










Sunday, January 20, 2013

Phased Array 2013 announcement

Greg Charvat let me know of the following:


Hi Igor, 
Wondering if you could help me get the word out that we've extended the abstract deadline to 2/1 and will work with those who need to await government approvals for their abstracts:
Our last convention in 2010 was amazing! Approximately 500 attendees, 200 papers, and nearly 40% of attendees were from Europe. Should be a good time again this year.
If any of your readers want to e-mail me directly about this they can feel free to do so, i will reply to every message: charvatg@gmail.com
.....
Greg

Sure Greg , the readers of the blog are now aware.




Friday, January 18, 2013

Fast Food: Approximating Kernel Expansion in Loglinear Time

You probably recall Fast Functions via Randomized Algorithms: Fastfood versus Random Kitchen Sinks? Well, Ana at VideoLectures.net just informed me that the video of Alex Smola's presentation is now available from the NIPS video site. We may have to wait for the presentation (some of it is here) though. In the meantime, here it is:

The ability to evaluate nonlinear function classes rapidly is crucial for nonparametric estimation. We propose an improvement to random kitchen sinks that offers O(n log d) computation and O(n) storage for n basis functions in d dimensions without sacrificing accuracy. We show how one may adjust the regularization properties of the kernel simply by changing the spectral distribution of the projection matrix. Experiments show that we achieve identical accuracy to full kernel expansions and random kitchen sinks 100x faster with 1000x less memory.
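A quick sketch of the random kitchen sinks baseline that Fastfood accelerates: random Fourier features whose inner products approximate a Gaussian (RBF) kernel. Fastfood replaces the dense Gaussian matrix W below with a product of Hadamard and diagonal matrices to get the O(n log d) cost; this sketch keeps the dense version for clarity.

```python
import numpy as np

def rks_features(X, D=4000, gamma=0.5, seed=7):
    """Rahimi-Recht random Fourier features: z(x)'z(y) approximates
    the RBF kernel exp(-gamma * ||x - y||^2) as D grows."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Spectral sampling: the RBF kernel's Fourier transform is N(0, 2*gamma*I).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0, 2 * np.pi, D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Toy check: compare the feature-space Gram matrix to the exact kernel.
rng = np.random.default_rng(8)
X = rng.standard_normal((50, 4))
Z = rks_features(X)
K_approx = Z @ Z.T
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq)
err = np.abs(K_approx - K_exact).max()
```

Once the features are computed, any linear method (ridge regression, linear SVM) on Z stands in for the full kernel machine, which is where the claimed memory and speed savings come from.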

Where are we now ?

Here is what I gleaned in the past two days that ought to give us a snapshot of what's coming and what it is we do not know.

Tim Gowers lets us know that he is joining the good guys: there is a group in Grenoble that has institutional support and is willing to go after this arXiv-overlay idea. We mentioned this idea here previously because this is the only way post peer review publishing can gain some traction and eventually become the norm. (h/t Thomas)

In the same vein, I don't know what Kool-Aid the folks at TechCrunch are drinking, but it does wonders. Here are two unconnected headlines:
We mentioned the passing of Aaron Swartz recently (Agents of Change); Philip Greenspun correctly states the central question of this case: Can we get government out of the copyright enforcement business?

If you recall Predicting the Future: The Steamrollers and its attendant part 2, you'll probably notice that a simple consequence of exome sequencing and dirt-cheap sequencers is the brand-new capability to identify, and even cure first, what used to be called orphan diseases (Researchers Study Mystery of the Toddler Who Won't Grow). Cheap sequencing means the ability to pinpoint more rapidly genes that are outside of some "normal" envelope. It sounds plausible that some of these orphan diseases might be attended to faster than some more common ones. In the same vein, Brian tells us why CCD sensors will become obsolete. We are not surprised (Do not mess with CMOS).

But as advanced as we think we are, we still don't know how batteries really work (Grounded Boeing 787 Dreamliners Use Batteries Prone to Overheating), we slowly realize that our fear is our worst enemy (Fukushima's Fallout of Fear) but that it is still the way we do business (Germany Repatriating Gold From NY, Paris 'In Case Of A Currency Crisis'), that some diseases for which phenotype classification is extremely hard (On the difficulty of Autism diagnosis: Can we plot this better ?) yield very different outcomes (Can Some Children 'Lose' Autism Diagnosis? New Evidence Says Yes), and that sometimes we don't know shit but we certainly ought to.

Finally, group testing (a subset of compressive sensing), one of the first problems studied for delineating what works and what doesn't in agriculture and the basis of much statistical theory, may soon get a boost from the gathering of more data (Kickstarting an Internet-of-Things for your garden). The Raspberry Pi, the new $35 computer, is getting to use Python (Pi-A-Sketch code review), and 3D printing is on a collision course with many societal issues (NY Congressman Introducing Ban On 3D-Printed High Capacity Gun Magazines), which also include copyrights....

Meantime, on Mars, HiRISE took a photo of Curiosity and its tracks (Pretty picture: new HiRISE view of Curiosity, sol 145). Wow.

Thursday, January 17, 2013

Fête Parisienne in Computation, Inference and Optimization: A Young Researchers' Forum

Francis Bach just sent me the following:

Hi Igor,
Mike Jordan and I are organizing a workshop that may be of interest to the Nuit Blanche community. Could you feature it in your blog?

Thanks!
Francis

Sure Francis ! Here is the announcement:


Fête Parisienne in Computation, Inference and Optimization: A Young Researchers' Forum

One-day workshop, March 20, 2013, at IHES, Bures-sur-Yvette, France Organized by Francis Bach (INRIA - ENS) and Michael Jordan (U.C. Berkeley)

Many modern data analysis problems lead to inference based on high-dimensional structured models, typically with large amounts of observed data. In these situations, often referred to as "Big Data" problems, computation, inference and optimization need to be studied together.

The workshop will focus on various facets of this challenging problem. The emerging links among the computational and inferential disciplines have given rise to a new generation of researchers whose approach is multi-disciplinary: this workshop aims to highlight their recent work and to bring together international participants who wish to contribute to further developments.

The workshop will be held on March 20, 2013, at the Institut des Hautes Études Scientifiques (http://www.ihes.fr/), located in Bures-sur-Yvette, close to Paris. It will consist of talks by the following invited speakers as well as a lunch-time poster session.

  • Sylvain Arlot (CNRS - Ecole Normale Supérieure)
  • Francois Caron (INRIA Bordeaux - Sud-Ouest)
  • Nicolas Chopin (CREST - ENSAE)
  • Aurélien Garivier (Institut Mathématique de Toulouse, Université Paul Sabatier)
  • Zaid Harchaoui (INRIA Grenoble - Rhône-Alpes)
  • Guillaume Obozinski (Ecole des Ponts - Paristech)
  • Igor Prünster (University of Torino)
  • Peter Richtarik (University of Edinburgh)
  • Aarti Singh (Carnegie Mellon University)
  • Yee Whye Teh (Oxford University)

Participants have to register on the workshop website
(http://www.ihes.fr/jsp/site/Portal.jsp?document_id=3270&portlet_id=14),
which contains all details regarding the meeting.



The #NIPS2012 Videos are out



Videolectures came through earlier than last year. Woohoo! Presentations relevant to Nuit Blanche were featured earlier here. Videos of the presentations for the Posner Lectures, Invited Talks and Oral Sessions of the conference are here. Videos of the presentations for the different workshops are here. Some videos are not available because the presenters have not given their permission to the good folks at Videolectures. If you know any of them, let them know the world is waiting.

Compressed Sensing with Correlation Between Measurements and Noise - implementation -





Existing convex relaxation-based approaches to reconstruction in compressed sensing assume that noise in the measurements is independent of the measurements themselves. We consider the case of noise correlated with the compressed measurements and introduce a simple technique for improvement of compressed sensing reconstruction from such measurements. The technique is based on a linear model of the correlation of additive noise with the measurements. The modification of the reconstruction algorithm based on this model is very simple and has negligible additional computational cost compared to standard reconstruction algorithms. The proposed technique reduces reconstruction error considerably in the case of correlated measurements and noise. Numerical experiments confirm the efficacy of the technique. The technique is demonstrated with application to low-rate quantization of compressed measurements, which is known to introduce correlated noise, and improvements in reconstruction error up to approximately 7 dB are observed for 1 bit/sample quantization.
The attendant code to replicate the figures is here.
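The paper's exact algorithm is in the linked code; as a toy illustration of why modeling measurement-correlated noise helps, here is a Bussgang-style linear decomposition of 1-bit quantization noise. This is a standard, swapped-in trick (not necessarily the authors' model): the quantizer output is written as alpha*y plus a component uncorrelated with y, and the least-squares reconstruction then uses alpha*A instead of A.

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 60, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = rng.standard_normal(n)
y = A @ x
q = np.sign(y)            # 1-bit quantization: the "noise" q - y is correlated with y

# Bussgang-style linear model: q = alpha * y + w, with w uncorrelated with y.
alpha = (q @ y) / (y @ y)

# Reconstruction ignoring the correlation vs. using the decorrelated model.
x_naive = np.linalg.lstsq(A, q, rcond=None)[0]
x_corr = np.linalg.lstsq(alpha * A, q, rcond=None)[0]
```

Because the effective forward operator for the quantized data is alpha*A rather than A, the naive reconstruction is systematically mis-scaled; the corrected one removes that bias at essentially no extra cost, which is the spirit of the paper's modification.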


