Friday, February 27, 2015

Approximate Message Passing with Restricted Boltzmann Machine Priors

The great convergence is upon us. From the following paper:

The present paper demonstrates the first steps in incorporating deep learned priors into generalized linear problems such as compressed sensing.


 
Approximate Message Passing with Restricted Boltzmann Machine Priors by Eric W. Tramel, Angélique Drémeau, Florent Krzakala
Approximate Message Passing (AMP) has been shown to be an excellent statistical approach to signal inference and compressed sensing problems. The AMP framework provides modularity in the choice of signal prior; here we propose a hierarchical form of the Gauss-Bernoulli prior which utilizes a Restricted Boltzmann Machine (RBM) trained on the signal support to push reconstruction performance beyond that of simple iid priors for signals whose support can be well represented by a trained binary RBM. We present and analyze two methods of RBM factorization and demonstrate how these affect signal reconstruction performance within our proposed algorithm. Finally, using the MNIST handwritten digit dataset, we show experimentally that using an RBM allows AMP to approach oracle-support performance.
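To see the modularity the abstract mentions, here is a minimal NumPy sketch of AMP with a plain soft-threshold denoiser standing in for the prior; the paper's contribution is precisely to replace that denoising step with one derived from a trained RBM. Problem sizes and the threshold schedule are illustrative, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 200, 15                          # signal size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

def soft(u, t):
    # iid sparse prior as a denoiser; an RBM-based denoiser would replace this
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x, z = np.zeros(n), y.copy()
for _ in range(30):
    sigma = np.linalg.norm(z) / np.sqrt(m)      # effective-noise estimate
    x = soft(x + A.T @ z, 1.5 * sigma)          # denoising step (the modular part)
    z = y - A @ x + z * (np.count_nonzero(x) / m)  # residual with Onsager correction

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The Onsager term `z * (count/m)` is what distinguishes AMP from plain iterative thresholding and keeps the effective noise Gaussian across iterations.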
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Gaussian Phase Transitions and Conic Intrinsic Volumes: Steining the Steiner Formula

 
 
Stéphane Chrétien just sent me the following:
 
Dear Igor,
 
I hope you are doing fine !
 
I recently read the following paper:

http://arxiv.org/abs/1411.6265

which uses techniques such as Stein's method to prove a CLT that precisely explains the asymptotic phase transition phenomenon in compressed sensing with Gaussian measurements. The readers of Nuit Blanche might also be interested in this research.

Best regards, 
Thank you Stéphane !


Intrinsic volumes of convex sets are natural geometric quantities that also play important roles in applications, such as linear inverse problems with convex constraints, and constrained statistical inference. It is a well-known fact that, given a closed convex cone C ⊂ R^d, its conic intrinsic volumes determine a probability measure on the finite set {0,1,...,d}, customarily denoted by L(V_C). The aim of the present paper is to provide a Berry-Esseen bound for the normal approximation of L(V_C), implying a general quantitative central limit theorem (CLT) for sequences of (correctly normalised) discrete probability measures of the type L(V_{C_n}), n ≥ 1. This bound shows that, in the high-dimensional limit, most conic intrinsic volumes encountered in applications can be approximated by a suitable Gaussian distribution. Our approach is based on a variety of techniques, namely: (1) Steiner formulae for closed convex cones, (2) Stein's method and second order Poincaré inequality, (3) concentration estimates, and (4) Fourier analysis. Our results explicitly connect the sharp phase transitions, observed in many regularised linear inverse problems with convex constraints, with the asymptotic Gaussian fluctuations of the intrinsic volumes of the associated descent cones. In particular, our findings complete and further illuminate the recent breakthrough discoveries by Amelunxen, Lotz, McCoy and Tropp (2014) and McCoy and Tropp (2014) about the concentration of conic intrinsic volumes and its connection with threshold phenomena. As an additional outgrowth of our work we develop total variation bounds for normal approximations of the lengths of projections of Gaussian vectors on closed convex sets.
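The concentration behind the phase transitions is easy to see numerically. For a closed convex cone C, the statistical dimension δ(C) = E‖Π_C(g)‖² is the mean of the intrinsic-volume distribution, and the CLT in the paper says the distribution fluctuates around it on a Gaussian scale. A Monte Carlo sketch for the nonnegative orthant, where δ = d/2 (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d, trials = 100, 20000
g = rng.normal(size=(trials, d))
proj = np.maximum(g, 0.0)            # metric projection onto the nonnegative orthant
v = (proj ** 2).sum(axis=1)          # squared norm of the projection
delta_hat = v.mean()                 # ~ d/2 = 50, the statistical dimension
spread = v.std()                     # fluctuations, O(sqrt(d)) -- Gaussian-like
```

The sharp transition in convex recovery happens when the number of measurements crosses this concentrated mean.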
 

Thursday, February 26, 2015

Tutorial: Submodularity in Machine Learning Applications

Here is a tutorial on Submodularity in Machine Learning Applications by Jeff Bilmes.
 

RSVDPACK: Subroutines for computing partial singular value decompositions via randomized sampling on single core, multi core, and GPU architectures - implementation -


RSVDPACK: Subroutines for computing partial singular value decompositions via randomized sampling on single core, multi core, and GPU architectures by Sergey Voronin, Per-Gunnar Martinsson

This document describes an implementation in C of a set of randomized algorithms for computing partial Singular Value Decompositions (SVDs). The techniques largely follow the prescriptions in the article "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," N. Halko, P.G. Martinsson, J. Tropp, SIAM Review, 53(2), 2011, pp. 217-288, but with some modifications to improve performance. The codes implement a number of low rank SVD computing routines for three different sets of hardware: (1) single core CPU, (2) multi core CPU, and (3) massively multicore GPU.  
The implementations are on Sergey Voronin's GitHub; more specifically, here.
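The basic randomized scheme behind such packages fits in a few lines of NumPy. This is a sketch following the Halko-Martinsson-Tropp prescription, not the RSVDPACK C code; the oversampling and power-iteration parameters are illustrative:

```python
import numpy as np

def rsvd(A, k, p=10, q=1, rng=None):
    """Randomized partial SVD: sample the range, project, solve a small SVD."""
    rng = rng or np.random.default_rng(0)
    Omega = rng.normal(size=(A.shape[1], k + p))   # random test matrix, p = oversampling
    Y = A @ Omega
    for _ in range(q):                             # power iterations sharpen the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                         # orthonormal basis for the range
    B = Q.T @ A                                    # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# exactly rank-10 test matrix: the approximation should be near machine precision
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 150))
U, s, Vt = rsvd(A, k=10)
rel = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

For matrices with slowly decaying spectra, more power iterations (larger `q`) trade extra passes over the data for accuracy.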
 

Wednesday, February 25, 2015

Unsupervised Learning of Acoustic Features Via Deep Canonical Correlation Analysis





It has been previously shown that, when both acoustic and articulatory training data are available, it is possible to improve phonetic recognition accuracy by learning acoustic features from this multi-view data with canonical correlation analysis (CCA). In contrast with previous work based on linear or kernel CCA, we use the recently proposed deep CCA, where the functional form of the feature mapping is a deep neural network. We apply the approach on a speaker-independent phonetic recognition task using data from the University of Wisconsin X-ray Microbeam Database. Using a tandem-style recognizer on this task, deep CCA features improve over earlier multi-view approaches as well as over articulatory inversion and typical neural network-based tandem features. We also present a new stochastic training approach for deep CCA, which produces both faster training and better-performing features.
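Deep CCA replaces the two linear maps of classical CCA with neural networks but keeps the same correlation objective. As a reference point, here is the classical linear version in NumPy, a whitening-plus-SVD sketch on a synthetic two-view dataset (sizes and regularization are illustrative):

```python
import numpy as np

def inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def cca_correlations(X, Y, reg=1e-6):
    """Canonical correlations between two views via whitening + SVD."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    T = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(T, compute_uv=False)      # singular values = correlations

# two noisy linear views of a shared 2-D latent signal
rng = np.random.default_rng(0)
z = rng.normal(size=(2000, 2))
X = z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(2000, 5))
Y = z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(2000, 4))
rho = cca_correlations(X, Y)                       # two large correlations, rest ~ 0
```

Deep CCA optimizes the same quantity after passing each view through a network, which is why the linear case is the natural baseline in the paper.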
 

Scalable audio separation with light Kernel Additive Modelling - implementation -

Using randomization to scale some capabilities on low-power systems, this is what we have today: Scalable audio separation with light Kernel Additive Modelling by Antoine Liutkus, Derry Fitzgerald, Zafar Rafii
Recently, Kernel Additive Modelling (KAM) was proposed as a unified framework to achieve multichannel audio source separation. Its main feature is to use kernel models for locally describing the spectrograms of the sources. Such kernels can capture source features such as repetitivity, stability over time and/or frequency, self-similarity, etc. KAM notably subsumes many popular and effective methods from the state of the art, including REPET and harmonic/percussive separation with median filters. However, it also comes with an important drawback in its initial form: its memory usage badly scales with the number of sources. Indeed, KAM requires the storage of the full-resolution spectrogram for each source, which may become prohibitive for full-length tracks or many sources. In this paper, we show how it can be combined with a fast compression algorithm of its parameters to address the scalability issue, thus enabling its use on small platforms or mobile devices.  
The implementation is on Antoine Liutkus' KAML page.
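One of the KAM special cases mentioned in the abstract, harmonic/percussive separation with median filters, is easy to demonstrate: a kernel that averages over time captures steady tones, while a kernel over frequency captures clicks. A toy NumPy sketch (illustrative sizes, not the authors' code):

```python
import numpy as np

def median_filt(S, axis, width):
    """Running median of a magnitude spectrogram along one axis (a 1-D KAM kernel)."""
    S = np.moveaxis(S, axis, 0)
    pad = width // 2
    P = np.pad(S, [(pad, pad)] + [(0, 0)] * (S.ndim - 1), mode="edge")
    out = np.stack([np.median(P[i:i + width], axis=0) for i in range(S.shape[0])])
    return np.moveaxis(out, 0, axis)

# toy spectrogram: one steady tone (horizontal line) + one click (vertical line)
S = np.zeros((64, 100))
S[20, :] = 1.0       # tone at frequency bin 20
S[:, 50] = 1.0       # click at time frame 50
H = median_filt(S, axis=1, width=17)   # median over time  -> keeps the tone
P = median_filt(S, axis=0, width=17)   # median over freq  -> keeps the click
```

The scalability issue the paper addresses comes from holding one such full-resolution model per source; their fix compresses these parameters.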
 
 

Tuesday, February 24, 2015

Crossing the P-river (follow-up)

A follow-up to yesterday's blog entry. How much time did it take to gather all the information from the entire bacterial genome using a USB-connected instrument that looks like a large memory stick, the one featured in Crossing the P-river: A complete bacterial genome assembled de novo using only nanopore sequencing data ?

 


Wow ! You're seeing history in the making.
 
Credit photo: this forum thread
 

Compressive spectrum sensing of radar pulses based on photonic techniques

 
 
Here is another instance of Compressive Sensing hardware: Compressive spectrum sensing of radar pulses based on photonic techniques by Qiang Guo, Yunhua Liang, Minghua Chen, Hongwei Chen, and Shizhong Xie

We present a photonic-assisted compressive sampling (CS) system which can acquire about 10^6 radar pulses per second spanning from 500 MHz to 5 GHz with a 520-MHz analog-to-digital converter (ADC). A rectangular pulse, a linear frequency modulated (LFM) pulse and a pulse stream are respectively reconstructed faithfully through this system with a sliding window-based recovery algorithm, demonstrating the feasibility of the proposed photonic-assisted CS system in spectral estimation for radar pulses.
 

Monday, February 23, 2015

Crossing the P-river: A complete bacterial genome assembled de novo using only nanopore sequencing data

 
 
We interrupt our regularly scheduled program; you probably remember this entry from fifteen days ago:
Here is a new piece of that puzzle. With the right sensor (the Oxford Nanopore MinION instrument, a long-read sequencer), things that used to be difficult become possible, enabling the flooding of a new tsunami: A complete bacterial genome assembled de novo using only nanopore sequencing data by Nicholas James Loman, Joshua Quick, Jared T Simpson

A method for de novo assembly of data from the Oxford Nanopore MinION instrument is presented which is able to reconstruct the sequence of an entire bacterial chromosome in a single contig. Initially, overlaps between nanopore reads are detected. Reads are then subjected to one or more rounds of error correction by a multiple alignment process employing partial order graphs. After correction, reads are assembled using the Celera assembler. We show that this method is able to assemble nanopore reads from Escherichia coli K-12 MG1655 into a single contig of length 4.6Mb permitting a full reconstruction of gene order. The resulting assembly has 98.4% nucleotide identity compared to the finished reference genome.
The software pipeline used to generate these assemblies is freely available online at https://github.com/jts/nanocorrect.
 
 
 
Related blog entries:

 

 

 
 
 
 

X-Ray Vision with Only WiFi Power Measurements Using Rytov Wave Models

We continue our series of hardware-related compressive sensing blog entries with this tomography system that uses WiFi and robots to cover the area of interest from different angles.




In this paper, unmanned vehicles are tasked with seeing a completely unknown area behind thick walls based on only wireless power measurements using WLAN cards. We show that a proper modeling of wave propagation that considers scattering and other propagation phenomena can result in a considerable improvement in see-through imaging. More specifically, we develop a theoretical and experimental framework for this problem based on Rytov wave models, and integrate it with sparse signal processing and robotic path planning. Our experimental results show high-resolution imaging of three different areas, validating the proposed framework. Moreover, they show considerable performance improvement over the state-of-the-art that only considers the Line Of Sight (LOS) path, allowing us to image more complex areas not possible before. Finally, we show the impact of robot positioning and antenna alignment errors on our see-through imaging framework. 

In the end, I note their use of TV as a way to regularize their signals, and I wonder if, by modulating the power level of the emitters and having more than one receiver (they seem to have only two robots, one emitting, the other receiving), the results might be even more beautiful than what they have. While the system is indeed linear, it seems to rely heavily on trajectory design for both robots. With additional receivers, one would think that some of the trajectory issues might become less important, but then additional receivers would also mean that the system is not linear anymore, as in the case of CT (traditional CT is linear, source-coded CT is not).

On a totally unrelated note, I wonder if using monocular camera SLAM would simplify some of the embedded coding that goes into this current setup.

The project page is here.
 
 
 
another Wifi related project:


Friday, February 20, 2015

Reference-less measurement of the transmission matrix of a highly scattering material using a DMD and phase retrieval techniques - implementation -



Here is a follow-up to our recent work on Compressive Sensing. I am not a co-author of this new paper, so I feel freer to comment on it. In the previous approach, we defined the transmission matrix through measurements that required us to go through at least 4 different measurements with a phase difference of pi/4 in order to finally figure out each coefficient of the transmission matrix. Actually, this was the paper that originally used this technique. Our work put a spin on this by looking at it as a compressive sensing system (as opposed to using a reconstruction based on a Tikhonov solver). But SLMs that produce those tiny phase shifts are expensive. For that reason, the authors of today's paper used a Texas Instruments DMD that loses this phase information. The paper shows that, with enough of these phaseless measurements, we can recover the transmission matrix. In turn, this knowledge is used to produce peaks through these media. Woohoo !
 
Reference-less measurement of the transmission matrix of a highly scattering material using a DMD and phase retrieval techniques by Angelique Dremeau, Antoine Liutkus, David Martina, Ori Katz, Christophe Schulke, Florent Krzakala, Sylvain Gigan, Laurent Daudet

This paper investigates experimental means of measuring the transmission matrix (TM) of a highly scattering medium, with the simplest optical setup. Spatial light modulation is performed by a digital micromirror device (DMD), allowing high rates and high pixel counts but only binary amplitude modulation. We used intensity measurement only, thus avoiding the need for a reference beam. Therefore, the phase of the TM has to be estimated through signal processing techniques of phase retrieval. Here, we compare four different phase retrieval principles on noisy experimental data. We validate our estimations of the TM on three criteria : quality of prediction, distribution of singular values, and quality of focusing. Results indicate that Bayesian phase retrieval algorithms with variational approaches provide a good tradeoff between the computational complexity and the precision of the estimates.
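To see why intensity-only measurements can still pin down the transmission matrix, here is a toy alternating-projections sketch (Gerchberg-Saxton style, with a spectral initialization). This is not one of the four algorithms the paper compares, and the oversampling here is deliberately much heavier than a real DMD experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 400                                     # heavy oversampling keeps the toy reliable
x = rng.normal(size=n) + 1j * rng.normal(size=n)   # unknown row of the TM
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
b = np.abs(A @ x)                                  # intensity-only data, no reference beam
Ap = np.linalg.pinv(A)

# spectral initialization: top eigenvector of a b^2-weighted covariance
Y = A.conj().T @ ((b ** 2)[:, None] * A)
x_hat = np.linalg.eigh(Y)[1][:, -1]
z = b * np.exp(1j * np.angle(A @ x_hat))

for _ in range(300):
    x_hat = Ap @ z                                 # project onto the range of A
    z = b * np.exp(1j * np.angle(A @ x_hat))       # re-impose measured magnitudes

phase = np.vdot(x_hat, x)                          # recovery holds up to a global phase
phase /= abs(phase)
rel_err = np.linalg.norm(x - phase * x_hat) / np.linalg.norm(x)
```

The Bayesian variational algorithms the paper favors handle the noisy, less oversampled experimental regime where this naive scheme would struggle.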


the solver is on Angelique's page under this paper:

  • A. Drémeau, F. Krzakala - Phase recovery from a Bayesian point of view: the variational approach - Accepted at IEEE Int'l Conference on Acoustics, Speech and Signal Processing (ICASSP), (preprint) arXiv:1410.1368, Brisbane, Australia, April 2015. 
 

Thursday, February 19, 2015

Thesis: Recovery of Continuous Quantities from Discrete and Binary Data with Applications to Neural Data


We featured Karin Knudson's work before here in One-bit compressive sensing with norm estimation - implementation -. Well, Karin has graduated, and here is her very interesting thesis. It is interesting in part because it is part of the continuum of studies between traditional compressive sensing and neural networks. Gig 'em, uh.... Congratulations Karin ! Here is the thesis: Recovery of Continuous Quantities from Discrete and Binary Data with Applications to Neural Data

We consider three problems, motivated by questions in computational neuroscience, related to recovering continuous quantities from binary or discrete data or measurements in the context of sparse structure. First, we show that it is possible to recover the norms of sparse vectors given one-bit compressive measurements, and provide associated guarantees. Second, we present a novel algorithm for spike-sorting in neural data, which involves recovering continuous times and amplitudes of events using discrete bases. This method, Continuous Orthogonal Matching Pursuit, builds on algorithms used in compressive sensing. It exploits the sparsity of the signal and proceeds greedily, achieving gains in speed and accuracy over previous methods. Lastly, we present a Bayesian method making use of hierarchical priors for entropy rate estimation from binary sequences.
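The first of the three problems is easy to appreciate with a toy example: from sign measurements alone the norm is invisible, but the direction is recoverable. A minimal back-projection sketch, which is not the thesis' estimator (the thesis shows what additional structure is needed to recover the norm as well):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 2000, 5
x = np.zeros(n)
x[:k] = rng.normal(size=k)
x /= np.linalg.norm(x)                 # one-bit measurements cannot see the norm
A = rng.normal(size=(m, n))
y = np.sign(A @ x)                     # one bit per measurement

x_hat = A.T @ y                        # simple back-projection estimate
x_hat /= np.linalg.norm(x_hat)
cosine = x_hat @ x                     # direction is recovered; norm is not
```

Thresholding `x_hat` to its top-k entries would further exploit sparsity; the back-projection alone already aligns well with the true direction.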
 
 
N00235465.jpg was taken on February 16, 2015 and received on Earth February 17, 2015. The camera was pointing toward SATURN, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated. Image Credit: NASA/JPL/Space Science Institute
 

Weighted SGD for ℓp Regression with Randomized Preconditioning


Following up on Tuesday's related development (53 pages of good stuff), here is some more on the use of randomized projections for preconditioning. Proposition 13 is interesting: Weighted SGD for ℓp Regression with Randomized Preconditioning by Jiyan Yang, Yin-Lam Chow, Christopher Ré, Michael W. Mahoney

In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. In this paper, we aim to bridge the gap between these two methods in solving overdetermined linear regression problems---e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. We prove that this algorithm inherits faster convergence rates that only depend on the lower dimension of the linear system, while maintaining low computation complexity. The effectiveness of such algorithms is illustrated numerically, and the results are consistent with our theoretical findings. Finally, we also provide lower bounds on the coreset complexity for more general regression problems, indicating that still new ideas will be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems.  
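The RLA preconditioning step at the heart of the hybrid algorithm can be shown in a few lines: sketch the tall matrix, take a QR of the sketch, and the preconditioned system is well-conditioned no matter how bad the original was. A Gaussian-sketch NumPy sketch with illustrative sizes (the paper uses faster embeddings and then runs weighted SGD on the preconditioned system):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 50
U, _ = np.linalg.qr(rng.normal(size=(n, d)))
A = U * np.logspace(0, 6, d)               # tall matrix with condition number 1e6

s = 4 * d                                  # modest sketch size, a few times d
S = rng.normal(size=(s, n)) / np.sqrt(s)   # Gaussian sketching matrix (RLA step)
_, R = np.linalg.qr(S @ A)                 # preconditioner from the sketched matrix

cond_before = np.linalg.cond(A)            # ~1e6
cond_after = np.linalg.cond(A @ np.linalg.inv(R))  # O(1): ready for SGD or LSQR
```

Because the iteration count of first-order methods scales with the condition number, this cheap sketch is what buys the fast convergence rates the abstract claims.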
 
 

Tuesday, February 17, 2015

Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments

For some problems that typically use Hadoop and related techniques, RandNLA is an enabler in that "the improved scalability often comes due to restricted communications, rather than improvements in FLOPS".




Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments by Jiyan Yang, Xiangrui Meng, Michael W. Mahoney

In this era of large-scale data, distributed systems built on top of clusters of commodity hardware provide cheap and reliable storage and scalable processing of massive data. Here, we review recent work on developing and implementing randomized matrix algorithms in large-scale parallel and distributed environments. Randomized algorithms for matrix problems have received a great deal of attention in recent years, thus far typically either in theory or in machine learning applications or with implementations on a single machine. Our main focus is on the underlying theory and practical implementation of random projection and random sampling algorithms for very large very overdetermined (i.e., overconstrained) ℓ1 and ℓ2 regression problems. Randomization can be used in one of two related ways: either to construct sub-sampled problems that can be solved, exactly or approximately, with traditional numerical methods; or to construct preconditioned versions of the original full problem that are easier to solve with traditional iterative algorithms. Theoretical results demonstrate that in near input-sparsity time and with only a few passes through the data one can obtain very strong relative-error approximate solutions, with high probability. Empirical results highlight the importance of various trade-offs (e.g., between the time to construct an embedding and the conditioning quality of the embedding, between the relative importance of computation versus communication, etc.) and demonstrate that ℓ1 and ℓ2 regression problems can be solved to low, medium, or high precision in existing distributed systems on up to terabyte-sized data.
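The "sub-sampled problems that can be solved exactly" route from the abstract is the simplest to try: solve a row-sampled least-squares problem and compare its residual to the optimum. Uniform sampling suffices in this toy because the Gaussian matrix has nearly uniform leverage scores; real RLA implementations sample by leverage score or apply a random projection first:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.5 * rng.normal(size=n)
x_opt = np.linalg.lstsq(A, b, rcond=None)[0]       # full-data solution

s = 500                                            # sub-sample size, s >> d
rows = rng.choice(n, size=s, replace=False)
x_sub = np.linalg.lstsq(A[rows], b[rows], rcond=None)[0]

# relative-error guarantee in action: residual within a small factor of optimal
r_ratio = np.linalg.norm(A @ x_sub - b) / np.linalg.norm(A @ x_opt - b)
```

One pass over a tenth of the rows gets a residual within a few percent of optimal, which is the kind of trade-off the empirical results in the paper quantify at terabyte scale.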
 
 

Monday, February 16, 2015

Kernel Probabilistic Programming (KPP): Computing Functions of Random Variables via Reproducing Kernel Hilbert Space Representations

Random Features used as a means of speeding up Kernel Probabilistic Programming:


We describe a method to perform functional operations on probability distributions of random variables. The method uses reproducing kernel Hilbert space representations of probability distributions, and it is applicable to all operations which can be applied to points drawn from the respective distributions. We refer to our approach as {\em kernel probabilistic programming}. We illustrate it on synthetic data, and show how it can be used for nonparametric structural equation models, with an application to causal inference.
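The core object is the kernel mean embedding of a distribution, estimated from samples; applying a function to a random variable is done by pushing the samples through it. A toy NumPy sketch that checks a pushforward against ground truth using the MMD (the distance between kernel means); bandwidth and sample sizes are illustrative:

```python
import numpy as np

def mmd2(x, y, gamma=1.0):
    """Squared MMD between two 1-D samples under an RBF kernel
    (the RKHS distance between their empirical kernel means)."""
    def k(a, b):
        return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
fx = x ** 2                           # pushforward: samples of f(X) = X^2
chi2 = rng.chisquare(1, size=1000)    # ground truth: X^2 ~ chi-squared(1)

mmd_same = mmd2(fx, chi2)             # small: same distribution
mmd_diff = mmd2(fx, x)                # larger: different distributions
```

Kernel probabilistic programming builds on exactly this sample-based representation, so any measurable operation on the samples induces the corresponding operation on the embedded distribution.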
This is funny: ref 34 ( http://www.learning-with-kernels.org/ ) does not lead to the intended site.

Of relevance:
Kernel Mean Estimation and Stein's Effect by Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Arthur Gretton, Bernhard Schölkopf

A mean function in reproducing kernel Hilbert space, or a kernel mean, is an important part of many applications ranging from kernel principal component analysis to Hilbert-space embedding of distributions. Given finite samples, an empirical average is the standard estimate for the true kernel mean. We show that this estimator can be improved via a well-known phenomenon in statistics called Stein's phenomenon. Our theoretical analysis reveals the existence of a wide class of estimators that are better than the standard one. Focusing on a subset of this class, we propose efficient shrinkage estimators for the kernel mean. Empirical evaluations on several benchmark applications clearly demonstrate that the proposed estimators outperform the standard kernel mean estimator.
 
 
 

CRISP - Towards Compressive Information Processing Systems - implementation -

Giulio just sent me the following the other day:
 
 
Dear Igor, 
my name is Giulio Coluccia and I'm a PostDoc at Politecnico di Torino, Italy. I'm involved in a European ERC project, named CRISP, entirely dedicated to Compressed Sensing. Its website is www.crisp-erc.eu and the Principal Investigator is Prof. Enrico Magli (enrico.magli@polito.it http://www1.tlc.polito.it/oldsite/sas-ipl/Magli/). The website contains all the material produced within the project, including the full text of the papers and the software to reproduce the results. I just wanted to point you to the website of our project and, in case you find it interesting, to encourage you to contact either me or Prof. Magli directly for more details.
Kind regards, Giulio

Giulio Coluccia, PhD
Politecnico di Torino
Dipartimento di Elettronica e Telecomunicazioni
(Sede Storica - Corso Montevecchio)
Room S4-MB-15
corso Duca degli Abruzzi 24
I-10129, Torino, Italy 
Here is the publication page with attendant links to papers and implementations:
 

2014



2013


2012

 
 

Sunday, February 15, 2015

Saturday Morning Videos: Nando de Freitas' Machine Learning course, ICLR2014


I also found this series of ICLR videos from last year:

 
 

Friday, February 13, 2015

Abstract for "collège doctoral" seminar at IFPEN, March 2015

A while back, Laurent Duval kindly asked me to do a presentation at the "collège doctoral" seminar at IFPEN at the end of March.  

Based on my co-hosting of the Paris Machine Learning meetup and the themes covered on Nuit Blanche, I (foolishly ?) decided to say yes. Here is my first (and probably last) draft abstract. Now that it's public, I cannot walk away from it :-)




Title: "Ca va être compliqué": Islands of knowledge, Mathematician-Pirates and the Great Convergence

Igor Carron,
https://www.linkedin.com/in/IgorCarron
http://nuit-blanche.blogspot.com

In this talk, we will survey the different techniques that have led to recent changes in the way we do sensing and how we make sense of that information. In particular, we will talk about problem complexity and attendant algorithms, compressive sensing, advanced matrix factorization, sensing hardware and machine learning, and how all these seemingly unrelated issues are of importance to the practicing engineer. We will also draw some parallels between some of the techniques currently used in machine learning by internet companies and the upcoming convergence that will occur in many fields of Engineering and Science as a result.
 
 

Abstracts: Information Theory and Applications (ITA 2014)

 
As I was reading this presentation, Universal Denoising and Approximate Message Passing by Dror Baron, Yanting Ma and Junan Zhu, I realized I had not covered the ITA conference yet.

Here are some of the abstracts of the presentations made there, some of which have been covered here on Nuit Blanche already. Enjoy !


Here are some videos presentations of some of the talks:
 
 
 
