Wednesday, October 01, 2014

Sparse Representation Issues in Hyperspectral and Multispectral Images

I just compiled a list of a few preprints and papers that recently came out in support of hyperspectral work and its relation to either compressive sensing or advanced matrix factorization. Without further ado:




This paper considers a recently emerged hyperspectral unmixing formulation based on sparse regression of a self-dictionary multiple measurement vector (SD-MMV) model, wherein the measured hyperspectral pixels are used as the dictionary. Operating under the pure pixel assumption, this SD-MMV formalism is special in enabling simultaneous identification of the endmember spectral signatures and the number of endmembers. Previous SD-MMV studies mainly focus on convex relaxations. In this study, we explore the alternative of greedy pursuit, which generally provides efficient and simple algorithms. In particular, we design a greedy SD-MMV algorithm using simultaneous orthogonal matching pursuit. Intriguingly, the proposed greedy algorithm is shown to be closely related to some existing pure pixel search algorithms, especially, the successive projection algorithm (SPA). Thus, a link between SD-MMV and pure pixel search is revealed. We then perform exact recovery analyses, and prove that the proposed greedy algorithm is robust to noise---including its identification of the (unknown) number of endmembers---under a sufficiently low noise level. The identification performance of the proposed greedy algorithm is demonstrated through both synthetic and real-data experiments.
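For readers who want to see the greedy idea in miniature, below is a toy sketch (my own, not the authors' code) of simultaneous OMP where the dictionary is the data itself, so the selected atoms are candidate pure pixels. The basis-vector endmembers in the usage example are purely illustrative.

```python
import numpy as np

def somp_self_dictionary(X, k):
    """Greedy self-dictionary SOMP sketch: pick k columns of X (pixels)
    that jointly explain all columns of X (candidate endmembers)."""
    R = X.copy()
    chosen = []
    for _ in range(k):
        # correlation of every dictionary atom (i.e., pixel) with the residual
        scores = np.linalg.norm(X.T @ R, axis=1)
        scores[chosen] = -np.inf  # never re-pick an atom
        chosen.append(int(np.argmax(scores)))
        # project the data onto the orthogonal complement of the chosen atoms
        A = X[:, chosen]
        R = X - A @ np.linalg.pinv(A) @ X
    return chosen
```

Under the pure pixel assumption, the selected indices should land on the pure pixels, which is exactly the link to SPA-style pure pixel search the paper formalizes.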

Hyperspectral and Multispectral Image Fusion based on a Sparse Representation by Qi Wei, José Bioucas-Dias, Nicolas Dobigeon, Jean-Yves Tourneret
This paper presents a variational-based approach to fusing hyperspectral and multispectral images. The fusion process is formulated as an inverse problem whose solution is the target image, assumed to live in a much lower dimensional subspace. A sparse regularization term is carefully designed, relying on a decomposition of the scene on a set of dictionaries. The dictionary atoms and the corresponding supports of active coding coefficients are learned from the observed images. Then, conditionally on these dictionaries and supports, the fusion problem is solved via alternating optimization with respect to the target image (using the alternating direction method of multipliers) and the coding coefficients. Simulation results demonstrate the efficiency of the proposed algorithm when compared with state-of-the-art fusion methods.
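The alternating-direction flavor of that optimization can be illustrated on a much simpler cousin of the fusion problem. The sketch below (mine, not the authors' algorithm) runs ADMM on a generic l1-regularized least-squares problem:

```python
import numpy as np

def admm_lasso(H, y, lam=0.1, rho=1.0, n_iter=300):
    """Minimal ADMM sketch for min_x 0.5*||y - Hx||^2 + lam*||x||_1,
    the kind of sparsity-regularized inverse problem used in fusion work."""
    n = H.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    HtH, Hty = H.T @ H, H.T @ y
    Q = np.linalg.inv(HtH + rho * np.eye(n))   # cached solve for the x-update
    for _ in range(n_iter):
        x = Q @ (Hty + rho * (z - u))                                # quadratic step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft-threshold
        u = u + x - z                                                # dual update
    return z
```

With H equal to the identity, the minimizer is just the soft-thresholded data, which makes a handy sanity check.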

Effective Spectral Unmixing via Robust Representation and Learning-based Sparsity by Feiyun Zhu, Ying Wang, Bin Fan, Gaofeng Meng, Chunhong Pan
Hyperspectral unmixing (HU) plays a fundamental role in a wide range of hyperspectral applications. It is still challenging due to the common presence of outlier channels and the large solution space. To address the above two issues, we propose a novel model by emphasizing both robust representation and learning-based sparsity. Specifically, we apply the $\ell_{2,1}$-norm to measure the representation error, preventing outlier channels from dominating our objective. In this way, the side effects of outlier channels are greatly relieved. Besides, we observe that the mixed level of each pixel varies over image grids. Based on this observation, we exploit a learning-based sparsity method to simultaneously learn the HU results and a sparse guidance map. Via this guidance map, the sparsity constraint in the $\ell_{p}\!\left(\!0\!<\! p\!\leq\!1\right)$-norm is adaptively imposed according to the learnt mixed level of each pixel. Compared with state-of-the-art methods, our model is better suited to the real situation, thus expected to achieve better HU results. The resulted objective is highly non-convex and non-smooth, and so it is hard to optimize. As a profound theoretical contribution, we propose an efficient algorithm to solve it. Meanwhile, the convergence proof and the computational complexity analysis are systematically provided. Extensive evaluations verify that our method is highly promising for the HU task---it achieves very accurate guidance maps and much better HU results compared with state-of-the-art methods.
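For the curious, the $\ell_{2,1}$ data-fit term is simple to compute: it sums the l2 norms of the rows of the residual (one row per channel), so a single corrupted channel contributes linearly rather than quadratically. A minimal sketch:

```python
import numpy as np

def l21_norm(R):
    """l_{2,1} norm of a residual matrix R (channels x pixels):
    sum over channels of the per-channel l2 norm."""
    return np.linalg.norm(R, axis=1).sum()
```

Compare with the squared Frobenius norm: an outlier channel of magnitude c costs O(c) under the l_{2,1} norm but O(c^2) under least squares, which is why it cannot dominate the objective.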


This paper presents a conceptually simple, robust, yet highly effective approach to both texture classification and material categorization. The proposed system is composed of three components: (1) local, highly discriminative, and robust features based on sorted random projections, built on the universal and information-preserving properties of random projections; (2) an effective Bag-of-Words (BoW) global model; and (3) a novel approach for combining multiple features in a Support Vector Machine (SVM) classifier. The proposed approach encompasses the simplicity, broad applicability, and efficiency of the three methods. We have tested the proposed approach on eight popular texture databases, including FMD, a highly challenging materials database. We compare our method with thirteen recent state-of-the-art methods, and the experimental results show that our texture classification system yields the best classification rates of which we are aware: 99.37% for CUReT, 97.16% for Brodatz, 99.30% for UMD and 99.29% for KTH-TIPS. Moreover, the proposed approach significantly outperforms the current state-of-the-art approach in materials categorization, improving classification accuracy to 67%.
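As a rough illustration of the sorted-random-projections idea (a simplified reading of mine, not the authors' pipeline, which sorts within structured neighborhoods): sorting intensities inside a patch discards their spatial ordering before the random projection, which buys invariance to within-patch permutations of the pixels:

```python
import numpy as np

def srp_features(patches, n_proj=8, seed=0):
    """Sorted random projections, simplified: sort pixel intensities within
    each (flattened) patch, then apply a fixed random Gaussian projection."""
    rng = np.random.default_rng(seed)
    P = np.sort(patches, axis=1)                       # discard spatial order
    R = rng.standard_normal((patches.shape[1], n_proj))
    return P @ R                                       # compact random feature
```

Because the sort happens first, shuffling the pixels of a patch leaves its feature vector unchanged, a cheap form of the robustness the paper builds its local descriptors on.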



Many powerful pansharpening approaches exploit the functional relation between the PANchromatic (PAN) and MultiSpectral (MS) images in the fusion process. To this purpose, the modulation transfer function of the MS sensor is typically used, being easily approximated as a Gaussian filter whose analytic expression is fully specified by the sensor gain at the Nyquist frequency. However, this characterization is often inadequate in practice. In this paper, we develop an algorithm for estimating the relation between PAN and MS images directly from the available data through an efficient optimization procedure. The effectiveness of the approach is validated both on a reduced scale data set generated by degrading images acquired by the IKONOS sensor and on full-scale data consisting of images collected by the QuickBird sensor. In the first case, the proposed method achieves performance very similar to that of the algorithm that relies upon the full knowledge of the degrading filter. In the second, it is shown to outperform several well-regarded state-of-the-art approaches from the current literature for the extraction of the details.
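As a side note, the Gaussian MTF parameterization mentioned in the abstract is easy to write down: for h(x) proportional to exp(-x^2 / (2 sigma^2)), the continuous frequency response is exp(-2 pi^2 sigma^2 f^2), so the gain G at the Nyquist frequency (f = 0.5 cycles/sample) fixes sigma = sqrt(-2 ln G) / pi. A sketch (mine, with an arbitrary filter size):

```python
import numpy as np

def gaussian_mtf_filter(nyquist_gain, size=15):
    """Normalized Gaussian low-pass filter whose (continuous) frequency
    response equals `nyquist_gain` at f = 0.5 cycles/sample.
    exp(-2 pi^2 sigma^2 * 0.25) = G  =>  sigma = sqrt(-2 ln G) / pi."""
    sigma = np.sqrt(-2.0 * np.log(nyquist_gain)) / np.pi
    x = np.arange(size) - size // 2
    h = np.exp(-x**2 / (2.0 * sigma**2))
    return h / h.sum()
```

This is exactly the one-parameter family the paper argues is often inadequate, hence its data-driven estimation of the PAN/MS relation instead.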
of related interest:

Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

Tuesday, September 30, 2014

Ce Soir: Machine Learning Meetup Hors-Série #1 (season 2): Datajournalisme



Thanks to Numa and HopWork, Chrystèle, Franck and I will be hosting a "Hors-Série" of our Machine Learning Meetup on the theme of Data Journalism. At least the first guest will speak in English while the others are expected to speak French (slides ought to be in English). The video of the meetup is at the top of this blog entry and it should start around 6:50/6:55PM Paris time (CET).

Video of the Meetup in French (mostly)

Confirmed guests:

Discussion moderated by Chrystèle Bazin, independent consultant and writer for various digital media.


With the active support of Numa and HopWork


Map Estimation for Bayesian Mixture Models with Submodular Priors ( and slides Fourth Cargese Workshop on Combinatorial Optimization )


We have seen submodularity here before; take, for instance, Francis Bach's slides (page 99 and up) on Structured sparsity through convex optimization, where he mentions the use of submodularity to find new regularizers. Here is another way of using submodularity in compressive sensing: Map Estimation for Bayesian Mixture Models with Submodular Priors by Marwa El Halabi, Luca Baldassarre and Volkan Cevher
We propose a Bayesian approach where the signal structure can be represented by a mixture model with a submodular prior. We consider an observation model that leads to Lipschitz functions. Due to its combinatorial nature, computing the maximum a posteriori estimate for this model is NP-Hard, nonetheless our converging majorization-minimization scheme yields approximate estimates that, in practice, outperform state-of-the-art methods.


why submodularity is a big deal, from the paper:

Submodularity is considered the discrete equivalent of convexity in the sense that submodular function minimization (SFM) admits efficient algorithms, with best known complexity of O(N^5 T + N^6), where T is the function evaluation complexity [11]. In practice, however, the minimum-norm point algorithm is usually used, which commonly runs in O(N^2) but has no known complexity bound [12]. Furthermore, for certain functions which are "graph representable" [13, 14], SFM is equivalent to the minimum s-t cut on an appropriate graph G(V,E), with time complexity O(|E| min{|V|^{2/3}, |E|^{1/2}}) [15].
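To see what those polynomial-time algorithms buy you, here is a brute-force sketch (mine) that checks the submodular inequality and minimizes a small graph-cut function by enumerating all 2^N subsets, which is exactly the exponential search SFM algorithms avoid:

```python
import itertools

def all_subsets(ground):
    return [frozenset(s) for r in range(len(ground) + 1)
            for s in itertools.combinations(ground, r)]

def is_submodular(f, ground):
    """Check f(A) + f(B) >= f(A|B) + f(A&B) for all subset pairs (tiny N only)."""
    S = all_subsets(ground)
    return all(f(A) + f(B) >= f(A | B) + f(A & B) - 1e-9
               for A in S for B in S)

def brute_force_sfm(f, ground):
    """Exact SFM by exhaustive enumeration: O(2^N) function evaluations."""
    best = min(all_subsets(ground), key=f)
    return best, f(best)

# Cut functions of graphs are the canonical submodular example:
EDGES = [(0, 1), (1, 2), (2, 3)]
def cut(S):
    return sum((u in S) != (v in S) for u, v in EDGES)
```

Cut functions are also "graph representable" in the sense quoted above, which is why min s-t cut solvers can stand in for generic SFM on them.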


Relevant to the theme of submodularity is last year's Fourth Cargese Workshop on Combinatorial Optimization


Bach:

Iwata:

McCormick:

Naor:





Monday, September 29, 2014

Image Classification with A Deep Network Model based on Compressive Sensing

Using compressive sensing to build the first layers of a neural network, and more importantly thinking of the iterations of different reconstruction solvers as equivalent to the layers of a neural network, we are getting there.



To simplify the parameters of the deep learning network, a cascaded compressive sensing model "CSNet" is implemented for image classification. Firstly, we use a cascaded compressive sensing network to learn features from the data. Secondly, CSNet generates the feature by binary hashing and block-wise histograms. Finally, a linear SVM classifier is used to classify these features. The experiments on the MNIST dataset indicate that higher classification accuracy can be obtained by this algorithm.
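A much-reduced sketch of that pipeline (my reading of the abstract, not the authors' code): random Gaussian filters as a fixed "compressive" first layer, binary hashing of the filter responses into integer codes, and a histogram of codes as the per-image feature to hand to a linear SVM:

```python
import numpy as np

def csnet_features(images, n_filters=4, k=3, seed=0):
    """Toy CSNet-style features: random filters -> sign bits -> code histogram."""
    rng = np.random.default_rng(seed)
    filters = rng.standard_normal((n_filters, k, k))  # fixed random 'CS' layer
    feats = []
    for img in images:
        H, W = img.shape
        resp = np.zeros((n_filters, H - k + 1, W - k + 1))
        for f in range(n_filters):                    # valid convolution
            for i in range(H - k + 1):
                for j in range(W - k + 1):
                    resp[f, i, j] = np.sum(img[i:i+k, j:j+k] * filters[f])
        bits = (resp > 0).astype(int)                 # binary hashing
        codes = sum(bits[f] << f for f in range(n_filters))
        feats.append(np.bincount(codes.ravel(), minlength=2 ** n_filters))
    return np.array(feats)
```

Each spatial location contributes one integer code in [0, 2^n_filters), so the histogram has a fixed length regardless of image size, a convenient property for the SVM stage.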

 

 

 



Friday, September 26, 2014

Compressive Earth Observatory: An Insight from AIRS/AMSU Retrievals (and a comment)

This is exciting but I have included my thoughts below:

Compressive Earth Observatory: An Insight from AIRS/AMSU Retrievals by Ardeshir Mohammad Ebtehaj, Efi Foufoula-Georgiou, Gilad Lerman, Rafael Luis Bras
We demonstrate that the global fields of temperature, humidity and geopotential heights admit a nearly sparse representation in the wavelet domain, offering a viable path forward to explore new paradigms of sparsity-promoting assimilation and compressive retrieval of spaceborne earth observations. We illustrate this idea using retrieval products of the Atmospheric Infrared Sounder (AIRS) and Advanced Microwave Sounding Unit (AMSU) on board the Aqua satellite. The results reveal that the sparsity of the fields of temperature and geopotential height is relatively pressure-independent while atmospheric humidity fields are typically less sparse at higher pressures. Using the sparsity prior, we provide evidence that the global variability of these land-atmospheric states can be accurately estimated from space in a compressed form, using a small set of randomly chosen measurements/retrievals.
 
From the paper we can see:


In other words, our sensing matrices are obtained from an identity matrix in which we have randomly eliminated 55% and 65% of its rows, respectively. In this case, it is easy to show that the sensing matrix has the RIP property for which the CS in (5) can lead to an accurate and successful recovery (see, Section B in Appendix). 
 
So it looks like inpainting. And from the conclusion, we have:
 
 
 While progress has been made recently in developing sparse digital image acquisition in visible bands [33], development of sparse-remote-sensing instruments for earth observations from space in microwave and infrared wavelengths remains an important challenge in the coming years. However, our results suggest that, even under the current sensing protocols, transmitting, storing, and processing only a few randomly chosen pixel-samples of the primary land-atmospheric states can be advantageously exploited for a speedy reconstruction of the entire sensor’s field of view with a notable degree of accuracy. The implications of such a capability cannot be overstated for real-time tracking and data assimilation of extreme land-atmospheric phenomena in global early warning systems.
I like the fact that the findings place current remote sensing systems within the larger picture of how to do sampling, and how future instruments might be an extension of that through the use of compressive sensing.
 
There is the potential for impressionable kids to think that compressive sensing and inpainting might generally be the same thing, as the title of this article on Wired might imply. It is a distinction that is difficult to communicate, and to this day we still have people mixing up compressive sensing and inpainting. What is hard to get across is that in order to perform compressive sensing, you generally oversample the signal so that measurements are actually redundant. Inpainting, on the other hand, makes it look like we are getting something for nothing (by avoiding sampling in some part of the signal - an image in general - we can recover the full image), i.e. the algorithm somehow discovers something that was never sensed in the first place. That is not what we have here.

These two views can be coincident only if you show that the field being measured is actually spread out, i.e. incoherent with the measurement system being used (here it is point-like). This is why the authors spend a large part of the paper showing that the field is sparse in a dictionary (low-frequency wavelets) and that sampling at specific locations is an adequate incoherent measurement system... at that scale.
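To make the measurement model concrete, here is a toy version (mine, not the authors' setup): keep a random 45% of the samples of a signal that is sparse in an orthonormal dictionary (DCT here instead of wavelets), then recover it with plain ISTA; solver and parameters are illustrative:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis; columns are the dictionary atoms."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[:, None] + 1) * k[None, :] / (2 * n))
    D[:, 0] *= 1 / np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def ista(A, y, lam=0.005, n_iter=3000):
    """Iterative soft-thresholding for min_x 0.5||y - Ax||^2 + lam||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + A.T @ (y - A @ x) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

rng = np.random.default_rng(0)
n = 64
D = dct_matrix(n)
coef = np.zeros(n)
coef[[1, 5, 9]] = [2.0, -1.5, 1.0]         # the field: sparse in the dictionary
signal = D @ coef
keep = np.sort(rng.choice(n, size=int(0.45 * n), replace=False))
Phi = np.eye(n)[keep]                      # identity with 55% of rows removed
y = Phi @ signal                           # the observed point samples
x_hat = ista(Phi @ D, y)
recon = D @ x_hat                          # reconstruction of the full signal
```

The sensing matrix is literally a row-subsampled identity, the same structure as in the quoted passage; recovery works because the DCT atoms are spread out relative to point sampling.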
 
I wish the authors had written a sentence on that because there are a lot of impressionable kids on the interwebs.
 
 

Thursday, September 25, 2014

Alternating proximal gradient method for sparse nonnegative Tucker decomposition - implementation -


Multi-way data arises in many applications such as electroencephalography (EEG) classification, face recognition, text mining and hyperspectral data analysis. Tensor decomposition has been commonly used to find the hidden factors and elicit the intrinsic structures of the multi-way data. This paper considers sparse nonnegative Tucker decomposition (NTD), which is to decompose a given tensor into the product of a core tensor and several factor matrices with sparsity and nonnegativity constraints. An alternating proximal gradient method (APG) is applied to solve the problem. The algorithm is then modified to handle sparse NTD with missing values. The per-iteration cost of the algorithm is estimated to scale with the data size, and global convergence is established under fairly loose conditions. Numerical experiments on both synthetic and real world data demonstrate its superiority over a few state-of-the-art methods for (sparse) NTD from partial and/or full observations. The MATLAB code, along with demos, is accessible from the author's homepage.
The implementation is on Yangyang Xu's page.
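For intuition on the proximal step at the heart of APG: the prox of t||X||_1 plus a nonnegativity constraint is simply "shift down by t and clip at zero". The sketch below (mine) applies it in a toy matrix, rather than tensor, factorization; it is a stand-in for the flavor of the updates, not the paper's algorithm:

```python
import numpy as np

def prox_nonneg_l1(X, t):
    """Prox of t*||X||_1 + indicator(X >= 0): shift down by t, clip at zero."""
    return np.maximum(X - t, 0.0)

def apg_nonneg_sparse(M, rank=2, lam=0.01, n_iter=1000, seed=0):
    """Toy alternating proximal gradient for M ~ U V with U >= 0 sparse
    and V >= 0 (a much-reduced stand-in for the Tucker factor updates)."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U, V = rng.random((m, rank)), rng.random((rank, n))
    for _ in range(n_iter):
        Lu = np.linalg.norm(V @ V.T, 2) + 1e-12   # Lipschitz constant for U-step
        U = prox_nonneg_l1(U - (U @ V - M) @ V.T / Lu, lam / Lu)
        Lv = np.linalg.norm(U.T @ U, 2) + 1e-12   # Lipschitz constant for V-step
        V = np.maximum(V - U.T @ (U @ V - M) / Lv, 0.0)
    return U, V
```

Each factor update is one gradient step on the smooth fit term followed by the closed-form prox, which is why the per-iteration cost stays low.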
 
 

Wednesday, September 24, 2014

Stochastic Coordinate Coding (SCC) - implementation -

As exemplified by this work, genomics is bound to have a profound effect on the design of new algorithms.


Stochastic Coordinate Coding and Its Application for Drosophila Gene Expression Pattern Annotation by Binbin Lin, Qingyang Li, Qian Sun, Ming-Jun Lai, Ian Davidson, Wei Fan, Jieping Ye

\textit{Drosophila melanogaster} has been established as a model organism for investigating the fundamental principles of developmental gene interactions. The gene expression patterns of \textit{Drosophila melanogaster} can be documented as digital images, which are annotated with anatomical ontology terms to facilitate pattern discovery and comparison. The automated annotation of gene expression pattern images has received increasing attention due to the recent expansion of the image database. The effectiveness of gene expression pattern annotation relies on the quality of feature representation. Previous studies have demonstrated that sparse coding is effective for extracting features from gene expression images. However, solving sparse coding remains a computationally challenging problem, especially when dealing with large-scale data sets and learning large size dictionaries. In this paper, we propose a novel algorithm to solve the sparse coding problem, called Stochastic Coordinate Coding (SCC). The proposed algorithm alternatively updates the sparse codes via just a few steps of coordinate descent and updates the dictionary via second order stochastic gradient descent. The computational cost is further reduced by focusing on the non-zero components of the sparse codes and the corresponding columns of the dictionary only in the updating procedure. Thus, the proposed algorithm significantly improves the efficiency and the scalability, making sparse coding applicable for large-scale data sets and large dictionary sizes. Our experiments on Drosophila gene expression data sets show that the proposed algorithm achieves one or two orders of magnitude speedup compared to the state-of-the-art sparse coding algorithm.

The attendant implementation of SCC is on Jieping Ye's software page.
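The coordinate-descent half of SCC can be sketched in a few lines (my reading of the abstract, not the released code): a handful of soft-thresholding sweeps over the coordinates, warm-startable from a previous code, assuming unit-norm dictionary columns:

```python
import numpy as np

def cd_sparse_code(D, x, lam=0.1, n_sweeps=3, z=None):
    """A few sweeps of coordinate descent on the lasso
    min_z 0.5||x - Dz||^2 + lam||z||_1, assuming unit-norm columns of D.
    SCC's idea (as I read it) is that a few warm-started sweeps like this
    suffice inside a dictionary-learning loop."""
    n_atoms = D.shape[1]
    z = np.zeros(n_atoms) if z is None else z.copy()
    r = x - D @ z                                 # current residual
    for _ in range(n_sweeps):
        for j in range(n_atoms):
            zj_old = z[j]
            rho = D[:, j] @ r + zj_old            # partial correlation
            z[j] = np.sign(rho) * max(abs(rho) - lam, 0.0)  # soft-threshold
            r += D[:, j] * (zj_old - z[j])        # incremental residual update
    return z
```

Because the residual is updated incrementally and zero coordinates stay cheap to skip, the per-sample cost tracks the number of non-zero code components, the source of the speedup the abstract describes.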
 
 

Monday, September 22, 2014

Book / Proceedings / Lecture Notes: Statistical physics, Optimization, Inference and Message-Passing algorithms



Last year, there was an "Ecole de Physique" in Cargese on Statistical physics, Optimization, Inference and Message-Passing algorithms. Florent Krzakala, one of the organizers, mentioned earlier today that they had released the lecture notes of the talks. They are all on this page

Autumn School, September 30 -- October 11, 2013


Proceedings of all lectures


A book containing the proceedings of all lectures, titled "Statistical Physics, Optimization, Inference, and Message-Passing Algorithms" and edited by F. Krzakala, F. Ricci-Tersenghi, L. Zdeborova, R. Zecchina, E. W. Tramel, and L. F. Cugliandolo, is in preparation at Oxford University Press. A preliminary version of some chapters can be found online on arXiv:


 
 

Sunday, September 21, 2014

Unfiltered Access



It happened last season. When Franck and I decided to program Season 1 of the Paris Machine Learning meetup, we initially thought the whole thing would run its course pretty quickly. Indeed, our interest was to get people to talk about algorithms as opposed to the specific web/language technologies that are usually the focus of technical meetups these days.

Over the course of the Parisian Winter, we both realized that people who came to the meetups were not scared by difficult subjects. In fact, the feedback suggested attendees were sick of dumbed-down stories on important topics. While the presentations were probably less heavy than usual academic talks, quite a few academics even told us they liked the format. Deep down, it really was a moment of nostalgia. Those instances reminded people of how great they were when they were learning in their earlier years. They remembered how exceptional those moments were; they craved those unique times. The meetups brought back that message: you are not stupid, you are decrypting the world and you are not alone.

Over the Parisian Spring, we realized something important. Some of the best people in the world could not come to Paris. Not that they did not want to, but, much like long distance runners, most have reached the Zone and they don't want to deviate from it for some limited exposure. We realized that there had to be a better way, so we started with Skype or hangouts to get them to speak to our audience for 10 to 15 minutes remotely.

Another realization crystallized at meetup #10. I had seen a video of Brenda McCowan, who was talking about what her group was doing in dolphin communication, a faraway concern to people whose livelihood is to predict the next click. Putting this in the context of our meetup's theme, that work is hard Unsupervised Learning; it's the stuff of exploration, think Champollion but with animals and no Rosetta Stone: we had to get her on the meetup schedule. Because Brenda is very busy and because this was probably an unexpected invitation from some crazy French guys, she kindly accepted a five-minute Q&A with our crowd on Skype after we had locally played the video that had been recorded earlier by the good folks at NIMBioS. The untold promise was that it would be one of the least impactful events in her professional life: answer technical questions for five minutes from folks 8,000 miles away who had just seen her earlier academic pitch. In the end, the exchange lasted longer.

I am sure I am not the only person in the audience who realized this, but for the first time in our lives, when we were asking questions to Dr. Brenda McCowan, we were becoming actors in a really technical National Geographic feature film... "sans" the post-production, "sans" the dumbing down of a traditional filtered Q&A, "sans" the delicate narrative of a show. We, the meetup audience, whose inner 9-year-old kids had given up on our own dreams of talking to Flipper when we decided to be serious and eventually have jobs, realized we were having a technically insightful conversation with one of the few people on Earth who had the interest, knowledge and means of potentially understanding dolphin communication. For a moment, more than a hundred "wiser" 9-year-old kids were back in business.


Props to NIMBioS, Brenda McCowan, meetup.com, Skype, TheFamily, HopWork and DojoCrea for making this unique moment happen.

In light of the success of the Paris Machine Learning Applications Group, I just started two new meetups in Paris: 
Credit Photo: Hans Hillewaert - The Rosetta Stone in the British Museum.
 