
Tuesday, July 28, 2015

Compressive Sensing for #IoT: Photoplethysmography-Based Heart Rate Monitoring in Physical Activities via Joint Sparse Spectrum Reconstruction, TROIKA

 
 
Zhilin just sent me the following:
 
Hi Igor,

I hope all is well with you!

In recent years, I have been working on signal processing for wearable health monitoring, such as the processing of vital signs in smart watches and other wearables. In particular, I've applied compressed sensing to this area and achieved some success on heart rate monitoring for fitness tracking and health monitoring. So I think you and your blog's readers may be interested in the following work by my collaborators and me:
 
Zhilin Zhang, Zhouyue Pi, Benyuan Liu, TROIKA: A General Framework for Heart Rate Monitoring Using Wrist-Type Photoplethysmographic Signals During Intensive Physical Exercise, IEEE Transactions on Biomedical Engineering, vol. 62, no. 2, pp. 522-531, February 2015
(preprint: http://arxiv.org/abs/1409.5181)

Zhilin Zhang, Photoplethysmography-Based Heart Rate Monitoring in Physical Activities via Joint Sparse Spectrum Reconstruction, IEEE Transactions on Biomedical Engineering, vol. 62, no. 8, pp. 1902-1910, August 2015
(preprint: http://arxiv.org/abs/1503.00688)


In fact, I think the problem of photoplethysmography-based heart rate monitoring can be formulated using various kinds of compressed sensing models, such as the multiple measurement vector (MMV) model (as shown in my second paper), gridless compressed sensing models (also mentioned in my second paper), and time-varying sparsity models. Since the data are available online (the download link is given in my papers), I hope they encourage compressed sensing researchers to join this area and reveal the potential value of compressed sensing in these real-life problems.


I would very much appreciate it if you could introduce my work on your blog.


Thank you!

Best regards,
Zhilin
Thanks Zhilin! And yes, I am glad to cover work on how compressive sensing and related techniques can make sense of IoT-type sensors (and work that comes with datasets!). Without further ado:


TROIKA: A General Framework for Heart Rate Monitoring Using Wrist-Type Photoplethysmographic Signals During Intensive Physical Exercise by Zhilin Zhang, Zhouyue Pi, Benyuan Liu

Heart rate monitoring using wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. So far few works have studied this problem. In this work, a general framework, termed TROIKA, is proposed, which consists of signal decomposiTion for denoising, sparse signal RecOnstructIon for high-resolution spectrum estimation, and spectral peaK trAcking with verification. The TROIKA framework has high estimation accuracy and is robust to strong motion artifacts. Many variants can be straightforwardly derived from this framework. Experimental results on datasets recorded from 12 subjects during fast running at a peak speed of 15 km/hour showed that the average absolute error of heart rate estimation was 2.34 beats per minute (BPM), and the Pearson correlation between the estimates and the ground-truth heart rate was 0.992. This framework is of great value to wearable devices such as smart watches which use PPG signals to monitor heart rate for fitness.
 

Photoplethysmography-Based Heart Rate Monitoring in Physical Activities via Joint Sparse Spectrum Reconstruction by Zhilin Zhang

Goal: A new method for heart rate monitoring using photoplethysmography (PPG) during physical activities is proposed. Methods: It jointly estimates the spectra of PPG signals and simultaneous acceleration signals, using the multiple measurement vector model in sparse signal recovery. Due to a common sparsity constraint on the spectral coefficients, the method can easily identify and remove spectral peaks of motion artifact (MA) in PPG spectra. Thus, it does not need any extra signal processing module to remove MA as in some other algorithms. Furthermore, seeking the spectral peaks associated with heart rate is simplified. Results: Experimental results on 12 PPG datasets sampled at 25 Hz and recorded during subjects' fast running showed that it had high performance. The average absolute estimation error was 1.28 beats per minute and the standard deviation was 2.61 beats per minute. Conclusion and Significance: These results show that the method has great potential to be used for PPG-based heart rate monitoring in wearable devices for fitness tracking and health monitoring.
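Readers who want to play with the MMV formulation Zhilin mentions can start from a minimal sketch like the one below. It is not the authors' code: the toy PPG and acceleration signals, the frequency grid, the MultiTaskLasso penalty and the peak-removal rule are all illustrative choices; the point is only to show how a joint row-sparsity constraint lets the acceleration channel flag motion-artifact peaks in the PPG spectrum.

```python
# A minimal sketch (not the authors' code) of the MMV idea: PPG and acceleration
# share one fine frequency grid, their spectra are recovered jointly with a
# row-sparsity penalty, and PPG peaks that also dominate the acceleration
# spectrum are discarded as motion artifacts. All signals and parameters are toy.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

fs = 25.0                       # sampling rate used in the papers (25 Hz)
N = 200                         # 8-second window
t = np.arange(N) / fs

# toy data: heart rate at 1.8 Hz (108 BPM), motion artifact at 2.5 Hz
rng = np.random.default_rng(0)
ppg = np.cos(2 * np.pi * 1.8 * t) + 0.8 * np.cos(2 * np.pi * 2.5 * t) + 0.05 * rng.standard_normal(N)
acc = np.cos(2 * np.pi * 2.5 * t) + 0.05 * rng.standard_normal(N)

# redundant real cosine/sine dictionary on a fine grid (high-resolution spectrum)
freqs = np.linspace(0.5, 4.0, 400)              # 30 to 240 BPM
Phi = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),
                 np.sin(2 * np.pi * np.outer(t, freqs))])

# MMV / joint-sparse recovery: both channels are forced to share the same support
Y = np.column_stack([ppg, acc])
coef = MultiTaskLasso(alpha=0.05, max_iter=5000).fit(Phi, Y).coef_   # (2, 2*len(freqs))

spec = np.sqrt(coef[:, :len(freqs)] ** 2 + coef[:, len(freqs):] ** 2)
ppg_spec, acc_spec = spec[0], spec[1]

# crude motion-artifact removal: suppress PPG peaks where the accelerometer is active
cleaned = np.where(acc_spec > 0.5 * acc_spec.max(), 0.0, ppg_spec)
print("estimated heart rate: %.1f BPM" % (60 * freqs[np.argmax(cleaned)]))
```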
 
 

Monday, July 27, 2015

Random Mappings Designed for Commercial Search Engines - implementation -

The goal of today's paper is to turn various non-text documents, represented as vectors, into sparse representations that behave like text, using thresholded random projections. From the paper:
Although proving that our random mapping scheme works is involved, the scheme is remarkably simple. Our corpus X is a finite collection of vectors in R^d, normalized to have unit l_2 norm. To transform each vector in X, multiply each vector by a random matrix, then threshold each element.


Random mappings designed for commercial search engines by Roger Donaldson, Arijit Gupta, Yaniv Plan, Thomas Reimer

We give a practical random mapping that takes any set of documents represented as vectors in Euclidean space and then maps them to a sparse subset of the Hamming cube while retaining ordering of inter-vector inner products. Once represented in the sparse space, it is natural to index documents using commercial text-based search engines which are specialized to take advantage of this sparse and discrete structure for large-scale document retrieval. We give a theoretical analysis of the mapping scheme, characterizing exact asymptotic behavior and also giving non-asymptotic bounds which we verify through numerical simulations. We balance the theoretical treatment with several practical considerations; these allow substantial speed up of the method. We further illustrate the use of this method on search over two real data sets: a corpus of images represented by their color histograms, and a corpus of daily stock market index values.
The Python code used to generate the results of the paper, including example searches run with the Whoosh search engine for Python, is available at: https://gitlab.com/dgpr-sparse-search/code
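To make the quoted scheme concrete, here is a hedged sketch (not the authors' implementation, which lives at the GitLab link above): multiply each unit-norm document vector by a Gaussian random matrix and keep only the coordinates above a threshold, which yields the sparse binary "words" a text engine can index. The projection dimension m and threshold tau are illustrative.

```python
# A minimal sketch (not the authors' code) of the thresholded random mapping:
# project each unit-norm vector with a Gaussian matrix, keep coordinates above a
# threshold, and check that retrieval scores in the sparse binary space roughly
# preserve the ordering of the original inner products. m and tau are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
d, m, tau = 64, 2048, 2.0          # input dimension, projection dimension, threshold

X = rng.standard_normal((100, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)    # corpus of unit l2-norm vectors

A = rng.standard_normal((m, d))                  # random projection matrix

def to_sparse_binary(x):
    """Map a unit vector to a sparse subset of the Hamming cube."""
    return (A @ x > tau).astype(np.uint8)        # only a few coordinates switch on

B = np.array([to_sparse_binary(x) for x in X])
print("average number of 'on' coordinates per document:", B.sum(axis=1).mean())

# retrieval scores: overlap counts in the sparse space vs. original inner products
q = X[0]
scores_sparse = (B * to_sparse_binary(q)).sum(axis=1)
scores_orig = X @ q
ranks = lambda s: np.argsort(np.argsort(s))
print("rank correlation of the two score orderings:",
      np.corrcoef(ranks(scores_sparse), ranks(scores_orig))[0, 1])
```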





Saturday, July 25, 2015

Saturday Morning Video: Reverse Engineering the Human Visual System


At CVPR this year, another plenary speaker, Jack Gallant, talked about Reverse Engineering the Human Visual System. The video is here.
Let us note the use of Blender to produce ground-truth scenes, which is precisely what other researchers (see A Probabilistic Theory of Deep Learning by Ankit B. Patel, Tan Nguyen, Richard G. Baraniuk) suggest using in order to remove nuisance parameters and improve learning.
Abstract of the talk:
The human brain is the most sophisticated image processing system known, capable of impressive feats of recognition and discrimination under challenging natural conditions. Reverse-engineering the brain might enable us to design artificial systems with the same capabilities. My laboratory uses a data-driven system identification approach to tackle this reverse-engineering problem. Our approach consists of four broad stages. First, we use functional MRI to measure brain activity while people watch naturalistic movies. We divide these data into two parts, one used to fit models and one for testing model predictions. Second, we use a system identification framework (based on multiple linearizing feature spaces) to model activity measured at each point in the brain. Third, we inspect the most accurate models to understand how the brain represents low-, mid- and high-level information in the movies. Finally, we use the estimated models to decode brain activity, reconstructing the structural and semantic content in the movies. Any effort to reverse-engineer the brain is inevitably limited by the spatial and temporal resolution of brain measurements, and at this time the resolution of human brain measurements is relatively poor. Still, as measurement technology progresses this framework could inform development of biologically-inspired computer vision systems, and it could aid in development of practical new brain reading technologies.
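For readers who want a concrete, if cartoonish, handle on the encoding/decoding framework described above, here is a toy illustration; it is in no way Gallant's pipeline. One linear model per "voxel" is fit from stimulus features with ridge regression, and a held-out stimulus is identified by matching its measured response against the model predictions. All dimensions, the synthetic weights and the noise level are made up.

```python
# A toy sketch of the encoding/decoding idea (not Gallant's pipeline): fit a
# linearized model per voxel with ridge regression, then identify which held-out
# stimulus produced a new response by correlating it with model predictions.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 300, 20, 50, 200

W_true = rng.standard_normal((n_feat, n_vox)) * (rng.random((n_feat, n_vox)) < 0.1)
F_train = rng.standard_normal((n_train, n_feat))     # stimulus features, training movies
F_test = rng.standard_normal((n_test, n_feat))       # held-out stimuli
Y_train = F_train @ W_true + 0.5 * rng.standard_normal((n_train, n_vox))
Y_test = F_test @ W_true + 0.5 * rng.standard_normal((n_test, n_vox))

enc = Ridge(alpha=10.0).fit(F_train, Y_train)        # one linear encoding model per voxel
pred = enc.predict(F_test)                           # predicted responses to held-out stimuli

# decoding by identification: pick the candidate whose prediction best matches
correct = sum(int(np.argmax([np.corrcoef(Y_test[i], pred[j])[0, 1]
                             for j in range(n_test)]) == i)
              for i in range(n_test))
print("identification accuracy: %d / %d" % (correct, n_test))
```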
 
 

Friday, July 24, 2015

Forward - Backward Greedy Algorithms for Atomic Norm Regularization - implementation -

Nikhil just let me know of version 2 of his preprint and of the code that goes with it. Let us note, as shown in the figure below, that when it is easy to compute, the atomic norm does a better job than the nuclear norm relaxation.


Forward - Backward Greedy Algorithms for Atomic Norm Regularization by Nikhil Rao, Parikshit Shah, Stephen Wright

In many signal processing applications, the aim is to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as "atoms" allow us to define "atomic norms" that can be used to formulate convex regularizations for the reconstruction problem. Efficient algorithms are available to solve these formulations in certain special cases, but an approach that works well for general atomic norms, both in terms of speed and reconstruction accuracy, remains to be found. This paper describes an optimization algorithm called CoGEnT that produces solutions with succinct atomic representations for reconstruction problems, generally formulated with atomic-norm constraints. CoGEnT combines a greedy selection scheme based on the conditional gradient approach with a backward (or "truncation") step that exploits the quadratic nature of the objective to reduce the basis size. We establish convergence properties and validate the algorithm via extensive numerical experiments on a suite of signal processing applications. Our algorithm and analysis also allow for inexact forward steps and for occasional enhancements of the current representation to be performed. CoGEnT can outperform the basic conditional gradient method, and indeed many methods that are tailored to specific applications, when the enhancement and truncation steps are defined appropriately. We also introduce several novel applications that are enabled by the atomic-norm framework, including tensor completion, moment problems in signal processing, and graph deconvolution.
The attendant implementation is available on Nikhil's code page.
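For intuition, here is a hedged sketch of a forward-backward greedy scheme in the spirit of CoGEnT, specialized to the l1 ball (atoms = signed canonical vectors) so that the forward step has a closed form. It is not the authors' implementation: the Frank-Wolfe step size, the backward tolerance and the constraint radius tau are illustrative choices.

```python
# A minimal sketch (not the authors' CoGEnT code) of forward-backward greedy
# selection for an atomic-norm-constrained least-squares problem, with the l1
# ball as the atomic set. The backward (truncation) step drops the weakest atom
# whenever doing so barely changes the objective.
import numpy as np

def forward_backward_l1(A, y, tau, iters=200, backward_tol=1e-3):
    m, n = A.shape
    x = np.zeros(n)
    support = set()
    for t in range(iters):
        grad = A.T @ (A @ x - y)
        # forward (conditional gradient) step: best atom of the l1 ball of radius tau
        i = int(np.argmax(np.abs(grad)))
        atom = np.zeros(n)
        atom[i] = -tau * np.sign(grad[i])
        gamma = 2.0 / (t + 2.0)                       # standard Frank-Wolfe step size
        x = (1 - gamma) * x + gamma * atom
        support.add(i)
        # backward step: try removing the atom with the smallest coefficient
        if len(support) > 1:
            j = min(support, key=lambda k: abs(x[k]))
            x_try = x.copy()
            x_try[j] = 0.0
            if np.sum((A @ x_try - y) ** 2) <= np.sum((A @ x - y) ** 2) + backward_tol:
                x = x_try
                support.discard(j)
    return x

# usage on a toy sparse recovery problem
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[[3, 50, 120]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = forward_backward_l1(A, y, tau=np.abs(x_true).sum())
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```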
 
 

Thursday, July 23, 2015

Slides and Video: What's Wrong with Deep Learning?





This was an invited talk by Yann LeCun at CVPR 2015. The video is here and the slides are here: What's Wrong with Deep Learning? (h/t Andrei and Gabriel)

I note that what is missing in deep learning revolves around unsupervised learning, which is also the subject of most open questions in compressive sensing and advanced matrix factorization, namely the issues of encoding, decoding and regularizers.


 
 

Wednesday, July 22, 2015

Optimal approximate matrix product in terms of stable rank

Here is a new result for Randomized Numerical Linear Algebra.

Optimal approximate matrix product in terms of stable rank by Michael B. Cohen, Jelani Nelson, David P. Woodruff

We prove, using the subspace embedding guarantee in a black box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map having $m = O(\tilde{r}/\varepsilon^2)$ rows. Here $\tilde{r}$ is the maximum stable rank, i.e. squared ratio of Frobenius and operator norms, of the two matrices being multiplied. This is a quantitative improvement over previous work of [MZ11, KVZ14], and is also optimal for any oblivious dimensionality-reducing map. Furthermore, due to the black box reliance on the subspace embedding property in our proofs, our theorem can be applied to a much more general class of sketching matrices than what was known before, in addition to achieving better bounds. For example, one can apply our theorem to efficient subspace embeddings such as the Subsampled Randomized Hadamard Transform or sparse subspace embeddings, or even with subspace embedding constructions that may be developed in the future.
Our main theorem, via connections with spectral error matrix multiplication shown in prior work, implies quantitative improvements for approximate least squares regression and low rank approximation. Our main result has also already been applied to improve dimensionality reduction guarantees for $k$-means clustering [CEMMP14], and implies new results for nonparametric regression [YPW15].
We also separately point out that the proof of the "BSS" deterministic row-sampling result of [BSS12] can be modified to show that for any matrices $A, B$ of stable rank at most $\tilde{r}$, one can achieve the spectral norm guarantee for approximate matrix multiplication of $A^T B$ by deterministically sampling $O(\tilde{r}/\varepsilon^2)$ rows that can be found in polynomial time. The original result of [BSS12] was for rank instead of stable rank. Our observation leads to a stronger version of a main theorem of [KMST10].
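As a quick numerical illustration of the statement (this is not code from the paper), one can approximate A^T B by (SA)^T (SB) with a Gaussian dimensionality-reducing map S and watch the spectral-norm error, measured relative to ||A|| ||B||, shrink as the number of rows m grows past the stable rank. The matrix sizes and the values of m below are arbitrary.

```python
# A small numerical illustration (not from the paper): sketched approximate matrix
# multiplication (S A)^T (S B) with a Gaussian map S, and the spectral-norm error
# relative to ||A||*||B|| as the sketch size m grows. Sizes and m are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 2000, 30, 40

# matrices with low stable rank: a few dominant directions plus small noise
A = rng.standard_normal((n, 5)) @ rng.standard_normal((5, d1)) + 0.01 * rng.standard_normal((n, d1))
B = rng.standard_normal((n, 5)) @ rng.standard_normal((5, d2)) + 0.01 * rng.standard_normal((n, d2))

stable_rank = lambda M: np.linalg.norm(M, 'fro') ** 2 / np.linalg.norm(M, 2) ** 2
print("stable ranks: %.2f, %.2f" % (stable_rank(A), stable_rank(B)))

exact = A.T @ B
for m in [50, 100, 200, 400]:
    S = rng.standard_normal((m, n)) / np.sqrt(m)     # oblivious dimensionality-reducing map
    err = np.linalg.norm((S @ A).T @ (S @ B) - exact, 2)
    print("m = %4d   error / (||A|| ||B||) = %.4f"
          % (m, err / (np.linalg.norm(A, 2) * np.linalg.norm(B, 2))))
```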
 
 

Tuesday, July 21, 2015

Randomized sketches for kernels: Fast and optimal non-parametric regression

Interesting! Sketching kernels:


Kernel ridge regression (KRR) is a standard method for performing non-parametric regression over reproducing kernel Hilbert spaces. Given n samples, the time and space complexity of computing the KRR estimate scale as O(n^3) and O(n^2) respectively, and so are prohibitive in many cases. We propose approximations of KRR based on m-dimensional randomized sketches of the kernel matrix, and study how small the projection dimension m can be chosen while still preserving minimax optimality of the approximate KRR estimate. For various classes of randomized sketches, including those based on Gaussian and randomized Hadamard matrices, we prove that it suffices to choose the sketch dimension m proportional to the statistical dimension (modulo logarithmic factors). Thus, we obtain fast and minimax optimal approximations to the KRR estimate for non-parametric regression.
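Here is a minimal sketch of the idea (not the authors' code): restrict the KRR coefficient vector to the row space of a random m x n sketch S, so that only an m x m linear system has to be solved instead of an n x n one. The RBF kernel, the sketch size m and the regularization level are illustrative choices.

```python
# A minimal sketch (not the authors' code) of sketched kernel ridge regression:
# restrict the KRR coefficients alpha to the form S^T beta for a random m x n
# sketch S, so only an m x m system is solved. Kernel, m and lambda are toy choices.
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 1000, 50, 1e-3

x = np.sort(rng.uniform(0, 1, n))
y = np.sin(4 * np.pi * x) + 0.3 * rng.standard_normal(n)

# RBF kernel matrix
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.02 ** 2))

# exact KRR: alpha = (K + n*lam*I)^{-1} y, an O(n^3) solve
alpha_full = np.linalg.solve(K + n * lam * np.eye(n), y)

# sketched KRR: alpha = S^T beta, where beta solves an m x m system
S = rng.standard_normal((m, n)) / np.sqrt(m)
KS = K @ S.T                                         # n x m
beta = np.linalg.solve(S @ K @ KS + n * lam * (S @ KS), S @ (K @ y))
alpha_sketch = S.T @ beta

for name, a in [("exact   ", alpha_full), ("sketched", alpha_sketch)]:
    print(name, "training MSE: %.4f" % np.mean((K @ a - y) ** 2))
```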
 

Saturday, July 18, 2015

Saturday Morning Videos: Information Theory in Complexity Theory and Combinatorics (Simons Institute @ Berkeley)

Here are the videos and slides of this workshop on Information Theory in Complexity Theory and Combinatorics, organized at the Simons Institute at Berkeley. In the meantime, here are the links to pages that generally feature both the slides and the videos (and sometimes neither).

 
 

Friday, July 17, 2015

CSJob: Postdoc and/or a Graduate Studentship, Iowa State University

Namrata Vaswani just sent me the following:
 
  Dear Igor, 
Hope you are doing well. Could you please post this postdoc ad on your blog? Thanks much.
Namrata
http://www.ece.iastate.edu/~namrata
Sure Namrata! Namrata is looking for a postdoc and/or a graduate student to start in her group in Fall 2015, Spring 2016 or Fall 2016.
The proposed research lies at the intersection of machine learning for high dimensional problems and signal or information processing. Details can also be found at www.ece.iastate.edu/~namrata. Prof. Namrata Vaswani is looking for a postdoc to start in her group to work on theory and algorithms for online structured data matrix recovery problems such as online robust PCA.

Candidates with a Ph.D. in Electrical Engineering (EE), Mathematics/Applied Mathematics or Mathematical Statistics, and with a background related to the topics mentioned below, are encouraged to apply.

Namrata's research lies at the intersection of signal/information processing and machine learning for high dimensional problems. In recent years, her group has worked on developing and analyzing online algorithms for various high-dimensional structured data recovery problems such as online sparse matrix recovery (recursive recovery of sparse vector sequences) or dynamic compressed sensing, online robust principal component analysis (PCA), online matrix completion, sparse PCA, etc. Some ongoing work also involves proof-of-concept applications in video analytics and bioimaging. For more details, see her webpage (the two talks posted on this page will provide a good overview).

If you are interested, please email namrata@iastate.edu with the subject line `Postdoc application`. Please attach a copy of your resume, either a transcript (scanned or unofficial is fine) or a link to your webpage and a copy of your paper(s).
 
 

Thursday, July 16, 2015

Random forests and kernel methods

Thanks to Reddit, I just found this "Awesome Random Forest" page and added it to the list of highly technical reference pages. From the Reddit thread, I also noticed this reference linking random forests and kernel methods. Interesting. Without further ado:



Random forests and kernel methods by Erwan Scornet 

Abstract: Random forests are ensemble methods which grow trees as base learners and combine their predictions by averaging. Random forests are known for their good practical performance, particularly in high dimensional settings. On the theoretical side, several studies highlight the potentially fruitful connection between random forests and kernel methods. In this paper, we work out in full detail this connection. In particular, we show that by slightly modifying their definition, random forests can be rewritten as kernel methods (called KeRF for Kernel based on Random Forests) which are more interpretable and easier to analyze. Explicit expressions of KeRF estimates for some specific random forest models are given, together with upper bounds on their rate of consistency. We also show empirically that KeRF estimates compare favourably to random forest estimates.
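To make the forest-to-kernel connection tangible, here is a small illustration (not the paper's code): take K(x, z) to be the fraction of trees in which x and z fall into the same leaf, and compare the resulting kernel-weighted estimate with the forest's own prediction. The dataset and hyperparameters are arbitrary.

```python
# A small illustration (not the paper's code) of the forest-to-kernel connection:
# K(x, z) = fraction of trees in which x and z land in the same leaf, and the
# kernel-weighted average of training labels is compared to the forest prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 2))
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1]) + 0.1 * rng.standard_normal(500)

forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=0).fit(X, y)

X_test = rng.uniform(-1, 1, (5, 2))
leaves_train = forest.apply(X)          # (n_train, n_trees) leaf indices
leaves_test = forest.apply(X_test)      # (n_test, n_trees)

# kernel: how often a test point and a training point share a leaf across trees
K = (leaves_test[:, None, :] == leaves_train[None, :, :]).mean(axis=2)   # (n_test, n_train)
kernel_pred = (K @ y) / K.sum(axis=1)

print("forest prediction:", np.round(forest.predict(X_test), 3))
print("kernel prediction:", np.round(kernel_pred, 3))
```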
 
