Thursday, October 30, 2014

CfP: ICCP 2015 - IEEE International Conference on Computational Photography



Jason Holloway just sent me the following note:

Hi Igor,

We are announcing the call for papers for ICCP 2015. Can you help disseminate the CFP to your readers? (Or link to our Google+/Facebook page?)

Thanks!
~Jason (on behalf of the program committee for ICCP 2015)


The IEEE International Conference on Computational Photography (ICCP 2015, http://iccp.rice.edu/) seeks high-quality submissions in all areas related to computational photography.
The field of Computational Photography seeks to create new photographic and imaging functionalities and experiences that go beyond what is possible with traditional cameras and image processing tools. The IEEE International Conference on Computational Photography is organized with the vision of fostering the community of researchers, from many different disciplines, working on computational photography. We welcome all submissions that introduce new ideas to the field including, but not limited to, those in the following areas:
  • Computational cameras
  • Computational illumination
  • Computational optics (wavefront coding, compressive optical sensing, digital holography, …)
  • High-performance imaging (high-speed, hyper-spectral, high-dynamic range, thermal, confocal, …)
  • Multiple images and camera arrays
  • Sensor and illumination hardware
  • Scientific imaging and videography
  • Advanced image processing
  • Organizing and exploiting photo/video collections
  • Mobile imaging
  • Imaging for health and bio applications
Submissions should be full papers in the IEEE proceedings format. A typical ICCP paper has been 6-8 pages long. This is a rough guideline; no strict maximum length is imposed, and reviewers will be instructed to weigh the contribution of a paper relative to its length. Supplementary material can also be submitted. Details can be found at http://iccp.rice.edu/.
Submissions must be blind (i.e., they must not contain author names or affiliations). The reviewing process will be double blind.
Submissions must present original unpublished work. Furthermore, work submitted to ICCP cannot be submitted to another forum (journal, conference or workshop) during the ICCP reviewing period.
Important Dates:
Paper Submission: December 12, 2014
Supplementary Material Submission: December 15, 2014
Paper Decisions: February 11, 2015
 
 
Join the CompressiveSensing subreddit or the Google+ Community and post there !
Liked this entry ? subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on Linkedin.

NYC Event: “How the Age of Machine Consciousness is Transforming Our Lives”

Greg Charvat just sent me the following (see below). Let me state that the "consciousness" wording is not optimal for this blog, but I definitely appreciate technical people getting into a room and thinking about the next steps (I also like the 4Combinator approach, though I have no link to them), so here we are:

Hi Igor,

You might find this interesting.

I'm involved in a seminar series and panel discussion that will be held in NYC in November, on the topic of next-gen deep learning as applied to hard data such as image data or other measured data.  David Ferrucci (IBM Watson), Max Tegmark (frequently on PBS Nova), and Jonathan Rothberg (who created on-chip genetic sequencing) will be speaking and discussing.  I think your readers might be interested in this.

It's a free event, but you must register in advance so we can get the right quantities of food and drink.  Please feel free to share it with your blog readers.

Cheers,
Greg


I would like to let you know about an exciting event we are hosting in New York City in a couple of weeks:

“How the Age of Machine Consciousness is Transforming Our Lives”
·         Date and Time: Thursday, November 13, 2014, 7:00-9:00 PM
·         A cocktail reception in the SoHi room will follow the panel discussion and will provide a chance to meet our expert panel and learn more about career opportunities at 4Combinator

The expert panel includes:
·         David Ferrucci - Former VP of Watson Technologies who led development of the AI system that beat Jeopardy’s best
·         Max Tegmark – MIT professor and author of "Our Mathematical Universe" and "Consciousness as a State of Matter"
·         Jonathan Rothberg - Inventor of high speed DNA sequencing.  His latest venture, 4Combinator, aspires to transform medicine by integrating devices, deep learning and cloud computing

As you are getting an exclusive invitation ahead of a broader outreach program, please register your interest in attending the event in the next 24 hours to ensure your place at the event: https://4combinator-speaker-series.eventbrite.com
 
 

Higher Criticism for Large-Scale Inference: especially for Rare and Weak effects

Two papers on an interesting subject today with a common co-author. From a footnote in the first paper:

Our point here is not that HC [Higher Criticism] should replace formal methods using random matrix theory, but instead that HC can be used in structured settings where theory is not yet available. A careful comparison to formal inference using random matrix theory, not possible here, would illustrate the benefits of theoretical analysis of a specific situation, as exemplified by random matrix theory, over the direct application of a general procedure like HC.



Higher Criticism for Large-Scale Inference: especially for Rare and Weak effects by David Donoho, Jiashun Jin

In modern high-throughput data analysis, researchers perform a large number of statistical tests, expecting to find perhaps a small fraction of significant effects against a predominantly null background. Higher Criticism (HC) was introduced to determine whether there are any non-zero effects; more recently, it was applied to feature selection, where it provides a method for selecting useful predictive features from a large body of potentially useful features, among which only a rare few will prove truly useful.
In this article, we review the basics of HC in both the testing and feature selection settings. HC is a flexible idea, which adapts easily to new situations; we point out how it adapts to clique detection and bivariate outlier detection. HC, although still early in its development, is seeing increasing interest from practitioners; we illustrate this with worked examples. HC is computationally effective, which gives it a nice leverage in the increasingly more relevant "Big Data" settings we see today.
We also review the underlying theoretical "ideology" behind HC. The Rare/Weak (RW) model is a theoretical framework simultaneously controlling the size and prevalence of useful/significant items among the useless/null bulk. The RW model shows that HC has important advantages over better known procedures such as False Discovery Rate (FDR) control and Family-Wise Error Rate (FWER) control, in particular, certain optimality properties. We discuss the rare/weak phase diagram, a way to visualize clearly the class of RW settings where the true signals are so rare or so weak that detection and feature selection are simply impossible, and a way to understand the known optimality properties of HC.
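For readers new to HC, the statistic itself is simple to compute. The following minimal Python sketch (my own illustration, not code from the paper) contrasts the global null with a Rare/Weak alternative in which about 1% of the tests carry a modest effect:

```python
import numpy as np
from scipy.stats import norm

def higher_criticism(pvalues, alpha0=0.5):
    """Donoho-Jin HC statistic: the largest standardized discrepancy between
    the empirical CDF of the sorted p-values and the uniform (null) CDF,
    taken over the smallest alpha0 fraction of the p-values."""
    p = np.sort(np.asarray(pvalues))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-12)
    return hc[: max(1, int(alpha0 * n))].max()

rng = np.random.default_rng(0)
n = 10_000

# Global null: all p-values are uniform.
null_p = rng.uniform(size=n)

# Rare/Weak alternative: ~1% of the z-scores carry a true effect.
z = rng.normal(size=n)
signal = rng.uniform(size=n) < 0.01
z[signal] += 4.0
alt_p = 2 * norm.sf(np.abs(z))  # two-sided p-values

print(higher_criticism(null_p))  # stays small under the null
print(higher_criticism(alt_p))   # much larger: HC flags the rare effects
```

Under the null HC stays of modest size, while under the alternative it grows rapidly, even though no single test is individually overwhelming.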




Rare and Weak effects in Large-Scale Inference: methods and phase diagrams by Jiashun Jin, Tracy Ke

Often when we deal with `Big Data', the true effects we are interested in are Rare and Weak (RW). Researchers measure a large number of features, hoping to find perhaps only a small fraction of them to be relevant to the research in question; the effect sizes of the relevant features are individually small so the true effects are not strong enough to stand out for themselves.
Higher Criticism (HC) and Graphlet Screening (GS) are two classes of methods that are specifically designed for the Rare/Weak settings. HC was introduced to determine whether there are any relevant effects in all the measured features. More recently, HC was applied to classification, where it provides a method for selecting useful predictive features for trained classification rules. GS was introduced as a graph-guided multivariate screening procedure, and was used for variable selection.
We develop a theoretic framework where we use an Asymptotic Rare and Weak (ARW) model simultaneously controlling the size and prevalence of useful/significant features among the useless/null bulk. At the heart of the ARW model is the so-called phase diagram, which is a way to visualize clearly the class of ARW settings where the relevant effects are so rare or weak that desired goals (signal detection, variable selection, etc.) are simply impossible to achieve. We show that HC and GS have important advantages over better known procedures and achieve the optimal phase diagrams in a variety of ARW settings.
HC and GS are flexible ideas that adapt easily to many interesting situations. We review the basics of these ideas and some of the recent extensions, discuss their connections to existing literature, and suggest some new applications of these ideas.
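As a concrete anchor for the phase diagram discussion: in the standard ARW calibration for signal detection, the boundary separating the detectable from the undetectable region has a known closed form (restated here for reference from the earlier Donoho-Jin Higher Criticism literature):

```latex
% ARW calibration: among n tests, about n^{1-\beta} carry a signal of
% strength \mu_n = \sqrt{2 r \log n}, with 1/2 < \beta < 1 and 0 < r < 1.
% Reliable detection is possible iff r exceeds the detection boundary:
\rho^*(\beta) =
\begin{cases}
\beta - \tfrac{1}{2}, & \tfrac{1}{2} < \beta \le \tfrac{3}{4},\\[4pt]
\bigl(1 - \sqrt{1-\beta}\bigr)^2, & \tfrac{3}{4} < \beta < 1.
\end{cases}
% HC attains this boundary adaptively, without knowing \beta or r.
```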
 

Tuesday, October 28, 2014

Non-linear Causal Inference using Gaussianity Measures - implementation -

Rewatching Leon Bottou's talk yesterday, I was reminded of David Lopez-Paz's paper on the subject, which I had not mentioned before. I wonder whether this work could be helped by some of the tools developed in advanced matrix/tensor factorization:


In this paper we provide theoretical and empirical evidence of a type of asymmetry between causes and effects that is present when these are related via linear models contaminated with additive non-Gaussian noise. This asymmetry is found in the different degrees of Gaussianity of the residuals of linear fits in the causal and the anti-causal direction. More precisely, under certain conditions the distribution of the residuals is closer to a Gaussian distribution when the fit is made in the incorrect or anti-causal direction. The problem of non-linear causal inference is addressed by performing the analysis in an extended feature space. In this space the required computations can be efficiently performed using kernel techniques. The effectiveness of a method based on the asymmetry described is illustrated in a variety of experiments on both synthetic and real-world cause-effect pairs. In the experiments performed one observes the Gaussianization of the residuals if the model is fitted in the anti-causal direction. Furthermore, such a method is competitive with state-of-the-art techniques for causal inference.  
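The linear core of this asymmetry is easy to reproduce. The toy sketch below (my own illustration under assumed uniform noise, not the authors' kernelized method) fits ordinary least squares in both directions and compares how non-Gaussian the residuals are via excess kurtosis:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
n = 100_000

# Linear causal model x -> y with additive non-Gaussian (uniform) noise.
x = rng.uniform(-1, 1, size=n)  # cause, non-Gaussian
e = rng.uniform(-1, 1, size=n)  # noise, non-Gaussian, independent of x
y = x + e                       # effect

def residual_nongaussianity(u, v):
    """|excess kurtosis| of the residuals of the least-squares fit v ~ u
    (0 for a Gaussian; larger means farther from Gaussian)."""
    b = np.cov(u, v)[0, 1] / np.var(u)
    return abs(kurtosis(v - b * u))

causal = residual_nongaussianity(x, y)      # fit in the causal direction
anticausal = residual_nongaussianity(y, x)  # fit in the anti-causal direction

# The anti-causal residuals are closer to Gaussian, revealing x -> y.
print(causal, anticausal)
```

In the causal direction the residuals recover the uniform noise (far from Gaussian); in the anti-causal direction they are a mixture that is measurably closer to Gaussian, which is the asymmetry the paper exploits.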

The attendant implementation is here.



Related:
The Randomized Causation Coefficient - implementation -
 
 

CSJob : OCE Postdoctoral Fellow - Real-time 2D compressive Hyperspectral Imaging, Brisbane, Australia

Mingrui just sent me the following postdoc opportunity in Brisbane, Australia:
 
 
Dear Dr. Carron,


Could you please advertise the following postdoc position on your blog?


Thanks,


Mingrui
------------------------------------------------------------------ 
Mingrui Yang
Autonomous Systems Laboratory
CSIRO Digital Productivity
Queensland Centre for Advanced Technologies (QCAT)
1 Technology Court, Pullenvale QLD 4069, Australia
Thanks Mingrui, here it is:
 

OCE Postdoctoral Fellow - Real-time 2D compressive Hyperspectral Imaging

The Position:
CSIRO offers PhD graduates an opportunity to launch their scientific careers through our Office of the Chief Executive (OCE) Postdoctoral Fellowships. Successful applicants will work with leaders in the field of science and receive personal development and learning opportunities.
CSIRO Digital Productivity Flagship is seeking to appoint a highly motivated postdoctoral fellow to join a team of researchers and embedded engineers on an exciting cross-disciplinary project of real-time 2D hyperspectral sensing, which aims to design and implement a ground-breaking hyperspectral acquisition and analysis system based on compressive sensing theory.
Specifically you will:
  • Under the direction of senior research scientists, carry out innovative, impactful research of strategic importance to CSIRO that will, where possible, lead to novel and important scientific outcomes.
  • Develop and evaluate novel, innovative solutions to key research problems in the area of compressive sensing on hyperspectral imaging.
  • Undertake regular reviews of relevant literature and patents.
  • Produce high quality scientific and/or engineering papers suitable for publication in quality journals, for client reports and granting of patents.
Location: Pullenvale, Brisbane, Queensland
Salary: $78K to $88K plus up to 15.4% superannuation (pension fund)
Tenure: Up to 3 years
Reference: Q14/03258
To be considered you will hold a PhD (or will shortly satisfy the requirements of a PhD) in a relevant discipline, such as applied mathematics/statistics, computer science or electrical engineering.
You will also have:
  • Expertise in one or more areas of compressive sensing, sparse approximation, statistical analysis and signal processing.
  • Demonstrated ability to program in MATLAB (or similar) for data analysis.
  • Analytical skills and ability to solve complex conceptual problems through the application of scientific and engineering principles.
  • Excellent verbal and written communication skills.
  • The ability to work effectively as part of a multi-disciplinary, regionally dispersed research team, plus the motivation and discipline to carry out autonomous research.
Owing to terms of the fellowship, candidates must not have more than 3 years of relevant Postdoctoral experience.
About CSIRO: Australia is founding its future on science and innovation. Its national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) is a powerhouse of ideas, technologies and skills for building prosperity, growth, health and sustainability. It serves governments, industries, business and communities across the nation. Find out more! www.csiro.au
About CSIRO Digital Productivity Flagship: The Digital Productivity Flagship is focussed on Australia’s productivity challenge. We use data and digital technologies to address economic and developmental challenges.
Applications are open to Australian Citizens, Permanent Residents and International candidates. Relocation assistance will be provided if required.
Applications close 30 November 2014 (11:30pm AEST)
Position Details Q14/03258
 
 
 
 
 

Monday, October 27, 2014

Videos: MMDS 2014: Workshop on Algorithms for Modern Massive Data Sets

Thanks to Andrew Clegg, I found out that the videos of MMDS 2014: Workshop on Algorithms for Modern Massive Data Sets are out; they are all on the MMDS channel. All the slides are here. I have listed the videos here (and corrected at least one title).


     
Leon Bottou's talk (on the same topic he presented in Season 1 of the Paris Machine Learning Meetup)
 
 

BRTF: Robust Bayesian Tensor Factorization for Incomplete Multiway Data - implementation -

Here is an extension of Robust PCA to tensors:


Robust Bayesian Tensor Factorization for Incomplete Multiway Data by Qibin Zhao, Guoxu Zhou, Liqing Zhang, Andrzej Cichocki, Shun-ichi Amari

We propose a generative model for robust tensor factorization in the presence of both missing data and outliers. The objective is to explicitly infer the underlying low-CP-rank tensor capturing the global information and a sparse tensor capturing the local information (also considered as outliers), thus providing the robust predictive distribution over missing entries. The low-CP-rank tensor is modeled by multilinear interactions between multiple latent factors on which the column sparsity is enforced by a hierarchical prior, while the sparse tensor is modeled by a hierarchical view of Student-$t$ distribution that associates an individual hyperparameter with each element independently. For model learning, we develop an efficient closed-form variational inference under a fully Bayesian treatment, which can effectively prevent the overfitting problem and scales linearly with data size. In contrast to existing related works, our method can perform model selection automatically and implicitly without need of tuning parameters. More specifically, it can discover the groundtruth of CP rank and automatically adapt the sparsity inducing priors to various types of outliers. In addition, the tradeoff between the low-rank approximation and the sparse representation can be optimized in the sense of maximum model evidence. The extensive experiments and comparisons with many state-of-the-art algorithms on both synthetic and real-world datasets demonstrate the superiorities of our method from several perspectives.
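To make the model concrete, here is a small numpy sketch of the generative structure the abstract describes (low-CP-rank part plus sparse outliers plus dense noise, with missing entries). This is only my illustration of the data model, not the BRTF variational inference algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)
shape, rank = (20, 30, 40), 3

# Low-CP-rank part: sum of `rank` outer products of latent factor columns.
A = [rng.normal(size=(d, rank)) for d in shape]
low_rank = np.einsum('ir,jr,kr->ijk', A[0], A[1], A[2])

# Sparse outlier part: a few large entries. (The paper puts element-wise
# Student-t priors on these; here, just 1% large Gaussian spikes.)
S = np.zeros(shape)
outliers = rng.uniform(size=shape) < 0.01
S[outliers] = rng.normal(scale=20.0, size=outliers.sum())

# Observation: low-rank + outliers + dense noise, with ~20% entries missing.
noise = rng.normal(scale=0.1, size=shape)
observed = rng.uniform(size=shape) < 0.8
Y = np.where(observed, low_rank + S + noise, np.nan)
```

BRTF's task is the inverse problem: given only `Y`, recover the low-CP-rank part, the outliers, and the missing entries, with the CP rank itself inferred automatically.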
An implementation of BRTF is available on Qibin Zhao's software page.

 

Sunday, October 26, 2014

Ambition: Finding Water

We had 7 Minutes of Terror, a blockbuster about Curiosity's landing on Mars in the Summer of 2012. We now have Ambition for the upcoming landing of Rosetta's Philae on Comet 67P/Churyumov-Gerasimenko on November 12th, 2014. Well done, ESA!
 
 
 
 
 

Saturday Morning Video: Ravi Kannan -- Foundations of Data Science

Thanks to Laurent Duval for pointing out this video (in support of the book mentioned earlier: Book: "Foundations of Data Science" by John Hopcroft and Ravindran Kannan - link has been fixed -). Ravi provides a bird's-eye view of the field in this first lecture. Hat tip to CSA (IISc) for making the Big Data Initiative lecture videos available.
 
 
 
 
 

Friday, October 24, 2014

Book: "Foundations of Data Science" by John Hopcroft and Ravindran Kannan

I just found out about a draft version of "Foundations of Data Science", a new book by John Hopcroft and Ravindran Kannan, and very much like chapters 7 through 10, which touch upon themes we often talk about here on Nuit Blanche: Compressive Sensing (see also The Big Picture in Compressive Sensing), Advanced Matrix Factorization (see also the Advanced Matrix Factorization Jungle Page), complexity, streaming/sketching issues and Randomized Numerical Linear Algebra, Machine Learning, and more. Let us note that compressive sensing is introduced at the very end, much like in the other foundational book on signal processing featured this week (Books: "Foundations of Signal Processing" and "Fourier and Wavelet Signal Processing"). One wonders if a certain clarity would come earlier if the subject were introduced first, so that titles such as Compressive Sensing Demystified (by Frank Bucholtz and Jonathan M. Nichols) would not be needed. Here is the table of contents for the last chapters.
7 Algorithms for Massive Data Problems 238
7.1 Frequency Moments of Data Streams . . . . . . . . . . . . . . . . . . . . . 238
7.1.1 Number of Distinct Elements in a Data Stream . . . . . . . . . . . 239
7.1.2 Counting the Number of Occurrences of a Given Element. . . . . . 243
7.1.3 Counting Frequent Elements . . . . . . . . . . . . . . . . . . . . . . 243
7.1.4 The Second Moment . . . . . . . . . . . . . . . . . . . . . . . . . . 245
7.2 Matrix Algorithms Using Sampling . . . . . . . . . . . . . . . . . . . . . . 248
7.2.1 Matrix Multiplication Using Sampling . . . . . . . . . . . . . . . . 248
7.2.2 Sketch of a Large Matrix . . . . . . . . . . . . . . . . . . . . . . . . 250
7.3 Sketches of Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
7.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
8 Clustering 260
8.1 Some Clustering Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
8.2 A k-means Clustering Algorithm . . . . . . . . . . . . . . . . . . . . . . . 263
8.3 A Greedy Algorithm for k-Center Criterion Clustering . . . . . . . . . . . 265
8.4 Spectral Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
8.5 Recursive Clustering Based on Sparse Cuts . . . . . . . . . . . . . . . . . . 273
8.6 Kernel Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
8.7 Agglomerative Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
8.8 Dense Submatrices and Communities . . . . . . . . . . . . . . . . . . . . . 278
8.9 Flow Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
8.10 Finding a Local Cluster Without Examining the Whole Graph . . . . . . . 284
8.11 Axioms for Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
8.11.1 An Impossibility Result . . . . . . . . . . . . . . . . . . . . . . . . 289
8.11.2 A Satisfiable Set of Axioms . . . . . . . . . . . . . . . . . . . . . 295
8.12 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
9 Topic Models, Hidden Markov Process, Graphical Models, and Belief Propagation 301
9.1 Topic Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
9.2 Hidden Markov Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
9.3 Graphical Models, and Belief Propagation . . . . . . . . . . . . . . . . . . 310
9.4 Bayesian or Belief Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 311
9.5 Markov Random Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
9.6 Factor Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 313
9.7 Tree Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
9.8 Message Passing in general Graphs . . . . . . . . . . . . . . . . . . . . . . 315
9.9 Graphs with a Single Cycle . . . . . . . . . . . . . . . . . . . . . . . . . . 317
9.10 Belief Update in Networks with a Single Loop . . . . . . . . . . . . . . . . 319
9.11 Maximum Weight Matching . . . . . . . . . . . . . . . . . . . . . . . . . . 320
9.12 Warning Propagation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
9.13 Correlation Between Variables . . . . . . . . . . . . . . . . . . . . . . . . . 325
9.14 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330

10 Other Topics 332
10.1 Rankings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
10.2 Hare System for Voting . . . . . . . . . . . . . . . . . . . . . . . . . . 334
10.3 Compressed Sensing and Sparse Vectors . . . . . . . . . . . . . . . . . . 335
10.3.1 Unique Reconstruction of a Sparse Vector . . . . . . . . . . . . . 336
10.3.2 The Exact Reconstruction Property . . . . . . . . . . . . . . . . . 339
10.3.3 Restricted Isometry Property . . . . . . . . . . . . . . . . . . . . 340
10.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
10.4.1 Sparse Vector in Some Coordinate Basis . . . . . . . . . . . . . . . 342
10.4.2 A Representation Cannot be Sparse in Both Time and Frequency Domains . . . 342
10.4.3 Biological . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
10.4.4 Finding Overlapping Cliques or Communities . . . . . . . . . . . . . 345
10.4.5 Low Rank Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 346
10.5 Gradient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
10.6 Linear Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
10.6.1 The Ellipsoid Algorithm . . . . . . . . . . . . . . . . . . . . . . . 350
10.7 Integer Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
10.8 Semi-Definite Programming . . . . . . . . . . . . . . . . . . . . . . . . . 352
10.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
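Section 10.3's "unique reconstruction of a sparse vector" is easy to demo end-to-end. The tiny sketch below (my own illustration, not code from the book) recovers a 3-sparse vector from 20 random linear measurements by solving the basis-pursuit linear program min ||x||_1 s.t. Ax = b:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, k = 40, 20, 3

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = 3 * rng.normal(size=k)   # k-sparse signal
b = A @ x_true                             # m < n linear measurements

# Basis pursuit as an LP: write x = u - v with u, v >= 0 and
# minimize sum(u) + sum(v) subject to A(u - v) = b.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method='highs')
x_hat = res.x[:n] - res.x[n:]

print(np.max(np.abs(x_hat - x_true)))  # near 0 when recovery succeeds
```

With k = 3 nonzeros and m = 20 Gaussian measurements, the l1 solution coincides with the true sparse vector with overwhelming probability, which is exactly the phenomenon chapter 10 formalizes via the exact reconstruction and restricted isometry properties.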



 
