MIA 2012

- Program -

Monday 16 January

08h45-09h00
-
Welcome
09h00-09h45
-
09h45-10h30
-
-
Coffee break
11h00-11h45
-
11h45-12h30
-
-
Lunch break
14h00-14h45
-
14h45-15h30
-
-
Coffee break
16h00-16h45
-
16h45-17h30
-

Tuesday 17 January

09h00-09h45
-
09h45-10h30
-
-
Coffee break
11h00-11h45
-
11h45-12h30
-
-
Lunch break
14h00-14h45
-
14h45-15h30
-
-
Coffee break
16h00-16h45
-
16h45-17h30
-

Wednesday 18 January

09h00-09h45
-
09h45-10h30
-
-
Coffee break
11h00-11h45
-
11h45-12h30
-
-
Lunch break
14h00-14h45
-
14h45-15h30
-
-
Coffee break
16h00-16h45
-
16h45-17h30
-

- Abstracts -

Vicent Caselles (Pompeu Fabra, Spain) - Exemplar-Based Image Inpainting and Applications

Image inpainting consists of recovering the missing or corrupted parts of an image so that the reconstructed image looks natural. The purpose of this talk will be to give an overview of recent techniques in non-local exemplar-based image inpainting and its applications in video and cinema post-production.

Non-local methods for image denoising and inpainting have gained considerable attention in recent years. This is due to their superior performance on textured images, a known weakness of purely local methods. Local methods, on the other hand, have been shown to be very appropriate for recovering geometric structure such as image edges. The synthesis of both types of methods is a trend in current research. Variational analysis in particular is an appropriate tool for a unified treatment of local and non-local methods. We present a general variational framework for the problem of non-local image inpainting, from which some previous inpainting schemes can be derived, in addition to leading to novel ones. We give a statistical mechanics interpretation of the proposed framework.
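
As a rough sketch of such a framework (our notation, not taken verbatim from the references below), one couples the image u with a probabilistic patch correspondence w and minimizes a patch-similarity energy together with an entropy term:

    E(u, w) = \int_O \int_{O^c} w(x, \hat{x}) \, \varepsilon\bigl( p_u(x) - p_u(\hat{x}) \bigr) \, d\hat{x} \, dx
              + T \int_O \int_{O^c} w(x, \hat{x}) \log w(x, \hat{x}) \, d\hat{x} \, dx,

where O is the inpainting domain, p_u(x) a patch of u centred at x, \varepsilon a patch error (e.g., a squared L2 norm), and w(x, .) a probability density over candidate source patches. The temperature T controls how sharply the correspondence concentrates, which is the origin of the statistical mechanics interpretation; alternating minimization in w and u yields NL-means-like weight and image updates.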

We also study the properties of the variational formulation of the Efros-Leung copying scheme.

We show applications of image inpainting to different problems: the interpolation of sparsely sampled images, the replacement of objects in video sequences, and the post-production of depth-enhanced imagery.

References:

[1] P. Arias, V. Caselles, G. Facciolo, and G. Sapiro. Variational Framework for Exemplar-Based Image Inpainting. Preprint, 2010.

[2] P. Arias, V. Caselles, and G. Sapiro. A variational framework for non-local image inpainting. Proc. of the 7th Int. Conf. on Energy Minimization Methods in Computer Vision and Pattern Recognition EMMCVPR, Springer LNCS, Bonn, August 2009.

[3] G. Facciolo, P. Arias, V. Caselles, and G. Sapiro. Exemplar-Based Interpolation of Sparsely Sampled Images. Proc. of the 7th Int. Conf. on Energy Minimization Methods in Computer Vision and Pattern Recognition EMMCVPR, Springer LNCS, Bonn, August 2009.

[4] P. Arias, V. Caselles, and G. Facciolo. Algorithms for a variational framework for non-local image inpainting. Preprint, July 2011.

Volkan Cevher (EPFL, Switzerland) - Hard Thresholding with Norm Constraints

We introduce a new sparse recovery paradigm, where efficient algorithms from combinatorial and convex optimization interface for interpretable and model-based solutions. A highlight is the introduction of a new algorithmic definition of union-of-subspaces models, which we dub the Polynomial time Modular Approximation Property (PMAP). PMAP rigorously identifies tractable models within the generic union-of-subspaces models, extending the reach of model-based sparse recovery to a broader set of applications. Synthetic and real data experiments illustrate that our approach can significantly enhance the performance of both hard thresholding methods and convex solvers in sparse recovery.
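
As a point of reference for the hard thresholding side, the plain iterative hard thresholding (IHT) baseline that such model-based and norm-constrained variants refine can be sketched as follows (illustrative Python; the unit step size is an assumption that holds for suitably normalized matrices):

    import numpy as np

    def hard_threshold(x, s):
        """Keep the s largest-magnitude entries of x, zero out the rest."""
        out = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-s:]
        out[idx] = x[idx]
        return out

    def iht(A, y, s, n_iter=200, step=1.0):
        """Recover an s-sparse x from y = A x via x <- H_s(x + step * A^T (y - A x))."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = hard_threshold(x + step * A.T @ (y - A @ x), s)
        return x

    # Toy example: a 10-sparse vector from 100 Gaussian measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 400)) / np.sqrt(100)
    x_true = np.zeros(400)
    x_true[rng.choice(400, 10, replace=False)] = rng.standard_normal(10)
    print(np.linalg.norm(iht(A, A @ x_true, s=10) - x_true))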

Julie Digne (INRIA Sophia Antipolis, France) - An optimal transport approach to the surface reconstruction problem

We develop a new strategy for reconstructing 3D surfaces based on the optimal transport of sampled surface points to triangulation simplices. The transport problem on the global surface cannot be solved directly; we therefore rely on a discretization and an approximation to explore a subspace of the global problem. This approximation gives good results in practice and scales to larger data sets. We demonstrate the reconstruction on LIDAR data.

Michael Elad (Technion, Israel) - The Analysis Sparse Model - Definition, Pursuit, Dictionary Learning, and Beyond

The synthesis-based sparse representation model for signals has drawn considerable interest in the past decade. Such a model assumes that the signal of interest can be decomposed as a linear combination of a few atoms from a given dictionary. In this talk we concentrate on an alternative, analysis-based model, where an analysis operator -- hereafter referred to as the "Analysis Dictionary" -- multiplies the signal, leading to a sparse outcome. While the two alternative models seem to be very close and similar, they are in fact very different. In this talk we clearly define the analysis model and describe how to generate signals from it. We discuss the pursuit denoising problem that seeks the zeros of the signal with respect to the analysis dictionary given noisy measurements. Finally, we explore ideas for learning the analysis dictionary from a set of signal examples. We demonstrate this model's effectiveness in several experiments, treating synthetic data and real images, and showing a successful and meaningful recovery of the analysis dictionary.
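
To fix notation (standard in this literature), the two models can be contrasted as follows:

    Synthesis:  x = D \alpha  with  \|\alpha\|_0 \le k    (x is built from a few atoms of the dictionary D)
    Analysis:   \ell = p - \|\Omega x\|_0                 (x is orthogonal to \ell of the p rows of \Omega)

In the synthesis model, sparsity counts the atoms used; in the analysis model, the key quantity is the cosparsity \ell, i.e., the number of zeros of \Omega x (the "zeros of the signal" referred to above).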

Yonina Eldar (Technion, Israel) - Xampling at the Rate of Innovation: Correlations, Nonlinearities, and Bounds

We present a framework for sampling a wide class of wideband analog signals with finite rate of innovation, at rates far below the Nyquist rate. We refer to this methodology as Xampling: a combination of compression and sampling, performed simultaneously. Using the Cramer-Rao bound, we show that our sampling scheme is optimal in a mean-squared error sense and improves upon several alternative sampling mechanisms proposed in the literature. Another advantage of our scheme is that it can be easily modified to incorporate correlations between signals. We consider in detail an application of these ideas to ultrasound imaging and demonstrate recovery of noisy ultrasound images from sub-Nyquist samples while performing beamforming in the compressed domain. As another part of our framework, we show how the Xampling ideas can be extended to include nonlinear sampling. We provide a general method that allows recovery of the sub-Nyquist samples in the presence of nonlinearities. We also consider nonlinear sampling of vectors, motivated in particular by problems in optics. We present an algorithm for quadratic compressed sensing which recovers sparse signals from quadratic measurements. These results can be applied in particular to phase recovery from magnitude measurements.
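
For orientation, the prototypical finite-rate-of-innovation signal is a periodic stream of K pulses,

    x(t) = \sum_{k=1}^{K} a_k \, h(t - t_k),

which has only 2K degrees of freedom (amplitudes and delays) per period, so on the order of 2K measurements per period suffice, irrespective of the bandwidth of the pulse h.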

Massimo Fornasier (Technical University of Munich, Germany) - Inverse Free-discontinuity Problems and Iterative Thresholding Algorithms

Free-discontinuity problems describe situations where the solution of interest is defined by a function and a lower dimensional set consisting of the discontinuities of the function. Hence, the derivative of the solution is assumed to be a "small function" almost everywhere except on sets where it concentrates as a singular measure. This is the case, for instance, in certain digital image segmentation problems and brittle fracture models. In the first part of this talk we show some counterexamples to the existence of minimizers of functionals modeling inverse free-discontinuity problems. New preliminary results on the existence of minimizers are then obtained by restricting the solutions to a class of functions with piecewise Lipschitz discontinuity set.

If we discretize such situations for numerical purposes, the inverse free-discontinuity problem in the discrete setting can be re-formulated as that of finding a derivative vector with small components at all but a few entries that exceed a certain threshold. This problem is similar to those encountered in the field of "sparse recovery", where vectors with a small number of dominating components in absolute value are recovered from a few given linear measurements via the minimization of related energy functionals. We first show results on the existence of minimizers in the discrete setting of inverse free-discontinuity problems and then show that their computation is actually NP-hard. With the aim of formulating efficient computational approaches in such a complicated situation, we address iterative thresholding algorithms which intertwine gradient-type iterations with thresholding steps designed to recover sparse solutions. It is natural to wonder how such algorithms can be used towards solving discrete free-discontinuity problems. This talk also explores this connection and, by establishing an iterative thresholding algorithm for discrete inverse free-discontinuity problems, provides new insights on properties of minimizing solutions thereof.
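
For concreteness, the basic iterative soft-thresholding iteration (ISTA) that such schemes intertwine with gradient steps can be sketched as follows (illustrative Python; the free-discontinuity variants replace the l1 shrinkage by a thresholding adapted to the derivative vector):

    import numpy as np

    def soft_threshold(x, lam):
        """Componentwise shrinkage: the proximal map of lam * ||.||_1."""
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def ista(A, y, lam, n_iter=500):
        """Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by gradient + thresholding."""
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
        return x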

Markus Grasmair (Vienna, Austria) - Topological Derivatives for the Minimisation of the Mumford-Shah Functional

We present a method for the numerical approximation of a minimiser of the Mumford-Shah functional that is based on the idea of topological derivatives. To that end, we describe the edge set of the image we want to denoise as the union of a finite number of balls of radius epsilon > 0. Then we can use the number of balls times their diameter as an approximation of the Hausdorff measure of the edge set. We introduce a functional based on this edge descriptor and show that it indeed converges to the Mumford-Shah functional in the sense of Gamma-convergence. In order to minimise the approximating functionals, it is then necessary to find an optimal placement of the balls covering the edge set. To that end, we propose to use a local asymptotic expansion of the approximating functionals with respect to the radius epsilon > 0 of the covering balls. This expansion uses techniques that are common in the theory of topological derivatives.
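
For reference, the functional in question is, in its classical form,

    MS(u, K) = \int_\Omega (u - f)^2 \, dx + \alpha \int_{\Omega \setminus K} |\nabla u|^2 \, dx + \beta \, \mathcal{H}^1(K),

where f is the noisy image on \Omega, u the piecewise smooth approximation and K the edge set; covering K by N balls of radius \varepsilon replaces the Hausdorff term \mathcal{H}^1(K) by roughly 2 \varepsilon N, which is precisely the edge descriptor used above.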

Jelena Kovacevic (Carnegie Mellon University, USA) - Problems in Biomedical Imaging: Opportunities for Mathematical Signal Processing

In recent years, the focus in the biological sciences has shifted from understanding single parts of larger systems (a vertical approach) to understanding complex systems at the cellular and molecular levels (a horizontal approach); hence the revolution of "omics" projects, genomics and now proteomics. Understanding the complexity of biological systems is a task that requires the acquisition, analysis and sharing of huge databases, and in particular, high-dimensional databases. Processing such huge amounts of bioimages visually is inefficient, time-consuming and error-prone for biologists. Therefore, we would like to move towards automated, efficient and robust processing of such bioimage data sets. Moreover, some information hidden in the images may not be readily accessible visually. Thus, sophisticated algorithms not only enable faster and more efficient processing, but also generate new knowledge.

The ultimate dream is to have distributed yet integrated large bioimage databases which would allow researchers to upload their data, have it processed, share the data, and download data as well as platform-optimized code, all in a common format. To achieve this goal, we must draw upon a whole host of sophisticated tools from signal processing, machine learning and scientific computing. I will address some of these issues in this presentation, especially those where signal processing expertise can play a significant role.

Gitta Kutyniok (Technische Universität Berlin, Germany) - Compactly Supported Shearlets: Theory and Applications

Many important problem classes in the applied sciences, in particular in the imaging sciences, are governed by anisotropic features, as assumed in the frequently used cartoon model for images. Shearlet analysis might by now be considered the most versatile and successful methodology to efficiently represent such features, in particular because it allows a unified treatment of the continuum and digital realms. However, although compact support is often required to achieve superior spatial localization, most research has so far focused on band-limited shearlets.
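
Recall (in standard 2D notation) that a shearlet system is generated from a single function \psi by parabolic scaling, shearing and translation:

    \psi_{a,s,t}(x) = a^{-3/4} \, \psi\bigl( A_a^{-1} S_s^{-1} (x - t) \bigr),
    \qquad A_a = \begin{pmatrix} a & 0 \\ 0 & \sqrt{a} \end{pmatrix},
    \qquad S_s = \begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix},

so that the anisotropic scaling A_a captures elongated features while the shear parameter s plays the role of orientation; the constructions discussed below choose \psi compactly supported.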

In this talk, we will discuss the construction of compactly supported shearlet frames for 2D and 3D data, which will also be shown to optimally sparsely approximate anisotropic features. We will then focus on a faithful digitization of the associated shearlet transform and also provide an associated extensive performance testing package for directional transforms. Finally, we will present some theoretical as well as numerical results on the application of shearlets to geometry extraction and inpainting.

Anat Levin (Weizmann Institute, Israel) - Natural Image Denoising: Optimality and Inherent Bounds

The goal of natural image denoising is to estimate a clean version of a given noisy image, utilizing prior knowledge of the statistics of natural images. The problem has been studied intensively, with considerable progress made in recent years. However, it seems that image denoising algorithms are starting to converge, and recent algorithms improve over previous ones by only fractional dB values. It is thus important to understand how much more we can still improve natural image denoising algorithms, and what inherent limits are imposed by the actual statistics of the data. The challenge in evaluating such limits is that constructing proper models of natural image statistics is a long-standing and yet unsolved problem. To overcome the absence of accurate image priors, this work takes a nonparametric approach and represents the distribution of natural images using a huge set of 10^10 patches. We then derive a simple statistical measure which provides a lower bound on the optimal Bayesian minimum mean square error (MMSE). This imposes a limit on the best possible results of denoising algorithms which utilize a fixed support around a denoised pixel and a generic natural image prior. Our findings suggest that for small windows, state-of-the-art denoising algorithms are approaching optimality and cannot be improved by more than about 0.1 dB.
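
In outline (our notation), the bound exploits the fact that for additive Gaussian noise the posterior mean, and hence the MMSE, can be approximated nonparametrically from a large sample {x_i} of clean patches:

    E[x | y] \approx \frac{ \sum_i x_i \, p(y | x_i) }{ \sum_i p(y | x_i) },
    \qquad p(y | x) = \mathcal{N}(y; x, \sigma^2 I),

so the best attainable mean square error of any denoiser with the given patch support can be estimated directly from data, without committing to an explicit image prior.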

Peyman Milanfar (UCSC, USA) - A Tour of Modern "Image Processing"

Recent developments in computational imaging and restoration have heralded the arrival and convergence of several powerful methods for adaptive processing of multidimensional data. Examples include Moving Least Squares (from Graphics), the Bilateral Filter and Anisotropic Diffusion (from Vision), Boosting and Spectral Methods (from Machine Learning), Non-local Means and Bregman Iterations (from Signal Processing and Applied Math), and Kernel Regression and Iterative Scaling (from Statistics). While these approaches found their inspiration in diverse fields, they are deeply connected. In this talk, I will present a practical and unified framework for understanding some common underpinnings of these methods. This leads to new insights and a broad understanding of how these diverse methods interrelate. I will also discuss several applications, and the statistical performance of the resulting algorithms. Finally, I will briefly illustrate connections between these techniques and empirical Bayesian approaches.
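
A common thread, paraphrased here in our notation, is that all of these filters apply one data-dependent, row-normalized linear operator to the measurements:

    \hat{x} = W y, \qquad W = D^{-1} K, \qquad [K]_{ij} = k(x_i, x_j, y_i, y_j), \qquad D = \mathrm{diag}(K \mathbf{1}),

where the kernel k measures joint spatial and photometric similarity (the bilateral filter, NLM and kernel regression are different choices of k), and boosting or diffusion iterations correspond to polynomials in W.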

Jean-Marie Mirebeau (CNRS, France) - A Fast Marching Algorithm tailored for large Anisotropies

The Fast Marching Algorithm efficiently computes the Riemannian distance, and the associated geodesics, between points of a domain. In the context of medical imaging, and using well-chosen Riemannian metrics, the computation of such geodesics makes it possible to highlight and segment biological structures. Other applications of the fast marching algorithm to image processing range from shape classification to mesh generation.

The use of strongly anisotropic Riemannian metrics, which often arise in image processing, challenges current fast marching algorithms. We present a modification of this algorithm, on Cartesian grids, based on reduced lattice bases, which handles large or extreme anisotropies at a reduced numerical cost. We also present some applications.
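
In this setting (standard formulation), the distance map u from a set of seed points solves the anisotropic eikonal equation

    \sqrt{ \nabla u(x)^T M(x)^{-1} \nabla u(x) } = 1, \qquad u = 0 \text{ on the seeds},

where M(x) is the Riemannian metric tensor; large anisotropy ratios of M are what strain the stencils of classical fast marching, and the reduced lattice bases mentioned above serve to build stencils adapted to M at each grid point. Geodesics are then obtained by gradient descent on u.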

Thomas Pock (Graz, Austria) - Convex relaxation of a class of vertex penalizing functionals

We investigate a class of variational problems that incorporate, in some sense, curvature information of the level lines. The functionals we consider incorporate metrics defined on the orientations of pairs of line segments that meet in the vertices of the level lines. We discuss two particular instances: one instance that minimizes the total number of vertices of the level lines, and another that minimizes the total sum of the absolute exterior angles between the line segments. In the case of smooth level lines, the latter corresponds to the total absolute curvature. We show that these problems can be solved approximately by means of a tractable convex relaxation in higher dimensions. In our numerical experiments we present preliminary results for image segmentation, image denoising and image inpainting. (Joint work with Kristian Bredies, University of Graz)

Konrad Polthier (Berlin, Germany) - CubeCover - Cubical grids for bounded volumes

We discuss novel techniques to fill a bounded volumetric shape with a (preferably coarse) cubical voxel structure. Among the optimization goals are alignment of the voxels with the bounding surface as well as simplicity of the voxel grid. Mathematical analysis of the possible singularities is given.

The algorithm uses a tetrahedral volume mesh plus a user-given guiding frame field as input. It then constructs an atlas of chart functions, i.e., the parameterization function of the volume, such that the images of the coordinate lines align with the given frame field. Formally, the function is given by a first-order PDE, namely that the gradients of the coordinate functions are the vectors of the frame. In a first step, the algorithm uses a discrete Hodge decomposition to ensure local integrability of the frame field. A subsequent step ensures global integrability along generators of the first homology group and aligns a face of the boundary cube with the original surface boundary. All steps can be merged into solving linear equations.

Conceptually, the presented CubeCover algorithm extends the known QuadCover algorithm from surface meshes to volume meshes.

Nelly Pustelnik (CNRS, France) - On l1/l0-equivalence in a deterministic context. Application to limited view angle tomography

The main challenge of limited view angle tomography is to reduce the data acquisition time. This problem concerns various application fields, ranging from medical applications, where the exposure time for the patient should be minimal, to non-destructive inspection in manufacturing processes. Reducing the number of view angles leads to an ill-posed problem that can be solved by regularization techniques. When data are known to be intrinsically sparse, or sparse after an orthonormal transform, it is relevant to deal with l1-minimization (i.e., the convex relaxation of l0-minimization). The l1/l0-equivalence, also called identifiability, ensures exact recovery of sufficiently sparse signals by solving the convex l1-minimization. For random system matrices, a theoretical relationship between the number of observations, the size of the signal, and its sparsity degree s ensures the identifiability of every s-sparse signal. However, such a closed expression is difficult to obtain for deterministic matrices.
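
In symbols, with system matrix A and data y = Ax, the two problems are

    (P0):  minimize \|x\|_0  subject to  A x = y,
    (P1):  minimize \|x\|_1  subject to  A x = y,

and l1/l0-equivalence at sparsity s means that every x with at most s nonzero entries is the unique solution of (P1) when y = Ax, so the intractable (P0) can be replaced by the convex (P1).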

In the context of l1/l0-equivalence, we propose an algorithm to extract non-identifiable vectors for a given deterministic matrix. This algorithm relies on a specific measure, denoted Dx, related to the polytope associated with the system matrix. Our proposal is based on an adaptation of the algorithm of Dossal et al. [1], initially designed for random matrices.

Experimental results illustrate the improvement of the proposed algorithm over the version devoted to the random case. For a given deterministic matrix, the proposed algorithm allows us to specify the largest sparsity degree s that leads to the reconstruction of every s-sparse vector. Moreover, we highlight the relevance of Dx, beyond the sparsity degree alone, for characterizing identifiability.

(in collaboration with F. Turcu, C. Dossal, and Y. Berthoumieu)

References:
[1] C. Dossal, G. Peyre, and J. Fadili, "A Numerical Exploration of Compressed Sampling Recovery," Linear Algebra and Appl., vol. 432, no. 7, pp. 1663-1679, 2010.

Justin Romberg (Georgia Tech, USA) - Architectures for compressive sampling of correlated signals

We will discuss several ways in which recent results on the recovery of low-rank matrices from partial observations can be applied to the problem of sampling ensembles of correlated signals. We will present several architectures that use simple analog building blocks (vector-matrix multiply, modulators, filters, and ADCs) to implement different types of measurement schemes with "structured randomness". These sampling schemes allow us to take advantage of the (a priori unknown) correlation structure of the ensemble by reducing the total number of observations required to reconstruct the collection of signals. We will discuss scenarios that use an ADC for every channel, and those which multiplex the channels onto a single line which is sampled with a single ADC.
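
Stated in standard form, the underlying recovery problem is this: if the ensemble is stacked as a matrix X (one signal per row) whose correlation structure makes it approximately low rank, one solves the convex surrogate

    minimize \|X\|_*  subject to  \mathcal{A}(X) = b,

where \|X\|_* is the nuclear norm (the sum of the singular values) and \mathcal{A} is the linear measurement map implemented by the analog front end; the number of observations then scales with the rank of X rather than with its ambient dimension.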

Martin Rumpf (Bonn, Germany) - Discrete geodesic calculus in shape space

Based on a local approximation of the Riemannian distance on a manifold by a computationally cheap dissimilarity measure, a time-discrete geodesic calculus is developed and applications to shape space are explored. Thereby, the dissimilarity measure is derived from a spring-type energy whose Hessian reproduces the underlying Riemannian metric. Using this measure to define length and energy of discrete paths on the manifold, a discrete analog of classical geodesic calculus can be developed. The notion of discrete geodesics, defined as energy minimizing paths, gives rise to discrete logarithmic and exponential maps and makes it possible to introduce a time-discrete parallel transport as well. The relation of this time-discrete calculus to the actual, time-continuous Riemannian calculus is explored, and the new concept is applied to a shape space in which shapes are considered as boundary contours of physical objects consisting of viscous material. The flexibility and computational efficiency of the approach are demonstrated for topology-preserving morphing, the interplay of paths in shape space and local shape variations as associated generators, the extrapolation of paths, and the transfer of geometric features.
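
In outline (our notation, following the construction just described), a discrete path (s_0, ..., s_K) in shape space is assigned the energy

    E[s_0, \dots, s_K] = K \sum_{k=1}^{K} W[s_{k-1}, s_k],

where W is the dissimilarity measure, consistent with the squared Riemannian distance for nearby shapes; discrete geodesics are minimizers of E with fixed endpoints, and the discrete logarithm, exponential and parallel transport are built on top of them.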

Guillermo Sapiro (Minnesota, USA) - Collaborative signal modeling and analysis

In this talk I will describe how collaborative signal modeling and processing can render very unstable and difficult problems stable and much simpler. I will present frameworks and applications ranging from activity recognition in video, to audio separation and reconstruction, to the analysis of HIV and other viruses.

Stefano Soatto (UCLA, USA) - Perception, Action and the Information Knot that Ties Them

I will describe a notion of Information for the purpose of decision and control tasks, as opposed to the data transmission and storage tasks implicit in Communication Theory. It is rooted in ideas of J. J. Gibson, and is specific to classes of tasks and nuisance factors affecting the data formation process. When such nuisances involve scaling and occlusion phenomena, as in most imaging modalities, the "Information Gap" between the maximal invariants and the minimal sufficient statistics can only be closed by exercising control on the sensing process. Thus, sensing, control and information are inextricably tied. This has consequences for understanding the so-called "signal-to-symbol barrier" problem, as well as for the analysis and design of active sensing systems. I will show applications in vision-based control, navigation, 3-D reconstruction and rendering, as well as detection, localization, recognition and categorization of objects and scenes in live video.

Carola-Bibiane Schönlieb (Cambridge, UK) - Higher-Order PDE Techniques for Image Restoration

Restoring the original image contents from distorted measurements is one of the most important tasks in image processing. It comprises the enhancement and reconstruction of images distorted by noise or blur (image denoising/deblurring) and the filling-in of gaps in images (image inpainting). Among the various standard methodologies for the solution of these tasks, partial differential equations (PDEs) constitute a rich toolbox of restoration approaches. These techniques are interesting from an applied viewpoint - because they are able to produce qualitatively good visual results and can be captured within automatable processing algorithms - but also from a mathematical analysis point of view - because they exhibit some beautiful mathematical concepts and pose interesting analytical problems.

In this presentation we shall concentrate on a specific class of PDE techniques, namely higher-order approaches, i.e., third- and fourth-order PDEs. After spending some time introducing the concept of such methods and giving a historical overview of some important contributions in this area, we will get to know some recently proposed higher-order PDEs, their mathematical properties and applications. The presentation will be illustrated with various numerical examples and applications in image restoration.
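
One prominent example of this class, stated here for orientation (conventions for the epsilon-scaling vary), is Cahn-Hilliard inpainting, a fourth-order equation for a binary image f with inpainting domain D:

    u_t = \Delta \Bigl( \frac{1}{\varepsilon} W'(u) - \varepsilon \Delta u \Bigr) + \lambda(x) (f - u),
    \qquad \lambda(x) = \lambda_0 \, \mathbf{1}_{\Omega \setminus D}(x),

where W is a double-well potential; the fidelity term keeps u close to f outside D, while the Cahn-Hilliard dynamics continues edges smoothly across the gap.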

Gabriele Steidl (Kaiserslautern, Germany) - Shearlets and Function Spaces

In the context of directional signal analysis, several approaches have been suggested, such as ridgelets, curvelets, contourlets, shearlets and many others. Among all these approaches, the shearlet transform is outstanding because it is related to group theory, i.e., this transform can be derived from a square-integrable representation of the so-called shearlet group. Therefore, in the context of the shearlet transform, all the powerful tools of group representation theory can be exploited. Moreover, there is a very natural link to another useful concept, namely the coorbit space theory introduced by Feichtinger and Gröchenig in a series of papers. By means of coorbit space theory, it is possible to derive in a very natural way scales of smoothness spaces associated with the group representation. In this setting, the smoothness of functions is measured by the decay of the associated shearlet transform. Moreover, by a tricky discretization of the representation, it is possible to obtain Banach frames for these smoothness spaces.

Once these new shearlet smoothness spaces are established, some natural questions arise.

  • What do these spaces really look like?
  • Are there "nice" sets of functions that are dense in these spaces?
  • What are the relations to classical smoothness spaces such as Besov spaces?
  • What are the associated trace spaces?

We show that for natural subclasses of shearlet coorbit spaces which correspond to "shearlets on the cone", there exist embeddings into homogeneous Besov spaces and that for the same subclasses, the traces onto the coordinate axis can again be identified with homogeneous Besov spaces or shearlet coorbit spaces. To prove our results we use compactly supported shearlets and apply atomic characterizations of Besov spaces as well as molecular descriptions of coorbit spaces.

This is joint work with Stephan Dahlke (University of Marburg), Soren Hauser (University of Kaiserslautern), and Gerd Teschke (University of Neubrandenburg).

Michael Unser (EPFL, Switzerland) - Towards a theory of sparse stochastic processes, or when Paul Levy joins forces with Norbert Wiener

The current formulations of compressed sensing and sparse signal recovery are based on solid variational principles, but they are fundamentally deterministic. By drawing on the analogy with the classical theory of signal processing, it is likely that further progress may be achieved by adopting a statistical (or estimation theoretic) point of view. Here, we shall argue that Paul Levy (1886-1971), who was one of the very early proponents of Haar wavelets, was ahead of his time, once more. He is the originator of the Levy-Khinchine formula, which happens to be the perfect (non-Gaussian) ingredient to support a continuous-domain theory of sparse stochastic processes.
Specifically, we shall present an extended class of signal models that are ruled by stochastic differential equations (SDEs) driven by white Levy noise. Levy noise is a highly singular mathematical entity that can be interpreted as the weak derivative of a Levy process. A special case is Gaussian white noise, which is the weak derivative of the Wiener process (a.k.a. Brownian motion). When the excitation (or innovation) is Gaussian, the proposed model is equivalent to the traditional one. Of special interest is the property that the signals generated by non-Gaussian linear SDEs tend to be sparse by construction; they also admit a concise representation in some adapted wavelet basis. Moreover, these processes can be (approximately) decoupled by applying a discrete version of the whitening operator (e.g., a finite-difference operator). The corresponding log-likelihood functions, which are nonquadratic, can be specified analytically. In particular, this allows us to uncover a Levy process that results in a maximum a posteriori (MAP) estimator equivalent to total variation. We make the connection with current methods for the recovery of sparse signals and present some examples of MAP reconstruction of MR images with sparse priors.
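
Schematically (our notation), the innovation model and the resulting estimator read

    L s = w   (innovation model: whitening operator L, white Levy innovation w),
    \hat{s} = \arg\min_s \; \frac{1}{2\sigma^2} \|y - H s\|_2^2 + \sum_n \Phi\bigl( [L_d s]_n \bigr),

where L_d is a discrete version of L (e.g., a finite-difference operator) and the potential \Phi is the negative log-likelihood of the decoupled innovation. A Laplace innovation gives \Phi(t) = \lambda |t|, which for first-order differences is exactly the total-variation estimator mentioned above.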

References:

[1] M. Unser, P. Tafti, and Q. Sun, "A unified formulation of Gaussian vs. sparse stochastic processes-Part I: Continuous-domain theory," preprint, http://arxiv.org/abs/1108.6150.

[2] M. Unser, P.D. Tafti, "Stochastic Models for Sparse and Piecewise-Smooth Signals," IEEE Transactions on Signal Processing, vol. 59, no. 3, pp. 989-1006, March 2011.

Martin Vetterli (EPFL, Switzerland) - Sparse Sampling: Variations on a Theme

Sampling is a central topic not just in signal processing and communications, but in all fields where the world is analog, but computation is digital. The question is simple: When does a countable set of measurements allow a perfect and stable representation of a class of signals?

Classic results are on bandlimited functions and shift-invariant subspaces, and use linear approximation. Recently, non-linear methods have appeared, based on parametric methods or convex relaxation, which allow a broader class of sampling results. We review sampling of finite rate of innovation (FRI) signals, which are non-bandlimited continuous-time signals with a finite parametric representation. This leads to sharp results on sampling and reconstruction of such sparse continuous-time signals. We then explore performance bounds on retrieving sparse continuous-time signals buried in noise. While this is a classic estimation problem, we show sharper lower bounds for simple cases, indicating that (i) there is a phase transition and (ii) current algorithms are close to the bounds.

We then turn our attention to sampling problems where physics plays a central role. After all, many signals to be sensed are the result of some PDE. In these cases, continuous-time or continuous-space modeling is advantageous. First, we consider the wave equation, and review the fact that wave fields are essentially bandlimited in the space-time domain. This can be used for sampling in acquisition or rendering. We also show an acoustic source localization problem, where wideband frequency probing and finite element modeling show interesting localization power. Then, in a diffusion equation scenario, source localization using a sensor network can be addressed with a parametric approach, indicating trade-offs between spatial and temporal sampling densities. This can be used in air pollution monitoring. We also briefly discuss some ongoing work on estimating radioactive fields based on citizen sensing, indicating critical issues in modeling, denoising, outlier detection, and tampering.

We conclude by casting sampling within physical problems where sparsity or low-order modeling makes it possible to improve acquisition or regularize inverse problems. The computational tools like FRI or CS then come in handy when the modeling and the conditioning are adequate. Last but not least, the proof of the pudding is in experiments and/or real data sets.

Joint work with T. Blu (CUHK), Y. Lu (Harvard), Y. Barbotin, I. Dokmanic, A. Hormati, M. Kolundzija, M. Martinez-Camara, J. Ranieri (EPFL).
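
As a toy illustration of the FRI machinery reviewed above (illustrative Python; practical pipelines add a denoising step such as Cadzow iterations before the annihilation), K Dirac locations can be recovered from 2K+1 Fourier coefficients with an annihilating filter:

    import numpy as np

    K, tau = 3, 1.0
    t_true = np.array([0.12, 0.47, 0.81])   # unknown locations
    a_true = np.array([1.0, -0.5, 2.0])     # unknown amplitudes

    # 2K+1 Fourier coefficients of the periodic Dirac stream.
    m = np.arange(-K, K + 1)
    X = (a_true[None, :] * np.exp(-2j * np.pi * np.outer(m, t_true) / tau)).sum(axis=1)

    # The annihilating filter h satisfies (h * X)[m] = 0; find it as the
    # null vector of a Toeplitz system built from the coefficients.
    T = np.array([[X[K + i - j] for j in range(K + 1)] for i in range(K)])
    h = np.linalg.svd(T)[2][-1].conj()

    # The roots of h lie on the unit circle and encode the locations.
    u = np.roots(h)
    t_hat = np.sort((-np.angle(u) * tau / (2 * np.pi)) % tau)
    print(t_hat)   # approximately [0.12, 0.47, 0.81]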

Rebecca Willett (Duke, USA) - Oracle inequalities and minimax rates for non-local means

In this talk I describe a novel theoretical characterization of the performance of non-local means (NLM) for noise removal. NLM has proven effective in a variety of empirical studies, but little is understood fundamentally about how it performs relative to classical methods based on wavelets or how various parameters (e.g., patch size) should be chosen. For cartoon images and images which may contain thin features and regular textures, the error decay rates of NLM are derived and compared with those of linear filtering, oracle estimators, variable-bandwidth kernel methods, Yaroslavsky's filter and wavelet thresholding estimators. The trade-off between global and local search for matching patches is examined, and the bias reduction associated with the local polynomial regression version of NLM is analyzed. The theoretical results are validated via simulations for 2D images corrupted by additive white Gaussian noise. This is joint work with Ery Arias-Castro and Joseph Salmon.
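
To fix ideas, a bare-bones NLM denoiser looks as follows (illustrative Python; the patch radius f, search radius r and bandwidth h are exactly the tuning parameters whose roles the analysis quantifies, with r controlling the local-versus-global search trade-off examined in the talk):

    import numpy as np

    def nlm(y, f=3, r=10, h=0.6):
        """Non-local means for a single-channel image y (2D float array)."""
        H, W = y.shape
        pad = np.pad(y, f, mode="reflect")
        out = np.zeros_like(y)
        for i in range(H):
            for j in range(W):
                p = pad[i:i + 2*f + 1, j:j + 2*f + 1]   # patch around (i, j)
                num = den = 0.0
                # Restrict matching to a (2r+1)^2 search window.
                for k in range(max(0, i - r), min(H, i + r + 1)):
                    for l in range(max(0, j - r), min(W, j + r + 1)):
                        q = pad[k:k + 2*f + 1, l:l + 2*f + 1]
                        w = np.exp(-np.sum((p - q) ** 2) / h**2)
                        num += w * y[k, l]
                        den += w
                out[i, j] = num / den
        return out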