Seminars 2016

Prof. Judith Rousseau
Université Paris-Dauphine
Friday, September 30, 2016
Time 15:15 – Room MA12

Title: On the Bayesian measures of uncertainty in infinite dimensional models

Abstract

In (regular) finite dimensional models, Bayesian and frequentist (likelihood) methods agree asymptotically, both for the determination of point estimators and for measures of uncertainty such as confidence and credible regions. This statement fails to be systematically valid in large or infinite dimensional models. Over the last two decades there have been many advances in the study of frequentist properties of Bayesian approaches in nonparametric or high dimensional models. Following the seminal paper of Ghosal, Ghosh and van der Vaart (2000) on posterior concentration rates, a large literature has developed on posterior concentration rates in various families of sampling and prior models. Recently, more refined properties of the posterior distribution have also been investigated, leading to a better understanding of the frequentist properties of Bayesian measures of uncertainty such as credible regions. In this talk, I will first present the general ideas behind posterior concentration rates. Then I will describe more precisely the recent advances in the understanding of Bayesian measures of uncertainty. I will then concentrate on measures of uncertainty for infinite dimensional parameters and give general conditions on the prior and the model so that Bayesian credible balls have good frequentist properties. Some particular models and priors will also be discussed.
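
For readers unfamiliar with the terminology, the notion of a posterior concentration rate referred to above (in the sense of Ghosal, Ghosh and van der Vaart) can be recalled informally as follows: a sequence $\varepsilon_n \to 0$ is a posterior concentration rate at the true parameter $\theta_0$, with respect to a metric $d$, if

$\Pi\big(\theta : d(\theta,\theta_0) > M \varepsilon_n \mid X^{(n)}\big) \to 0$ in $P_{\theta_0}^{(n)}$-probability

for a sufficiently large constant $M$. The specific conditions on priors and models discussed in the talk are not reproduced here.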

Prof. Alejandro Murua
Université de Montréal
Friday, October 7, 2016
Time 15:15 – Room MA12

Title: Semiparametric Bayesian regression via Potts model

Abstract

We consider Bayesian nonparametric regression through random partition models. Our approach involves the construction of a covariate-dependent prior distribution on partitions of individuals. Our goal is to use covariate information to improve predictive inference. To do so, we propose a prior on partitions based on the Potts clustering model associated with the observed covariates. Covariate proximity thus drives both the formation of clusters and the prior predictive distribution. The resulting prior model is flexible enough to support many different types of likelihood models. We focus the discussion on nonparametric regression. Implementation details are discussed for the specific case of multivariate multiple linear regression. The proposed model performs well in terms of model fitting and prediction when compared to other alternative nonparametric regression approaches. We illustrate the methodology with an application to the health status of nations at the turn of the 21st century.
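
As a rough illustration of how covariate information can enter such a prior (this schematic form is given for exposition only and is not necessarily the exact specification used in the talk), a covariate-dependent Potts prior on cluster labels $z_1,\dots,z_n$ may be written as

$p(z_1,\dots,z_n) \propto \exp\Big( \sum_{i \sim j} \alpha_{ij}\, 1\{z_i = z_j\} \Big),$

where the sum runs over neighbouring pairs of individuals and the interaction weights $\alpha_{ij}$ are chosen to decrease with the covariate distance $\|x_i - x_j\|$, so that individuals with similar covariates are more likely to be clustered together a priori.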

Prof. Nicolai Meinshausen
ETH Zurich
Friday, November 11, 2016
Time 15:15 – Room MA12

Title: Causal discovery with confidence using invariance principles

Abstract

What is interesting about causal inference? One of the most compelling aspects is that any prediction under a causal model is valid in environments that are possibly very different from the environment used for inference. For example, variables can be actively changed and predictions will still be valid and useful. This invariance is very useful but still leaves open the difficult question of inference. We propose to turn this invariance principle around and exploit it for inference. If we observe a system in different environments (or under different but possibly not well specified interventions), we can identify all models that are invariant. We know that any causal model has to be in this subset of invariant models. This allows causal inference with valid confidence intervals. We propose different estimators, depending on the nature of the interventions and on whether hidden variables and feedbacks are present. Some empirical examples demonstrate the power and possible pitfalls of this approach.
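
To fix ideas, in the linear setting the invariance assumption can be written roughly as follows (a simplified formulation for context, not necessarily the exact one used in the talk): there is a set of variables $S^*$ and coefficients $\gamma^*$ such that, for every environment $e$,

$Y^e = X^e_{S^*} \gamma^* + \varepsilon^e, \qquad \varepsilon^e \perp X^e_{S^*},$

with the distribution of $\varepsilon^e$ the same across environments. Searching for all sets $S$ for which such an invariant model is not rejected by the data then yields the confidence statements mentioned above.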

Dr. Anne Van Delft
Maastricht University
Friday, November 18, 2016
Time 15:15 – Room MA12

Title: Locally Stationary Functional Time Series

Abstract

Inference methods for functional data have received a lot of attention in recent years. So far, the literature on functional time series has focused on processes whose probabilistic law is either constant over time or constant up to its second-order structure. Especially for long stretches of data, it is desirable to be able to weaken this assumption. We introduce a framework that allows for meaningful statistical inference for functional data whose dynamics change over time. That is, we put forward the concept of local stationarity in the functional setting and establish a class of processes that have a functional time-varying spectral representation. Time-varying functional ARMA processes are investigated and shown to be functional locally stationary according to the proposed definition. Important in our context is the notion of a time-varying spectral density operator, whose properties are studied and whose uniqueness is derived. The framework is then used to construct an estimator of the spectral density operator based on a functional version of the segmented periodogram matrix. In particular, we prove that it is consistent and study its asymptotic distribution.

Mr. Adrien Hitz
University of Oxford
Friday, December 2, 2016
Time 15:15 – Room MA12

Title: Graphical Modeling of Extremes

Abstract

Multivariate extremes are commonly modeled using a homogeneous joint density. In this talk, I will characterize homogeneous densities that factorize with respect to a graph, i.e., that can be written as a product of low-dimensional functions. Following the ideas of graphical models, we will see how this enables us to infer high-dimensional tail distributions, before illustrating the approach on extreme river flow data.
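
For orientation, the two properties combined in the talk can be recalled as follows (standard definitions, stated here for context): a density $h$ on the positive orthant is homogeneous of order $-\alpha$ if $h(ty) = t^{-\alpha} h(y)$ for all $t > 0$, and it factorizes with respect to a graph $G$ if

$h(y) = \prod_{C \in \mathcal{C}(G)} \psi_C(y_C),$

a product of functions each depending only on the coordinates $y_C$ in a clique $C$ of $G$.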

This talk is based on joint work with Robin Evans and Sebastian Engelke.
References:
A. Hitz and R. Evans (2016). One-Component Regular Variation and Graphical Modeling of Extremes. Journal of Applied Probability.
P. Asadi, A. Davison and S. Engelke (2015). Extremes on River Networks. The Annals of Applied Statistics.

Prof. Charles Taylor
University of Leeds
Friday, December 16, 2016
Time 15:15 – Room MA12

Title: Nonparametric transformations for directional and shape data

Abstract

For i.i.d. data (x_i, y_i), in which both x and y lie on a sphere, we consider flexible (non-rigid) regression models, in which solutions can be obtained at each location of the manifold, with (local) weights which are a function of distance. By considering terms in a series expansion, a “local linear” model is proposed for rotations, and we explore an iterative procedure with connections to boosting. Further extensions to general shape matching are discussed.

Mr. Stefan Wager
Stanford University
Friday, January 29, 2016
Time 14:00 – Room CM5

Title: Statistical Estimation with Random Forests

Abstract

Random forests, introduced by Breiman (2001), are among the most widely used machine learning algorithms today, with applications in fields as varied as ecology, genetics, and remote sensing. Random forests have been found empirically to fit complex interactions in high dimensions, all while remaining strikingly resilient to overfitting. In principle, these qualities also ought to make random forests good statistical estimators. However, our current understanding of the statistics of random forest predictions is not good enough to make random forests usable as a part of a standard applied statistics pipeline: in particular, we lack robust consistency guarantees and asymptotic inferential tools. In this talk, I will present some recent results that seek to overcome these limitations. The first half of the talk develops a Gaussian theory for random forests in low dimensions that allows for valid asymptotic inference, and applies the resulting methodology to the problem of heterogeneous treatment effect estimation. The second half of the talk then considers high-dimensional properties of regression trees and forests in a setting motivated by the work of Berk et al. (2013) on valid post-selection inference: at a high level, we find that the amount by which a random forest can overfit to training data scales only logarithmically in the ambient dimension of the problem. This talk is based on joint work with Susan Athey, Bradley Efron, Trevor Hastie, and Guenther Walther.

Prof. Marloes Maathuis
ETH Zurich
Friday, March 4, 2016
Time 15:15 – Room MA10

Title: High-dimensional consistency in score-based and hybrid structure learning

Abstract

The main approaches for learning Bayesian networks can be classified as constraint-based, score-based or hybrid methods. Although high-dimensional consistency results are available for the constraint-based PC algorithm, such results have been lacking for score-based and hybrid methods, and most hybrid methods have not even been proved consistent in the classical setting where the number of variables remains fixed. We study the score-based Greedy Equivalence Search (GES) algorithm, as well as hybrid algorithms that are based on GES. We show that such hybrid algorithms can be made consistent in the classical setting by using an adaptive restriction on the search space. Moreover, we prove consistency of GES and adaptively restricted GES (ARGES) for certain sparse high-dimensional scenarios. ARGES scales well to large graphs with thousands of variables, and our simulation studies indicate that both ARGES and GES generally outperform the PC algorithm.

Joint work with Preetam Nandy and Alain Hauser.

Prof. Tatyana Krivobokova
University of Göttingen
Friday, March 11, 2016
Time 15:15 – Room MA10

Title: Partial least squares for dependent data

Abstract

We consider the partial least squares algorithm for dependent data realisations. Consequences of ignoring the dependence in the data for the performance of the algorithm are studied theoretically and numerically. It is shown that ignoring non-stationary dependence structures can lead to inconsistent estimation. A simple modification of the algorithm for dependent data is proposed and consistency of the corresponding estimators is shown. A protein dynamics example illustrates the superior predictive power of the method. This is joint work with Marco Singer, Axel Munk and Bert de Groot.
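
For readers who want a concrete reference point, the following is a minimal Python/numpy sketch of the classical PLS1 (NIPALS-style) algorithm for a univariate response. It is the plain algorithm whose behaviour under dependence is analysed in the talk, not the modified version proposed there, and the function name and interface are chosen here purely for illustration.

import numpy as np

def pls1(X, y, n_components):
    # Minimal PLS1 (NIPALS-style) sketch for a univariate response; illustrative only.
    # Centre the data; predictions on new data must use these training means.
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xk = X.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ y                      # weight vector maximising covariance with y
        w = w / np.linalg.norm(w)
        t = Xk @ w                        # scores
        tt = t @ t
        p = Xk.T @ t / tt                 # X loadings
        q.append(y @ t / tt)              # y loading
        Xk = Xk - np.outer(t, p)          # deflate X
        W.append(w)
        P.append(p)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # Regression coefficients in the centred X coordinates.
    return W @ np.linalg.solve(P.T @ W, q)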

Prof. Ingrid van Keilegom
Université Catholique de Louvain
Friday, March 18, 2016
Time 15:15 – Room MA10

Title: Wilks’ Phenomenon in Two-Step Semiparametric Empirical Likelihood Inference

Abstract

In both parametric and certain nonparametric statistical models, the empirical likelihood ratio satisfies a nonparametric version of Wilks’ theorem. For many semiparametric models, however, the commonly used two-step (plug-in) empirical likelihood ratio is not asymptotically distribution-free, that is, Wilks’ phenomenon breaks down. In this work we suggest a general approach to restore Wilks’ phenomenon in two-step semiparametric empirical likelihood inferences. The main insight consists in using as the moment function in the estimating equation the influence function of the plug-in sample moment. The proposed method is general, leads to distribution-free inference and is less sensitive to the first-step estimator than alternative bootstrap methods. Several examples and a simulation study illustrate the generality of the procedure and its good finite sample performance. (Joint work with Francesco Bravo and Juan Carlos Escanciano.)
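
For context, Wilks’ phenomenon in the one-step empirical likelihood setting refers to the classical result of Owen that, under the null hypothesis $H_0: \theta = \theta_0$ and mild moment conditions, the empirical likelihood ratio $R(\theta_0)$ satisfies

$-2 \log R(\theta_0) \xrightarrow{d} \chi^2_p,$

with $p$ the dimension of $\theta$, so that the same calibration can be used regardless of the underlying distribution. It is this distribution-free calibration that breaks down for the plug-in two-step statistic and that the approach described above restores.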

Prof. Konstantinos Fokianos
University of Cyprus
Friday, May 13, 2016
Time 15:15 – Room MA10

Title: Consistent testing for pairwise dependence in time series

Abstract

We consider the problem of testing pairwise dependence for stationary time series. For this, we suggest the use of a Box-Ljung type test statistic which is formed after calculating the distance covariance function among pairs of observations. The distance covariance function is a suitable measure for detecting dependencies between observations as it is based on the distance between the characteristic function of the joint distribution of the random variables and the product of the marginals. We show that, under the null hypothesis of independence and under mild regularity conditions, the test statistic converges to a normal random variable. The results are complemented by several examples.
This is joint work with M. Pitsillou.
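
For reference, the distance covariance between random vectors $X \in \mathbb{R}^p$ and $Y \in \mathbb{R}^q$ can be written, in its standard formulation (recalled here for context, not taken verbatim from the talk), as

$\mathcal{V}^2(X,Y) = \int_{\mathbb{R}^{p+q}} \frac{|\varphi_{X,Y}(t,s) - \varphi_X(t)\,\varphi_Y(s)|^2}{c_p\, c_q\, |t|^{1+p}\, |s|^{1+q}} \, dt \, ds,$

where $\varphi$ denotes a characteristic function and $c_p$, $c_q$ are normalising constants. It vanishes if and only if $X$ and $Y$ are independent, which is what makes it suitable as the basis of an omnibus test of pairwise dependence.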

— THIS SEMINAR IS CANCELLED DUE TO A STRIKE BY AIR-TRAFFIC CONTROLLERS —

Prof. Charles Taylor
University of Leeds
Friday, May 27, 2016
Time 15:15 – Room MA10

Title: Nonparametric transformations for directional and shape data

Abstract

For i.i.d. data (x_i, y_i), in which both x and y lie on a sphere, we consider flexible (non-rigid) regression models, in which solutions can be obtained at each location of the manifold, with (local) weights which are a function of distance. By considering terms in a series expansion, a “local linear” model is proposed for rotations, and we explore an iterative procedure with connections to boosting. Further extensions to general shape matching are discussed.

Prof. Philip B. Stark
University of California, Berkeley
Friday, June 24, 2016
Time 15:15 – Room MA12

Title: Simple Random Sampling: Not So Simple

Abstract

A simple random sample (SRS) of size $k$ from a population of size $n$ is a sample drawn at random in such a way that every subset of $k$ of the $n$ items is equally likely to be selected. The theory of inference from SRSs is fundamental in statistics; many statistical techniques and formulae assume that the data are an SRS. True SRSs are rare; in practice, people tend to draw samples by using pseudo-random number generators (PRNGs) and algorithms that map a set of pseudo-random numbers into a subset of the population. Most statisticians take for granted that the software they use “does the right thing,” producing samples that can be treated as if they are SRSs. In fact, the PRNG algorithm and the algorithm for drawing samples using the PRNG matter enormously. Some widely used methods are particularly bad. They cannot generate all subsets of size $k$; the subsets they do generate may not have equal frequencies; and they are numerically inefficient. Using such methods introduces bias and makes standard uncertainty calculations meaningless.
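
A simple back-of-the-envelope calculation illustrates the counting obstruction mentioned above (the numbers below are an illustrative choice, not taken from the talk): a PRNG whose entire state is determined by a 32-bit seed can follow at most $2^{32}$ distinct output streams, and hence deliver at most $2^{32}$ distinct samples, while the number of possible SRSs of size $k$ from $n$ items is $\binom{n}{k}$.

from math import comb

n, k = 1000, 10
print(comb(n, k))   # number of possible SRSs, about 2.6e23
print(2 ** 32)      # samples reachable from a 32-bit seed, about 4.3e9

So even for a modest population and sample size, such a generator cannot possibly produce all subsets with equal probability.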

Joint work with Kellie Ottoboni, Department of Statistics, University of California, Berkeley.

Prof. Jonathan Rougier
University of Bristol
Friday, July 1st, 2016
Time 16:15 (please note the unusual time!) – Room MA12

Title: Ensemble averaging and mean squared error

Abstract

In fields such as climate science, it is common to compile an ensemble of different simulators for the same underlying process. It is an interesting observation that the ensemble mean often out-performs at least half of the ensemble members in mean squared error (measured with respect to observations). This is despite the fact that the ensemble mean is typically ‘less physical’ than the individual ensemble members (the state space not being convex). In fact, as demonstrated in the most recent IPCC report, the ensemble mean often out-performs all or almost all of the ensemble members. It turns out that this is likely to be a mathematical result based on convexity and asymptotic averaging, rather than a deep insight about climate simulators. I will outline the result and discuss its implications.

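The basic convexity inequality behind this observation is presumably of the following form (a standard consequence of Jensen's inequality, stated here for context): for ensemble members $f_1,\dots,f_m$ regarded as predictions of an observation $y$, with ensemble mean $\bar f = m^{-1}\sum_{i=1}^m f_i$,

$(\bar f - y)^2 \le \frac{1}{m} \sum_{i=1}^m (f_i - y)^2,$

so that, after averaging over observations, the mean squared error of the ensemble mean is never larger than the average mean squared error of the members; the stronger statement that it often beats all or almost all members is the subject of the talk.
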
Prof. Geoffrey G. Decrouez
National Research University, Higher School of Economics, Moscow
Monday, July 18, 2016
Time 15:15 – Room MA10

Title: Finite sample properties of the mean occupancy counts and probabilities

Abstract

For a probability distribution P on an at most countable alphabet, we give finite sample bounds for the expected occupancy counts and probabilities. In particular, both upper and lower bounds are given in terms of the right tail of the counting measure of P. Special attention is given to the case where the right tail is bounded by a regularly varying function. In this case, it is shown that our general results lead to an optimal-rate control of the expected occupancy counts and probabilities with explicit constants. Our results are also put in perspective with Turing’s formula and recent concentration bounds to deduce confidence regions.
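
For context, the quantities in question can be recalled as follows (standard definitions, not the talk's specific bounds): given an i.i.d. sample of size $n$ from $P = (p_1, p_2, \dots)$, the occupancy count $K_{n,r}$ is the number of symbols observed exactly $r$ times, with

$\mathbb{E}[K_{n,r}] = \sum_{j} \binom{n}{r} p_j^{r} (1-p_j)^{n-r},$

and the missing mass $M_{n,0} = \sum_j p_j\, 1\{\text{symbol } j \text{ unobserved}\}$ satisfies $\mathbb{E}[M_{n,0}] = \sum_j p_j (1-p_j)^n$, the quantity estimated by Turing's formula.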