Seminars 2010

Ms. Miranda Holmes-Cerfon
Courant Institute, New York University
January 22, 2010

Particle Dispersion by Random Waves

Abstract

The ocean’s interior is filled with fast, small-scale motions called internal waves. These waves are too small to be resolved by numerical models, so it is important to know in what ways they affect the larger-scale circulation. In this talk we identify one possible mechanism: that they are important for mixing the ocean. We model the waves as a stationary Gaussian random wave field, and show that even though wave motion is periodic, a wave field can disperse particles horizontally in a diffusive manner because of two effects: (i) the correlation between a particle’s displacement and the velocity gradient, an effect called the “Stokes drift”, and (ii) nonlinear corrections to the wave field which are required to make the velocity field dynamically consistent. These effects are strongly correlated with each other, and we compute them using an estimate of the spectrum of internal waves in the ocean. Then, we will show how breaking waves can lead to particle diffusion in the vertical, using a theory for the asymptotic distribution of excursion sets of a functional of the random wave field.
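
As an illustrative aside (not the talk's model), the diffusive-dispersion question can be probed with a toy Monte Carlo: advect particles in a single realization of a one-dimensional random wave field and track the mean-square displacement about the mean (Stokes) drift. All parameters below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_particles = 64, 500
k = rng.uniform(1.0, 10.0, n_modes)            # wavenumbers
omega = np.sqrt(k)                             # a deep-water-like dispersion relation
amp = 0.1 / np.sqrt(n_modes)                   # small wave amplitudes
phase = rng.uniform(0, 2 * np.pi, n_modes)     # random phases: one field realization

def velocity(x, t):
    """Wave-induced velocity at particle positions x and time t."""
    return (amp * np.cos(np.outer(x, k) - omega * t + phase)).sum(axis=1)

x = rng.uniform(0, 2 * np.pi, n_particles)     # scattered initial positions
dt, n_steps = 0.05, 2000
msd = np.empty(n_steps)
for i in range(n_steps):
    x += velocity(x, i * dt) * dt              # forward Euler advection
    msd[i] = np.var(x)                         # spread about the mean drift

# diffusive dispersion would show msd growing roughly linearly at long times
print(np.round(msd[::400], 4))
```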
Prof. Valerie Isham
University College London
February 19, 2010

Rumours and Epidemics on Random Networks

Abstract

The Susceptible-Infected-Removed (SIR) epidemic model is a fundamental model for the spread of infection in a homogeneously-mixing population. It is a special case of a more general stochastic rumour model in which there is an extra interaction. Thus, not only does an ignorant (susceptible) contacted by a spreader (infective) become a spreader, and not only may spreaders “forget” the rumour and become stiflers (removals), but spreaders may also become stiflers if they attempt to spread the rumour to a spreader or stifler (who will have already heard it). For both epidemics and rumours, there is particular interest in using a random network to represent population structure, with applications to the spread of infection or information on social networks. The talk will discuss a) the effect of the population size on thresholds for epidemic/rumour spread, and b) the effect of different network structures.
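
For illustration (not code from the talk), a minimal simulation of this rumour dynamic on an Erdős-Rényi random network; this stripped-down Maki-Thompson-type variant omits the spontaneous “forgetting” transition.

```python
import random

def simulate_rumour(n=1000, p=0.01, seed=1):
    random.seed(seed)
    nbrs = {i: [] for i in range(n)}
    for i in range(n):                        # Erdos-Renyi G(n, p) network
        for j in range(i + 1, n):
            if random.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    state = ["ignorant"] * n                  # ignorant / spreader / stifler
    state[0] = "spreader"
    spreaders = {0}
    while spreaders:
        s = random.choice(tuple(spreaders))
        if not nbrs[s]:                       # isolated spreader can do nothing
            state[s] = "stifler"
            spreaders.discard(s)
            continue
        t = random.choice(nbrs[s])            # contact a uniformly random neighbour
        if state[t] == "ignorant":
            state[t] = "spreader"             # ignorant hears the rumour
            spreaders.add(t)
        else:
            state[s] = "stifler"              # contact already knew it: s stifles
            spreaders.discard(s)
    return sum(st != "ignorant" for st in state) / n

print("final proportion who heard the rumour:", simulate_rumour())
```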
Prof. Jesper Møller
Aalborg University
March 4, 2010

Transforming Spatial Point Processes into Poisson Processes Using Random Superposition

Abstract

Most finite spatial point process models specified by a density are locally stable, implying that the Papangelou conditional intensity is bounded by some integrable function $\beta$ defined on the space for the points of the process. We show how to superpose such a locally stable spatial point process with a complementary spatial point process to obtain a Poisson process with intensity function $\beta$, and introduce a fast and easy simulation procedure for the complementary process. This may be used for model checking: given a model for the Papangelou intensity of the original spatial point process, this model is used to generate the complementary process, and the resulting superposition is a Poisson process with intensity function $\beta$ if and only if the true Papangelou conditional intensity is used. Whether the superposition is actually such a Poisson process can be examined using well known results and fast simulation procedures for Poisson processes. This part of the talk is based on the paper Møller and Berthelsen (2010). If time allows, we also discuss the rarer case where the Papangelou conditional intensity is bounded from below by a strictly positive and integrable function $\alpha$, and where a random thinning procedure is used to obtain a Poisson process with intensity function $\alpha$ (Møller and Schoenberg, 2010).

References
J. Møller and K.K. Berthelsen (2010). Transforming spatial point processes into Poisson processes using random superposition. In preparation.
J. Møller and F.P. Schoenberg (2010). Thinning spatial point processes into Poisson processes. To appear in Advances in Applied Probability.
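
For intuition only, the thinning direction can be illustrated in a trivial special case: when $X$ is itself an inhomogeneous Poisson process, its Papangelou conditional intensity reduces to its intensity function $\lambda(u)$, and independent thinning with retention probability $\alpha(u)/\lambda(u)$ yields a Poisson process with intensity $\alpha$. The papers' algorithms handle general locally stable processes; the sketch below does not.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = lambda u: 100.0 + 80.0 * np.sin(2 * np.pi * u)   # intensity of X on [0, 1]
alpha = lambda u: 15.0                                 # 0 < alpha(u) <= lam(u) everywhere

# simulate X by thinning a dominating homogeneous Poisson(180) process
n = rng.poisson(180.0)
u = rng.uniform(0.0, 1.0, n)
x = u[rng.uniform(0.0, 1.0, n) < lam(u) / 180.0]

# thin X down to the target intensity: keep each point u with probability alpha(u)/lam(u)
kept = x[rng.uniform(0.0, 1.0, x.size) < alpha(x) / lam(x)]
print("points kept:", kept.size, "(a Poisson(15) count on average)")
```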
Prof. Richard Huggins
University of Melbourne
March 11, 2010

A Measurement Error Model for Heterogeneous Capture Probabilities in Mark-Recapture Experiments

Abstract

Mark-recapture experiments are used to estimate the size of animal populations. Logistic models for capture probabilities that depend on covariates are effective if the covariates can be measured exactly. If there is measurement error, so that a surrogate for the covariate is observed rather than the covariate itself, simple adjustments may be made if the parameters of the joint distribution of the covariate and the surrogate are known. Here we consider the case when a surrogate is observed whenever an individual is captured and the parameters must also be estimated from the data. A regression calibration approach is developed and illustrated on a data set where the surrogate is an individual bird’s wing length.
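
A hedged sketch of this setting (all parameters invented for this note): capture probabilities follow a logistic model in a covariate, only a noisy surrogate is observed, regression calibration substitutes $E[x \mid w]$, and the population size is recovered with a Horvitz-Thompson-type estimator. For brevity the fit below ignores the conditioning-on-capture bias that a full conditional-likelihood treatment corrects.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
N, tau = 1000, 5                               # true population size, capture occasions
x = rng.normal(0.0, 1.0, N)                    # true covariate
w = x + rng.normal(0.0, 0.5, N)                # observed surrogate (classical error)
p = 1.0 / (1.0 + np.exp(-(-1.0 + x)))          # per-occasion capture probability
caps = rng.binomial(tau, p)                    # captures per individual
seen = caps > 0                                # only these individuals enter the data

# regression calibration: E[x | w] = mu + var_x / (var_x + var_u) * (w - mu)
mu, var_x, var_u = 0.0, 1.0, 0.25              # treated as known here for brevity
x_cal = mu + var_x / (var_x + var_u) * (w - mu)

# logistic fit from the captured individuals' binomial capture histories
X = sm.add_constant(x_cal[seen])
endog = np.column_stack([caps[seen], tau - caps[seen]])
p_hat = sm.GLM(endog, X, family=sm.families.Binomial()).fit().predict(X)

# Horvitz-Thompson-type estimate: sum over captured of 1 / P(captured at least once)
N_hat = np.sum(1.0 / (1.0 - (1.0 - p_hat) ** tau))
print("estimated population size:", round(N_hat))
```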
Prof. Julie Forman
University of Copenhagen
March 18, 2010

On the Statistical Analysis of some Tractable Diffusion-Type Models: Simple Models for Complex Phenomena

Abstract

Diffusion models provide a natural and flexible framework for modeling phenomena that evolve continuously and randomly with time. Unfortunately their statistical analysis is difficult, since the transition densities have closed-form expressions in only a few diffusion models. In this talk I will consider the Pearson diffusions (Forman & Sørensen, 2008), a class of simple diffusions that are highly tractable. I will demonstrate how these processes can be transformed in order to model complex phenomena, such as stochastic volatility and protein folding, while at the same time retaining much of their analytical tractability.
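
As a concrete illustration, the Ornstein-Uhlenbeck process, the simplest Pearson diffusion, has a Gaussian transition density, so it can be simulated exactly and its likelihood written in closed form. A minimal sketch:

```python
# Exact transition of the Ornstein-Uhlenbeck process dX = -theta (X - mu) dt + sigma dW:
#   X_{t+d} | X_t = x ~ N( mu + (x - mu) e^{-theta d}, sigma^2 (1 - e^{-2 theta d}) / (2 theta) )
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n, seed=0):
    rng = np.random.default_rng(seed)
    a = np.exp(-theta * dt)
    sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = mu + (x[i - 1] - mu) * a + sd * rng.standard_normal()
    return x

path = simulate_ou(theta=2.0, mu=0.0, sigma=1.0, x0=1.0, dt=0.1, n=1000)
# stationary law is N(mu, sigma^2 / (2 theta)) = N(0, 0.25) here
print("sample mean and variance:", path.mean().round(3), path.var().round(3))
```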

Joint work with Michael Sørensen and Jesper Pedersen, Department of Mathematical Science, University of Copenhagen.
Dr. Sylvain Sardy
Université de Genève
April 16, 2010

Smooth James-Stein Variable Selection

Abstract

The smooth James-Stein thresholding function links and extends the thresholding functions employed by the James-Stein estimator, the block- and adaptive-lasso, and the soft-, hard- and block-thresholding in wavelet smoothing. It can be employed blockwise or for a single parameter. The smooth James-Stein estimator is indexed by two hyperparameters and a smoothness parameter. For the selection of the hyperparameters, we propose two alternatives: minimize the bivariate Stein unbiased risk estimate (SURE), or minimize an information criterion. The third parameter induces smoothness in the estimator and, incidentally, in its bivariate SURE function, for a better selection of the two hyperparameters. For blocks of a fixed size, we derive an oracle inequality for block thresholding of repeated measurements. We use the smooth James-Stein thresholding and SURE for direct block sequence estimation. We also show, in the classical regression setting, how to define a smooth James-Stein estimator and derive its SURE in more complex settings. In particular we obtain the equivalent degrees of freedom of the adaptive lasso.
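
For reference, the classical thresholding rules that the smooth James-Stein function links can be written in a few lines; Sardy's smooth two-hyperparameter family itself is not reproduced here.

```python
import numpy as np

def soft(x, lam):
    """Soft thresholding, as in the lasso / wavelet shrinkage."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard(x, lam):
    """Hard thresholding: keep-or-kill."""
    return x * (np.abs(x) > lam)

def block_james_stein(x, lam):
    """Positive-part James-Stein shrinkage of the whole block x toward zero."""
    return x * max(1.0 - lam / np.sum(x**2), 0.0)

x = np.array([0.3, -2.0, 1.2])
print(soft(x, 1.0), hard(x, 1.0), block_james_stein(x, 3.0))
```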
Mr. Abhimanyu Mitra
Cornell University
April 16, 2010

Two Problems in Tail Probability Estimation

Abstract

We will discuss two problems in tail probability estimation. First, we discuss the tail behavior of the distribution of the sum of asymptotically independent risks whose marginal distributions belong to the maximal domain of attraction of the Gumbel distribution. We impose conditions on the distribution of the risks $(X,Y)$ such that $P(X+Y>x) \sim c\,P(X>x)$. With examples we show that sub-exponentiality of the marginal distributions is not a necessary condition for the relation $P(X+Y>x) \sim c\,P(X>x)$. Second, we discuss how the model of hidden regular variation estimates hidden risks more accurately than multivariate regular variation. We discuss subtleties of the model of hidden regular variation, detection of the model from data, and estimation procedures for the joint tail probability using this model.
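
One elementary example consistent with the first claim (chosen for this note, not necessarily from the talk): for independent $X \sim \mathrm{Exp}(1)$ and $Y \sim \mathrm{Exp}(2)$, both marginals lie in the Gumbel domain of attraction and neither is subexponential, yet an exact computation gives $P(X+Y>x) = 2e^{-x} - e^{-2x} \sim 2\,P(X>x)$, i.e. $c = 2$.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10**6
x_samp = rng.exponential(1.0, n)      # X ~ Exp(rate 1)
y_samp = rng.exponential(0.5, n)      # Y ~ Exp(rate 2): numpy takes scale = 1/rate

for x in (2.0, 4.0, 6.0):
    mc = np.mean(x_samp + y_samp > x) / np.mean(x_samp > x)
    exact = 2.0 - np.exp(-x)          # (2 e^{-x} - e^{-2x}) / e^{-x}
    print(f"x={x}: Monte Carlo ratio {mc:.3f}, exact {exact:.3f} (limit 2)")
```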
Ms. Alina Crudu
Université de Rennes 1
April 22, 2010

Hybrid Stochastic Simplifications for Multiscale Gene Networks

Abstract

Stochastic simulation of gene networks by Markov processes has important applications in molecular biology. I propose a unified framework for hybrid simplifications of Markov models of multiscale gene network dynamics. I discuss several possible hybrid simplifications, and provide algorithms to obtain them from pure jump processes. In hybrid simplifications, some components are discrete and evolve by jumps, while others are continuous. Hybrid simplifications are obtained by partial Kramers-Moyal expansion, which is equivalent to the application of the central limit theorem to a sub-model. Furthermore, by averaging, we drastically reduce the simulation time. All methods will be illustrated on several gene network examples.
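
As background (a standard method, not the talk's contribution), the pure jump process that such hybrid schemes simplify is usually simulated with Gillespie's algorithm; a toy transcription-degradation model with illustrative rates:

```python
import numpy as np

def gillespie(k1=10.0, k2=1.0, m0=0, t_end=10.0, seed=5):
    """mRNA copy number m: transcription at rate k1, degradation at rate k2 * m."""
    rng = np.random.default_rng(seed)
    t, m = 0.0, m0
    times, counts = [t], [m]
    while t < t_end:
        rates = np.array([k1, k2 * m])        # propensities of the two reactions
        total = rates.sum()
        t += rng.exponential(1.0 / total)     # exponential waiting time to next event
        if rng.uniform() < rates[0] / total:  # choose which reaction fires
            m += 1                            # transcription
        else:
            m -= 1                            # degradation
        times.append(t)
        counts.append(m)
    return np.array(times), np.array(counts)

times, counts = gillespie()
print("mRNA count at end:", counts[-1], "(stationary mean k1/k2 = 10)")
# In a hybrid simplification, high-copy-number species would instead follow a
# diffusion (chemical Langevin) limit obtained by the partial Kramers-Moyal expansion.
```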
Dr. Rajat Mukherjee
Nestlé Research Center
May 19, 2010

Power Enhanced, Bootstrapped Multiple Testing

Abstract

Existing multiple testing procedures rely on adjusting the size of each individual test, or equivalently the individual p-value, towards controlling the Type-I error in terms of the family-wise error rate (FWER) or the false discovery rate (FDR). This is done without taking into account the power of the individual tests, thus deviating from the classical Neyman-Pearson hypothesis testing framework. Recently, Peña et al. (2010, to appear in the Annals of Statistics) considered the power of the individual tests as a function of size and suggested choosing the size-vector that maximizes the average power while controlling the FWER (FDR). The power as a function of size, under regularity conditions, can be shown to be the cumulative distribution function of the p-values. For non-parametric tests and discrete data, randomized tests are to be considered. We present a nonparametric bootstrap implementation of this methodology. Using modest numerical and real data examples, we show the gain in efficiency (average power) in using this methodology.
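
For contrast, a standard bootstrap multiple-testing baseline, the Westfall-Young style single-step maxT adjustment for FWER, is easy to sketch; note this is not the size-optimizing procedure of Peña et al. described above.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 50, 20
data = rng.normal(0.0, 1.0, (n, m))
data[:, :3] += 0.6                                   # three true signals

t_obs = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n))

B = 2000
centered = data - data.mean(0)                       # enforce the null by centering
max_null = np.empty(B)
for b in range(B):
    boot = centered[rng.integers(0, n, n)]           # resample rows with replacement
    t_b = boot.mean(0) / (boot.std(0, ddof=1) / np.sqrt(n))
    max_null[b] = np.abs(t_b).max()                  # null distribution of max |T|

p_adj = (1 + np.sum(max_null[None, :] >= np.abs(t_obs)[:, None], axis=1)) / (B + 1)
print("adjusted p-values of the 3 signal features:", np.round(p_adj[:3], 4))
```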
CANCELLED Prof. Richard Olshen
Stanford University
May 21, 2010

Successive Normalization of Rectangular Arrays

Abstract

When each subject in a study provides a vector of numbers (coordinates, features) for analysis and one wants to standardize coordinates, then for each coordinate one may subtract its mean across subjects and divide by its standard deviation. Each feature then has mean 0 and standard deviation 1. Covariances of features become correlations. Genomic and other data often come as rectangular arrays, where one coordinate (typically each column) denotes “subject” and the other a specific measurement (such as transformed expression of “a gene”). When analyzing such data one may ask that subjects and features be “on the same footing.” Thus, there may be a need to standardize across rows and columns of the matrix. We propose and investigate the convergence of what seems to us a natural approach to successive normalization, one we learned from colleague Bradley Efron. We study implementation on simulated data and data that arose in scientific experimentation. Time permitting, there will be discussion of extensions to problems when some data are missing or when they have finite range (such as with SNPs), and of relationships to domains of attraction. This is joint work with colleague Bala Rajaratnam.
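
The iteration itself is easy to state: alternately standardize columns and rows until the array stops changing. A minimal sketch (whether and how this converges is exactly what the talk analyses):

```python
import numpy as np

def successive_normalize(a, tol=1e-10, max_iter=500):
    a = a.astype(float).copy()
    for i in range(max_iter):
        a = (a - a.mean(0)) / a.std(0)                               # standardize columns
        b = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True) # then rows
        if np.max(np.abs(b - a)) < tol:       # stop when the row step changes nothing
            return b, i
        a = b
    return a, max_iter

rng = np.random.default_rng(7)
x, iters = successive_normalize(rng.normal(2.0, 3.0, (30, 8)))
print("iterations:", iters)
print("column means ~ 0:", np.allclose(x.mean(0), 0, atol=1e-6),
      "| row sds ~ 1:", np.allclose(x.std(1), 1, atol=1e-6))
```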
Prof. Vijay Nair
University of Michigan, Ann Arbor
September 27, 2010
15.15 – MA A1 10

Reliability Inference Based on Multistate and Degradation Models

Abstract

Reliability or survival analysis is traditionally based on time-to-failure data. In high-reliability applications, there is usually a high degree of censoring, which causes difficulties in making reasonable inference. There are a number of ways to increase the efficiency of reliability inference in such cases: accelerated testing, collection and use of extensive covariate information, and the use of multistate and degradation data when available. This talk will focus on the last topic. We will describe different multistate models that arise in applications and discuss inference for semi-Markov multistate models with panel data (interval censoring), a common type of data collection scheme. The second part of the talk deals with degradation data. We will review some common models for analyzing degradation data and then describe a class of models based on non-homogeneous Gaussian processes. Properties of the models and methods for inference will be discussed. The talk is based on joint work with Yang Yang, Yves Atchade, and Xiao Wang.
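
A hedged illustration of the simplest degradation model in this family (a homogeneous special case, with invented parameters): a Wiener process with drift, failure being declared at the first crossing of a threshold, whose first-passage time is inverse Gaussian.

```python
import numpy as np

rng = np.random.default_rng(8)
mu, sigma, thresh = 0.5, 0.3, 5.0             # illustrative drift, volatility, failure limit
dt, n_steps, n_units = 0.01, 4000, 1000

t = dt * np.arange(1, n_steps + 1)
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_units, n_steps))
paths = increments.cumsum(axis=1)             # simulated degradation paths D(t)

failed = paths.max(axis=1) >= thresh          # units that cross within the horizon
first_cross = (paths >= thresh).argmax(axis=1)
fail_times = t[first_cross[failed]]
print("mean simulated failure time:", fail_times.mean().round(3),
      "| inverse-Gaussian mean thresh/mu =", thresh / mu)
```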
Dr. Sonja Greven
Ludwig-Maximilians-Universität München
October 29, 2010
15.15 – MA A1 10

On the Behaviour of Marginal and Conditional AIC in Linear Mixed Models

Abstract

In linear mixed models, model selection frequently includes the selection of random effects. Two versions of the Akaike information criterion, AIC, have been used, based either on the marginal or on the conditional distribution. We show that the marginal AIC is no longer an asymptotically unbiased estimator of the Akaike information, and in fact favours smaller models without random effects. For the conditional AIC, we show that ignoring estimation uncertainty in the random effects covariance matrix, as is common practice, induces a bias that can lead to the selection of any random effect not predicted to be exactly zero. We derive an analytic representation of a corrected version of the conditional AIC, which avoids the high computational cost and imprecision of available numerical approximations. An implementation in an R package is provided. All theoretical results are illustrated in simulation studies, and their impact in practice is investigated in an analysis of childhood malnutrition in Zambia.
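
A sketch of the comparison in question, with simulated data and a naive parameter count (conventions for counting variance parameters vary, so this is illustrative only); the talk's point is that this marginal-AIC comparison is biased toward the model without the random effect.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
groups = np.repeat(np.arange(30), 10)            # 30 groups of 10 observations
x = rng.normal(size=300)
b = rng.normal(0.0, 0.3, 30)                     # small random intercepts
y = 1.0 + 2.0 * x + b[groups] + rng.normal(size=300)
X = sm.add_constant(x)

lmm = sm.MixedLM(y, X, groups=groups).fit(reml=False)  # random-intercept model, ML fit
ols = sm.OLS(y, X).fit()                               # model without random effects

m_aic_lmm = -2 * lmm.llf + 2 * len(lmm.params)         # naive marginal AIC
print("marginal AIC, mixed model:", round(m_aic_lmm, 1), "| OLS AIC:", round(ols.aic, 1))
```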
Prof. Ulrike Schneider
Georg-August-Universität Göttingen
November 19, 2010
15.15 – MA A1 10

On Distributional Properties of Penalized Maximum Likelihood Estimators

Abstract

Penalized least squares (or maximum likelihood) estimators, such as the famous Lasso estimator, have been studied intensively in the last few years. While many properties of these estimators are now well understood, the understanding of their distributional characteristics, such as finite-sample and large-sample limit distributions, risk properties and confidence sets, is still incomplete.

We study the distribution of several of these estimators, such as the Lasso, the adaptive Lasso and the hard-thresholding estimator within a normal orthogonal linear regression model. We derive finite-sample as well as large-sample limit distributions and demonstrate that these distributions are typically highly non-normal. Uniform convergence rates are obtained and shown to be slower than $n^{-1/2}$ when the estimator is sparse, i.e. tuned to perform consistent model selection. We also calculate the risk of these estimators, derive honest confidence intervals, and discuss extensions to the non-orthogonal case. Finally, we provide an impossibility result regarding the estimability of the distribution function.
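
The orthogonal-case phenomenon is easy to see numerically: there the Lasso is soft-thresholding of the least-squares estimate, and its finite-sample law has a point mass at zero, hence is highly non-normal. A quick check with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(10)
theta, lam, n_rep = 0.5, 1.0, 100_000
y = theta + rng.standard_normal(n_rep)           # least-squares estimator, orthogonal model
lasso = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)  # Lasso = soft thresholding

print("estimated P(estimator = 0):", np.mean(lasso == 0.0).round(3))  # an atom at zero
print("mean bias from shrinkage:", (lasso.mean() - theta).round(3))
```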
Prof. Helmut Rieder
Universität Bayreuth
December 2, 2010
15.15 – MA A3 30

Connections between Robustness and Semiparametrics

Abstract

Apart from their similar historical origin and common locally asymptotically normal framework, the relation of robust and semiparametric statistics may be clarified by an investigation of the following issues: robustness of adaptive estimators, robust influence curves for semiparametric models with infinite dimensional nuisance parameter, adaptiveness in the sense of Stein (1956) of robust neighborhood models with respect to a nuisance parameter, interpretation of gross error deviations as the value of an infinite dimensional nuisance parameter, and asymptotic normality of adaptive and robust estimators. We spell out the comparison for time series (ARMA, ARCH), semiparametric regression (Cox), and mixture models (Neyman-Scott). Our two fields may further be distinguished by tangent balls in the place of linear tangent spaces. In the context of estimation, an extended semiparametric technique (projection on balls) turns out almost, but not quite, to reproduce optimally robust influence curves. However, a semiparametric saddle point result for testing (even more general) convex tangent sets, based on the projection on the set of differences made up by the two sets, does yield asymptotic versions of the robust tests based on least favorable pairs in the sense of Huber-Strassen (1973). Finally, a semiparametric result for tangent cones, as opposed to spaces, leads to an optimal but very unstable estimator and, in particular, renders a concentration bound by Pfanzagl and Wefelmeyer (1982) unattainable.
Dr. Claire Gormley
University College Dublin
December 10, 2010
15.15 – MA 10

Statistical Modeling of Social Network Data in the Presence of Covariates

Abstract

Social network data represent the interactions between a group of social actors. Interactions between colleagues and friendship networks are typical examples of such data. The latent space model for social network data locates each actor in a network in a latent (social) space and models the probability of an interaction between two actors as a function of their locations. The latent position cluster model extends the latent space model to deal with network data in which clusters of actors exist – actor locations are drawn from a finite mixture model, each component of which represents a cluster of actors. A mixture of experts model builds on the structure of a mixture model by taking account of both observations and associated covariates when modeling a heterogeneous population. Here, a mixture of experts extension of the latent position cluster model is developed. The mixture of experts framework allows covariates to enter the latent position cluster model in a number of ways, yielding different model interpretations. Estimates of the model parameters are derived in a Bayesian framework using a Markov chain Monte Carlo algorithm. The algorithm is generally computationally expensive – ideas from optimization transfer algorithms are used to derive surrogate proposal distributions which shadow the target distributions, reducing the computational burden. The methodology is demonstrated through an illustrative example detailing relations between a group of lawyers in the USA. Joint work with Brendan Murphy.
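
For background, a minimal sketch of the underlying latent space model of Hoff, Raftery and Handcock (2002), with logit $P(y_{ij}=1) = \alpha - \|z_i - z_j\|$, an (improper) flat prior on positions, and a single-actor Metropolis update; the cluster and mixture-of-experts layers of the talk are omitted.

```python
import numpy as np

rng = np.random.default_rng(11)

def log_lik(y, z, alpha):
    """Bernoulli log-likelihood of the latent space model, each dyad counted once."""
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=2)
    eta = alpha - d
    ll = y * eta - np.log1p(np.exp(eta))
    iu = np.triu_indices(len(y), k=1)
    return ll[iu].sum()

n, alpha = 15, 1.0
z_true = rng.normal(size=(n, 2))                  # actors' true latent positions
d = np.linalg.norm(z_true[:, None] - z_true[None, :], axis=2)
y = (rng.uniform(size=(n, n)) < 1 / (1 + np.exp(-(alpha - d)))).astype(int)
y = np.triu(y, 1)
y = y + y.T                                       # symmetric network, no self-ties

z = rng.normal(size=(n, 2))                       # initial latent positions
for step in range(1000):                          # Metropolis over single actors
    i = rng.integers(n)
    z_prop = z.copy()
    z_prop[i] += 0.2 * rng.standard_normal(2)
    if np.log(rng.uniform()) < log_lik(y, z_prop, alpha) - log_lik(y, z, alpha):
        z = z_prop
print("log-likelihood after sampling:", round(log_lik(y, z, alpha), 2))
```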
Ms. Caroline Uhler
University of California, Berkeley
December 20, 2010
15.15 – GR A 330

Geometry of Maximum Likelihood Estimation in Gaussian Graphical Models

Abstract

We study maximum likelihood estimation in Gaussian graphical models from the perspective of convex algebraic geometry. It is well-known that the maximum likelihood estimator (MLE) exists with probability one if the number of observations is at least as large as the number of variables. We examine the situation with fewer samples for bipartite graphs and grids. Taking an algebraic approach, we find the first example of a graph for which the MLE exists with probability one even when the number of observations equals the treewidth of the underlying graph.
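
The classical fact the abstract starts from is easy to verify numerically: with $n < p$ observations the sample covariance matrix is singular, so the unrestricted MLE cannot exist; the talk's algebraic results concern how much smaller $n$ can be once graph constraints are imposed.

```python
import numpy as np

rng = np.random.default_rng(12)
p = 10
for n in (5, 11):
    x = rng.normal(size=(n, p))
    s = np.cov(x, rowvar=False)                  # sample covariance, rank <= n - 1
    # for n = 5 the smallest eigenvalue is zero up to rounding; for n = 11 it is positive
    print(f"n={n}: smallest eigenvalue of S = {np.linalg.eigvalsh(s).min():.2e}")
```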