10th World Congress in Probability and Statistics

Plenary Thu-1

IMS Medallion Lecture (Daniela Witten)

Conference
9:00 AM — 10:00 AM KST
Local
Jul 21 Wed, 8:00 PM — 9:00 PM EDT

Selective inference for trees

Daniela Witten (University of Washington)

As datasets grow in size, the focus of data collection has increasingly shifted away from testing pre-specified hypotheses and towards hypothesis generation. Researchers are often interested in performing an exploratory data analysis to generate hypotheses, and then testing those hypotheses on the same data. Unfortunately, this type of 'double dipping' can lead to highly inflated Type I error rates. In this talk, I will consider double dipping on trees. First, I will focus on trees generated via hierarchical clustering, and will consider testing the null hypothesis of equality of cluster means. I will propose a test for a difference in means between estimated clusters that accounts for the cluster estimation process, using a selective inference framework. Second, I'll consider trees generated using the CART procedure, and will again use selective inference to conduct inference on the means of the terminal nodes. Applications include single-cell RNA-sequencing data and the Box Lunch Study. This is collaborative work with Lucy Gao (U. Waterloo), Anna Neufeld (U. Washington), and Jacob Bien (USC).
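A minimal simulation makes the double-dipping problem concrete. The sketch below is an illustration only, not the speaker's method: it draws a single Gaussian sample containing no true clusters, "clusters" it by splitting at the median (a crude stand-in for hierarchical clustering), and then computes the naive two-sample t statistic between the estimated clusters.

```python
import math
import random

def naive_t_after_clustering(n=100, seed=0):
    """Draw one N(0,1) sample with no true clusters, 'cluster' it by
    splitting at the median, then compute the naive two-sample
    t statistic between the two estimated clusters."""
    rng = random.Random(seed)
    x = sorted(rng.gauss(0.0, 1.0) for _ in range(n))
    lo, hi = x[:n // 2], x[n // 2:]

    def mean(v):
        return sum(v) / len(v)

    def var(v):
        m = mean(v)
        return sum((u - m) ** 2 for u in v) / (len(v) - 1)

    se = math.sqrt(var(lo) / len(lo) + var(hi) / len(hi))
    return (mean(hi) - mean(lo)) / se

# The statistic lands far beyond any conventional critical value even
# though the global null is true: clustering first and then testing on
# the same data wildly inflates the Type I error rate.
t_naive = naive_t_after_clustering()
```

This is exactly the failure mode that the selective inference framework of the talk is designed to correct.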

Session Chair

Ja-Yong Koo (Korea University)

Plenary Thu-2

IMS Medallion Lecture (Andrea Montanari)

Conference
10:00 AM — 11:00 AM KST
Local
Jul 21 Wed, 9:00 PM — 10:00 PM EDT

High-dimensional interpolators: From linear regression to neural tangent models

Andrea Montanari (Stanford University)

Modern machine learning methods, most notably multi-layer neural networks, require fitting highly non-linear models comprising tens of thousands to millions of parameters. However, little attention is paid to the regularization mechanism that controls model complexity, and the resulting models are often so complex as to achieve vanishing training error. Despite this, these models generalize well to unseen data: they have small test error. I will discuss several examples of this phenomenon, leading to two-layer neural networks in the so-called lazy regime. For these examples, precise asymptotics can be determined mathematically, using tools from random matrix theory, and a unifying picture is emerging. A common feature is the fact that a complex unregularized nonlinear model becomes essentially equivalent to a simpler model, which is however regularized in a non-trivial way.
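The simplest instance of this implicit-regularization phenomenon can be checked by hand: in overparameterized linear regression, the ridgeless (minimum-norm) interpolator is the limit of ridge regression as the penalty vanishes. The sketch below is an illustrative toy case with a single observation, not the lazy-regime asymptotics of the talk.

```python
def min_norm_interpolator(a, y):
    """With a single observation (features a, response y), every
    interpolator b satisfies <a, b> = y; the minimum-l2-norm one is
    b = a * y / ||a||^2."""
    s = sum(v * v for v in a)
    return [v * y / s for v in a]

def ridge(a, y, lam):
    """Closed-form ridge solution for the same single observation:
    b = a * y / (||a||^2 + lam)."""
    s = sum(v * v for v in a)
    return [v * y / (s + lam) for v in a]

a, y = [1.0, 2.0, 2.0], 3.0
b0 = min_norm_interpolator(a, y)   # interpolates: <a, b0> == y
b_ridge = ridge(a, y, 1e-8)        # nearly identical for tiny lam
```

As the ridge penalty shrinks to zero, the fitted vector converges to the minimum-norm interpolator, so an unregularized interpolating fit secretly carries an l2 regularization.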

[Based on joint papers with: Michael Celentano, Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, Feng Ruan, Youngtak Sohn, Jun Yan, Yiqiao Zhong]

Session Chair

Myunghee Cho Paik (Seoul National University)

Invited 38

IMS Lawrence D. Brown Ph.D. Student Award Session (Organizer: Institute of Mathematical Statistics)

Conference
11:30 AM — 12:00 PM KST
Local
Jul 21 Wed, 10:30 PM — 11:00 PM EDT

Efficient manifold approximation with spherelets

Didong Li (Princeton University / University of California)

Data lying in a high dimensional ambient space are commonly thought to have a much lower intrinsic dimension. In particular, the data may be concentrated near a lower-dimensional subspace or manifold. There is an immense literature focused on approximating the unknown subspace, and in exploiting such approximations in clustering, data compression, and building of predictive models. Most of the literature relies on approximating subspaces using a locally linear, and potentially multiscale, dictionary. In this talk, a simple and general alternative is introduced, which instead uses pieces of spheres, or spherelets, to locally approximate the unknown subspace. Theory is developed showing that spherelets can produce lower covering numbers and MSEs for many manifolds. We develop spherical principal components analysis (SPCA). Results relative to state-of-the-art competitors show gains in ability to accurately approximate the subspace with fewer components. In addition, unlike most competitors, our approach can be used for data denoising and can efficiently embed new data without retraining. The methods are illustrated with standard toy manifold learning examples, and applications to multiple real data sets.

Toward instance-optimal reinforcement learning

Ashwin Pananjady (Georgia Institute of Technology)

The paradigm of reinforcement learning has now made inroads in a wide range of applied problem domains. This empirical research has revealed the limitations of our theoretical understanding: popular RL algorithms exhibit a variety of behavior across domains and problem instances, and existing theoretical bounds, which are generally based on worst-case assumptions, can often produce pessimistic predictions. An important goal is thus to develop instance-specific analyses that help to reveal what aspects of a given problem make it "easy" or "hard", and allow distinctions to be drawn between ostensibly similar algorithms. Taking an approach grounded in nonparametric statistics, we initiate a study of this question for the policy evaluation problem. We show via information-theoretic lower bounds that many popular variants of stochastic approximation or "temporal difference learning" algorithms *do not* exhibit the optimal instance-specific performance in the finite-sample regime. On the other hand, making careful modifications to these algorithms does result in automatic adaptation to the intrinsic difficulty of the problem. When there is function approximation involved, our bounds also characterize the instance-optimal tradeoff between approximation and estimation errors in solving projected fixed-point equations, a general class of problems that includes policy evaluation as a special case. These oracle inequalities, which are non-standard and involve a non-unit pre-factor multiplying the approximation error, may be of independent statistical interest.
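The policy evaluation task above has a classical baseline: the TD(0) recursion V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)). A toy sketch on a deterministic two-state chain follows; it is illustrative only, since the talk concerns instance-optimal modifications of such algorithms.

```python
def td0_policy_evaluation(episodes=2000, gamma=1.0):
    """TD(0) on a deterministic two-state Markov reward process:
    state 0 -> state 1 (reward 0), state 1 -> terminal (reward 1).
    True values: V(0) = gamma, V(1) = 1."""
    V = [0.0, 0.0]
    for k in range(episodes):
        alpha = 1.0 / (k + 1)                          # decaying step size
        V[0] += alpha * (0.0 + gamma * V[1] - V[0])    # transition 0 -> 1
        V[1] += alpha * (1.0 + gamma * 0.0 - V[1])     # 1 -> terminal (V = 0)
    return V
```

On this trivially easy instance the iterates converge quickly to the true values; the talk's point is precisely that worst-case bounds ignore such instance-dependent structure.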

Bayesian pyramids: identifying interpretable discrete latent structures from discrete data

Yuqi Gu (Columbia University)

High dimensional categorical data are routinely collected in biomedical and social sciences. It is of great importance to build interpretable models that perform dimension reduction and uncover meaningful latent structures from such discrete data. Identifiability is a fundamental requirement for valid modeling and inference in such scenarios, yet is challenging to address when there are complex latent structures. In this article, we propose a class of interpretable discrete latent structure models for discrete data and develop a general identifiability theory. Our theory is applicable to various types of latent structures, ranging from a single latent variable to deep layers of latent variables organized in a sparse graph (termed a Bayesian pyramid). The proposed identifiability conditions can ensure Bayesian posterior consistency under suitable priors. As an illustration, we consider the two-latent-layer model and propose a Bayesian shrinkage estimation approach. Simulation results for this model corroborate identifiability and estimability of the model parameters. Applications of the methodology to DNA nucleotide sequence data uncover discrete latent features that are both interpretable and highly predictive of sequence types. The proposed framework provides a recipe for interpretable unsupervised learning of discrete data, and can be a useful alternative to popular machine learning methods.

Q&A for Invited Session 38

This talk does not have an abstract.

Session Chair

Tracy Ke (Harvard University)

Organized 11

Random Growth, Spatial Processes and Related Models (Organizer: Erik Bates)

Conference
11:30 AM — 12:00 PM KST
Local
Jul 21 Wed, 10:30 PM — 11:00 PM EDT

Holes in first-passage percolation

Wai-Kit Lam (University of Minnesota)

In first-passage percolation (FPP), one places i.i.d. nonnegative weights (t_e) on the nearest-neighbor edges of Z^d, and studies the induced pseudometric. One of the main goals in FPP is to understand the geometry of the metric ball B(t) centered at the origin with radius t. When the weights (t_e) have a heavy-tailed distribution, it is known that B(t) contains many small holes. It is natural to ask whether large holes typically exist, and if so, how large they can be. In an ongoing project with M. Damron, J. Gold and X. Shen, we show that for any distribution with P(t_e = 0) < p_c, a.s. for all large t, the size of the largest hole is of order at least \log{t}, and the number of holes is of order at least t^{d-1}. If we assume the limiting shape of B(t) is uniformly curved (which is unproven), we can also show that in two dimensions, a.s. the size of the largest hole is at most of order (\log{t})^C.
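A small sketch of the model itself (illustrative only; the talk's results concern asymptotics far beyond such scales): i.i.d. Exp(1) edge weights on a box of Z^2, passage times T(v) computed with Dijkstra's algorithm, and the ball B(t) read off by thresholding.

```python
import heapq
import random

def passage_times(n=10, seed=1):
    """First-passage percolation on an n x n box of Z^2: i.i.d. Exp(1)
    edge weights, T(v) = cheapest total weight of a path from the
    origin to v, computed with Dijkstra's algorithm."""
    rng = random.Random(seed)
    w = {}                          # edge weights, keyed by sorted vertex pair

    def weight(u, v):
        key = (min(u, v), max(u, v))
        if key not in w:
            w[key] = rng.expovariate(1.0)
        return w[key]

    T = {(0, 0): 0.0}
    pq = [(0.0, (0, 0))]
    while pq:
        t, u = heapq.heappop(pq)
        if t > T.get(u, float("inf")):
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (u[0] + dx, u[1] + dy)
            if 0 <= v[0] < n and 0 <= v[1] < n:
                nt = t + weight(u, v)
                if nt < T.get(v, float("inf")):
                    T[v] = nt
                    heapq.heappush(pq, (nt, v))
    return T

T = passage_times()
ball = {v for v, t in T.items() if t <= 2.0}   # the metric ball B(2)
```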

Coalescence estimates for the corner growth model with exponential weights

Xiao Shen (University of Wisconsin-Madison)

We establish estimates for the coalescence time of semi-infinite directed geodesics in the planar corner growth model with i.i.d. exponential weights. There are four estimates: upper and lower bounds for both fast and slow coalescence on the correct scale with exponent 3/2. The lower bound for fast coalescence is new and has the optimal exponential order of magnitude. For the other three, we provide proofs that do not rely on integrable probability or on the connection with the totally asymmetric simple exclusion process, in order to provide a template for the extension to other models. We utilize a geodesic duality introduced by Pimentel and properties of the increment-stationary last-passage percolation process.
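For reference, the corner growth model's last-passage times satisfy the standard recursion G(i, j) = w(i, j) + max(G(i-1, j), G(i, j-1)). The sketch below implements the model itself, not the coalescence estimates of the talk, which concern semi-infinite geodesics.

```python
import random

def last_passage_times(n, weight):
    """Corner growth model: G(i, j) is the maximal total weight over
    up-right paths from (0, 0) to (i, j), via the recursion
    G(i, j) = w(i, j) + max(G(i-1, j), G(i, j-1))."""
    G = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            best = 0.0
            if i > 0:
                best = G[i - 1][j]
            if j > 0:
                best = max(best, G[i][j - 1])
            G[i][j] = weight(i, j) + best
    return G

rng = random.Random(0)
G = last_passage_times(50, lambda i, j: rng.expovariate(1.0))
# With i.i.d. Exp(1) weights, G(n, n) / n -> 4 as n grows
# (the classical shape result for this exactly solvable model).
```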

Scaling limits of sandpiles

Ahmed Bou-Rabee (University of Chicago)

The Abelian sandpile is a diffusion process on the integer lattice which produces striking, kaleidoscopic patterns. I will discuss recent progress towards understanding these patterns and their stability under randomness.

Q&A for Organized Contributed Session 11

This talk does not have an abstract.

Session Chair

Erik Bates (University of California Berkeley)

Organized 19

Recent Advances in Complex Data Analysis (Organizer: Seung Jun Shin)

Conference
11:30 AM — 12:00 PM KST
Local
Jul 21 Wed, 10:30 PM — 11:00 PM EDT

Kernel density estimation and deconvolution under radial symmetry

Kwan-Young Bak (Korea University)

This study illustrates a dimensionality reduction effect of radial symmetry in nonparametric density estimation. To deal with the class of radially symmetric functions, we adopt a generalized translation operation that preserves the symmetry structure. Radial kernel density estimators based on directly or indirectly observed random samples are proposed. For the latter case, we analyze deconvolution problems with four distinct scenarios depending on the symmetry assumptions on the signal and noise. Minimax upper and lower bounds are established for each scheme to investigate the role of the radial symmetry in determining optimal rates of convergence. The results confirm that the radial symmetry reduces the dimension of the estimation problems so that the optimal rate of convergence coincides with the univariate convergence rate except at the origin where a singularity occurs. The results also imply that the proposed estimators are rate optimal in the minimax sense for the Sobolev class of densities.

Penalized polygram regression for bivariate smoothing

Jae-Hwan Jhong (Chungbuk National University)

We consider the problem of estimating a bivariate function over the plane using triangulation and penalization techniques. To provide a spatially adaptive method, a total variation penalty for the bivariate spline function is used to remove unnecessary common edges from the initial triangulation. We introduce a coordinate descent algorithm that efficiently solves the convex optimization problem arising from the total variation penalty. The proposed estimator, called Penalized Polygram Regression (PPR), is piecewise linear and continuous over adjacent polygons, not limited to triangles, and its corresponding basis functions are obtained as the coordinate descent algorithm eliminates common edges. Numerical studies using both simulated and real data examples are provided to illustrate the performance of the proposed method.

Penalized logistic regression using functional connectivity as covariates with an application to mild cognitive impairment

Eunjee Lee (Chungnam National University)

There is an emerging interest in brain functional connectivity (FC) based on functional Magnetic Resonance Imaging in Alzheimer’s disease (AD) studies. The complex and high-dimensional structure of FC makes it challenging to explore the association between altered connectivity and AD susceptibility. We develop a pipeline to refine FC as proper covariates in a penalized logistic regression model and classify normal and AD susceptible groups. Three different quantification methods are proposed for FC refinement. One of the methods is dimension reduction based on common component analysis (CCA), which is employed to address the limitations of the other methods. We applied the proposed pipeline to the Alzheimer’s Disease Neuroimaging Initiative (ADNI) data and deduced pathogenic FC biomarkers associated with AD susceptibility. The refined FC biomarkers were related to brain regions for cognition, stimuli processing, and sensorimotor skills. We also demonstrated that a model using CCA performed better than others in terms of classification performance and goodness-of-fit.

Resmax: detecting voice spoofing attacks with residual network and max filter map

Il-Youp Kwak (Chung-Ang University)

The 2019 automatic speaker verification spoofing and countermeasures challenge (ASVspoof) aims to facilitate the design of highly accurate voice spoofing attack detection systems. However, the challenge does not emphasize model complexity and latency requirements, even though such constraints are strict and integral in real-world deployment. Hence, most of the top-performing solutions from the competition use an ensemble approach, combining multiple complex deep learning models to maximize detection accuracy. This kind of approach sits uneasily with real-world deployment constraints. To design a lightweight system, we combine the notions of skip connection (from ResNet) and max filter map (from Light CNN), and evaluate its accuracy using the ASVspoof 2019 dataset. By optimizing a well-known signal processing feature called the constant Q transform (CQT), our single model achieved a spoofing attack detection equal error rate (EER) of 0.16%, outperforming the top ensemble system from the competition, which achieved an EER of 0.39%.

Weighted validation of heteroscedastic regression models for better selection

Yoonsuh Jung (Korea University)

Statistical modeling can be divided into two processes: model fitting and model selection for the given task. For model fitting, it is vital to select the appropriate type of model to use; this step is taken first. For model selection, the model is fine-tuned via variable and parameter selection. Improving model selection in the presence of heteroscedasticity is the main goal of this talk. Model selection is usually conducted by measuring the prediction error. When there is heteroscedasticity in the data, observations with high variation tend to produce larger prediction errors. In turn, model selection is strongly affected by observations with large variation. To reduce the effect of heteroscedastic data, we propose weighted selection during the model selection process. The proposed method reduces the impact of large prediction errors via weighted prediction and leads to better model and parameter selection. The benefits of the proposed method are demonstrated in simulations and with two real data sets.
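One natural form of weighted validation divides each squared prediction error by an estimate of the local noise variance, so that high-variance observations do not dominate selection. The sketch below is an assumption-laden illustration; the talk's exact weighting scheme may differ.

```python
def weighted_validation_error(y, y_hat, sigma2_hat):
    """Variance-weighted validation error for heteroscedastic data:
    each squared prediction error is down-weighted by an estimate of
    the local noise variance (illustrative weighting, not necessarily
    the talk's)."""
    return sum((yi - fi) ** 2 / s2
               for yi, fi, s2 in zip(y, y_hat, sigma2_hat)) / len(y)
```

With unit variance estimates this reduces to the ordinary mean squared prediction error, while noisy observations count proportionally less as their estimated variance grows.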

Q&A for Organized Contributed Session 19

This talk does not have an abstract.

Session Chair

Seung Jun Shin (Korea University)

Organized 25

Recent Advances in Biostatistics (Organizer: Sangwook Kang)

Conference
11:30 AM — 12:00 PM KST
Local
Jul 21 Wed, 10:30 PM — 11:00 PM EDT

Bayesian nonparametric adjustment of confounding

Chanmin Kim (Sungkyunkwan University)

In observational studies, confounder selection is a crucial task in estimating the causal effect of an exposure. Wang et al. (2012, 2015) propose Bayesian adjustment for confounding (BAC) methods to account for the uncertainty in confounder selection by jointly fitting parametric models for exposure and outcome, in which Bayesian model averaging (BMA) is utilized to obtain the causal effect averaged across all potential models according to their posterior weights. In this work, we propose a Bayesian nonparametric approach to select confounders and estimate causal effects without assuming any model structures for exposure and outcome. With the Bayesian additive regression trees (BART) method, the causal model can capture complex data structure flexibly and select a subset of true confounders by specifying a common prior on the selection probabilities in both exposure and outcome models. The proposed model does not require a separate BMA process to average effects across many models since, in our method, selection of confounders and estimation of causal effects based on the selected confounders are processed simultaneously within each MCMC iteration. A set of extensive simulation studies demonstrates that the proposed method outperforms competing approaches in a variety of situations.

Multivariate point process models for microbiome image analysis

Kyu Ha Lee (Harvard University)

We investigate the spatial distribution of microbes to understand the role of biofilms in human and environmental health. Advances in spectral imaging technologies enable us to display how different taxa (e.g., species or genera) are located relative to one another and to host cells. However, most commonly used quantitative methods are limited to describing spatial patterns of bivariate data. Therefore, we propose a flexible multivariate spatial point process model that can quantify spatial relationships among the multiple taxa observable in biofilm images. We have developed an efficient computational scheme based on the Hamiltonian Monte Carlo algorithm, implemented in an R package. We applied the proposed model to tongue biofilm image data.

Look before you leap: systematic evaluation of tree-based statistical methods in subgroup identification

Xiaojing Wang (University of Connecticut)

Subgroup analysis, as the key component of personalized medicine development, has attracted a lot of interest in recent years. While a number of exploratory subgroup searching approaches have been proposed, informative evaluation criteria and scenario-based systematic comparison of these methods are still underdeveloped topics. In this article, we propose two evaluation criteria in connection with traditional type I error and power concepts, and another criterion to directly assess recovery performance of the underlying treatment effect structure. Extensive simulation studies are carried out to investigate the empirical performance of a variety of tree-based exploratory subgroup methods under the proposed criteria. A real data application is also included to illustrate the necessity and importance of method evaluation.

Q&A for Organized Contributed Session 25

This talk does not have an abstract.

Session Chair

Sangwook Kang (Yonsei University)

Organized 31

BOK Contributed Session: Finance and Contemporary Issues (Organizer: BOK Economic Statistics Department)

Conference
11:30 AM — 12:00 PM KST
Local
Jul 21 Wed, 10:30 PM — 11:00 PM EDT

Multi-step reflection principle and barrier options

Seongjoo Song (Korea University)

This paper examines a class of barrier options, multi-step barrier options, which can have any finite number of barriers of any level. We obtain a general, explicit expression for option prices of this type under the Black-Scholes model. Multi-step barrier options are not only useful in that they can handle barriers of different levels and time steps, but they can also approximate options with arbitrary barriers. Moreover, they can be embedded in financial products such as deposit insurance based on jump models with simple barriers. Along the way, we derive a multi-step reflection principle, which generalizes the reflection principle of Brownian motion.
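The single-barrier reflection principle that the paper generalizes states that P(max_{s<=t} W_s >= a) = 2 P(W_t >= a) for standard Brownian motion. A Monte Carlo sketch checking this numerically (a random-walk approximation of the paths, so a small discretization bias remains):

```python
import math
import random

def hitting_prob_mc(a=1.0, steps=400, paths=5000, seed=7):
    """Estimate P(max_{s<=1} W_s >= a) for standard Brownian motion by
    simulating Gaussian random-walk approximations of the paths."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 / steps)
    hits = 0
    for _ in range(paths):
        w, m = 0.0, 0.0
        for _ in range(steps):
            w += rng.gauss(0.0, sd)
            if w > m:
                m = w
        if m >= a:
            hits += 1
    return hits / paths

# Reflection principle: P(max_{s<=1} W_s >= a) = 2 * P(W_1 >= a).
exact = math.erfc(1.0 / math.sqrt(2.0))   # = 2 * P(W_1 >= 1)
estimate = hitting_prob_mc()              # close, up to discretization bias
```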

Change point analysis in Bitcoin return series: a robust approach

Junmo Song (Kyungpook National University)

Over the last decade, Bitcoin has attracted a great deal of public interest and, along with this, the Bitcoin market has grown rapidly. Its speculative price movements have also drawn the interest of many researchers as well as financial investors. Accordingly, numerous studies have been devoted to the analysis of Bitcoin, more precisely the volatility modelling of Bitcoin returns. In this study, we are interested in change point analysis of Bitcoin return data. Since Bitcoin returns have some outlying observations that can affect statistical inferences undesirably, we use a robust test for parameter change to locate significant change points. We report some change points that are not detected by the existing tests and demonstrate that the model with parameter changes is better fitted to the data. Finally, we show that the model incorporating parameter change can improve the forecasting performance of Value-at-Risk.

A self-normalization test for correlation matrix change

Ji Eun Choi (Pukyong National University)

We construct a new test for correlation matrix breaks based on the self-normalization method. The self-normalization test has practical advantages over existing tests: easy and stable implementation; no singularity issue or bandwidth selection issue; and a remedy for the size distortion of existing tests under (near) singularity, serial dependence, conditional heteroscedasticity, or unconditional heteroscedasticity. This advantage is demonstrated experimentally by a Monte Carlo simulation and theoretically by showing that no estimation of the complicated covariance matrix of the sample correlations is needed. We establish the asymptotic null distribution and consistency of the self-normalization test. We apply the correlation matrix break tests to the stock log returns of the companies with the 10 largest weights in the NASDAQ 100 index and to five volatility indexes for options on individual equities.

Volatility as a risk measure of financial time series: high frequency and realized volatility

Sun Young Hwang (Sookmyung Women's University)

Volatility as a risk measure is defined as a time-varying variance process of the return of an asset. GARCH models have been useful for modelling the volatilities of various financial time series. This talk reviews standard volatility computations via GARCH models and then discusses recent issues such as multivariate volatility, realized volatility, and high frequency volatility of financial time series. To illustrate, applications to various Korean financial time series are made.
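The basic GARCH(1,1) volatility recursion underlying such computations can be sketched in a few lines (a generic illustration, not the multivariate or realized-volatility extensions discussed in the talk):

```python
def garch11_volatility(returns, omega, alpha, beta, sigma2_0):
    """GARCH(1,1) conditional-variance recursion:
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    sig2 = [sigma2_0]
    for r in returns[:-1]:
        sig2.append(omega + alpha * r * r + beta * sig2[-1])
    return sig2

# A large squared return raises the conditional variance at the next
# step, after which the shock decays geometrically at rate alpha + beta.
vols = garch11_volatility([1.0, 0.0, 0.0],
                          omega=0.1, alpha=0.2, beta=0.5, sigma2_0=1.0)
```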

Q&A for Organized Contributed Session 31

This talk does not have an abstract.

Session Chair

Changryong Baek (Sungkyunkwan University)

Poster III-1

Poster Session III-1

Conference
11:30 AM — 12:00 PM KST
Local
Jul 21 Wed, 10:30 PM — 11:00 PM EDT

A penalized matrix normal mixture model for clustering matrix data

Jinwon Heo (Chonnam National University)

Along with the advance of technology, matrix data such as medical and industrial images have emerged in many practical fields. These data usually have high dimension and are not easy to cluster due to the intrinsic correlation structure among rows and columns. Most approaches convert matrix data to multi-dimensional vectors and apply conventional clustering methods to them, and hence suffer from an extreme high-dimensionality problem as well as a lack of interpretability of the correlation structure among row/column variables. Gao et al. (2020) proposed a regularized mixture model for clustering matrix-valued data by imposing a sparsity structure on the mean signal of each cluster. We extend their approach by also regularizing the covariance to cope with the curse of dimensionality for images of large size. We propose a penalized matrix-normal mixture model with lasso-type penalty terms on both mean and covariance matrices, and then develop an expectation maximization algorithm to estimate the parameters. We apply the proposed method to simulated data as well as real data sets, and confirm its clustering accuracy over some conventional methods.

Univariate and multivariate normality tests using an entropy-based transformation

Shahzad Munir (Xiamen University)

We introduce a new normality test which may be applied to univariate and multivariate i.i.d. or time series data. In the univariate case, the test is constructed by first applying a transformation based on the definition of entropy. The test only requires the estimation of the variance of the transformed data; it is less sensitive to errors from estimating the kurtosis coefficient but is still able to detect deviations in this higher-order moment. In the univariate case, we show that for a broad class of stationary processes, the proposed test statistic asymptotically follows a standard normal distribution, and no kernel smoothing is required to consistently estimate the asymptotic variance of the proposed test. The extension to the multivariate case is also straightforward and allows for alternatives to diagnostic testing of vector autoregressive models.

Geum river network data analysis via weighted PCA

Seeun Park (Seoul National University)

Various measurements of water quality are collected at monitoring sites spread throughout a river network. Monitoring this kind of dataset is critical for water quality evaluation and improvement, but the unique structure of the river network prevents PCA from achieving accurate results due to autocorrelation among variables. In the literature, Gallacher et al. (2017) introduced a weighted PCA that reflects the known spatiotemporal structure of the river network to adjust for the autocorrelation. This study aims to apply the weighted PCA method to Geum River network data in South Korea and to improve the method itself. As a result, the weighted PCA successfully identified certain patterns in the Geum River data that conventional PCA cannot recover. However, the weighted PCA method does not take into account inhomogeneity in the covariance structure of the data, which might lead to inaccurate results. In fact, inhomogeneous covariance structures are found in the Geum River data across regions and seasons. Therefore, our further plan is to improve the weighted PCA so that it can handle this inhomogeneous structure.

Cauchy combination test with thresholding under arbitrary dependency structures

Junsik Kim (Seoul National University)

Combining individual p-values to aggregate sparse and weak effects is of substantial interest in large-scale data analysis. The individual p-values or test statistics are often correlated, although many p-value combination methods are developed under an i.i.d. assumption. The Cauchy combination test is a method to combine p-values under such arbitrary dependence structures, but in practice, its type I error increases as the correlation increases. In this paper, we propose a global test that extends the Cauchy combination test by thresholding arbitrarily dependent p-values. Under an arbitrary dependence structure, we show that the tail probability of the proposed method is asymptotically equivalent to that of the Cauchy distribution. In addition, we show that the power of the proposed test asymptotically achieves the optimal detection boundary under a strong sparsity condition. Extensive simulation results show that the power of the proposed test is robust to correlation coefficients and more powerful in sparse situations. As a case study, we apply the proposed test to a GWAS of inflammatory bowel disease (IBD).
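For reference, the standard (unthresholded) Cauchy combination statistic has a simple closed form; the sketch below implements that baseline, not the thresholded extension proposed in the poster.

```python
import math

def cauchy_combination(pvalues, weights=None):
    """Cauchy combination test: map each p-value to a standard Cauchy
    variable via tan((0.5 - p) * pi), take a weighted average, and read
    the combined p-value off the Cauchy tail. The tail approximation
    holds under arbitrary dependence among the individual tests."""
    n = len(pvalues)
    if weights is None:
        weights = [1.0 / n] * n
    T = sum(w * math.tan((0.5 - p) * math.pi)
            for w, p in zip(weights, pvalues))
    return 0.5 - math.atan(T) / math.pi
```

A single p-value passes through unchanged, and several small p-values combine into a small global p-value, which is the behavior the thresholded variant refines when correlations are strong.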

Control charts for monitoring linear profiles in the detection of network intrusion

Daeun Kim (Dankook University)

This study considers the problem of network intrusion detection. Sklavounos et al. (2017) proposed using EWMA charts on crucial characteristics as a network intrusion detection method. This paper expands on this idea and attempts to detect network intrusions by monitoring functional relationships between multiple features rather than a single feature. We consider profiles in which the principal characteristic is functionally dependent on explanatory variables. Profile monitoring is then used to verify the stability of the functional relationship over time, which is widely applied in calibration applications. In particular, there has been much work on linear profiles; in this case, the stability of the profile is determined by monitoring statistics on the slope and intercept, so Shewhart control charts or multivariate control charts can be considered. Previous studies assume that the explanatory variable takes the same fixed values for each profile. Therefore, to address the network intrusion problem, the explanatory variable must be allowed to be observed differently for each profile. In this regard, we evaluate the robustness of the existing control charts and determine whether the extended control charts effectively detect network intrusion. We perform a real-data analysis using the NSL-KDD data, which is popular for evaluating the performance of network intrusion detection algorithms.
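The EWMA chart used as a building block here follows the standard recursion z_t = lam * x_t + (1 - lam) * z_{t-1} with time-varying control limits. A generic sketch (parameter names are illustrative):

```python
import math

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """EWMA control chart: smooth the observations with
    z_t = lam * x_t + (1 - lam) * z_{t-1} and flag z_t when it leaves
    mu0 +/- L * sigma * sqrt(lam / (2 - lam) * (1 - (1 - lam)^(2t))).
    Returns (z_t, lower limit, upper limit, out-of-control?) per point."""
    z, out = mu0, []
    for t, xt in enumerate(x, start=1):
        z = lam * xt + (1.0 - lam) * z
        half = L * sigma * math.sqrt(
            lam / (2.0 - lam) * (1.0 - (1.0 - lam) ** (2 * t)))
        out.append((z, mu0 - half, mu0 + half,
                    z < mu0 - half or z > mu0 + half))
    return out
```

With lam = 1 the chart reduces to a Shewhart chart on the raw observations; smaller lam values accumulate evidence across time, which is what makes EWMA charts sensitive to the small sustained shifts typical of intrusions.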

Benefits of international agreements as switching diffusions

Sheikh Shahnawaz (California State University)

We formulate a model to consider the dynamic stability of international agreements (IAs) such as those on disarmament, nuclear non-proliferation, the environment, or sovereign debt. An agreement is reached because all participants initially receive some benefit above a minimum threshold level, but the distribution of total benefits X (modeled as a stochastic process that solves a switching stochastic differential equation) varies beyond ratification. The agreement is sustained as long as participants receiving higher benefits transfer their surplus to those with slack, but this comes at a cost alpha, which is a homogeneous continuous Markov chain. Under certain assumptions on the uniqueness of the solution of our SDE and on alpha, we derive an optimal strategy that prolongs the life of the IA.

Estimation of Hilbertian varying coefficient models

Hyerim Hong (Seoul National University)

In this paper we discuss the estimation of a fairly general type of varying coefficient model. The model is for a response variable that takes values in a general Hilbert space and allows for various types of additive interaction terms in representing the effects of predictors. It also accommodates both continuous and discrete predictors. We develop a powerful technique of estimating the very general model. Our approach may be used in a variety of situations where one needs to analyze the relation between a set of predictors and a Hilbertian response. We prove the existence of the estimators of the model itself and of its components, and also the convergence of a backfitting algorithm that realizes the estimators. We derive the rates of convergence of the estimators and their asymptotic distributions. We also demonstrate via simulation study that our approach works efficiently, and illustrate its usefulness through a real data application.

Duality for a class of continuous-time reversible Markov models

Freddy Palma (Fundación Universidad de las Américas Puebla)

Using a conditional probability structure, we build transition probabilities that drive appealing classes of reversible Markov processes. The mechanism used in this construction allows one to find a dual Markov process. This kind of duality is then used to compute the predictor operator of one process via its dual. In particular, we identify the duals of some non-conjugate models, namely the $M/M/\infty$ queue model and a simple birth, death and immigration process. Such duality ensures that the computation of the predictor operators can be done via finite sums.

Plenary Thu-3

Blackwell Lecture (Gabor Lugosi)

Conference
7:00 PM — 8:00 PM KST
Local
Jul 22 Thu, 6:00 AM — 7:00 AM EDT

Estimating the mean of a random vector

Gabor Lugosi (ICREA & Pompeu Fabra University)

7
One of the most basic problems in statistics is the estimation of the mean of a random vector based on independent observations. This problem has received renewed attention in the last few years, both from statistical and computational points of view. In this talk we review some recent results on the statistical performance of mean estimators that allow heavy tails and adversarial contamination in the data. In particular, we are interested in estimators that have a near-optimal error in all directions in which the variance of the one-dimensional marginal of the random vector is not too small. The material of this talk is based on a series of joint papers with Shahar Mendelson.
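The multivariate estimators discussed in the talk (in the spirit of median-of-means tournaments) are too involved for a short sketch, but their univariate building block is easy to illustrate. Below is a minimal median-of-means estimator, robust to heavy tails; it is an illustrative sketch, not the Lugosi-Mendelson procedure itself, and all parameter choices are our own:

```python
import numpy as np

def median_of_means(x, n_blocks=10, seed=0):
    """Split the sample into blocks, average within each block, and
    return the median of the block means. Far more robust to heavy
    tails than the plain sample mean."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x, dtype=float))
    blocks = np.array_split(x, n_blocks)
    return float(np.median([b.mean() for b in blocks]))

# Heavy-tailed sample: Student t with 2.5 degrees of freedom, true mean 0
rng = np.random.default_rng(1)
sample = rng.standard_t(df=2.5, size=10_000)
est = median_of_means(sample, n_blocks=20)
```

The number of blocks trades off robustness (more blocks tolerate more outliers) against the variance of each block mean.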

Session Chair

Byeong Uk Park (Seoul National University)

Plenary Thu-4

Tukey Lecture (Sara van de Geer)

Conference
8:00 PM — 9:00 PM KST
Local
Jul 22 Thu, 7:00 AM — 8:00 AM EDT

Max-margin classification and other interpolation methods

Sara van de Geer (Swiss Federal Institute of Technology Zürich)

7
John Tukey writes that detective work is an essential part of statistical analysis (Tukey [1969]). In this talk we discuss methods that do the opposite of detective work: data interpolation. This was often considered forbidden, but then again, statistical paradigms are not to be sanctified. We consider basis pursuit and one-bit compressed sensing. We re-establish the $\ell_{2}$-rates of convergence for noisy basis pursuit of Wojtaszczyk [2010]. For one-bit compressed sensing we study the algorithm of Plan and Vershynin [2013] and re-derive $\ell_{2}$-rates as well. The techniques used also allow us to derive novel results for the max-margin classifier (related to the AdaBoost algorithm) as given in Liang and Sur [2020].

This is joint work with Geoffrey Chinot, Felix Kuchelmeister and Matthias Löffler.

References
T. Liang and P. Sur. A precise high-dimensional asymptotic theory for boosting and minimum-$\ell_1$-norm interpolated classifiers, 2020. arXiv:2002.01586.
Y. Plan and R. Vershynin. One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66(8):1275–1297, 2013.
J. Tukey. Analyzing data: Sanctification or detective work? American Psychologist, 24:83–91, 1969.
P. Wojtaszczyk. Stability and instance optimality for Gaussian measurements in compressed sensing. Foundations of Computational Mathematics, 10(1):1–13, 2010.

Session Chair

Adam Jakubowski (Nicolaus Copernicus University)

Invited 09

Quantum Statistics (Organizer: Cristina Butucea)

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

Estimation of quantum state and quantum channel

Masahito Hayashi (Southern University of Science and Technology)

5
We review results on quantum state estimation using Cramér-Rao-type bounds. Quantum channel estimation exhibits two scalings: the standard quantum limit and the Heisenberg scaling. For the standard quantum limit, we show that an extension of the above method works well. For the Heisenberg scaling, we show that a Fourier transform method works.

Information geometry and local asymptotic normality for quantum Markov processes

Madalin Guta (University of Nottingham)

4
This talk deals with the problem of identifying and estimating dynamical parameters of quantum Markov processes in the input-output formalism. I will discuss several aspects of this problem. The first aspect concerns the structure of the space of identifiable parameters for ergodic dynamics, assuming full access to the output state for arbitrarily long times. I will show that the equivalence classes of indistinguishable parameters are orbits of a Lie group acting on the space of dynamical parameters. The second aspect concerns the information geometric structure on this space. I will show that the space of identifiable parameters carries a Riemannian metric based on the quantum Fisher information of the output. The metric can be computed explicitly in terms of the Markov covariance of certain fluctuation operators. The third aspect concerns the asymptotic statistical structure of the output state. I will show that the output states satisfy local asymptotic normality, i.e., they can be approximated by a Gaussian model constructed from the Markov covariance data.

Optimal adaptive strategies for sequential quantum hypothesis testing

Marco Tomamichel (National University of Singapore)

4
We consider sequential hypothesis testing between two quantum states using adaptive and non-adaptive strategies. In this setting, samples of an unknown state are requested sequentially and a decision to either continue or to accept one of the two hypotheses is made after each test. Under the constraint that the number of samples is bounded, either in expectation or with high probability, we exhibit adaptive strategies that minimize both types of misidentification errors. Namely, we show that these errors decrease exponentially (in the stopping time) with decay rates given by the measured relative entropies between the two states. Moreover, if we allow joint measurements on multiple samples, the rates are increased to the respective quantum relative entropies. We also fully characterize the achievable error exponents for non-adaptive strategies and provide numerical evidence showing that adaptive measurements are necessary to achieve our bounds under some additional assumptions.

Q&A for Invited Session 09

0
This talk does not have an abstract.

Session Chair

Cristina Butucea (École nationale de la statistique et de l'administration économique Paris)

Invited 22

Random Trees (Organizer: Anita Winter)

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

1d Brownian loop soup, Fleming-Viot processes and Bass-Burdzy flow

Elie Aidekon (Fudan University)

5
We describe the connection between these three objects which appear in the problem of conditioning the so-called perturbed reflecting Brownian motion on its occupation field.

Joint work with Yueyun Hu and Zhan Shi.

A new state space of algebraic measure trees for stochastic processes

Wolfgang Löhr (University of Duisburg-Essen)

5
In the talk, I present a new topological space of “continuum” trees, which extends the set of finite graph-theoretic trees to uncountable structures that can be seen as limits of finite trees. Unlike previous approaches, we do not use the graph metric but formalize the tree structure by a ternary operation on the tree, namely the branch-point map. The resulting space of algebraic measure trees has coarser equivalence classes than the older space of metric measure trees, but the topology preserves more of the tree structure in limits, so that it is incomparable to, and not coarser than, the standard topologies on metric measure trees. With the example of the Aldous chain on cladograms, I also illustrate that our new space can be very useful as a state space for stochastic processes in order to obtain path-space diffusion limits of tree-valued Markov chains.

Scaling Limits of critical rank-1 inhomogeneous random graphs

Minmin Wang (University of Sussex)

5
Continuum inhomogeneous random graphs arise as the scaling limits of critical rank-1 inhomogeneous random graphs. They extend the continuum random graph introduced by Addario-Berry, Broutin & Goldschmidt, which appears as the scaling limit of the Erdős-Rényi graph in the critical window. In this talk, we present a construction of these graphs from the Lévy processes without replacement of Aldous & Limic. In particular, this construction reveals a close connection between the clusters of the graphs and Lévy trees, which consists in an isometric embedding of the spanning trees of these clusters into Lévy trees. As a consequence of this construction, we provide near-optimal conditions for the convergence of critical rank-1 inhomogeneous random graphs. We also deduce the Hausdorff dimension and the packing dimension of the limit graphs.

Q&A for Invited Session 22

0
This talk does not have an abstract.

Session Chair

Anita Winter (University of Duisburg-Essen)

Invited 29

High Dimensional Data Inference (Organizer: Florentina Bunea)

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

Minimax rates for derivative-free stochastic optimization with higher order smooth objectives

Alexandre Tsybakov (Center for Research in Economics and Statistics (CREST))

3
We study the problem of finding the minimizer of a function by a sequential exploration of its values, under measurement noise. In the stochastic optimization literature, this problem is known under the name of derivative-free or zero-order optimization. We consider an approximation of the gradient descent algorithm where the gradient is estimated by procedures involving function evaluations in randomized points and a smoothing kernel. We first assume that the objective function is $\beta$-Hölder and $\alpha$-strongly convex with some $\alpha>0$ and $\beta \geq 2$. Under general adversarial noise, we obtain non-asymptotic upper bounds, both for the optimization error and the cumulative regret of the algorithm, as functions of the quadruplet $(T, \alpha, \beta, d)$, where $T$ is the number of queries and $d$ is the dimension of the problem. Furthermore, we establish minimax lower bounds for any sequential search method, implying that the suggested algorithm is nearly optimal in a minimax sense in terms of sample complexity and the problem parameters. Based on similar ideas, we solve several other problems. In particular, we propose an algorithm achieving almost sharp oracle behavior for the problem of estimating the minimum value of the function. Next, we extend the results to zero-order distributed optimization, where the aim is to minimize the average of local objectives associated with different nodes in a graph, with an exchange of information permitted only between neighboring nodes. Finally, we apply similar ideas to the problem of non-convex optimization.

The talk is based on a joint work with Arya Akhavan and Massimiliano Pontil.
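As a toy illustration of the zero-order setting (a simplified two-point estimator, not the kernel-smoothed procedure analyzed in the talk), one can estimate the gradient from two function evaluations at randomized points and plug it into gradient descent. The objective, step sizes and seeds below are illustrative choices:

```python
import numpy as np

def zo_gradient(f, x, h=1e-2, rng=None):
    """Two-point zero-order gradient estimate: query f at the two
    randomized points x +/- h*z, with z uniform on the unit sphere.
    (Kernel-smoothed versions refine this to exploit higher-order
    Holder smoothness of f.)"""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(len(x))
    z /= np.linalg.norm(z)
    return len(x) * (f(x + h * z) - f(x - h * z)) / (2 * h) * z

def zo_descent(f, x0, steps=2000, lr=0.05, h=1e-2, seed=0):
    """Gradient descent driven purely by function evaluations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * zo_gradient(f, x, h=h, rng=rng)
    return x

# Strongly convex test objective with minimizer at (1, -2, 3)
target = np.array([1.0, -2.0, 3.0])
f = lambda x: float(np.sum((x - target) ** 2))
x_hat = zo_descent(f, np.zeros(3))
```

Each iteration costs two queries of f, so T queries allow T/2 descent steps, which is how the query budget enters the regret bounds.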

High-dimensional, multiscale online changepoint detection

Richard Samworth (University of Cambridge)

4
We introduce a new method for high-dimensional, online changepoint detection in settings where a $p$-variate Gaussian data stream may undergo a change in mean. The procedure works by performing likelihood ratio tests against simple alternatives of different scales in each coordinate, and then aggregating test statistics across scales and coordinates. The algorithm is online in the sense that both its storage requirements and worst-case computational complexity per new observation are independent of the number of previous observations. We prove that the patience, or average run length under the null, of our procedure is at least at the desired nominal level, and provide guarantees on its response delay under the alternative that depend on the sparsity of the vector of mean change. Numerical results confirm the practical effectiveness of our proposal, which is implemented in the R package 'ocd'.
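A minimal caricature of the multiscale idea (not the actual 'ocd' algorithm, which maintains its statistics recursively) aggregates standardized window means across coordinates and a grid of scales; the scales, threshold and simulation setup below are illustrative:

```python
import numpy as np
from collections import deque

def online_detector(stream, scales=(4, 16, 64), thresh=6.0):
    """Declare a mean change in a p-variate stream of (pre-change)
    standard Gaussians when, for some coordinate and some window
    length s, the standardized window mean sqrt(s)*|mean| exceeds a
    threshold. Storage is O(p * max(scales)), independent of the
    number of past observations."""
    buf = deque(maxlen=max(scales))
    for t, x in enumerate(stream):
        buf.append(np.asarray(x, dtype=float))
        for s in scales:
            if len(buf) >= s:
                window = np.stack(list(buf)[-s:])
                if np.sqrt(s) * np.abs(window.mean(axis=0)).max() > thresh:
                    return t  # declaration time
    return None  # no change declared

rng = np.random.default_rng(0)
p, change_at = 50, 300
pre = rng.standard_normal((change_at, p))
post = rng.standard_normal((200, p))
post[:, 0] += 2.0  # sparse mean shift in a single coordinate
t_hat = online_detector(np.vstack([pre, post]))
```

Small scales react quickly to large, sparse shifts while large scales accumulate evidence for small, dense ones, which is why aggregating over scales is essential.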

Optimal transport and inference for stationary processes

Andrew Nobel (The University of North Carolina at Chapel Hill)

3
Optimal transport ideas have found widespread use in a variety of practical and theoretical statistical problems. In most of these problems the objects under study are fixed and do not possess dynamic structure. However, there are many problems in which the objects of interest are themselves stationary processes, and in these cases it is natural to consider couplings that preserve stationarity. In the talk I will describe several ways in which stationary couplings arise in inference problems for families of processes, how in the Markov setting consideration of transition couplings can lead to fast algorithms for finding optimal couplings, and how optimal couplings may be estimated from data. I will illustrate the potential utility of the Markov case by considering the problem of graph alignment.

Q&A for Invited Session 29

0
This talk does not have an abstract.

Session Chair

Florentina Bunea (Cornell University)

Invited 34

Random Walks on Random Media (Organizer: Alexander Drewitz)

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

Random walk on a barely supercritical branching random walk

Jan Nagel (Technische Universität Dortmund)

5
The motivating question behind this project is how a random walk behaves on a barely supercritical percolation cluster, that is, an infinite percolation cluster when the percolation probability is close to the critical value. As a more tractable model, we approximate the percolation cluster by the embedding of a Galton-Watson tree into the lattice. When the random walk runs on the tree, the embedded process is a random walk on a branching random walk. Now we can consider a barely supercritical branching process conditioned on survival, with survival probability approaching zero. In this setting the tree structure allows a fine analysis of the random walk and we can prove a scaling limit for the embedded process under a nonstandard scaling, when the tree becomes more critical the longer the random walk runs on it. This scaling limit allows us to interpolate between supercritical and critical behavior.

Universality of cutoff for graphs with an added random matching

Perla Sousi (Cambridge University)

4
We establish universality of cutoff for simple random walk on a class of random graphs defined as follows. Given a finite graph $G=(V,E)$ with $|V|$ even, we define a random graph $G^{\prime} = (V, E \cup E^{\prime})$ obtained by picking $E^{\prime}$ to be the (unordered) pairs of a uniformly random perfect matching of $V$. We show that for a sequence of such graphs of diverging sizes and of uniformly bounded degree, if the minimal size of a connected component of the original graphs is at least 3, then the obtained graphs are w.h.p. expanders and the random walk on them exhibits cutoff at a time with an entropic interpretation. This provides a simple generic operation of adding some randomness to a given graph which results in cutoff. The emerging "cutoff at an entropic time" paradigm will be emphasized.
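The graph operation described above is simple to realize in code. The following sketch (function name and edge representation are our own) builds $G^{\prime}$ from a graph and a uniformly random perfect matching:

```python
import random

def add_random_matching(vertices, edges, seed=0):
    """Return E union E', where E' consists of the pairs of a
    uniformly random perfect matching of V (|V| must be even).
    Edges are represented as frozensets of endpoints."""
    rng = random.Random(seed)
    v = list(vertices)
    assert len(v) % 2 == 0, "need an even number of vertices"
    rng.shuffle(v)
    matching = {frozenset(pair) for pair in zip(v[::2], v[1::2])}
    return {frozenset(e) for e in edges} | matching

# Example: a 10-cycle plus a random matching (each degree becomes 2 or 3)
n = 10
cycle = [(i, (i + 1) % n) for i in range(n)]
g_prime = add_random_matching(range(n), cycle)
```

Since a random permutation of V induces a uniform perfect matching, each vertex gains at most one new edge, keeping degrees uniformly bounded as the theorem requires.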

Invariance principle for a random walk among a Poisson field of moving traps

Rongfeng Sun (National University of Singapore)

4
For a random walk among an i.i.d. Bernoulli field of immobile traps on $Z^d$, it is a classic result that conditioned on survival up to time n, the random walk is confined in a ball of radius $R n^{1/(d+2)}$, and the rescaled path converges to a Brownian motion conditioned to stay inside a ball of radius R. When the traps are mobile and perform independent random walks, the only path result so far is that in dimension 1, the random walk (conditioned on survival) is sub-diffusive. We show that in dimension 5 and higher, instead of being confined on a sub-diffusive scale as in the case of immobile traps, the conditioned walk satisfies an invariance principle on the diffusive scale. Our proof is based on the theory of Thermodynamic Formalism.

Joint work with S. Athreya and A. Drewitz.

Q&A for Invited Session 34

0
This talk does not have an abstract.

Session Chair

Alexander Drewitz (Universität zu Köln)

Invited 37

Bernoulli Society New Researcher Award Session (Organizer: Bernoulli Society)

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

Hydrodynamic large deviations of strongly asymmetric interacting particle systems

Li-Cheng Tsai (Rutgers University)

4
We consider the large deviations from the hydrodynamic limit of the Totally Asymmetric Simple Exclusion Process (TASEP), which are related to the entropy production in the inviscid Burgers equation. Here we prove the full large deviation principle. Our method relies on the explicit formula of Matetski, Quastel, and Remenik (2016) for the transition probabilities of the TASEP.

Conformal loop ensembles on Liouville quantum gravity with marked points

Nina Holden (Swiss Federal Institute of Technology Zürich)

3
Liouville quantum gravity (LQG) surfaces are a natural family of random fractal surfaces, while the conformal loop ensemble (CLE) is a random collection of conformally invariant non-crossing loops in the plane. In a joint work with Matthis Lehmkuehler we study CLE-decorated LQG surfaces whose law has been reweighted according to CLE loop nesting statistics around a fixed number of marked points.

Integrability of Schramm-Loewner evolution and Liouville quantum gravity

Xin Sun (University of Pennsylvania)

2
It appears that there are rich integrable structures in Schramm-Loewner evolution and Liouville quantum gravity: many important observables admit exact expressions. In this talk, I will review two major sources of such integrability: conformal field theory and random planar maps decorated with statistical physics models. I will then present a recent work with Morris Ang that proves an integrability result for conformal loop ensembles, analogous to the DOZZ formula in Liouville conformal field theory. Our result is one of a series of results that are proved by blending these two sources of integrability.

Q&A for Invited Session 37

0
This talk does not have an abstract.

Session Chair

Imma Curato (Ulm University)

Organized 08

Rough Path Theory (Organizer: Ilya Chevyrev)

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

Rough path theory and the stochastic Loewner equation

Vlad Margarint (New York University Shanghai)

5
In this talk, I will give an overview of some work carried out at the intersection of Rough Path Theory and Schramm-Loewner Evolution (SLE) theory. Specifically, I will cover a study of the Loewner Differential Equation using Rough Path techniques (and beyond). The Loewner Differential Equation describes the evolution of a family of conformal maps. We rephrase it in terms of (singular) Rough Differential Equations. In this context, it is natural to study questions on the stability and approximation of solutions of this equation. First, I will present a result on the continuity of the dynamics and related objects in a natural parameter that appears in the problem. The first approach is based on Rough Path Theory, and a second approach is based on a constructive method of independent interest: the square-root interpolation of the Brownian driver of the Loewner Differential Equation. In the second part, if time permits, I will present a result on the asymptotic radius of convergence of the Stochastic Taylor approximation of the Loewner Differential Equation, together with numerical simulations of the SLE trace using a novel numerical method: Ninomiya-Victoir (or Strang) splitting.

The first part is based on joint work with Dmitry Belyaev, Terry Lyons, and the second part on a collaboration with James Foster.

Rough path with jumps and its application in homogenization

Huilin Zhang (Fudan University)

6
In this talk I will present recent progress on rough path theory with jumps, consisting of Ito/forward theory and the Stratonovich/Marcus theory. It allows us to handle stochastic differential equations driven by jump noise. As an application, we show how rough paths can be applied in the homogenization of a fast-slow system proposed by Melbourne and Stuart.

This talk is based on works with Chevyrev, Friz, Korepanov and Melbourne.

Probabilistic rough paths

William Salkeld (Universite Cote d'Azur)

4
In this talk, I will explain some of the foundational results for a new regularity structure developed to study interacting systems of equations and their mean-field limits. At the heart of this solution theory is a Taylor expansion using the so-called Lions measure derivative, which quantifies infinitesimal perturbations of probability measures induced by infinitesimal variations in a linear space of random variables.

This talk is based on preprints and ongoing work with my supervisor Francois Delarue at Universite Cote d'Azur.

Transport and continuity equations with (very) rough noise

Nikolas Tapia (Weierstrass Institute / Technische Universität Berlin)

4
We study the solution theory of linear transport equations driven with rough multiplicative noise. We show existence and uniqueness for rough flows driven by an arbitrary geometric rough path, and obtain a rough version of the classical method of characteristics, under a boundedness condition for the vector fields. We also obtain an adjoint RDE for the derivatives of the induced flow. Dually, we show existence and uniqueness for the associated continuity equation.

Rough walks in random environment

Tal Orenshtein (Technische Universität Berlin, Weierstrass Institute for Applied Analysis and Stochastics)

5
Random walk in random environment (RWRE) is a model describing the propagation of heat or the diffusion of matter through a highly irregular medium. The irregularity is expressed locally in the model in terms of a random environment according to which the process evolves randomly in time. In a few fundamental classes the phenomenon of homogenization of the medium takes place. One way this is expressed is the fact that on large scales, the RWRE fluctuates as a Brownian motion with a deterministic covariance matrix given in terms of the (law of the) environment. Rough path theory enables the construction of solutions to SDEs so that the solution map is continuous with respect to the noise. One important application guarantees that if an approximation converges to the noise in the rough path topology, then the SDEs driven by the noise approximations converge, in an appropriate sense, to a well-defined SDE which differs from the original one by a correction term that is explicit in terms of the noise approximation. In this talk we shall present our current program, in which one lifts the RWRE in various classes to the rough path space and shows convergence to an enhanced Brownian motion in the rough path topology. Interestingly, the limiting second level of the lifted RWRE may have a linear correction, called the area anomaly, which we identify. Beyond the immediate application to approximations of SDEs, and potentially to SPDEs, this adds new information on the RWRE limiting path. Time permitting, we shall elaborate on the tools used to tackle these problems.

Based on joint works with Olga Lopusanschi, with Jean-Dominique Deuschel and Nicolas Perkowski, and with Johannes Bäumler, Noam Berger and Martin Slowik.

Q&A for Organized Contributed Session 08

0
This talk does not have an abstract.

Session Chair

Ilya Chevyrev (University of Oxford)

Organized 21

Recent Advances in Statistics (Organizer: Yunjin Choi)

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

Identifiability of additive noise models using conditional variances

Gunwoong Park (University of Seoul)

5
We consider the identifiability assumption of structural equation models (SEMs) in which each variable is determined by an arbitrary function of its parents plus an independent error. It has been shown that linear Gaussian structural equation models are fully identifiable if all error variances are the same or known. In this work, we prove the identifiability of SEMs with both homogeneous and heterogeneous unknown error variances. Our new identifiability assumption exploits not only the error variances but also the edge weights; hence, it is strictly milder than those in prior identifiability results. We further provide a statistically consistent and computationally feasible learning algorithm. We verify through simulations that the proposed algorithm is statistically consistent and computationally feasible in high-dimensional settings, and performs well compared to the state-of-the-art US, GDS, LISTEN, PC, and GES algorithms. We also demonstrate, using real human cell signalling and mathematics exam data, that our algorithm is well-suited to estimating DAG models for multivariate data in comparison to other methods used for continuous data.
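For the special case of equal error variances, the conditional-variance principle can be sketched as follows: the next variable in a causal order is the one with the smallest residual variance after regressing out the variables already ordered. This is an illustrative simplification, not the paper's algorithm for heterogeneous variances:

```python
import numpy as np

def causal_order_by_cond_var(X):
    """Greedy order recovery for a linear SEM with equal error
    variances: repeatedly append the variable whose residual variance,
    after regressing out the already-ordered variables, is smallest."""
    order, remaining = [], list(range(X.shape[1]))
    while remaining:
        resid_var = {}
        for j in remaining:
            if order:
                Z = X[:, order]
                beta, *_ = np.linalg.lstsq(Z, X[:, j], rcond=None)
                resid_var[j] = (X[:, j] - Z @ beta).var()
            else:
                resid_var[j] = X[:, j].var()
        nxt = min(resid_var, key=resid_var.get)
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Chain DAG x0 -> x1 -> x2, all error variances equal to 1
rng = np.random.default_rng(0)
n = 5000
x0 = rng.standard_normal(n)
x1 = 0.8 * x0 + rng.standard_normal(n)
x2 = 0.8 * x1 + rng.standard_normal(n)
X = np.column_stack([x2, x0, x1])    # columns deliberately shuffled
order = causal_order_by_cond_var(X)  # recovers the order x0, x1, x2
```

The intuition is that each variable's variance is its error variance plus the variance inherited from its parents, so conditional variances grow along the causal order.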

Multivariate functional group sparse regression: functional predictor selection

Jun Song (University of North Carolina at Charlotte)

4
In this talk, I will present a method for functional predictor selection and the estimation of smooth functional coefficients simultaneously in a scalar-on-function regression problem under a high-dimensional multivariate functional data setting. In particular, we develop two methods for functional group-sparse regression under a generic Hilbert space of infinite dimension. Then we show the convergence of algorithms and the consistency of the estimation and selection under infinite-dimensional Hilbert spaces. Simulation and fMRI data application will be presented at the end to show the effectiveness of the methods in both the selection and estimation of functional coefficients.

Causal foundations for fair and responsible machine learning

Joshua Loftus (London School of Economics)

6
Recently the social impacts of new data and information technologies started receiving more attention from scholars and the public. Many are concerned that algorithmic decision systems using machine learning or "artificial intelligence" may affect people negatively, especially in ways that reinforce harmful patterns in historic data related to attributes like race, gender, and other ethically or legally important attributes. This talk will briefly survey recent work in the field and then focus on causal modeling as a pathway beyond impossibility results and toward consensus.

Network change point detection

Yi Yu (University of Warwick)

5
This talk will be on three different projects, parametric network change point detection, nonparametric network change point detection and online network change point detection. I will provide theoretically-justified and computationally-efficient change point estimators in these three different scenarios, and discuss the theoretical difficulties in all settings.

Q&A for Organized Contributed Session 21

0
This talk does not have an abstract.

Session Chair

Yunjin Choi (University of Seoul)

Contributed 09

Topics Related to RMT

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

On eigenvalue distributions of large auto-covariance matrices

Wangjun Yuan (The University of Hong Kong)

4
In comparison to Hermitian random matrices, the study of random matrices without Hermitian or unitary structure is more limited. The famous circular law, which states that the eigenvalue empirical measures of random matrices whose entries are i.i.d. centered complex random variables with unit variance converge almost surely to the uniform distribution on the unit disk, was posed in the 1950s. Important breakthroughs were made by Bai, Girko, Tao and Vu, and the conjecture was finally proved by Tao et al. Auto-covariance matrices are important in statistics, especially in high-dimensional time series analysis. Let $(X_j)_{1\le j\le n}$ be the time series observed at times $1\le j\le n$. We study the non-Hermitian matrix $Y(k)=\frac{1}{n}(X_{k+1}X_1^* + \cdots + X_nX_{n-k}^*)$, known as the lag-$k$ auto-covariance matrix of the time series. Recently, a similar model was studied in [1], where the matrix $Z=Y(1)+\frac{1}{n} X_1X_n^*$ was considered. To obtain the limit of the sequence of eigenvalue empirical measures, [1] used the linearization technique, small ball probabilities, as well as the logarithmic potential. Although the two matrices $Y(1)$ and $Z$ differ only by a rank-one matrix, the limiting eigenvalue empirical measure of $Z$ does not imply anything a priori about the asymptotic properties of $Y(1)$, since rank-one perturbations may destroy the limit completely. The linearization technique of [1] fails in our case. We design another auxiliary matrix to obtain a lower bound for the least singular value. This lower bound, together with the effect of small-rank perturbations on the limit of the singular value empirical measures, leads to the limit of the sequence of eigenvalue empirical measures of $Y(k)$. The results can be found in [2].

[1] Arup Bose and Walid Hachem, Smallest singular value and limit eigenvalue distribution of a class of non-Hermitian random matrices with statistical application, J. Multivariate Anal. 2020.
[2] Jianfeng Yao and Wangjun Yuan, On eigenvalue distributions of large auto-covariance, arXiv:2011.09165.
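The matrix $Y(k)$ from the abstract is straightforward to form numerically. The following sketch (with illustrative dimensions) computes its eigenvalues, whose empirical measure is the object of study:

```python
import numpy as np

def lag_k_autocov(X, k):
    """Y(k) = (1/n) * sum_{j=k+1}^{n} X_j X_{j-k}^*, where the columns
    of X are the observations X_1, ..., X_n of a p-dimensional series."""
    p, n = X.shape
    return (X[:, k:] @ X[:, :n - k].conj().T) / n

# Simplest case: i.i.d. standardized complex Gaussian observations
rng = np.random.default_rng(0)
p, n = 200, 1000
X = (rng.standard_normal((p, n)) + 1j * rng.standard_normal((p, n))) / np.sqrt(2)
Y1 = lag_k_autocov(X, k=1)
eigs = np.linalg.eigvals(Y1)  # non-Hermitian: eigenvalues spread over the complex plane
```

Because $Y(k)$ is non-Hermitian, its spectrum lives in the complex plane, which is why tools for the circular law (least singular value bounds, logarithmic potential) enter the analysis.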

Linear spectral statistics of sequential sample covariance matrices

Nina Dörnemann (Ruhr University Bochum)

4
Estimation and testing of a high-dimensional covariance matrix is a fundamental problem of statistical inference with numerous applications in biostatistics, wireless communications and finance. Linear spectral statistics are frequently used to construct tests for various hypotheses. In this work, we consider linear spectral statistics from a sequential point of view. To be precise, we prove that the stochastic process corresponding to a linear spectral statistic of the sequential empirical covariance estimator converges weakly to a non-standard Gaussian process. As an application, we use these results to develop a novel approach for monitoring the sphericity assumption in a high-dimensional framework, even if the dimension of the underlying data is larger than the sample size. Compared to previous contributions in this field, the results of the present work are conceptually different because the sequential parameter used in the definition of the process also appears in the eigenvalues. This “non-linearity” results in a substantially more complicated structure of the problem. In particular, the limiting processes are non-standard Gaussian processes, and the proofs of our results (in particular the proof of tightness) require an extended machinery which has so far not been considered in the literature on linear spectral statistics. As a consequence, we provide a substantial generalization of the classical CLT for linear spectral statistics proven by Bai and Silverstein.
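To fix ideas, a sequential linear spectral statistic can be sketched as follows: the covariance estimator at time fraction t uses only the first floor(t*n) observations, and the statistic sums a test function over its eigenvalues. The normalization by the full sample size n and the choice of test function below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def sequential_lss(X, f=np.log1p, grid=(0.25, 0.5, 0.75, 1.0)):
    """For each fraction t in the grid, form the sequential covariance
    estimator from the first floor(t*n) observations and return the
    linear spectral statistic sum_i f(lambda_i) of its eigenvalues."""
    p, n = X.shape
    stats = {}
    for t in grid:
        m = int(t * n)
        S = X[:, :m] @ X[:, :m].T / n  # sequential estimator, normalized by n
        stats[t] = float(np.sum(f(np.linalg.eigvalsh(S))))
    return stats

rng = np.random.default_rng(0)
p, n = 100, 400   # dimension comparable to the sample size
X = rng.standard_normal((p, n))
lss = sequential_lss(X)   # one statistic per time fraction t
```

The sequential parameter t enters the eigenvalues themselves (through how many observations S contains), which is the "non-linearity" that distinguishes this setting from classical linear spectral statistics.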

Couplings for Andersen dynamics and related piecewise deterministic Markov processes

Nawaf Bou-Rabee (Rutgers University Camden)

3
Piecewise Deterministic Markov Processes (e.g. bouncy particle and zigzag samplers) have recently garnered increased research interest for their potential to rapidly sample high-dimensional probability distributions. However, there remain many open questions and challenges to understanding their mixing time and convergence properties. In this talk, I will highlight recent progress on Andersen dynamics: a PDMP that iterates between Hamiltonian flows and velocity randomizations of randomly selected particles. Various couplings of Andersen dynamics will be used to obtain explicit convergence bounds in a Wasserstein sense. The bounds are dimension free for not necessarily convex potentials with weakly interacting components on a high dimensional torus, and for strongly convex and gradient Lipschitz potentials on a Euclidean product space.

Q&A for Contributed Session 09

0
This talk does not have an abstract.

Session Chair

Kyeongsik Nam (University of California at Los Angeles)

Contributed 11

Topics Related to KPZ Universality

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

Upper tail decay of KPZ models with Brownian initial conditions

Balint Veto (Budapest University of Technology and Economics)

2
We consider the limiting distribution of KPZ growth models with random but not stationary initial conditions introduced by Chhita, Ferrari and Spohn. The one-point distribution of the limit is given in terms of a variational problem. We deduce the right-tail asymptotics of the distribution function, which gives a rigorous proof of, and extends, the results obtained by Meerson and Schmidt.

Bijective matching between q-Whittaker and periodic Schur measures

Matteo Mucciconi (Tokyo Institute of Technology)

4
We report on a combinatorial construction that allows us to relate marginal distributions of the q-Whittaker and the periodic Schur measures. The periodic Schur measure is a generalization of the Schur measure, introduced by Borodin in 2006, that models lozenge tilings of a cylindrical domain. Its free fermionic origin yields a nice mathematical structure, and its correlations are determinantal. The q-Whittaker measure is another generalization of the Schur measure, which has found application in the rigorous description of KPZ models. Since the mathematical properties of q-Whittaker polynomials are much more complicated than those of Schur polynomials, the q-Whittaker measure is a more difficult object to handle. Using a bijective combinatorial approach, we are able to relate the theories of Schur and q-Whittaker polynomials, producing a remarkable correspondence between the two measures. Our arguments pivot on a combination of theories that had not yet been used in integrable probability, including Kirillov-Reshetikhin crystals, Demazure modules, the box-ball system, and the skew RSK correspondence.

The talk is based on collaborations with Takashi Imamura and Tomohiro Sasamoto. Motivations and general ideas of our work are addressed by T. Imamura and applications to probabilistic systems are explained by T. Sasamoto.

A new approach to KPZ models by determinantal and Pfaffian measures

Tomohiro Sasamoto (Tokyo Institute of Technology)

3
Recently we have established bijectively an identity which relates certain sums of q-Whittaker polynomials and skew Schur polynomials. More precisely we have found that a marginal of the q-Whittaker measure is equivalent to that of the periodic Schur measure. This enables us to study various models in the KPZ universality class by using methods associated with determinantal point processes. The main purpose of this talk is to explain this new approach to KPZ models. By imposing a symmetry on the bijection, we have also found an identity which relates sums which include single q-Whittaker polynomials and skew Schur polynomials. This is in fact related to the KPZ models in half-space and our identity allows us to study such models using Pfaffian measures. In particular we will establish the limiting distributions for models in half-space, which have not been achieved by other approaches.

The talk is based on collaborations with Takashi Imamura and Matteo Mucciconi. Motivations and general ideas of our work are addressed by T. Imamura and the bijective proofs of identities are explained by M. Mucciconi.

Q&A for Contributed Session 11

0
This talk does not have an abstract.

Session Chair

Jaehoon Kang (Korea Advanced Institute of Science and Technology (KAIST))

Contributed 21

Dimension Reduction and Model Selection

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

Probabilistic principal curves on Riemannian manifolds

Seungwoo Kang (Seoul National University)

7
This paper studies a new curve fitting approach that is useful for representation and dimension reduction of data on Riemannian manifolds. In this study, we extend the probabilistic formulation of the curve passing through the middle of data on Euclidean space by Tibshirani (1992) to Riemannian symmetric space. To this end, we define a principal curve based on a mixture model for observations and unobserved latent variables, and propose a new algorithm to estimate the principal curve for given data points on Riemannian manifolds using a series of procedures in ‘unrolling, unwrapping, and wrapping’ and EM algorithm. Some properties for justification of the estimation algorithm are further investigated. Results from numerical examples, including several simulation sets on hyperbolic space, sphere, special orthogonal group, and a real data example, demonstrate the promising empirical properties of the proposed probabilistic approach.

The elastic information criterion for multicollinearity detection

Kimon Ntotsis (University of the Aegean)

4
When it comes to factor interpretation, multicollinearity is among the biggest issues that must be surmounted, especially in this new era of Big Data Analytics. Since even moderate multicollinearity can prevent a proper interpretation, special diagnostics must be recommended and implemented for identification purposes. In this work, we propose the Elastic Information Criterion, which is capable of capturing multicollinearity accurately and effectively without factor over-elimination. Its performance is demonstrated in simulated and real numerical studies.

A bidimensional shock model driven by the space-fractional Poisson process

Alessandra Meoli (Università degli Studi di Salerno)

4
We describe a competing risks model within the framework of bivariate random shock models, this being of great interest in reliability theory. Specifically, we assume that a system or an item fails when the sum of shocks of type 1 and of type 2 reaches a random threshold that takes values in the set of natural numbers. The two kinds of shock occur according to a bivariate space-fractional Poisson process, which is a time-changed bivariate homogeneous Poisson process where the time change is an independent stable subordinator. We obtain the failure densities, the survival function and other related quantities. In this way we generalize some results in the literature, which can be recovered when the index of stability characterizing the bivariate space-fractional Poisson process is equal to 1.

Q&A for Contributed Session 21

0
This talk does not have an abstract.

Session Chair

Jisu Kim (Inria Saclay)

Contributed 35

Financial Data Analysis

Conference
9:30 PM — 10:00 PM KST
Local
Jul 22 Thu, 8:30 AM — 9:00 AM EDT

Hedging portfolio for a degenerate market model

Ihsan Demirel (Koç University)

2
The purpose of this talk is to derive the hedging portfolio in a financial market where prices per share are governed by a stochastic equation with a singular volatility matrix. The main mathematical tools of the study are the representation with respect to a minimal martingale and Malliavin calculus for functionals of a degenerate diffusion process, which have been established in recent studies. We use those developments to prove a version of the Haussmann-Bismut-Ocone type representation formula derived for these functionals under an equivalent martingale measure. Consequently, we derive the hedging portfolio as the solution to a system of linear equations. The uniqueness of the solution is achieved by a projection idea that lies at the core of the martingale representation. We apply our result to exotic options, whose value at maturity depends on the prices over the entire time horizon. This work is supported by Tubitak Project No. 118F403.

An optimal combination of proportional - excess of loss reinsurance with random premiums

Suci Sari (Statistics Research Division, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung)

2
In a reinsurance policy, the insurer pays a reinsurance premium to the reinsurer while collecting insurance premiums from its policyholders. This research addresses choosing an optimal reinsurance policy in a more realistic model, namely an individual risk model with random insurance premiums. Here, we assume that the insurance premiums are random and that the reinsurance premium is charged by the expected value principle. We take minimizing the insurer's risk exposure as the optimization criterion, where the risk exposure is measured by a tail risk measure of the insurer's net cost. To illustrate the applicability of our results, we consider an insurance company with two lines of business and derive the optimal reinsurance explicitly for a combination of proportional and excess-of-loss reinsurance.

A novel inventory policy for imperfect items with stock dependent demand rate

Praveen V. P. (University of Calicut)

2
Adoption of a trade credit financing policy is prevalent in inventory management as an important strategy to increase profitability, with the major motives of attracting new customers and avoiding lasting price competition. We revisit an economic order quantity model under conditionally permissible delay in payments: a certain period is fixed for settling the account, during which the supplier charges no interest, while beyond it interest is charged. On the other hand, the retailer can earn interest on the revenue generated during this period. The optimal inventory policy can be managed by explicitly specifying the demand for fresh produce as a function of its freshness expiration date and displayed volume. Shortages are allowed and are partially backlogged. Keeping this scenario in mind, we formulate an inventory policy for imperfect items with permissible delay in payments and expiration dates under a freshness- and stock-dependent demand rate. The formulated model is illustrated through numerical examples to determine its effectiveness. Further, the effects of changes in different inventory parameters are studied by a sensitivity analysis.

Q&A for Contributed Session 35

0
This talk does not have an abstract.

Session Chair

Jae Youn Ahn (Ewha Womans University)

Invited 11

Analysis of Dependent Data (Organizer: Chae Young Lim)

Conference
10:30 PM — 11:00 PM KST
Local
Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Statistical learning with spatially dependent high-dimensional data

Taps Maiti (Michigan State University)

3
The rapid development of information technology is making it possible to collect massive amounts of multidimensional, multimodal data with high dimensionality in diverse fields of science and engineering. New statistical and machine learning methods have been developed continuously to solve challenging problems arising out of these complex systems. This talk will discuss a specific type of statistical learning, namely feature selection and classification, when the features are multidimensional. More specifically, they are spatio-temporal by nature. Various machine learning techniques are suitable for this problem, although their underlying statistical theories are not well established. We start with linear discriminant analysis under spatial dependence, establish its statistical properties, and then connect it to other machine learning tools for flexible data analysis in the context of brain imaging data.

Large-scale spatial data science with ExaGeoStat

Marc Genton (King Abdullah University of Science and Technology (KAUST))

3
Spatial data science aims at analyzing the spatial distributions, patterns, and relationships of data over a predefined geographical region. For decades, the size of most spatial datasets was modest enough to be handled by exact inference. Nowadays, with the explosive increase of data volumes, High-Performance Computing (HPC) can serve as a tool to handle massive datasets for many spatial applications. Big data processing becomes feasible with the availability of parallel processing hardware systems such as shared and distributed memory, multiprocessors and GPU accelerators. In spatial statistics, parallel and distributed computing can alleviate the computational and memory restrictions in large-scale Gaussian process inference and prediction. In this talk, we will describe cutting-edge HPC techniques and their applications in solving large-scale spatial problems with the new software ExaGeoStat.

Multivariate spatio-temporal Hawkes process models of terrorism

Mikyoung Jun (University of Houston)

3
We develop a flexible bivariate spatio-temporal Hawkes process model to analyze patterns of terrorism. Previous applications of point process methods to political violence data have mainly utilized temporal Hawkes process models, neglecting spatial variation in attack patterns. This limits what can be learned from these models, as any effective counter-terrorism strategy requires knowledge of both when and where attacks are likely to occur. Even the work that does exist on spatio-temporal Hawkes processes imposes restrictions on the triggering function that are not well-suited for terrorism data. Therefore, we generalize the structure of the spatio-temporal triggering function considerably, allowing for nonseparability, nonstationarity, and cross-triggering (i.e., across groups). To demonstrate the utility of our model, we analyze two samples of real-world terrorism data: Afghanistan (2002-2013) and Nigeria (2010-2017). Jointly, these two studies demonstrate that our model dramatically outperforms standard Hawkes process models, besting widely-used alternatives in overall model fit and revealing spatio-temporal patterns that are, by construction, masked in those models (e.g., increasing dispersion in cross-triggering over time).
This is joint work with Scott Cook.

Q&A for Invited Session 11

0
This talk does not have an abstract.

Session Chair

Chae Young Lim (Seoul National University)

Invited 19

Randomized Algorithms (Organizer: Devdatt Dubhashi)

Conference
10:30 PM — 11:00 PM KST
Local
Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Is your distribution in shape?

Ronitt Rubinfeld (Massachusetts Institute of Technology)

4
Algorithms for understanding data generated from distributions over large discrete domains are of fundamental importance. We consider the sample complexity of "property testing algorithms" that seek to distinguish whether or not an underlying distribution satisfies basic shape properties. Examples of such properties include convexity, log-concavity, heavy tails, and approximability by $k$-histogram functions. In this talk, we will focus on the property of *monotonicity*, as tools developed for testing monotonicity have proven useful for all of the above properties as well as several others. We say a distribution $p$ is monotone if for any two comparable elements $x \leq y$ in the domain, we have $p(x) \leq p(y)$. For example, for the classic $n$-dimensional hypercube domain, in which domain elements are described via $n$ different features, monotonicity implies that for every element, an increase in the value of one of the features can only increase its probability. We recount the development over the past nearly two decades of *monotonicity testing* algorithms for distributions over various discrete domains, which make no a priori assumptions on the underlying distribution. We study the sample complexity of testing whether a distribution is monotone as a function of the size of the domain, which can vary dramatically depending on the structure of the underlying domain. Not surprisingly, the sample complexity over high-dimensional domains can be much greater than over low-dimensional domains of the same size. Nevertheless, for many important domain structures, including high-dimensional domains, the sample complexity is sublinear in the size of the domain.
In contrast, when no a priori assumptions are made about the distribution, learning the distribution requires sample complexity that is linear in the size of the domain. The techniques used draw on tools from a wide spectrum of areas, including statistics, optimization, combinatorics, and computational complexity theory.
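The definition of monotonicity above can be made concrete with a brute-force check over the hypercube poset. This is only an illustration of the property being tested, with an example distribution of our own choosing; the testers discussed in the talk avoid this exhaustive comparison and use sublinearly many samples instead.

```python
from itertools import product

def is_monotone(p, n):
    """Brute-force check that a distribution p on {0,1}^n is monotone.

    p maps each binary tuple to its probability.  Monotonicity requires
    p(x) <= p(y) whenever x <= y coordinatewise.
    """
    pts = list(product([0, 1], repeat=n))
    for x in pts:
        for y in pts:
            if all(a <= b for a, b in zip(x, y)) and p[x] > p[y] + 1e-12:
                return False
    return True

# Example: probability proportional to the number of ones -> monotone,
# since flipping any feature from 0 to 1 can only increase the weight.
n = 3
w = {x: 1 + sum(x) for x in product([0, 1], repeat=n)}
Z = sum(w.values())
p = {x: w[x] / Z for x in w}
print(is_monotone(p, n))  # True
```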

Beyond independent rounding: strongly Rayleigh distributions and traveling salesperson problem

Shayan Oveis Gharan (University of Washington)

3
Strongly Rayleigh (SR) distributions are a family of probability distributions that generalize product distributions and satisfy the strongest forms of negative dependence. Over the last decade, these distributions have found numerous applications in algorithm design. In this talk I will survey several fundamental properties of SR distributions and some of their applications in going beyond the limits of the independent rounding method for designing approximation algorithms for combinatorial optimization problems.

A survey of dependent randomized rounding

Aravind Srinivasan (University of Maryland, College Park)

3
In dependent randomized rounding, we take a point x inside a given high-dimensional object such as a polytope and “round” it probabilistically to a suitable point y, such as one with integer coordinates. The “dependence” arises from the fact that the dimensions of x are rounded in a carefully-correlated manner, so that some properties hold with probability one or with high probability, while others hold in expectation. We will discuss this methodology and some applications in combinatorial optimization and concentration of measure.
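A minimal sketch of the pairwise style of dependent rounding described above: two fractional coordinates are repeatedly coupled so that the sum of the vector is preserved with probability one while each coordinate is still rounded to its fractional value in expectation. This is an illustrative implementation in that spirit, not the exact procedures from the survey.

```python
import random

def dependent_round(x, rng=random):
    """Round a fractional vector x in [0,1]^n to an almost-integral one.

    Repeatedly pick two fractional coordinates i, j and shift mass between
    them: one of the two becomes integral at each step.  The choice of
    direction is randomized so that E[y_i] = x_i (marginals preserved),
    while sum(y) = sum(x) holds with probability one.
    """
    y = list(x)
    eps = 1e-12
    while True:
        frac = [i for i, v in enumerate(y) if eps < v < 1 - eps]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        a = min(1 - y[i], y[j])   # feasible shift of mass from j to i
        b = min(y[i], 1 - y[j])   # feasible shift of mass from i to j
        if rng.random() < b / (a + b):
            y[i] += a; y[j] -= a
        else:
            y[i] -= b; y[j] += b
    return y

x = [0.5, 0.5, 0.25, 0.75]   # sum = 2, so the output is 0/1-valued
y = dependent_round(x, random.Random(0))
print(y, sum(y))  # with these dyadic inputs the sum 2.0 is preserved exactly
```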

Q&A for Invited Session 19

0
This talk does not have an abstract.

Session Chair

Devdatt Dubhashi (Chalmers University)

Invited 23

Stochastic Partial Differential Equations (Organizer: Leonid Mytnik)

Conference
10:30 PM — 11:00 PM KST
Local
Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Phase analysis for a family of stochastic reaction-diffusion equations

Carl Mueller (University of Rochester)

3
We consider a reaction-diffusion equation, that is, a one-dimensional heat equation with a drift function $V(u)$ and two-parameter white noise with coefficient $\lambda\sigma(u)$, subject to a “nice” initial value and periodic boundary conditions, where $x$ lies in $\mathbb{T}=[-1,1]$. The reaction term $V(u)$ belongs to a large family of functions that includes Fisher-KPP nonlinearities $V(x)=x(1-x)$ as well as Allen-Cahn potentials $V(x)=x(1-x)(1+x)$; the multiplicative nonlinearity $\sigma$ is non-random and Lipschitz continuous, and $\lambda>0$ is a non-random number that measures the strength of the effect of the noise $W$. Our principal finding is that: (i) when $\lambda$ is sufficiently large, the above equation has a unique invariant measure; and (ii) when $\lambda$ is sufficiently small, the collection of all invariant measures is a non-trivial line segment, in particular infinite. This proves an earlier prediction of Zimmerman. Our methods also give information about the structure of these invariant measures.

Regularization by noise for SPDEs and SDEs: a stochastic sewing approach

Oleg Butkovsky (Weierstrass Institute)

5

Stochastic quantization, large N, and mean field limit

Hao Shen (University of Wisconsin-Madison)

3
“Large N problems” in quantum field theory refer to the study of models with N field components for N large. We study these problems using SPDE methods via stochastic quantization. In the SPDE setting this is formulated as mean field problems. In particular we will consider the vector Phi^4 model (i.e. linear sigma model), whose stochastic quantization is a system of N coupled dynamical Phi^4 SPDEs. We discuss a series of results. First, in 2D, we prove mean field limit for these dynamics as N goes to infinity. We also show that the quantum field theory converges to massive Gaussian free field in this limit, in both 2D and 3D. Moreover we prove exact formulae for some correlations of O(N)-invariant observables in the large N limit.

(Joint work with Scott Smith, Rongchan Zhu and Xiangchan Zhu.)

Q&A for Invited Session 23

0
This talk does not have an abstract.

Session Chair

Leonid Mytnik (Israel Institute of Technology)

Invited 26

Pathwise Stochastic Analysis (Organizer: Hendrik Weber)

Conference
10:30 PM — 11:00 PM KST
Local
Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Sig-Wasserstein Generative models to generate realistic synthetic time series

Hao Ni (University College London)

3
Wasserstein generative adversarial networks (WGANs) have been very successful in generating samples from seemingly high-dimensional probability measures. However, these methods struggle to capture the temporal dependence of joint probability distributions induced by time-series data. Moreover, training WGANs is computationally expensive due to the min-max formulation of the loss function. To overcome these challenges, we integrate Wasserstein GANs with a mathematically principled and efficient path feature extraction called the signature of a path. The signature of a path is a graded sequence of statistics that provides a universal and principled description of a stream of data, and its expected value characterises the law of the time-series model. In particular, we develop a new metric, (conditional) Sig-W1, that captures the (conditional) joint law of time-series models, and use it as a discriminator. The signature feature space enables an explicit representation of the proposed discriminators, which alleviates the need for expensive training. We validate our method on both synthetic and empirical datasets, where it achieves superior performance over other state-of-the-art benchmark methods.

This is joint work with Lukasz Szpruch (University of Edinburgh), Magnus Wiese (University of Kaiserslautern), Shujian Liao (UCL), and Baoren Xiao (UCL).
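The "signature of a path" used as the feature extraction above is a graded sequence of iterated integrals. For a piecewise-linear path its first two levels can be computed exactly, segment by segment, via Chen's identity; the following minimal sketch (our own illustration, not the Sig-WGAN codebase) shows the computation.

```python
def signature_level2(path):
    """Level-2 truncated signature of a piecewise-linear path.

    path: list of d-dimensional points.  Returns (S1, S2), where
    S1[i] is the total increment in coordinate i and S2[i][j] is the
    iterated integral of dx_i dx_j.  Each linear segment with increment
    dx contributes dx at level 1 and dx (x) dx / 2 at level 2; segments
    are combined with Chen's identity S2 <- S2 + S1 (x) dx + dx (x) dx / 2.
    """
    d = len(path[0])
    S1 = [0.0] * d
    S2 = [[0.0] * d for _ in range(d)]
    for p, q in zip(path, path[1:]):
        dx = [b - a for a, b in zip(p, q)]
        for i in range(d):
            for j in range(d):
                S2[i][j] += S1[i] * dx[j] + 0.5 * dx[i] * dx[j]
        for i in range(d):
            S1[i] += dx[i]
    return S1, S2

# Two-segment 2D path: right then up.  The antisymmetric part
# S2[0][1] - S2[1][0] is twice the signed (Levy) area of the path.
S1, S2 = signature_level2([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
```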

State space for the 3D stochastic quantisation equation of Yang-Mills

Ilya Chevyrev (University of Edinburgh)

3
In this talk I will present a proposed state space of distributions for the stochastic Yang-Mills equations (SYM) in 3D. I will show how the notion of gauge equivalence extends to this space and how one can construct a Markov process on the space of gauge orbits associated with the SYM. This partly extends a recent construction done in the less singular 2D setting.

Based on a joint work in progress with Ajay Chandra, Martin Hairer, and Hao Shen.

A priori bounds for quasi-linear parabolic equations in the full sub-critical regime

Scott Smith (Chinese Academy of Sciences)

3
We will discuss quasi-linear parabolic equations driven by an additive forcing, in the full sub-critical regime. Our arguments are inspired by Hairer’s regularity structures, however we work with a more parsimonious model indexed by multi-indices rather than trees. Assuming bounds on this model, which is modified in agreement with the concept of algebraic renormalization, we prove local a priori estimates on solutions to the quasi-linear equations modified by the corresponding counter terms.

Q&A for Invited Session 26

0
This talk does not have an abstract.

Session Chair

Hendrik Weber (University of Bath)

Organized 10

Random Conformal Geometry and Related Fields (Organizer: Nam-Gyu Kang)

Conference
10:30 PM — 11:00 PM KST
Local
Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Loewner dynamics for the multiple SLE(0) process

Tom Alberts (The University of Utah)

5
Recently Peltola and Wang introduced the multiple SLE(0) process as the deterministic limit of the random multiple SLE$(\kappa)$ curves as $\kappa$ goes to zero. They prove this result by means of a “small $\kappa$” large deviations principle, but the limiting curves also turn out to have important geometric characterizations that are independent of their relation to SLE$(\kappa)$. In particular, they show that the SLE(0) curves can be generated by a deterministic Loewner evolution driven by multiple points, and the vector field describing the evolution of these points must satisfy a particular system of algebraic equations. We show how to generate solutions to these algebraic equations in two ways: first in terms of the poles and critical points of an associated real rational function, and second via the well-known Calogero-Moser integrable system with particular initial velocities. Although our results are purely deterministic, they are again motivated by taking limits of probabilistic constructions, which I will explain.

Conformal field theory for annulus SLE

Sung-Soo Byun (Seoul National University)

4
In this talk, I will present a constructive conformal field theory generated by central/background charge modifications of Gaussian free fields in a doubly connected domain and outline its connection to annulus SLE theory. Furthermore, I will explain some applications, which include Coulomb gas solutions to the null-vector equation for annulus SLE partition functions and hitting probabilities of level lines of Gaussian free fields.

Convergence of martingale observables in the massive FK-Ising model

S. C. Park (Korea Institute for Advanced Study)

3
We show the convergence of fermionic martingale observables (MO) of the FK-Ising model in massive scaling limit. We generalise, along with a recent work by Chelkak-Izyurov-Mahfouf (2021), the discrete complex analytic machinery developed in Chelkak-Smirnov (2012) for the critical isoradial setup into the massive setting. No assumptions on domain regularity or the direction of massive perturbation are imposed. We then discuss implications on the interface curves and Russo-Seymour-Welsh (RSW) type crossing estimates, as well as ongoing work on the spin model with Chelkak and Wan.

Boundary Minkowski content of multi-force-point SLE$_\kappa(\underline\rho)$ curves

Dapeng Zhan (Michigan State University)

3

Q&A for Organized Contributed Session 10

0
This talk does not have an abstract.

Session Chair

Nam-Gyu Kang (Korea Institute for Advanced Study)

Organized 30

Stochastic Adaptive Optimization Algorithms and their Applications to Neural Networks (Organizer: Miklos Rasonyi & Sotirios Sabanis)

Conference
10:30 PM — 11:00 PM KST
Local
Jul 22 Thu, 9:30 AM — 10:00 AM EDT

An adaptive strong order 1 method for SDEs with discontinuous drift coefficient

Larisa Yaroslavtseva (University of Passau)

3
In recent years, a number of results have been proven in the literature for strong approximation of stochastic differential equations (SDEs) with a drift coefficient that may have discontinuities in space. In many of these results it is assumed that the drift coefficient satisfies piecewise regularity conditions and the diffusion coefficient is Lipschitz continuous and non-degenerate at the discontinuity points of the drift coefficient. For scalar SDEs of that type the best $L_p$-error rate known so far for approximation of the solution at the final time point is 3/4 in terms of the number of evaluations of the driving Brownian motion, and it is achieved by the transformed equidistant quasi-Milstein scheme. Recently in [1] it has been shown that for such SDEs the $L_p$-error rate 3/4 cannot in general be improved by any numerical method based on evaluations of the driving Brownian motion at fixed time points. In this talk we present a numerical method based on sequential evaluations of the driving Brownian motion, which achieves an $L_p$-error rate of at least 1 in terms of the average number of evaluations of the driving Brownian motion for such SDEs.

The talk is based on joint work with Thomas Müller-Gronbach (University of Passau).

References

[1] T. Müller-Gronbach, L. Yaroslavtseva. Sharp lower error bounds for strong approximation of SDEs with discontinuous drift coefficient by coupling of noise. arXiv:2010.00915.
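For context, the fixed-time-point baseline against which the sequential method above is compared can be sketched as a plain equidistant Euler-Maruyama scheme for a scalar SDE with a discontinuous drift. This is only an illustrative baseline (with our own example drift), not the talk's adaptive order-1 method.

```python
import math
import random

def euler_maruyama(b, sigma, x0, T, n, rng):
    """Equidistant Euler-Maruyama scheme for dX = b(X)dt + sigma(X)dW.

    Evaluates the driving Brownian motion only on a fixed grid of n
    points; the talk's method instead chooses evaluation points
    sequentially to improve the L_p rate from 3/4 to at least 1.
    """
    h = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))
        x = x + b(x) * h + sigma(x) * dw
    return x

# Example with a drift that is discontinuous at 0 and additive noise.
b = lambda x: -1.0 if x > 0 else 1.0
sigma = lambda x: 1.0
xT = euler_maruyama(b, sigma, 0.0, 1.0, 1000, random.Random(1))
```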

Nonconvex optimization via TUSLA with discontinuous updating

Ying Zhang (Nanyang Technological University)

3
We study the tamed unadjusted stochastic Langevin algorithm (TUSLA) proposed in Lovas et al. (2021) in the context of nonconvex optimization. In particular, we consider the case where the objective function of the optimization problem has a superlinear and discontinuous stochastic gradient. In such a setting, nonasymptotic error bounds are provided for the TUSLA algorithm in Wasserstein-1 and Wasserstein-2 distances. The latter result enables us to further derive theoretical guarantees for the expected excess risk. Numerical experiments are presented for synthetic examples where popular algorithms, e.g. ADAM, AMSGRAD, RMSProp, and SGD, fail to find the minimizer of the objective functions due to the superlinearity and the discontinuity of the stochastic gradients, while the TUSLA algorithm converges rapidly to the optimal solution. Moreover, an example in transfer learning is provided to illustrate the applicability of the TUSLA algorithm, and its simulation results support our theoretical findings.
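The key mechanism in TUSLA is the taming of the superlinearly growing stochastic gradient, which keeps each update bounded. A minimal single-step sketch follows; the constants (step size, taming exponent r, inverse temperature beta) are illustrative choices of ours, not the tuned values from Lovas et al. (2021).

```python
import math
import random

def tusla_step(theta, grad, lam=1e-2, r=1.0, beta=1e8, rng=random):
    """One tamed unadjusted stochastic Langevin (TUSLA-style) update.

    The gradient is divided by the taming factor
    1 + sqrt(lam) * |theta|^(2r), so even a superlinearly growing
    gradient yields a bounded step; a Gaussian perturbation of
    variance 2*lam/beta is then added.
    """
    norm = math.sqrt(sum(t * t for t in theta))
    tame = 1.0 + math.sqrt(lam) * norm ** (2 * r)
    noise_sd = math.sqrt(2.0 * lam / beta)
    return [t - lam * g / tame + rng.gauss(0.0, noise_sd)
            for t, g in zip(theta, grad)]

# Toy objective f(theta) = theta^4 / 4, whose gradient theta^3 is
# superlinear; the tamed iterates still drift toward the minimizer 0.
theta = [2.0]
rng = random.Random(0)
for _ in range(2000):
    theta = tusla_step(theta, [t ** 3 for t in theta], rng=rng)
```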

Approximation of stochastic equations with irregular drifts

Konstantinos Dareiotis (University of Leeds)

3
In this talk we will discuss the rate of convergence of the Euler scheme for stochastic differential equations with irregular drifts. Our approach relies on regularisation-by-noise techniques and, more specifically, on the recently developed stochastic sewing lemma. The advantages of this approach are numerous and include the derivation of improved (optimal) rates and the treatment of non-Markovian settings. We will consider drifts in Hölder and Sobolev classes, but also drifts that are merely bounded and measurable. The latter is the first, and at the same time optimal, quantification of a convergence theorem of Gyöngy and Krylov.

This talk is based on joint works with Oleg Butkovsky, Khoa Lê, and Máté Gerencsér.

Neural SDEs: deep generative models in the diffusion limit

Maxim Raginsky (University of Illinois at Urbana-Champaign)

3
In deep generative models, the latent variable is generated by a time-inhomogeneous Markov chain, where at each time step we pass the current state through a parametric nonlinear map, such as a feedforward neural net, and add a small independent Gaussian perturbation. In this talk, based on joint work with Belinda Tzen, I will discuss the diffusion limit of such models, where we increase the number of layers while sending the step size and the noise variance to zero. I will first provide a stochastic control formulation of sampling in such generative models. Then I will show how we can quantify the expressiveness of diffusion-based generative models. Specifically, I will prove that one can efficiently sample from a wide class of terminal target distributions by choosing the drift of the latent diffusion from the class of multilayer feedforward neural nets, with the accuracy of sampling measured by the Kullback-Leibler divergence to the target distribution.
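The layer-by-layer construction described above can be sketched directly: each layer applies a residual update plus a small Gaussian perturbation, which is exactly an Euler-Maruyama step of the latent diffusion, so increasing the number of layers recovers the SDE limit. In this sketch a simple closed-form drift stands in for a trained feedforward net; all names and constants are our own illustrations.

```python
import math
import random

def resnet_generator(x0, f, n_layers, sd, rng):
    """Deep generative chain whose diffusion limit is an SDE.

    Each 'layer' applies x <- x + h*f(x) + sqrt(h)*sd*xi with step
    h = 1/n_layers, i.e. an Euler-Maruyama step of
    dX = f(X)dt + sd dW over the unit time interval.  Here f is a
    stand-in for a parametric map such as a feedforward neural net.
    """
    h = 1.0 / n_layers
    x = x0
    for _ in range(n_layers):
        x = x + h * f(x) + math.sqrt(h) * sd * rng.gauss(0.0, 1.0)
    return x

# Toy drift pulling latent samples toward 1.0 (an Ornstein-Uhlenbeck-type
# diffusion), sampled with 200 layers.
sample = resnet_generator(0.0, lambda x: 1.0 - x, n_layers=200, sd=0.1,
                          rng=random.Random(0))
```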

Diffusion approximations and control variates for MCMC

Eric Moulines (Ecole Polytechnique)

4
A new method is introduced for the construction of control variates to reduce the variance of additive functionals of Markov chain Monte Carlo (MCMC) samplers. These control variates are obtained by minimizing the asymptotic variance associated with the Langevin diffusion over a family of functions. To motivate our approach, it is shown that the asymptotic variance of some well-known MCMC algorithms, including the Random Walk Metropolis and the (Metropolis) Unadjusted/Adjusted Langevin Algorithm, is well approximated by that of the Langevin diffusion. When applied to a class of linear control variates, it is established that the variance of the resulting estimators is smaller, for a given computational complexity, than that of the standard Monte Carlo estimator. Several examples of Bayesian inference problems support our findings.
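The Langevin connection above can be illustrated with the classical fact that the generator of the Langevin diffusion, $Lg = g'' + (\log\pi)'g'$, satisfies $E_\pi[Lg]=0$, so $Lg$ is a mean-zero control variate. The following one-dimensional sketch (our own toy example, not the talk's full optimization over a function family) fits the optimal coefficient from the sample itself.

```python
import random

def langevin_cv_estimate(xs):
    """Estimate E[X^2] under pi = N(0,1) with a Langevin control variate.

    With (log pi)'(x) = -x and g(x) = x^2, the generator gives
    h(x) = L g(x) = 2 - 2x^2, which has mean zero under pi.  The
    coefficient theta = Cov(f, h)/Var(h) is fitted from the sample and
    the adjusted estimator averages f - theta*h.  In this toy case h is
    an affine function of f, so the adjustment removes the variance
    essentially completely.
    """
    f = [x * x for x in xs]
    h = [2.0 - 2.0 * x * x for x in xs]
    mf = sum(f) / len(f)
    mh = sum(h) / len(h)
    cov_fh = sum((a - mf) * (b - mh) for a, b in zip(f, h))
    var_h = sum((b - mh) ** 2 for b in h)
    theta = cov_fh / var_h
    adjusted = [fi - theta * hi for fi, hi in zip(f, h)]
    return mf, sum(adjusted) / len(adjusted)

rng = random.Random(0)
xs = [rng.gauss(0.0, 1.0) for _ in range(5000)]
plain, with_cv = langevin_cv_estimate(xs)
# with_cv recovers E[X^2] = 1 up to rounding, while plain carries the
# usual Monte Carlo error.
```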

Q&A for Organized Contributed Session 30

0
This talk does not have an abstract.

Session Chair

Sotirios Sabanis (University of Edinburgh)

Contributed 04

Stochastic Processes and Related Topics

Conference
10:30 PM — 11:00 PM KST
Local
Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Parameter estimation for weakly interacting particle systems and stochastic McKean-Vlasov processes

Louis Sharrock (Imperial College London)

4
In this presentation, we consider the problem of parameter estimation for a fully observed McKean-Vlasov stochastic differential equation (MVSDE), and the associated system of weakly interacting particles. We begin by establishing consistency and asymptotic normality of the offline maximum likelihood estimator (MLE) of the interacting particle system (IPS) in the limit as the number of particles (N) tends to infinity. We then propose a recursive MLE for the MVSDE, which evolves according to a stochastic gradient ascent algorithm on the asymptotic log-likelihood of the IPS. Under suitable assumptions which guarantee exponential ergodicity and uniform-in-time propagation of chaos for the MVSDE and the IPS, we prove that this estimator converges in L1 to the stationary points of the asymptotic log-likelihood of the MVSDE in the joint limit as N and t tend to infinity. Under the additional assumption of global strong concavity, we also demonstrate that our estimator converges in L2 to the unique maximiser of the asymptotic log-likelihood of the MVSDE, and establish an L2 convergence rate. Our results are demonstrated via several numerical examples of practical interest, including a linear mean field model, and a stochastic opinion dynamics model.
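For a linear mean-field model the offline MLE of the interaction strength has a closed form, which makes the setup above easy to illustrate. The sketch below simulates the interacting particle system with Euler steps and evaluates the discretized likelihood-ratio estimator; it is an illustrative special case of our own (the talk's recursive estimator instead updates the parameter online by stochastic gradient ascent).

```python
import math
import random

def simulate_and_estimate(theta, N=50, T=5.0, h=0.01, seed=0):
    """Offline MLE for theta in dX^i = -theta (X^i - Xbar) dt + dW^i.

    Since the drift is linear in theta, the Girsanov log-likelihood is
    quadratic and its maximizer discretizes to
    theta_hat = -sum_i int (X^i - Xbar) dX^i / sum_i int (X^i - Xbar)^2 dt.
    """
    rng = random.Random(seed)
    n = int(T / h)
    X = [rng.gauss(0.0, 1.0) for _ in range(N)]
    num = den = 0.0
    for _ in range(n):
        xbar = sum(X) / N
        dev = [x - xbar for x in X]
        dX = [-theta * d * h + math.sqrt(h) * rng.gauss(0.0, 1.0)
              for d in dev]
        num -= sum(d * dx for d, dx in zip(dev, dX))
        den += sum(d * d for d in dev) * h
        X = [x + dx for x, dx in zip(X, dX)]
    return num / den

theta_hat = simulate_and_estimate(theta=1.0)  # close to the true value 1
```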

CLT for cyclic long-memory processes

Andriy Olenko (La Trobe University)

3
Cyclic long-memory stochastic processes are studied. Cyclic long-memory behavior has attracted increasing attention in recent years due to its importance in finance, hydrology, cosmology, internet modelling, and other applications to data with non-seasonal cyclicity. However, various functionals of cyclic long-memory processes have complex asymptotic behavior that has not yet been fully understood and investigated. Spectral singularities at non-zero frequencies play an important role in investigating cyclic processes. The publication [1] introduced the generalized filtered method-of-moments approach to simultaneously estimate the singularity location and long-memory parameters. The law of large numbers for the proposed estimators was proved. This talk discusses the central limit theorem for these simultaneous estimators. A wide class of Gegenbauer-type semi-parametric models is considered. Asymptotic normality of several functionals of the cyclic long-memory processes is proved. For the case when values of the functionals are outside the feasible region, we propose new adjusted estimators and investigate their properties. It is shown that they have the same asymptotic distributions as the corresponding ones in [1], but are computationally simpler. The methodology includes wavelet transformations as a particular case.

This presentation is based on recent joint results in [2] with A. Ayache, M. Fradon (University of Lille, France) and R. Nanayakkara (La Trobe University, Australia).

[1] Alomari, H.M., Ayache, A., Fradon, M., Olenko, A. Estimation of cyclic long-memory parameters. Scand. J. Statist., 47, 1, 2020, 104-133.

[2] Ayache, A., Fradon, M., Nanayakkara, R., Olenko, A. Asymptotic normality of simultaneous estimators of cyclic long-memory processes. Submitted, 1-30, https://arxiv.org/abs/2011.06229.
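
As a loose numerical analogue (a short-memory AR(2) toy model, not a Gegenbauer-type long-memory process), the location of a non-zero spectral peak can be estimated by maximizing the periodogram; all constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# AR(2) with complex poles at radius r and angle omega0: its spectral
# density has a sharp peak at a non-zero frequency near omega0.
n, r, omega0 = 4096, 0.95, np.pi / 4
a1, a2 = 2 * r * np.cos(omega0), -r**2
x = np.zeros(n)
eps = rng.normal(size=n)
for t in range(2, n):
    x[t] = a1 * x[t - 1] + a2 * x[t - 2] + eps[t]

# Periodogram-based estimate of the peak (singularity) location
freqs = 2 * np.pi * np.arange(n // 2) / n
pgram = np.abs(np.fft.fft(x)[: n // 2]) ** 2 / n
omega_hat = freqs[np.argmax(pgram[1:]) + 1]   # skip the zero frequency
print(omega_hat)                               # close to pi/4
```

In the long-memory Gegenbauer case the periodogram diverges at the singularity rather than merely peaking, which is why the filtered method-of-moments machinery of [1], [2] is needed.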

Q&A for Contributed Session 04

0
This talk does not have an abstract.

Session Chair

Yeonwoo Rho (Michigan Technological University)

Contributed 17

Various Limit Theorems

Conference
10:30 PM — 11:00 PM KST
Local
Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Limit theorems for non-stationary strongly mixing random fields

Cristina Tone (University of Louisville)

3
In applications of statistics to data indexed by location, there is often an apparent lack of both stationarity and independence, but with a reasonable indication of “weak dependence” between data whose locations are “far apart”. This has motivated a large amount of research on the theoretical question of the extent to which central limit theorems hold for non-stationary random fields. Bradley and Tone examined this theoretical question for “arrays of (non-stationary) random fields” under certain mixing assumptions. Our main result presents a central limit theorem for sequences of random fields that satisfy a Lindeberg condition and uniformly satisfy both strong mixing and an upper bound less than 1 on $\rho^{\prime}(\cdot, 1)$, in the absence of stationarity. There is no requirement of either a mixing rate assumption or the existence of moments of order higher than two. The additional assumption of a uniform upper bound less than 1 for $\rho^{\prime}(\cdot, 1)$ cannot simply be deleted altogether from the theorem, even in the case of strict stationarity.

On the law of the iterated logarithm and strong invariance principles in stochastic geometry

Johannes Krebs (Heidelberg University)

3

Functional limit theorems for U-statistics

Mikolaj Kasprzak (University of Luxembourg)

4
I will discuss a number of results obtained through three pieces of joined work with Christian Doebler and Giovanni Peccati. Firstly, I will talk about sequences of U-processes based on symmetric kernels of a fixed order that may depend on the sample size and present analytic sufficient conditions under which they converge to a linear combination of time-changed Brownian Motions. I will show how these conditions may be applied to deduce functional convergence of quadratic estimators in certain non-parametric models. Secondly, I will present quantitative bounds on the rate of functional convergence of vectors of weighted degenerate U-statistics to time-changed Brownian Motion, obtained via Stein’s method of exchangeable pairs. Finally, I will discuss a multivariate functional version of de Jong's CLT, yielding that, given a sequence of vectors of degenerate U-statistics, the corresponding empirical processes on [0,1] weakly converge in the Skorohod space as soon as their fourth cumulants in t=1 vanish asymptotically and a certain strengthening of the Lindeberg-type condition is verified.

Proving Liggett's FCLT via Stein's method

Wasamon Jantai (Oregon State University)

3
In 1990, A. D. Barbour extended Stein's method to approximation by Gaussian processes, including the Brownian bridge. In this work, we rederive Liggett's functional central limit theorem (FCLT) using Barbour's approach.

Q&A for Contributed Session 17

0
This talk does not have an abstract.

Session Chair

Chi Tim Ng (Hang Seng University of Hong Kong)

Contributed 32

Statistical Modeling and Prediction

Conference
10:30 PM — 11:00 PM KST
Local
Jul 22 Thu, 9:30 AM — 10:00 AM EDT

An evolution of the beta regression for non-monotone relations

Gloria Gheno (Ronin Institute)

2
The beta regression is based on the beta distribution or its reparameterizations, which are used to obtain a regression structure on the mean that is much easier to analyze and interpret. This method analyzes data whose values lie in the interval (0, 1), such as rates, proportions, or percentages, and is useful for studying how the explanatory variables affect them. For the mean of the beta regression, scholars have continued to use the traditional link functions of binary regression, i.e. the logit, the probit, and the complementary log-log. In this paper I propose a new link function for the mean parameter of a beta regression which has as particular cases the logit, representing a traditional symmetric link function, and the GEV (generalized extreme value) link, introduced precisely because of its asymmetry. In the simplest form of the beta regression, the inverse of the link function, called the response function, represents the link between the mean and the explanatory variables. In this paper the response function can become non-monotone, and therefore the link function is calculated over intervals. This possibility has never been proposed so far in the literature, although some scholars have found non-monotone relationships between the response variable and its explanatory variables. The parameters of the function are estimated by maximizing the likelihood function, using my modified version of the genetic algorithm. I compare my method with the one proposed by Cribari-Neto, in which the link function is decided a priori, using simulated data, so as to compare which of the two methods comes closest to the true values. My method performs better because it is able to correctly determine the link function with which the data were simulated and to estimate the parameters with less error.
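
For orientation, here is a minimal sketch of beta regression with a fixed logit link, fitted by maximum likelihood; this is the baseline that the abstract generalizes, not the proposed interval-wise link. All names and constants are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

rng = np.random.default_rng(2)

# Simulate: mu = logit^{-1}(b0 + b1*x), y ~ Beta(mu*phi, (1 - mu)*phi)
n, b0, b1, phi = 2000, -0.5, 1.5, 30.0
x = rng.uniform(-1, 1, n)
mu = expit(b0 + b1 * x)
y = rng.beta(mu * phi, (1 - mu) * phi)

def nll(params):
    """Negative log-likelihood in the mean/precision parameterization."""
    c0, c1, log_phi = params
    m = expit(c0 + c1 * x)
    p = np.exp(log_phi)          # precision, kept positive via log scale
    a, b = m * p, (1 - m) * p
    return -np.sum(gammaln(a + b) - gammaln(a) - gammaln(b)
                   + (a - 1) * np.log(y) + (b - 1) * np.log(1 - y))

res = minimize(nll, x0=np.zeros(3), method="BFGS")
print(res.x)                     # approx (b0, b1, log(phi))
```

Swapping the fixed `expit` for a parametric family of response functions (and letting it vary over intervals) is where the paper's contribution enters.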

Robust censored regression with l1-norm regularization

Jad Beyhum (ORSTAT, Katholieke Universiteit Leuven)

2
This paper considers inference in a linear regression model with random right-censoring and outliers. The number of outliers can grow with the sample size while their proportion goes to 0. We propose to penalize the estimator of Stute (1993) by the l1-norm. We derive rates of convergence and establish asymptotic normality. Our estimator has the same asymptotic variance as Stute's estimator in the censored linear model without outliers. Tests and confidence sets can therefore rely on the theory developed by Stute. The outlined procedure is also computationally advantageous: it amounts to solving a convex optimization program. We also propose a second estimator which uses the l1-norm penalized Stute estimator as a first step to detect outliers. It has similar theoretical properties but better performance in finite samples, as assessed by simulations.
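
The following sketch (an illustrative reimplementation, not the paper's code) computes Stute's Kaplan-Meier weights and solves the l1-penalized weighted least squares problem by proximal gradient descent; the data-generating constants and the penalty level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(5)

# Censored linear model: T = X beta + eps; observe Y = min(T, C), d = 1{T <= C}
n, p = 500, 5
beta_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
X = rng.normal(size=(n, p))
T = X @ beta_true + 0.5 * rng.normal(size=n)
C = rng.normal(loc=3.0, scale=1.0, size=n)      # right-censoring times
Y, d = np.minimum(T, C), (T <= C).astype(float)

# Stute (1993) weights: the Kaplan-Meier jump at each uncensored order statistic
order = np.argsort(Y)
Ys, ds, Xs = Y[order], d[order], X[order]
w = np.zeros(n)
surv = 1.0
for i in range(n):
    w[i] = ds[i] * surv / (n - i)
    surv -= w[i]

# l1-penalized weighted least squares, solved by proximal gradient (ISTA)
lam = 0.02
G = Xs.T @ (w[:, None] * Xs)
L = np.linalg.eigvalsh(G).max()                 # Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(1000):
    grad = -Xs.T @ (w * (Ys - Xs @ b))
    z = b - grad / L
    b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
print(b)                                        # close to beta_true
```

The objective is convex, so any proximal or coordinate-descent solver applies; the paper's point is that the resulting estimator stays well-behaved when a vanishing fraction of outliers is present.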

SPLVC modal regression with error-prone linear covariate

Tao Wang (University of California, Riverside)

2
To broaden the scope of existing modal regressions, we in this paper propose two procedures, called the B-splines-based procedure and the stepwise-based procedure, to retrieve the estimates for a semiparametric partially linear varying coefficient (SPLVC) modal regression with an error-prone linear covariate, in which a linear covariate is not observed but an ancillary variable is available. With the B-splines-based procedure, varying coefficients are approximated through B-splines, and a deconvoluting kernel-based objective function is constructed directly. For the stepwise-based procedure, by defining a restricted regression mode via imposing a constrictive condition on the model format, a two-step method is developed in which the varying coefficients are concentrated out by applying the "correction for attenuation" methodology from mean regression to reduce the original model to a parametric modal regression. Consistency and asymptotic properties of the estimators for these two newly proposed procedures are investigated under mild conditions according to the tail behavior of the characteristic function of the error distribution, either an ordinary smooth distribution or a super smooth distribution. Bandwidth selection in theory and in practice is explored. For comparison, we also develop the asymptotic theorems for the SPLVC modal estimators with B-splines approximation without covariate measurement error. Monte Carlo simulations are conducted to examine the finite sample performance of the estimators, and a pseudo data analysis is presented to further illustrate the proposed estimation procedures.

Regularized double machine learning in partially linear models with unobserved confounding

Corinne Emmenegger (Swiss Federal Institute of Technology Zürich)

4
Double machine learning (DML) can be used to estimate the linear coefficient in a partially linear model with confounding variables. However, the standard DML estimator has a two-stage least squares interpretation and may yield overly wide confidence intervals. To address this issue, we present the regularization-selection regsDML method that leads to narrower confidence intervals but preserves coverage guarantees. We rely on DML to estimate nuisance parameters with arbitrary machine learning algorithms and combine it with a regularization and selection scheme. Our regsDML method is fully data driven and optimizes the estimated asymptotic mean squared error of the coefficient estimate. The regsDML estimator can be expected to converge at the parametric rate and to follow an asymptotic Gaussian distribution. Empirical examples demonstrate our theoretical and methodological developments. Software code for the regsDML method is available in the R-package dmlalg.
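
For context, here is a minimal sketch of the standard DML estimator that regsDML builds on: residual-on-residual regression with 2-fold cross-fitting. The model, learners, and constants are illustrative assumptions, and the regularization-selection step of regsDML is not shown.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)

# Partially linear model: Y = theta * D + g(X) + eps, with D = m(X) + v
n, theta = 2000, 2.0
X = rng.normal(size=(n, 3))
g = np.sin(X[:, 0]) + X[:, 1] ** 2
m = np.cos(X[:, 1]) + 0.5 * X[:, 2]
D = m + rng.normal(size=n)
Y = theta * D + g + rng.normal(size=n)

# 2-fold cross-fitting: learn the nuisance regressions E[Y|X], E[D|X] on one
# fold, residualize the other, then regress residuals on residuals.
folds = [np.arange(n) < n // 2, np.arange(n) >= n // 2]
resY, resD = np.empty(n), np.empty(n)
for k in (0, 1):
    train, test = ~folds[k], folds[k]
    for target, res in ((Y, resY), (D, resD)):
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X[train], target[train])
        res[test] = target[test] - model.predict(X[test])

theta_hat = (resD @ resY) / (resD @ resD)
print(theta_hat)   # close to theta = 2
```

Cross-fitting keeps the nuisance estimation error orthogonal to the final regression; regsDML then adds a data-driven regularization and selection step on top of this baseline to narrow the confidence intervals.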


Q&A for Contributed Session 32

0
This talk does not have an abstract.

Session Chair

Eun Jeong Min (Catholic University)
