We study transfer learning for contextual joint assortment-pricing under a multinomial logit choice model with bandit feedback. A seller operates across multiple related markets and observes only posted prices and realized purchases. While data from source markets can accelerate learning in a target market, cross-market differences in customer preferences may introduce systematic bias if pooled indiscriminately. We model heterogeneity through a structured utility shift, where markets share a common contextual utility structure but differ along a sparse set of latent preference coordinates. Building on this, we develop Transfer Joint Assortment-Pricing (TJAP), a bias-aware framework that combines aggregate-then-debias estimation with a UCB-style policy. TJAP constructs two-radius confidence bounds that separately capture statistical uncertainty and transfer-induced bias, uniformly over continuous prices. We establish matching minimax regret bounds of order $\tilde{O}\!\left(d\sqrt{\frac{T}{1+H}} + s_0\sqrt{T}\right)$, revealing a transparent variance-bias tradeoff: transfer accelerates learning along shared preference directions, while heterogeneous components impose an irreducible adaptation cost. Numerical experiments corroborate the theory, showing that TJAP outperforms both target-only learning and naive pooling while remaining robust to cross-market differences.
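To fix ideas, the two-radius construction can be sketched schematically (the notation here is ours, since the abstract does not fix symbols): for a candidate assortment $S$ and price vector $p$ at round $t$, the policy scores an optimistic revenue index

$$\mathrm{UCB}_t(S, p) \;=\; \widehat{R}_t(S, p) \;+\; r_t^{\mathrm{stat}}(S, p) \;+\; r_t^{\mathrm{bias}}(S, p),$$

where $\widehat{R}_t$ is the revenue estimate from the aggregate-then-debias step, $r_t^{\mathrm{stat}}$ shrinks as pooled source and target data accumulate, and $r_t^{\mathrm{bias}}$ bounds the residual error along the $s_0$ heterogeneous coordinates; both radii must hold uniformly over continuous prices.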
Motivated by the EVA 2025 Data Challenge, we address the problem of predicting extreme rainfall in the eastern United States using data from a large ensemble of climate model runs. The challenge focuses on three quantities of interest related to the spatial extent and/or temporal duration of extreme rainfall, each requiring extrapolation. To tackle these questions, we adopt the recently developed geometric framework for extreme-value analysis, offering substantial flexibility for capturing complex extremal dependence structures and enabling extrapolation across the entire multivariate tail. In this work, we focus on the spatial geometric framework for analysing the spatial extent and consider a sampling procedure that retains the temporal information in the data, thereby enabling estimation of the duration of extreme rainfall events. We also account for the non-stationary behaviour, arising from topographical and seasonal effects, that commonly characterises extreme weather events in both space and time. Using diagnostic metrics, we demonstrate that the proposed model is appropriate for inferring extreme events on this dataset and apply it to estimate target quantities of interest.
We establish convergence of the training dynamics of residual neural networks (ResNets) to their joint infinite depth $L$, hidden width $M$, and embedding dimension $D$ limit. Specifically, we consider ResNets with two-layer perceptron blocks in the maximal local feature update (MLU) regime and prove that, after a bounded number of training steps, the error between the ResNet and its large-scale limit is $O(1/L + \sqrt{D/(LM)} + 1/\sqrt{D})$. This error rate is empirically tight when measured in embedding space. For a budget of $P = \Theta(LMD)$ parameters, this yields a convergence rate $O(P^{-1/6})$ for the scalings of $(L, M, D)$ that minimize the bound. Our analysis exploits in an essential way the depth-two structure of residual blocks and applies formally to a broad class of state-of-the-art architectures, including Transformers with bounded key-query dimension. From a technical viewpoint, this work completes the program initiated in the companion paper [Chi25], where it is proved that, for a fixed embedding dimension $D$, the training dynamics converge to a Mean ODE dynamics at rate $O(1/L + \sqrt{D}/\sqrt{LM})$. Here, we study the large-$D$ limit of this Mean ODE model and establish convergence at rate $O(1/\sqrt{D})$, yielding the above bound by a triangle inequality. To handle the rich probabilistic structure of the limit dynamics and obtain one of the first rigorous quantitative convergence results for a DMFT-type limit, we combine the cavity method with propagation of chaos arguments at a functional level on so-called skeleton maps, which express the weight updates as functions of CLT-type sums from the past.
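A quick way to recover the $O(P^{-1/6})$ rate from the stated error bound: equalize the three terms subject to the parameter budget,

$$\frac{1}{L} = \frac{1}{\sqrt{D}} \;\Rightarrow\; L = \sqrt{D}, \qquad \sqrt{\frac{D}{LM}} = \frac{1}{\sqrt{D}} \;\Rightarrow\; M = \frac{D^2}{L} = D^{3/2},$$

so the budget gives $P = LMD = D^{3}$, hence $D = P^{1/3}$ and each balanced term equals $D^{-1/2} = P^{-1/6}$.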
When working with real-world insurance data, practitioners often encounter challenges during the data preparation stage that can undermine the statistical validity and reliability of downstream modeling. This study illustrates that conventional data preparation procedures, such as random train-test partitioning, often yield unreliable and unstable results when confronted with highly imbalanced insurance loss data. To mitigate these limitations, we propose a novel data preparation framework leveraging two recent statistical advancements: support points for representative data splitting to ensure distributional consistency across partitions, and the Chatterjee correlation coefficient for initial, non-parametric feature screening to capture feature relevance and dependence structure. We further integrate these theoretical advances into a unified, efficient framework that also incorporates missing-data handling, and embed this framework within our custom InsurAutoML pipeline. The performance of the proposed approach is evaluated using both simulated datasets and datasets often cited in the academic literature. Our findings demonstrate that incorporating statistically rigorous data preparation methods not only significantly enhances model robustness and interpretability but also substantially reduces computational resource requirements across diverse insurance loss modeling tasks. This work provides a crucial methodological upgrade for achieving reliable results in high-stakes insurance applications.
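As an illustration of the screening ingredient, here is a minimal NumPy sketch of Chatterjee's rank correlation coefficient for a single feature (assuming no ties in the response; the function names and the top-$k$ screening rule are ours, not the paper's):

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's xi coefficient between feature x and response y (no ties in y)."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    order = np.argsort(x)                         # sort the pairs by the feature
    ranks = np.argsort(np.argsort(y[order])) + 1  # ranks of y in that order
    return 1.0 - 3.0 * np.abs(np.diff(ranks)).sum() / (n**2 - 1)

def screen_features(X, y, k):
    """Keep the k features with the largest xi (one plausible screening rule)."""
    scores = np.array([chatterjee_xi(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]
```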
The Highly Adaptive Lasso (HAL) delivers unprecedented guarantees in nonparametric minimum loss estimation under minimal smoothness assumptions, such as dimension-free minimax optimal rates. However, the practical use of HAL has been severely limited by its indicator basis expansion, which grows exponentially and becomes computationally prohibitive in moderate to high dimensions. Existing screening strategies drastically reduce this dimension but lack any theoretical justification. We introduce the Principal Component Highly Adaptive (PC-HA) family of estimators, which for the first time provides a principled and theoretically valid dimension reduction. We establish formal results on the score equations solved by these PC-HA estimators, allowing us to transfer plug-in efficiency and pointwise asymptotic normality results from HAL to the PC-HA estimators under comparable complexity control.
Predicting stress fields in hyperelastic materials with complex microstructures remains challenging for traditional deep learning surrogates, which struggle to capture both sharp stress concentrations and the wide dynamic range of stress magnitudes. Convolutional architectures such as UNet tend to oversmooth high-frequency gradients, while neural operators like DeepONet exhibit spectral bias and underpredict localized extremes. Diffusion models can recover fine-scale structure but often introduce low-frequency amplitude drift, degrading physical scaling. To address these limitations, we propose a hybrid surrogate framework, cDDPM-DeepONet, that decouples stress morphology from magnitude. A conditional denoising diffusion probabilistic model (cDDPM), built on a UNet backbone, generates normalized von Mises stress fields conditioned on geometry and loading. In parallel, a modified DeepONet predicts global scaling parameters (minimum and maximum stress), enabling reconstruction of full-resolution physical stress maps. This separation allows the diffusion model to focus on spatial structure while the operator network corrects global amplitude, mitigating spectral and scaling biases. We evaluate the framework on nonlinear hyperelastic datasets with single and multiple polygonal voids. The proposed model consistently outperforms UNet, DeepONet, and standalone cDDPM baselines, reducing prediction error by one to two orders of magnitude. Spectral analysis shows strong agreement with finite element solutions across all wavenumbers, preserving both global behavior and localized stress concentrations.
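If the normalization is min-max (our assumption; the abstract says only that the DeepONet predicts minimum and maximum stress), the physical field would be recovered as

$$\sigma(x) \;=\; \sigma_{\min} \;+\; \bigl(\sigma_{\max} - \sigma_{\min}\bigr)\,\hat{\sigma}(x),$$

where $\hat{\sigma}$ is the normalized field generated by the cDDPM and $(\sigma_{\min}, \sigma_{\max})$ are the DeepONet's scaling predictions.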
In Structural Health Monitoring (SHM), sensor measurements and derived features such as eigenfrequencies often exhibit systematic daily patterns and can therefore be naturally represented as functional data. Furthermore, these patterns are typically influenced by environmental factors, particularly temperature, which can substantially affect the observed system response. While most existing methods for removing environmental effects assume that confounding influences affect only the mean response, it has been shown that environmental and operational factors may also alter the covariance structure of the residual process. To address this limitation in a functional data monitoring framework, we incorporate so-called covariate-dependent functional principal component analysis (CD-FPCA), which allows eigenfunctions and eigenvalues of the residual process to vary smoothly with covariates such as temperature. The proposed methodology is illustrated using an extended version of the KW51 railway bridge eigenfrequency dataset. This case study suggests that accounting for covariate effects beyond the functional mean can improve the robustness of the monitoring procedure, in particular by reducing environmentally induced (false) alarms under challenging low-temperature conditions.
Estimation of the mean and covariance functions is a fundamental problem in functional data analysis, particularly for discretely observed functional data. In this work, we study a regularization-based framework for estimating the mean and the covariance functions within a reproducing kernel Hilbert space (RKHS) setting. Our approach utilizes a spectral regularization technique under Hölder-type source conditions, allowing for a broad class of regularization schemes and accommodating a wide range of smoothness assumptions on the target functions. Unlike previous works in the literature, our approach does not require the target functions to belong to the underlying RKHS. Convergence rates for the proposed estimators are derived, and optimality is established by obtaining matching minimax lower bounds.
Geostatistics is a branch of statistics concerned with stochastic processes over continuous domains, with Gaussian processes (GPs) providing a flexible and principled modelling framework. However, the high computational cost of simulating or computing likelihoods with GPs limits their scalability to large datasets. This paper introduces the piecewise continuous Gaussian process (PCGP), a new process that retains the rich probabilistic structure of traditional GPs while offering substantial computational efficiency. As will be shown and discussed, existing scalable approaches that define stochastic processes on continuous domains -- such as the nearest-neighbour GP (NNGP) and the radial-neighbour GP (RNGP) -- rely on conditional independence structures that effectively constrain the measurable space on which the processes are defined, which may induce undesirable probabilistic behaviour and compromise their practical applicability, particularly in complex latent GP models. The PCGP mitigates these limitations and provides a theoretically grounded and computationally efficient alternative, as demonstrated through numerical illustrations.
The use of synthetic data to deidentify data and to improve predictive models is well attested. The augmentation of datasets using synthetically generated data is an alluring proposition: in the best case, it generates realistic data \textit{in silico} at a fraction of the cost of authentic data which may be found \textit{in vivo} or \textit{in vitro}. This poses novel epistemic challenges. We contend that synthetic data augmentation is best understood as a novel way of accounting for prior knowledge. In this manuscript, we propose a definition of synthetic distributions and analyze how synthetic data augmentation interplays with standard accounts of maximum likelihood and Bayesian estimation. We observe that the marginal Fisher information contributed by synthetic data processes is subject to fundamental bounds, and enumerate obstacles to the use of synthetic data augmentation to aid in inferential tasks. We then articulate a Bayesian formulation of the way that synthetic data augmentation can be coherently understood, but argue that naive approaches to the specification of the prior are epistemically unjustifiable. This suggests that enhanced scrutiny must be placed on identifying justifiable priors to warrant the use and inclusion of data drawn from specific synthetic distributions. While our analysis shows the challenges and limitations of using synthetic data augmentation to improve upon traditional statistical model reasoning, it does suggest that augmentation is the principal approach by which analysts using outcome reasoning (i.e., using train/test splits to justify the analysis) can constrain an otherwise high-dimensional model space, providing an alternative to trying to encode the constraints into the potentially complex architecture of the algorithm.
Biclustering is a powerful unsupervised learning technique for simultaneously identifying coherent subsets of rows and columns in a data matrix, thus revealing local patterns that may not be apparent in global analyses. However, most biclustering methods are developed for continuous data and are not applicable to binary datasets such as single-nucleotide polymorphism (SNP) or protein-protein interaction (PPI) data. Existing biclustering algorithms for binary data often struggle to recover biclustering patterns under noise, face scalability issues, and/or bias the final results towards biclusters of a particular size or characteristic. We propose a Bayesian method for biclustering binary datasets called Binary Spike-and-Slab Lasso Biclustering (BiSSLB). Our method is robust to noise and allows for overlapping biclusters of various sizes without prior knowledge of the noise level or bicluster characteristics. BiSSLB is based on a logistic matrix factorization model with spike-and-slab priors on the latent spaces. We further incorporate an Indian Buffet Process (IBP) prior to automatically determine the number of biclusters from the data. We develop a novel coordinate ascent algorithm with proximal steps which allows for scalable computation. The performance of our proposed approach is assessed through simulations and two real applications on HapMap SNP and Homo Sapiens PPI data, where BiSSLB is shown to outperform other state-of-the-art binary biclustering methods when the data is very noisy.
Causal representation learning (CRL) aims to learn low-dimensional causal latent variables from high-dimensional observations. While identifiability has been extensively studied for CRL, estimation has been less explored. In this paper, we explore the use of empirical Bayes (EB) to estimate causal representations. In particular, we consider the problem of learning from data from multiple domains, where differences between domains are modeled by interventions in a shared underlying causal model. Multi-domain CRL naturally poses a simultaneous inference problem that EB is designed to tackle. Here, we propose an EB $f$-modeling algorithm that improves the quality of learned causal variables by exploiting invariant structure within and across domains. Specifically, we consider a linear measurement model and interventional priors arising from a shared acyclic SCM. When the graph and intervention targets are known, we develop an EM-style algorithm based on causally structured score matching. We further discuss EB $g$-modeling in the context of existing CRL approaches. In experiments on synthetic data, our proposed method achieves more accurate estimation than other methods for CRL.
A data analysis pipeline is a structured sequence of steps that transforms raw data into meaningful insights by integrating multiple analysis steps. In many practical applications, analytical findings are obtained only after data pass through several data-dependent procedures within such pipelines. In this study, we address the problem of quantifying the statistical reliability of results produced by data analysis pipelines. As a proof of concept, we focus on clustering pipelines that identify cluster structures from complex and heterogeneous data through procedures such as outlier detection, feature selection, and clustering. We propose a novel statistical testing framework to assess the significance of clustering results obtained through these pipelines. The framework, based on selective inference, enables the systematic construction of valid statistical tests for clustering pipelines composed of predefined steps. We prove that the proposed test controls the type I error rate at any nominal level and demonstrate its validity and effectiveness through experiments on synthetic and real datasets.
In the present paper we study the performance of linear denoisers for noisy data of the form $\mathbf{x} + \mathbf{z}$, where $\mathbf{x} \in \mathbb{R}^d$ is the desired data with zero mean and unknown covariance $\mathbf{\Sigma}$, and $\mathbf{z} \sim \mathcal{N}(0, \mathbf{\Sigma}_{\mathbf{z}})$ is additive noise. Since the covariance $\mathbf{\Sigma}$ is not known, the standard Wiener filter cannot be employed for denoising. Instead we assume we are given samples $\mathbf{x}_1,\dots,\mathbf{x}_n \in \mathbb{R}^d$ from the true distribution. A standard approach would then be to estimate $\mathbf{\Sigma}$ from the samples and use it to construct an ``empirical" Wiener filter. However, in this paper, motivated by the denoising step in diffusion models, we take a different approach whereby we train a linear denoiser $\mathbf{W}$ from the data itself. In particular, we synthetically construct noisy samples $\hat{\mathbf{x}}_i$ of the data by injecting the samples with Gaussian noise with covariance $\mathbf{\Sigma}_1 \neq \mathbf{\Sigma}_{\mathbf{z}}$ and find the best $\mathbf{W}$ that approximates $\mathbf{W}\hat{\mathbf{x}}_i \approx \mathbf{x}_i$ in a least-squares sense. In the proportional regime $\frac{n}{d} \rightarrow \kappa > 1$ we use the {\it Convex Gaussian Min-Max Theorem (CGMT)} to analytically find the closed form expression for the generalization error of the denoiser obtained from this process. Using this expression one can optimize over $\mathbf{\Sigma}_1$ to find the best possible denoiser. Our numerical simulations show that our denoiser outperforms the ``empirical" Wiener filter in many scenarios and approaches the optimal Wiener filter as $\kappa\rightarrow\infty$.
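A minimal NumPy sketch of the training construction described above (the covariance choices here are illustrative stand-ins, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 200
Sigma = np.diag(np.linspace(0.5, 2.0, d))                    # stand-in data covariance
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n).T    # d x n clean samples

# Inject synthetic training noise with covariance Sigma_1, a design choice
# that need not match the test-time noise covariance Sigma_z.
Sigma1 = 0.5 * np.eye(d)
Xhat = X + rng.multivariate_normal(np.zeros(d), Sigma1, size=n).T

# Least-squares linear denoiser: W = argmin_W ||W @ Xhat - X||_F^2
W = X @ Xhat.T @ np.linalg.inv(Xhat @ Xhat.T)
```

One can then sweep over Sigma1 and evaluate denoising error on fresh samples corrupted with Sigma_z, which is the optimization over $\mathbf{\Sigma}_1$ that the abstract refers to.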
We investigate Bayesian nonparametric density estimation via orthogonal polynomial expansions in weighted Sobolev spaces. A core challenge is establishing minimax optimal posterior convergence rates, especially for densities on unbounded domains without a strictly positive lower bound. For densities bounded away from zero, we give sufficient conditions under which the framework of \cite{shen2001} applies directly. For densities lacking a positive lower bound, the equivalence between Hellinger and weighted $L_2$-norm distance fails, invalidating the original theory. We propose a novel shifting method that lifts the true density $g_0$ to a sequence of proxy densities $g_{0,n}$. We prove a modified convergence theorem applicable to these shifted densities, preserving the optimal rate. We also construct a Gaussian sieve prior that achieves the minimax rate $\varepsilon_n=n^{-p/(2p+1)}$ for any integer $p\geq1$. Numerical results confirm that our estimator approximates the true density well and validate the theoretical convergence rate.
We prove that finite multivariate Erlang mixture densities with a common rate parameter are dense in the class of probability densities on $\mathbb{R}_{+}^{d}$ that belong to $L^{p}$, for every dimension $d\in\mathbb{N}$ and every $1\le p<\infty$. The argument is constructive: the one-dimensional Szász--Mirakjan--Kantorovich operator yields Erlang mixture approximations, and its tensor product yields multivariate approximants with a common scale. We then obtain several quantitative consequences. These include compact-set uniform approximation bounds and, under local Hölder conditions of order $\alpha\in(0,1]$, rates of order $n^{-\alpha/2}$ as the common scale $1/n$ tends to zero, whole-domain convergence in weighted sup norms, weighted and unweighted $L^{p}$ rates, and explicit rates for finite mixtures indexed by the number of mixture components. In particular, if the approximating density is required to have at most $K$ mixture components, then on fixed compact cubes we obtain an algebraic rate of order $K^{-\alpha/(2d)}$; in global weighted sup norms we obtain the explicit algebraic component-count rate $K^{-\alpha/[2d(2d+\alpha)]}$; and for $1<p<\infty$ we obtain corresponding weighted $L^{p}$ component-count rates. The results strengthen the weak-approximation theory for multivariate Erlang mixture distributions and yield immediate corollaries for broader classes such as product-gamma mixtures. \noindent\textbf{Keywords:} multivariate Erlang mixtures; Erlang distributions; Szász--Mirakjan--Kantorovich operator; density approximation; weighted $L^{p}$ approximation; approximation rates.
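For readers unfamiliar with the operator, the one-dimensional Szász--Mirakjan--Kantorovich construction applied to a density $f$ on $\mathbb{R}_+$ is already an Erlang mixture with common rate $n$:

$$(K_n f)(x) \;=\; \sum_{k=0}^{\infty} \pi_k \,\frac{n^{k+1} x^{k} e^{-nx}}{k!}, \qquad \pi_k = \int_{k/n}^{(k+1)/n} f(t)\,dt,$$

i.e. a mixture of Erlang densities with shape $k+1$ and rate $n$, weighted by the probability mass of $f$ on $[k/n, (k+1)/n)$; the multivariate approximants are tensor products of these operators.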
Motivated by the principle of satisficing in decision-making, we study satisficing regret guarantees for nonstationary $K$-armed bandits. We show that in the general realizable, piecewise-stationary setting with $L$ stationary segments, the optimal regret is $\Theta(L\log T)$ as long as $L\geq 2$. This stands in sharp contrast to the case of $L=1$ (i.e., the stationary setting), where a $T$-independent $\Theta(1)$ satisficing regret is achievable under realizability. In other words, the optimal regret has to scale with $T$ even when only a little nonstationarity is present. A key ingredient in our analysis is a novel Fano-based framework tailored to nonstationary bandits via a \emph{post-interaction reference} construction. This framework strictly extends the classical Fano method for passive estimation as well as recent interactive Fano techniques for stationary bandits. As a complement, we also discuss a special regime in which constant satisficing regret is again possible.
A basic issue in both the teaching and the practice of statistics is the interplay between modelling assumptions and inference performance. The general message conveyed is that stronger assumptions lead to better statistical performance of the relevant estimators, tests and confidence intervals, provided that these assumptions hold. On the other hand, fewer assumptions often lead to safer and more robust methods that are good also outside narrow conditions, but not quite as good as specialist methods that exploit such narrower conditions, if these are fulfilled. This interplay is nicely illustrated in the context of density estimation, where parametric and nonparametric methods can be contrasted. The parametric ones have mean squared errors of size $O(n^{-1})$ in terms of sample size $n$ if the parametric model is right, but are not even consistent outside the model. The nonparametric methods are everywhere consistent and have mean squared errors of size $O(n^{-4/5})$ for broad classes of estimands. The point we are making here is that this picture is not universally true! We show that a simple kernel density estimator can perform better than a directly estimated parametric density on the latter's home turf, for small sample sizes, in the sense of mean integrated squared error. Our main example is that of estimating an unknown normal density. In the process of developing and discussing this somewhat counter-intuitive and half-paradoxical example we touch on several tangential issues of interest, pertaining to exact small-sample analysis of density estimators.
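A small simulation in the spirit of this comparison (a sketch only: which estimator wins depends on the sample size, the bandwidth rule, and the error metric, and the rule-of-thumb KDE below is not the paper's tuned estimator):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 10, 2000
grid = np.linspace(-4, 4, 401)
dx = grid[1] - grid[0]
true_density = stats.norm.pdf(grid)

ise_parametric, ise_kernel = [], []
for _ in range(reps):
    x = rng.standard_normal(n)
    f_par = stats.norm.pdf(grid, x.mean(), x.std(ddof=1))  # plug-in normal fit
    f_kde = stats.gaussian_kde(x)(grid)                    # rule-of-thumb bandwidth
    ise_parametric.append(np.sum((f_par - true_density)**2) * dx)
    ise_kernel.append(np.sum((f_kde - true_density)**2) * dx)

print("MISE parametric:", np.mean(ise_parametric))
print("MISE kernel:   ", np.mean(ise_kernel))
```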
The No-U-Turn Sampler (NUTS) is the computational workhorse of modern Bayesian software libraries, yet its qualitative and quantitative convergence guarantees were established only recently. A significant gap remains in the theoretical comparison of its two main variants: NUTS-mul and NUTS-BPS, which use multinomial sampling and biased progressive sampling, respectively, for index selection. In this paper, we address this gap in three contributions. First, we derive the first necessary conditions for geometric ergodicity for both variants. Second, we establish the first sufficient conditions for geometric ergodicity and ergodicity for NUTS-mul. Third, we obtain the first mixing time result for NUTS-BPS on a standard Gaussian distribution. Our results show that NUTS-mul and NUTS-BPS exhibit nearly identical qualitative behavior, with geometric ergodicity depending on the tail properties of the target distribution. However, they differ quantitatively in their convergence rates. More precisely, when initialized in the typical set of the canonical Gaussian measure, the mixing times of both NUTS-mul and NUTS-BPS scale as $O(d^{1/4})$ up to logarithmic factors, where $d$ denotes the dimension. Nevertheless, the associated constants are strictly smaller for NUTS-BPS.
Recursive partitioning methods provide computationally efficient surrogates for the Wasserstein distance, yet their statistical behavior and their resolution in the small-discrepancy regime remain insufficiently understood. We study Recursive Rank Matching (RRM) as a representative instance of this class under a population-anchored reference. In this setting, we establish consistency and an explicit convergence rate for the anchored empirical RRM under the quadratic cost. We then identify a dominant mismatch mechanism responsible for the loss of resolution in the small-discrepancy regime. Based on this analysis, we introduce Selective Recursive Rank Matching (SRRM), which suppresses the resulting dominant mismatches and yields a higher-fidelity practical surrogate for the Wasserstein distance at moderate additional computational cost.
Sparse functional data arise when measurements are observed infrequently and at irregular time points for each subject, often in the presence of measurement error. These characteristics introduce additional challenges for functional principal component analysis. In this paper, we propose a new approach for extracting functional principal components from such data by combining basis expansion with maximum likelihood estimation. Orthogonality of the estimated eigenfunctions is preserved throughout the optimization using modified Gram-Schmidt orthonormalization. An information criterion is proposed to select both the optimal number of basis functions and the rank of the covariance structure. Principal component scores are subsequently estimated via conditional expectation, enabling accurate reconstruction of the underlying functional trajectories across the full domain despite sparse observations. Simulation studies demonstrate the effectiveness of the proposed method and show that it performs favorably compared with existing approaches. Its practical utility is illustrated through applications to CD4 cell count data from the Multicenter AIDS Cohort Study and somatic cell count data from Irish research dairy cattle. Supplementary materials, including technical details, additional simulation results, and the R package mGSFPCA, are available online.
Although Hamiltonian Monte Carlo (HMC) scales as $O(d^{1/4})$ in dimension, there is a large constant factor determined by the curvature of the target density. This constant factor can be reduced in most cases through preconditioning; the state of the art uses diagonal or dense penalized maximum likelihood estimation of (co)variance based on a sample of warmup draws. These estimates converge slowly in the diagonal case and scale poorly when expanded to the dense case. We propose a more effective estimator based on minimizing the sample Fisher divergence from a linearly transformed density to a standard normal distribution. We present this estimator in three forms: (a) diagonal, (b) dense, and (c) low-rank plus diagonal. Using a collection of 114 models from posteriordb, we demonstrate that the diagonal minimizer of the Fisher divergence outperforms the industry-standard variance-based diagonal estimators used by Stan and PyMC by a median factor of 1.3. The low-rank plus diagonal minimizer of the Fisher divergence outperforms Stan and PyMC's diagonal estimators by a median factor of 4.
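To illustrate the diagonal case: minimizing the sample Fisher divergence of the rescaled density to $N(0, I)$ over per-coordinate scales admits a closed form, sketched below under our reading of the setup (centering conventions and any regularization used in practice are omitted, and the paper's exact estimator may differ):

```python
import numpy as np

def fisher_diagonal_scales(theta, grad):
    """theta: (draws, d) warmup draws; grad: (draws, d) gradients of log density.

    Setting d/dsigma of sum_m (sigma*g_m + (theta_m - mean)/sigma)^2 to zero
    gives sigma_i^2 = sqrt(E[(theta_i - mean_i)^2] / E[g_i^2]) per coordinate.
    """
    x2 = np.mean((theta - theta.mean(axis=0))**2, axis=0)
    g2 = np.mean(grad**2, axis=0)
    return np.sqrt(x2 / g2)   # per-coordinate variance estimates sigma_i^2
```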
Sensitivity analysis for unmeasured confounding in observational studies is commonly based on threshold quantities, such as the Cornfield condition or the E-value, which quantify how strong a confounder must be to explain away an observed association. However, these approaches do not address a fundamental inferential question: how plausible is it that such a confounder exists? In this work, we propose a Bayesian reformulation of Cornfield-type sensitivity analysis in which the strength of unmeasured confounding is treated as a random variable. Within this framework, the E-value is reinterpreted as a threshold, and the central inferential quantity becomes the posterior probability that confounding exceeds this threshold. This transforms sensitivity analysis from a descriptive diagnostic into a probabilistic assessment of robustness. We develop a simple generative model linking observed effect estimates to true causal effects and confounding bias, and we specify prior distributions reflecting plausible confounding mechanisms. The resulting framework yields posterior measures of evidential vulnerability that are directly interpretable and applicable to summary-level data. Illustrations based on empirical case studies show that the proposed approach preserves the interpretability of the E-value while providing a more nuanced and decision-relevant characterization of robustness. More broadly, the framework aligns sensitivity analysis with Bayesian principles of inference under uncertainty, offering a coherent alternative to purely threshold-based reasoning.
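A toy numerical sketch of the shift in perspective (the prior below is hypothetical and purely illustrative; the paper's generative model is richer):

```python
import numpy as np

def e_value(rr):
    """VanderWeele-Ding E-value for an observed risk ratio rr > 1."""
    return rr + np.sqrt(rr * (rr - 1.0))

rng = np.random.default_rng(0)
rr_observed = 1.8
threshold = e_value(rr_observed)

# Hypothetical prior on the overall confounding strength (risk-ratio scale).
confounding = rng.lognormal(mean=0.0, sigma=0.4, size=100_000)

# Instead of reporting the threshold alone, report the probability mass beyond it.
print("E-value:", threshold)
print("P(confounding >= E-value):", (confounding >= threshold).mean())
```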
We study contextual bandits with finitely many actions in which the reward of each arm follows a single-index model with an arm-specific index parameter and an unknown nonparametric link function. We consider a regime in which arms correspond to stable decision options and covariates evolve adaptively under the bandit policy. This setting creates significant statistical challenges: the sampling distribution depends on the allocation rule, observations are dependent over time, and inverse-propensity weighting induces variance inflation. We propose a kernelized $\varepsilon$-greedy algorithm that combines Stein-based estimation of the index parameters with inverse-propensity-weighted kernel ridge regression for the reward functions. This approach enables flexible semiparametric learning while retaining interpretability. Our analysis develops new tools for inference with adaptively collected data. We establish asymptotic normality for the single-index estimator under adaptive sampling, yielding valid confidence regions, and derive a directional functional central limit theorem for the RKHS estimator, which provides asymptotically valid pointwise confidence intervals. The analysis relies on concentration bounds for inverse-weighted Gram matrices together with martingale central limit theorems. We further obtain finite-time regret guarantees, including $\tilde{O}(\sqrt{T})$ rates under common-link Lipschitz conditions, showing that semiparametric structure can be exploited without sacrificing statistical efficiency. These results provide a unified framework for simultaneous learning and inference in single-index contextual bandits.
The topic of Multivariate Time Series Anomaly Detection (MTSAD) has grown rapidly over the past years, with a steady rise in publications and Deep Learning (DL) models becoming the dominant paradigm. To address the lack of systematization in the field, this study introduces a novel and unified taxonomy with eleven dimensions over three parts (Input, Output, and Model) for the categorization of DL-based MTSAD methods. The dimensions were established in a two-fold approach: first, they were derived from a comprehensive analysis of methodological studies; second, insights from review papers were incorporated. Furthermore, the proposed taxonomy was validated using an additional set of recent publications, providing a clear overview of methodological trends in MTSAD. Results reveal a convergence toward Transformer-based architectures and reconstruction and prediction models, setting the foundation for emerging adaptive and generative trends. Building on and complementing existing surveys, this unified taxonomy is designed to accommodate future developments, allowing for new categories or dimensions to be added as the field progresses. This work thus consolidates fragmented knowledge in the field and provides a reference point for future research in MTSAD.
Deep learning models have become the dominant approach for multivariate time series anomaly detection (MTSAD), often reporting substantial performance improvements over classical statistical methods. However, these gains are frequently evaluated under heterogeneous thresholding strategies and evaluation protocols, making fair comparisons difficult. This work revisits OmniAnomaly, a widely used stochastic recurrent model for MTSAD, and systematically compares it with a simple linear baseline based on Principal Component Analysis (PCA) on the Server Machine Dataset (SMD). Both methods are evaluated under identical thresholding and evaluation procedures, with experiments repeated across 100 runs for each of the 28 machines in the dataset. Performance is evaluated using Precision, Recall and F1-score at point-level, with and without point-adjustment, and under different aggregation strategies across machines and runs, with the corresponding standard deviations also reported. The results reveal large variability across machines and show that PCA can achieve performance comparable to OmniAnomaly, and even outperform it when point-adjustment is not applied. These findings question the added value of more complex architectures under current benchmarking practices and highlight the critical role of evaluation methodology in MTSAD research.
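For concreteness, a linear baseline in the spirit of the one used here scores test points by PCA reconstruction error (a sketch; the paper's exact preprocessing and thresholding are not reproduced):

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_anomaly_scores(train, test, var_kept=0.9):
    """Fit PCA on (assumed-normal) training data; score test rows by
    reconstruction error in the original space (higher = more anomalous)."""
    pca = PCA(n_components=var_kept).fit(train)
    reconstruction = pca.inverse_transform(pca.transform(test))
    return np.linalg.norm(test - reconstruction, axis=1)
```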
Distributed lag non-linear models (DLNMs) are a popular approach to flexibly model the effect of time-delayed exposures. Classical DLNMs specify a common exposure-lag-response relationship across geographical areas. However, this relationship might be altered by an effect modifier that differs between spatial units. Although some methods have been proposed to account for effect modification, their applicability is context-dependent. For example, a meta-analysis can account for heterogeneity between groups, but this technique requires sufficiently large study groups. This limitation is particularly relevant when working with count data, where small numbers of events are often encountered. In this paper, we review existing methods that allow for spatial effect modification for count-based outcomes and propose an alternative Bayesian DLNM that accounts for the modifier through flexible interaction effects. Through the use of Laplacian P-splines, we provide a computationally fast estimation procedure by avoiding the use of classical Markov Chain Monte Carlo (MCMC) approaches. The performance of the different methods is evaluated through simulation studies. Moreover, the practical applicability of our proposed method is showcased through a data application, containing daily temperature and mortality count data in 87 Italian cities.
Autoregressive (AR) models remain widely used in time series analysis due to their interpretability, but conventional parameter estimation methods can be computationally expensive and prone to convergence issues. This paper proposes a Neural Network (NN) formulation of AR estimation by embedding the autoregressive structure directly into a feedforward NN, enabling coefficient estimation through backpropagation while preserving interpretability. Simulation experiments on 125,000 synthetic AR(p) time series with short-term dependence (1 <= p <= 5) show that the proposed NN-based method consistently recovers model coefficients for all series, while Conditional Maximum Likelihood (CML) fails to converge in approximately 55% of cases. When both methods converge, estimation accuracy is comparable, with negligible differences in relative error, R^2, and perplexity/likelihood. However, when CML fails, the NN-based approach still provides reliable estimates. In all cases, the NN estimator achieves substantial computational gains, reaching a median speedup of 12.6x and up to 34.2x for higher model orders. Overall, the results demonstrate that gradient-descent NN optimization can provide a fast and efficient alternative for interpretable AR parameter estimation.
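A minimal sketch of the idea, embedding AR(p) as a single linear layer fitted by full-batch gradient descent (the hyperparameters are illustrative; the paper's architecture and optimizer details may differ):

```python
import numpy as np

def fit_ar_nn(x, p, lr=1e-2, epochs=2000):
    """Fit x_t ~ c + sum_j phi_j * x_{t-j} by gradient descent on squared error."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[p - j - 1 : len(x) - j - 1] for j in range(p)])  # lags 1..p
    y = x[p:]
    phi, c = np.zeros(p), 0.0
    for _ in range(epochs):
        err = X @ phi + c - y
        phi -= lr * X.T @ err / len(y)   # backpropagation reduces to this gradient
        c -= lr * err.mean()
    return c, phi
```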
Longitudinal cluster randomized trials (L-CRTs) are increasingly used to evaluate the cost-effectiveness of healthcare interventions across multiple assessment periods, yet design methods for powering these trials remain underdeveloped. Existing methods for cost-effectiveness analyses in cluster settings are limited to simple parallel-arm cluster randomized trials with a single follow-up assessment period. These methods cannot accommodate the complex correlation structures in L-CRTs conducted over multiple periods, which require differentiation between within-period and between-period correlations for both clinical and cost outcomes, as well as between-outcome correlations. Moreover, while substantial methodological advances have been made for the design of L-CRTs with univariate outcomes, none specifically address cost-effectiveness objectives where clinical and cost outcomes must be jointly modeled. We provide a design-stage framework for powering cost-effectiveness L-CRTs across three design variants: parallel-arm, crossover, and stepped wedge designs. We derive closed-form variance expressions for the generalized least squares estimator of the average incremental net monetary benefit under a bivariate linear mixed model. We propose a standardized ceiling ratio that adjusts willingness-to-pay for relative outcome variability to inform optimal design. We then develop local optimal designs that maximize statistical power under known correlation parameters and MaxiMin designs that ensure robust performance across parameter uncertainty for all three design variants. Through a real stepped wedge trial data example, we demonstrate the sample size calculation for testing intervention cost-effectiveness under local optimal and MaxiMin designs.
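For reference, the estimand being powered is the average incremental net monetary benefit: with willingness-to-pay $\lambda$, incremental effectiveness $\Delta E$, and incremental cost $\Delta C$,

$$\mathrm{INMB}(\lambda) \;=\; \lambda\,\Delta E \;-\; \Delta C,$$

with cost-effectiveness declared when $\mathrm{INMB}(\lambda) > 0$; the standardized ceiling ratio described above rescales $\lambda$ by the relative variability of the clinical and cost outcomes.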
Kernel-based multivariate statistical process control (K-MSPC) extends classical monitoring to nonlinear industrial processes. Its performance depends critically on kernel parameters such as lengthscales and variance terms. In current practice these parameters are typically selected by heuristics or deterministic optimisation, and then treated as fixed, despite being inferred from finite and noisy data. This can lead to overconfident control limits and unstable alarm behaviour when the kernel choice is uncertain. This work proposes a probabilistic K-MSPC framework that quantifies and propagates kernel parameter uncertainty to the monitoring statistics. The approach follows a two-stage workflow: (i) deterministic kernel calibration using supervised or unsupervised models, and (ii) Bayesian inference of kernel parameters via Markov chain Monte Carlo. Posterior samples are propagated through kernel Principal Component Analysis to produce probabilistic $T^2$ and squared prediction error control charts, together with uncertainty-aware contribution plots. The framework is evaluated on the Tennessee Eastman Process benchmark. Results show that posterior-mean monitoring often improves fault detection compared to deterministic prior-mean charts for the squared exponential kernel, while credible bands remain narrow in control and widen under faults, reflecting amplified epistemic uncertainty in abnormal regimes. The automatic relevance determination kernel reduces posterior uncertainty and yields performance close to the deterministic baseline, whereas unsupervised calibration produces wider posterior bands but still robust fault detection.
Non-Gaussian statistics are a challenge for data assimilation. Linear methods oversimplify the problem, yet fully nonlinear methods are often too expensive to use in practice. The best solution usually lies between these extremes. Triangular measure transport offers a flexible framework for nonlinear data assimilation. Its success, however, depends on how the map is parametrized. Too much flexibility leads to overfitting; too little misses important structure. To address this balance, we develop an adaptation algorithm that selects a parsimonious parametrization automatically. Our method uses P-spline basis functions and an information criterion as a continuous measure of model complexity. This formulation enables gradient descent and allows efficient, fine-scale adaptation in high-dimensional settings. The resulting algorithm requires no hyperparameter tuning. It adjusts the transport map to the appropriate level of complexity based on the system statistics and ensemble size. We demonstrate its performance in nonlinear, non-Gaussian problems, including a high-dimensional distributed groundwater model.
This paper presents uniform-in-time finite-sample bounds for regularized linear regression with vector-valued outputs and conditionally zero-mean subgaussian noise. By revisiting classical self-normalized martingale arguments, we obtain bounds that apply directly to multi-output regression, unlike most of the prior work. Compared to the state of the art, the new results are more general and yield tighter bounds, even for scalar-valued outputs. The mild assumptions we use allow for unknown dependencies between regressors and past noise terms, typically induced by system dynamics or feedback mechanisms. Therefore, these novel finite-sample bounds can be applied to many affine-in-parameter system identification problems, including the identification of a linear time-invariant system from full-state measurements. These new results may lead to significant improvements in stochastic learning-based controllers for safety-critical applications.
Direct air carbon capture and storage (DACCS) is a promising CO2 removal technology, but its deployment at scale remains speculative, and its technological, economic, and policy-related uncertainties have often been overlooked in mitigation pathways. This paper conducts the first uncertainty quantification and global sensitivity analysis of DACCS across technological, market, financial, and public support drivers, using a detailed-process Integrated Assessment Model and newly developed sensitivity algorithms. We find that DACCS deployment exhibits a fat-tailed distribution: most scenarios show modest technology uptake, but there is a small but non-zero probability (4-6%) of achieving gigaton-scale removals by mid-century. Scaling DACCS to gigaton levels requires subsidies that exceed 200-330 USD/tCO2 and are sustained for decades, resulting in a public support programme of USD 900-3,000 billion. Such an effort pays back by mid-century, but only if accompanied by strong emission reduction policies. These findings highlight the critical role of climate policies in enabling a robust and economically sustainable CO2 removal strategy.
Prediction-powered inference (PPI) is a rapidly growing framework for combining machine learning predictions with a small set of gold-standard labels to conduct valid statistical inference. In this article, I argue that the core estimators underlying PPI are equivalent to well-established estimators from the survey sampling literature dating back to the 1970s. Specifically, the PPI estimator for a population mean is algebraically equivalent to the difference estimator of Cassel et al. (1976), and PPI plus corresponds to the generalized regression (GREG) estimator of Sarndal et al. (2003). Recognizing this equivalence, I consider what part of PPI is inherited from a long-standing literature in statistics, what part is genuinely new, and where inferential claims require care. After introducing the two frameworks and establishing their equivalence, I break down where PPI diverges from model-assisted estimation, including differences in the mode of inference, the role of the unlabeled data pool, and the consequences of differential prediction error for subgroup estimands such as the average treatment effect. I then identify what each framework offers the other: PPI researchers can draw on the survey sampling literature's well-developed theory of calibration, optimal allocation, and design-based diagnostics, while survey sampling researchers can benefit from PPI's extensions to non-standard estimands and its accessible software ecosystem. The article closes with a call for integration between these two communities, motivated by the growing use of large language models as measurement instruments in applied research.
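The equivalence is most transparent for the population mean. With $n$ labeled pairs $(X_i, Y_i)$, $N$ unlabeled covariates $\tilde{X}_i$, and a predictor $f$, the PPI estimator is

$$\hat{\theta}^{\mathrm{PPI}} \;=\; \frac{1}{N}\sum_{i=1}^{N} f(\tilde{X}_i) \;+\; \frac{1}{n}\sum_{i=1}^{n}\bigl(Y_i - f(X_i)\bigr),$$

a prediction mean plus a bias-correcting rectifier, which is precisely the structure of the difference estimator: predictions play the role of auxiliary information and the labeled sample corrects their systematic error.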
The signature is a canonical representation of a multidimensional path over an interval. However, it treats all historical information uniformly, offering no intrinsic mechanism for contextualising the relevance of the past. To address this, we introduce the Exponentially Weighted Signature (EWS), generalising the Exponentially Fading Memory (EFM) signature from diagonal to general bounded linear operators. These operators enable cross-channel coupling at the level of temporal weighting together with richer memory dynamics including oscillatory, growth, and regime-dependent behaviour, while preserving the algebraic strengths of the classical signature. We show that the EWS is the unique solution to a linear controlled differential equation on the tensor algebra, and that it generalises both state-space models and the Laplace and Fourier transforms of the path. The group-like structure of the EWS enables efficient computation and makes the framework amenable to gradient-based learning, with the full semigroup action parametrised by and learned through its generator. We use this framework to empirically demonstrate the expressivity gap between the EWS and both the signature and EFM on two SDE-based regression tasks.
To estimate the causal effect of an intervention, researchers need to identify a control group that represents what might have happened to the treatment group in the absence of that intervention. This is challenging without a randomized experiment and further complicated when few units (possibly only one) are treated. Nevertheless, when data are available on units over time, synthetic control (SC) methods provide an opportunity to construct a valid comparison by differentially weighting control units that did not receive the treatment so that their resulting pre-treatment trajectory is similar to that of the treated unit. The hope is that this weighted ``pseudo-counterfactual'' can serve as a valid counterfactual in the post-treatment time period. Since its origin twenty years ago, SC has been used over 5,000 times in the literature (Web of Science, December 2025), leading to a proliferation of descriptions of the method and guidance on proper usage that is not always accurate and does not always align with what the original developers appear to have intended. As such, a number of accepted pieces of wisdom have arisen: (1) SC is robust to various implementations; (2) covariates are unnecessary; and (3) pre-treatment prediction error should guide model selection. We describe each in detail and conduct simulations that suggest, both for standard and alternative implementations of SC, that these purported truths are not supported by empirical evidence and thus actually represent misconceptions about best practice. Instead of relying on these misconceptions, we offer practical advice for more cautious implementation and interpretation of results.
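The core weighting step is a constrained least-squares problem; here is a minimal outcomes-only sketch (the canonical method also matches on covariates, which is one of the points at issue above):

```python
import numpy as np
from scipy.optimize import minimize

def sc_weights(X0, x1):
    """X0: (T0, J) pre-treatment outcomes of J donors; x1: (T0,) treated unit.
    Solve min ||x1 - X0 @ w||^2 subject to w >= 0 and sum(w) == 1."""
    J = X0.shape[1]
    objective = lambda w: np.sum((x1 - X0 @ w) ** 2)
    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    result = minimize(objective, np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                      constraints=constraints, method="SLSQP")
    return result.x
```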
In the artificial intelligence (AI) age, firms increasingly invest in AI technology innovation to secure competitive advantages. However, the relationship between firms' AI technology innovation and consumer complaints remains insufficiently explored. Drawing on Protection Motivation Theory (PMT), this paper investigates how firms' AI technology innovation influences consumer complaints. Employing a multimethod approach, Study 1 analyzes panel data from S&P 500 firms (N = 2,758 firm-year observations), Study 2 examines user-generated Reddit data (N = 2,033,814 submissions and comments), and Study 3 involves two controlled experiments (N = 410 and N = 500). The results reveal that firms' AI technology innovation significantly increases consumers' threat-related emotions, heightening their complaints. Furthermore, compared to AI process innovation, AI product innovation leads to higher consumer complaints. This paper advances the understanding of consumers' psychological responses to firms' AI innovation and provides practical implications for managing consumer complaints effectively.
Anomaly and failure detection methods are crucial in identifying deviations from normal system operational conditions, which allows for actions to be taken in advance, usually preventing more serious damage. Long-lasting deviations indicate failures, while sudden, isolated changes in the data indicate anomalies. However, in many practical applications, changes in the data do not always represent abnormal system states. Such changes may be recognized incorrectly as failures while being a normal evolution of the system, e.g., reflecting the start of processing for a new product, i.e., a domain shift. Therefore, distinguishing between failures and such ``healthy'' changes in data distribution is critical to ensure the practical robustness of the system. In this paper, we propose a method that not only detects changes in the data distribution and anomalies but also allows us to distinguish between failures and normal domain shifts inherent to a given process. The proposed method consists of a modified Page-Hinkley changepoint detector for identification of the domain shift and possible failures, and supervised domain-adaptation-based algorithms for fast, online anomaly detection. These two are coupled with an explainable artificial intelligence (XAI) component that aims at helping the human operator to differentiate between domain shifts and failures. The method is illustrated by an experiment on a data stream from a steel factory.
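For context, the unmodified Page-Hinkley detector that the method builds on tracks the cumulative deviation of the stream from its running mean (a textbook sketch for upward changes; the paper's modified variant differs):

```python
import numpy as np

def page_hinkley(stream, delta=0.005, lam=50.0):
    """Flag times where the cumulative positive drift exceeds the threshold lam."""
    mean, cum, cum_min, alarms = 0.0, 0.0, 0.0, []
    for t, x in enumerate(stream, start=1):
        mean += (x - mean) / t            # running mean
        cum += x - mean - delta           # cumulative deviation, drift-tolerant
        cum_min = min(cum_min, cum)
        if cum - cum_min > lam:           # alarm on a sustained upward change
            alarms.append(t)
            cum, cum_min = 0.0, 0.0       # reset after the alarm
    return alarms
```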
Online social platforms increasingly rely on crowd-sourced systems to label misleading content at scale, but these systems must both aggregate users' evaluations and decide whose evaluations to trust. To address the latter, many platforms audit users by rewarding agreement with the final aggregate outcome, a design we term consensus-based auditing. We analyze the consequences of this design in X's Community Notes, which in September 2022 adopted consensus-based auditing that ties users' eligibility for participation to agreement with the eventual platform outcome. We find evidence of strategic conformity: minority contributors' evaluations drift toward the majority and their participation share falls on controversial topics, where independent signals matter most. We formalize this mechanism in a behavioral model in which contributors trade off private beliefs against anticipated penalties for disagreement. Motivated by these findings, we propose a two-stage auditing and aggregation algorithm that weights contributors by the stability of their past residuals rather than by agreement with the majority. The method first accounts for differences across content and contributors, and then measures how predictable each contributor's evaluations are relative to the latent-factor model. Contributors whose evaluations are consistently informative receive greater influence in aggregation, even when they disagree with the prevailing consensus. In the Community Notes data, this approach improves out-of-sample predictive performance while avoiding penalization of disagreement.
Adapting Large Language Models in complex technical service domains is constrained by the absence of explicit cognitive chains in human demonstrations and the inherent ambiguity arising from the diversity of valid responses. These limitations severely hinder agents from internalizing latent decision dynamics and generalizing effectively. Moreover, practical adaptation is often impeded by the prohibitive resource and time costs associated with standard training paradigms. To overcome these challenges and guarantee computational efficiency, we propose a lightweight adaptation framework comprising three key contributions. (1) Latent Logic Augmentation: We introduce Planning-Aware Trajectory Modeling and Decision Reasoning Augmentation to bridge the gap between surface-level supervision and latent decision logic. These approaches strengthen the stability of Supervised Fine-Tuning alignment. (2) Robust Noise Reduction: We construct a Multiple Ground Truths dataset through a dual-filtering method to reduce the noise by validating diverse responses, thereby capturing the semantic diversity. (3) Lightweight Adaptation: We design a Hybrid Reward mechanism that fuses an LLM-based judge with a lightweight relevance-based Reranker to distill high-fidelity reward signals while reducing the computational cost compared to standard LLM-as-a-Judge reinforcement learning. Empirical evaluations on real-world Cloud service tasks, conducted across semantically diverse settings, demonstrate that our framework achieves stability and performance gains through Latent Logic Augmentation and Robust Noise Reduction. Concurrently, our Hybrid Reward mechanism achieves alignment comparable to standard LLM-as-a-judge methods with reduced training time, underscoring the practical value for deploying technical service agents.
Contrastive learning methods for time series anomaly detection (TSAD) heavily depend on the quality of negative sample construction. However, existing strategies based on random perturbations or pseudo-anomaly injection often struggle to simultaneously preserve temporal semantic consistency and provide effective decision-boundary supervision. Most existing methods rely on prior anomaly injection, while overlooking the potential of generating hard negatives near the data manifold boundary directly from normal samples themselves. To address this issue, we propose a reconstruction-driven boundary negative generation framework that automatically constructs hard negatives through the reconstruction process of normal samples. Specifically, the method first employs a reconstruction network to capture normal temporal patterns, and then introduces a reinforcement learning strategy to adaptively adjust the optimization update magnitude according to the current reconstruction state. In this way, boundary-shifted samples close to the normal data manifold can be induced along the reconstruction trajectory and further used for subsequent contrastive representation learning. Unlike existing methods that depend on explicit anomaly injection, the proposed framework does not require predefined anomaly patterns, but instead mines more challenging boundary negatives from the model's own learning dynamics. Experimental results show that the proposed method effectively improves anomaly representation learning and achieves competitive detection performance on the datasets considered.
This paper pays tribute to Professor Giovanni Andrea Cornia's lifelong contributions to the measurement of global inequality. We review twelve world and regional databases of the Gini coefficient, illustrate their coverage, overlapping, and data gaps, and analyse the major sources of discrepancy among published Ginis. Merging all databases into a unified collection of over 122,000 observations spanning 222 countries from 1867 to 2024, we document how differences in welfare metrics, reference units, sub-metric definitions, post-survey adjustments, and survey design produce Gini estimates that diverge considerably -- sometimes by as much as 50 percentage points -- for the same country and year. We quantify pairwise cross-database discordance, document the income-consumption Gini gap by region and income group, and discuss the contributions of welfare metric and equivalence scale choices to cross-database dispersion. We extend the analysis with a dedicated discussion of comparability across time and across measurement dimensions, showing how multiple layers of methodological choice interact to make any single Gini figure a product of a complex chain of decisions that are rarely fully disclosed. Our analysis confirms that the choice of welfare metric remains the single most important source of cross-country non-comparability, while sub-metric definitions and equivalence scales introduce further systematic differences that are routinely overlooked in comparative work.
Artificial Intelligence (AI) systems are increasingly prominent in emerging smart cities, yet their reliability remains a critical concern. These systems typically operate through a sequence of interconnected functional stages, where upstream errors may propagate to downstream stages, ultimately affecting overall system reliability. Quantifying such error propagation is essential for accurate modeling of AI system reliability. However, this task is challenging due to: i) data availability: real-world AI system reliability data are often scarce and constrained by privacy concerns; ii) model validity: recurring error events across sequential stages are interdependent, violating the independence assumptions of statistical inference; and iii) computational complexity: AI systems process large volumes of high-speed data, resulting in frequent and complex recurrent error events that are difficult to track and analyze. To address these challenges, this paper leverages a physics-based autonomous vehicle simulation platform with a justifiable error injector to generate high-quality data for AI system reliability analysis. Building on this data, a new reliability modeling framework is developed to explicitly characterize error propagation across stages. Model parameters are estimated using a computationally efficient, theoretically guaranteed composite likelihood expectation-maximization algorithm. Its application to reliability modeling for autonomous vehicle perception systems demonstrates the framework's predictive accuracy and computational efficiency.
Bayesian methods lie at the heart of modern data science, providing powerful scaffolding for estimation in data-constrained settings and for principled quantification and propagation of uncertainty. Yet in many real-world use cases where these methods are deployed, there is a natural need to preserve the privacy of the individuals whose data is being scrutinized. While a number of works have attempted to approach the problem of differentially private Bayesian estimation through either reasoning about the inherent privacy of the posterior distribution or privatizing off-the-shelf Bayesian methods, these works generally do not come with rigorous utility guarantees beyond low-dimensional settings. In fact, even for the prototypical tasks of Gaussian mean estimation and linear regression, it was unknown how close one could get to the Bayes-optimal error with a private algorithm, even in the simplest case where the unknown parameter comes from a Gaussian prior. In this work, we give the first efficient algorithms for both of these problems that achieve mean-squared error $(1+o(1))\mathrm{OPT}$ and additionally show that both tasks exhibit an intriguing computational-statistical gap. For Bayesian mean estimation, we prove that the excess risk achieved by our method is optimal among all efficient algorithms within the low-degree framework, yet is provably worse than what is achievable by an exponential-time algorithm. For linear regression, we prove a qualitatively similar lower bound. Our algorithms draw upon the privacy-to-robustness framework of arXiv:2212.05015, but with the curious twist that to achieve private Bayes-optimal estimation, we need to design sum-of-squares-based robust estimators for inherently non-robust objects like the empirical mean and OLS estimator. Along the way we also add to the sum-of-squares toolkit a new kind of constraint based on short-flat decompositions.
Chain-of-thought reasoning, where language models expend additional computation by producing thinking tokens prior to final responses, has driven significant advances in model capabilities. However, training these reasoning models is extremely costly in terms of both data and compute, as it involves collecting long traces of reasoning behavior from humans or synthetic generators and further post-training the model via reinforcement learning. Are these costs fundamental, or can they be reduced through better algorithmic design? We show that autocurriculum, where the model uses its own performance to decide which problems to focus training on, provably improves upon standard training recipes for both supervised fine-tuning (SFT) and reinforcement learning (RL). For SFT, we show that autocurriculum requires exponentially fewer reasoning demonstrations than non-adaptive fine-tuning, by focusing teacher supervision on prompts where the current model struggles. For RL fine-tuning, autocurriculum decouples the computational cost from the quality of the reference model, reducing the latter to a burn-in cost that is nearly independent of the target accuracy. These improvements arise purely from adaptive data selection, drawing on classical techniques from boosting and learning from counterexamples, and requiring no assumption on the distribution or difficulty of prompts.
The distance from calibration, introduced by Błasiok, Gopalan, Hu, and Nakkiran (STOC 2023), has recently emerged as a central measure of miscalibration for probabilistic predictors. We study the fundamental problems of computing and estimating this quantity, given either an exact description of the data distribution or only sample access to it. We give an efficient algorithm that exactly computes the calibration distance when the distribution has a uniform marginal and noiseless labels, which improves the $O(1/\sqrt{|\mathcal{X}|})$ additive approximation of Qiao and Zheng (COLT 2024) for this special case. Perhaps surprisingly, the problem becomes $\mathsf{NP}$-hard when either of the two assumptions is removed. We extend our algorithm to a polynomial-time approximation scheme for the general case. For the estimation problem, we show that $\Theta(1/\epsilon^3)$ samples are sufficient and necessary for the empirical calibration distance to be upper bounded by the true distance plus $\epsilon$. In contrast, a polynomial dependence on the domain size -- incurred by the learning-based baseline -- is unavoidable for two-sided estimation. Our positive results are based on simple sparsifications of both the distribution and the target predictor, which significantly reduce the search space for computation and lead to stronger concentration for the estimation problem. To prove the hardness results, we introduce new techniques for certifying lower bounds on the calibration distance -- a problem that is hard in general due to its $\textsf{co-NP}$-completeness.
Standard decoding strategies for text generation, including top-k, nucleus sampling, and contrastive search, select tokens based on likelihood, restricting selection to high-probability regions. Human language production operates differently: tokens are chosen for communicative appropriateness rather than statistical frequency. This mismatch creates a truncation blind spot: contextually appropriate but statistically rare tokens remain accessible to humans yet unreachable by likelihood-based decoding. We hypothesize this contributes to the detectability of machine-generated text. Analyzing over 1.8 million texts across eight language models, five decoding strategies, and 53 hyperparameter configurations, we find that 8-18% of human-selected tokens fall outside typical truncation boundaries. Simple classifiers trained on predictability and lexical diversity features achieve high detection rates. Crucially, neither model scale nor architecture correlates strongly with detectability; truncation parameters account for most variance. Configurations achieving low detectability often produce incoherent text, indicating that evading detection and producing natural text are distinct objectives. These findings suggest that detectability stems largely from likelihood-based token selection rather than from model capability alone.
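As a rough illustration of the truncation blind spot (not the authors' exact measurement pipeline), the sketch below checks whether a human-chosen token falls outside a top-$p$ nucleus, assuming access to the model's per-step next-token distribution; `steps`, an iterable of (probability vector, chosen token id) pairs, is hypothetical.

```python
import numpy as np

def outside_nucleus(probs: np.ndarray, chosen: int, top_p: float = 0.95) -> bool:
    """True if the human-chosen token lies outside the top-p (nucleus) set."""
    order = np.argsort(probs)[::-1]                    # token ids, most probable first
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    return chosen not in order[:cutoff]

# Blind-spot rate over a corpus, given (next-token distribution, actual token)
# pairs per step -- `steps` is a hypothetical iterable of such pairs:
# rate = np.mean([outside_nucleus(p, tok) for p, tok in steps])
```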
Decentralized Federated Learning (DFL) remains highly vulnerable to adaptive backdoor attacks designed to bypass traditional passive defense metrics. To address this limitation, we shift the defensive paradigm toward a novel active, interventional auditing framework. First, we establish a dynamical model to characterize the spatiotemporal diffusion of adversarial updates across complex graph topologies. Second, we introduce a suite of proactive auditing metrics: stochastic entropy anomaly, randomized-smoothing Kullback-Leibler divergence, and activation kurtosis. These metrics utilize private probes to stress-test local models, effectively exposing latent backdoors that remain invisible to conventional static detection. Furthermore, we implement a topology-aware defense placement strategy to maximize global aggregation resilience. We provide theoretical guarantees for the system's convergence under co-evolving attack and defense dynamics. Empirical evaluations across diverse architectures demonstrate that our active framework is highly competitive with state-of-the-art defenses in mitigating stealthy, adaptive backdoors while preserving primary task utility.
Reservoir computing is a well-established approach for processing data with much lower complexity than traditional neural networks. Despite two decades of experimental progress, the core properties of reservoir computing (namely separation, robustness, and fading memory) still lack rigorous mathematical foundations. This paper addresses this gap by providing a control-theoretic framework for the analysis of time-delay-based reservoir computers. We introduce formal definitions of the separation property and fading memory in terms of functional norms, and establish their connection to well-known stability notions for time-delay systems, such as incremental input-to-state stability. For a class of linear reservoirs, we derive an explicit lower bound for the separation distance via Fourier analysis, offering a computable criterion for reservoir design. Numerical results on the NARMA10 benchmark and continuous-time system prediction validate the approach with a minimal digital implementation.
Despite the success of reinforcement learning from human feedback (RLHF) in aligning language models, current reward modeling heavily relies on experimental feedback data collected from human annotators under controlled and costly conditions. In this work, we introduce observational reward modeling -- learning reward models with observational user feedback (e.g., clicks, copies, and upvotes) -- as a scalable and cost-effective alternative. We identify two fundamental challenges in this setting: (1) observational feedback is noisy due to annotation errors, causing it to deviate from true user preferences; (2) observational feedback is biased by user preference: users preferentially provide feedback on responses they feel strongly about, which creates a distribution shift between training and inference data. To address these challenges, we propose CausalRM, a causal-theoretic reward modeling framework that aims to learn unbiased reward models from observational feedback. To tackle challenge (1), CausalRM introduces a noise-aware surrogate loss term that is provably equivalent to the primal loss under noise-free conditions by explicitly modeling the annotation error generation process. To tackle challenge (2), CausalRM uses propensity scores -- the probability of a user providing feedback for a given response -- to reweight training samples, yielding a loss function that eliminates user preference bias. Extensive experiments across diverse LLM backbones and benchmark datasets validate that CausalRM effectively learns accurate reward signals from noisy and biased observational feedback and delivers substantial performance improvements on downstream RLHF tasks -- including a 49.2% gain on WildGuardMix and a 32.7% improvement on HarmBench. Code is available on our project website.
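To make the propensity-reweighting idea concrete, here is a hedged sketch (not the paper's exact objective) of an inverse-propensity-weighted pairwise reward loss; the function name and the Bradley-Terry form are illustrative assumptions.

```python
import numpy as np

def ipw_pairwise_loss(r_chosen, r_rejected, propensity, eps=1e-6):
    """Propensity-weighted Bradley-Terry loss (a sketch): pairs on which
    feedback was unlikely to be given are upweighted by the inverse of the
    estimated feedback propensity, counteracting preference-driven selection."""
    w = 1.0 / np.clip(propensity, eps, 1.0)            # inverse-propensity weights
    nll = np.logaddexp(0.0, -(r_chosen - r_rejected))  # -log sigmoid(score gap)
    return float(np.mean(w * nll))
```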
Striking an optimal balance between predictive performance and fairness continues to be a fundamental challenge in machine learning. In this work, we propose a post-processing framework that facilitates fairness-aware prediction by leveraging model ensembling. Designed to operate independently of any specific model internals, our approach is widely applicable across various learning tasks, model architectures, and fairness definitions. Through extensive experiments spanning classification, regression, and survival analysis, we demonstrate that the framework effectively enhances fairness while maintaining, or only minimally affecting, predictive accuracy.
Foundation models are used to extract transferable representations from large amounts of unlabeled data, typically via self-supervised learning (SSL). However, many of these models rely on architectures that offer limited interpretability, which is a critical issue in high-stakes domains such as medical imaging. We propose Dual-IFM, a foundation model that is interpretable-by-design in two ways: First, it provides local interpretability for individual images through class evidence maps that are faithful to the decision-making process. Second, it provides global interpretability for entire datasets through a 2D projection layer that allows for direct visualization of the model's representation space. We trained our model on over 800,000 color fundus photographs from various sources to learn generalizable, interpretable representations for different downstream tasks. Our results show that our model reaches a performance range similar to that of state-of-the-art foundation models with up to $16\times$ the number of parameters, while providing interpretable predictions on out-of-distribution data. Our results suggest that large-scale SSL pretraining paired with inherent interpretability can lead to robust representations for retinal imaging.
Clustered sampling is prevalent in empirical regression discontinuity (RD) designs, but it has not received much attention in the theoretical literature. In this paper, we introduce a general model-based framework for such settings and derive high-level conditions under which the standard local linear RD estimator is asymptotically normal. We verify that our high-level assumptions hold across a wide range of empirical designs, including settings with growing cluster sizes. We further show that clustered standard errors that are currently used in practice can be either inconsistent or overly conservative in finite samples. To address these issues, we propose a novel nearest-neighbor-type variance estimator and illustrate its properties in a diverse set of empirical applications.
Recent advances in drug discovery have demonstrated that incorporating side information (e.g., chemical properties about drugs and genomic information about diseases) often greatly improves prediction performance. However, these side features can vary widely in relevance and are often noisy and high-dimensional. We propose Bayesian Variable Selection-Guided Inductive Matrix Completion (BVSIMC), a new Bayesian model that enables variable selection from side features in drug discovery. By learning sparse latent embeddings, BVSIMC improves both predictive accuracy and interpretability. We validate our method through simulation studies and two drug discovery applications: 1) prediction of drug resistance in Mycobacterium tuberculosis, and 2) prediction of new drug-disease associations in computational drug repositioning. On both synthetic and real data, BVSIMC outperforms several other state-of-the-art methods in terms of prediction. In our two real examples, BVSIMC further reveals the most clinically meaningful side features.
Maximum entropy reinforcement learning motivates agents to explore states and actions to maximize the entropy of some distribution, typically by providing additional intrinsic rewards proportional to that entropy function. In this paper, we study intrinsic rewards proportional to the entropy of the discounted distribution of state-action features visited during future time steps. This approach is motivated by two results. First, we show that the expected sum of these intrinsic rewards is a lower bound on the entropy of the discounted distribution of state-action features visited in trajectories starting from the initial states, which we relate to an alternative maximum entropy objective. Second, we show that the distribution used in the intrinsic reward definition is the fixed point of a contraction operator and can therefore be estimated off-policy. Experiments highlight that the new objective leads to improved visitation of features within individual trajectories, in exchange for slightly reduced visitation of features in expectation over different trajectories, as suggested by the lower bound. It also leads to improved convergence speed for learning exploration-only agents. Control performance remains similar across most methods on the considered benchmarks.
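As a rough, hedged illustration of the quantity driving these intrinsic rewards (not the authors' estimator, which works off-policy via the fixed point of a contraction operator), the sketch below computes the entropy of an empirical discounted distribution over discretized state-action features along a single trajectory; the binning into `n_bins` integer indices is a simplifying assumption.

```python
import numpy as np

def discounted_feature_hist(features, gamma=0.99, n_bins=32):
    """Empirical discounted distribution over discretized features along one
    trajectory; `features` holds integer bin indices of phi(s_t, a_t)."""
    w = gamma ** np.arange(len(features))
    p = np.zeros(n_bins)
    np.add.at(p, features, w)                 # discount-weighted visit counts
    return p / p.sum()

def entropy(p, eps=1e-12):
    return float(-np.sum(p * np.log(p + eps)))

# An entropy-style intrinsic signal at step t could then be proportional to
# entropy(discounted_feature_hist(features[t:], gamma)), i.e., the entropy of
# the discounted distribution of features visited during future time steps.
```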
Data science plays a critical role in transforming complex data into actionable insights across numerous domains. Recent developments in large language models (LLMs) and artificial intelligence (AI) agents have significantly automated data science workflows. However, it remains unclear to what extent AI agents can match the performance of human experts on domain-specific data science tasks, and in which aspects human expertise continues to provide advantages. We introduce AgentDS, a benchmark and competition designed to evaluate both AI agents and human-AI collaboration performance in domain-specific data science. AgentDS consists of 17 challenges across six industries: commerce, food production, healthcare, insurance, manufacturing, and retail banking. We conducted an open competition involving 29 teams and 80 participants, enabling systematic comparison between human-AI collaborative approaches and AI-only baselines. Our results show that current AI agents struggle with domain-specific reasoning. AI-only baselines perform near or below the median of competition participants, while the strongest solutions arise from human-AI collaboration. These findings challenge the narrative of complete automation by AI and underscore the enduring importance of human expertise in data science, while illuminating directions for the next generation of AI. Visit the AgentDS website here: this https URL and open-source datasets here: this https URL.
We establish new exponential-in-dimension lower bounds for the Maximum Halfspace Discrepancy problem, which models linear classification. In both its exact and approximate forms, this is a fundamental problem in computational geometry and machine learning. However, only $O(n^d)$ and respectively $\tilde O(1/\varepsilon^d)$ upper bounds are known, complemented by polynomial lower bounds that do not support the exponential-in-dimension dependence. We close this gap up to polylogarithmic terms by reduction from widely believed hardness conjectures for Affine Degeneracy testing and $k$-Sum problems. Our reductions yield matching lower bounds of $\tilde\Omega(n^d)$ and respectively $\tilde\Omega(1/\varepsilon^d)$ based on Affine Degeneracy testing, and $\tilde\Omega(n^{d/2})$ and respectively $\tilde\Omega(1/\varepsilon^{d/2})$ conditioned on $k$-Sum. The first bound also holds unconditionally if the computational model is restricted to sidedness queries, a widespread setting implemented and optimized in many contemporary algorithms and computing paradigms.
This report examines numerical aspects of constructing Karhunen-Loève expansions (KLEs) for second-order stochastic processes. The KLE relies on the spectral decomposition of the covariance operator via the Fredholm integral equation of the second kind, which is then discretized on a computational grid, leading to an eigendecomposition task. We derive the algebraic equivalence between this Fredholm-based eigensolution and the singular value decomposition of the weight-scaled sample matrix, yielding consistent solutions for both model-based and data-driven KLE construction. Analytical eigensolutions for exponential and squared-exponential covariance kernels serve as reference benchmarks to assess numerical consistency and accuracy in 1D settings. The convergence of SVD-based eigenvalue estimates and of the empirical distributions of the KL coefficients to their theoretical $\mathcal{N}(0,1)$ target is characterized as a function of sample count. Higher-dimensional configurations include a two-dimensional irregular domain discretized by unstructured triangular meshes with two refinement levels, and a three-dimensional toroidal domain whose non-simply-connected topology motivates a comparison between Euclidean and shortest interior path distances between the grid points. The numerical results highlight the interplay between the discretization strategy, quadrature rule, and sample count, and their impact on the KLE results.
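A minimal sketch of the data-driven construction, assuming realizations of the process on a grid with quadrature weights `w`: the SVD of the weight-scaled, centered sample matrix yields eigenvalues of the discretized covariance operator and grid-evaluated eigenfunctions.

```python
import numpy as np

def kle_from_samples(X, w):
    """Data-driven KLE via SVD of the weight-scaled sample matrix (a sketch).

    X: (n_samples, n_grid) realizations of the process on a grid.
    w: (n_grid,) quadrature weights of the grid.
    Returns covariance-operator eigenvalues and grid-evaluated eigenfunctions.
    """
    Xc = X - X.mean(axis=0)                  # center the ensemble
    B = Xc * np.sqrt(w)                      # scale columns by sqrt(weights)
    _, s, Vt = np.linalg.svd(B / np.sqrt(len(X) - 1), full_matrices=False)
    eigvals = s**2                           # eigenvalues of W^{1/2} C W^{1/2}
    eigfuns = Vt / np.sqrt(w)                # undo weighting: rows are phi_k on the grid
    return eigvals, eigfuns
```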
Debiased machine learning estimators for smooth functionals in nonparametric models can exhibit substantial variability and instability, often leading practitioners to instead rely on parametric or semiparametric working models. Such models, however, may be misspecified and can therefore introduce bias. We study how data-driven model selection can be combined with debiased machine learning to construct estimators that adapt to structure in the data-generating distribution. To this end, we propose Adaptive Debiased Machine Learning (ADML), a nonparametric framework for constructing superefficient estimators of pathwise differentiable parameters. The framework unifies a broad class of previously proposed adaptive estimators, including methods based on variable selection, learned feature representations, and collaborative targeted learning. It requires only high-level conditions and approximate validity of the selection procedure, which are implied by lower-level conditions already assumed in important settings, including sieve-based selection, sparsity-based methods such as the Lasso, and data-adaptive feature representations. We show that ADML estimators yield regular and efficient root-$n$ inference for an oracle projection parameter induced by a data-adaptive oracle submodel. This oracle parameter coincides with the target parameter at the true distribution but typically has a smaller efficiency bound, thereby yielding superefficiency for the target parameter. As a practical illustration, we introduce a broad class of automatic ADML estimators for continuous linear functionals of the outcome regression, in which model selection is performed directly on the regression itself. Motivated by overlap challenges in causal inference, we develop new superefficient plug-in estimators for the average treatment effect based on calibration in semiparametric regression models.
In the era of fast-paced precision medicine, observational studies play a major role in properly evaluating new treatments in clinical practice. Yet, unobserved confounding can significantly compromise causal conclusions drawn from non-randomized data. We propose a novel strategy that leverages randomized trials to quantify unobserved confounding. First, we design a statistical test to detect unobserved confounding with strength above a given threshold. Then, we use the test to estimate an asymptotically valid lower bound on the unobserved confounding strength. We evaluate the power and validity of our statistical test on several synthetic and semi-synthetic datasets. Further, we show how our lower bound can correctly identify the absence and presence of unobserved confounding in a real-world setting.
We introduce efficient plug-in (EP) learning, a novel framework for the estimation of heterogeneous causal contrasts, such as the conditional average treatment effect and conditional relative risk. The EP-learning framework enjoys the same oracle efficiency as Neyman-orthogonal learning strategies, such as DR-learning and R-learning, while addressing some of their primary drawbacks: (i) their practical applicability can be hindered by non-convex loss functions; and (ii) they may suffer from poor performance and instability due to inverse probability weighting and pseudo-outcomes that violate bounds. To overcome these issues, the EP-learner leverages an efficient plug-in estimator of the population risk function for the causal contrast. In doing so, it inherits the stability of plug-in strategies such as T-learning, while improving on their efficiency. Under reasonable conditions, EP-learners based on empirical risk minimization are oracle-efficient, exhibiting asymptotic equivalence to the minimizer of an oracle-efficient one-step debiased estimator of the population risk function. In simulation experiments, we show that EP-learners of the conditional average treatment effect and conditional relative risk outperform state-of-the-art competitors, including the T-learner, R-learner, and DR-learner. Open-source implementations of the proposed methods are available in our \texttt{R} package \texttt{hte3}.
We develop a robust Bayesian analysis based on heavy-tailed modeling, imposing a Student-$t$ distribution to mitigate the influence of outliers. We apply it to large-scale studies in Bayesian inference and provide diagnostics for detecting outliers using the posterior predictive $p$-value ($ppp$). In addition, we propose an adaptive method for setting the level of the posterior FDR, based on an estimated proportion of true null genes obtained with Storey's $q$-value method. Our methods are demonstrated on gene expression data for colorectal cancer.
Chest X-ray (CXR) images are among the most commonly used diagnostic imaging modalities in clinical practice. Stringent privacy constraints often limit the public dissemination of patient CXR images, contributing to the increasing use of synthetic images produced by deep generative models for data sharing and training machine learning models. Given the high-stakes downstream applications of CXR images, it is crucial to evaluate how faithfully synthetic images reflect the underlying target distribution. We propose the embedded characteristic score (ECS), a flexible evaluation procedure that compares synthetic and patient CXR samples through characteristic function transforms of feature embeddings. The choice of embedding can be tailored to the clinical or scientific context of interest. By leveraging the behavior of characteristic functions near the origin, ECS is sensitive to differences in higher moments and distribution tails, aspects that are often overlooked by commonly used evaluation metrics such as the Fréchet Inception Distance (FID). We establish theoretical properties of ECS and describe a calibration strategy based on a simple resampling procedure. We compare the empirical performance of ECS against FID via simulations and standard benchmark imaging datasets. Assessing synthetic CXR images with ECS uncovers clinically relevant distributional discrepancies relative to patient CXR images. These results highlight the importance of reliable evaluation of synthetic data that inform high-stakes decisions.
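To make the idea concrete, here is a hedged sketch of a characteristic-function comparison of embeddings near the origin; the function names, the Gaussian frequency draws, and the `scale` parameter are illustrative assumptions, and the paper's exact ECS statistic and resampling-based calibration are not reproduced.

```python
import numpy as np

def ecf(X, T):
    """Empirical characteristic function of embeddings X (n, d) at frequencies T (m, d)."""
    return np.exp(1j * X @ T.T).mean(axis=0)           # complex vector of length m

def char_discrepancy(X_real, X_syn, n_freq=256, scale=0.1, seed=0):
    """A sketch of a characteristic-function score: compare ECFs of real vs.
    synthetic embeddings at low frequencies near the origin, where differences
    in higher moments and tails show up; `scale` controls how close we probe."""
    rng = np.random.default_rng(seed)
    T = scale * rng.standard_normal((n_freq, X_real.shape[1]))
    return float(np.mean(np.abs(ecf(X_real, T) - ecf(X_syn, T))))
```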
Across many domains of science, stochastic models are an essential tool to understand the mechanisms underlying empirically observed data. Models can be of different levels of detail and accuracy, with models of high-fidelity (i.e., high accuracy) to the phenomena under study being often preferable. However, inferring parameters of high-fidelity models via simulation-based inference is challenging, especially when the simulator is computationally expensive. We introduce a multifidelity approach to neural posterior estimation that uses transfer learning to leverage inexpensive low-fidelity simulations to efficiently infer parameters of high-fidelity simulators. Our method applies the multifidelity scheme to both amortized and non-amortized neural posterior estimation. We further improve simulation efficiency by introducing a sequential variant that uses an acquisition function targeting the predictive uncertainty of the density estimator to adaptively select high-fidelity parameters. On established benchmark and neuroscience tasks, our approaches require up to two orders of magnitude fewer high-fidelity simulations than current methods, while showing comparable performance. Overall, our approaches open new opportunities to perform efficient Bayesian inference on computationally expensive simulators.
We propose a grid-based methodology for online changepoint detection that allows offline changepoint tests to be applied to sequentially observed data. The methodology achieves low update and storage costs by testing for changepoints over a dynamically updating grid of candidate changepoint locations. For a broad class of test statistics, including those based on empirical averages and certain likelihood ratios, we show that the resulting online procedure has update and storage costs that grow at most logarithmically with the sample size. We further show that finite-sample power guarantees for the offline test translate directly into non-asymptotic upper bounds on the detection delay, under a mild robustness assumption. Building upon the methodology, we construct methods for detecting changes in the mean and in the covariance matrix of multivariate data, and prove near-optimal non-asymptotic upper bounds on their detection delays. The effectiveness of the methodology is supported by a simulation study, where we compare its performance for detecting mean changes with that of state-of-the-art online methods. To illustrate its practical applicability, we use the methodology to detect structural changes in currency exchange rates in real time.
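The following Python sketch illustrates the flavor of the approach for a mean change with unit-variance data; the dyadic-plus-recent pruning rule is a simplified stand-in for the paper's dynamic grid, chosen only to make the logarithmic update and storage costs visible.

```python
import numpy as np

class GridCPD:
    """A sketch of grid-based online mean-change detection for unit-variance
    data. Candidate changepoints live on a dynamically pruned grid (here:
    dyadic locations plus a recent window), so storage and per-step cost grow
    logarithmically with the sample size. Not the paper's exact grid rule."""

    def __init__(self, threshold=4.0):
        self.t, self.total = 0, 0.0
        self.thr = threshold
        self.grid = []                                   # (location, prefix_sum)

    def update(self, x):
        self.t += 1
        self.total += x
        self.grid.append((self.t, self.total))
        self.grid = [(l, s) for (l, s) in self.grid
                     if (l & (l - 1)) == 0 or self.t - l <= 4]
        return self.detect()

    def detect(self):
        """Standardized two-sample mean statistic at each candidate split."""
        for k, s in self.grid:
            n1, n2 = k, self.t - k
            if n2 == 0:
                continue
            stat = abs(s / n1 - (self.total - s) / n2) * np.sqrt(n1 * n2 / self.t)
            if stat > self.thr:
                return k                                 # flagged changepoint location
        return None
```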
We propose a novel modeling framework for time-evolving networks allowing for long-term dependence in network features that update in continuous time. Dynamic network growth is functionally parameterized via the conditional intensity of a marked point process. This characterization enables flexible, joint modeling of both update timing and the network updates themselves, dependent on the entire left-continuous sample path. We propose a path-dependent nonlinear marked Hawkes process as an expressive platform for modeling such data; its dynamic mark space embeds the time-evolving network. We prove well-posedness and establish sufficient stability conditions, demonstrate simulation and subsequent feasible likelihood-based inference through numerical study, and illustrate the methodology with an application to conference attendee social network data. The proposed formulation provides a flexible and principled foundation for statistical inference on complex network evolution in continuous time.
In this paper, we provide a comprehensive cross-country validation study of compositional mortality modeling and forecasting methods. To this end, we consider two one-to-one transformations: the cumulative distribution function and the centered log-ratio transformation in compositional data analysis. Between the two transformations, the cumulative distribution function provides a scale-free way to visualize the gender gap and cross-country heterogeneity in the probability of dying by sex and country. Drawing on age-specific period life-table death counts from 24 countries in the Human Mortality Database (2025), we assess and compare the point and interval forecast accuracy of the two transformations, using the same forecasting method. Enhancing the forecast accuracy of period life-table death counts is of significant value to demographers, who rely on such forecasts to estimate survival probabilities and life expectancy, and to actuaries, who use them to price annuities across various entry ages and maturities.
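For the compositional route, a minimal sketch of the centered log-ratio transformation and its inverse for life-table death counts (the forecasting step applied to the transformed data is omitted):

```python
import numpy as np

def clr(d):
    """Centered log-ratio transform of death-count compositions
    (rows = years, columns = ages; entries strictly positive)."""
    logd = np.log(d)
    return logd - logd.mean(axis=-1, keepdims=True)

def clr_inverse(z):
    """Map (forecast) clr coordinates back to compositions summing to one."""
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```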
Forward regression is a classical and effective tool for variable screening in ultra-high dimensional linear models, but its standard projection-based implementation can be computationally costly and numerically unstable when predictors are strongly collinear. Motivated by this limitation, we propose an orthogonalized forward regression procedure, implemented recursively through Gram-Schmidt updates, that ranks predictors according to their unique contributions after removing the effects of variables already selected. This approach preserves the interpretability of forward regression while substantially reducing the cost of repeated projections. We further develop a path-based model size selection rule using statistics computed directly from the forward sequence, thereby avoiding cross-validation and extensive tuning. The resulting method is particularly well suited to settings in which the number of predictors far exceeds the sample size and strong collinearity renders the conventional forward fitting ineffective. Theoretically, we derive the optimal convergence rate for the proposed Gram-Schmidt forward regression, thereby extending existing results for projection-based forward regression, and further show that it enjoys sure screening property and variable selection consistency under suitable conditions. Simulation studies and empirical examples demonstrate that it provides a favorable balance among computational efficiency, numerical stability, screening accuracy, and predictive performance, especially in highly correlated ultra-high dimensional settings.
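A hedged sketch of the recursive Gram-Schmidt idea: at each step the predictor most correlated with the current residual is selected, and the remaining columns are deflated against it, avoiding repeated projections onto the growing selected set. The paper's path-based model-size selection rule is omitted, and the function name is illustrative.

```python
import numpy as np

def gs_forward_regression(X, y, k_max):
    """Orthogonalized forward regression via Gram-Schmidt updates (a sketch)."""
    X = X - X.mean(axis=0)
    r = y - y.mean()
    Xw = X.copy()                                   # working (deflated) columns
    selected = []
    for _ in range(k_max):
        norms = np.linalg.norm(Xw, axis=0)
        norms[norms < 1e-10] = np.inf               # skip numerically dead columns
        scores = np.abs(Xw.T @ r) / norms           # correlation with residual
        scores[selected] = -np.inf                  # never reselect
        j = int(np.argmax(scores))
        selected.append(j)
        q = Xw[:, j] / np.linalg.norm(Xw[:, j])     # new orthonormal direction
        r = r - q * (q @ r)                         # update residual
        Xw = Xw - np.outer(q, q @ Xw)               # deflate remaining columns
    return selected
```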
Designing efficient experiments under practical constraints is critical in both scientific research and industrial practice. Focusing on minimizing the average variance of the parameter estimates, A-optimal designs show advantages in screening factors and reducing prediction errors. Compared with other criteria, however, algorithms and software for generating A-optimal designs are scarce. In this paper, we characterize A-optimal designs under generalized linear models theoretically and develop efficient algorithms for identifying them. When a predetermined finite set of experimental settings is given, we derive analytic solutions or establish necessary and sufficient conditions for obtaining A-optimal approximate allocations. We show that a lift-one algorithm based on our formulae outperforms commonly used algorithms for finding A-optimal allocations. When continuous factors or design regions get involved, we develop a ForLion algorithm that is guaranteed to find A-optimal designs with mixed factors. Numerical studies show that our algorithms can find highly efficient designs with reduced numbers of distinct experimental settings, which may save both experimental time and cost significantly. Along with a rounding-off algorithm that converts approximate allocations to exact ones, we demonstrate that stratified samplers based on A-optimal allocations may provide more accurate parameter estimates than commonly used samplers.
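To illustrate the objects involved, the sketch below evaluates the A-criterion for a weight allocation under a GLM and performs a grid-search stand-in for a lift-one sweep; the paper's lift-one algorithm uses analytic update formulae instead, and `nu` (the GLM variance weights at assumed parameter values) is supplied by the user.

```python
import numpy as np

def a_criterion(w, X, nu):
    """A-criterion: trace of the inverse information matrix
    F(w) = sum_i w_i * nu_i * x_i x_i^T (smaller is better)."""
    F = (X * (w * nu)[:, None]).T @ X
    try:
        return np.trace(np.linalg.inv(F))
    except np.linalg.LinAlgError:
        return np.inf                              # degenerate allocation

def lift_one_sweep(w, X, nu, grid=np.linspace(0.0, 0.95, 20)):
    """One sweep of a grid-search variant of the lift-one idea: for each
    setting, rescan its weight over `grid`, rescaling the other weights to
    keep the total equal to one, and keep any improvement."""
    best = a_criterion(w, X, nu)
    for i in range(len(w)):
        if w[i] >= 1.0:
            continue
        for a in grid:
            w_new = w * (1.0 - a) / (1.0 - w[i])   # rescale the other weights
            w_new[i] = a                           # lift weight of setting i
            val = a_criterion(w_new, X, nu)
            if val < best:
                w, best = w_new, val
    return w, best
```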
While achieving exceptional generative quality, modern diffusion, flow, and other matching models suffer from slow inference, as they require many steps of iterative generation. Recent distillation methods address this by training efficient one-step generators under the guidance of a pre-trained teacher model. However, these methods are often constrained to one specific framework, e.g., only to diffusion or only to flow models. Furthermore, these methods are naturally data-free; to benefit from real data, they must resort to additional, complex adversarial training with an extra discriminator model. In this paper, we present RealUID, a universal distillation framework for all matching models that seamlessly incorporates real data into the distillation procedure without GANs. Our RealUID approach offers a simple theoretical foundation that covers previous distillation methods for Flow Matching and Diffusion models, and is also extended to their modifications, such as Bridge Matching and Stochastic Interpolants. The code can be found in this https URL.
This study develops an AI-based pose estimation pipeline for quantifying movement kinematics in resistance training. Using videos from Wolf et al. (2025), comprising 303 recordings of 26 participants performing eight upper-body exercises under full (fROM) and lengthened partial (pROM) conditions, we extract joint-angle trajectories using five distinct deep-learning pose estimation models and a unified signal-processing framework. From these trajectories, we derive repetition-level metrics including range of motion (ROM) and repetition duration. We use these outputs as dependent variables in a crossed random-effects model that accounts for participant-, exercise-, and model-level variability to assess systematic differences between ROM conditions. Results indicate that pROM reduces range of motion without significantly affecting repetition duration. Variance decomposition shows that pROM increases both between-participant and between-exercise variability, suggesting reduced consistency in execution. To enable cross-exercise comparison, we model ROM on a logarithmic scale and define %ROM as the proportion of fROM achieved under pROM. While the estimated mean is approximately 56\%, significant heterogeneity across exercises indicates that lengthened partials are not characterized by a fixed proportion of full ROM. The results demonstrate that AI-based motion analysis can provide reliable kinematic insights to inform evidence-based training recommendations.
Prediction is a central task of statistics and machine learning, yet many inferential settings provide only partial information, typically in the form of moment constraints or estimating equations. We develop a finite, fully Bayesian framework for propagating such partial information through predictive distributions. Building on de Finetti's representation theorem, we construct a curvature-adaptive version of exchangeable updating that operates directly under finite constraints, yielding an explicit discrete-Gaussian mixture that quantifies predictive uncertainty. The resulting finite-sample bounds depend on the smallest eigenvalue of the information-geometric Hessian, which measures the curvature and identification strength of the constraint manifold. This approach unifies empirical likelihood, Bayesian empirical likelihood, and generalized method-of-moments estimation within a common predictive geometry. On the operational side, it provides computable curvature-sensitive uncertainty bounds for constrained prediction; on the theoretical side, it recovers de Finetti's coherence, Doob's martingale convergence and local asymptotic normality as limiting cases of the same finite mechanism. Our framework thus offers a constructive bridge between partial information and full Bayesian prediction.
Accelerated failure time (AFT) models provide a direct and interpretable time-scale description of covariate effects in lifetime data analysis, but classical formulations rely on linear predictors and are therefore limited in their ability to represent nonlinear relationships. Moreover, in heterogeneous clinical settings with complex covariate structures and varying censoring mechanisms, standard survival models such as the Cox proportional hazards model or AFT formulations may be inadequate due to restrictive structural assumptions. We propose a structured nonparametric extension of the AFT framework in which the regression function governing log-survival time is an unknown smooth function represented through Kolmogorov--Arnold representations. We formalize the nonlinear AFT estimand under independent right-censoring and show that the proposed function class strictly contains the classical linear AFT model as a special case. Estimation is carried out through a unified framework that accommodates several censoring-adjusted losses such as Buckley--James, inverse probability of censoring weight and transformation methods. Structural regularization and pruning promote parsimony, and symbolic approximation yields analytic representations of learned component functions. Simulation studies show that the method recovers linear structure when appropriate and captures nonlinear effects when present. Applications to multiple clinical datasets demonstrate competitive predictive performance and transparent covariate-effect estimation.
Expand-and-sparsify representations are a class of theoretical models that capture sparse representation phenomena observed in the sensory systems of many animals. At a high level, these representations map an input $x \in \mathbb{R}^d$ to a much higher dimension $m \gg d$ via random linear projections before zeroing out all but the $k \ll m$ largest entries. The result is a $k$-sparse vector in $\{0,1\}^m$. We study the suitability of this representation for two fundamental statistical problems: density estimation and mode estimation. For density estimation, we show that a simple linear function of the expand-and-sparsify representation produces an estimator with minimax-optimal $\ell_{\infty}$ convergence rates. In mode estimation, we provide simple algorithms on top of our density estimator that recover single or multiple modes at optimal rates up to logarithmic factors under mild conditions.
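The representation itself is simple to write down; a minimal sketch, with the linear density readout indicated in comments (the precise estimator and its minimax analysis are in the paper, and the dimensions below are arbitrary choices):

```python
import numpy as np

def expand_and_sparsify(x, W, k):
    """Map x in R^d to a k-sparse vector in {0,1}^m: random linear expansion
    followed by a top-k winner-take-all."""
    h = W @ x                                          # project to m dimensions
    out = np.zeros(W.shape[0])
    out[np.argpartition(h, -k)[-k:]] = 1.0             # keep the k largest entries
    return out

rng = np.random.default_rng(0)
d, m, k = 10, 2000, 40
W = rng.standard_normal((m, d))                        # shared random projection

# Linear density readout (a sketch): coordinates that fire often across the
# sample indicate high-density regions.
# Z_mean = np.mean([expand_and_sparsify(xi, W, k) for xi in sample], axis=0)
# density_score = lambda x: expand_and_sparsify(x, W, k) @ Z_mean
```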
We consider Bayesian inverse problems arising in data assimilation for dynamical systems governed by partial and stochastic partial differential equations. The space-time dependent field is inferred jointly with static parameters of the prior and likelihood densities. Particular emphasis is placed on the hyperparameter controlling the prior smoothness and regularity, which is critical in ensuring well-posedness, shaping posterior structure, and determining predictive uncertainty. Commonly, it is assumed to be known and fixed a priori; in this paper, however, we adopt a hierarchical Bayesian framework in which smoothness and other hyperparameters are treated as unknown and assigned hyperpriors. Posterior inference is performed using Metropolis-within-Gibbs sampling suitable for high dimensions, for which hyperparameter estimation involves little computational overhead. The methodology is demonstrated on inverse problems for the Navier-Stokes equations and the stochastic advection-diffusion equation, under sparse and dense observation regimes, using Gaussian priors with different covariance structure. Numerical results show that jointly estimating the smoothness substantially reduces the errors in uncertainty quantification and parameter estimation induced by smoothness misspecification, achieving performance comparable to scenarios in which the true smoothness is known.
The Bayesian and Akaike information criteria aim at finding a good balance between under- and over-fitting, and are used extensively by practitioners. Yet we contend they suffer from at least two afflictions: their penalty parameters $\lambda=\log n$ and $\lambda=2$, respectively, are too small, leading to many false discoveries, and their inherent (best subset) discrete optimization is infeasible in high dimension. We alleviate these issues with the pivotal information criterion: PIC is defined as a continuous optimization problem, and the PIC penalty parameter $\lambda$ is selected at the detection boundary (under pure noise). PIC's choice of $\lambda$ is the quantile of a statistic that we prove to be (asymptotically) pivotal, provided the loss function is appropriately transformed. As a result, simulations show a phase transition in the probability of exact support recovery with PIC, a phenomenon studied in the noiseless setting in compressed sensing. Applied to real data, for similar predictive performance, PIC selects the least complex model among state-of-the-art learners.
Learning-to-Defer routes each input to the expert that minimizes expected cost, but it assumes that the information available to every expert is fixed at decision time. Many modern systems violate this assumption: after selecting an expert, one may also choose what additional information that expert should receive, such as retrieved documents, tool outputs, or escalation context. We study this problem and call it Learning-to-Defer with advice. We show that a broad family of natural separated surrogates, which learn routing and advice with distinct heads, is inconsistent even in the smallest non-trivial setting. We then introduce an augmented surrogate that operates on the composite expert--advice action space and prove an $\mathcal{H}$-consistency guarantee together with an excess-risk transfer bound, yielding recovery of the Bayes-optimal policy in the limit. Experiments on tabular, language, and multi-modal tasks show that the resulting method improves over standard Learning-to-Defer while adapting its advice-acquisition behavior to the cost regime; a synthetic benchmark confirms the failure mode predicted for separated surrogates.
This paper investigates testing for deviation of a high-dimensional mean vector $\boldsymbol{\mu}$. In contrast to the standard one-sample significance test of the form $H_0^\texttt{e} : \boldsymbol{\mu} = \boldsymbol{\mu}_0$ versus $H_1^\texttt{e} : \boldsymbol{\mu} \neq \boldsymbol{\mu}_0$, we focus on testing the deviation $H_0 : \|\boldsymbol{\mu} - \boldsymbol{\mu}_0\|_2 \ge d_0$ versus $H_1 : \|\boldsymbol{\mu} - \boldsymbol{\mu}_0\|_2 < d_0$ for a prespecified threshold $d_0 > 0$. Constructing a valid test statistic for this problem is technically nontrivial. By applying the concept of positive and negative feedback processes from control theory, we propose a test statistic based on a two-armed bandit (TAB) process. The deviation test is also extended to the two-sample setting. Simulation experiments confirm good performance of the tests in finite samples. Finally, a real data analysis demonstrates the practical significance of the proposed deviation tests.
Asymptotically linear estimators in semiparametric models achieve their point-estimation guarantees via a von Mises expansion in which a second-order remainder is declared negligible. Confidence intervals then treat the first-order influence-function term as the sole source of sampling variability. This reasoning is asymptotically exact but can fail materially in finite samples whenever the second-order remainder contributes variation of the same order as the influence-function variance -- a regime we call the \emph{near-boundary regime}, characterized by nuisance estimation operating at or near the product-rate threshold. We develop a general theory of inference for this regime. Our contributions are: (i) a \emph{finite-sample variance decomposition} that separates influence-function variance from remainder-induced variance and the covariance between them; (ii) a \emph{sandwich consistency theorem} that gives a precise necessary and sufficient condition -- strong remainder negligibility -- for the standard sandwich to be consistent for the total sampling variance, and shows this is strictly stronger than the product-rate condition that guarantees asymptotic linearity; (iii) two \emph{refined variance estimators} -- leave-one-unit-out jackknife and pairs cluster bootstrap -- each with full asymptotic validity guarantees in the near-boundary regime, together with a heteroskedasticity-corrected sandwich interpretation that is numerically equivalent to the jackknife Wald interval; and (iv) a \emph{clustered-data extension} in which the remainder interacts with intra-cluster correlation to produce an analytic formula for sandwich gap amplification.
We study the Fréchet $k$-means of a metric measure space when both the measure and the distance are unknown and have to be estimated. We prove a general result stating that the $k$-means are continuous with respect to the measured Gromov-Hausdorff topology. In this situation, we also prove a stability result for the Voronoi clusters they determine. We do not assume uniqueness of the set of $k$-means, but when it is unique, the results are stronger. This framework provides a unified approach to proving consistency for a wide range of metric learning procedures. As concrete applications, we obtain new consistency results for several important estimators that were previously unestablished, even when $k=1$. These include $k$-means based on: (i) Isomap and Fermat geodesic distances on manifolds, (ii) diffusion distances, (iii) Wasserstein distances computed with respect to learned ground metrics. Finally, we consider applications beyond the statistical inference paradigm, such as (iv) first passage percolation and (v) discrete approximations of length spaces.
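For intuition, a Fréchet mean over a finite candidate set reduces to an argmin of (weighted) sums of squared distances; a minimal sketch, assuming a precomputed matrix of estimated distances between candidates and sample points:

```python
import numpy as np

def frechet_mean_index(D, weights=None):
    """Index of a Fréchet mean over a finite candidate set. D[i, j] is the
    estimated distance from candidate i to sample point j; non-uniqueness
    is resolved arbitrarily by argmin."""
    w = np.ones(D.shape[1]) if weights is None else weights
    return int(np.argmin((D ** 2) @ w))    # minimize the sum of squared distances
```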
Stochastic optimization is a cornerstone of modern machine learning. This paper studies the generalization performance of two classical stochastic optimization algorithms: stochastic gradient descent (SGD) and Nesterov's accelerated gradient (NAG). We establish new learning rates for both algorithms, with improved guarantees in some settings or comparable rates under weaker assumptions in others. We also provide numerical experiments to support the theory.
In order to identify expertise, forecasters should not be tested by their calibration score, which can always be made arbitrarily small, but rather by their Brier score. The Brier score is the sum of the calibration score and the refinement score; the latter measures how good the sorting into bins with the same forecast is, and thus attests to "expertise." This raises the question of whether one can gain calibration without losing expertise, which we refer to as "calibeating." We provide an easy way to calibeat any forecast, by a deterministic online procedure. We moreover show that calibeating can be achieved by a stochastic procedure that is itself calibrated, and then extend the results to simultaneously calibeating multiple procedures, and to deterministic procedures that are continuously calibrated.
In recent years, a certain type of problem has attracted interest in which one queries a trained classifier. Specifically, one wants to find the closest instance to a given input instance such that the classifier's predicted label is changed in a desired way. Examples of these "inverse classification" problems are counterfactual explanations, adversarial examples and model inversion. All of them are fundamentally optimization problems over the input instance vector involving a fixed classifier, and a fast solution is desirable for interactive or real-time applications. We focus on solving this problem efficiently with the squared Euclidean distance for two of the most widely used classifiers: logistic regression and the softmax classifier. Owing to special properties of these models, we show that the optimization can be solved in closed form for logistic regression, and iteratively but extremely fast for the softmax classifier. This allows us to solve either case exactly (to nearly machine precision) in a runtime of milliseconds to around a second, even for very high-dimensional instances and many classes.
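For the binary logistic-regression case, the closed form is the standard orthogonal projection onto the decision hyperplane; a minimal sketch (squared Euclidean distance, with a small `margin` to actually cross the boundary; the paper's full treatment, including the softmax case, is not reproduced):

```python
import numpy as np

def closest_flip_logistic(x, w, b, margin=1e-6):
    """Closest counterfactual for a logistic-regression classifier: the nearest
    point (in squared Euclidean distance) with the opposite predicted label is
    the projection of x onto the hyperplane w @ x + b = 0, nudged across it."""
    f = w @ x + b                              # signed decision value
    x_proj = x - (f / (w @ w)) * w             # orthogonal projection onto boundary
    return x_proj - np.sign(f) * margin * w / np.linalg.norm(w)
```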
In the past several years, the last-iterate convergence of the Stochastic Gradient Descent (SGD) algorithm has attracted considerable interest owing to its good performance in practice despite a lack of theoretical understanding. For Lipschitz convex functions, different works have established the optimal $O(\log(1/\delta)\log T/\sqrt{T})$ or $O(\sqrt{\log(1/\delta)/T})$ high-probability convergence rates for the final iterate, where $T$ is the time horizon and $\delta$ is the failure probability. However, to prove these bounds, all the existing works are either limited to compact domains or require almost surely bounded noise. It is natural to ask whether the last iterate of SGD can still guarantee the optimal convergence rate without these two restrictive assumptions. Beyond this important question, several theoretical problems remain open. For example, compared with the last-iterate convergence of SGD for non-smooth problems, only a few results for smooth optimization have been developed. Additionally, the existing results are all limited to a non-composite objective and the standard Euclidean norm, and it remains unclear whether the last-iterate convergence can be provably extended to wider composite optimization and non-Euclidean norms. In this work, to address these issues, we revisit the last-iterate convergence of stochastic gradient methods and provide the first unified way to prove the convergence rates both in expectation and in high probability that accommodates general domains, composite objectives, non-Euclidean norms, Lipschitz conditions, smoothness, and (strong) convexity simultaneously. Additionally, we extend our analysis to obtain the last-iterate convergence under heavy-tailed and sub-Weibull noise.
We consider a statistical model for symmetric matrix factorization with additive Gaussian noise in the high-dimensional regime, where the rank $M$ of the signal matrix to be inferred scales with its size $N$ as $M=\mathrm{o}(\sqrt{\ln N})$. Allowing for an $N$-dependent rank offers new challenges and requires new methods. Working in the Bayes-optimal setting, we show that whenever the signal has i.i.d. entries, the limiting mutual information between signal and data is given by a variational formula involving a rank-one replica symmetric potential. In other words, from the information-theoretic perspective, the case of a (slowly) growing rank is the same as when $M=1$ (namely, the standard spiked Wigner model). The proof is primarily based on a novel multiscale cavity method allowing for growing rank, along with information-theoretic identities on worst noise for the vector Gaussian channel. We believe that the cavity method developed here will play a role in the analysis of a broader class of inference and spin models where the degrees of freedom are large arrays instead of vectors.
Recent literature proposes combining short-term experimental and long-term observational data to provide alternatives to conventional observational studies for the identification of long-term average treatment effects (LTEs). This paper re-examines the identification problem and uncovers that assumptions restricting temporal link functions -- relationships between short-term and mean long-term potential outcomes -- are central in this context. The experimental data serve to amplify the identifying power of such assumptions; absent them, the combined data are no more informative than the observational data alone. Plausible inference thus hinges on justifiable restrictions in this class. Motivated by this, I introduce two treatment response assumptions that may be defensible based on economic theory or intuition. To utilize them and facilitate future developments, I develop a novel unifying identification framework that computationally produces sharp bounds on the LTE for a general class of temporal link function restrictions and accommodates imperfect experimental compliance -- thereby also extending existing approaches. I illustrate the method by estimating the long-term effects of Head Start participation. The findings indicate that the effects on educational attainment, employment, and criminal involvement are lasting but smaller in magnitude than those established by sibling comparisons.
We study black-box vector optimization with Gaussian process bandits, where there is an incomplete order relation on objective vectors described by a polyhedral convex cone. Existing black-box vector optimization approaches either suffer from high sample complexity or lack theoretical guarantees. We propose Vector Optimization with Gaussian Process (VOGP), an adaptive elimination algorithm that identifies Pareto optimal solutions sample-efficiently by exploiting the smoothness of the objective function. We establish theoretical guarantees, deriving information gain-based and kernel-specific sample complexity bounds. We conduct a thorough empirical evaluation of VOGP and compare it with state-of-the-art multi-objective and vector optimization algorithms on several real-world and synthetic datasets, emphasizing VOGP's efficiency (e.g., $\sim18\times$ lower sample complexity on average). We also provide heuristic adaptations of VOGP for cases where the design space is continuous and where the Gaussian process model lacks access to the true kernel hyperparameters. This work opens a new frontier in sample-efficient multi-objective black-box optimization by incorporating preference structures while maintaining theoretical guarantees and practical efficiency.
Deep neural networks have attained remarkable success across diverse classification tasks. Recent empirical studies have shown that deep networks learn features that are linearly separable across classes. However, these findings often lack rigorous justifications, even under relatively simple settings. In this work, we address this gap by examining the linear separation capabilities of shallow nonlinear networks. Specifically, inspired by the low intrinsic dimensionality of image data, we model inputs as a union of low-dimensional subspaces (UoS) and demonstrate that a single nonlinear layer can transform such data into linearly separable sets. Theoretically, we show that this transformation occurs with high probability when using random weights and quadratic activations. Notably, we prove this can be achieved when the network width scales polynomially with the intrinsic dimension of the data rather than the ambient dimension. Experimental results corroborate these theoretical findings and demonstrate that similar linear separation properties hold in practical scenarios beyond our analytical scope. This work bridges the gap between empirical observations and theoretical understanding of the separation capacity of nonlinear networks, offering deeper insights into model interpretability and generalization.
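A small, self-contained experiment in the spirit of the result (illustrative only; the dimensions and widths below are arbitrary choices, not the paper's scalings): sample points from two random low-dimensional subspaces, push them through one random layer with quadratic activations, and check linear separability of the resulting features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
D, r, n, width = 50, 3, 500, 400           # ambient dim, intrinsic dim, samples, width

def sample_subspace(n_pts):
    U = np.linalg.qr(rng.standard_normal((D, r)))[0]   # random r-dim subspace
    return (U @ rng.standard_normal((r, n_pts))).T

X = np.vstack([sample_subspace(n), sample_subspace(n)])
y = np.repeat([0, 1], n)

W = rng.standard_normal((width, D)) / np.sqrt(D)       # random first-layer weights
feats = (X @ W.T) ** 2                                 # quadratic activation

acc = LogisticRegression(max_iter=2000).fit(feats, y).score(feats, y)
print(f"training separability: {acc:.3f}")             # expected near 1.0
```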
The Fréchet mean is a fundamental notion of central tendency defined as a minimizer of a sum of squared distances in a general metric space. In this paper, we study Fréchet means in tropical geometry -- a piecewise linear, combinatorial, and polyhedral variant of algebraic geometry -- by formulating and solving the associated tropical quadratic optimization problem. We give a geometric characterization of the collection of all tropical Fréchet means as a bounded set that is simultaneously tropically and classically convex, hence a polytrope. We establish the existence of positivity certificates for maxima of finitely many quadratic polynomials in $\mathbb{R}[x_1,\ldots,x_n]$ whose homogeneous quadratic components are sums of squares, which provides a symbolic framework for exact optimization. Using this structure, we develop algorithms for computing tropical Fréchet means and the associated Fréchet mean polytrope. We further describe a combinatorial type decomposition of the objective function induced by braid arrangements, yielding a piecewise quadratic representation and a fully symbolic method for exact computation.
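As a numerical companion to the symbolic algorithms (a hedged sketch under our conventions, not the authors' implementation), the optimization problem is easy to state: with the tropical distance $d_{\rm tr}(x,y)=\max_i(x_i-y_i)-\min_i(x_i-y_i)$ on the tropical projective torus, a Fréchet mean minimizes the sum of squared tropical distances, and the convex piecewise-quadratic objective can be handed to a generic solver to locate one point of the Fréchet-mean polytrope.

import numpy as np
from scipy.optimize import minimize

def trop_dist(x, y):
    diff = np.asarray(x) - np.asarray(y)
    return diff.max() - diff.min()

def frechet_objective(x, points):
    return sum(trop_dist(x, p) ** 2 for p in points)

points = [np.array([0.0, 0.0, 0.0]),
          np.array([0.0, 2.0, 5.0]),
          np.array([0.0, 3.0, 1.0])]

res = minimize(lambda x: frechet_objective(x, points), x0=np.zeros(3), method="Nelder-Mead")
print(res.x - res.x[0])  # tropical points are defined up to adding a multiple of (1,...,1)

The exact methods developed in the paper instead characterize the full solution set symbolically.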
Determining whether an algorithmic decision-making system discriminates against a specific demographic typically involves comparing a single point estimate of a fairness metric against a predefined threshold. This practice is statistically brittle: it ignores sampling error and treats small demographic subgroups the same as large ones. The problem intensifies in intersectional analyses, where multiple sensitive attributes are considered jointly, giving rise to a larger number of smaller groups. As these groups become more granular, the data representing them becomes too sparse for reliable estimation, and fairness metrics yield excessively wide confidence intervals, precluding meaningful conclusions about potential unfair treatment. In this paper, we introduce a unified, size-adaptive, hypothesis-testing framework that turns fairness assessment into an evidence-based statistical decision. Our contribution is twofold. (i) For sufficiently large subgroups, we prove a central limit result for the statistical parity difference, leading to analytic confidence intervals and a Wald test whose type-I (false positive) error is guaranteed at level $\alpha$. (ii) For the long tail of small intersectional groups, we derive a fully Bayesian Dirichlet-multinomial estimator; Monte Carlo credible intervals are calibrated for any sample size and naturally converge to Wald intervals as more data becomes available. We validate our approach empirically on benchmark datasets, demonstrating how our tests provide interpretable, statistically rigorous decisions under varying degrees of data availability and intersectionality.
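A compact sketch of the two regimes (illustrative code with hypothetical counts; the helper names are ours): for large groups, the central limit result gives a Wald interval for the statistical parity difference (SPD); for sparse intersectional groups, Beta posteriors (the binary special case of the Dirichlet-multinomial) yield Monte Carlo credible intervals at any sample size.

import numpy as np
from scipy import stats

def spd_wald_ci(pos_a, n_a, pos_b, n_b, alpha=0.05):
    """CLT-based confidence interval for SPD = p_A - p_B (large groups)."""
    p_a, p_b = pos_a / n_a, pos_b / n_b
    se = np.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = stats.norm.ppf(1 - alpha / 2)
    return p_a - p_b, (p_a - p_b - z * se, p_a - p_b + z * se)

def spd_bayes_ci(pos_a, n_a, pos_b, n_b, alpha=0.05, n_mc=100_000, seed=0):
    """Monte Carlo credible interval for SPD under uniform Beta priors (small groups)."""
    rng = np.random.default_rng(seed)
    draws = (rng.beta(pos_a + 1, n_a - pos_a + 1, n_mc)
             - rng.beta(pos_b + 1, n_b - pos_b + 1, n_mc))
    return np.quantile(draws, [alpha / 2, 1 - alpha / 2])

print(spd_wald_ci(480, 1000, 430, 1000))  # large groups: Wald interval
print(spd_bayes_ci(3, 12, 6, 15))         # sparse intersection: credible interval

As the abstract notes, the Bayesian interval converges to the Wald interval as counts grow, so the two regimes agree in the limit.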
Decentralized learning provides a scalable alternative to parameter-server-based training, yet its performance is often hindered by limited peer-to-peer communication. In this paper, we study how communication should be scheduled over time, including when and how frequently devices synchronize. Counterintuitively, our empirical results show that concentrating the communication budget in the later stages of decentralized training markedly improves global test performance. More surprisingly, we uncover that fully connected communication at the final step, implemented as a single global merge, can significantly improve the performance of decentralized learning under high data heterogeneity. Our theoretical contributions, which explain these phenomena, are the first to establish that the globally merged model of decentralized SGD can match the convergence rate of parallel SGD. Technically, we reinterpret part of the discrepancy among local models, previously regarded as detrimental noise, as constructive components essential for matching this rate. This work provides evidence that decentralized learning can generalize under high data heterogeneity and limited communication, while opening broad new avenues for model merging research.
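The effect is easy to reproduce schematically. The toy below (our illustration with quadratic local losses, not the paper's experimental setup; all constants are arbitrary) runs local SGD on heterogeneous objectives with only occasional ring gossip, then performs a single global merge at the end; for quadratics, the merged model lands near the minimizer of the average loss even though individual workers drift apart.

import numpy as np

rng = np.random.default_rng(0)
K, d, T, lr = 8, 10, 500, 0.05
targets = rng.standard_normal((K, d))   # heterogeneous local optima (data heterogeneity)
w = np.zeros((K, d))

for t in range(T):
    grads = w - targets + 0.1 * rng.standard_normal((K, d))  # noisy local gradients
    w -= lr * grads
    if t % 100 == 0:                    # infrequent ring gossip
        w = 0.5 * w + 0.25 * np.roll(w, 1, axis=0) + 0.25 * np.roll(w, -1, axis=0)

merged = w.mean(axis=0)                 # single global merge at the final step
print("distance of merged model to the global optimum:",
      np.linalg.norm(merged - targets.mean(axis=0)))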
The lack of high-quality public cyber incident data limits empirical research and predictive modeling for cyber risk assessment. This challenge persists because companies are reluctant to disclose incidents that could damage their reputation or investor confidence. From an actuarial perspective, potential resolutions therefore involve two aspects: enhancing existing cyber incident datasets and applying advanced modeling techniques to make optimal use of the available data. A review of existing data-driven methods highlights a significant lack of entity-specific organizational features in publicly available datasets. To address this gap, we propose a novel InsurTech framework that enriches cyber incident data with entity-specific attributes. We develop various machine learning (ML) models: a multilabel classification model to predict the occurrence of cyber incident types (e.g., Privacy Violation, Data Breach, Fraud and Extortion, IT Error, and Others) and a multioutput regression model to estimate their annual frequencies. Although classifier and regressor chains are implemented to explore dependencies among cyber incident types, no significant correlations are observed in our datasets. In addition, we apply multiple interpretable ML techniques to identify and cross-validate potential risk factors developed by InsurTech across ML models. We find that InsurTech-empowered features make occurrence prediction and frequency estimation more robust than conventional risk factors alone. The framework generates transparent, entity-specific cyber risk profiles, supporting customized underwriting and proactive cyber risk mitigation. It provides insurers and organizations with data-driven insights to support decision-making and compliance planning.
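A skeletal version of the two model heads (with synthetic stand-in data; the feature dimensions and label counts below are placeholders, not the InsurTech schema):

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.multioutput import MultiOutputClassifier, MultiOutputRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))                  # entity-specific feature vectors
Y_occur = (rng.random((500, 5)) < 0.2).astype(int)  # occurrence of 5 incident types
Y_freq = rng.poisson(1.0, size=(500, 5))            # annual frequency per type

occurrence_model = MultiOutputClassifier(RandomForestClassifier(n_estimators=200))
frequency_model = MultiOutputRegressor(RandomForestRegressor(n_estimators=200))
occurrence_model.fit(X, Y_occur)
frequency_model.fit(X, Y_freq)

x_new = rng.standard_normal((1, 20))
print("P(type occurs):", [m.predict_proba(x_new)[0, 1] for m in occurrence_model.estimators_])
print("expected annual frequencies:", frequency_model.predict(x_new)[0])

Replacing MultiOutputClassifier with sklearn's ClassifierChain (and likewise for the regressor) gives the dependency-aware variants mentioned above.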
In Online Convex Optimization (OCO), when the stochastic gradient has finite variance, many algorithms provably work and guarantee sublinear regret. However, few results are known when the gradient estimate has a heavy tail, i.e., the stochastic gradient only admits a finite $\mathsf{p}$-th central moment for some $\mathsf{p}\in\left(1,2\right]$. Motivated by this gap, this work revisits classical algorithms for OCO (e.g., Online Gradient Descent) in the more challenging heavy-tailed setting. Under the standard bounded-domain assumption, we establish new regret bounds for these classical methods without any algorithmic modification. Remarkably, these regret bounds are fully optimal in all parameters (and can be achieved even without knowing $\mathsf{p}$), suggesting that OCO with heavy tails can be solved effectively without any extra operation (e.g., gradient clipping). Our new results have several applications. A particularly interesting one is the first provable and optimal convergence result for nonsmooth nonconvex optimization under heavy-tailed noise without gradient clipping. Furthermore, we explore broader settings (e.g., smooth OCO) and extend our ideas to optimistic algorithms to handle different cases simultaneously.
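The message can be illustrated with vanilla projected Online Gradient Descent run unmodified on heavy-tailed gradients (a toy sketch; the loss, step size, and noise law are our illustrative choices, not the paper's setting).

import numpy as np

rng = np.random.default_rng(0)
d, T, D = 5, 10_000, 1.0                     # dimension, horizon, domain radius
w = np.zeros(d)
w_star = np.full(d, D / np.sqrt(d))          # comparator inside the ball of radius D
regret = 0.0

for t in range(1, T + 1):
    noise = rng.standard_t(df=1.5, size=d)   # infinite variance, finite p-th moment for p < 1.5
    grad = (w - w_star) + noise              # noisy gradient of f(w) = ||w - w*||^2 / 2
    regret += 0.5 * np.sum((w - w_star) ** 2)
    w -= (D / np.sqrt(t)) * grad             # standard OGD step, no clipping
    norm = np.linalg.norm(w)
    if norm > D:
        w *= D / norm                        # projection onto the bounded domain

print("average regret:", regret / T)         # shrinks as T grows, per the theory

The projection onto the bounded domain is the only mechanism taming the noise here, matching the abstract's point that no extra operation such as clipping is needed.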
Trawl processes are a family of continuous-time, infinitely divisible, stationary processes whose correlation structure is entirely characterized by their so-called trawl function. This paper investigates the problem of estimating non-linear functionals of a trawl function under in-fill and long-span sampling schemes. Specifically, building on the work of \cite{SauriVeraart23}, we introduce non-parametric estimators for functionals of the type $\Psi_{t}(g)=\int_{0}^{t}g(a(s))\,\mathrm{d}s$ and $\Lambda_t(g)=\int_{t}^{\infty}g(a(s))\,\mathrm{d}s$, where $a$ represents the trawl function of interest and $g$ a non-linear test function. We show that our estimator for $\Psi_{t}(g)$ is consistent and asymptotically Gaussian regardless of the memory of the process. We further demonstrate that the same phenomenon occurs for the estimation of $\Lambda_t(g)$ as long as $g(x)=\mathrm{O}(\lvert x\rvert^p)$ as $x\to0$, for some $p>3$. Additionally, we illustrate how our results can be used to construct a test statistic, robust to memory effects, for the presence of $T$-dependence.
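A deliberately simplified sketch of the plug-in idea (ours; the nonparametric estimation of $a$ itself, as in \cite{SauriVeraart23}, is not reproduced): given estimates $\hat a$ of the trawl function on a grid, $\Psi_t(g)$ is approximated by a Riemann sum.

import numpy as np

def psi_hat(g, a_hat, delta):
    """Riemann-sum plug-in estimator of Psi_t(g) from grid values of the trawl function."""
    return delta * np.sum(g(a_hat))

# Sanity check with the exponential trawl function a(s) = exp(-s) and g(x) = x^2,
# for which Psi_t(g) = (1 - exp(-2 t)) / 2 in closed form.
t, delta = 2.0, 0.001
grid = np.arange(0.0, t, delta)
a_hat = np.exp(-grid)                            # stand-in for a nonparametric estimate of a
print(psi_hat(lambda x: x ** 2, a_hat, delta))   # ~0.4908 = (1 - e^{-4}) / 2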
For overparameterized linear regression with isotropic Gaussian design and the minimum-$\ell_p$ interpolator, $p\in(1,2]$, we give a unified, high-probability characterization of the scaling of the family of parameter norms $\{\lVert \widehat{w}_p \rVert_r\}_{r \in [1,p]}$ with sample size. We solve this basic but unresolved question through a simple dual-ray analysis, which reveals a competition between a signal *spike* and a *bulk* of null coordinates in $X^\top Y$, yielding closed-form predictions for (i) a data-dependent transition $n_\star$ (the "elbow"), and (ii) a universal threshold $r_\star=2(p-1)$ that separates the norms $\lVert \widehat{w}_p \rVert_r$ which plateau from those that continue to grow with an explicit exponent. This unified solution resolves the scaling of *all* $\ell_r$ norms within the family $r\in [1,p]$ under $\ell_p$-biased interpolation, and explains in one picture which norms saturate and which increase as $n$ grows. We then study diagonal linear networks (DLNs) trained by gradient descent. By calibrating the initialization scale $\alpha$ to an effective $p_{\mathrm{eff}}(\alpha)$ via the DLN separable potential, we show empirically that DLNs inherit the same elbow/threshold laws, providing a predictive bridge between explicit and implicit bias. Given that many generalization proxies depend on $\lVert \widehat{w}_p \rVert_r$, our results suggest that their predictive power depends sensitively on which $\ell_r$ norm is used.
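As a quick arithmetic illustration of the threshold formula (ours, merely restating the display above): $r_\star=2(p-1)$ lies inside the studied range $[1,p]$ exactly when $p\in[3/2,2]$, since $2(p-1)\ge 1 \iff p\ge 3/2$ and $2(p-1)\le p \iff p\le 2$. Thus $p=3/2$ places the threshold at the left endpoint $r_\star=1$ and $p=2$ at the right endpoint $r_\star=2$, so the phase boundary sweeps across the entire family of norms as $p$ ranges over $[3/2,2]$, while for $p<3/2$ every norm in the family sits on the same side of the threshold.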
Pass$@k$ is widely used to report the reasoning performance of LLMs, but it often produces unstable and potentially misleading rankings, especially when the number of trials (samples) is limited and computational resources are constrained. We present a principled Bayesian evaluation framework that replaces Pass$@k$ and average accuracy over $N$ trials (avg$@N$) with posterior estimates of a model's underlying success probability and credible intervals, yielding stable rankings and a transparent decision rule for differences. Evaluation outcomes are modeled as categorical (not just 0/1) with a Dirichlet prior, giving closed-form expressions for the posterior mean and uncertainty of any weighted rubric and enabling the use of prior evidence when appropriate. Theoretically, under a uniform prior, the Bayesian posterior mean is order-equivalent to average accuracy (Pass$@1$), explaining its empirical robustness while adding principled uncertainty. Empirically, in simulations with known ground-truth success rates and on AIME'24/'25, HMMT'25, and BrUMO'25, the posterior-based procedure achieves faster convergence and greater rank stability than Pass$@k$ and recent variants, enabling reliable comparisons at far smaller sample counts. The framework clarifies when observed gaps are statistically meaningful (non-overlapping credible intervals) versus noise, and it naturally extends to graded, rubric-based evaluations. Together, these results recommend replacing Pass$@k$ for LLM evaluation and ranking with a posterior-based, compute-efficient protocol that unifies binary and non-binary evaluation while making uncertainty explicit. Source code is available at this https URL
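The binary special case takes only a few lines (our sketch; the trial counts are hypothetical): with $k$ successes in $N$ trials and a uniform Beta$(1,1)$ prior, the posterior is Beta$(k+1,\,N-k+1)$, whose mean $(k+1)/(N+2)$ is order-equivalent to average accuracy $k/N$ for a fixed $N$.

import numpy as np
from scipy import stats

def posterior_summary(k, n, alpha=0.05):
    """Posterior mean and equal-tailed credible interval under a uniform prior."""
    post = stats.beta(k + 1, n - k + 1)
    return post.mean(), post.ppf([alpha / 2, 1 - alpha / 2])

mean_a, ci_a = posterior_summary(k=7, n=10)   # model A: 7/10 correct
mean_b, ci_b = posterior_summary(k=4, n=10)   # model B: 4/10 correct
print(f"A: {mean_a:.3f}, 95% CI {np.round(ci_a, 3)}")
print(f"B: {mean_b:.3f}, 95% CI {np.round(ci_b, 3)}")
# The intervals overlap, so by the decision rule above the observed gap is
# not yet statistically meaningful at these sample counts.

The Dirichlet case in the paper generalizes this to graded, rubric-based outcomes with more than two categories.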
This paper develops limit theorems for random variables with network dependence, without requiring the individuals in the network to be located in a Euclidean or metric space. This distinguishes our approach from most existing limit theorems in network statistics and econometrics, which are based on weak dependence concepts such as strong mixing, near-epoch dependence, or $\psi$-dependence. All these weak dependence concepts presuppose an underlying metric. By relaxing the assumption of an underlying metric space, our theorems can be applied to a broader range of network data, including financial and social networks. To derive the limit theorems, we generalize the concept of functional dependence (also known as physical dependence) from time series to random variables with network dependence. Using this framework, we establish several inequalities, a law of large numbers, and central limit theorems. Furthermore, we demonstrate the verifiability of our high-level conditions by deriving primitive sufficient conditions for spatial autoregressive models, which are widely used in network data analysis.
Optimization under heavy-tailed noise has recently attracted growing attention, since it better matches the gradient noise empirically observed in many modern machine learning tasks. Concretely, instead of a finite second moment on the gradient noise, a bounded ${\frak p}$-th moment with ${\frak p}\in(1,2]$ has been recognized as more realistic (say, bounded above by $\sigma_{\frak l}^{\frak p}$ for some $\sigma_{\frak l}\ge0$). A simple yet effective operation, gradient clipping, is known to handle this challenge successfully. Specifically, Clipped Stochastic Gradient Descent (Clipped SGD) guarantees a high-probability rate ${\cal O}(\sigma_{\frak l}\ln(1/\delta)T^{1/{\frak p}-1})$ (resp. ${\cal O}(\sigma_{\frak l}^2\ln^2(1/\delta)T^{2/{\frak p}-2})$) for nonsmooth convex (resp. strongly convex) problems, where $\delta\in(0,1]$ is the failure probability and $T\in\mathbb{N}$ is the time horizon. In this work, we provide a refined analysis of Clipped SGD and obtain two faster rates, ${\cal O}(\sigma_{\frak l}d_{\rm eff}^{-1/2{\frak p}}\ln^{1-1/{\frak p}}(1/\delta)T^{1/{\frak p}-1})$ and ${\cal O}(\sigma_{\frak l}^2d_{\rm eff}^{-1/{\frak p}}\ln^{2-2/{\frak p}}(1/\delta)T^{2/{\frak p}-2})$, improving on the aforementioned best results, where $d_{\rm eff}\ge1$ is a quantity we call the $\textit{generalized effective dimension}$. Our analysis improves upon the existing approach in two respects: better utilization of Freedman's inequality and finer bounds on the clipping error under heavy-tailed noise. In addition, we extend the refined analysis to convergence in expectation and obtain new rates that break the previously known lower bounds. Lastly, to complement the study, we establish new lower bounds for both high-probability and in-expectation convergence. Notably, the in-expectation lower bounds match our new upper bounds, indicating the optimality of our refined analysis for convergence in expectation.
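A minimal sketch of the Clipped SGD update analyzed here (illustrative constants and loss, not the tuned parameters from the analysis): the stochastic gradient is rescaled onto a ball of radius $\tau$ whenever it exceeds it.

import numpy as np

rng = np.random.default_rng(0)
d, T, lr, tau = 5, 10_000, 0.01, 5.0        # dimension, horizon, step size, clipping radius
w, w_star = np.zeros(d), np.ones(d)

for t in range(T):
    noise = rng.standard_t(df=1.5, size=d)  # heavy tail: finite p-th moment only for p < 1.5
    g = (w - w_star) + noise                # noisy gradient of ||w - w*||^2 / 2
    g_norm = np.linalg.norm(g)
    if g_norm > tau:
        g *= tau / g_norm                   # gradient clipping
    w -= lr * g

print("final distance to optimum:", np.linalg.norm(w - w_star))

The refined analysis concerns precisely the error introduced by the clipping step, bounding it more finely via Freedman's inequality and the generalized effective dimension.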
The impact of routine, smaller outages on distribution system customers, measured in customer minutes interrupted, can be tracked using conventional reliability indices. However, the customer minutes interrupted in large blackout events are extremely variable, which makes it difficult to quantify the customer impact of these extreme events with resilience metrics. We solve this problem with the System Average Large Event Duration Index (SALEDI), which logarithmically transforms the customer minutes interrupted. We explain how this new resilience metric works, compare it with alternatives, quantify its statistical accuracy, and illustrate its practical use with standard outage data from five utilities.
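Since the precise definition of SALEDI is given in the paper, the following is only a heavily hedged illustration of the core idea: summarizing large events on a logarithmic scale of customer minutes interrupted (CMI) tames their extreme variability. The threshold and the averaging convention below are our assumptions, not the metric's actual specification.

import numpy as np

def log_scale_large_event_summary(event_cmi, large_event_threshold=1e6):
    """Mean order of magnitude of CMI over large events (illustrative only,
    NOT the SALEDI formula)."""
    cmi = np.asarray(event_cmi, dtype=float)
    large = cmi[cmi >= large_event_threshold]
    return np.nan if large.size == 0 else float(np.mean(np.log10(large)))

events = [2e6, 5e7, 1.2e6, 8e8, 3e6]   # hypothetical event CMI values
print(log_scale_large_event_summary(events))

On a log scale, an event ten times larger moves the summary by one unit rather than dominating it, which is what makes a log-transformed index statistically stable across very different blackout sizes.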
Robustness under latent distribution shift remains challenging in partially observable reinforcement learning. We formalize a focused setting where an adversary selects a hidden initial latent distribution before the episode, termed an adversarial latent-initial-state POMDP. Theoretically, we prove a latent minimax principle, characterize worst-case defender distributions, and derive approximate best-response inequalities with finite-sample concentration bounds that make the optimization and sampling terms explicit. Empirically, using a Battleship benchmark, we demonstrate that targeted exposure to shifted latent distributions reduces average robustness gaps between Spread and Uniform distributions from 10.3 to 3.1 shots at equal budget. Furthermore, iterative best-response training exhibits budget-sensitive behavior that is qualitatively consistent with the theorem-guided diagnostics once one accounts for discounted PPO surrogates and finite-sample noise. Ultimately, we show that for latent-initial-state problems, the framework yields a clean evaluation game and useful theorem-motivated diagnostics while also making clear where implementation-level surrogates and optimization limits enter.
We study the aggregate hazard rate of a heterogeneous population whose individual event intensities are modeled as Cox (doubly stochastic) processes. In the deterministic hazard setting, the observed pool hazard is the survival-weighted mean of the individual hazards, and its time derivative equals the mean individual hazard drift minus a variance term. This yields a transparent structural explanation of burnout in mortgage pools. We extend this perspective to stochastic intensity models. The observed pool hazard remains a survival-weighted mean, but now evolves as an Itô process whose drift contains the mean drift of the individual hazards and a negative selection term driven by cross-sectional dispersion, together with a diffusion term inherited from the common factor. We formulate the general identity and discuss special cases relevant to mortgage prepayment modeling.
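For the deterministic case, the two statements above can be written explicitly (a standard selection identity; the notation is ours): with individual hazards $\lambda_i$ and survivals $S_i(t)=\exp\!\big(-\int_0^t \lambda_i(u)\,du\big)$,
$$\bar\lambda(t)=\frac{\sum_i S_i(t)\,\lambda_i(t)}{\sum_j S_j(t)},\qquad \frac{d}{dt}\,\bar\lambda(t)=\overline{\lambda'}(t)-\operatorname{Var}_w\!\big(\lambda(t)\big),$$
where the bar and $\operatorname{Var}_w$ denote the survival-weighted mean and variance across individuals. The negative variance term is the selection effect: high-hazard individuals exit the pool first, which is exactly the burnout mechanism described above.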
Interstellar objects (ISOs) motivate a coupled mission-design and inference question relevant to spacecraft dynamics and control in extreme environments: if volatile-rich, rotating comet-like bodies were used for sustained deep-space navigation by exploiting pre-existing hyperbolic motion and in-situ propellant, what stability requirements arise under non-gravitational forcing, and what astrometric signatures might distinguish active stabilization from uncontrolled natural dynamics? We develop a stability-theoretic framework for trajectory tracking with jet-actuated correction, and show that high-speed transit geometry -- including debris-belt avoidance and encounter phasing -- tightly constrains feasible trajectories, making long-horizon tracking stability mission-critical. We model tracking residuals as the balance of disturbances and corrective action, and derive stability conditions across four levels: disturbance-energy stability, outer-loop contraction, actuator-memory stability, and rotation-mediated (Floquet) stability. The analysis implies residual diagnostics that can motivate empirical tests: under comparable forcing, effective stabilization is expected to strengthen short-horizon error correction, reduce event-conditioned persistence and variance clustering, regularize standardized innovations, and yield bounded post-shock recovery. More broadly, the framework provides a reference for deep-space guidance and control under nonlinear, multi-field disturbances and for planetary-defense concepts involving attitude shaping or impulsive kinetic impact.