New articles on Quantitative Finance


[1] 2603.18021

Anomaly prediction in XRP price with topological features

This research studies XRP cryptoasset price dynamics, with a particular focus on forecasting atypical price movements. Recent studies suggest that topological properties of transaction graphs are highly informative for understanding cryptocurrency price behavior. In this work, we show that specific topological properties of XRP transaction graphs carry important information about extreme XRP price surges and can be used to improve prediction of anomalous price dynamics.
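
A minimal sketch of this kind of pipeline in Python (networkx plus scikit-learn): daily transaction graphs are summarized by a few topological statistics that then feed a classifier for surge labels. The graph construction, feature set, and labels below are illustrative stand-ins, not the paper's.

    import networkx as nx
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def daily_features():
        # Random graph standing in for one day's XRP transaction network.
        g = nx.gnm_random_graph(200, int(rng.integers(400, 900)))
        return [
            nx.transitivity(g),                      # global clustering
            nx.degree_assortativity_coefficient(g),  # hub-to-hub mixing
            np.mean([d for _, d in g.degree()]),     # mean degree
        ]

    X = np.array([daily_features() for _ in range(300)])
    y = (X[:, 2] > np.median(X[:, 2])).astype(int)   # toy surge label, illustration only
    clf = LogisticRegression().fit(X, y)
    print("in-sample accuracy:", clf.score(X, y))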


[2] 2603.18195

The Role of Data and Metrics in Measuring Inequality Worldwide. A Tribute to Giovanni Andrea Cornia's Lifelong Work on the World Ginis

This paper pays tribute to Professor Giovanni Andrea Cornia's lifelong contributions to the measurement of global inequality. We review twelve world and regional databases of the Gini coefficient, illustrate their coverage, overlap, and data gaps, and analyse the major sources of discrepancy among published Ginis. Merging all databases into a unified collection of over 122,000 observations spanning 222 countries from 1867 to 2024, we document how differences in welfare metrics, reference units, sub-metric definitions, post-survey adjustments, and survey design produce Gini estimates that diverge considerably -- sometimes by as much as 50 percentage points -- for the same country and year. We quantify pairwise cross-database discordance, document the income-consumption Gini gap by region and income group, and discuss the contributions of welfare metric and equivalence scale choices to cross-database dispersion. We extend the analysis with a dedicated discussion of comparability across time and across measurement dimensions, showing how multiple layers of methodological choice interact to make any single Gini figure a product of a complex chain of decisions that are rarely fully disclosed. Our analysis confirms that the choice of welfare metric remains the single most important source of cross-country non-comparability, while sub-metric definitions and equivalence scales introduce further systematic differences that are routinely overlooked in comparative work.
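
The welfare-metric point is easy to reproduce in miniature: the same population yields quite different Ginis depending on whether income or (smoother) consumption is measured. A small Python sketch with simulated lognormal data, not drawn from any of the twelve databases:

    import numpy as np

    def gini(x):
        # Gini via the sorted cumulative-sum identity.
        x = np.sort(np.asarray(x, dtype=float))
        n = x.size
        cum = np.cumsum(x)
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    rng = np.random.default_rng(1)
    income = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
    consumption = income ** 0.7  # stylized: consumption smoother than income
    print(f"income Gini:      {gini(income):.3f}")
    print(f"consumption Gini: {gini(consumption):.3f}")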


[3] 2603.18440

Mapping the Midweek Mountain: The New Geography of Hybrid Work

This paper provides a behavioral analysis of the post-pandemic transformation of work, using a dataset of approximately 41 billion mobile geolocation records from 73.5 million individuals in the five largest U.S. metropolitan areas, spanning the pre- to post-pandemic periods. By tracking movements between corporate headquarters, residences, and other points of interest, we document a structural shift in work patterns. Office-based workdays declined from 42% in 2019 to 20.7% in 2022, before settling at 29.1% in 2023, a new equilibrium significantly below pre-pandemic levels. A "midweek mountain" of office attendance, peaking on Tuesdays through Thursdays, emerged as a robust new post-pandemic phenomenon. The nature of remote work has also changed: both during and after the pandemic, employees working from home allocated significantly more time to non-work locations like parks and malls during the workday. These findings indicate that the pandemic catalyzed a lasting transformation not just in work arrangements but also in the integration of personal and professional life, with implications for corporate policy, urban economics, and the future of work.


[4] 2603.18716

Poverty traps are rare, but trappedness isn't

The persistence of poverty is not well explained by who is poor. We argue the relevant object of measurement is trappedness--expected escape time from deprivation--which varies systematically across institutional environments and is invisible to standard poverty indices. Using Markov chains estimated on twenty years of longitudinal data from 27 European countries, we show that countries with identical deprivation rates differ in escape times by up to fourfold. These differences are not explained by household characteristics alone: exogenous shocks reshape welfare landscapes differently across countries, with divergence tracking welfare regime architecture rather than household composition. The mechanism is behavioural: health constrains a household's capacity to convert income gains into durable welfare improvement. Income transfers without health improvement fail to reduce poverty-return risk; combined interventions are super-additive across 28 countries, and the gap widens with transfer size. These findings dissolve the long-running poverty trap debate--studies that rejected traps measured the wrong dimension; studies that found them captured one projection of a multidimensional dynamic process. Trappedness is continuous, multidimensional, and institutionally shaped.
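
The headline distinction between a poverty rate and trappedness can be seen in a two-state reduction of the paper's Markov chains. In the sketch below (illustrative parameters, not the paper's estimates), two countries share an identical stationary deprivation rate yet differ fourfold in expected escape time:

    def stationary_poor_share(entry, exit_):
        # Two-state chain: P(non-deprived -> deprived) = entry,
        # P(deprived -> non-deprived) = exit_.
        return entry / (entry + exit_)

    def expected_escape_time(exit_):
        # Time to leave deprivation is geometric: E[T] = 1 / exit_.
        return 1.0 / exit_

    for name, entry, exit_ in [("country A", 0.02, 0.08),
                               ("country B", 0.08, 0.32)]:
        print(f"{name}: poverty rate {stationary_poor_share(entry, exit_):.2f}, "
              f"expected escape time {expected_escape_time(exit_):.1f} years")

Both countries report a 20% deprivation rate, which is all a standard poverty index sees; the escape times of 12.5 versus 3.1 years are the object the paper argues should be measured.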


[5] 2603.18920

The Optimal Reset-Hour of a Once-Daily Petrol Price Increase Limit

A German ministry recently proposed a limit of at most one price increase per day for petrol stations. At what time should the price reset be allowed in order to lower price levels the most throughout the day? To answer this question, I infer the share of price-sensitive consumers for every hour of the day from German petrol station price data, based on a simple spatial-competition model. I focus on weekdays, which are the relevant target because commuter demand is less flexible than weekend demand. Hourly petrol station prices peak at 07:00 and bottom out at 19:00. Given the inferred hourly shares of price-sensitive consumers and hourly passenger-car traffic frequencies as a proxy for quantity, I evaluate every possible reset hour for the new policy. The lowest traffic-weighted average price is achieved by an 11:00 reset. With this reset hour, the resulting equilibrium price is constant throughout the day. This would lead to lower prices in the morning but higher prices in the evening, harming price-sensitive consumers but benefiting morning commuters and firms.
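
A sketch of the evaluation step only, not the paper's structural model: score each candidate reset hour by the traffic-weighted average price. The price dynamics under the cap are collapsed here into a crude "prices may only fall until the next reset" rule, and all numbers are stylized assumptions:

    import numpy as np

    hours = np.arange(24)
    # Stylized inputs, not the paper's estimates: prices peak at 07:00 and
    # bottom out at 19:00; traffic peaks at the commute hours.
    price = 1.80 + 0.05 * np.cos(2 * np.pi * (hours - 7) / 24)
    traffic = 1.0 + 0.5 * (np.exp(-(hours - 8) ** 2 / 8)
                           + np.exp(-(hours - 17) ** 2 / 8))

    def weighted_avg_price(reset_hour):
        # Crude stand-in for the cap: after the reset hour, prices may only fall.
        p = price.copy()
        order = np.roll(hours, -reset_hour)   # day re-indexed from the reset
        p[order] = np.minimum.accumulate(p[order])
        return np.average(p, weights=traffic)

    best = min(hours, key=weighted_avg_price)
    print("best reset hour under these stylized inputs:", int(best))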


[6] 2603.18962

Robust Investment-Driven Insurance Pricing and Liquidity Management

This paper develops a dynamic equilibrium model of the insurance market that jointly characterizes insurers' underwriting, investment, recapitalization, and dividend policies under model uncertainty and financial frictions. Competitive insurers maximize shareholder value under a subjective worst-case probability measure, giving rise to liquidity-driven underwriting cycles and flight-to-quality behavior. While an equilibrium typically fails to exist in such a dynamic liquidity-management framework with external financial investment, we show that incorporating model uncertainty restores equilibrium existence under plausible parameter conditions. Moreover, the model uncovers a novel relationship between the correlation of insurance and financial market risks and the equilibrium insurance price: negative loadings may emerge when insurance gains and financial returns are positively correlated, contrary to conventional intuition.


[7] 2603.18969

Robust Investment-Driven Insurance Pricing under Correlation Ambiguity

As insurers increasingly behave like financial intermediaries and actively participate in capital markets, understanding the dependence structure between insurance and financial risks becomes crucial for insurers' operations. This paper studies dynamic equilibrium insurance pricing when insurers face ambiguity about the correlation between insurance and financial risks and optimally choose underwriting and investment strategies under worst-case beliefs. Correlation ambiguity can generate multiple equilibrium regimes. Contrary to conventional intuition, we find that ambiguity does not necessarily increase insurance prices or reduce insurers' utility.
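
A one-period caricature of the worst-case logic, far simpler than the paper's dynamic equilibrium and with all parameters assumed: the insurer picks underwriting and investment exposures to maximize a mean-variance objective after nature picks the least favorable correlation in an interval.

    import numpy as np

    mu_I, sig_I = 0.03, 0.10   # insurance margin and volatility (assumed)
    mu_F, sig_F = 0.05, 0.20   # financial excess return and volatility (assumed)
    gamma = 2.0                # risk aversion (assumed)
    rhos = np.linspace(-0.5, 0.5, 101)  # ambiguity interval for the correlation

    def utility(q, pi, rho):
        mean = q * mu_I + pi * mu_F
        var = (q * sig_I) ** 2 + (pi * sig_F) ** 2 \
              + 2 * rho * q * pi * sig_I * sig_F
        return mean - 0.5 * gamma * var

    grid = np.linspace(0.0, 2.0, 41)
    best = max(((q, pi) for q in grid for pi in grid),
               key=lambda qp: min(utility(*qp, rho) for rho in rhos))
    print("robust underwriting and investment exposures:", best)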


[8] 2603.18053

Auditing the Auditors: Does Community-based Moderation Get It Right?

Online social platforms increasingly rely on crowd-sourced systems to label misleading content at scale, but these systems must both aggregate users' evaluations and decide whose evaluations to trust. To address the latter, many platforms audit users by rewarding agreement with the final aggregate outcome, a design we term consensus-based auditing. We analyze the consequences of this design in X's Community Notes, which in September 2022 adopted consensus-based auditing that ties users' eligibility for participation to agreement with the eventual platform outcome. We find evidence of strategic conformity: minority contributors' evaluations drift toward the majority and their participation share falls on controversial topics, where independent signals matter most. We formalize this mechanism in a behavioral model in which contributors trade off private beliefs against anticipated penalties for disagreement. Motivated by these findings, we propose a two-stage auditing and aggregation algorithm that weights contributors by the stability of their past residuals rather than by agreement with the majority. The method first accounts for differences across content and contributors, and then measures how predictable each contributor's evaluations are relative to a latent-factor model. Contributors whose evaluations are consistently informative receive greater influence in aggregation, even when they disagree with the prevailing consensus. In the Community Notes data, this approach improves out-of-sample predictive performance while avoiding penalization of disagreement.
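
A compact sketch of the proposed weighting idea on simulated ratings, with the paper's latent-factor model reduced to two-way mean effects: contributors whose residuals are stable get more weight, regardless of whether they agree with the majority.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "note": rng.integers(0, 50, 2000),
        "user": rng.integers(0, 100, 2000),
        "rating": rng.normal(size=2000),
    })

    # Stage 1: strip note and user effects (stand-in for the latent-factor fit).
    resid = df["rating"] - df.groupby("note")["rating"].transform("mean")
    resid -= resid.groupby(df["user"]).transform("mean")

    # Stage 2: weight each user by residual stability (inverse variance),
    # not by agreement with the majority.
    stability = 1.0 / (resid.groupby(df["user"]).var() + 1e-3)
    df["w"] = df["user"].map(stability)

    agg = df.assign(wr=df["rating"] * df["w"]).groupby("note")[["wr", "w"]].sum()
    print((agg["wr"] / agg["w"]).head())  # stability-weighted note scores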


[9] 2603.18107

ARTEMIS: A Neuro Symbolic Framework for Economically Constrained Market Dynamics

Deep learning models in quantitative finance often operate as black boxes, lacking interpretability and failing to incorporate fundamental economic principles such as no-arbitrage constraints. This paper introduces ARTEMIS (Arbitrage-free Representation Through Economic Models and Interpretable Symbolics), a novel neuro-symbolic framework combining a continuous-time Laplace Neural Operator encoder, a neural stochastic differential equation regularised by physics-informed losses, and a differentiable symbolic bottleneck that distils interpretable trading rules. The model enforces economic plausibility via two novel regularisation terms: a Feynman-Kac PDE residual penalising local no-arbitrage violations, and a market-price-of-risk penalty bounding the instantaneous Sharpe ratio. We evaluate ARTEMIS against six strong baselines on four datasets: Jane Street, Optiver, Time-IMM, and DSLOB (a synthetic crash regime). Results show that ARTEMIS achieves state-of-the-art directional accuracy, outperforming all baselines on DSLOB (64.96%) and Time-IMM (96.0%). A comprehensive ablation study confirms each component's contribution: removing the PDE loss reduces directional accuracy from 64.89% to 50.32%. Underperformance on Optiver is attributed to its long sequence length and volatility-focused target. By providing interpretable, economically grounded predictions, ARTEMIS bridges the gap between deep learning's power and the transparency demanded in quantitative finance.
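
Of the two regularisers, the market-price-of-risk penalty is the easier one to sketch. A hedged numpy version, with functional form, risk-free rate, and Sharpe bound assumed (in the paper this would act on the neural SDE's drift and diffusion outputs):

    import numpy as np

    def sharpe_penalty(mu, sigma, r=0.02, bound=2.0, eps=1e-6):
        # Market price of risk lambda = |mu - r| / sigma; penalize excess over bound.
        lam = np.abs(mu - r) / (np.abs(sigma) + eps)
        return np.mean(np.maximum(lam - bound, 0.0) ** 2)

    mu = np.array([0.05, 0.40, 0.10])     # model-implied drifts (assumed)
    sigma = np.array([0.20, 0.10, 0.15])  # model-implied volatilities (assumed)
    print("penalty:", sharpe_penalty(mu, sigma))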


[10] 2603.19136

Adaptive Regime-Aware Stock Price Prediction Using Autoencoder-Gated Dual Node Transformers with Reinforcement Learning Control

Stock markets exhibit regime-dependent behavior where prediction models optimized for stable conditions often fail during volatile periods. Existing approaches typically treat all market states uniformly or require manual regime labeling, which is expensive and quickly becomes stale as market dynamics evolve. This paper introduces a prediction framework that adaptively identifies deviations from normal market conditions and routes data through specialized prediction pathways. The architecture consists of three components: (1) an autoencoder trained on normal market conditions that identifies anomalous regimes through reconstruction error, (2) dual node transformer networks specialized for stable and event-driven market conditions, respectively, and (3) a Soft Actor-Critic reinforcement learning controller that adaptively tunes the regime detection threshold and pathway blending weights based on prediction performance feedback. The reinforcement learning component enables the system to learn adaptive regime boundaries, defining anomalies as market states where standard prediction approaches fail. Experiments on 20 S&P 500 stocks spanning 1982 to 2025 demonstrate that the proposed framework achieves 0.68% MAPE for one-day predictions without the reinforcement controller and 0.59% MAPE with the full adaptive system, compared to 0.80% for the baseline integrated node transformer. Directional accuracy reaches 72% with the complete framework. The system maintains robust performance during high-volatility periods, with MAPE below 0.85% when baseline models exceed 1.5%. Ablation studies confirm that each component contributes meaningfully: autoencoder routing accounts for 36% relative MAPE degradation upon removal, followed by the SAC controller at 15% and the dual-path architecture at 7%.
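
A minimal sketch of the routing logic: a linear autoencoder (PCA) stands in for the trained autoencoder, two constant outputs stand in for the stable and event-driven transformer pathways, and the SAC controller that would tune the threshold and blending online is omitted. All numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))        # "normal regime" training windows
    mu = X.mean(axis=0)
    V = np.linalg.svd(X - mu, full_matrices=False)[2][:3]  # 3-component code

    def recon_error(x):
        z = (x - mu) @ V.T
        return np.linalg.norm(x - (mu + z @ V))

    def predict(x, tau=4.0, sharpness=2.0):
        # tau and sharpness are what the SAC controller would tune online.
        w = 1.0 / (1.0 + np.exp(-sharpness * (recon_error(x) - tau)))
        stable_pred, event_pred = 0.001, -0.004  # placeholder pathway outputs
        return (1 - w) * stable_pred + w * event_pred

    print(predict(rng.normal(size=10)))      # typical input -> mostly stable path
    print(predict(5 * rng.normal(size=10)))  # anomalous input -> event path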


[11] 2603.19225

FinTradeBench: A Financial Reasoning Benchmark for LLMs

Real-world financial decision-making is a challenging problem that requires reasoning over heterogeneous signals, including company fundamentals derived from regulatory filings and trading signals computed from price dynamics. Recently, with the advancement of Large Language Models (LLMs), financial analysts have begun to use them for financial decision-making tasks. However, existing financial question-answering benchmarks for testing these models primarily focus on company balance sheet data and rarely evaluate reasoning over how company stocks trade in the market or their interactions with fundamentals. To combine the strengths of both signal types, we introduce FinTradeBench, a benchmark for evaluating financial reasoning that integrates company fundamentals and trading signals. FinTradeBench contains 1,400 questions grounded in NASDAQ-100 companies over a ten-year historical window. The benchmark is organized into three reasoning categories: fundamentals-focused, trading-signal-focused, and hybrid questions requiring cross-signal reasoning. To ensure reliability at scale, we adopt a calibration-then-scaling framework that combines expert seed questions, multi-model response generation, intra-model self-filtering, numerical auditing, and human-LLM judge alignment. We evaluate 14 LLMs under zero-shot prompting and retrieval-augmented settings and observe a clear performance gap. Retrieval substantially improves reasoning over textual fundamentals, but provides limited benefit for trading-signal reasoning. These findings highlight fundamental challenges in numerical and time-series reasoning for current LLMs and motivate future research in financial intelligence.


[12] 2411.05938

Uncertain and Asymmetric Forecasts

Measures of inflation uncertainty and directional risk derived from higher moments of forecast distributions are contaminated by the first moment, but in distinct ways. Using individual density forecasts from the ECB Survey of Professional Forecasters, this paper shows that 42% of the variation in raw forecast variance reflects the distance of expected inflation from target, a mechanical level effect, while raw asymmetry is too noisy to identify directional risk without reference to the central forecast. We propose two complementary corrections. Normalized Uncertainty (NU) purges dispersion of its predictable component linked to the policy anchor, recovering genuine belief imprecision. Asymmetry Coherence (AC) extracts directional risk only when asymmetry aligns with the central forecast, formalizing the balance of risks. These corrections alter inference. In a replication of Barro (1995), the volatility effect on growth disappears once level contamination is removed, while the inflation-level coefficient regains significance. In a VAR, the sign of the policy response reverses: raw asymmetry suggests easing, whereas coherent upside risk predicts tightening. In the credit channel, higher uncertainty slows and weakens pass-through from policy easing to loan pricing, especially at longer maturities. A division of roles emerges: NU governs transmission, AC informs policy response. Higher moments are informative only when measurement separates macroeconomic signals from first-moment contamination.
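
The NU construction is essentially a regression purge. A simulated sketch in which the level effect is built into the fake data, so the printed share is by construction and should not be read as the paper's 42%:

    import numpy as np

    rng = np.random.default_rng(0)
    point = rng.normal(2.0, 1.5, size=400)  # point inflation forecasts (simulated)
    dist2 = (point - 2.0) ** 2               # squared distance from the 2% target
    raw_var = 0.3 + 0.25 * dist2 + rng.gamma(2.0, 0.1, size=400)  # built-in level effect

    X = np.column_stack([np.ones_like(dist2), dist2])
    beta = np.linalg.lstsq(X, raw_var, rcond=None)[0]
    nu = raw_var - X @ beta                  # Normalized Uncertainty residual
    print("variance share tied to the level effect:",
          round(1 - nu.var() / raw_var.var(), 2))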


[13] 2507.08193

Entity-Specific Cyber Risk Assessment using InsurTech Empowered Risk Factors

The lack of high-quality public cyber incident data limits empirical research and predictive modeling for cyber risk assessment. This challenge persists due to the reluctance of companies to disclose incidents that could damage their reputation or investor confidence. Therefore, from an actuarial perspective, potential resolutions involve two aspects: enhancing existing cyber incident datasets and applying advanced modeling techniques that make the most of the available data. A review of existing data-driven methods highlights a significant lack of entity-specific organizational features in publicly available datasets. To address this gap, we propose a novel InsurTech framework that enriches cyber incident data with entity-specific attributes. We develop various machine learning (ML) models: a multilabel classification model to predict the occurrence of cyber incident types (e.g., Privacy Violation, Data Breach, Fraud and Extortion, IT Error, and Others) and a multioutput regression model to estimate their annual frequencies. Classifier and regressor chains are also implemented to explore dependencies among cyber incident types, but no significant correlations are observed in our datasets. In addition, we apply multiple interpretable ML techniques to identify and cross-validate potential risk factors developed by InsurTech across ML models. We find that InsurTech-empowered features improve the robustness of both occurrence prediction and frequency estimation compared with using conventional risk factors alone. The framework generates transparent, entity-specific cyber risk profiles, supporting customized underwriting and proactive cyber risk mitigation. It provides insurers and organizations with data-driven insights to support decision-making and compliance planning.
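
The two modeling tasks map directly onto standard scikit-learn tooling. A self-contained sketch on synthetic stand-in data; the feature set, incident taxonomy, and model choices here are illustrative, not the paper's InsurTech features:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.multioutput import MultiOutputClassifier, MultiOutputRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 12))                          # entity-specific features
    Y_occ = (rng.random((500, 5)) < 0.2).astype(int)        # 5 incident-type indicators
    Y_freq = rng.poisson(1.0, size=(500, 5)).astype(float)  # annual frequencies

    occ = MultiOutputClassifier(RandomForestClassifier(n_estimators=50)).fit(X, Y_occ)
    freq = MultiOutputRegressor(RandomForestRegressor(n_estimators=50)).fit(X, Y_freq)
    print(occ.predict(X[:2]))           # which incident types occur
    print(freq.predict(X[:2]).round(2)) # estimated annual frequencies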


[14] 2507.14420

The effects of temperature and rainfall anomalies on Mexican inflation

This paper measures the effects of temperature and precipitation shocks on Mexican inflation using a regional panel. To measure the long-term inflationary effects of climate shocks, we estimate a panel autoregressive distributed lag model (panel ARDL) of the quarterly variation of the price index against the population-weighted temperature and precipitation deviations from their historical norm, computed using the 30-year moving average. In addition, we measure the short-term effects of climate shocks by estimating impulse response functions using panel local projections. The results indicate that, in the short term, the climate variables have no statistically significant effect on Mexican inflation. In the long term, only deviations of precipitation from its norm have a statistically significant effect, while temperature deviations have none. Higher-than-normal precipitation has a positive and statistically significant effect on Mexican inflation for all items.
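
The anomaly construction is a rolling-window computation. A pandas sketch with a simulated quarterly temperature series; the window length follows the abstract's 30-year moving average, while everything else is assumed:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    idx = pd.period_range("1960Q1", "2024Q4", freq="Q")
    temp = pd.Series(20 + 0.01 * np.arange(len(idx)) + rng.normal(0, 0.5, len(idx)),
                     index=idx)

    # Norm = 30-year (120-quarter) moving average, lagged so the current
    # quarter is excluded; the shock is the deviation from that norm.
    norm = temp.rolling(window=120, min_periods=120).mean().shift(1)
    anomaly = temp - norm
    print(anomaly.dropna().tail())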


[15] 2603.04105

A Random Rule Model

We study stochastic choice when behavior is generated by switching among a small library of transparent deterministic decision procedures. The object of interest is procedural heterogeneity: how the relative importance of these procedures varies across decision environments. We model this heterogeneity with a Random Rule Model (RRM), in which menu-level choice probabilities arise from environment-dependent weights on named rules. We show that identification has a two-step structure. At a fixed feature value, variation in decisive-side patterns across menus identifies the vector of relative rule weights up to scale; across sufficiently rich feature values, these recovered weights identify the parameters of an affine gate. When the model is applied to a large dataset of binary lottery choices, the estimated procedure weights are concentrated on a small subset of interpretable rules and shift systematically with menu characteristics such as tradeoff complexity and dispersion asymmetry. Out-of-sample prediction and cross-dataset portability provide supporting evidence that the recovered procedural representation is empirically disciplined.
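
A toy rendering of the RRM's structure: named deterministic rules vote on a binary menu, and a gate maps menu features to rule weights. A softmax is used here so the weights form a proper distribution, whereas the paper's gate is affine and identified up to scale; all rules, features, and parameters are invented for illustration.

    import numpy as np

    # Three named deterministic rules for a lottery (p, x): win prob p, prize x.
    RULES = [
        lambda l: l["p"] * l["x"],  # expected value
        lambda l: l["x"],           # maximum payoff
        lambda l: l["p"],           # safety first: highest win probability
    ]

    def choice_prob(menu, features, W, b):
        a, c = menu
        w = np.exp(W @ features + b)
        w /= w.sum()                                   # gated rule weights
        votes = np.array([1.0 if r(a) >= r(c) else 0.0 for r in RULES])
        return float(w @ votes)                        # P(choose first option)

    menu = ({"p": 0.8, "x": 50}, {"p": 0.3, "x": 150})
    features = np.array([1.0, 0.4])                    # e.g. complexity, dispersion
    W, b = 0.1 * np.ones((3, 2)), np.zeros(3)
    print(choice_prob(menu, features, W, b))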


[16] 2603.10857

SPX-VIX Risk Computations Via Perturbed Optimal Transport

We propose a model-independent framework for generating SPX and VIX risk scenarios based on a joint optimal transport calibration of their market smiles. Starting from the entropic martingale optimal transport formulation of Guyon, we introduce a perturbation methodology that computes sensitivities of the calibrated coupling using a Fisher information linearization. This allows risk to be generated without performing a full recalibration after market shocks. We further introduce a dimension reduction method based on perturbed optimal transport that produces fast and stable risk estimates while preserving the structural properties of the calibrated model. The approach is combined with Skew Stickiness Ratio (SSR) dynamics to translate SPX shocks into perturbations of forward variance and VIX distributions. Numerical experiments show that the proposed method produces accurate risk estimates relative to full recalibration while being computationally much faster. A backtesting study also demonstrates improved hedging performance compared with stochastic local volatility models.
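
Stripped of the martingale and VIX-consistency constraints that make Guyon's calibration hard, the entropic-OT core is a Sinkhorn iteration. A heavily simplified sketch with toy marginals and a toy cost; the perturbation and Fisher-information layer of the paper is not attempted here:

    import numpy as np

    def sinkhorn(mu, nu, C, eps=1e-3, iters=500):
        # Entropic OT: coupling P with marginals mu, nu minimizing <P,C> + eps*KL.
        K = np.exp(-C / eps)
        u = np.ones_like(mu)
        for _ in range(iters):
            v = nu / (K.T @ u)
            u = mu / (K @ v)
        return u[:, None] * K * v[None, :]

    x = np.linspace(-0.2, 0.2, 50)   # SPX log-return grid
    y = np.linspace(0.10, 0.40, 50)  # VIX level grid
    mu = np.exp(-x ** 2 / 0.005); mu /= mu.sum()
    nu = np.exp(-(y - 0.2) ** 2 / 0.002); nu /= nu.sum()
    C = (x[:, None] ** 2 - y[None, :] ** 2 / 12) ** 2  # toy cost: variance vs VIX^2/12
    P = sinkhorn(mu, nu, C)
    print("marginal error:", np.abs(P.sum(axis=1) - mu).max())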


[17] 2603.12422

Mortgage Burnout and Selection Effects in Heterogeneous Cox Hazard Models

We study the aggregate hazard rate of a heterogeneous population whose individual event intensities are modeled as Cox (doubly stochastic) processes. In the deterministic hazard setting, the observed pool hazard is the survival-weighted mean of the individual hazards, and its time derivative equals the mean individual hazard drift minus a variance term. This yields a transparent structural explanation of burnout in mortgage pools. We extend this perspective to stochastic intensity models. The observed pool hazard remains a survival-weighted mean, but now evolves as an Ito process whose drift contains the mean drift of the individual hazards and a negative selection term driven by cross-sectional dispersion, together with a diffusion term inherited from the common factor. We formulate the general identity and discuss special cases relevant to mortgage prepayment modeling.
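
The deterministic-case identity stated in the abstract can be written out explicitly (notation assumed here, not taken from the paper). With individual hazards h_i(t) and survivals S_i(t):

\[
\bar h(t) = \frac{\sum_i S_i(t)\, h_i(t)}{\sum_i S_i(t)},
\qquad S_i(t) = \exp\!\Big(-\int_0^t h_i(s)\,ds\Big),
\]
\[
\frac{d\bar h}{dt}(t) = \mathbb{E}_{S(t)}\big[h'(t)\big] - \mathrm{Var}_{S(t)}\big[h(t)\big],
\]

where the expectation and variance are taken under the survival-weighted cross-sectional distribution. The negative variance term is the selection effect: the highest-hazard borrowers exit first, so the pool hazard drifts below the mean individual hazard, which is the burnout pattern described above.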


[18] 2410.04867

Optimal execution with deterministically time-varying liquidity: well-posedness and price manipulation

We investigate the well-posedness in the Hadamard sense and the absence of price manipulation in the optimal execution problem within the Almgren-Chriss framework, where the temporary and permanent impact parameters vary deterministically over time. We present sufficient conditions for the existence of a unique solution and provide second-order conditions for the problem, with a particular focus on scenarios where impact parameters change monotonically over time. Additionally, we establish conditions to prevent transaction-triggered price manipulation in the optimal solution, i.e., the occurrence of buying and selling in the same trading program. Our findings are supported by numerical analyses that explore various regimes in simple parametric settings for the dynamics of impact parameters.
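
For intuition, the risk-neutral special case with linear, time-varying temporary impact alone has a closed form: minimizing the quadratic impact cost subject to the shares constraint makes trading speed inversely proportional to the impact parameter, and the schedule is automatically single-signed. The paper's full setting (risk aversion, time-varying permanent impact, manipulation conditions) is richer than this sketch, and all numbers below are assumed:

    import numpy as np

    X = 1_000_000                      # shares to liquidate (assumed)
    eta = np.linspace(2e-6, 6e-6, 20)  # temporary impact rising over the day (assumed)

    # Minimize sum_t eta_t * v_t^2  s.t.  sum_t v_t = X
    # => first-order condition 2 * eta_t * v_t = const => v_t proportional to 1/eta_t.
    v = X * (1 / eta) / np.sum(1 / eta)
    print(v.round(0))
    print("single-signed schedule (no transaction-triggered manipulation):",
          bool(np.all(v > 0)))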