The steady incompressible Navier--Stokes equations pose significant computational challenges due to their nonlinear convective terms and pressure--velocity coupling. Physics-informed neural networks (PINNs) provide a mesh-free framework for approximating such systems, but classical PINNs can experience optimization difficulties in nonlinear flow regimes. In this work, we propose a quantum physics-informed neural network (QPINN) framework with a quantum neural network (QNN)-based trainable embedding for the lid-driven cavity problem. The proposed approach uses a QNN to learn data-adaptive quantum feature maps that encode spatial coordinates before they are processed by a variational quantum circuit within a physics-informed loss formulation. Numerical experiments show that the proposed QNN-TE-QPINN exhibits stable training behavior and competitive solution accuracy compared with classical PINNs and hybrid quantum models using classical embeddings, while requiring significantly fewer trainable parameters. Rather than claiming computational speedup, these results highlight the potential of trainable quantum embeddings for parameter-efficient physics-informed learning. The findings suggest that embedding design plays an important role in quantum-assisted PDE solvers and support further investigation of QNN-based trainable embeddings for nonlinear fluid dynamics benchmarks.
Recent work has identified a dynamical squeezing phase transition in power-law interacting bilayer XXZ spin models, separating a fully collective phase with Heisenberg-limited squeezing from a partially-collective phase with universal critical scaling. Here we test and establish the universality of this transition along two qualitatively different microscopic axes: lattice geometry, by studying square, triangular, and honeycomb $2\mathrm{D}$ bilayers as well as $1\mathrm{D}$ ladders, and a symmetry-preserving rescaling $\lambda$ of the interlayer couplings relative to the intralayer ones. Combining a Bogoliubov instability analysis with discrete truncated Wigner simulations, we find that the transition persists across all four lattice geometries and over a wide range of $\lambda$, with critical exponents consistent within uncertainties, providing strong evidence for a genuine non-equilibrium universality class. The Bogoliubov theory recovers the previously identified scaling $a_Z^* \propto L$ in the long-range interacting regime $\alpha < d+2$, and yields an analytical scaling $a_Z^* \propto L^{2/(\alpha-d)}$ of the critical aspect ratio with system size for $\alpha>d+2$, with $\alpha$ the power-law exponent in dimension $d$. This uncovers a previously unrecognized sub-linear regime for short-range interactions. By tuning $\lambda$ we vary the interlayer coupling strength at fixed layer spacing, demonstrating that the dynamical transition can be driven purely through interaction engineering without modifying the underlying geometry. These findings provide a versatile route toward controlling entanglement generation in Rydberg-array, polar molecule, and trapped-ion platforms with applications in quantum sensing and simulation.
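The two scaling regimes quoted above can be sanity-checked at the level of exponents; a minimal sketch, taking the stated exponents at face value with unit prefactors (the function name is ours, not from the paper):

```python
def critical_aspect_exponent(alpha, d):
    """Exponent e in a_Z^* ~ L^e for power-law exponent alpha in dimension d."""
    # Long-range regime (alpha < d+2): linear scaling, e = 1.
    # Short-range regime (alpha > d+2): e = 2/(alpha - d) < 1, i.e. sub-linear.
    return 1.0 if alpha < d + 2 else 2.0 / (alpha - d)

d = 2
# At the boundary alpha = d+2 the short-range formula gives 2/2 = 1,
# so the two regimes connect continuously.
assert critical_aspect_exponent(d + 2, d) == 1.0
# Deep in the short-range regime the exponent drops below 1 (sub-linear).
assert critical_aspect_exponent(6.0, d) == 0.5
```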
Semiconductor hole-spin qubits offer a promising route to quantum computation due to their weak hyperfine interaction and strong intrinsic spin-orbit coupling, which enables all-electric qubit control. Scalable architectures, however, require coherent long-distance quantum state transfer, which is hindered in these systems by spin-orbit-induced anisotropic exchange. Here we show that this limitation can be overcome by using an all-electric control protocol. By tuning the electric field strength, we identify discrete spin-orbit phase-matching conditions that restore near-perfect state transfer, independent of the rotation axis. Complementarily, controlling the electric field direction aligns the spin-orbit axis, suppressing excitation-non-conserving processes and enabling robust transfer without fine tuning. Our results establish electrical control of spin-orbit phases, through either magnitude tuning or axis alignment, as a practical route for robust quantum information transport in hole-spin quantum dot arrays.
Quantum machine learning (QML) aims to accelerate machine learning tasks by exploiting quantum computation. Previous work studied a QML algorithm for selecting sparse subnetworks from large shallow neural networks. Instead of directly solving an optimization problem over a large-scale network, this algorithm constructs a sparse subnetwork by sampling hidden nodes from an optimized probability distribution defined using the ridgelet transform. The quantum algorithm performs this sampling in time $O(D)$ in the data dimension $D$, whereas a naive classical implementation relies on handling exponentially many candidate nodes and hence takes $\exp[O(D)]$ time. In this work, we construct and analyze a quantum-inspired fully classical algorithm for the same sampling task. We show that our algorithm runs in time $O(\operatorname{poly}(D))$, thereby removing the exponential dependence on $D$ from the previous classical approach. Numerical simulations show that the proposed sampler achieves empirical risk comparable to exact sampling from the optimized distribution and substantially lower than sampling from the non-optimized uniform distribution, while also exhibiting exponentially improved runtime scaling compared with the conventional classical implementation. These successful dequantization results show that sparse subnetwork selection via optimized sampling can be achieved classically with polynomial data-dimension scaling on conventional computers without quantum hardware, providing an alternative to the existing quantum algorithm.
Solving non-linear Diophantine systems lies at the mathematical core of integer optimization and cryptography. While the general unbounded problem is undecidable, even over bounded integer domains it remains classically intractable in the worst case. In this work, we introduce a fully reversible quantum algorithmic framework tailored to solve arbitrary polynomial Diophantine equations over bounded integer domains. The core of our approach is the explicit, gate-level synthesis of an evaluation oracle for amplitude amplification. By coherently evaluating polynomial constraints via in-place two's complement arithmetic and routing operations into a single recycled accumulator, this garbage-free strategy achieves a compact and scalable synthesis of the underlying non-linear arithmetic. Through analytical derivations and empirical circuit simulations, we prove that the overall space complexity is bounded by $q = \mathcal{O}((n + d^2)\log_2 N)$ logical qubits for $n$ variables, maximum degree $d$, and interval length $N$. The non-Clifford Toffoli depth is upper-bounded by $\mathcal{O}(q^2)$. This structural scaling exponent remains invariant to the variable count, modulated linearly only by the coefficients' Hamming weights. By moving beyond abstract black-box assumptions, this explicit architectural synthesis guarantees that the necessary quantum arithmetic acts as a bounded polynomial overhead. This ensures a quadratic speedup over classical exhaustive search, whether retrieving a unique assignment or dynamically enumerating an unknown number of solutions.
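The stated resource bounds can be made concrete by instantiating $q = \mathcal{O}((n + d^2)\log_2 N)$ with unit constants; a hypothetical sketch (the constant $c$ and the helper name are our assumptions, not from the paper — the true constants depend on the circuit synthesis):

```python
import math

def qubit_upper_bound(n, d, N, c=1.0):
    """Logical-qubit estimate from q = O((n + d^2) * log2(N)), unit constant c."""
    return math.ceil(c * (n + d**2) * math.log2(N))

# Example: 3 variables, maximum degree 4, interval length N = 256 (8-bit domains).
q = qubit_upper_bound(3, 4, 256)   # (3 + 16) * 8 = 152 logical qubits
toffoli_depth_bound = q**2         # non-Clifford Toffoli depth is O(q^2)
```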
We introduce Graphical Algebraic Geometry (GAG), a family of diagrammatic languages extending the Graphical Linear Algebra programme. We construct several languages within this family and prove that they are universal and complete for the corresponding (co)span semantics of commutative algebras and affine varieties. This framework provides clear graphical representations of algebraic structures -- such as polynomials, ideals, and varieties -- enabling intuitive yet rigorous diagrammatic reasoning. We showcase two practical viewpoints on GAG. First, we show that instances of the counting constraint satisfaction problem (#CSP) can be recast as rewrite problems of closed diagrams in GAG. This means that deciding rewritability in GAG is #P-hard, and GAG can be viewed as a complete and compositional rewrite system for networks of polynomial constraints. Second, we characterize the qudit ZH calculus, a diagrammatic language for quantum computation, as an extension of Graphical Algebraic Geometry. This establishes the correspondence that Graphical Algebraic Geometry is to the ZH calculus what Graphical Linear Algebra is to the ZX calculus. Using this construction, we show that computing amplitudes in qudit ZH requires only a constant number of queries to a GAG oracle.
Quantum computing has demonstrated its potential to solve various optimization problems, including drone scheduling, which is important not only for drone delivery but also for logistics in general. However, one of the main obstacles is that practical drone scheduling settings typically require quantum resources that current hardware cannot provide. Therefore, in this work, we introduce a new Quantum Optimization via Coordinate Descent (QUACOD) approach to address this problem under the constraint of a limited number of available qubits. By leveraging coordinate descent, QUACOD decomposes the original high-complexity problem into multiple subproblems, which are then solved using quantum optimization. In our experiments, QUACOD outperforms the state-of-the-art (SOTA) quantum-based drone scheduling method not only in optimized drone completion times but also in scalability, handling up to 5 times more drones and 35 times more routes. In addition, QUACOD demonstrates that hardware-efficient circuits are effective for optimization problems. Together, these contributions advance quantum computing toward practical applications in the noisy intermediate-scale quantum (NISQ) era.
Achieving practical quantum advantage on fault-tolerant quantum computers (FTQC) is fundamentally constrained by the substantial spatial and temporal overheads required to map logical operations onto physical hardware. Existing compilation approaches typically adopt coarse-grained, slice-based abstractions that overlook fine-grained microarchitectural effects, such as routing contention, leading to inefficient resource utilization and limited alignment between algorithm structure and hardware capabilities. This work presents a microarchitecture-aware compilation approach that integrates algorithmic structure directly with lattice surgery (LS) execution. By leveraging the commutativity of C-Phase operations, the method transforms inherently sequential gate sequences into concurrent multi-target interactions, effectively removing artificial dependencies and exposing significant instruction-level parallelism. To enable this, we design a dynamic, event-driven scheduling strategy that accurately models spatial layout and routing constraints, allowing operations to overlap in time while minimizing contention. Through improved coordination of computation and communication, this approach substantially reduces idle resources and achieves up to a 59.7$\times$ reduction in execution time compared to standard baselines.
We introduce a framework where light-matter transitions, rather than states, are the primary dynamical objects. Successive compositions of elementary transitions yield multiphoton processes with compact diagrammatic bookkeeping of resonant and off-resonant pathways. This approach enables transparent derivations of effective high-order Hamiltonians in the dispersive regime, foundational to quantum-information applications. Applied to the paradigmatic Jaynes-Cummings model, our framework reveals a photon-number-independent intrinsic Rabi frequency and persistent polaritonic hybridization in the dispersive regime, unifying resonant and dispersive limits.
We propose an adiabatic-elimination formalism in the dispersive regime based on a transition-centric perturbation theory. The perturbative expansion is recast into a diagrammatic framework, while adiabatic elimination is implemented through controlled projections onto transition subspaces. Our approach applies systematically at arbitrary perturbation order, and is suited to multilevel systems and multiple qubits in both cavity and waveguide quantum electrodynamics. It ultimately enables the explicit construction of effective higher-order Hamiltonians while bypassing important limitations of existing techniques, thereby providing a practical toolbox for multiphoton processes in the dispersive regime.
We investigate the suppression of matter-wave Talbot interference under environmentally induced decoherence. The system is modeled as an atomic beam diffracted by a periodic grating, whose transverse dynamics is described within the paraxial approximation. Environmental coupling is introduced through an effective open-system model that exponentially damps spatial coherences between diffracted components, allowing a continuous interpolation between the coherent Talbot regime and the incoherent far-field diffraction limit. Besides the usual intensity and transverse-momentum distributions, we analyze the local probability flow associated with the diffracted matter wave. The corresponding Bohmian, or hydrodynamic, representation is used here as a diagnostic tool fully equivalent to the standard quantum description, with no additional assumptions beyond the probability current of the paraxial wave field. In the present Talbot geometry, this analysis shows how decoherence progressively suppresses the carpet structure and smooths the transverse-momentum distribution, while the flow may remain organized into channels determined by the grating periodicity. The results illustrate, in a periodic matter-wave Talbot geometry, that the loss of visible interference and the loss of dynamical pathway separation need not occur simultaneously. In particular, flux-channel structures can persist in parameter regimes where multi-slit interference features have already been strongly reduced. This distinction provides a local characterization of decoherence in matter-wave Talbot interferometry and complements previous trajectory-based analyses of coherence loss in simpler interference and confined geometries.
What does a book look like to a quantum computer? This paper takes eight classical works of the Renaissance and its late-antique inheritance -- from Augustine to Galileo -- and runs each through a neutral-atom quantum processor. The bridge is graphs: each textual unit becomes an atom, and graph edges are physical blockade constraints for engineered exact unit-disk designs, or a 2D approximation to the semantic graph for natural texts. Three contributions follow. First, we introduce rigidity rho, a metric for how unique a book's structural backbone is -- distinguishing Marguerite de Navarre's Heptameron (rigid, twelve-nouvelle hard core) from Boethius (fully fungible, every chapter substitutable). Second, we invert the pipeline: rather than extracting a graph from existing prose, we pick a target graph the hardware encodes natively, and write a book whose structure matches it. The twenty-nine texts written this way, collected under the name QOuLiPo, extend the OuLiPo tradition to graph-topological constraints and, together with the eight natural texts, form a benchmark distribution against which neutral-atom hardware can be tracked as it scales. Third, we run both natural and engineered texts on Pasqal's FRESNEL processor up to one hundred atoms; engineered texts reach high approximation ratios, the cleanest instances returning the exact backbone. A cloud-accessible quantum machine plus an agentic coding environment now lets a single investigator run this pipeline end-to-end. What is reported is an application layer, not a speedup -- humanistic instances ready to load onto neutral-atom processors as they scale, already complementing classical text analysis. The Digital Humanities community has a stake in building familiarity with this hardware now: the engineered-corpus design choices made today fix the benchmark distribution future hardware will be measured against.
Nitrogen-vacancy (NV) centres in diamond can be used to detect radiofrequency (RF) signals through coupling of the RF magnetic field with the NV spins, combined with optical readout of the spin state. The sensitivity of such RF detectors has so far been mainly studied in terms of magnetic field sensitivity, which is relevant when the RF signal is generated by a near-field source. However, for applications where the RF input is delivered externally, a more relevant quantity is the sensitivity in terms of the input RF power. Here we theoretically analyse the power sensitivity of NV-based RF detectors as a function of the RF-spin interface geometry. We derive scaling laws of the power sensitivity for both slope-detection and variance-detection RF sensing protocols, and for various noise regimes. We find that, in most scenarios, the power sensitivity scales inversely with the characteristic physical dimension of the RF-spin interface, for instance the width of a coplanar waveguide or the diameter of a loop antenna. In other words, the smaller the structure and the probed NV volume, the better the power sensitivity, which is contrary to the case of magnetic field sensitivity. Lastly, we numerically estimate that photon shot noise limited sensitivities of 10^{-20} W Hz^{-1} (slope) and 10^{-12} W Hz^{-1/2} (variance) are achievable. This work lays the groundwork for further optimisation of NV-based RF detectors.
We study distributed inner product estimation for $n$-qubit states using local randomized measurements, for which rigorous worst-case guarantees are less understood. We first reduce the minimax kernel optimization to Hamming-distance kernels. Within this class, unbiasedness fixes a unique kernel. For this kernel under local Clifford sampling, we prove a sharp fourth-moment bound using the single-qubit Clifford commutant. This yields worst-case sample complexity $\mathcal{O}(\sqrt{4.5^n})$, attained by identical pure product stabilizer states. For the same kernel under local Haar sampling, we prove a local twirling identity that compares its fourth moment with the Clifford fourth moment. This gives the same rigorous upper bound as in the Clifford case, but the comparison is lossy. This motivates the conjectured sharper Haar scaling $\mathcal{O}(\sqrt{3.6^n})$ attained by product states, which we verify for several important classes of states. We also show that independent single-qubit Pauli shadows have worst-case scaling $\mathcal{O}(\sqrt{7.5^n})$ for large $n$.
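The three worst-case scalings above differ only in the exponential base $b$ of $\mathcal{O}(\sqrt{b^n}) = \mathcal{O}(b^{n/2})$; a minimal numeric comparison, assuming unit prefactors (helper name is ours):

```python
def sample_scaling(base, n):
    """Worst-case sample count sqrt(base**n) = base**(n/2), prefactors ignored."""
    return base ** (n / 2)

n = 20
haar     = sample_scaling(3.6, n)  # conjectured local Haar scaling
clifford = sample_scaling(4.5, n)  # proved local Clifford scaling
pauli    = sample_scaling(7.5, n)  # independent single-qubit Pauli shadows
# The conjectured Haar base would be cheapest, Pauli shadows the costliest.
assert haar < clifford < pauli
```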
Periodically driven quantum many-body systems can spontaneously break discrete time-translation symmetry, realizing discrete time crystals. To date, both experimental and theoretical efforts have largely focused on the simplest case of spontaneous period-doubling in $\mathbb{Z}_2$ discrete time crystals realized with qubits. This owes, in part, to the challenge of stabilizing eigenstate order in higher discrete symmetry ($\mathbb{Z}_n$) time crystals, due to the presence of richer domain wall physics. Here, we demonstrate the realization of a $\mathbb{Z}_3$ discrete time crystal by implementing a Floquet chiral clock model in a chain of 15 superconducting qutrits. Unlike the conventional Ising setting, our system features a tunable chiral angle that governs domain-wall dynamics, spectral degeneracies, and crucially, the stability of time-crystalline order. Using disordered nearest-neighbor chiral interactions, we observe robust subharmonic period tripling that persists across a wide range of drive strengths and is independent of initial state. Finally, we highlight the special role that chirality plays in our $\mathbb{Z}_3$ discrete time crystal -- in its absence, the system's Floquet dynamics exhibit a marked initial state dependence governed by domain wall degeneracies. Our results establish native qudit hardware as a powerful platform to access a broader landscape of non-equilibrium phases.
Complex numbers play an indispensable role in quantum mechanics and quantum information, as validated by both theoretical analysis and experimental verification. Since quantum information processing inherently relies on quantum channels, the resource theory for quantum channels is as fundamental as that for quantum states. In this paper, we propose two frameworks for quantifying the imaginarity of Gaussian channels. The first framework regards all real superchannels as free superchannels. Within this setting, we introduce two concrete imaginarity measures for Gaussian channels: I_s^GC, based on existing imaginarity measures of Gaussian states, and I_d^GC, derived directly from the intrinsic parameters of Gaussian channels, which enjoys high computational simplicity. The second framework adopts only a proper subset of real superchannels as free superchannels. Under this framework, we put forward another imaginarity measure I_c^GC, which is fully determined by the inherent parameters of Gaussian channels and features continuity as well as tractable computation. As a practical application, we employ I_c^GC to investigate the dynamical behavior of quantum Brownian motion Gaussian channels throughout the entire evolutionary process.
Among the photonic degrees of freedom capable of high dimensionality, frequency modes of light are one of the most promising platforms for accessing high-dimensional quantum states, enabling robust, error-tolerant, and scalable quantum optical information systems. We demonstrate engineering of precisely controlled two-photon high-dimensional states entangled in frequency through time-domain Fourier optical synthesis. We generate and convert a continuous broadband frequency-entangled state into a large range of discrete frequency bins compatible with ITU standards, with spacings ranging from 12.5 GHz to 750 GHz, and observe spectral anticorrelations over 38 frequency bins, including intra-bin pure states at a 100 GHz bin spacing. We characterize the full quantum state dimensionality via Schmidt decomposition and observe lower bounds on the frequency-binned Hilbert-space dimensionalities of at least 289, formed by two entangled qudits of dimension 17. Furthermore, we demonstrate quantum nonlocality via frequency correlations in a transmission experiment over a campus-scale two-node fiber network. This work represents a crucial step towards a versatile and relatively simple way of generating precisely controlled high-dimensional spectral qudits, with potential applications in wavelength-multiplexed quantum networks, high-dimensional information processing, communication of quantum states, and fiber-optic quantum remote sensing.
As quantum computers become available through multi-tenant cloud platforms, ensuring privacy against adversaries sharing the same quantum processing unit becomes critical. We introduce and explore \emph{covert quantum computing}, a new concept that ensures an adversary with access to all other quantum computational units (QCUs) of a quantum computer cannot detect computation on the subset that they cannot access. Analogous to covert communication, we employ information theory. However, since here the adversary controls the systems used for detection, we require a richer framework for covertness analysis that accounts for the use of quantum memories and adaptive operations. Thus, we adopt the \emph{quantum-strategy} framework used in quantum game theory and memory channel discrimination. Current quantum computers use planar graph circuit layouts and typically assume nearest-neighbor crosstalk. We derive discrete isoperimetric inequalities to show that, for an $n$-qubit circuit under this model, only $\mathcal{O}(\sqrt{n})$ border qubits provide detection information to the adversary. We then explore this scaling law on IQM's 54-qubit \emph{Emerald} processor and IBM's 156-qubit \emph{ibm\_fez} machine employing the Heron 2 architecture. We implement Ramsey experiments on qubits not used in computation, and detect nearest-neighbor crosstalk, as expected. However, we also observe long-range coupling effects beyond the border qubits, revealing a side channel that the adversary can exploit. We hypothesize that this long-range crosstalk is induced by leakage from the drive and control lines. Beyond weakening covertness, it exposes co-tenants to both adversarial and unintended crosstalk and degrades circuits that span spatially distributed qubits, motivating further work on spatial isolation and crosstalk characterization.
Adaptive quantum Fisher information (QFI) estimation requires a stopping rule that distinguishes accuracy from apparent numerical stability. For Krylov-shadow QFI estimators, finite Krylov order $K$ produces truncation bias, while finite sample budget $M$ produces finite-$M$ sampling-side error. We show that a width-only empirical stopping rule, based on interval width and local Krylov stability, can declare convergence at small $(K,M)$ even when the post hoc error exceeds the requested tolerance; we call this event a \emph{false stop}. The mechanism is a narrow empirical interval centered on a biased low-$K$ estimate. We give a two-component stopping analysis that separates the Krylov and sampling terms, and we implement a guarded rule that permits a success declaration only after minimum thresholds in $K$ and $M$ and a persistence condition are satisfied. On a five-level dephasing benchmark at $n=4$ qubits, the guarded rule suppresses the false success declarations produced by the width-only empirical rule, whose false-stop rates range from $0.16$ to $0.68$ across the tested noise levels. Under the main fixed resource limit, the guarded rule refuses to make success declarations rather than accepting biased low-$K$ estimates; a separate true-relative-tolerance sampling-budget sequence then shows that, after Krylov and sampling recalibration, the same decision principle can make success declarations without observed false stops. These results show that stopping reliability is a distinct design requirement for adaptive QFI estimation: sampling precision at fixed $K$ does not by itself establish that Krylov truncation bias is controlled.
Matrix product states (MPS) are a standard tensor-network representation for ground states of one-dimensional quantum many-body systems, and they underpin widely used simulation tools such as DMRG. However, while quantum model checking has been developed mainly for quantum programs and communication protocols (with properties expressed along a time axis), there is still no comparable framework for systematically verifying \emph{spatial} and \emph{size-dependent} properties of physical many-body states, where the key parameter is the system size. This paper takes a step toward bridging the gap. We propose \emph{Linear Chain Logic} (LCL), a spatial logic designed to specify physically meaningful properties of periodic MPS families as the system size grows, such as nontriviality on rings and large-size asymptotic patterns. Our approach builds on a simple but powerful connection: every periodic MPS naturally induces a completely positive map (a quantum operation) on its virtual space, so many quantitative features of the MPS can be analysed through the repeated application of the operation. Using this perspective, we derive an effective procedure to compute the inner products of an MPS at a given size and to support richer LCL specifications, without relying on brute-force state expansion. We then develop approximate model-checking algorithms that combine sound bounding with asymptotic structural analysis, enabling scalable reasoning about large system sizes. Experiments on representative MPS families illustrate that our method can automatically verify nontriviality and detect asymptotic spatial regimes in a way that complements traditional numerical techniques.
Recently, the technique of counterdiabatic driving, which provides an effective strategy for accelerating adiabatic quantum evolution, has been widely applied in the preparation of many-body quantum states. In this work, we propose a theoretical scheme for the efficient preparation of Dicke states in a system of non-interacting two-level atoms. Our approach leverages the one-axis twisting (OAT) interaction to generate non-classical correlations and combines it with time-dependent external fields to achieve precise control over the dynamics of the system. By employing rapid adiabatic passage (RAP), we demonstrate how the system can be steered from an initial coherent spin state to a target Dicke state with high fidelity [S. C. Carrasco, M. H. Goerz, S. A. Malinovskaya, V. Vuletić, W. P. Schleich, and V. S. Malinovsky, Phys. Rev. Lett. \textbf{132}, 153603 (2024)]. To further optimize the preparation process, we introduce counterdiabatic driving (CD), which suppresses non-adiabatic transitions. Numerical simulations confirm that our scheme can achieve high-fidelity Dicke states for a moderate number of particles. Our results provide a scalable and experimentally feasible approach to prepare Dicke states, with potential applications in quantum metrology, quantum communication, and quantum information processing.
Cavity-magnon systems, combining magnons and photons, offer a versatile platform for studying quantum entanglement and advancing quantum information science. In this work, we propose a scheme for generating nonreciprocal magnon-magnon entanglement in a hybrid system consisting of two yttrium iron garnet spheres coupled to a spinning whispering-gallery-mode cavity. By leveraging the magnon Kerr nonlinearity and the Sagnac effect arising from the cavity rotation, we show that the entanglement can be substantially enhanced, and the resulting entanglement exhibits pronounced nonreciprocal characteristics. Furthermore, our scheme demonstrates that the entanglement remains robust against thermal noise and persists at bath temperatures up to 100 mK. This work underscores the potential of spinning cavity-magnon systems as a versatile platform for realizing nonreciprocal quantum devices and facilitating the development of quantum technologies.
We show that standard multi-path interferometry, using only pairwise visibility measurements, provides an operational route to tests of preparation noncontextuality. Under ideal symmetric conditions, interference visibility directly encodes state overlaps, without requiring tomography or SWAP tests. For three paths, any jointly diagonalizable (coherence-free) description must satisfy ${V}_{12}^2+{V}_{23}^2-{V}_{13}^2\le 1$, where ${V}_{ij}$ are two-path visibilities. Pure qubit detector states violate this bound, achieving a maximal value of $5/4$. We generalize to arbitrary $n$-path interferometers and derive the tight qubit bound $S_n^{\max}=n\cos^2(\pi/2n)-1$ for all $n\ge3$, achieved by coplanar pure qubit states with uniform angular separation $\pi/n$. A robustness analysis yields explicit experimental thresholds. Under the operational equivalences used in overlap-based generalized noncontextuality frameworks, violations of these visibility inequalities also witness preparation contextuality. For $n$-cycle inequalities, only the pairwise visibilities appearing in the cycle need to be measured.
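The closed-form bound $S_n^{\max}=n\cos^2(\pi/2n)-1$ can be reproduced numerically from explicit coplanar qubit states with uniform Bloch-angle spacing $\pi/n$, identifying each visibility $V_{ij}$ with the state overlap $|\langle\psi_i|\psi_j\rangle|$ as described above; a minimal check (the concrete state parametrization is our assumption):

```python
import numpy as np

def coplanar_state(theta):
    # Pure qubit state at Bloch angle theta in a fixed plane (real amplitudes).
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def S_n(n):
    # Cycle quantity: sum of adjacent V_{k,k+1}^2 minus the closing V_{1,n}^2,
    # for n states uniformly separated by Bloch angle pi/n.
    states = [coplanar_state(k * np.pi / n) for k in range(n)]
    V2 = lambda i, j: abs(states[i] @ states[j]) ** 2
    return sum(V2(k, k + 1) for k in range(n - 1)) - V2(0, n - 1)

for n in range(3, 9):
    closed_form = n * np.cos(np.pi / (2 * n)) ** 2 - 1
    assert abs(S_n(n) - closed_form) < 1e-12

# Three-path case: V_12^2 + V_23^2 - V_13^2 reaches 5/4, violating the
# coherence-free bound of 1.
assert abs(S_n(3) - 5 / 4) < 1e-12
```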
We study far-field discrimination between one and two incoherent point sources in the singular regime of weak and closely spaced emitters. Under ideal alignment, spatial-mode demultiplexing (SPADE) attains the quantum-optimal large-sample Stein exponent, but the finite-photon behavior near the one-source boundary and the effect of realistic imperfections remain less understood. Using singular learning theory, we analyze both the aligned and misaligned problems. In the aligned Gaussian case, we derive the zeta-function poles for direct imaging and SPADE, show that both share the same real log canonical threshold $\lambda=1/2$ but differ in multiplicity, and obtain the corresponding Bayes free-energy asymptotics. This yields a universal subleading advantage of aligned SPADE in the local prior-weighted regime. In the misaligned setting, we study a physically motivated binary-SPADE reduction that retains the full leading $O(s^2)$ leakage contrast near alignment, with corrections from the detailed higher-mode redistribution entering only at $O(s^4)$. We show that misaligned binary-SPADE and direct imaging acquire nontrivial local power on different intrinsic scales, $s=O(n^{-1/4})$ and $s=O(n^{-1/2})$, respectively. However, finite-$n$ Neyman--Pearson comparisons under common physical conditions reveal that direct imaging is stronger on the plotted grids and that misaligned binary-SPADE exhibits an exact blind separation $s^\ast=2\theta$, where its power collapses to $\alpha$. These results identify model singularity as a structural organizing principle for finite-photon quantum discrimination and clarify how ideal aligned SPADE benchmarks can fail to translate into finite-$n$ advantages under misalignment.
The mode-pairing quantum key distribution (MP-QKD) protocol achieves performance beyond the repeaterless rate-transmittance bound and exhibits excellent practicality by avoiding the requirement for difficult global phase locking. However, the source side of MP-QKD still relies on the assumption of continuous phase randomization, a requirement that is infeasible in practice. Therefore, the practical security of the protocol cannot be fully guaranteed. In this work, we propose a discrete-phase-randomized mode-pairing quantum key distribution (DPR-MP-QKD) protocol and analyze the basis dependence of the source side. Then, we introduce a concrete discrete version of the decoy-state method that ensures the security of the DPR-MP-QKD protocol. Finally, simulation results indicate that as the number of discrete phases increases, the key-rate performance of DPR-MP-QKD progressively approaches that of the continuous case, with convergence achieved at approximately 14 discrete phases. Moreover, our approach drastically reduces the demand for randomness: whereas conventional continuous phase randomization demands an unlimited supply of random bits, we show that merely a few bits (e.g., 4) are adequate.
Entanglement can hide in two fundamentally different ways. First, multi-copy correlations can carry information that no single-copy measurement on an unknown state is able to access. Second, bound entangled states possess a positive partial transpose, which makes them invisible to the Peres--Horodecki criterion and all moment inequalities that depend on it. Here we show that the moment difference between the partial transpose and purity decomposes exactly as a chirality-chirality correlator, where the relevant operator is the scalar spin chirality -- the same quantity that governs chiral spin liquids and the topological Hall effect. This decomposition identifies the specific physical structure that multi-copy entanglement detection probes. Using the same controlled-SWAP circuits, we develop a multi-channel spectral classifier for bound entanglement. The classifier combines realignment spectral features with chirality corrections and achieves 99.9% recall at zero false positives across all three known $3\times 3$ bound entangled families, compared with approximately 40% for the CCNR criterion alone. We also introduce a marginal-noise construction that produces CCNR-invisible bound entangled states, which the classifier detects but which remain invisible to all single-parameter criteria. We validate our approach experimentally on three IBM Quantum processors and demonstrate negativity reconstruction with mean errors of 0.002--0.027, chirality detection for pure and mixed entangled states, and bound entanglement detection across two structurally distinct families (Horodecki and chessboard) on a single gate-based superconducting processor.
Speech emotion recognition (SER) remains fragile in real-world conditions because emotional cues are subtle, speaker-dependent, and easily confounded by recording variability, while high-performing deep models typically rely on large and carefully curated training sets. Quantum machine learning offers an alternative way to introduce nonlinear correlation modeling with compact modules, yet existing quantum SER studies remain limited and the impact of circuit structure is not well understood. This paper presents HQTN-SER, a hybrid quantum-classical framework that investigates how quantum tensor network connectivity can support SER under small-qubit settings. HQTN-SER introduces (i) an MPS-inspired quantum tensor network module that enforces structured interactions to model correlations in speech representations with a small number of trainable parameters, and (ii) a fusion strategy that combines quantum measurement features with a learned classical latent embedding for end-to-end emotion classification. We evaluate HQTN-SER on three public benchmarks (RAVDESS, SAVEE, and MDER) under a unified preprocessing and training protocol. The proposed model achieves consistent accuracy across datasets (80.12% on RAVDESS, 78.26% on SAVEE, and 73.51% on MDER), with stable convergence and low qubit counts, showing that tensor network structure can be an effective and hardware-aware design choice for quantum-assisted SER. The results provide a reproducible baseline and clarify when structured quantum modules can add value to affective computing today.
Rydberg atomic electrometry leverages the extreme sensitivity of highly excited atoms for calibration-free electric field measurements. The technique uses a non-metallic vapor cell to link properties of an RF field to a spectroscopic readout in the optical domain. Most demonstrations have so far focused on detecting linearly-polarized fields, for which the induced splitting of dressed atomic levels is rotationally invariant. Here we report on Rydberg atomic measurements of RF fields in a general state of polarization (SOP) which we map onto the Poincaré sphere through spectroscopic fingerprints. For a Stokes vector circumnavigating a Poincaré sphere meridian, we witness a continuous transformation of the atomic eigenenergy spectrum. Because the relative positions of eigenenergies are locked in place by quantization of angular momentum, the framework is universal and calibration free. We provide a specific demonstration in rubidium, which generalizes to all systems with a single valence electron.
Wigner's thought experiment illustrates quantum theory's measurement problem by considering an observer who measures a quantum system inside a sealed lab, modeled unitarily by an outsider. Recent extensions of this thought experiment, referred to as extended Wigner's friend arguments, question how different observers can reason consistently about each other in quantum setups, and challenge the absoluteness of the outcome value obtained by the friend under a notion of locality. In this work, we present an argument against the absoluteness of free choices under the same notion of locality, using an extended Wigner's friend scenario based on the Pusey--Barrett--Rudolph theorem. Similar arguments based on other contextuality or nonlocality models are possible.
The parametric amplification enabled by two-photon driving constitutes a versatile platform for advanced quantum technologies. We present an optimized scheme for implementing quantum batteries (QBs) based on a superconducting circuit system, where a two-photon-driven LC resonator serves as the charger and an array of transmon qubits functions as the battery. Our results show that two-photon parametric driving exponentially enhances the effective cavity-qubit coupling, which in turn gives rise to near-degenerate energy-level structures and highly entangled quantum states. This significantly enhances the charging power and enables rapid energy transfer from the charger to the battery. Moreover, the engineered squeezed cavity mode and the associated quantum correlations effectively suppress environmentally induced decoherence, thereby delaying energy leakage and facilitating stable energy storage. The proposed scheme remains robust against practical experimental imperfections, such as parameter disorder and environmental noise, preserving its performance advantages. This work provides a feasible platform for realizing high-power, high-stability QBs and highlights the potential of parametric control in quantum energy technologies.
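The exponential coupling enhancement invoked above follows a mechanism that is standard for two-photon (parametric) driving; the following is a schematic derivation under textbook assumptions (a Jaynes--Cummings coupling $g$ and a squeezing parameter $r$ fixed by the drive amplitude and detuning), not a reproduction of the paper's calculation:

```latex
% Diagonalize the two-photon-driven cavity mode by a Bogoliubov (squeezing)
% transformation; the bare coupling is dressed by cosh(r) and sinh(r):
\[
  \hat a = \hat b\cosh r-\hat b^{\dagger}\sinh r,
  \qquad
  g\bigl(\hat a^{\dagger}\hat\sigma_{-}+\hat a\,\hat\sigma_{+}\bigr)
  \;\longrightarrow\;
  g\cosh r\,\bigl(\hat b^{\dagger}\hat\sigma_{-}+\hat b\,\hat\sigma_{+}\bigr)
  -g\sinh r\,\bigl(\hat b\,\hat\sigma_{-}+\hat b^{\dagger}\hat\sigma_{+}\bigr),
\]
% so the effective coupling in the squeezed frame is
\[
  g_{\mathrm{eff}}=g\cosh r\simeq\tfrac{g}{2}\,e^{r},
\]
% which grows exponentially with the squeezing parameter r.
```

Under these assumptions the enhanced charging power traces back to the $e^{r}$ scaling of $g_{\mathrm{eff}}$.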
We propose a superconducting circuit hosting $d$ low-lying states, well separated from the rest of the spectrum, that naturally realizes a qudit system protected from leakage errors. The system represents a generalization of the fluxonium, and the low-energy states are constituted by fractional fluxon states, which we call {\it fraxons}, localized in the minima of a suitably designed Josephson potential. The latter is tailored through a Fourier engineering approach that employs multi-harmonic Josephson building-block elements composed of a Josephson junction and an inductance connected in series. We present the spectrum of a $d=4$ and a $d=5$ qudit system and study in detail the qutrit case. We analyze the dipole matrix elements for coupling to radiation and propose a non-Abelian, stimulated Raman adiabatic passage (STIRAP) protocol for single-qutrit gates, which is particularly suited for the present system. The proposed platform opens novel perspectives in circuit engineering and quantum computing beyond the qubit paradigm.
Real-time decoding plays a crucial role in practical fault-tolerant quantum computing. Window decoding, in which the decoding problem is divided into windows, is a promising approach. While reducing the window size is desirable for faster decoding, each window contains a buffer region whose size must typically be at least the code distance to avoid degrading the logical error rate, which limits how much the window can shrink. In this paper, we propose an adaptive decoding scheme in which window decoding is first performed with a small buffer size and a decoding confidence (soft information) is computed; if the confidence is low, the buffer size is enlarged and decoding is redone. This approach reduces the average decoding time, since most shots are decoded with a small buffer. A central challenge in realizing this scheme is that existing forms of soft information are not directly applicable to window decoding, especially with a small buffer. We address this challenge by introducing a new form of soft information, the spatiotemporal complementary gap, specifically designed for this setting. Numerical simulations demonstrate that the proposed scheme reduces the average buffer size by approximately 40% while maintaining the logical error rate.
We study scattering for continuous-time quantum walks on finite graphs with two attached leads. We derive explicit formulae for the two-terminal scattering matrix in terms of characteristic polynomials of the finite graph and its vertex-deleted subgraphs. For real-weighted two-terminal graphs, we then introduce three real quantities, $\mu_1$, $\mu_2$, and $\nu$, which are each additive under parallel composition of graphs. In these variables, perfect transmission at fixed momentum is characterized by the condition $\mu_1=\mu_2$ together with a hyperbola in the corresponding $(\mu,\nu)$-plane, whose points determine the transmission phase. This turns the search for graphs with prescribed transmission properties into a geometric vector-sum problem for smaller building blocks.
Positive maps that are not decomposable are a key resource in entanglement theory because they can detect bound entangled states, yet systematic methods for constructing them remain limited. We introduce an optimization framework based on differentiable semidefinite programming (SDP) for generating positive non-decomposable maps under flexible structural constraints on their Choi matrices. The method combines SDP-based certificates of non-decomposability and positivity with gradient-based optimization, enabling a systematic search over maps with different input and output dimensions. Within this framework, we generate previously unknown numerical examples, identify a parametrized family of maps arising from masked Choi matrices, and construct real non-decomposable maps. We further show that the same approach can be adapted to explore open questions in quantum information theory, including the PPT square conjecture and recently proposed eigenvalue bounds for 2-positive trace-preserving maps.
Current cloud-based quantum processors offer access to advanced hardware hosted on a remote server, but do not guarantee data or algorithm privacy. Blind quantum computation provides information-theoretic privacy by enabling a client to execute an algorithm without disclosing information about either the task or the final result. Here, we execute a measurement-based blind quantum computation protocol on a superconducting processor comprising two flip-chip-bonded modules, one acting as a server and the other as a client. The server generates a two-dimensional cluster state and forwards it to the client. Using this resource, the client implements a universal gate set with only adaptive single-qubit rotations and measurements. To illustrate this approach, we execute a three-qubit instance of the Deutsch-Jozsa algorithm. We analyze the server's quantum state after each rotation of a measurement-based single-qubit gate to verify that negligible information about the computation is revealed to the server, consistent with the one-way flow of information that guarantees blindness. This proof-of-principle demonstration establishes key elements of blind quantum computation in superconducting-circuit architectures, indicating that intermediate-scale implementations of blind protocols may become feasible with realistic near-term improvements in gate fidelities.
In this study, we explore the behavior of photon-added coherent states in a deformed harmonic oscillator subjected to dissipative decoherence. We use $q$-deformation as the nonlinear function to model our system. By adjusting the deformation parameter, we show that $q$-deformed photon-added coherent states (DPACS) exhibit greater nonclassicality and resilience to decoherence compared to those of a standard harmonic oscillator. Additionally, we investigate the nonclassical properties and entanglement of DPACS under decoherence induced by interaction with a dissipative photon-loss environment.
The application of quantum computing to data management has attracted growing interest, yet remains constrained by a limited understanding of how the physical behaviour of quantum devices relates to the structure and difficulty of database problems. In particular, evaluating quantum annealing approaches for combinatorial optimisation, which is central to many data management tasks, poses significant challenges beyond the scope of conventional empirical and complexity-theoretic methods. We present a computational toolbox for the systematic numerical analysis of quantum annealing processes derived from data management problem formulations. Adopting a physics-informed perspective, the toolbox enables the study of spectral and dynamical properties -- such as energy gaps and eigenstate structure -- that are inaccessible through direct hardware measurements, yet essential for understanding computational hardness and scaling behaviour. Our approach further provides derived quantities and visualisation techniques that support the interpretation of optimisation dynamics, the identification of structural similarities to canonical physical models, and the construction of reduced effective descriptions. By bridging methodological gaps between quantum computing and database systems research, this work establishes a principled foundation for evaluating quantum approaches and guiding future co-design efforts.
We study a generic quantum Markovian master equation for a linearly displaced or driven harmonic oscillator. It is known that the displacement dynamics of Gaussian mixed states depends on the unitary part of the Liouvillian and the decay rate of the system, but not on the bath temperature. Here we further show that the fast-rotating modes do not affect the system's displacement dynamics under linear driving forces. Analytical solutions of the quantum master equation are obtained for displaced Gaussian mixed states. Because the non-driven and driven Liouvillians are related by a unitary displacement operator, they are expected to share the same exceptional-point structure. At the exceptional points, the displacement of the critically damped oscillator displays a characteristic polynomial-in-time prefactor multiplied by an exponential decay. We discuss how external time-dependent forces affect the displacement dynamics, using an impulsive force and a harmonic force as examples. The results obtained for constant driving remain valid in the presence of time-dependent driving.
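The polynomial-in-time prefactor at an exceptional point can be illustrated with the simplest classical analogue (a sketch we add for intuition, not the paper's master-equation calculation): at critical damping the two decay rates of the damped oscillator coalesce, and the displacement $x(t)=(A+Bt)e^{-\gamma t}$ acquires a secular factor linear in $t$:

```python
import math

def residual(t, A, B, g):
    """Plug x(t) = (A + B*t) * exp(-g*t) into the critically damped
    oscillator equation x'' + 2*g*x' + g**2*x = 0 (the exceptional point,
    where the two decay rates coalesce), using the analytic derivatives."""
    e = math.exp(-g * t)
    x0 = (A + B * t) * e                       # x(t)
    x1 = (B - g * (A + B * t)) * e             # x'(t)
    x2 = (g * g * (A + B * t) - 2 * g * B) * e # x''(t)
    return x2 + 2 * g * x1 + g * g * x0

# The residual vanishes for arbitrary coefficients and times, confirming the
# polynomial-times-exponential form at the classical exceptional point.
for t in (0.0, 0.5, 2.0, 7.3):
    assert abs(residual(t, A=1.0, B=0.8, g=1.3)) < 1e-12
print("(A + B t) e^{-gamma t} solves the critically damped equation")
```

The Liouvillian exceptional points discussed in the abstract generalize this structure from a classical equation of motion to the open-system displacement dynamics.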
Spectrally multiplexed telecom quantum networks require quantum memories that combine efficient storage with programmable frequency addressing. An ideal integrated implementation should therefore unite a native telecom transition, efficient storage and fast on-chip spectral control. Here we demonstrate a cavity-enhanced quantum memory in an isotopically purified $^{167}\mathrm{Er}^{3+}$-doped thin-film lithium niobate microring resonator. Long-lived hyperfine shelving states support persistent, high-contrast atomic frequency comb preparation, with a single-component comb lifetime of $277.6 \pm 52.6$~s. Together with cavity impedance matching, this yields an on-chip storage efficiency of $23.3 \pm 0.5\%$ for 100-ns storage. The intrinsic electro-optic response of lithium niobate enables frequency-selective storage and routing of retrieved photons at rates up to 20~MHz with inter-channel crosstalk below $10^{-4}$. We further store and retrieve time-energy-entangled telecom photons, violating an entanglement-witness bound by more than 11 standard deviations and thus verifying the quantum nature of the storage process. Our results establish erbium-doped thin-film lithium niobate as a programmable light--matter interface for spectrally multiplexed quantum networks.
One of the most common approaches for coupling optical single-photon sources to photonic integrated circuits is to use a cavity. The cavity acts as a spectral filter that distorts the light spectrum and changes its statistical properties. In the general case, however, one should take into account not only the spectral filtering of the light but also the influence of the spectral filter on the single-photon source dynamics. We build an effective analytical model that describes the cavity's influence on the photon statistics of light emitted by the single-photon source as spectral filtering alone. We show that this model correctly describes the photon statistics even in the strong-coupling regime between the single-photon source and the spectral filter. Our results can be useful for analytical modeling of the photon statistics of quantum emitters strongly coupled to various electromagnetic interfaces.
The Heisenberg-Weyl group $HW(d)$, related to a $d$-dimensional Hilbert space $H(d)$, is enlarged into the Heisenberg-Weyl-parity group $HWP(d)$ that incorporates parity transformations. It consists of $2d^3$ elements, of which $d^3$ belong to the $HW(d)$ subgroup, while the extra $d^3$ elements are related to the former through a Fourier transform. It is shown that $HWP(d)$ is a generalised version of the dihedral group. The properties of operators that combine displacements and parity are discussed. $HWP(d)$ is shown to be a solvable group, and commutators of its elements perform displacement and parity transformations of quantum states along loops in the discrete phase space. $2d^2$ coherent states related to the $HWP(d)$ group are introduced, which consist of $d^2$ coherent states related to the $HW(d)$ subgroup, while the extra $d^2$ coherent states are related to the former through a Fourier transform. In noisy cases, expansion of an arbitrary state in terms of the $2d^2$ coherent states with Bargmann coefficients is advantageous in comparison to expansion in terms of the $d^2$ coherent states related to $HW(d)$. One consequence of the $HWP(d)$ group is a natural unification of the Wigner and Weyl functions. The properties of the unified Wigner-Weyl function are discussed.
We consider an extended model of quantum computation where a scalable fault-tolerant quantum computer is coupled to one or more ancilla qubits that evolve according to a nonlinear Schrödinger equation. Following the approach of Abrams and Lloyd, an efficient quantum circuit evaluating an $n$-bit Boolean function in conjunctive normal form is used to prepare an ancilla encoding its number $s$ of satisfying assignments ($0 \le s \le 2^n$). This is followed by a nonlinear quantum state discrimination gate on the ancilla qubit that is used to learn properties of $s$. Here we consider three types of state discriminators generated by different nonlinear Hamiltonians. First, given a restricted Boolean satisfiability problem with the promise of at most one satisfying assignment ($ 0 \le s \le 1$), we show that a qubit with $\langle \sigma^z \rangle \sigma^z$ nonlinearity can be used to efficiently determine whether $s = 0$ or $s = 1$, solving the UNIQUE SAT problem. Here $\langle A \rangle := \langle \psi | A |\psi \rangle $ denotes expectation in the current state. UNIQUE SAT is NP-hard under a randomized polynomial-time reduction (of course any discussion of complexity assumes a scalable, fault-tolerant implementation). Second, for unrestricted satisfiability problems with $ 0 \le s \le 2^n$, a Hamiltonian with $ \langle \sigma^x \rangle \sigma^y - \langle \sigma^y \rangle \sigma^x$ nonlinearity can be used to efficiently determine whether $s=0$ or $s>0$, thereby solving 3SAT, which is NP-complete. Finally, we show that $ \langle \sigma^y \rangle \langle \sigma^z \rangle \sigma^x - \langle \sigma^x \rangle \langle \sigma^z \rangle \sigma^y $ nonlinearity can be used to efficiently measure $s$ and solve #SAT, which is #P-complete. The nonlinear models are of mean field type and might be simulated with ultracold atoms.
Ultrafast continuous-variable quantum states offer new opportunities for advanced quantum technologies, but efficient homodyne detection of these states remains challenging. Here, we present a method for efficient ultrafast homodyne detection by exploiting temporal correlations in detector signals. By optimizing the temporal weight used to extract quadrature outcomes, we achieve a substantial increase in the signal-to-noise ratio of ultrafast homodyne detection, thereby improving the detection efficiency. We analyze the autocorrelations of shot noise and electronic noise and determine the optimal weight by solving a generalized Rayleigh quotient problem. The optimal weight enhances the squeezing and anti-squeezing levels observed experimentally. These results highlight the importance of optimized signal processing for efficient quantum measurements.
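The "generalized Rayleigh quotient" step can be made concrete with a toy two-sample model. The covariances below are made up for illustration (they are not the experimental data): $S$ plays the role of the shot-noise (signal) covariance across temporal samples and $N$ the electronic-noise covariance, and the optimal weight is the top generalized eigenvector of $Sw=\lambda Nw$, maximizing the SNR $R(w)=w^{\mathsf T}Sw/w^{\mathsf T}Nw$:

```python
import math

# Toy 2x2 covariances (assumed for illustration only): S = shot-noise
# covariance between two temporal samples, N = electronic-noise covariance.
S = [[1.0, 0.6], [0.6, 1.0]]
N = [[0.3, -0.1], [-0.1, 0.4]]

def rayleigh(w, S, N):
    """Generalized Rayleigh quotient (w.S.w)/(w.N.w), i.e. the SNR."""
    num = sum(w[i] * S[i][j] * w[j] for i in range(2) for j in range(2))
    den = sum(w[i] * N[i][j] * w[j] for i in range(2) for j in range(2))
    return num / den

def optimal_weight(S, N):
    """Top solution of the 2x2 generalized eigenproblem S w = lambda N w,
    from the characteristic equation det(S - lambda N) = 0 (N assumed
    positive definite, so the '+' root is the maximum SNR)."""
    a = N[0][0] * N[1][1] - N[0][1] * N[1][0]
    b = -(S[0][0] * N[1][1] + S[1][1] * N[0][0]
          - S[0][1] * N[1][0] - S[1][0] * N[0][1])
    c = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    lam = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    # Null vector of (S - lam*N), read off from its first row.
    w = (-(S[0][1] - lam * N[0][1]), S[0][0] - lam * N[0][0])
    return lam, w

lam, w = optimal_weight(S, N)
assert abs(rayleigh(w, S, N) - lam) < 1e-9          # weight attains the top SNR
assert rayleigh(w, S, N) >= rayleigh((1.0, 1.0), S, N)  # beats a flat weight
```

In the experiment the same construction applies with many temporal samples; the gain over a flat weight is what translates into the improved squeezing and anti-squeezing levels.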
We present QSeqSim, a Qiskit-integrated symbolic backend that fills a current gap: Qiskit has no native support for simulating while-loop quantum programs and the sequential quantum circuits they induce. QSeqSim takes Qiskit QuantumCircuit objects, translates them into OpenQASM 3 code, and organises the resulting program into a combination of combinational, dynamic, and sequential circuits, thereby assigning while-loops a precise sequential circuit semantics with explicit internal and external qubits. Building on this semantics, QSeqSim adopts a Binary Decision Diagram (BDD)-based symbolic representation and integrates weighted model counting to compute measurement probabilities efficiently by exploiting sharing in structured and sparse BDDs. On top of this Boolean backbone, it introduces dedicated symbolic operators for state composition and state retention, thereby enabling efficient symbolic execution of sequential quantum circuits. Our experiments demonstrate that QSeqSim scales to substantial while-induced sequential circuits; in particular, in the quantum random walk benchmark we successfully simulate circuits with over 1000 qubits for more than 10 loop iterations. QSeqSim is available at this https URL.
We introduce a nonlocal Maxwell demon teleporting ergotropy at finite temperature via classical communication and a shared surface code. The teleported ergotropy is exponentially protected below a topological threshold. We identify a thermodynamic phase transition separating a profitable demon phase from a thermal phase. A quadratic infrastructure cost strictly enforces the second law, imposing a fundamental thermodynamic horizon on separation distance. This establishes quantum error correction as a resource for nonlocal thermodynamics beyond fault-tolerant computation.
We prove that the maximum eigenvalue of the (both signed and unsigned) Laplacian of the level-$k$ Kikuchi graph of any graph $G$ with $m$ edges is at most $m+k$. This confirms four recent conjectures of Apte, Parekh, and Sud. As applications, we obtain that tensor products of one- and two-qubit product states achieve an approximation ratio of $5/8$ for Quantum Max Cut and $5/7$ for the XY Hamiltonian. Moreover, combining our bounds with the algorithms analyzed by Apte, Parekh, and Sud yields efficient algorithms achieving an approximation ratio of $0.614$ for Quantum Max Cut and $0.674$ for the XY Hamiltonian. Finally, we also make modest progress on Brouwer's conjecture and improve Lew's bound on the sum of the top-$k$ eigenvalues of a graph Laplacian.
The Majorana stellar representation translates abstract quantum spin states into intuitive geometric constellations on the Bloch sphere, revealing symmetries, degeneracies, and correlations that traditional algebraic methods often obscure. Within quantum information science, this framework provides a powerful lens for characterizing symmetric multi-qubit and higher-spin systems. By encoding entanglement directly into spatial coordinates, the constellation geometry yields exact measures of concurrence, three-tangle, and genuine multipartite entanglement, while its dynamical evolution uncovers internal anomalous contributions to geometric phases. While interest in stellar representations has resurged, existing literature remains fragmented, lacking a unified treatment of these entanglement-specific metrics and their higher-dimensional dynamics. This review synthesizes the entanglement-centric perspective on Majorana representations, bridging discrete algebraic classifications (e.g., SLOCC orbits) with continuous geometric interpretations. Crucially, we highlight how this framework circumvents \#P-hard computational bottlenecks, leveraging polynomial-time tractability to evaluate multipartite invariants. We detail the interplay between constellation topology and higher-spin Berry/Hannay phases, explore extensions beyond pure symmetric states, and review applications in quantum metrology, state engineering, and condensed-matter physics. By foregrounding entanglement as the unifying theme, this comprehensive examination establishes Majorana stars as a fundamental geometric language, uniquely positioned to inspire new theoretical and experimental directions in quantum technologies.
Shared multipartite entanglement defines a ``whatever channel'', i.e., a latent communication substrate that does not determine a priori which end-to-end entangled links are activated, but can be configured to support different entanglement-connectivity graphs through Local Operations and Classical Communication (LOCC). Building on this, we propose a resource-driven framework in which multipartite entanglement is treated as a programmable resource that induces a space of admissible entanglement-graph configurations. Within this framework, connectivity provisioning emerges as a particular instance of a more general resource reconfiguration process. To support this paradigm, we introduce a set of structural design parameters that characterize the operational degrees of freedom of the resource and define the admissible transformations independently of the specific mechanism used to realize them. We then formalize Entanglement Rolling as a measurement-based protocol that operates over the induced configuration space, enabling the systematic reconfiguration of the shared resource across a family of multipartite states. Finally, we analyze the proposed framework under realistic noise conditions. Leveraging the Noisy Stabilizer Formalism (NSF), we derive closed-form noise maps that characterize the effect of noise on the resource transformations and show that the proposed approach maintains reliable performance under relevant noise processes.
We derive closed-form propagators for any $K$-qubit subsystem of a closed $N$-qubit network with a single conserved excitation. A single transition amplitude simultaneously controls excitation flow between subsystems, the positivity and complete positivity of every propagator, the entanglement entropy of every subsystem, and the quantum Fisher information for global parameters. Positivity and complete positivity coincide, determined solely by the direction of excitation flow, independently of subsystem size, coherence, or entanglement structure. A propagator is positive and completely positive if and only if it contracts the subsystem state toward its fixed point. The ensemble of propagators collectively constrains global properties inaccessible to any single subsystem. For single-qubit subsystems, we characterize the ensemble's fixed-point distribution and domain of positivity, finding a band of states that lies inside the positivity domain of every propagator yet is never visited by the physical dynamics. The quantum Fisher information decomposes into state and process contributions over any observation window $[t_1,t_2]$, with the state contribution bounded while the process contribution grows secularly. The total Fisher information is minimal when all future propagators are nonpositive and not completely positive, and near its maximum when they are positive and completely positive.
Quantifying quantum resources for simulating the fundamental forces of Nature is sensitive to the mapping of gauge fields onto finite quantum computational architectures. When locally truncating lattice gauge theories in the irreducible representation basis, it has been proposed to further deform the theory via quantum groups. The purpose of this deformation is (1) to provide an infinite tower of finite-dimensional ($d = k+1$) groups systematically approximating the infinite-dimensional gauge links and (2) to restore the physical unitarity of a plaquette operator diagonalization procedure analytically derived from the field continuum by recontracting vertex pairs. For the SU(2)$_k$ Yang-Mills pure-gauge theory, we provide a constructive strategy of gauge-variant completions to extend this unitarity to the entire computational Hilbert space, leading to well-defined time evolution unitaries as targets for optimized circuit synthesis. Leveraging basic circuit decompositions and symmetries of the diagonalized plaquette operator, we report resource upper-bounds on the generalized-controlled-X two-qudit gates for arbitrary local truncation $d$, reducing estimates and scaling relative to the non-deformed theory by three polynomial powers from $O(d^8)$ to $O(d^5)$. Examining the stronger q-deformed gauge constraint, which softens the total flux at vertices, we show that the physical Hilbert space dimension of the deformed plaquette operator scales equivalently to its non-deformed counterpart with a constant factor $0.2563(5)$. Thus, despite affecting interactions at all scales as exemplified by the observed flux hierarchy inversion symmetry, q-deformation continues to pass scrutiny as a reliable truncation offering advantages in quantum circuit synthesis.
Nondestructive detection of single-electron motion is crucial for quantum information processing with electrons trapped in Paul traps. The standard approach in Penning traps is to detect the image current induced on the trap electrodes by the electron's oscillatory motion. However, applying this approach in Paul traps for single electrons is currently hindered by motional frequency fluctuations arising from trap anharmonicities and instabilities in the rf trapping field. In this work, we propose a robust detection scheme exploiting the transient dynamics of parametric driving to overcome these limitations. Distinct from traditional steady-state approaches, our method focuses on the transient regime to break the temporal constraints imposed by steady-state assumptions, thereby enabling fast readout. We show that a controlled ramp of the parametric drive effectively locks the frequency of the electron motion in the transient regime, rendering the signal highly resilient to realistic experimental noise and inherent micromotion. This work paves the way for the experimental realization of nondestructive detection of single-electron motion in Paul traps.
How much energy does a quantum computer consume? Is it more efficient than its classical counterpart? In this work, we take a step towards answering these questions. We define the energy efficiency of a quantum computer as the ratio of the number of algorithms it can perform during a given time to the energy consumed by the hardware during this time. We analyze the most representative physical platforms currently envisioned as building blocks of quantum computers: superconducting qubits, silicon spin qubits, trapped ions, neutral atoms and photonic qubits. Including insights from experts in all these technologies and taking into account algorithm compilation constraints, we discuss the advantages and drawbacks of each platform from an energy standpoint. Beyond providing concrete values for the energy consumption of current quantum computers, we lay the foundation of a framework to benchmark the energy efficiency of any future quantum computing architecture.
The classical simulation of quantum algorithms is a crucial tool for circuit development, testing, and validation. Although acceleration using GPUs significantly reduces simulation time, most high-performance simulators rely on vendor-specific frameworks that target data-center hardware. To broaden access to quantum simulation, this work proposes a vendor-agnostic approach targeting the integrated GPUs commonly found in consumer-grade laptops. A primary challenge in state-vector simulation is its inherently poor spatial locality, which creates a memory bandwidth bottleneck. Consequently, baseline implementations experience a severe degradation in relative GPU speedup as the number of simulated qubits increases. To address this limitation, we introduce a state partitioning optimization that reorganizes the quantum state vector to maximize the last-level cache locality and minimize costly main memory fetches. We evaluate this strategy using a Quantum Phase Estimation algorithm across diverse architectures from Intel, AMD, and Apple. The experimental results demonstrate that the proposed optimization successfully mitigates performance degradation at larger qubit scales. In particular, for a 28-qubit simulation, the optimization reversed a performance deficit on an Intel Core i5, improving the GPU speedup over the CPU from 0.95x to 1.89x, and increased the Apple M1 Pro speedup from 3.71x to 5.88x. Overall, this approach yields consistent execution time improvements, demonstrating the viability of integrated GPUs for efficient quantum circuit simulation.
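The locality problem this abstract describes can be illustrated with a minimal NumPy sketch of single-qubit gate application (an illustrative textbook kernel, not the paper's implementation): the two amplitudes coupled by a gate on qubit k sit 2^k entries apart, so high-index targets touch memory at large strides and defeat the last-level cache unless the state is reorganized.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target):
    """Apply a 2x2 gate to qubit `target` of a state vector in place.

    Amplitudes are paired at stride 2**target: for high target
    indices the paired reads lie far apart in memory, which is the
    spatial-locality problem that state partitioning addresses.
    """
    stride = 1 << target
    psi = state.reshape(-1, 2 * stride)        # blocks of 2**(target+1)
    lo, hi = psi[:, :stride].copy(), psi[:, stride:].copy()
    psi[:, :stride] = gate[0, 0] * lo + gate[0, 1] * hi
    psi[:, stride:] = gate[1, 0] * lo + gate[1, 1] * hi
    return state

n = 10
state = np.zeros(1 << n, dtype=complex)
state[0] = 1.0                                  # |0...0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):                              # Hadamard on every qubit
    apply_single_qubit_gate(state, H, q)
# yields the uniform superposition over all 2**n basis states
```

The `.copy()` calls make the update safe but double the traffic; production simulators fuse the two half-block updates, and the partitioning optimization in the paper additionally reorders the state so that consecutive gates hit cache-resident blocks.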
Characterizing large quantum systems with minimal assumptions is a central challenge in quantum information science. Self-testing provides the strongest form of certification by identifying the underlying quantum state solely from observed measurement statistics. However, existing self-testing methods for generic $n$-partite states face a scalability barrier, requiring exponentially many samples in the system size. In this work, we overcome this barrier by introducing a protocol that robustly self-tests almost all $n$-qubit states with only polynomial sample complexity. The key ingredient is an efficient scheme for device-independently evaluating multipartite Pauli measurements, which can be implemented using only a linear number of ancillary Bell pairs together with standard projective and Bell measurements, well within the reach of current quantum technology. Beyond self-testing states, our scheme provides a general framework for implementing a wide range of learning and certification protocols in the device-independent setting, thereby opening a scalable route to device-independent quantum information processing in large-scale quantum networks.
Independent and identically distributed (i.i.d.) states are ubiquitous in quantum information theory. However, in a practical setting, the i.i.d. assumption is too stringent, and possibly not realistic. A physically more compelling class of 'almost i.i.d.' sources was recently proposed by [Mazzola/Sutter/Renner, arXiv:2603.15792]. In this paper, we introduce two alternative definitions of almost i.i.d. states, based on the normalised quantum Wasserstein distance and on the idea of looking at the average $k$-body marginal. We explore some basic properties of these notions and prove a strict hierarchical relation among them, with Mazzola et al.'s notion being the strictest, the one based on $k$-body marginals the loosest, and the one based on the quantum Wasserstein distance in between. Strict separation is established by means of explicit examples.
We show that the low-energy states of non-Abelian topological orders possess extensive magic which is long-ranged, and cannot be eliminated by a constant-depth local unitary circuit. This refines conventional notions of complexity beyond the linear circuit depth which is required to prepare any topological phase, and provides a new resource-theoretic characterization of topological orders. A central technical result is a no-go theorem establishing that stabilizer states--even up to constant-depth local unitaries--cannot approximate low-energy states of non-Abelian string-net models which satisfy the entanglement bootstrap axioms. Moreover, we show that stabilizer-realizable Abelian string-net phases have mutual braiding phases quantized by the on-site qudit dimension, and that any violation of this condition necessarily implies extensive long-range magic. Extending to higher spatial dimensions, we argue that any state obeying an entanglement area law and hosting excitations with nontrivial fusion spaces must exhibit extensive long-range magic. This applies, in particular, to ground states and low-energy states of higher-dimensional quantum double models.
The performance of quantum resource manipulation protocols, including key examples such as distillation of quantum entanglement, is measured in terms of the rate at which desired target states can be produced from a given noisy state. However, to achieve optimal rates, known protocols require precise tailoring to the quantum state in question, demanding a perfect knowledge of the input and allowing no errors in its preparation. Here we show that distillation of quantum resources in the framework of resource non-generating operations can be performed universally: optimal rates of distillation can be achieved with no knowledge of the input state whatsoever, certifying the robustness of quantum resource distillation. The findings apply in particular to the purification of quantum entanglement under non-entangling maps, where the optimal rates are governed by the regularised relative entropy of entanglement. Our result relies on an extension of the generalised quantum Stein's lemma in quantum hypothesis testing to a composite setting where the null hypothesis is no longer a fixed quantum state, but is rather composed of i.i.d. copies of an unknown state. The solution of this asymptotic problem is made possible through new developments in one-shot quantum information and a refinement of the blurring technique from [Lami, arXiv:2408.06410].
We show by a counting argument that even though translation symmetry admits symmetric short-range entangled (SRE) eigenstates, there are not enough such SRE eigenstates to span the zero momentum sector. This means that the fixed point strong-to-weak spontaneous symmetry breaking state of translation symmetry is long-range entangled: it cannot be written as a mixture of SRE states. This is a subtle form of long-range entanglement in mixed states that cannot be detected by long-range connected correlation functions.
We present a new mechanism for long-range entanglement (LRE) in strongly symmetric many-body mixed states that does not rely on symmetry anomalies or long-range correlations. Our primary example is the maximally mixed state in the translation-invariant subspace on a one-dimensional ring. This state is LRE because translationally symmetric short-range entangled states span a subspace whose dimension grows only polynomially with system size, whereas the full translation-invariant subspace grows exponentially. We further discuss certain unconventional properties of this state, including logarithmically growing conditional mutual information, strong-to-weak spontaneous symmetry-breaking, and Rényi-index-dependent operator-space entanglement. We also construct a geometrically non-local Lindbladian to stabilize this state as the steady state. Our results identify dimensional mismatch as a novel route to LRE that is intrinsic to many-body mixed states.
We investigate measurement-induced localization in a continuously monitored one-dimensional Aubry--André--Harper model, focusing on the quantum Zeno regime in which the measurements dominate coherent dynamics. The presence of a quasiperiodic potential renders the problem analytically tractable and enables a controlled study of the interplay between monitoring and disorder. We develop an analytical description based on an instantaneous Schrödinger equation with a measurement-induced effective potential constructed self-consistently from individual quantum trajectories, without relying on postselection. In the quantum Zeno regime, an emergent dominant energy scale reduces the problem to a transfer-matrix formulation of an effective non-Hermitian Hamiltonian, which allows direct computation of the Lyapunov exponent. Complementarily, we extract the localization length numerically from long-time steady-state quantum state diffusion trajectories by reconstructing the intrinsic localized single-particle wave functions and analyzing their spatial decay. These numerical results show quantitative agreement with the effective theory predictions, with controlled corrections of order $J^2/[\lambda^2+(\gamma/2)^2]$ (where $J$ is the hopping amplitude, $\gamma$ the measurement strength, and $\lambda$ the quasiperiodic potential). Our results underscore the connection between the effective non-Hermitian description and the stochastic monitored dynamics, showing the interplay between Zeno-like localization, coherent hopping, and quasiperiodic-disorder-induced localization, while also laying the groundwork for understanding and exploiting measurement-induced localization as a tool for quantum control and state preparation.
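The transfer-matrix step mentioned above can be sketched in a few lines, assuming a generic effective non-Hermitian Aubry-Andre-Harper chain with on-site potential $\lambda\cos(2\pi\beta n) - i\gamma/2$ (the paper's self-consistent construction of the measurement-induced potential is not reproduced here; all numerical values are illustrative):

```python
import numpy as np

def lyapunov_exponent(E, J, lam, gamma, beta=None, N=200_000):
    """Lyapunov exponent of E*psi_n = J*(psi_{n+1} + psi_{n-1}) + V_n*psi_n
    with complex quasiperiodic potential V_n = lam*cos(2*pi*beta*n) - i*gamma/2,
    computed from the 2x2 transfer matrices T_n = [[(E - V_n)/J, -1], [1, 0]]
    with norm renormalization at every step."""
    beta = (np.sqrt(5) - 1) / 2 if beta is None else beta   # golden mean
    v = np.array([1.0 + 0j, 0.0 + 0j])
    log_norm = 0.0
    for n in range(N):
        V = lam * np.cos(2 * np.pi * beta * n) - 0.5j * gamma
        v = np.array([((E - V) / J) * v[0] - v[1], v[0]])
        s = np.linalg.norm(v)
        log_norm += np.log(s)
        v /= s
    return log_norm / N

# Hermitian limit (gamma = 0): Aubry-Andre duality bounds the
# Lyapunov exponent below by ln(lam / (2J)) for lam > 2J.
g = lyapunov_exponent(E=0.0, J=1.0, lam=4.0, gamma=0.0)
```

Setting gamma > 0 adds the measurement-induced imaginary part and shifts the exponent, which is the quantity the paper compares against quantum-state-diffusion trajectories.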
Quantum computers promise exponential speedups for problems in cryptography, chemistry, and optimization. Realizing this promise requires fault tolerance: physical qubits are noisy, so logical qubits must be encoded redundantly across many physical ones using quantum error-correcting codes. In most practical fault-tolerance schemes, T gates cannot be implemented transversally and instead require costly magic-state distillation protocols involving a complex set of operations. As a result, T-gate count can dominate the resource budget of large-scale quantum computations, making T-count minimization a central bottleneck on the path to quantum advantage. Existing T-count optimization tools, however, do not scale to the circuits that quantum advantage demands. We present theoretical and practical results on T-gate optimization. On the theoretical side, we give a linear-time randomized algorithm for phase folding, based on a novel randomized static analysis. Our static analysis soundly approximates the set of reachable quantum states with an arbitrarily high probability. Our key insight is a static analysis that does not track symbolic expressions, but propagates constant-width bitstrings down the circuit. On the practical side, our implementation, TZAP, is multiple orders of magnitude faster than state-of-the-art tools -- such as PyZX, VOQC, and Feynman -- closely matches their T-count reductions on standard benchmarks, and within seconds on a laptop computer can optimize circuits with millions of gates.
For three-dimensional non-interacting multi-band metals, we show that important information about the shape and the quantum geometry of Fermi surfaces is encoded in the subleading logarithmic term of bipartite charge fluctuations. This logarithmic term is related to the dimensionless $|\mathbf{q}|^3$-coefficient of the structure factor in momentum space, and both quantities can be expressed as Fermi surface integrals of the Fermi surface curvature tensor and the quantum metric tensor. When the real-space partition surface is a quadric (i.e., a sphere or ellipsoid), the logarithmic coefficient satisfies a topological bound depending only on the Euler characteristic and the Chern number of the Fermi surface, illustrating a non-trivial interplay between topology and quantum geometry in multi-band metals.
We construct families of deformations of the double-scaled SYK (DSSYK) model and investigate their bulk interpretation. We introduce microscopic deformations of the SYK model which, after ensemble averaging and in the double-scaling limit, are described by a transfer matrix encoding the recurrence relations of basic orthogonal polynomials in the q-Askey scheme. For certain families of deformations in the semiclassical limit at finite temperature, the chord number (encoding Krylov complexity) corresponds to the length of an Einstein-Rosen bridge connecting an End-Of-The-World brane to an anti-de Sitter asymptotic boundary. By increasing one of the deformation parameters, the models eventually exhibit discrete energy levels, signaling a new geometric transition in sine dilaton gravity. Via the SYK-Schur duality, Krylov complexity also admits a representation-theoretic interpretation as the spread of the SU(2) spin in the index of an $\mathcal{N}=2$ SU(2) gauge theory. We study the operator algebras of the deformed theories. The algebras can be type II$_1$ or type I$_\infty$ factors, depending on the operators that are included. The entanglement entropy between the type II$_1$ algebras for a pure state manifests as an extremal surface through the Ryu-Takayanagi formula. We discuss connections between our results and the emergence of baby universes in the bulk.
Inverse problems in scientific sensing are often solved with either hand-designed regularizers or supervised networks trained on simulated labels, yet both can fail when the forward model is nonlinear, spectrally coupled, and physically delicate. We study this issue for noise sensing based on nitrogen-vacancy (NV) centers in diamond, where a quantum sensor measures magnetic-noise spectra generated by sparse spin sources. We show that replacing a common scalar/coherent forward approximation with a tensor power-summed dipolar operator changes the inverse landscape and exposes a center-collapse failure mode in free-density optimization. We propose NeTMY, an amortization-free coordinate neural field coupled to the differentiable NV forward model, with annealed positional encoding, multiscale optimization, sparsity/gating, and spectrum-fidelity losses. Across sparse synthetic reconstructions generated by the corrected operator, NeTMY achieves the best localization and distributional metrics in the tested benchmark. Mechanism experiments show that NeTMY does not directly execute the raw density-space gradient; its parameterization smooths and redistributes updates, mitigating the center-collapse pathology. These results position NV quantum sensing as a useful testbed for physics-faithful neural inverse problems.
We propose a superconducting circuit based on the Bloch transistor, a quantum device consisting of two small-capacitance Josephson junctions connected in series and having a small island in between. This device is driven by two dc electrical sources controlling Josephson oscillations of frequency $f_J = 2e\overline{V_J}/h$, related to the average voltage $\overline{V_J}$ on the transistor, and Bloch oscillations of frequency $f_B = \overline{I_B}/2e$, related to the average current $\overline{I_B}$ injected into the transistor island. Due to the Bloch transistor properties, these two types of oscillations can mutually phase lock, i.e., $f_J = f_B$. This leads to the formation of current steps on the current-voltage curve at $\overline{I}_B = 2ef_J$, which are similar to the dual Shapiro steps appearing at current $\overline{I}=2ef$ under microwave irradiation of frequency $f$. Moreover, the transconductance $\overline{I_B}/\overline{V_J}$ takes the fundamental value of $1/R_Q$, where $R_Q = h/4e^2$ is the resistance quantum. The obtained results pave the way to an alternative quantum standard of resistance, based on a superconducting circuit and operating without a strong applied magnetic field.
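The locking condition $f_J = f_B$ fixes the transconductance by arithmetic alone: $2e\overline{V_J}/h = \overline{I_B}/2e$ gives $\overline{I_B}/\overline{V_J} = 4e^2/h = 1/R_Q$. A quick numeric check (exact SI constants; the 5 GHz drive frequency is an arbitrary example value):

```python
# Exact SI values (2019 redefinition)
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J s

R_Q = h / (4 * e**2)  # superconducting resistance quantum, ~6.45 kOhm

# Phase locking f_J = f_B:  2*e*V/h = I/(2*e)  =>  I/V = 4*e^2/h = 1/R_Q,
# independent of the chosen oscillation frequency.
f = 5e9               # example 5 GHz Josephson oscillation (assumed)
V = h * f / (2 * e)   # average voltage on the transistor, ~10.3 uV
I = 2 * e * f         # locked Bloch-oscillation current, ~1.6 nA
assert abs(I / V - 1 / R_Q) < 1e-9 / R_Q
```

The frequency $f$ drops out of the ratio, which is why the step transconductance is a candidate for a resistance standard.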
We investigate the spectral consequences of the uniquely determined Hermitian ordering of the Dirac Hamiltonian with spatially varying mass. In contrast to the nonrelativistic case, where continuous families of admissible prescriptions exist, the relativistic Dirac operator admits a single consistent ordering compatible with probability-current conservation. This requirement generates an additional logarithmic-gradient term proportional to the spatial variation of the mass profile. We show that this contribution modifies the effective kinetic operator and induces a universal deformation of the spectral quantization condition. In compact geometry, an explicit analytic computation reveals a mode-dependent second-order spectral shift that becomes strongly enhanced near the mass-inversion threshold. These results demonstrate that the consistent relativistic ordering of the Dirac operator leads to observable modifications of discrete spectra in spatially inhomogeneous scalar backgrounds.
Hybrid quantum-classical (HQC) algorithms, such as the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA), are central to near-term quantum computing but remain challenging to test. Sampling-based fuzzing can expose faulty or non-convergent configurations, but under realistic execution budgets, it may miss failure-prone regions in the joint space of classical optimizer settings and quantum circuit parameters. This paper studies failure-guided fuzzing for HQC programs. It models a hybrid input as a pair of classical optimizer hyperparameters and quantum circuit parameters, and evaluates a two-phase strategy that first searches for non-convergent seeds and then locally fuzzes circuit parameters around those seeds. To understand where the gains come from, five budgeted strategies are compared: random hybrid testing, classical enumeration without fuzzing, random-seed local fuzzing, enumeration-seed local fuzzing, and concolic-seed local fuzzing. The study is implemented on a VQE instance and a QAOA MaxCut instance in Qiskit. The results show that failure-guided local fuzzing is the main driver of improvement over random testing, while concolic seed discovery provides additional benefits on VQE but is less stable on QAOA. These findings suggest that reusing failure information is a promising direction for HQC testing, but that the value of concolic seed discovery is workload-dependent.
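The two-phase strategy can be sketched with a toy stand-in for the hybrid program (plain Python, not Qiskit; `run_hqc`, the quadratic loss landscape, and all parameter ranges are illustrative assumptions):

```python
import random

def run_hqc(hyper, theta):
    """Stand-in for a VQE/QAOA run: plain gradient descent on
    sum(x_i^2); returns the final loss (hypothetical surrogate)."""
    lr, steps = hyper
    x = list(theta)
    for _ in range(steps):
        x = [xi - lr * 2 * xi for xi in x]     # gradient of sum(x_i^2)
    return sum(xi * xi for xi in x)

def converged(loss, tol=1e-3):
    return loss < tol

def failure_guided_fuzz(budget=200, seed=0):
    """Phase 1: random search for non-convergent seed configurations.
       Phase 2: local fuzzing of circuit parameters around each seed."""
    rng = random.Random(seed)
    failures = []
    for _ in range(budget // 2):                          # phase 1
        hyper = (rng.uniform(0.01, 1.5), rng.randint(5, 50))
        theta = [rng.uniform(-3, 3) for _ in range(4)]
        if not converged(run_hqc(hyper, theta)):
            failures.append((hyper, theta))
    found = list(failures)
    for hyper, theta in failures[: budget // 2]:          # phase 2
        theta2 = [t + rng.gauss(0, 0.1) for t in theta]   # local mutation
        if not converged(run_hqc(hyper, theta2)):
            found.append((hyper, theta2))
    return found
```

Here learning rates above 1.0 make the toy iteration diverge, so phase 1 reliably discovers non-convergent seeds and phase 2 reuses them; the paper's evaluation applies the same loop structure to real VQE and QAOA instances.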
We present an empirical evaluation of quantum entanglement in agent coordination within quantum multi-agent reinforcement learning (QMARL). While QMARL has attracted growing interest recently, most prior work evaluates quantum policies without provable baselines, making it impossible to rigorously distinguish quantum advantage from algorithmic coincidence. We address this directly by evaluating a decentralized QMARL framework with variational quantum circuit (VQC) actors that share entangled states. In the CHSH game, which has a mathematically proven classical performance ceiling of 0.75 win rate, we show that entangled QMARL agents approach the Tsirelson limit of 0.854, providing clear evidence of their quantum advantage. We show that unentangled quantum circuits match the classical baseline, confirming that entanglement, and not the quantum circuit itself, is the active coordination mechanism. We also explore the effect of specific entanglement structures, finding that some Bell states enable coordination gains while others actively harm performance. On cooperative navigation (CoopNav), QMARL without entanglement achieves $\sim2\times$ improvement in success rate over classical MAA2C ($\sim$0.85 versus $\sim$0.40), with the hybrid configuration (a quantum actor paired with a classical centralised critic) outperforming both fully classical and fully quantum solutions. We present our experimental analysis and discuss future work.
We develop an open quantum theory for shot-noise dynamics in dissipative chiral transport. By mapping the system under consideration onto a quantum circuit, we show that current noise is governed by two competing factors: the average occupancy distribution and particle-number fluctuations. With energy fully relaxed, shot noise is strongly suppressed, reflecting the stacking of electrons into lower energy states due to dissipation. This process quenches the partition noise from partially occupied levels, and finally isolates the residual noise protected by strong $U(1)$ symmetry. Moreover, selectively heating the source against the bath uncovers the underlying competition between the noise contributions from the occupancy distribution and those from the particle-number fluctuations. It triggers a sign reversal in inter-channel correlation noise, a signature masked by seemingly identical single-channel thermal noises. We propose an inversion scheme to experimentally reconstruct the hidden occupancy distribution directly from measurable noise cumulants.
Superconducting nanowire single-photon detectors (SNSPDs) have demonstrated timing jitter in the few-picosecond regime, yet their timing resolution deteriorates substantially under high-count-rate operation. Existing interpretations mainly attribute this degradation to deterministic waveform distortions, such as multiphoton responses and pulse pile-up, yet the experimentally observed jitter broadening at high count rates cannot be fully accounted for within this picture. Here, we show that stochastic baseline fluctuations arising from finite-memory readout dynamics constitute an intrinsic source of the count-rate-dependent timing jitter in SNSPD systems. For stochastically arriving photons, overlapping recovery responses accumulate in the readout chain and generate statistically fluctuating baselines, which are converted into timing uncertainty through threshold-based timing extraction. We develop a stochastic-process framework that quantitatively connects photon statistics, readout dynamics, and timing jitter. The framework predicts characteristic scaling behaviors, including a nonmonotonic dependence of baseline fluctuations under pulsed excitation with a maximum near half of the repetition frequency. These predictions are quantitatively verified through systematic variations of count rate, circuit time constant, and detector dynamical properties. Our results identify stochastic baseline dynamics as a fundamental mechanism limiting timing resolution in high-count-rate SNSPD operation and provide a general framework for optimizing finite-memory high-speed photon-counting systems.
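The core mechanism above, Poissonian arrivals filtered by a finite-memory recovery response, can be illustrated with a shot-noise sketch based on Campbell's theorem (the exponential kernel and all numerical values are illustrative assumptions, not the paper's full stochastic-process framework):

```python
import numpy as np

rng = np.random.default_rng(1)

nu, A, tau = 0.5, 1.0, 2.0      # count rate, pulse amplitude, recovery time
dt, T = 0.05, 10_000.0          # time step and total simulated time

# Poisson spike train convolved with an exponential recovery kernel
n_steps = int(T / dt)
spikes = rng.poisson(nu * dt, n_steps).astype(float)
t_k = np.arange(0.0, 10 * tau, dt)
kernel = A * np.exp(-t_k / tau)
baseline = np.convolve(spikes, kernel)[:n_steps]
baseline = baseline[int(20 * tau / dt):]     # discard the initial transient

# Campbell's theorem for filtered Poisson noise:
#   mean = nu * integral h(t) dt   = nu * A * tau
#   Var  = nu * integral h(t)^2 dt = nu * A^2 * tau / 2
mean_theory = nu * A * tau
var_theory = nu * A**2 * tau / 2
```

The baseline variance grows with the count rate nu, and threshold-based timing converts this fluctuating baseline into timing jitter, which is the count-rate-dependent contribution the paper quantifies.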
Nitrogen-vacancy (NV) centers in diamond are a leading platform for solid-state quantum sensing and quantum information processing. While most optical studies rely on the visible fluorescence associated with the triplet transitions, the infrared singlet transition near $1042$ nm, which is typically considered dark within the singlet manifold of the NV optical cycle, provides an alternative optical channel. Here, we report wavelength-resolved optically detected magnetic resonance (ODMR) measurements of this infrared emission. We directly observe ODMR contrast in the $1042$ nm emission and analyze its dependence on the magnetic field. The field-dependent spectral dispersion of the ODMR signal demonstrates that the spin-state information encoded in the NV center is transcribed to the infrared singlet emission through the spin-selective intersystem crossing, in close analogy to the visible fluorescence readout. These results establish infrared ODMR as a high-fidelity optical readout pathway. Crucially, by extending spin-state transcription directly into the $1300-1600$ nm range, this work demonstrates a direct, conversion-free interface between diamond spin-qubits and standard telecommunication infrastructure, bypassing the efficiency bottlenecks of active frequency conversion and benefiting from the already well-developed technologies in this range of the electromagnetic spectrum.
The ability to detect single photons is a fundamental requirement for quantum technologies, including communication, computing and sensing. To achieve scalability and practical deployment, increasing attention is being directed toward integration of detectors with photonic integrated circuits, which offer compactness and compatibility with mass production. Superconducting nanowire single-photon detectors have emerged as the leading solution, combining near-unity efficiency, high temporal performance and the ability to be embedded across a wide range of photonic material platforms. In this review we trace the development of integrated superconducting nanowire single-photon detectors from early demonstrations to recent advances, outlining the progress in device architectures, material engineering and integration strategies. We also discuss performance benchmarks, emerging alternative designs, and the future opportunities and challenges for this rapidly evolving field.
Physical Unclonable Functions (PUFs) are hardware security primitives whose inherent physical complexity can be exploited for secure authentication and cryptographic key generation. Silicon photonic devices, owing to their suitability for quantum and artificial intelligence applications alongside standard CMOS fabrication processes, constitute a highly promising substrate for integrated multifunctional PUFs. Despite the advanced security guarantees offered by quantum cryptographic protocols and the central role of silicon photonics in quantum technologies, quantum readout strategies based on single-photon states for photonic PUFs remain largely unexplored. In this work, we experimentally demonstrate a silicon nitride (SiN) programmable photonic Mach-Zehnder interferometer mesh that implements a unitary transformation and operates as a PUF, whose secret physical signature arises from uncontrollable waveguide variations during fabrication. Using experimentally derived parameters from the SiN integrated mesh, we further introduce and numerically evaluate a quantum readout protocol that combines single-photon states with PUFs. Maximally mixed quantum states are employed to conceal the underlying unitary transformation from passive eavesdropping. Security against adversaries possessing devices fabricated under similar conditions is assessed, with authentication performance quantified through Monte Carlo analysis of the false acceptance and false rejection rates as a function of the number of detected events and corrected errors. The results indicate exceptional performance with equal error rates as low as $10^{-14}$, highlighting the potential of quantum secure PUFs for high security authentication applications.
By implementing the Bogoliubov-de Gennes (BdG) formalism of population-imbalanced atomic Fermi gases with pairing interactions in a thin spherical shell, we characterize the Larkin-Ovchinnikov-Fulde-Ferrell (LOFF) state in such a compact geometry. We first construct a phase diagram showing where uniform solutions of spin-polarized Fermi superfluid from the BdG equation cease to exist due to the vanishing order parameter. Near the boundary, various LOFF states with spatially modulating order parameters and density profiles can survive as convergent solutions to the BdG equation. When both uniform and LOFF solutions are present, we compare their grand potentials to determine the energetically favorable state and find that the LOFF states with multiple nodes in the order parameter become more stable at higher spin polarization. However, the LOFF state only survives close to the phase boundary where the uniform solutions vanish, indicating fragility of the LOFF state on a spherical surface. We also briefly discuss possible implications.
Rydberg atoms provide a powerful platform for exploring strongly interacting quantum systems, both in free space and in structured electromagnetic environments, with growing applications in quantum technology. Accurately modeling their single-atom properties and mutual interactions is essential for interpreting experiments and designing new architectures. We present a unified theoretical framework for Rydberg atoms and their interactions based on multi-channel quantum defect theory (MQDT) and static electromagnetic Green's tensors. MQDT provides a precise description of Rydberg states of divalent atoms such as strontium and ytterbium, while the Green's tensor formalism provides a general and flexible approach for calculating interactions between two Rydberg atoms in arbitrary geometries, including modifications induced by nearby surfaces. We implement this framework in an updated version of the open-source PairInteraction software [Weber et al., J.~Phys.~B~50 (2017)]. The implementation leverages high-performance libraries and achieves speedups of one order of magnitude for pair-potential calculations compared to prior software. We demonstrate the capabilities of the framework through example applications to divalent atoms and show excellent agreement with experimental data for an exemplary Stark map of $^{174}$Yb. The modular software architecture enables the community to extend it further.
Classifying interactions is key in the physical sciences, and bonding mechanisms in matter-antimatter systems remain particularly enigmatic. Here we focus on a paradigmatic example of positronium hydride (PsH) dimer composed of two protons, two positrons, and four electrons, whose bonding nature has been previously described as either ionic, covalent, or van der Waals-like. Accurate quantum Monte Carlo calculations show that the two positrons occupy a delocalized molecular orbital that envelopes the two hydrogen anions and responds as a collective dipole to an applied electric field. This positronic bonding stems from quantum correlations that resemble a single covalent bond formed between negatively charged pseudo-nuclei, but with a bond strength commensurate with the traditional van der Waals interaction. Our findings suggest that the ability to form delocalized proto-bonds is a more general property of quantum systems, and could be present in a broader class of particles, antiparticles, and quasi-particles interacting with matter.
The possibility of laser cooling and the presence of closely spaced rovibrational doublets make polyatomic molecules an attractive platform for $\mathcal{P},\mathcal{T}$-violation searches. We study the spectrum of the lowest rovibrational state of the AcOCH$_3^+$ symmetric top molecule. The all-electron electronic-structure computation was performed within a relativistic coupled cluster method with double and perturbative triple excitations. The rovibrational wavefunctions are obtained using a coupled channel technique, taking into account all rovibrational effects and anharmonicities of the potential. As a result, the vibrational frequencies, as well as the values of the electric dipole moments for the rovibrational states, were computed.
We investigate realizations of (1+1)-dimensional fusion category symmetries on tensor-product Hilbert spaces, allowing for mixing with quantum cellular automata (QCAs). It was argued recently that any such realizable symmetry must be weakly integral. We develop a systematic analysis of QCA-refined realizations of fusion categories and prove two statements. First, we show that, under certain physical assumptions on defects, any QCA-refined realization has QCA and symmetry-operator indices determined by the categorical data, up to the freedom of redefining the symmetry operators. Second, we construct a lattice model that provides a QCA-refined realization for any weakly integral fusion category symmetry on a tensor product Hilbert space. We also compute indices of the QCAs in our lattice model and show agreement with the first result. As an application of the general construction, we give an explicit QCA-refined realization of general Tambara-Yamagami categorical symmetries.
We show that the complex-valued spectra of gapped non-Hermitian systems admit topological configurations. These arise when the exceptional points (EPs) in the energy Riemann sheets of such models are annihilated after threading them across the boundary of the Brillouin zone. This results in a non-trivially closed branch cut that is protected by an energy gap in the spectrum. The presence or absence of such branch cuts establishes topologically distinct configurations for fully non-degenerate systems, and tuning between them requires a closing of the gap, forming exceptional-point degeneracies. We provide an outlook toward experimental realizations in metasurfaces and single-photon interferometry.
Practical quantum key distribution (QKD) protocols require a finite-size security proof. The phase error correction (PEC) approach is one of the general strategies for security analyses that has successfully proved finite-size security for many protocols. However, the conventional PEC approach cannot achieve the asymptotically optimal key rate in general, as long as the failure probability of PEC is estimated through the phase error rate. In this work, we propose a new PEC-type strategy that can provably achieve the asymptotically optimal key rate. The key piece for this is a virtual protocol based on universal source compression with quantum side information, which is of independent interest. A universal source compression with quantum side information protocol is first constructed for fixed-length independent and identically distributed (i.i.d.)~setups and then extended to adaptive-length setups with the restrictions on possible states imposed by joint random variables. Combined with the reduction method to collective attacks, this enables us to tightly evaluate the failure probability of PEC for permutation-symmetric QKD protocols, and thus leads to asymptotically tight analyses. As a result, the security of any permutation-symmetrizable QKD protocol gets reduced to the estimation problem of a single conditional Rényi entropy, which can be efficiently solved by a convex optimization.
As quantum computers scale, single-chip architectures face inherent limitations in qubit count. This drives the need for modular quantum computing and Quantum Data Centers (QDCs), where multiple quantum processing units (QPUs) are interconnected to enable the distributed execution of a quantum algorithm. However, evaluating distributed quantum computing (DQC) architectures is challenging. Classical simulation is limited by the exponential growth of the state vector, restricting its ability to model large systems and realistically capture hardware noise and timing. Meanwhile, implementing QDCs introduces interconnect noise challenges such as transduction inefficiency and optical-fiber losses. In this work, we introduce a hardware-based emulation framework that partitions a single quantum processor's qubit coupling map into multiple logical QPUs. We show how noise arising from transduction and optical fiber can be modeled by adding an ancilla qubit representing the environment, based on quantum collisional dynamics. This model is then translated into a gate-based circuit in which the couplings between partitions act as controllable noisy quantum communication channels. We demonstrate the framework on IBM quantum hardware by executing remote gates under controllable communication noise. To highlight the flexibility of the platform, we further replicate the results of a distributed Grover's search implemented on an ion-trap system. Finally, we test larger circuits, namely Grover's search algorithm and the Quantum Fourier Transform (QFT), achieving reasonable fidelity across logical QPUs. Overall, the framework enables hardware-level emulation beyond the limits of classical simulation, captures noise sources through physical qubits, and is compatible with any platform supporting the Qiskit SDK.
A longstanding goal in quantum information science is to demonstrate quantum computations that cannot be feasibly reproduced on a classical computer. Such demonstrations mark major milestones: they showcase fine control over quantum systems and are prerequisites for useful quantum computation. To date, quantum advantage has been demonstrated, for example, through violations of Bell inequalities and sampling-based quantum supremacy experiments. However, both forms of advantage come with important caveats: Bell tests are not computationally difficult tasks, and the classical hardness of sampling experiments relies on unproven complexity-theoretic assumptions. Here we demonstrate an unconditional quantum advantage in information resources required for a computational task, realized on Quantinuum's H1-1 trapped-ion quantum computer operating at a median two-qubit partial-entangler fidelity of 99.941(7)%. We construct a task for which the most space-efficient classical algorithm provably requires between 62 and 382 bits of memory, and solve it using only 12 qubits. Our result provides the most direct evidence yet that currently existing quantum processors can generate and manipulate entangled states of sufficient complexity to access the exponentiality of Hilbert space. This form of quantum advantage -- which we call quantum information supremacy -- represents a new benchmark in quantum computing, one that does not rely on unproven conjectures.
Linear operations, e.g., vector-matrix and vector-vector multiplications, are core operations of modern neural networks. To reduce computational time, these operations are implemented as parallel computations on dedicated coprocessors. In this work we show that an open quantum system (OQS) consisting of bosonic modes interacting with bosonic reservoirs can be used as an analog thermodynamic coprocessor that implements multiple vector-matrix multiplications with stochastic matrices in parallel. Input vectors are encoded in the occupancies of the reservoirs, and the output is presented by stationary energy flows. The operation takes only the time needed for the system to reach a non-equilibrium stationary state, independently of the number of reservoirs, i.e., of the input vector dimension. Taking technological limitations into account, a device of $5\times5$ cm$^2$ area covered with such coprocessors could perform on the order of $10^{11}$ operations per second per mode of the OQS. The computations are accompanied by entropy growth. We construct a direct mapping between open quantum systems and the electrical crossbar structures frequently used in analog vector-matrix multiplication: dissipation rates multiplied by the OQS mode frequencies play the role of conductivities, reservoir occupancies play the role of potentials, and stationary energy flows play the role of electric currents.
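The crossbar analogy in the final sentence above can be made concrete with a few lines of linear algebra. The sketch below is purely illustrative, with made-up rates, frequencies, and occupancies: "conductances" are formed as dissipation rates times mode frequencies, reservoir occupancies act as potentials, and the stationary "energy flows" are just the resulting matrix-vector product.

```python
import numpy as np

# Illustrative sketch of the crossbar analogy (all numbers made up):
# dissipation rates times mode frequencies ~ conductances G,
# reservoir occupancies ~ potentials V, stationary energy flows ~ currents G @ V.
rng = np.random.default_rng(0)

rates = rng.uniform(0.1, 1.0, size=(4, 3))   # dissipation rates (hypothetical)
freqs = rng.uniform(1.0, 2.0, size=3)        # mode frequencies (hypothetical)
G = rates * freqs                            # "conductance" matrix of the crossbar

occupancies = rng.uniform(0.0, 5.0, size=3)  # input vector encoded in occupancies
flows = G @ occupancies                      # output read off as energy flows

# The same operation viewed as a product with a column-stochastic matrix:
# normalizing each column of G yields a stochastic S acting on the input.
S = G / G.sum(axis=0, keepdims=True)
```

The point of the analog device is that `flows` for all columns is produced in parallel by relaxation to the stationary state, rather than by sequential multiply-accumulate steps.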
The interplay between topology and quantum criticality has given rise to the notion of symmetry-enriched criticality, which has attracted considerable attention in recent years. In this Letter, we demonstrate that parity-time (PT) symmetry enriches non-Hermitian critical points, establishing a topologically distinct class of non-unitary criticality. Through the analytic solution of PT-symmetric free-fermion models, we reveal a new family of critical points that are topologically nontrivial and host robust edge modes. Crucially, these points cannot be adiabatically connected to trivial ones without breaking PT symmetry or crossing a multicritical point, and are distinct from their Hermitian counterparts. We further show that, at these PT-symmetry-enriched critical points, conformal scaling of the entanglement entropy necessarily comes with a quantized imaginary subleading term, whose quantization is set by the number of boundary modes in the reduced density matrix. This term is robust against PT-symmetric disorder and interactions, and admits an interpretation as the Affleck-Ludwig $g$ factor associated with the boundary states. These phenomena are shown to arise from a generalized mass inversion unique to non-Hermitian criticality.
One of the most fundamental problems in distribution testing is the identity testing problem: given samples $x_1,\ldots,x_s$, the goal is to determine whether the samples are drawn from a target distribution $\mathcal{D}$. When $\mathcal{D}$ is a distribution over $\{0,1\}^n$, the optimal sample complexity of identity testing is known to be $\Omega(\sqrt{2^n})$. Furthermore, most existing results assume that the samples $x_1,\ldots,x_s$ are generated independently from an unknown distribution. In this work, we overcome both of these limitations by initiating the study of distribution testing in a more realistic setting. In our model, the unknown distribution is promised to be efficiently samplable, while the observed samples $x_1,\ldots,x_s$ are allowed to be adversarially generated and arbitrarily correlated. Under this model, we show that polynomially many samples suffice to verify distributions. We further characterize the computational complexity of verifying classically and quantumly samplable distributions. Our techniques also extend to the verification of quantum states. In establishing some of our results, we employ Kolmogorov complexity techniques in a novel manner. We also present multiple applications of Kolmogorov complexity that are of independent interest. In particular, we show that certified randomness with a classical efficient prover can be achieved without computational assumptions when inefficient verification is allowed. Furthermore, we show that a natural quantum extension of a well-studied Kolmogorov complexity measure provides a good benchmark for certifying sampling-based quantum advantage.
Quantum emitters coupled to nanophotonic structures are an excellent platform for controllable single-photon scattering. The tunable light-matter interaction enables the construction of a single-photon switch -- a device that can route a single photon from an input port to a selected output port. Such single-photon switching devices can be integrated into reconfigurable photonic circuits to actively control the photon propagation direction in a quantum network. Ideally, a single-photon switch should operate with high speed, efficiency, and fidelity, preserving the state of the input photon in the routing process. This review brings together key input-output methods from quantum optics, theoretical proposals of emitter-based single-photon routing mechanisms, and experimental demonstrations of single-photon switching devices across different physical platforms, including semiconductor quantum dots, neutral atoms, superconducting qubits, and color centers. We highlight the need for reporting the key figures of merit (speed/efficiency/fidelity) in future single-photon switch demonstrations to support further developments in the field.
Creation and manipulation of non-classical states of light is rapidly becoming a focus of modern attosecond science. Here, we demonstrate numerically how interaction with such states can trigger the emergence of a many-body system with spontaneously broken symmetry by considering a modification of the well-known superradiance problem first encountered by Dicke. Like Dicke, we investigate photon emission by ensembles of indistinguishable atoms. In contrast to his setting, however, we leverage symmetry-based selection rules to suppress the emission of single photons by single atoms. A steady state is therefore only reached following a spontaneous transition into a collective symmetry-broken state of atoms and photonic modes. This transition permanently locks the atomic dipoles to the quantum field experienced by the system at a particular instant, transforming the entire setup into a potent quantum sensor that reproduces the phase of the recorded quantum fluctuation.
Estimating quantum entropies and divergences is an important problem in quantum physics, information theory, and machine learning. Quantum neural estimators (QNEs), which utilize a hybrid classical-quantum architecture, have recently emerged as an appealing computational framework for estimating these measures. Such estimators combine classical neural networks with parametrized quantum circuits, and their deployment typically entails tedious tuning of hyperparameters controlling the sample size, network architecture, and circuit topology. This work initiates the study of formal guarantees for QNEs of measured (Rényi) relative entropies in the form of non-asymptotic error risk bounds. We further establish exponential tail bounds showing that the error is sub-Gaussian and thus sharply concentrates about the ground truth value. For an appropriate sub-class of density operator pairs on a space of dimension $d$ with bounded Thompson metric, our theory establishes a copy complexity of $O(|\Theta(\mathcal{U})|d/\epsilon^2)$ for QNE with a quantum circuit parameter set $\Theta(\mathcal{U})$, which has minimax optimal dependence on the accuracy $\epsilon$. Additionally, if the density operator pairs are permutation invariant, we improve the dimension dependence above to $O(|\Theta(\mathcal{U})|\mathrm{polylog}(d)/\epsilon^2)$. Our theory aims to facilitate principled implementation of QNEs for measured relative entropies and guide hyperparameter tuning in practice.
Quantum error correction (QEC) requires ancilla qubits to extract error syndromes from data qubits which store quantum information. However, ancilla errors can propagate back to the data qubits, introducing additional errors and limiting fault-tolerance. In superconducting quantum circuits, Kerr-cat qubits (KCQs), which exhibit strongly biased noise, have been proposed as ancillas to suppress this back-action and enhance QEC performance. Here, we experimentally demonstrate a beamsplitter interaction between a KCQ and a transmon, realizing an effective $\hat{Z}_{cat}\hat{X}_q$ coupling that can be employed for parity measurements in QEC protocols. We characterize the interaction across a range of cat sizes and drive amplitudes, confirming the expected scaling of the interaction rate. These results establish a step towards hybrid architectures that combine transmons as data qubits with noise-biased bosonic ancillas, enabling hardware-efficient syndrome extraction and advancing the development of fault-tolerant quantum processors.
We introduce DeepQuantum, an open-source, PyTorch-based software platform for quantum machine learning and photonic quantum computing. This AI-enhanced framework enables efficient design and execution of hybrid quantum-classical models and variational quantum algorithms on both CPUs and GPUs. For photonic quantum computing, DeepQuantum implements Fock, Gaussian, and Bosonic backends, catering to different simulation needs. To our knowledge, it is the first framework to realize closed-loop integration of three paradigms of quantum computing, namely quantum circuits, photonic quantum circuits, and measurement-based quantum computing, thereby enabling robust support for both specialized and universal photonic quantum algorithm design. Furthermore, DeepQuantum supports large-scale simulations based on tensor network techniques and a distributed parallel computing architecture. We demonstrate these capabilities through comprehensive benchmarks and illustrative examples. With its unique features, DeepQuantum is intended to be a powerful platform for both AI for Quantum and Quantum for AI.
Localizable measurements are joint quantum measurements that can be implemented using only non-adaptive local operations and shared entanglement. We provide a protocol-independent characterization of localizable projection-valued measures (PVMs) by exploiting algebraic structures that any such measurement must satisfy. We first show that a rank-1 PVM on $\mathbb{C}^d\otimes\mathbb{C}^d$ containing an element with the maximal Schmidt rank can be localized using entanglement of a Schmidt number at most $d$ if and only if it forms a maximally entangled basis corresponding to a nice unitary error basis. This reveals strong limitations imposed by non-adaptive local operations, in contrast to the adaptive setting where any joint measurement is implementable. We then completely characterize two-qubit rank-1 PVMs that can be localized with two-qubit entanglement, resolving a conjecture of Gisin and Del Santo, and finally extend our characterization to ideal two-qudit measurements, strengthening earlier results.
We investigate the Floquet spectrum of a detuned, driven two-level system and show that it exhibits exact quasienergy crossings when the detuning is an integer multiple of the energy quantum of the driving field. This behavior can be explained by a hidden time-nonlocal parity, which allows the Floquet modes to be classified as even or odd. A generic feature is then the emergence of exact crossings between quasienergies of different parity. A constructive proof of the existence of the symmetry is based on a scalar recurrence relation. Moreover, we present a general scheme for its numerical computation, which can be applied to models beyond the two-level system. Analytical results are illustrated with numerical data.
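The quasienergy crossings discussed above can be probed numerically from the one-period propagator, whose eigenphases give the quasienergies modulo the drive quantum. A minimal sketch, with an assumed model $H(t)=\tfrac{\delta}{2}\sigma_z + A\cos(\omega t)\,\sigma_x$ and illustrative parameters (not the paper's scheme):

```python
import numpy as np
from scipy.linalg import expm

# Quasienergies of a driven two-level system from U(T), the one-period
# propagator (hbar = 1). Parameters are illustrative only.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

delta, A, w = 2.0, 0.8, 1.0        # detuning = 2x the drive quantum, amplitude, frequency
T = 2 * np.pi / w
steps = 2000
dt = T / steps

U = np.eye(2, dtype=complex)
for k in range(steps):
    t = (k + 0.5) * dt             # midpoint rule for piecewise-constant slices
    H = 0.5 * delta * sz + A * np.cos(w * t) * sx
    U = expm(-1j * H * dt) @ U

# Quasienergies are defined modulo w via the eigenphases of U(T).
phases = np.angle(np.linalg.eigvals(U))
quasienergies = np.sort(-phases / T)
```

Sweeping `delta` through integer multiples of `w` and tracking `quasienergies` is one way to visualize the exact crossings between modes of different parity.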
As qubit decoherence times increase and readout technologies improve, nonidealities in the drive signals, such as phase noise, will represent a crucial limitation to the fidelity achievable at the end of complex control pulse sequences. Although the effect of reference-oscillator phase noise on qubit performance has been studied previously, its interaction with realistic time-dependent control pulses and its contribution to fidelity degradation have not yet been investigated in sufficient detail and remain a critical challenge. Here we study the impact of reference-oscillator phase noise on fidelity with the help of numerical simulations, which allow us to directly account for the interaction between the phase fluctuations in the control signals and the evolution of the qubit state, thereby achieving a comprehensive understanding of the roles played by the different spectral components of phase noise. In particular, we analyze the effect of individual noise frequency contributions, clearly identifying the spectral regions that most critically impact fidelity and establishing their relative weight in the overall fidelity degradation. Our method is based on generating phase-noise realizations consistent with a given power spectral density, which are then applied to the pulse carrier in Qiskit-Dynamics simulations of the qubit's temporal evolution. By comparing the final state at the end of a noisy pulse sequence with that in the ideal case and averaging over multiple noise realizations, we estimate the resulting fidelity degradation. Exploiting an approximate analytical representation of a carrier affected by phase fluctuations, we shed new light on the nature of the different contributions and provide an intuitive physical picture.
Quantum sensor networks promise precision advantages over classical and single-sensor strategies, in particular when the estimator is non-local. We address the problem of finding such estimators through a framework we term spatial quantum sensing: given an underlying field interrogated by a network of quantum sensors at fixed positions, construct an estimator for a property of the field, for example, distinguishing a source of signal, or evaluating the field or its derivatives at an arbitrary point. We first treat polynomial fields, casting the task as an interpolation problem, and then generalize to fields modeled by analytic functions, which yields general least-squares estimators. A central and largely unaddressed question is under what conditions on sensor placement these estimators are well-defined and error-free. For $m$-dimensional arrays we give explicit constructions and proofs in the interpolation setting using algebraic geometry, and establish necessary and sufficient conditions in the general case. Comparing a non-local entangled protocol with the best local strategy, we show that entanglement yields maximal precision in distributed sensing under global resource constraints. Finally, we introduce error-free subspaces, a technique that translates prior knowledge of the field into a reduction in the number of required sensors. We expect these techniques to be broadly useful in sensing problems across scales, ranging from earth-scale experiments to local applications such as biological imaging.
Nonstabilizerness, also known as magic, plays a central role in universal quantum computation. Hypergraph states are nonstabilizer generalizations of graph states and constitute a key class of quantum states in various areas of quantum physics, such as the demonstration of quantum advantage, measurement-based quantum computation, and the study of topological phases. In this work, we investigate the nonstabilizerness of 3-uniform hypergraph states, which are generated solely by controlled-controlled-Z gates, in terms of the stabilizer Rényi entropy (SRE). We find that the SRE of 3-uniform hypergraph states can be expressed using the matrix rank, which reduces the computational cost from $\mathcal{O}(2^{3N})$ to $\mathcal{O}(N^3 2^{N})$ for $N$-qubit states. Based on this result, we exactly evaluate SREs of one-dimensional hypergraph states. We also present numerical results for SREs of several large-scale 3-uniform hypergraph states. Our results should contribute to an understanding of the role of nonstabilizerness in the wide range of physical settings where hypergraph states are employed.
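For orientation, the SRE of the smallest 3-uniform hypergraph state, $\mathrm{CCZ}\,|{+}{+}{+}\rangle$, can be checked by the naive exponential-cost sum over Pauli strings — exactly the cost the rank-based formula avoids. The sketch below uses the standard definition $M_2 = -\log_2 \sum_P \Xi_P^2 - \log_2 2^N$ with $\Xi_P = \langle P\rangle^2/2^N$; it is a brute-force illustration, not the paper's method.

```python
import numpy as np
from functools import reduce
from itertools import product

# Brute-force stabilizer 2-Renyi entropy of CCZ|+++> on N = 3 qubits.
# This enumeration over all 4^N Pauli strings is tractable only for tiny N.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

N = 3
psi = np.full(2**N, 1 / np.sqrt(2**N), dtype=complex)
psi[-1] *= -1                        # CCZ flips the sign of the |111> amplitude

xi = []
for combo in product(paulis, repeat=N):
    P = reduce(np.kron, combo)
    exp_val = np.vdot(psi, P @ psi).real
    xi.append(exp_val**2 / 2**N)     # Pauli probability distribution Xi_P

xi = np.array(xi)
sre2 = -np.log2(np.sum(xi**2)) - N   # M_2; zero iff the state is a stabilizer state
```

For this state the 28 Pauli strings with $|\langle P\rangle| = 1/2$ plus the identity give $M_2 = 5 - \log_2 11 \approx 1.54$, a nonzero value certifying that a single CCZ already leaves the stabilizer polytope.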
While engineering long-range light-matter interactions is the principal aim in waveguide-QED, ironically most of the building blocks rest on local short-range couplings, such as the nearest-neighbor-coupled cavity arrays employed in canonical models. Here, we propose a waveguide-QED system with native long-range interactions, comprising a single emitter coupled to a left-handed transmission line (LHTL). Interestingly, the LHTL emulates a synthetic photonic lattice with a slow logarithmic decay of hopping amplitudes over a distance set entirely by the ratio of the UV and IR cutoffs of the line's dispersion. Its intrinsic long-range nature manifests in the properties of both atom-photon bound and scattering states, which exhibit algebraic localization and accelerated photon propagation, respectively. Using a method of 'running exponents', we develop a unified picture connecting waveguide dispersion to the bound-state and light-front profiles obtained in the strong long-range hopping regime. These results suggest how such transmission lines can enable multi-qubit information processing with tunable-range interactions.
We introduce Extreme Quantum Cognition Machines, a class of quantum learning architectures for deliberative decision making that is tolerant to noisy and contradictory training data. Inspired by the quantum cognition paradigm, Extreme Quantum Cognition Machines are closely related to quantum extreme learning and quantum reservoir computing, where fixed quantum dynamics generates a nonlinear feature map and learning is confined to a linear readout. A dynamical attention mechanism, implemented through an input-dependent interaction term in the Hamiltonian, modulates the quantum evolution and biases the resulting feature embedding toward task-relevant correlations. The approach is validated on linguistic classification tasks, which serve as paradigmatic examples of deliberative inference. Hardware-compatible quantum implementations of the proposed framework are discussed, together with potential applications in symbolic inference, sequence analysis, anomaly detection, and automatic diagnosis, with direct relevance to domains such as biology, forensics, and cybersecurity.
An elementary prediction of the quantization of the gravitational field is that the Newtonian interaction can entangle pairs of massive objects. Conversely, in models of gravity in which the field is not quantized, the gravitational interaction necessarily comes with some level of noise, i.e., non-reversibility. Here, we give a systematic classification of all possible such models consistent with the basic requirements that the non-relativistic limit is Galilean invariant and reproduces the Newtonian interaction on average. We demonstrate that for any such model to be non-entangling, a quantifiable, minimal amount of noise must be injected into any experimental system. Thus, measuring gravitating systems at noise levels below this threshold would be equivalent to demonstrating that Newtonian gravity is entangling. As concrete examples, we analyze our general predictions in a number of experimental setups, and test them on the classical-quantum gravity models of Oppenheim et al., as well as on a recent model of Newtonian gravity as an entropic force.
We study the interconversion of families of quantum states ("statistical experiments") via positive, trace-preserving (PTP) maps and clarify its mathematical structure in terms of minimal sufficient Jordan algebras, which can be seen to generalize the Koashi-Imoto decomposition to the PTP setting. In particular, we show that Neyman-Pearson tests generate the minimal sufficient Jordan algebra, and hence also the minimal sufficient *-algebra corresponding to the Koashi-Imoto decomposition. As applications, we show that a) equality in the data-processing inequality for the relative entropy or the $\alpha$-$z$ quantum Rényi divergence implies the existence of a recovery map also in the PTP case and b) that two dichotomies can be interconverted by PTP maps if and only if they can be interconverted by decomposable, trace-preserving maps. We thoroughly review the necessary mathematical background on Jordan algebras. As a step beyond the finite-dimensional case, we prove Frenkel's formula for approximately finite-dimensional von Neumann algebras.
We develop a fidelity-informed neural pulse-compilation framework for a continuous family of single-qubit gates on a three-qubit liquid-state nuclear magnetic resonance (NMR) processor. Instead of decomposing each target unitary into a sequence of calibrated basis gates, the method learns a direct map from the axis-angle parameters of an arbitrary $U \in \mathrm{SU}(2)$ operation to a piecewise-constant radio-frequency control sequence that implements the desired transformation. Training is performed end-to-end through the time-ordered propagator of the driven Hamiltonian using global-phase-insensitive unitary fidelity as the learning signal. We show numerically that a single model generalizes across a continuous range of gate parameters and experimentally validate representative compiled pulses on a benchtop three-qubit NMR device. In addition, we analyze sensitivity to structured perturbations in Hamiltonian and control parameters by introducing a prescribed uncertainty set and performing a comparative risk-aware redesign based on right-tail Conditional Value-at-Risk (RU-CVaR). This stage produces pulse solutions with broader tolerance margins within the chosen uncertainty model. The results demonstrate continuous pulse-level gate synthesis in an experimentally accessible setting and illustrate a hardware-aware compilation strategy that can be extended to other quantum platforms. While the uncertainty model considered here is tailored to NMR, the neural compilation and risk-aware optimization framework are general and may be useful in architectures where calibration overhead, parameter drift, or control constraints make repeated per-gate optimization costly.
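The two ingredients named in the abstract above — the time-ordered propagator of a piecewise-constant control sequence and a global-phase-insensitive unitary fidelity — can be sketched in a few lines. The single-qubit Hamiltonian and control amplitudes below are illustrative placeholders, not the paper's NMR model:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def propagator(amps_x, amps_y, dt):
    """Time-ordered product of constant slices H_k = ax_k*sx + ay_k*sy (hbar = 1)."""
    U = np.eye(2, dtype=complex)
    for ax, ay in zip(amps_x, amps_y):
        U = expm(-1j * (ax * sx + ay * sy) * dt) @ U
    return U

def gate_fidelity(U, V):
    """|Tr(U^dag V)| / d with d = 2: insensitive to a global phase on either gate."""
    return abs(np.trace(U.conj().T @ V)) / 2

# A constant x-drive with total rotation angle pi implements an X gate up to
# a global phase: exp(-i (pi/2) sx) = -i * X, so the fidelity with X is 1.
n_slices, dt = 10, 0.1
U = propagator([np.pi / 2] * n_slices, [0.0] * n_slices, dt)
fid = gate_fidelity(U, sx)
```

In a learned compiler, `amps_x`/`amps_y` would be network outputs and `1 - fid` the training loss, backpropagated through the slice product.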
This paper investigates the interplay between the properties of quantum states on the Hilbert space \(\ell_2(\kappa)\) and the set-theoretic nature of the cardinal $\kappa$. We focus on the existence of singular $\sigma$-additive states -- functionals whose induced measures are $\sigma$-additive yet vanish on singletons. While the existence of such states is known to be equivalent to the Ulam measurability of $\kappa$, their structural and dynamical properties remain largely unexplored. We prove that any $\sigma$-additive state on the diagonal algebra is representable as a Pettis integral over a singular $\sigma$-additive measure, extending the classical representation theory to the non-normal sector. Furthermore, we construct a class of quantum channels using $\sigma$-complete ultrafilters that map normal states to singular $\sigma$-additive states, effectively ``archiving'' information into the singular part of the state space.
Markovian transport is often described by a master equation for the system state. The thermodynamic information measured in transport experiments, however, is carried by reservoir-resolved transfer records, such as particle currents, heat currents, entropy production, and current noise. We identify a thermodynamic incompleteness of state dynamics: a Markovian state generator can fix the occupation probabilities, stationary response, and relaxation without specifying how the underlying transitions are assigned to reservoirs and energy filters. We study a multi-terminal Coulomb-blockaded quantum dot coupled to energy-filtered reservoirs, for which different assignments of reservoir channels can generate the same state master equation. These assignments give identical occupation dynamics, stationary state, and linear response of the dot, but different heat currents, entropy production, and current noise. We formulate a thermodynamic completeness criterion: a transport observable can be reconstructed from state dynamics only when it is invariant under all changes of reservoir-channel assignments that leave the state generator unchanged. The criterion gives a practical diagnostic for Markovian transport models and a measurable prediction: state tomography can be insufficient to predict heat-noise and cross-correlation measurements, even when the full Markovian state dynamics is known. The analysis identifies a concrete limitation of state-only Markovian thermodynamics and shows which additional transport records must be specified to make thermodynamic predictions experimentally complete.
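The incompleteness described above can be illustrated with a toy two-state dot coupled to two reservoirs. All rates below are made up: two different assignments of the same total in/out rates to reservoirs yield the identical state generator — hence the same occupations and relaxation — but different reservoir-resolved particle currents.

```python
import numpy as np

def stationary(win, wout):
    """Stationary occupations of a two-state dot with total rates win, wout."""
    p1 = win / (win + wout)
    return 1 - p1, p1

def currents(split_in, split_out, p0, p1):
    """Particle current into the dot from each reservoir for a given assignment."""
    return [gi * p0 - go * p1 for gi, go in zip(split_in, split_out)]

win, wout = 1.0, 2.0                 # total rates fix the state generator
p0, p1 = stationary(win, wout)

# assignment A: reservoir 1 does most of the injection
IA = currents([0.9, 0.1], [0.5, 1.5], p0, p1)
# assignment B: same totals (0.9+0.1 = 0.4+0.6, 0.5+1.5 = 1.8+0.2), different split
IB = currents([0.4, 0.6], [1.8, 0.2], p0, p1)
```

Both assignments give the same occupations `p0, p1` and the same (vanishing) total stationary current, yet `IA` and `IB` differ reservoir by reservoir — the transfer record is not determined by the state dynamics alone.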
Dynamic quantum circuits integrate mid-circuit measurements and feed-forward operations to enable real-time classical processing and conditional quantum logic. These capabilities are central to key quantum protocols such as quantum error correction, and have recently demonstrated significant potential for reducing quantum resources, including circuit depth and gate count, across a range of applications. However, executing dynamic circuits on real quantum hardware introduces a critical trade-off: while resource requirements decrease, circuit fidelity degrades due to the high error rates of mid-circuit measurements, as well as the decoherence errors accumulated during the extended idle periods introduced by both mid-circuit measurements and feed-forward operations. In this paper, we systematically investigate the impact of standard error mitigation techniques on dynamic circuit applications pertaining to Hamiltonian simulation and ground state estimation of physically relevant systems such as the Heisenberg model. We explore dynamical decoupling (DD) as a strategy to suppress decoherence and crosstalk errors during the idle windows introduced by mid-circuit measurements and feed-forward delays, and also examine error mitigation via zero-noise extrapolation (ZNE). Through experiments conducted on IBM quantum hardware, we benchmark effective combinations of these strategies that maximize the practical benefits of dynamic quantum circuits in these applications. We demonstrate that a combination of DD and ZNE is effective in mitigating the errors introduced during mid-circuit measurements and feed-forward operations, as well as the errors arising from faulty measurements. This approach yields an energy gap improvement of at least 60% in ground state estimation and reduces the observed error of time-evolved states by up to 99% for the Ising model and up to 20% for the Heisenberg model.
Photon loss and dephasing rapidly degrade the sensitivity of quantum sensors, yet systematic methods for designing error-correcting codes whose geometry is simultaneously adapted to the sensing task and the noise channel do not exist. Here we establish that orbital-angular-momentum (OAM) encoding and Gottesman-Kitaev-Preskill (GKP) lattice geometry are structurally coupled: an OAM mode of topological charge $\ell$ induces a phase-space rotation $\theta_\ell=\ell\pi/\ell_{\max}$, corresponding to a family of twisted GKP stabilizer lattices. Using an end-to-end differentiable Strawberry Fields--TensorFlow circuit, we jointly optimise $\ell$, the lattice aspect ratio $r$, and the finite-energy envelope $\epsilon$ to maximise quantum Fisher information subject to $P_{\rm err}\leq10^{-3}$. The optimum occurs at the fractional charge $\ell=1.5$ ($\theta=67.5^\circ$), implementable with a half-integer spiral phase plate, which reduces $P_{\rm err}$ by $23.9\times$ relative to the square-lattice baseline while leaving $\mathcal{F}_Q$ unchanged to within $0.2\%$. This surpasses the best integer value ($\ell=2$, $15.7\times$) and arises from an exact $180^\circ$ periodicity of the $P_{\rm err}(\theta)$ landscape, confirmed analytically and numerically. We derive a transcendental balance equation for the optimal angle $\theta^*(\eta,\gamma,r)$ and prove that it decreases with both $\gamma$ and $\eta$. A Shannon-inspired metrological capacity $\mathcal{C}=\mathcal{F}_Q\cdot(-\ln P_{\rm err})$, maximised at $\ell=1.5$ with a $41\%$ gain over the square lattice, quantifies the joint sensitivity--fault-tolerance resource. These results establish a geometric design principle for noise-adaptive quantum sensors and a fully open-source differentiable template extensible to other bosonic code families.
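The reported optimum above can be sanity-checked directly from the stated mapping $\theta_\ell=\ell\pi/\ell_{\max}$: the quoted $\theta=67.5^\circ$ at $\ell=1.5$ fixes $\ell_{\max}=4$ for that configuration. A minimal sketch in plain Python (no Strawberry Fields dependency; `twist_angle_deg` is an illustrative helper, not from the paper's code):

```python
import math

def twist_angle_deg(ell, ell_max=4):
    """Phase-space rotation theta_ell = ell * pi / ell_max, in degrees."""
    return math.degrees(ell * math.pi / ell_max)

# Fractional charge ell = 1.5 reproduces the quoted optimum of 67.5 degrees.
print(twist_angle_deg(1.5))  # 67.5
# The best integer value ell = 2 corresponds to a 90-degree (square-symmetric) twist.
print(twist_angle_deg(2.0))  # 90.0
```

With $\ell_{\max}=4$, the fractional charge sits exactly halfway between the integer lattice rotations, consistent with the $180^\circ$ periodicity of the $P_{\rm err}(\theta)$ landscape described in the abstract.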
We report a measurement of the radiative lifetime of the $^2F_{7/2}$ level of $^{171}$Yb$^+$ that is coupled to the $^2S_{1/2}$ ground state via an electric octupole transition. The radiative lifetime is determined to be $9.96(50)\times 10^7$~s, corresponding to 3.16(16) years. The result reduces the relative uncertainty in this exceptionally long excited state lifetime by one order of magnitude with respect to previous experimental estimates. Our method is based on the coherent excitation of the corresponding transition and avoids limitations through competing decay processes. The explicit dependence on the laser intensity is eliminated by simultaneously measuring the resonant Rabi frequency and the induced quadratic Stark shift. Combining the result with information on the dynamic differential polarizability permits a calculation of the transition matrix element to infer the radiative lifetime.
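As a quick unit check on the two quoted values, converting the measured lifetime from seconds to years (using a Julian year of 365.25 days) recovers the stated 3.16 years:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year = 31,557,600 s

tau_s = 9.96e7                 # measured radiative lifetime in seconds
tau_yr = tau_s / SECONDS_PER_YEAR
print(round(tau_yr, 2))        # 3.16
```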
This paper concerns SIC-POVMs and their relationship to class field theory. SIC-POVMs are generalized quantum measurements (POVMs) described by $d^2$ equiangular complex lines through the origin in $\mathbb{C}^d$. Weyl--Heisenberg SICs are those SIC-POVMs described by the orbit of a single vector under a finite Weyl--Heisenberg group ${\rm WH}(d)$. We relate known data on the structure and classification of Weyl--Heisenberg SICs in low dimensions to arithmetic data attached to certain orders of real quadratic fields. For $4 \le d \le 90$, we show the number of known geometric equivalence classes of Weyl--Heisenberg SICs in dimension $d$ equals the cardinality of the ideal class monoid of the real quadratic order $\mathcal{O}_{\Delta_d}$ of discriminant $\Delta_d=(d+1)(d-3)$; we conjecture the equality extends to all $d \ge 4$. We prove that this conjecture implies the existence of more than one geometric equivalence class of Weyl--Heisenberg SICs for $d > 22$. We conjecture Galois multiplets of SICs are in one-to-one correspondence with the over-orders $\mathcal{O}'$ of $\mathcal{O}_{\Delta_d}$ in such a way that the number of classes in the multiplet equals the ring class number of $\mathcal{O}'$. We test that conjecture against known data on exact SICs in low dimensions. We refine the class field hypothesis of Appleby, Flammia, McConnell, and Yard (arXiv:1604.06098) to predict the exact class field over $\mathbb{Q}(\sqrt{\Delta_d})$ generated by the ratios of vector entries for the equiangular lines defining a Weyl--Heisenberg SIC. The refined conjectures use a recently developed class field theory for orders of number fields (arXiv:2212.09177). The refined class fields assigned to over-orders $\mathcal{O}'$ have a natural partial order under inclusion; the inclusions of these fields fail to be strict in some cases. We characterize such cases and give a table of them for $d < 500$.
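The discriminants $\Delta_d=(d+1)(d-3)$ above are genuine quadratic discriminants: since $(d+1)(d-3)=(d-1)^2-4$, we have $\Delta_d \equiv 0$ or $1 \pmod 4$ for every $d$. A short check over the range $4 \le d \le 90$ studied in the paper (plain Python; illustrative only):

```python
def delta(d):
    """Discriminant Delta_d = (d+1)*(d-3) attached to dimension d."""
    return (d + 1) * (d - 3)

# (d+1)(d-3) = (d-1)^2 - 4, so Delta_d mod 4 equals (d-1)^2 mod 4, i.e. 0 or 1:
# every Delta_d is a valid discriminant of a real quadratic order.
assert all(delta(d) % 4 in (0, 1) for d in range(4, 91))

print(delta(4), delta(5), delta(19))  # 5 12 320
```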
A new scheme for detecting wave-like dark matter (DM) using Rydberg atoms is proposed. Recent advances in trapping and manipulating Rydberg atoms make it possible to use Rydberg atoms trapped in optical tweezer arrays for DM detection. We propose to prepare a large ensemble of Rydberg atoms and to observe the excitations between Rydberg states induced by the DM-sourced effective electric field. A scan over the DM mass is enabled by exploiting the Zeeman and diamagnetic shifts of energy levels under an applied external magnetic field. Taking dark-photon DM as an example, we demonstrate that our proposed experiment can reach sufficient sensitivity to probe previously unexplored regions of the parameter space of dark-photon coupling strengths and masses.
The information-geometric origin of fidelity susceptibility and its utility as a universal probe of quantum criticality in many-body settings have been widely discussed. Here we explore the metric response of quantum relative entropy (QRE), by tracing out all but $n$ adjacent sites from the ground state of spin chains of finite length $N$, as a parameter of the corresponding Hamiltonian is varied. The diagonal component of this metric defines a susceptibility of the QRE that diverges at quantum critical points (QCPs) in the thermodynamic limit. We study two spin-$1/2$ models as examples, namely the integrable transverse field Ising model (TFIM) and a non-integrable Ising chain with three-spin interactions. We demonstrate distinct scaling behaviors for the peak of the QRE susceptibility as a function of $N$: namely a square logarithmic divergence in the TFIM and a power-law divergence in the non-integrable chain. This susceptibility encodes the uncertainty of entanglement Hamiltonian gradients and is also directly connected to other information measures such as Petz-Rényi entropies. We further show that this susceptibility diverges even at finite $N$ if the subsystem size $n$ exceeds a certain value when the Hamiltonian is tuned to its classical limits, owing to the reduced density matrices (RDMs) having finite rank; this is unlike the divergence associated with the QCPs, which requires $N \rightarrow \infty$.
Building on the duality between Krylov complexity and geodesic length in Jackiw-Teitelboim and sine-dilaton gravity, we develop a precise holographic dictionary for quantities in the Krylov subspace of the double-scaled Sachdev-Ye-Kitaev model (DSSYK). First, we demonstrate that the growth rate of Krylov state complexity corresponds to the wormhole velocity, and show that its expectation value in coherent states serves as a boundary diagnostic of firewall-like structures via bulk reconstruction. We also delineate an alternative bulk description in terms of the proper momentum of an infalling particle at early times, establishing a threefold duality between the Krylov complexity growth rate, wormhole velocity, and proper momentum, with clear regimes of validity. Beyond the first moments, we argue that higher-order Krylov complexities capture connected bulk contributions encoded by replica wormholes, while the logarithmic variant probes the replica saddle structure. Finally, within a third-quantized setting incorporating baby universes, we show that the Krylov entropy equals the von Neumann entropy of the parent-geometry density matrix obtained after tracing out baby universes, thereby quantifying information flow into the baby universe sector. Together, these results elevate Krylov-space observables to sharp probes of bulk dynamics and topology in ensemble-averaged 2D gravity.
Non-Abelian topological charges (NATCs), characterized by their noncommutative algebra, offer a framework for describing multigap topological phases beyond conventional Abelian invariants. While higher-order topological phases (HOTPs) host boundary states at corners or hinges, their characterization has largely relied on Abelian invariants such as winding and Chern numbers. Here, we propose a coupled-wire scheme of constructing non-Abelian HOTPs and analyze a non-Abelian second-order topological insulator as its minimal model. The resulting Hamiltonian supports hybridized corner modes, protected by parity-time-reversal plus sublattice symmetries and described by a topological vector that unites a non-Abelian quaternion charge with an Abelian winding number. Corner states emerge only when both invariants are nontrivial, whereas weak topological edge states of non-Abelian origins arise when the quaternion charge is nontrivial, enriching the bulk-edge-corner correspondence. The system further exhibits both non-Abelian and Abelian topological phase transitions, providing a unified platform that bridges these two distinct topological classes. Our work extends the understanding of HOTPs into non-Abelian regimes and suggests feasible experimental realizations in synthetic quantum systems, such as photonic or acoustic metamaterials.
The discovery of critical points that can host quantized nonlocal order parameters and degenerate edge modes relocates the study of symmetry-protected topological phases (SPTs) to gapless regions. In this letter, we reveal gapless SPTs (gSPTs) in systems tuned out of equilibrium by periodic driving and non-Hermitian couplings. Focusing on one-dimensional models with sublattice symmetry, we introduce winding numbers by applying Cauchy's argument principle to the generalized Brillouin zone (GBZ), yielding unified topological characterizations and bulk-edge correspondence in both gapped phases and at gapless critical points. The theory is demonstrated in a broad class of Floquet bipartite lattices, unveiling unique topological criticality of non-Hermitian Floquet origin. Our findings identify gSPTs in driven open systems and uncover robust topological edge modes at phase transitions beyond equilibrium.
Topological transitions in non-Hermitian systems are generally boundary sensitive: a point-gap winding transition under periodic boundary condition (PBC) and a non-Bloch bulk real-line-gap transition under open boundary condition (OBC) at $\mathrm{Re}(E)=0$ are governed by different spectra and therefore need not coincide. Here we show, for a class of chiral non-Hermitian Su--Schrieffer--Heeger (SSH)-type lattices, that these two criticalities can be locked by an exceptional-point-constrained (EP-constrained) parameter evolution. The key requirement is not the occurrence of isolated exceptional points, but the persistence of a zero-energy Bloch degeneracy along the entire sweep, which is generically exceptional in the non-Hermitian regime. In an analytically tractable limit of an extended non-Hermitian SSH chain, the EP-constrained manifolds and both transition boundaries are obtained in closed form, making the locking explicit. Away from this limit, numerical generalized-Brillouin-zone (GBZ) calculations confirm the correspondence for representative constrained sweeps, whereas unconstrained paths show that isolated exceptional points or Hermitian degeneracies do not enforce locking. We further verify the mechanism in a spinful four-band extension with branch-resolved GBZs, including strongly branch-imbalanced regimes. These results establish a path-dependent diagnostic principle: along EP-constrained sweeps in this SSH-type class, changes in PBC point-gap winding can indicate OBC non-Bloch bulk real-line-gap transitions and the corresponding changes in zero-energy boundary modes.
Establishing the fusion rules of anyonic quasiparticles in fractional quantum Hall fluids is essential for understanding their underlying topological order. Building on the conjecture that key topological properties are encoded in the "DNA" of candidate many-body wave functions - that is, the pattern of dominant orbital occupations restricted to a finite number of lowest Landau levels - we propose a combinatorial framework that derives these fusion rules directly from microscopic data. By extending Schrieffer's counting argument and introducing classes of topological excitations, our framework provides a unified route to the fusion rules for both Abelian and non-Abelian excitations. This approach elucidates the emergence of topological features from first principles in both fermionic and bosonic systems.
Quantum simulation offers a promising framework for quantum field theory calculations. Obtaining reliable results, however, requires careful characterization of systematic uncertainties. One important source is the boson truncation error, which arises from representing infinite-dimensional local Hilbert spaces with finite-dimensional ones. Previous studies have examined this problem from several perspectives. In particular, Jordan, Lee, and Preskill (arXiv:1111.3633) derived an energy-based bound applicable to generic low-energy states across a broad class of field theories. However, this approach often yields overly conservative bounds, especially at large volumes. In this work, we introduce a new methodology that significantly tightens the energy-based boson truncation bound through two complementary advances: an improved analytic derivation and a Monte Carlo-based numerical procedure. We demonstrate the method in (1+1)-dimensional scalar field theory and (2+1)-dimensional U(1) gauge theory in the dual formalism. Our approach substantially mitigates the volume dependence of the required truncation cutoff, achieving reductions nearly proportional to the volume in some cases and to the square root of the volume in others.
We apply the complex scaling method to black-hole perturbations in four-dimensional Schwarzschild--de~Sitter (dS) spacetimes. The method converts the outgoing-wave boundary-value problem into a non-Hermitian spectral problem and enables quasinormal-mode poles and the rotated continuum to be treated in a common framework. We focus in particular on the continuum level density, which characterizes the continuum response beyond isolated quasinormal-mode frequencies. Using Regge--Wheeler-type perturbation equations for scalar, electromagnetic, and gravitational fields, we investigate how a nonzero cosmological constant modifies the pole and continuum sectors. We also discuss a possible extension to string-inspired coupled-channel systems, and illustrate that higher-dimensional dS black holes can be treated within the same framework, at least in tensor- and vector-type sectors. Our results indicate that complex scaling offers a useful spectral framework for analyzing both quasinormal modes and continuum response in black-hole physics.
Understanding heat transport in low-dimensional and nano-architectured materials remains a central challenge in nonequilibrium statistical physics due to persistent deviations from Fourier's law. These deviations are driven by anharmonicity, reduced dimensionality, and the emergence of long-lived coherent excitations. In this work, we develop a unified theoretical framework for two-dimensional thermal metamaterials that combines nonlinear lattice dynamics, soliton-based effective field theories, and geometrically organized defect networks as guiding structures for energy flow. We introduce minimal discrete and continuum-inspired models suitable for controlled benchmarking of thermal transport in patterned two-dimensional architectures and identify a two-channel transport mechanism in which coherent nonlinear excitations coexist with incoherent hydrodynamic modes. The interplay between these channels is shown to be highly sensitive to geometry, nonlinearity, and temperature, offering new avenues for thermal management. We establish rigorous connections between microscopic nonlinearity, geometry-driven channeling of heat in two dimensions, and quantum-enabled exploration of both high-occupation classical regimes and genuinely quantum regimes beyond the reach of standard simulation strategies. The theoretical predictions are corroborated by recent experimental and computational results in Stone-Wales-defected PdSSe monolayers and silicon phononic crystal nanostructures, which exhibit ultra-low thermal conductivity coexisting with high carrier mobility and strong anisotropy -- direct manifestations of the two-channel mechanism. This synthesis provides actionable guidance for the design of engineered heat-spreading architectures and positions quantum simulation as a transformative tool for advancing the theory of nonlinear heat transport.