Blog
Kronecker states: a powerful source of multipartite maximally entangled states in quantum information
In quantum information theory, maximally entangled states, specifically locally maximally entangled (LME) states, are essential for quantum protocols. While many focus on bipartite entanglement, applications such as quantum error correction and multiparty secret sharing rely on multipartite entanglement. These LME states naturally...
Bidirectional Mamba for Single-Cell Data: Efficient Context Learning with Biological Fidelity
Single-cell RNA sequencing (scRNA-seq) enables high-resolution analysis of cellular heterogeneity, but its complexity, marked by high dimensionality, sparsity, and batch effects, poses major computational challenges. Transformer-based models have made significant advances in this domain but are often limited by...
Fundamental Limits Of Quickest Change-point Detection With Continuous-Variable Quantum States
We generalize the quantum CUSUM (QUSUM) algorithm for quickest change-point detection, analyzed in finite dimensions by Fanizza, Hirche, and Calsamiglia (Phys. Rev. Lett. 131, 020602, 2023), to infinite-dimensional quantum systems. Our analysis relies on a novel generalization of a result by Hayashi...
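For intuition, the classical CUSUM rule that QUSUM generalizes accumulates log-likelihood ratios and stops once a threshold is crossed. The sketch below shows only that classical recursion; the names `logpdf_pre`, `logpdf_post`, and `threshold` are illustrative assumptions, not the paper's quantum procedure.

```python
def classical_cusum(samples, logpdf_pre, logpdf_post, threshold):
    """Classical CUSUM stopping rule, shown only as intuition for QUSUM.

    The paper's setting replaces classical log-likelihood ratios with
    measurements on infinite-dimensional quantum states; this sketch covers
    the classical analogue under assumed pre- and post-change densities.
    """
    w = 0.0
    for t, x in enumerate(samples, start=1):
        # Accumulate the log-likelihood ratio, clipped at zero (CUSUM recursion).
        w = max(0.0, w + logpdf_post(x) - logpdf_pre(x))
        if w >= threshold:
            return t  # first time the statistic crosses the threshold
    return None  # no change declared within the observed samples
```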
On Euler’s magic matrices of sizes $3$ and $8$
A proper Euler’s magic matrix is an integer $n\times n$ matrix $M\in\mathbb Z^{n\times n}$ such that $M\cdot M^t=\gamma\cdot I$ for some nonzero constant $\gamma$, the sum of the squares of the entries along each of the two main diagonals equals $\gamma$, and...
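A minimal numerical check of the conditions quoted above can make the definition concrete; `is_euler_magic` below is an illustrative helper, not code from the paper.

```python
import numpy as np

def is_euler_magic(M):
    """Check the stated conditions: M M^T = gamma * I for some nonzero gamma,
    and the squared entries along both main diagonals each sum to gamma."""
    M = np.asarray(M, dtype=int)
    n = M.shape[0]
    G = M @ M.T
    gamma = int(G[0, 0])
    if gamma == 0 or not np.array_equal(G, gamma * np.identity(n, dtype=int)):
        return False
    main_diag = sum(int(M[i, i]) ** 2 for i in range(n))
    anti_diag = sum(int(M[i, n - 1 - i]) ** 2 for i in range(n))
    return main_diag == gamma and anti_diag == gamma
```

For example, `is_euler_magic(np.identity(3, dtype=int))` returns `False`: the matrix is orthogonal with $\gamma = 1$, but its main-diagonal squares sum to 3.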
Learning Energy-Based Generative Models via Potential Flow: A Variational Principle Approach to Probability Density Homotopy Matching
Energy-based models (EBMs) are a powerful class of probabilistic generative models due to their flexibility and interpretability. However, relationships between potential flows and explicit EBMs remain underexplored, while contrastive divergence training via implicit Markov chain Monte Carlo (MCMC) sampling is often unstable...
Gradient-Optimized Fuzzy Classifier: A Benchmark Study Against State-of-the-Art Models
This paper presents a performance benchmarking study of a Gradient-Optimized Fuzzy Inference System (GF) classifier against several state-of-the-art machine learning models, including Random Forest, XGBoost, Logistic Regression, Support Vector Machines, and Neural Networks. The evaluation was conducted across five datasets from the...
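The comparison models listed are standard; a minimal cross-validation harness along these lines might look as follows. Only scikit-learn baselines and a placeholder dataset are shown (the GF classifier and XGBoost are omitted), so this is a sketch of the evaluation setup rather than the paper's actual protocol.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Placeholder dataset standing in for the five benchmark datasets.
X, y = load_breast_cancer(return_X_y=True)

baselines = {
    "RandomForest": RandomForestClassifier(),
    "LogisticRegression": LogisticRegression(max_iter=5000),
    "SVM": SVC(),
    "NeuralNet": MLPClassifier(max_iter=2000),
}

# 5-fold cross-validated accuracy for each baseline model.
for name, clf in baselines.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```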
Term Coding for Extremal Combinatorics: Dispersion and Complexity Dichotomies
We introduce \emph{Term Coding}, a novel framework for analysing extremal problems in discrete mathematics by encoding them as finite systems of \emph{term equations} (and, optionally, \emph{non-equality constraints}). In its basic form, all variables range over a single domain, and we seek an...
Boosting Classifier Performance with Opposition-Based Data Transformation
In this paper, we introduce a novel data transformation framework based on Opposition-Based Learning (OBL) to boost the performance of traditional classification algorithms. Originally developed to accelerate convergence in optimization tasks, OBL is leveraged here to generate synthetic opposite samples that replace...
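In the standard OBL construction, the opposite of a value $x$ in an interval $[a, b]$ is $a + b - x$. The sketch below applies that per-feature mapping to a data matrix; how the paper replaces or mixes samples with their opposites is not shown here and is left as an assumption.

```python
import numpy as np

def opposite_samples(X):
    """Map each feature value x to a + b - x, where [a, b] is that feature's
    observed range: the standard opposition-based learning construction."""
    X = np.asarray(X, dtype=float)
    a = X.min(axis=0)
    b = X.max(axis=0)
    return a + b - X
```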
A Geometric Approach to Problems in Optimization and Data Science
We give new results for problems in computational and statistical machine learning using tools from high-dimensional geometry and probability. We break up our treatment into two parts. In Part I, we focus on computational considerations in optimization. Specifically, we give new algorithms...
Shape Alignment via Allen-Cahn Nonlinear-Convection
This paper demonstrates the impact of a phase field method on shape registration for aligning shapes of possibly different topology. It yields new insights into constructing discrepancy measures between shapes regardless of topology, which would have applications in fields of...