Neural operators are efficient surrogate models for solving partial
differential equations (PDEs), but their key components face challenges: (1)
attention mechanisms improve accuracy but are computationally inefficient on
large-scale meshes, and (2) spectral convolutions rely on the Fast Fourier
Transform (FFT), which requires regular grids and assumes flat geometry, so
accuracy degrades on irregular domains. To tackle these problems, we
regard the matrix-vector operations of standard attention on Euclidean vectors
as bilinear forms and linear operators on vector spaces, and generalize the
attention mechanism to function spaces. This generalized attention is fully
equivalent to standard attention but cannot be computed directly, because
function spaces are infinite-dimensional. To
address this, inspired by model reduction techniques, we propose a Subspace
Parameterized Attention (SUPRA) neural operator, which approximates the
attention mechanism within a finite-dimensional subspace. To construct a
subspace on irregular domains for SUPRA, we propose using the Laplacian
eigenfunctions, which naturally adapt to the domain's geometry and guarantee
optimal approximation of smooth functions. Experiments show that the SUPRA
neural operator reduces error rates by up to 33% on various PDE datasets while
maintaining state-of-the-art computational efficiency.
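To make the subspace idea concrete, below is a minimal NumPy sketch, not the paper's actual SUPRA implementation: attention is evaluated on coefficients in a basis of graph-Laplacian eigenvectors (a discrete stand-in for the Laplacian eigenfunctions) rather than on mesh points, so its cost scales with the subspace dimension k instead of the mesh size. All names (laplacian_eigenbasis, subspace_attention) and sizes are hypothetical choices for this illustration.

```python
import numpy as np


def laplacian_eigenbasis(adjacency: np.ndarray, k: int) -> np.ndarray:
    """First k eigenvectors of the combinatorial graph Laplacian, used as a
    geometry-aware basis for the finite-dimensional subspace."""
    degrees = adjacency.sum(axis=1)
    laplacian = np.diag(degrees) - adjacency
    _, eigvecs = np.linalg.eigh(laplacian)      # eigenvalues in ascending order
    return eigvecs[:, :k]                       # the k smoothest modes on the mesh


def subspace_attention(u, basis, wq, wk, wv):
    """Attention computed on subspace coefficients instead of mesh points.

    u:        (n_nodes, d) input functions sampled on the mesh (d channels)
    basis:    (n_nodes, k) Laplacian eigenvectors
    wq/wk/wv: (d, d) linear maps (random here; learned in a real model)
    """
    coeffs = basis.T @ u                                    # project: (k, d)
    q, kk, v = coeffs @ wq, coeffs @ wk, coeffs @ wv
    scores = q @ kk.T / np.sqrt(q.shape[-1])                # (k, k), not (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)          # softmax over modes
    return basis @ (weights @ v)                            # lift back: (n_nodes, d)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_nodes, d, k = 500, 8, 32
    # Toy "irregular domain": a random sparse symmetric adjacency matrix.
    upper = np.triu(rng.random((n_nodes, n_nodes)) < 0.02, 1)
    adjacency = (upper | upper.T).astype(float)
    basis = laplacian_eigenbasis(adjacency, k)
    u = rng.standard_normal((n_nodes, d))
    out = subspace_attention(u, basis, *(rng.standard_normal((d, d)) for _ in range(3)))
    print(out.shape)  # (500, 8); attention cost scales with k, not n_nodes
```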