Graph Neural Networks (GNNs) have demonstrated strong performance across
various graph-based tasks by effectively capturing relational information
between nodes. These models rely on iterative message passing to propagate node
features, enabling nodes to aggregate information from their neighbors. Recent
research has significantly improved the message-passing mechanism, enhancing
GNN scalability on large-scale graphs. However, GNNs still face two main
challenges: over-smoothing, where excessive message passing results in
indistinguishable node representations, especially in deep networks
incorporating high-order neighbors; and scalability issues, as traditional
architectures suffer from high model complexity and increased inference time
due to redundant information aggregation. This paper proposes a novel framework
for large-scale graphs named ScaleGNN that simultaneously addresses both
challenges by adaptively fusing multi-level graph features. We first construct
neighbor matrices for each order, learning their relative importance through
trainable weights in an adaptive high-order feature fusion module. This
allows the model to selectively emphasize informative high-order neighbors
while reducing unnecessary computational costs. Additionally, we introduce a
high-order redundant feature masking mechanism based on a Local Contribution
Score (LCS), which enables the model to retain only the most relevant neighbors
at each order, preventing redundant information propagation. Furthermore,
low-order enhanced feature aggregation adaptively integrates low-order and
high-order features based on task relevance, ensuring effective capture of both
local and global structural information without excessive complexity. Extensive
experiments on real-world datasets demonstrate that our approach consistently
outperforms state-of-the-art GNN models in both accuracy and computational
efficiency.
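
To make the three mechanisms concrete, the following is a minimal PyTorch sketch of one plausible reading of the abstract: per-order neighbor matrices built from powers of a normalized adjacency, trainable softmax weights that fuse the orders, a top-k per-node mask standing in for the Local Contribution Score, and a learned gate that balances low-order and high-order features. All names here (ScaleGNNSketch, lcs_topk_mask, the gating formula, the dense-adjacency setup) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the ScaleGNN ideas described in the abstract.
# Dense adjacency is used for clarity; a real large-scale version would be sparse.
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalized_adj_powers(adj: torch.Tensor, max_order: int):
    """Return [A^1, ..., A^K] with symmetric normalization D^-1/2 (A+I) D^-1/2."""
    n = adj.size(0)
    a_hat = adj + torch.eye(n, device=adj.device)
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
    powers, cur = [], a_norm
    for _ in range(max_order):
        powers.append(cur)
        cur = cur @ a_norm
    return powers


def lcs_topk_mask(a_k: torch.Tensor, keep: int) -> torch.Tensor:
    """Keep only the `keep` highest-weight neighbors per node at this order
    (a stand-in for the paper's Local Contribution Score masking)."""
    keep = min(keep, a_k.size(1))
    _, idx = a_k.topk(keep, dim=1)
    mask = torch.zeros_like(a_k).scatter_(1, idx, 1.0)
    return a_k * mask


class ScaleGNNSketch(nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim, max_order=4, keep=16):
        super().__init__()
        self.max_order = max_order
        self.keep = keep
        # Trainable per-order weights for adaptive high-order feature fusion.
        self.order_logits = nn.Parameter(torch.zeros(max_order))
        # Gate for low-order enhanced feature aggregation.
        self.gate = nn.Linear(2 * hid_dim, 1)
        self.proj = nn.Linear(in_dim, hid_dim)
        self.out = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        h = self.proj(x)
        powers = normalized_adj_powers(adj, self.max_order)
        low = powers[0] @ h  # 1-hop (low-order) aggregation
        # Fuse masked high-order neighbor matrices with learned weights.
        w = F.softmax(self.order_logits, dim=0)
        high = sum(w[k] * (lcs_topk_mask(a_k, self.keep) @ h)
                   for k, a_k in enumerate(powers))
        g = torch.sigmoid(self.gate(torch.cat([low, high], dim=-1)))
        return self.out(g * low + (1 - g) * high)
```

In this reading, the LCS mask sparsifies each order's neighbor matrix before aggregation, which is where the claimed reduction in redundant propagation and inference cost would come from; the softmax over orders and the sigmoid gate are the trainable components that adaptively trade off local against global structure.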