Deep Neural Networks (DNNs) and Large Language Models (LLMs) have
revolutionized artificial intelligence, yet their deployment faces significant
memory and computational challenges, especially in resource-constrained
environments. Quantization techniques have mitigated some of these issues by
reducing data precision, primarily focusing on General Matrix Multiplication
(GEMM). This study introduces a novel sparsity paradigm, transitive sparsity,
which leverages the reuse of previously computed results to substantially
minimize computational overhead in GEMM operations. By representing transitive
relations using a directed acyclic graph, we develop an efficient strategy for
determining optimal execution orders, thereby overcoming inherent challenges
related to execution dependencies and parallelism. Building on this foundation,
we present the Transitive Array, a multiplication-free accelerator designed to
exploit transitive sparsity in GEMM. Our architecture effectively balances
computational workloads across multiple parallel lanes, ensuring high
efficiency and optimal resource utilization. Comprehensive evaluations
demonstrate that the Transitive Array achieves speedups of approximately
7.46$\times$ and 3.97$\times$ and energy reductions of 2.31$\times$ and
1.65$\times$ over the state-of-the-art accelerators Olive and BitVert,
respectively, while maintaining comparable model accuracy on LLaMA models.
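The core idea of transitive sparsity can be illustrated with a toy sketch: if one weight row differs from an already-processed row in only a few bit positions, its dot product can be obtained by correcting the earlier result instead of recomputing it, with the reuse relation forming a DAG that fixes a valid execution order. The function below is a hypothetical minimal illustration (binary weights, a `parent` array given in topological order), not the paper's actual hardware scheme.

```python
# Toy sketch of transitive sparsity for a matrix-vector product:
# reuse a parent row's dot product and correct only differing bits.
# All names (transitive_gemv, parent) are illustrative assumptions.

def dot(x, w_bits):
    """Direct dot product of x with a binary weight row."""
    return sum(xi for xi, b in zip(x, w_bits) if b)

def transitive_gemv(x, rows, parent):
    # parent[j] = index of the row whose result row j reuses
    # (None for roots); rows are assumed listed in topological order.
    out = {}
    for j, bits in enumerate(rows):
        p = parent[j]
        if p is None:
            out[j] = dot(x, bits)           # compute a root from scratch
        else:
            acc = out[p]                    # reuse the parent's result
            for k, (b, pb) in enumerate(zip(bits, rows[p])):
                if b and not pb:
                    acc += x[k]             # bit newly set: add x[k]
                elif pb and not b:
                    acc -= x[k]             # bit cleared: subtract x[k]
            out[j] = acc
    return [out[j] for j in range(len(rows))]

x = [3, 1, 4, 1, 5]
rows = [[1, 0, 1, 0, 1],   # root: 3 + 4 + 5 = 12
        [1, 0, 1, 1, 1],   # one bit set vs. row 0: 12 + 1 = 13
        [1, 0, 1, 1, 0]]   # one bit cleared vs. row 1: 13 - 5 = 8
print(transitive_gemv(x, rows, [None, 0, 1]))  # -> [12, 13, 8]
```

Each non-root row costs only as many additions as its Hamming distance to its parent, which is why choosing a good execution order over the DAG matters.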
arXiv:2504.16339v1