As a fundamental challenge in visual computing, video super-resolution (VSR)
focuses on reconstructing high-definition video sequences from their degraded
low-resolution counterparts. While deep convolutional neural networks have
demonstrated state-of-the-art performance in spatial-temporal super-resolution
tasks, their computationally intensive nature poses significant deployment
challenges for resource-constrained edge devices, particularly in real-time
mobile video processing scenarios where power efficiency and latency
constraints coexist. In this work, we propose a Reparameterizable Architecture
for High-Fidelity Video Super-Resolution, named RepNet-VSR, for real-time 4x
video super-resolution. On the REDS validation set, the proposed model achieves
27.79 dB PSNR when upscaling 180p frames to 720p, running in 103 ms per 10
frames on a MediaTek Dimensity NPU. The challenge results demonstrate an
excellent balance between restoration quality and deployment efficiency, and
the proposed method outperforms the previous champion algorithm of the MAI
video super-resolution challenge.
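
The abstract does not detail the reparameterization scheme, but a minimal sketch of the general idea is given below, assuming a RepVGG-style block: parallel 3x3 conv, 1x1 conv, and identity branches used during training are folded into a single 3x3 convolution for inference. The names RepBlock and reparameterize() are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: RepBlock and reparameterize() are assumed names,
# not the authors' code. Demonstrates RepVGG-style structural
# reparameterization, a common basis for reparameterizable SR networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepBlock(nn.Module):
    """Training-time block: parallel 3x3 conv, 1x1 conv, and identity."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The multi-branch structure enriches optimization during training.
        return self.conv3(x) + self.conv1(x) + x

    def reparameterize(self) -> nn.Conv2d:
        """Fold all branches into one 3x3 conv with identical output."""
        c = self.conv3.in_channels
        fused = nn.Conv2d(c, c, kernel_size=3, padding=1)
        # Zero-pad the 1x1 kernel to 3x3 so the kernels can be summed.
        weight = self.conv3.weight.data + F.pad(self.conv1.weight.data, [1, 1, 1, 1])
        # The identity branch equals a 3x3 kernel with a 1 at the center of
        # each channel's own filter (a per-channel Dirac delta).
        identity = torch.zeros_like(weight)
        for ch in range(c):
            identity[ch, ch, 1, 1] = 1.0
        fused.weight.data = weight + identity
        fused.bias.data = self.conv3.bias.data + self.conv1.bias.data
        return fused

if __name__ == "__main__":
    block = RepBlock(8).eval()
    x = torch.randn(1, 8, 16, 16)
    # The fused conv reproduces the multi-branch output (up to floating-point
    # tolerance) while running as a single convolution at deployment time.
    assert torch.allclose(block(x), block.reparameterize()(x), atol=1e-5)

The appeal of this pattern for NPU deployment is that the deployed graph is a plain sequence of convolutions, which maps well to mobile accelerators, while training retains the accuracy benefits of the multi-branch topology.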