Process reward models (PRMs) have proven effective for test-time scaling of
Large Language Models (LLMs) on challenging reasoning tasks. However, reward
hacking issues with PRMs limit their successful application in reinforcement
fine-tuning. In this paper, we identify the main cause of PRM-induced reward
hacking: the canonical summation-form credit assignment in reinforcement
learning (RL), which defines the value as the cumulative sum of gamma-discounted
future rewards, easily induces LLMs to hack individual high-reward steps. To address this,
we propose PURE: Process sUpervised Reinforcement lEarning. The key innovation
of PURE is a min-form credit assignment that formulates the value function as
the minimum of future rewards. This method significantly alleviates reward
hacking by limiting the value function range and distributing advantages more
reasonably. Through extensive experiments on 3 base models, we show that
PRM-based approaches with min-form credit assignment achieve reasoning
performance comparable to verifiable-reward-based methods within only 30% of the
training steps, whereas the canonical sum-form credit assignment causes training
to collapse from the very beginning. Additionally, when we supplement PRM-based fine-tuning with
just 10% verifiable rewards, we further alleviate reward hacking and produce
the best fine-tuned model based on Qwen2.5-Math-7B in our experiments,
achieving 82.5% accuracy on AMC23 and 53.3% average accuracy across 5
benchmarks. Moreover, we summarize the observed reward hacking cases and
analyze the causes of training collapse. Code and models are available at
https://github.com/CJReinforce/PURE.
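
To make the contrast concrete, below is a minimal sketch (not the authors' implementation; the function names and the example per-step reward list are hypothetical) of how the two credit-assignment schemes turn a sequence of process rewards into per-step returns, using gamma = 1 for simplicity. It illustrates how a single weak step caps every preceding value under the min form, while the sum form can still assign large values to steps preceding inflated rewards.

```python
# Sketch of sum-form vs. min-form credit assignment over per-step PRM rewards.
# This is an illustrative assumption, not code from the PURE repository.
from typing import List


def sum_form_returns(rewards: List[float], gamma: float = 1.0) -> List[float]:
    """Canonical credit assignment: the return at step t is the reward at t
    plus the gamma-discounted sum of all subsequent process rewards."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns


def min_form_returns(rewards: List[float]) -> List[float]:
    """Min-form credit assignment: the value at step t is the minimum of the
    current and all future process rewards, so one high-reward step cannot
    dominate the trajectory's value."""
    returns = [0.0] * len(rewards)
    running = float("inf")
    for t in reversed(range(len(rewards))):
        running = min(rewards[t], running)
        returns[t] = running
    return returns


if __name__ == "__main__":
    # One weak step followed by inflated rewards: the sum form still assigns
    # large values to early steps, while the min form is capped by the weakest step.
    step_rewards = [0.9, 0.1, 0.95, 0.9]
    print(sum_form_returns(step_rewards))  # [2.85, 1.95, 1.85, 0.9]
    print(min_form_returns(step_rewards))  # [0.1, 0.1, 0.9, 0.9]
```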