Your Language Model is Its Own Critic: Reinforcement Learning with Value Estimation from Actor's Internal States
Abstract
arXiv:2605.07579v1 (Announce Type: cross)

Reinforcement learning with verifiable rewards (RLVR) for Large Reasoning Models hinges on baseline estimation for variance reduction, but existing approaches pay a heavy price: PPO requires a policy-model-scale critic, while GRPO needs multiple rol…
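As background for the baseline-estimation trade-off the abstract names, the following is a minimal sketch (not the paper's method; all function names are hypothetical) contrasting the two standard baselines: PPO subtracts a learned critic's value estimate per sample, while GRPO uses the mean reward over a group of rollouts for the same prompt as an empirical baseline.

```python
import statistics


def ppo_advantages(rewards, values):
    """PPO-style baseline: advantage = reward - critic's value estimate.

    Requires a learned value model (here stubbed as a list of
    per-sample predictions), which at LLM scale is itself a
    policy-sized network.
    """
    return [r - v for r, v in zip(rewards, values)]


def grpo_advantages(group_rewards):
    """GRPO-style baseline: subtract the group mean reward.

    Needs several rollouts per prompt so the mean is informative;
    advantages are normalized by the group's reward std.
    """
    mean = statistics.mean(group_rewards)
    std = statistics.pstdev(group_rewards) or 1.0  # avoid div-by-zero
    return [(r - mean) / std for r in group_rewards]
```

The sketch illustrates the cost the abstract points at: the first estimator pays in model parameters (a critic), the second in samples (multiple rollouts per prompt).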