arXiv cs.AI, by the Synapse Flow editorial team

OPSD Compresses What RLVR Teaches: A Post-RL Compaction Stage for Reasoning Models

Overview

arXiv:2605.06188v1 (announce type: new)

Abstract: On-Policy Self-Distillation (OPSD) has recently emerged as an alternative to Reinforcement Learning with Verifiable Rewards (RLVR), promising higher accuracy and shorter responses through token-level credit assignment from a self-teacher conditioned o…
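The abstract breaks off before specifying what the self-teacher is conditioned on, so the details of the method are not recoverable here. Purely as a minimal sketch of the general idea the abstract names, token-level credit assignment from a self-teacher, the following assumes a per-token KL loss between a trainable student and a frozen copy of the same model over on-policy responses; the function name, tensor shapes, and exact loss form are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def opsd_token_loss(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor,
                    mask: torch.Tensor) -> torch.Tensor:
    """Per-token self-distillation loss (hypothetical sketch).

    student_logits, teacher_logits: [batch, seq, vocab]
    mask: [batch, seq], 1.0 on response tokens to supervise.
    """
    # The "self-teacher" is a frozen copy of the model; detach so
    # gradients flow only into the student.
    teacher_logp = F.log_softmax(teacher_logits.detach(), dim=-1)
    student_logp = F.log_softmax(student_logits, dim=-1)
    # Token-level KL(teacher || student): each position receives its
    # own credit signal, unlike a single sequence-level reward in RLVR.
    kl = (teacher_logp.exp() * (teacher_logp - student_logp)).sum(dim=-1)
    return (kl * mask).sum() / mask.sum().clamp(min=1.0)
```

The per-position KL is what makes the credit assignment token-level: tokens where student and teacher already agree contribute little, so the gradient concentrates on the positions that actually differ.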

