Asymmetric On-Policy Distillation: Bridging Exploitation and Imitation at the Token Level
Abstract
arXiv:2605.06387v1 Announce Type: cross Abstract: On-policy distillation (OPD) trains a student on its own trajectories with token-level teacher feedback and often outperforms off-policy distillation and standard reinforcement learning. However, we find that its standard advantage-weighted policy g…
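The abstract is truncated, but its first sentence describes the core OPD setup: the student samples its own trajectory and receives token-level feedback from the teacher. A minimal sketch of that idea, assuming per-token reverse KL between the student and teacher next-token distributions as the feedback signal (the paper's exact objective, an advantage-weighted policy-gradient variant, is cut off here), might look like:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the vocabulary axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def opd_token_loss(student_logits, teacher_logits):
    """Token-level distillation signal along a student-sampled trajectory.

    student_logits, teacher_logits: arrays of shape (T, V) holding the
    next-token logits of each model at every position of the student's
    own rollout. Returns the mean per-token reverse KL
    D_KL(student || teacher) -- one illustrative choice of teacher
    feedback, not necessarily the paper's objective.
    """
    p_s = softmax(student_logits)  # (T, V) student distribution
    p_t = softmax(teacher_logits)  # (T, V) teacher distribution
    eps = 1e-12                    # avoid log(0)
    kl = (p_s * (np.log(p_s + eps) - np.log(p_t + eps))).sum(axis=-1)
    return kl.mean()
```

Because the trajectory is sampled from the student itself, this loss is evaluated exactly where the student actually goes at inference time, which is the usual argument for OPD over off-policy distillation on teacher-generated data.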