Confidence-Aware Alignment Makes Reasoning LLMs More Reliable
Abstract
arXiv:2605.07353v1
Large reasoning models often reach correct answers through flawed intermediate steps, creating a gap between final accuracy and reasoning reliability. Existing alignment strategies address this with external verifiers or massive sampling, limiting sca…