Optimizer-Model Consistency: Full Finetuning with the Same Optimizer as Pretraining Forgets Less
Abstract
arXiv:2605.06654v1 Announce Type: cross
Abstract: Optimizers play an important role in both the pretraining and finetuning stages of training large language models (LLMs). In this paper, we present the observation that full finetuning with the same optimizer as in pretraining achieves a better learning…
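To make the paper's setup concrete, below is a minimal sketch (not the authors' code) of the comparison the title describes: finetuning a pretrained model with the same optimizer family and hyperparameters used in pretraining versus switching to a different optimizer. The toy model, the assumption that pretraining used AdamW, and all hyperparameter values are placeholders for illustration only.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained LLM; in practice this would be
# loaded from a pretraining checkpoint.
model = nn.Sequential(nn.Embedding(1000, 64), nn.Linear(64, 1000))

# Assumption: pretraining used AdamW with these hyperparameters.
pretrain_optim_cfg = dict(lr=1e-4, betas=(0.9, 0.95), weight_decay=0.1)

# Optimizer-consistent finetuning: reuse the pretraining optimizer
# family and configuration (the setting the paper's title favors).
consistent_opt = torch.optim.AdamW(model.parameters(), **pretrain_optim_cfg)

# Optimizer-inconsistent baseline: a different optimizer for
# finetuning, e.g. SGD with momentum.
inconsistent_opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def finetune_step(optimizer, inputs, targets):
    """One finetuning step under a given optimizer choice."""
    optimizer.zero_grad()
    logits = model(inputs)
    loss = nn.functional.cross_entropy(logits, targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random token IDs as placeholder data.
inputs = torch.randint(0, 1000, (32,))
targets = torch.randint(0, 1000, (32,))
loss = finetune_step(consistent_opt, inputs, targets)
```

Under the paper's observation, one would measure forgetting by evaluating both finetuned models on held-out pretraining-distribution data and comparing the degradation between the two optimizer choices.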