FIT to Forget: Robust Continual Unlearning for Large Language Models
Abstract
arXiv:2601.21682v2 Announce Type: replace-cross Abstract: While large language models (LLMs) exhibit remarkable capabilities, they increasingly face demands to unlearn memorized privacy-sensitive, copyrighted, or harmful content. Existing unlearning methods primarily focus on single-shot sce…