SHRED: Retain-Set-Free Unlearning via Self-Distillation with Logit Demotion
Abstract
arXiv:2605.07482v1 (Announce Type: cross)

Machine unlearning for large language models (LLMs) aims to selectively remove memorized content, such as private data, copyrighted text, or hazardous knowledge, without costly full retraining. Most existing methods require a retain set of curated ex…
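The abstract is truncated, so the method's details are not available here; the title, however, names two ingredients: self-distillation (the model's own output distribution serves as the teacher, so no retain set is needed) and logit demotion (pushing down the logits of tokens to be forgotten before distilling). A minimal NumPy sketch of what a logit-demotion target could look like, under those assumptions; the function name `demoted_targets` and the penalty `delta` are hypothetical, not from the paper:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def demoted_targets(teacher_logits, forget_ids, delta=10.0):
    """Build self-distillation targets with demoted forget-token logits.

    teacher_logits: the frozen model's own logits (self-distillation teacher),
    forget_ids: vocabulary indices of tokens to suppress (hypothetical),
    delta: demotion penalty subtracted from those logits (assumed knob).
    The student would then be trained toward these targets (e.g. via KL),
    with no retain set required since the teacher is the model itself.
    """
    t = np.asarray(teacher_logits, dtype=float).copy()
    t[..., forget_ids] -= delta  # demote: push forget tokens down
    return softmax(t)
```

In this sketch the target distribution stays close to the teacher's everywhere except on the demoted tokens, whose probability mass is redistributed to the rest of the vocabulary.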