Safety Anchor: Defending Harmful Fine-tuning via Geometric Bottlenecks
Abstract
arXiv:2605.05995v1 Announce Type: cross
The safety alignment of Large Language Models (LLMs) remains vulnerable to Harmful Fine-tuning (HFT). While existing defenses impose constraints on parameters, gradients, or internal representations, we observe that they can be effectively circumven…