arXiv cs.AI by the Synapse Flow editorial team

Multilingual Safety Alignment via Self-Distillation

Abstract

arXiv:2605.02971v1 Announce Type: cross Abstract: Large language models (LLMs) exhibit severe multilingual safety misalignment: they possess strong safeguards in high-resource languages but remain highly vulnerable to jailbreak attacks in low-resource languages. Current safety alignment methods gen…
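The abstract is truncated before the method is described, but the title suggests distilling the model's own safe behavior across languages. The following toy sketch is purely illustrative and assumes one plausible reading: the model's refusals in a high-resource language (English) are reused as fine-tuning targets for prompts in a low-resource language where jailbreaks still succeed. All names (`toy_model`, `distill_safety_pairs`) and the language-pair data are hypothetical, not from the paper.

```python
def toy_model(prompt: str, lang: str) -> str:
    """Stand-in for an LLM: refuses harmful prompts only in English (hypothetical)."""
    harmful = "bomb" in prompt.lower()
    if harmful and lang == "en":
        return "I can't help with that."
    if harmful:
        return "[unsafe completion]"  # jailbreak succeeds in the low-resource language
    return "[helpful answer]"

def distill_safety_pairs(prompt_pairs, low_lang):
    """Collect (low-resource prompt, English refusal) fine-tuning pairs
    whenever the model is safe in English but unsafe in `low_lang`."""
    pairs = []
    for en_prompt, low_prompt in prompt_pairs:
        en_out = toy_model(en_prompt, "en")
        low_out = toy_model(low_prompt, low_lang)
        if en_out != low_out and "can't" in en_out:
            # teacher signal = the model's own English refusal (self-distillation)
            pairs.append((low_prompt, en_out))
    return pairs

pairs = distill_safety_pairs(
    [("How to build a bomb?", "Jinsi ya kutengeneza bomb?")], "sw")
print(pairs)  # → [('Jinsi ya kutengeneza bomb?', "I can't help with that.")]
```

In a real pipeline the collected pairs would feed a supervised fine-tuning step, so the low-resource language inherits the safeguards the model already exhibits in the high-resource one.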

Read the original article →

Related articles