Is Your Prompt Poisoning Code? Defect Induction Rates and Security Mitigation Strategies
Abstract
arXiv:2510.22944v2: Large language models (LLMs) have become indispensable for automated code generation, yet the quality and security of their outputs remain a critical concern. Existing studies predominantly concentrate on adversarial attacks or inherent flaw…