Paper Title: Small Language Model Resilience to AST-Based Obfuscation Attacks on Reentrancy Vulnerability Detection
Conference Name: Korean Institute of Communications and Information Sciences (KICS Summer 2025)
Abstract: The rise of Small Language Models (SLMs) presents opportunities for enhancing code security analysis, yet their reliability against adversarial attacks such as code obfuscation remains a critical concern. This paper investigates the impact of semantics-preserving, Abstract Syntax Tree (AST)-based identifier renaming on the reentrancy detection capabilities of two prominent 7B-parameter SLMs: codellama:7b-instruct and mistral:7b-instruct. Evaluating on a curated dataset of 50 Solidity contracts, we find contrasting results: mistral:7b-instruct exhibited high baseline performance (F1 ≈ 0.93) and remarkable robustness, with minimal performance degradation after obfuscation (∆F1 ≈ −0.02). Conversely, codellama:7b-instruct struggled at baseline (F1 ≈ 0.39) and displayed an anomalous performance increase post-obfuscation (∆F1 ≈ +0.17). Our contribution is direct, quantitative evidence of significant variance in SLM robustness against a realistic obfuscation attack vector for vulnerability detection. These findings highlight the necessity of adversarial testing and motivate further research into the factors underlying SLM resilience.
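The attack studied here, semantics-preserving AST-based identifier renaming, can be illustrated with a minimal sketch. The paper applies it to Solidity contracts; for a self-contained illustration, this sketch instead uses Python's standard `ast` module on Python source, which demonstrates the same idea: parse the code, map every user-defined identifier to an opaque placeholder, and re-emit code with identical behavior. All names in the sketch (`IdentifierRenamer`, `obfuscate`, the `var_N` naming scheme) are illustrative assumptions, not the paper's actual tooling.

```python
import ast

class IdentifierRenamer(ast.NodeTransformer):
    """Rename user-defined identifiers to opaque placeholders (var_0, var_1, ...).

    Minimal sketch: it does not special-case builtins or imported names,
    so it is only safe on code that defines every name it uses.
    """
    def __init__(self):
        self.mapping = {}

    def _rename(self, name: str) -> str:
        # Assign placeholders in first-seen order so renaming is deterministic.
        if name not in self.mapping:
            self.mapping[name] = f"var_{len(self.mapping)}"
        return self.mapping[name]

    def visit_FunctionDef(self, node):
        node.name = self._rename(node.name)
        self.generic_visit(node)  # also rename arguments and body names
        return node

    def visit_arg(self, node):
        node.arg = self._rename(node.arg)
        return node

    def visit_Name(self, node):
        node.id = self._rename(node.id)
        return node

def obfuscate(source: str) -> str:
    """Parse, rename identifiers via the AST, and re-emit equivalent code."""
    tree = IdentifierRenamer().visit(ast.parse(source))
    return ast.unparse(tree)  # requires Python 3.9+

src = "def withdraw(balance, amount):\n    return balance - amount\n"
obf = obfuscate(src)
# The result defines the same logic under meaningless names,
# e.g. "def var_0(var_1, var_2): return var_1 - var_2".
```

Because only identifiers change, the transformed program is functionally identical; a detector that relies on suggestive names such as `withdraw` or `balance` loses that signal, which is exactly the robustness property the evaluation probes.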