Defending Against Social Engineering Attacks in the Age of LLMs
- Authors
Ai, Lin, Kumarage, Tharindu, Bhattacharjee, Amrita, Liu, Zizhou, Hui, Zheng, Davinroy, Michael, Cook, James, Cassani, Laura, Trapeznikov, Kirill, Kirchner, Matthias, Basharat, Arslan, Hoogs, Anthony, Garland, Joshua, Liu, Huan, and Hirschberg, Julia
- Subjects
Computer Science - Computation and Language
- Abstract
The proliferation of Large Language Models (LLMs) poses challenges in detecting and mitigating digital deception, as these models can emulate human conversational patterns and facilitate chat-based social engineering (CSE) attacks. This study investigates the dual capabilities of LLMs as both facilitators of and defenders against CSE threats. We develop a novel dataset, SEConvo, simulating CSE scenarios in academic and recruitment contexts and designed to examine how LLMs can be exploited in these situations. Our findings reveal that, while off-the-shelf LLMs generate high-quality CSE content, their detection capabilities are suboptimal, leading to increased operational costs for defense. In response, we propose ConvoSentinel, a modular defense pipeline that improves detection at both the message and the conversation levels, offering enhanced adaptability and cost-effectiveness. The retrieval-augmented module in ConvoSentinel identifies malicious intent by comparing messages to a database of similar conversations, enhancing CSE detection at all stages. Our study highlights the need for advanced strategies to leverage LLMs in cybersecurity.
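The abstract's retrieval-augmented idea (comparing an incoming message against a database of known social-engineering conversations) can be sketched in a few lines. This is a toy illustration only, not the paper's ConvoSentinel implementation: the bag-of-words embedding, the example database, and the similarity threshold are all assumptions made for demonstration.

```python
# Toy retrieval-augmented flagging: an incoming message is flagged if it is
# sufficiently similar to any snippet in a (hypothetical) database of known
# chat-based social engineering (CSE) messages. Real systems would use a
# neural encoder and a vector index instead of bag-of-words cosine similarity.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' over lowercase whitespace tokens."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse token-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Hypothetical snippets from previously observed CSE conversations.
MALICIOUS_DB = [
    "please share your university login so i can add you to the project",
    "send me your bank details to process the recruitment stipend",
]


def flag_message(msg: str, threshold: float = 0.3) -> bool:
    """Flag msg when it resembles any known-malicious snippet."""
    v = embed(msg)
    return any(cosine(v, embed(d)) >= threshold for d in MALICIOUS_DB)
```

With this setup, a message echoing a known scam pattern (e.g. a request for bank details tied to a recruitment payment) scores well above the threshold, while unrelated small talk does not; the design choice of retrieving similar prior conversations is what lets such a module catch early-stage manipulation before any single message looks overtly malicious.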
- Published
2024