
Adversarial Math Word Problem Generation

Authors :
Xie, Roy
Huang, Chengxuan
Wang, Junlin
Dhingra, Bhuwan
Publication Year :
2024

Abstract

Large language models (LLMs) have significantly transformed the educational landscape. As current plagiarism detection tools struggle to keep pace with LLMs' rapid advancements, the educational community faces the challenge of assessing students' true problem-solving abilities in the presence of LLMs. In this work, we explore a new paradigm for ensuring fair evaluation -- generating adversarial examples which preserve the structure and difficulty of the original questions intended for assessment, but are unsolvable by LLMs. Focusing on the domain of math word problems, we leverage abstract syntax trees to structurally generate adversarial examples that cause LLMs to produce incorrect answers by simply editing the numeric values in the problems. We conduct experiments on various open- and closed-source LLMs, quantitatively and qualitatively demonstrating that our method significantly degrades their math problem-solving ability. We identify shared vulnerabilities among LLMs and propose a cost-effective approach to attack high-cost models. Additionally, we conduct automatic analysis to investigate the cause of failure, providing further insights into the limitations of LLMs.

Comment: Code/data: https://github.com/ruoyuxie/adversarial_mwps_generation
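To illustrate the numeric-editing idea the abstract describes -- generating problem variants by substituting new values for the numbers in a question -- a minimal sketch is shown below. This is not the authors' implementation (which uses abstract syntax trees to keep edits structurally valid); the regex-based substitution and the function name are assumptions made purely for illustration.

```python
import itertools
import re

def edit_numbers(problem: str, candidates: dict) -> list:
    """Generate variants of a math word problem by replacing its numbers.

    `candidates` maps an original number (as a string) to a list of
    replacement values; numbers not in the map are left unchanged.
    All combinations of replacements are produced.
    """
    numbers = re.findall(r"\d+", problem)
    options = [candidates.get(n, [n]) for n in numbers]
    variants = []
    for combo in itertools.product(*options):
        values = iter(combo)
        # Replace each number, in order, with the chosen substitute.
        variants.append(re.sub(r"\d+", lambda m: str(next(values)), problem))
    return variants

problem = "Sam has 3 apples and buys 5 more. How many apples does Sam have?"
variants = edit_numbers(problem, {"3": [7, 12], "5": [9]})
# Produces 2 variants: (3 -> 7, 5 -> 9) and (3 -> 12, 5 -> 9)
```

In the paper's actual pipeline, each candidate variant would additionally be checked for answer consistency and then filtered by whether it causes the target LLM to answer incorrectly; that evaluation loop is omitted here.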

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.17916
Document Type :
Working Paper