
Adversarial Data Poisoning for Fake News Detection: How to Make a Model Misclassify a Target News without Modifying It

Authors:
Siciliano, Federico
Maiano, Luca
Papa, Lorenzo
Baccini, Federica
Amerini, Irene
Silvestri, Fabrizio
Publication Year: 2023

Abstract

Fake news detection models are critical for countering disinformation, but they can be manipulated through adversarial attacks. In this position paper, we analyze how an attacker can degrade the performance of an online-learning detector on a specific news item without being able to modify the target news itself. This scenario is plausible in settings such as social networks, where the attacker cannot exert complete control over all of the information. We show how an attacker could potentially introduce poisoning samples into the training data to manipulate the behavior of an online learning method. Our initial findings reveal that the susceptibility of logistic regression models varies with model complexity and attack type.
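The threat model the abstract describes can be illustrated with a toy sketch. The following is a minimal, hypothetical Python example, not the authors' code: it trains an online logistic-regression detector on synthetic features, then injects mislabelled near-duplicates of an unmodifiable target item into the training stream and observes how the prediction on the target shifts. The feature model, hyperparameters, and all names are assumptions made for illustration.

```python
# Hypothetical sketch of targeted data poisoning against an online
# logistic-regression fake-news detector. Not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class OnlineLogReg:
    """Logistic regression updated one sample at a time (online SGD)."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr

    def update(self, x, y):  # y in {0: real, 1: fake}
        p = sigmoid(self.w @ x + self.b)
        grad = p - y  # gradient of the log-loss w.r.t. the logit
        self.w -= self.lr * grad * x
        self.b -= self.lr * grad

    def predict_proba(self, x):
        return sigmoid(self.w @ x + self.b)

dim = 20
mu = rng.normal(size=dim)  # toy feature space: fake ~ +mu, real ~ -mu

def sample(label, n):
    centre = mu if label == 1 else -mu
    return centre + 0.5 * rng.normal(size=(n, dim))

# Train on a clean, interleaved stream of labelled news items.
stream = [(x, 1) for x in sample(1, 200)] + [(x, 0) for x in sample(0, 200)]
rng.shuffle(stream)
model = OnlineLogReg(dim)
for x, y in stream:
    model.update(x, y)

# The target: a fake news item the attacker cannot modify.
target = mu + 0.5 * rng.normal(size=dim)
print(f"before poisoning: P(fake | target) = {model.predict_proba(target):.3f}")

# Attack: inject near-duplicates of the target, mislabelled as real,
# into the part of the training stream the attacker can influence.
poison = target + 0.1 * rng.normal(size=(50, dim))
for x in poison:
    model.update(x, 0)  # flipped label

print(f"after poisoning:  P(fake | target) = {model.predict_proba(target):.3f}")
```

The sketch mirrors the abstract's key constraint: the target item is never touched; only the stream the online learner consumes is contaminated, yet the model's verdict on the target changes.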

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2312.15228
Document Type: Working Paper