
Handling the adversarial attacks

Authors :
Maoling Yan
Yongbin Zhao
Ning Cao
Jing Li
Yingying Wang
Sun Qian
Guofu Li
Pengjia Zhu
Source :
Journal of Ambient Intelligence and Humanized Computing. 10:2929-2943
Publication Year :
2018
Publisher :
Springer Science and Business Media LLC, 2018.

Abstract

The i.i.d. assumption is the cornerstone of most conventional machine learning algorithms. However, reducing the bias and variance of a learning model on an i.i.d. dataset may not prevent its failure on adversarial samples, which are intentionally generated either by malicious users or by rival programs. This paper gives a brief introduction to machine learning and adversarial learning, and discusses the research frontier of the adversarial issues noticed by both the machine learning and the network security fields. We argue that one key reason for the adversarial issue is that learning algorithms may not exploit the input feature set sufficiently, so attackers can focus on a small subset of features to trick the model. To address this issue, we consider two important classes of classifiers. For random forests, we propose a variant called Weighted Random Forest (WRF) that encourages the model to give even credit to the input features; this approach can be further improved by carefully selecting a subset of trees based on clustering analysis at run time. For neural networks, we propose to add extra soft constraints based on the weight variance to the objective function, so that the model bases its classification decision on a more evenly distributed feature impact. Empirical experiments show that these approaches effectively improve the robustness of the learned models over their baseline systems.
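
As a rough illustration only (not taken from the paper), the sketch below shows one way a weight-variance soft constraint of the kind described for neural networks could be attached to a standard cross-entropy objective in PyTorch. The network architecture, the per-feature weight-norm definition, and the penalty coefficient lam are assumptions made for this sketch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical two-layer classifier used only to illustrate the idea of
    # penalizing uneven feature impact via the variance of first-layer weights.
    class EvenFeatureClassifier(nn.Module):
        def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
            super().__init__()
            self.fc1 = nn.Linear(n_features, hidden)
            self.fc2 = nn.Linear(hidden, n_classes)

        def forward(self, x):
            return self.fc2(F.relu(self.fc1(x)))

    def loss_with_variance_penalty(model, x, y, lam=0.1):
        """Cross-entropy plus lam times the variance of per-feature weight norms.

        A high variance means a few input features carry most of the weight
        mass; the penalty nudges the model toward spreading credit more evenly.
        """
        logits = model(x)
        ce = F.cross_entropy(logits, y)
        # L2 norm of the first-layer weights attributed to each input feature.
        per_feature = model.fc1.weight.pow(2).sum(dim=0).sqrt()  # shape: (n_features,)
        penalty = per_feature.var()
        return ce + lam * penalty

In this sketch the soft constraint is simply added to the task loss, so ordinary gradient-based training balances accuracy against evenness of feature usage; the trade-off is controlled by the assumed hyperparameter lam.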

Details

ISSN :
1868-5145 and 1868-5137
Volume :
10
Database :
OpenAIRE
Journal :
Journal of Ambient Intelligence and Humanized Computing
Accession number :
edsair.doi...........a2c4c7e8130b437ddb4476cfa7a20ac8
Full Text :
https://doi.org/10.1007/s12652-018-0714-6