Authors: Paolo Russu, Ambra Demontis, Battista Biggio, Giorgio Fumera, Fabio Roli
Keywords:
Abstract: Machine learning is widely used in security-sensitive settings like spam and malware detection, although it has been shown that malicious data can be carefully modified at test time to evade detection. To overcome this limitation, adversary-aware learning algorithms have been developed, exploiting robust optimization and game-theoretical models to incorporate knowledge of potential adversarial manipulations into the learning algorithm. Although these techniques have been shown to be effective in some tasks, their adoption in practice is hindered by different factors, including the difficulty of meeting specific theoretical requirements, the complexity of implementation, and scalability issues, in terms of the computational time and space required during training. In this work, we aim to develop secure kernel machines against evasion attacks that are not computationally more demanding than their non-secure counterparts. In particular, leveraging recent work on robustness and regularization, we show that the security of a linear classifier can be drastically improved by selecting a proper regularizer, depending on the kind of attack, as well as by unbalancing the cost of classification errors. We then discuss nonlinear kernel machines, for which the choice of the kernel function is crucial. We also show that unbalancing the cost of classification errors and varying the kernel parameters can further improve security, yielding decision functions that better enclose the legitimate data. Our results on PDF malware detection corroborate our analysis.
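The abstract's central claim for the linear case is that a suitable regularizer, combined with unbalanced error costs, can harden the classifier without extra training cost. The sketch below is an illustration only, not the authors' actual formulation: it trains a linear hinge-loss classifier with an ℓ∞ penalty on the weights (which tends to spread weight magnitudes evenly, limiting the effect of sparse feature manipulations), and the `cost_pos`/`cost_neg` parameters are hypothetical knobs for unbalancing the cost of classification errors.

```python
import numpy as np

def train_linf_regularized_hinge(X, y, lam=0.1, lr=0.01, epochs=200,
                                 cost_pos=1.0, cost_neg=1.0):
    """Subgradient descent on a class-weighted hinge loss plus lam * ||w||_inf.
    Illustrative sketch only; the paper's exact formulation and solver may differ."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    costs = np.where(y > 0, cost_pos, cost_neg)   # unbalanced error costs
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                      # samples violating the margin
        # subgradient of the weighted hinge loss
        grad_w = -(costs[active] * y[active]) @ X[active] / n
        grad_b = -np.sum(costs[active] * y[active]) / n
        # subgradient of lam * ||w||_inf: shrink the largest-magnitude weight
        j = int(np.argmax(np.abs(w)))
        reg = np.zeros(d)
        reg[j] = lam * np.sign(w[j])
        w -= lr * (grad_w + reg)
        b -= lr * grad_b
    return w, b

# Toy usage: labels must be in {-1, +1}
X = np.random.randn(200, 10)
y = np.sign(X[:, 0] + 0.1 * np.random.randn(200))
w, b = train_linf_regularized_hinge(X, y, cost_pos=2.0)  # weight errors on positives more
```

The choice of penalty here is only one way to instantiate "selecting a proper regularizer depending on the kind of attack"; against dense manipulations a different norm would be the natural choice.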