Author: Battista Biggio
Keywords:
Abstract: Learning to discriminate between secure and hostile patterns is a crucial problem for species to survive in nature. Mimetism and camouflage are well-known examples of evolving weapons and defenses in the arms race between predators and preys. It is thus clear that not all of the information acquired by our senses should be considered necessarily secure or reliable. In machine learning and pattern recognition systems, however, we have only recently started investigating these issues. This phenomenon has been especially observed in the context of adversarial settings like malware detection and spam filtering, in which data can be purposely manipulated by humans to undermine the outcome of an automatic analysis. As current methods are not natively designed to deal with the intrinsic, adversarial nature of these problems, they exhibit specific vulnerabilities that an attacker may exploit either to mislead learning or to evade detection. Identifying and analyzing the impact of the corresponding attacks on learning algorithms has become one of the main open problems in the novel research field of adversarial machine learning, along with the design of more secure learning algorithms.

In the first part of this talk, I introduce a general framework that encompasses and unifies previous work in the field, allowing one to systematically evaluate classifier security against different, potential attacks. As an example application of this framework, in the second part I discuss evasion attacks, where malicious samples are manipulated at test time to evade detection, and then show how carefully-designed poisoning attacks can mislead some learning algorithms by manipulating only a small fraction of their training data. In addition, I discuss defense mechanisms against both attacks in real-world applications, including biometric identity recognition and computer security. Finally, I briefly discuss ongoing work on attacks against clustering algorithms, and sketch promising future research directions.
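To make the evasion setting mentioned above concrete, the following is a minimal toy sketch (not taken from the talk) of a gradient-based evasion attack on a linear classifier: a "malicious" sample is shifted against the gradient of the classifier's decision function at test time until it is labeled benign. It assumes NumPy and scikit-learn are available; the synthetic data, the logistic-regression model, and the step size are illustrative assumptions only.

```python
# Toy gradient-based evasion attack sketch (illustrative assumptions only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_benign = rng.normal(loc=-1.0, scale=0.5, size=(100, 2))
X_malicious = rng.normal(loc=+1.0, scale=0.5, size=(100, 2))
X = np.vstack([X_benign, X_malicious])
y = np.array([0] * 100 + [1] * 100)  # 1 = malicious, 0 = benign

clf = LogisticRegression().fit(X, y)

x = X_malicious[0].copy()   # sample the attacker wants to disguise
w = clf.coef_[0]            # gradient of the linear decision function w.r.t. x
step = 0.05
for _ in range(200):        # move against the gradient until labeled benign
    if clf.predict(x.reshape(1, -1))[0] == 0:
        break
    x -= step * w / np.linalg.norm(w)

print("original label:", clf.predict(X_malicious[:1])[0])
print("evasive label: ", clf.predict(x.reshape(1, -1))[0])
print("perturbation size:", np.linalg.norm(x - X_malicious[0]))
```

The size of the final perturbation gives a rough measure of how much effort the attacker needs to evade this particular classifier, which is the kind of quantity a systematic security evaluation would track.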