Authors: Amira Guesmi, Ihsen Alouani, Khaled Khasawneh, Mouna Baklouti, Tarek Frikha
DOI:
Keywords:
Abstract: In the past few years, deep learning structures, such as Convolutional Neural Networks (CNNs), have been used in a wide range of real-life problems [4, 3, 2]. While providing breakthrough improvements in classification performance, these architectures are vulnerable to adversarial machine learning (AML) attacks: carefully crafted, humanly imperceptible perturbations to the inputs that cause the system to output a wrong label, disrupting the system or otherwise providing the attacker with an advantage. In safety-critical domains, AML can have catastrophic consequences. For example, AML attacks threaten intelligent transportation systems, where deep neural networks are a critical component of the environment perception used to control autonomous vehicles, potentially leading to crashes and loss of life. Several defenses against adversarial attacks have been proposed, but subsequent, more sophisticated attacks continue to evolve and challenge these defenses. Defenses often require expensive retraining and/or substantial overheads, increasing the cost and reducing the performance of CNNs. As attacks keep getting more sophisticated, the cost of defenses also increases. Our proposed defense, Defensive Approximation (DA), leverages approximate computing (AC) for the first time to achieve quantifiable improvement in the resilience of CNNs to AML attacks. We observe this advantage for all adversarial example generation algorithms we study and under a range of attack scenarios, without harming classification performance. The defense does not require retraining, and by rooting the defense in the architecture, we achieve robustness while …
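The abstract describes adversarial examples as small, humanly imperceptible input perturbations that flip a classifier's prediction. As an illustration only (the abstract does not name the specific attack algorithms studied), below is a minimal PyTorch sketch of one well-known generation method, the Fast Gradient Sign Method; the `model`, `epsilon`, and the [0, 1] image-range assumption are hypothetical placeholders, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Craft an adversarial example by stepping along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss w.r.t. the true labels
    loss.backward()
    # Move each pixel slightly in the direction that increases the loss,
    # then clip back to the valid image range so the change stays imperceptible.
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
    return x_adv
```

A perturbation budget on the order of epsilon = 0.03 (for inputs normalized to [0, 1]) is a common choice in the literature for keeping the change visually negligible while still degrading classification accuracy.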