Authors: Dario Pasquini, Marco Mingione, Massimo Bernaschi
DOI: 10.1109/EUROSPW.2019.00037
Keywords:
Abstract: Deep generative models are rapidly becoming a common tool for researchers and developers. However, as exhaustively shown for the family of discriminative models, the test-time inference of deep neural networks cannot be fully controlled and erroneous behaviors can be induced by an attacker. In the present work, we show how a malicious user can force a pre-trained generator to reproduce arbitrary data instances by feeding it suitable adversarial inputs. Moreover, we show that these latent vectors can be shaped so as to be statistically indistinguishable from the set of genuine inputs. The proposed attack technique is evaluated against various GAN image generators using different architectures and training processes, in both conditional and non-conditional setups.
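The attack sketched in the abstract boils down to an optimization over the generator's latent space: find a vector z such that G(z) reproduces a chosen target while z still looks like a draw from the genuine input distribution. Below is a minimal sketch of that idea, assuming a PyTorch setup; the stand-in generator `G`, the MSE reconstruction loss, and the `prior_penalty` term are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (not the authors' implementation): search the latent
# space of a frozen, pre-trained generator G for a vector z whose
# output matches an arbitrary target instance.

import torch
import torch.nn as nn

latent_dim = 100

# Hypothetical stand-in generator; in practice G would be loaded from
# a pre-trained GAN checkpoint.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
)
G.eval()
for p in G.parameters():
    p.requires_grad_(False)  # the attacker controls only the input z

# Arbitrary target instance in [-1, 1] (placeholder data).
x_target = torch.rand(1, 28 * 28) * 2 - 1

# Start from a sample of the genuine prior N(0, I) and optimize z directly.
z = torch.randn(1, latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)

for step in range(1000):
    opt.zero_grad()
    recon_loss = nn.functional.mse_loss(G(z), x_target)
    # Illustrative soft penalty nudging z toward the second-moment
    # statistics of the unit-normal prior, so the adversarial latent
    # vector is harder to distinguish from genuine inputs.
    prior_penalty = (z.pow(2).mean() - 1.0).pow(2)
    loss = recon_loss + 0.1 * prior_penalty
    loss.backward()
    opt.step()
```

Because G stays frozen and gradients flow only into z, the sketch never alters the model itself; the weighting between reconstruction and the prior penalty governs the trade-off the abstract mentions between reproducing the target and keeping z statistically inconspicuous.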