Authors: Chengjiang Long, Bhavan Vasu
DOI:
Keywords:
Abstract: Deep neural networks have achieved great success in many real-world applications, yet it remains unclear and difficult to explain their decision-making process to an end-user. In this paper, we address the explainable AI problem for deep networks with our proposed framework, named IASSA, which generates an importance map indicating how salient each pixel is to the model's prediction via an iterative and adaptive sampling module. We employ an affinity matrix calculated on multi-level deep learning features to explore long-range pixel-to-pixel correlation, which can shift saliency values guided by a parameter-free spatial attention mechanism. Extensive experiments on the MS-COCO dataset show that our approach matches or exceeds the performance of state-of-the-art black-box explanation methods.
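The affinity-guided saliency shifting described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes cosine similarity between per-pixel feature vectors as the affinity measure and row-normalized (softmax) propagation, which are common choices but not specified by the abstract.

```python
import numpy as np

def shift_saliency(features, saliency):
    """Propagate per-pixel saliency along pairwise feature affinities.

    features: (N, D) array of per-pixel feature vectors (N pixels).
    saliency: (N,) array of initial saliency values.
    Returns a shifted (N,) saliency vector.
    """
    # Cosine-similarity affinity matrix (assumed affinity measure).
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    affinity = f @ f.T                              # (N, N) pixel-to-pixel correlation

    # Row-wise softmax so each pixel's incoming weights sum to 1.
    weights = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)

    # Shift saliency: each pixel's new value is an affinity-weighted average.
    return weights @ saliency
```

Because the weight rows sum to one, the operation redistributes saliency toward pixels whose features correlate over long ranges without changing its overall scale.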