Authors: Jia Liu, Jinhua Tian, Siyuan Hu, Hailun Xie
DOI: 10.3389/FNCOM.2021.620281
Keywords: Face (geometry), Computer science, Optimal distinctiveness theory, Pattern recognition, Identification (information), Similarity (psychology), Transfer of learning, Cognition, Convolutional neural network, Euclidean distance, Artificial intelligence
Abstract: The increasingly popular application of AI runs the risk of amplifying social bias, such as classifying non-white faces as animals. Recent research has largely attributed this bias to the training data implemented. However, the underlying mechanism is poorly understood; therefore, strategies to rectify the bias remain unresolved. Here, we examined a typical deep convolutional neural network (DCNN), VGG-Face, which was trained with a face dataset consisting of more white faces than black and Asian faces. The transfer learning result showed significantly better performance in identifying white faces, similar to the well-known other-race effect (ORE) in humans. To test whether the bias resulted from the imbalance of face images, we retrained the VGG-Face with a dataset containing more Asian faces and found a reverse ORE: the newly trained network preferred Asian faces over white faces in identification accuracy. Additionally, when the number of Asian and white faces was matched in the dataset, the DCNN did not show any bias. To further examine how the imbalanced image input led to the ORE, we performed a representational similarity analysis on the VGG-Face's activation. We found that the representation of the over-represented faces was more distinct, indexed by smaller in-group similarity and larger Euclidean distance. That is, these faces were scattered more sparsely in the representational space of the VGG-Face than the other faces. Importantly, this distinctiveness was positively correlated with identification accuracy, which explained the ORE observed in the VGG-Face. In summary, our study revealed the mechanism underlying the ORE in DCNNs, which provides a novel approach for studying AI ethics. In addition, the multidimensional face representation theory discovered in humans is also applicable to DCNNs, advocating for future studies to apply more cognitive theories to understand DCNNs' behavior.
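As a rough illustration of the distinctiveness measure described in the abstract, the following minimal sketch (not from the paper) computes the mean pairwise Euclidean distance among one group's face embeddings; the embedding arrays and the function name are hypothetical placeholders standing in for VGG-Face activations.

```python
# Hypothetical sketch: representational distinctiveness as the mean pairwise
# Euclidean distance among a group's face embeddings. Larger values indicate
# faces scattered more sparsely in the representational space.
import numpy as np
from scipy.spatial.distance import pdist

def distinctiveness(embeddings: np.ndarray) -> float:
    """Mean pairwise Euclidean distance within one group of embeddings."""
    return pdist(embeddings, metric="euclidean").mean()

# Toy usage: random vectors stand in for DCNN activations of two face groups.
rng = np.random.default_rng(0)
group_a = rng.normal(size=(100, 4096))  # hypothetical over-represented group
group_b = rng.normal(size=(100, 4096))  # hypothetical under-represented group

print("Group A distinctiveness:", distinctiveness(group_a))
print("Group B distinctiveness:", distinctiveness(group_b))
```

In the study's framing, such per-group distinctiveness values would then be correlated with per-group identification accuracy to test whether sparser representations predict better recognition.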