Multidimensional Face Representation in a Deep Convolutional Neural Network Reveals the Mechanism Underlying AI Racism.

Authors: Jia Liu, Jinhua Tian, Siyuan Hu, Hailun Xie

DOI: 10.3389/FNCOM.2021.620281

Keywords: Face (geometry); Computer science; Optimal distinctiveness theory; Pattern recognition; Identification (information); Similarity (psychology); Transfer of learning; Cognition; Convolutional neural network; Euclidean distance; Artificial intelligence

Abstract: The increasingly popular application of AI runs the risk of amplifying social bias, such as classifying non-white faces as animals. Recent research has largely attributed this bias to the training data implemented. However, the underlying mechanism is poorly understood; therefore, strategies to rectify the bias remain unresolved. Here, we examined a typical deep convolutional neural network (DCNN), VGG-Face, which was trained with a face dataset consisting of more white faces than black and Asian faces. The transfer learning result showed significantly better performance in identifying white faces, similar to the well-known social bias in humans, the other-race effect (ORE). To test whether this effect resulted from the imbalance of face images, we retrained VGG-Face with a dataset containing more Asian than white faces, and found a reverse ORE in that the newly trained network preferred Asian faces over white faces in identification accuracy. Additionally, when the number of white and Asian faces was matched in the dataset, the DCNN did not show any race bias. To further examine how the imbalanced image input led to the ORE, we performed a representational similarity analysis on VGG-Face's activation. We found that the faces over-represented in training were represented more distinctly, indexed by smaller in-group similarity and larger pairwise Euclidean distance; that is, they were scattered more sparsely in the face space of VGG-Face than the other faces. Importantly, the distinctiveness of a face was positively correlated with its identification accuracy, which explained the ORE observed in VGG-Face. In summary, our study revealed the mechanism underlying the race bias in DCNNs, which provides a novel approach to studying AI ethics. In addition, the multidimensional face representation theory discovered in humans is also applicable to DCNNs, advocating that future studies apply cognitive theories to understand DCNNs' behavior.
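The distinctiveness index described in the abstract (in-group similarity and pairwise Euclidean distance over DCNN activations) and its correlation with identification accuracy can be illustrated with a minimal sketch. This is not the authors' analysis code: the activation arrays and accuracy values below are synthetic placeholders, and treating a penultimate-layer activation vector as the face representation is an assumption for the example.

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import pdist, squareform


def group_distinctiveness(activations):
    """Distinctiveness of one group of face representations.

    activations: (n_faces, n_features) array of DCNN responses,
    one row per face image (e.g., a penultimate-layer vector).
    Returns the mean pairwise Euclidean distance (larger = faces
    scattered more sparsely in face space) and the mean pairwise
    correlation, used here as a simple in-group similarity index.
    """
    mean_dist = pdist(activations, metric="euclidean").mean()
    mean_sim = (1 - pdist(activations, metric="correlation")).mean()
    return mean_dist, mean_sim


def per_face_distinctiveness(activations):
    """Distinctiveness of each face: mean distance to all other faces."""
    d = squareform(pdist(activations, metric="euclidean"))
    return d.sum(axis=1) / (d.shape[0] - 1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for activations of two face groups; the
    # "over-represented" group is given a more dispersed distribution.
    white_act = rng.normal(0.0, 1.2, size=(200, 4096))
    asian_act = rng.normal(0.0, 0.8, size=(200, 4096))

    for name, act in [("white", white_act), ("asian", asian_act)]:
        dist, sim = group_distinctiveness(act)
        print(f"{name}: mean Euclidean distance={dist:.2f}, "
              f"in-group similarity={sim:.3f}")

    # Relate per-face distinctiveness to identification accuracy.
    # Accuracy here is simulated; in the study it would come from the
    # transfer-learning identification task.
    distinct = per_face_distinctiveness(white_act)
    accuracy = 0.5 + 0.1 * (distinct - distinct.mean()) / distinct.std()
    accuracy += rng.normal(0, 0.05, size=accuracy.shape)
    r, p = pearsonr(distinct, accuracy)
    print(f"distinctiveness vs. accuracy: r={r:.2f}, p={p:.3g}")
```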
