The effect of incorrect reliability information on expectations, perceptions, and use of automation

Authors: Laura H. Barg-Walkow, Wendy A. Rogers

DOI: 10.1177/0018720815610271

Keywords: Human factors and ergonomics, Applied psychology, Affect (psychology), Set (psychology), Automation, Computer science, Reliability (statistics), Task (project management), Computer security, Poison control, Duration (project management)

Abstract: OBJECTIVE: We examined how providing artificially high or low statements about automation reliability affected expectations, perceptions, and use of automation over time. BACKGROUND: One common method of introducing automation is to explicitly state the automation's capabilities. Research is needed to understand how expectations set by such introductions affect perceptions and use of automation. METHOD: Explicit-statement introductions were manipulated to set expectations at higher-than (90%), same-as (75%), or lower-than (60%) actual reliability levels in a dual-task scenario with 75% reliable automation. Two experiments were conducted to assess compliance, reliance, and task performance over (a) 2 days and (b) 4 days. RESULTS: The baseline assessments showed that initial expectations matched the introduced expectation. For the duration of each experiment, the groups' expectations of reliability remained lower than the actual reliability. However, group differences were no longer evident after Day 1 in either study. There were few differences between groups in automation use, which generally stayed the same or increased with experience using the system. CONCLUSION: Introductory statements describing automation reliability can have a long-lasting impact on performance. Statements including incorrect reliability information do not appear to have similarly lasting effects. APPLICATION: Introductions should be designed according to desired outcomes. Language: en

References (42)
Stephanie M. Merritt, Daniel R. Ilgen, Not all trust is created equal: dispositional and history-based trust in human-automation interactions. Human Factors, vol. 50, pp. 194-210 (2008), 10.1518/001872008X288574
R. Parasuraman, T. B. Sheridan, C. D. Wickens, A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 30, pp. 286-297 (2000), 10.1109/3468.844354
Vlad L. Pop, Alex Shrewsbury, Francis T. Durso, Individual differences in the calibration of trust in automation. Human Factors, vol. 57, pp. 545-556 (2015), 10.1177/0018720814564422
James P. Bliss, Mariea C. Dunn, Behavioural implications of alarm mistrust as a function of task workload. Ergonomics, vol. 43, pp. 1283-1300 (2000), 10.1080/001401300421743
Sara E. McBride, Wendy A. Rogers, Arthur D. Fisk, Understanding the effect of workload on automation use for younger and older adults. Human Factors, vol. 53, pp. 672-686 (2011), 10.1177/0018720811421909
Sara J. Czaja, Neil Charness, Arthur D. Fisk, Christopher Hertzog, Sankaran N. Nair, Wendy A. Rogers, Joseph Sharit, Factors predicting the use of technology: findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE). Psychology and Aging, vol. 21, pp. 333-352 (2006), 10.1037/0882-7974.21.2.333
James P. Bliss, An investigation of extreme alarm response patterns in laboratory experiments. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 47, pp. 1683-1687 (2003), 10.1177/154193120304701319
John D. Lee, Neville Moray, Trust, self-confidence, and operators' adaptation to automation. International Journal of Human-Computer Studies, vol. 40, pp. 153-184 (1994), 10.1006/IJHC.1994.1007
Neta Ezer, Arthur D. Fisk, Wendy A. Rogers, Age-related differences in reliance behavior attributable to costs within a human-decision aid system. Human Factors, vol. 50, pp. 853-863 (2008), 10.1518/001872008X375018
Wendy A. Rogers, Nina Lamson, Gabriel K. Rousseau, Warning research: an integrative perspective. Human Factors, vol. 42, pp. 102-139 (2000), 10.1518/001872000779656624