Authors: Marissa F. McBride, Fiona Fidler, Mark A. Burgman
DOI: 10.1111/J.1472-4642.2012.00884.X
Keywords:
Abstract: Aim Expert knowledge routinely informs ecological research and decision-making. Its reliability is often questioned, but is rarely subject to empirical testing and validation. We investigate the ability of experts to make quantitative predictions of variables for which the answers are known. Location Global. Methods Experts in four subfields were asked about the outcomes of scientific studies, in the form of unpublished (in press) journal articles, based on information in the article introduction and methods sections. Estimates from students were elicited in one case study for comparison. For each variable, participants assessed a lower bound, an upper bound, a best guess and a level of confidence that the observed value will lie within their ascribed interval. Responses were evaluated for (1) accuracy: the degree to which estimates corresponded with experimental results, (2) informativeness: the precision of the uncertainty bounds, and (3) calibration: whether the bounds contained the truth as often as specified. Results Expert responses were found to be overconfident, specifying 80% intervals that captured the truth only 49-65% of the time. In contrast, student intervals captured the truth 76% of the time, displaying close to perfect calibration. Best estimates from experts were on average more accurate than those from students. The best students outperformed the worst experts. No consistent relationships were found between performance and years of experience, publication record or self-assessed expertise. Main conclusions Experts possess valuable knowledge but may require training to communicate it accurately. Expert status is a poor guide to good performance. In the absence of data on past performance, simple averages of expert estimates provide a robust counter to individual variation. © 2012 Blackwell Publishing Ltd.
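The abstract's evaluation criteria (accuracy of best guesses and calibration of 80% credible intervals) can be illustrated with a short sketch. This is not the paper's analysis code; the class and function names below, and the sample numbers, are illustrative assumptions showing how "fraction of intervals capturing the observed value" and "average error of best guesses" could be computed.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class IntervalJudgement:
    """One elicited estimate: bounds, best guess, and the later-observed value."""
    lower: float
    upper: float
    best_guess: float
    observed: float

def calibration(judgements):
    """Fraction of intervals that contain the observed value.

    For well-calibrated 80% intervals this should be close to 0.80;
    the abstract reports 0.49-0.65 for experts and 0.76 for students.
    """
    return mean(j.lower <= j.observed <= j.upper for j in judgements)

def mean_absolute_error(judgements):
    """Accuracy of best guesses: mean absolute deviation from observed values."""
    return mean(abs(j.best_guess - j.observed) for j in judgements)

# Hypothetical judgements for three variables (made-up numbers).
sample = [
    IntervalJudgement(lower=10, upper=30, best_guess=18, observed=25),
    IntervalJudgement(lower=5,  upper=12, best_guess=8,  observed=14),
    IntervalJudgement(lower=40, upper=90, best_guess=60, observed=55),
]
print(calibration(sample))           # 2 of 3 intervals capture the truth -> ~0.67
print(mean_absolute_error(sample))   # average |best guess - observed|
```

The same pattern extends to the paper's group-level comparison: computing these statistics separately for expert and student judgements, or for an average of several experts' best guesses, would reproduce the kind of contrast the abstract describes.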