Authors: Omar Alonso, Ricardo Baeza-Yates
DOI: 10.1007/978-3-642-20161-5_16
Keywords: Information retrieval, Work (electrical), Data science, Information design, Crowdsourcing software development, Human interface guidelines, Approval rate, Crowdsourcing, Presentation, Relevance (information retrieval), Computer science
Abstract: In the last years crowdsourcing has emerged as a viable platform for conducting relevance assessments. The main reason behind this trend is that it makes it possible to conduct experiments extremely fast, with good results and at low cost. However, like in any experiment, there are several details that can make an experiment work or fail. To gather useful results, user interface guidelines, inter-agreement metrics, and justification analysis are important aspects of a successful experiment. We explore the design and execution of relevance judgments using the Amazon Mechanical Turk platform, introducing a methodology for assessments and a series of experiments on TREC 8 with a fixed budget. Our findings indicate that workers are comparable to experts, even providing detailed feedback for certain query-document pairs. We also show the importance of document presentation when performing assessment tasks. Finally, we show examples from our work that are interesting in their own right.
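The abstract cites inter-agreement metrics as one ingredient of a successful crowdsourcing experiment but does not specify which metric is used. As a minimal illustrative sketch (not taken from the paper), the snippet below computes Fleiss' kappa, a common choice when several workers label the same query-document pairs; the category counts are hypothetical and only serve to show the calculation.

```python
# Minimal sketch (not from the paper): Fleiss' kappa as one possible
# inter-agreement metric for crowdsourced relevance labels.
# Each row = one query-document pair; each column = a relevance category
# (e.g., non-relevant, relevant); each cell = number of workers choosing it.

def fleiss_kappa(ratings):
    """ratings: list of rows; each row holds per-category rater counts."""
    N = len(ratings)            # number of items (query-document pairs)
    n = sum(ratings[0])         # raters per item (assumed constant)
    k = len(ratings[0])         # number of categories

    # Observed agreement per item, averaged over all items.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N

    # Chance agreement from the overall category proportions.
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Hypothetical counts: 5 workers judging 4 query-document pairs
# as non-relevant vs. relevant.
judgments = [
    [1, 4],
    [0, 5],
    [3, 2],
    [5, 0],
]
print(f"Fleiss' kappa: {fleiss_kappa(judgments):.3f}")  # ~0.495 here
```

Values near 0 indicate agreement no better than chance, while values approaching 1 indicate strong agreement; in a crowdsourcing setting such a score would typically be compared against agreement among expert assessors.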