Quality control for comparison microtasks

Authors: Petros Venetis, Hector Garcia-Molina

DOI: 10.1145/2442657.2442660

Keywords:

Abstract: We study quality control mechanisms for a crowdsourcing system where workers perform object comparison tasks. We consider error masking techniques (e.g., voting) and detection of bad workers. For the latter, we consider using gold-standard questions, as well as disagreement with the plurality answer. We present experiments on Mechanical Turk that yield insights into the role of task difficulty in quality control and the effectiveness of the schemes.
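The two mechanism families named in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the per-worker answer dictionaries, and the 0.5 accuracy threshold are illustrative assumptions. It shows error masking via a plurality vote over worker answers, and bad-worker detection by grading workers against gold-standard questions:

```python
from collections import Counter

def plurality_answer(answers):
    """Error masking via voting: return the most common answer
    among the workers' responses to one comparison task."""
    return Counter(answers).most_common(1)[0][0]

def flag_bad_workers(worker_answers, gold, threshold=0.5):
    """Bad-worker detection via gold-standard questions.

    worker_answers: {worker_id: {question_id: answer}}
    gold: {question_id: correct_answer}
    Flags any worker whose accuracy on the gold questions
    they answered falls below `threshold` (an assumed cutoff).
    """
    bad = set()
    for worker, answers in worker_answers.items():
        graded = [q for q in answers if q in gold]
        if not graded:
            continue  # worker saw no gold questions; cannot grade
        correct = sum(answers[q] == gold[q] for q in graded)
        if correct / len(graded) < threshold:
            bad.add(worker)
    return bad
```

The same plurality answers can also drive the abstract's second detection scheme: a worker who frequently disagrees with the plurality answer on non-gold tasks can be flagged by the analogous accuracy computation, using `plurality_answer` in place of `gold`.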

References (6)
D. Prelec, "A Bayesian Truth Serum for Subjective Data," Science, vol. 306, pp. 462-466, (2004), 10.1126/SCIENCE.1102081
Petros Venetis, Hector Garcia-Molina, Kerui Huang, Neoklis Polyzotis, "Max algorithms in crowdsourcing environments," Proceedings of the World Wide Web conference (WWW '12), pp. 989-998, (2012), 10.1145/2187836.2187969
Paul Heymann, Hector Garcia-Molina, "Turkalytics," Proceedings of the 20th international conference on World Wide Web (WWW '11), pp. 477-486, (2011), 10.1145/1963405.1963473
Devavrat Shah, Sewoong Oh, David R. Karger, "Iterative Learning for Reliable Crowdsourcing Systems," Neural Information Processing Systems, vol. 24, pp. 1953-1961, (2011)
Aaron D. Shaw, John J. Horton, Daniel L. Chen, "Designing incentives for inexpert human raters," Conference on Computer Supported Cooperative Work, pp. 275-284, (2011), 10.1145/1958824.1958865
John Joseph Horton, Lydia B. Chilton, "The labor economics of paid crowdsourcing," Proceedings of the 11th ACM conference on Electronic commerce (EC '10), pp. 209-218, (2010), 10.1145/1807342.1807376