Exploring the use of crowdsourcing to support empirical studies in software engineering

Authors: Kathryn T. Stolee, Sebastian Elbaum

DOI: 10.1145/1852786.1852832

Keywords: Engineering, Mechanism (biology), Data science, Crowdsourcing software development, Software engineering, Work (electrical), Empirical research, Generality, Crowdsourcing

Abstract: The power and the generality of the findings obtained through empirical studies are bounded by the number and type of participating subjects. In software engineering, obtaining a large and adequate pool of subjects to evaluate a technique or tool is often a major challenge. In this work we explore the use of crowdsourcing as a mechanism to address that challenge by assisting in subject recruitment. More specifically, we show how an adapted study can be performed under an infrastructure that not only makes it possible to reach a large base of users but also provides capabilities to manage the studies being conducted. We discuss the lessons learned from this experience, which illustrate the potential and tradeoffs of crowdsourcing for software engineering studies.
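The abstract does not name a specific platform or API, so the following is only a minimal sketch of how a study task might be posted and managed, assuming Amazon Mechanical Turk as the crowdsourcing infrastructure and the boto3 Python client; the study URL, reward, and all other parameter values are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: assumes Amazon Mechanical Turk via boto3; the study URL and all
# parameter values below are hypothetical placeholders, not from the paper.
import boto3

# Use the requester sandbox endpoint while piloting the study design.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An ExternalQuestion points workers at the researchers' own web infrastructure,
# where the actual software engineering task would be administered.
external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://study.example.org/task</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

# Post the task so it is visible to a large base of potential subjects.
hit = mturk.create_hit(
    Title="Short programming comprehension task (research study)",
    Description="Answer questions about a small code sample.",
    Keywords="software engineering, research, programming",
    Reward="0.50",                     # payment per completed assignment (USD)
    MaxAssignments=100,                # number of distinct subjects sought
    LifetimeInSeconds=7 * 24 * 3600,   # how long the task stays available
    AssignmentDurationInSeconds=1800,  # time allotted to each subject
    Question=external_question,
)
hit_id = hit["HIT"]["HITId"]

# The same API provides the management capabilities the abstract alludes to:
# retrieving submitted work and approving (i.e., paying) each subject.
submitted = mturk.list_assignments_for_hit(
    HITId=hit_id, AssignmentStatuses=["Submitted"]
)
for assignment in submitted["Assignments"]:
    mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
```

In a setup like this, subject recruitment is delegated to the platform's worker pool, while screening, task delivery, and data collection remain under the researchers' control on their own study server.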

References (12)
C. Scaffidi, M. Shaw, B. Myers. Estimating the numbers of end users and end user programmers. Symposium on Visual Languages and Human-Centric Computing, pp. 207-214 (2005). DOI: 10.1109/VLHCC.2005.34
Rion Snow, Brendan O'Connor, Daniel Jurafsky, Andrew Y. Ng. Cheap and fast---but is it good? Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '08), pp. 254-263 (2008). DOI: 10.3115/1613715.1613751
Winter Mason, Duncan J. Watts. Financial incentives and the "performance of crowds". Knowledge Discovery and Data Mining, vol. 11, pp. 77-85 (2009). DOI: 10.1145/1600150.1600175
M. Cameron Jones, Elizabeth F. Churchill. Conversations in developer communities. Proceedings of the Fourth International Conference on Communities and Technologies (C&T '09), pp. 195-204 (2009). DOI: 10.1145/1556460.1556489
Julie S. Downs, Mandy B. Holbrook, Steve Sheng, Lorrie Faith Cranor. Are your participants gaming the system? Proceedings of the 28th International Conference on Human Factors in Computing Systems (CHI '10), pp. 2399-2402 (2010). DOI: 10.1145/1753326.1753688
Joel Ross, Lilly Irani, M. Six Silberman, Andrew Zaldivar, Bill Tomlinson. Who are the crowdworkers? CHI '10 Extended Abstracts on Human Factors in Computing Systems (CHI EA '10), pp. 2863-2872 (2010). DOI: 10.1145/1753846.1753873
Jeffrey Heer, Michael Bostock. Crowdsourcing graphical perception: using Mechanical Turk to assess visualization design. Human Factors in Computing Systems (CHI '10), pp. 203-212 (2010). DOI: 10.1145/1753326.1753357
Greg Little, Lydia B. Chilton, Max Goldman, Robert C. Miller. TurKit: tools for iterative tasks on Mechanical Turk. Knowledge Discovery and Data Mining, pp. 29-30 (2009). DOI: 10.1145/1600150.1600159
Aniket Kittur, Ed H. Chi, Bongwon Suh. Crowdsourcing user studies with Mechanical Turk. Proceedings of the Twenty-Sixth Annual CHI Conference on Human Factors in Computing Systems (CHI '08), pp. 453-456 (2008). DOI: 10.1145/1357054.1357127