Author: Katherine Elizabeth Trushkowsky
DOI:
Keywords:
Abstract: Author(s): Trushkowsky, Katherine Elizabeth | Advisor(s): Franklin, Michael J | Hybrid human/machine database and search systems promise to greatly expand the usefulness of query processing by incorporating human knowledge and experience via "crowdsourcing" into existing systems for data gathering and other tasks. Of course, such systems raise many implementation questions. For example, how can we reason about result quality in a hybrid system? How best to combine the benefits of machine computation and human computation? In this thesis, we describe how we attacked these challenges by developing statistical tools that enable users and developers to reason about the completeness of query results in such systems, as well as techniques for combining crowdsourced answers with automated search engines. We present evaluations of these techniques using experiments run on a popular crowdsourcing platform, Amazon's Mechanical Turk.