Evaluating and predicting answer quality in community QA

Authors: Chirag Shah, Jeffrey Pomerantz

DOI: 10.1145/1835449.1835518

Abstract: Question answering (QA) helps one go beyond traditional keyword-based querying and retrieve information in a more precise form than that given by a document or a list of documents. Several community-based QA (CQA) services have emerged that allow information seekers to pose their information needs as questions and receive answers from fellow users. A question may receive multiple answers from multiple users, and the asker or the community can choose the best answer. While the asker can thus indicate whether he was satisfied with the information received, there is no clear way of evaluating the quality of that information. We present a study to evaluate and predict the quality of an answer in a CQA setting. We chose Yahoo! Answers as such a service and selected a small set of questions, each with at least five answers. We asked Amazon Mechanical Turk workers to rate each answer based on 13 different criteria. Each answer was rated by five workers. We then matched these quality assessments with the actual asker's rating to show that the criteria we used faithfully match the asker's perception of quality. We furthered our investigation by extracting various features from the questions, the answers, and the users who posted them, and by training a number of classifiers to select the best answer using those features. We demonstrate the high predictability of the trained models along with the relative merits of the features used for prediction. These findings support our argument that in the case of CQA, contextual information, such as a user's profile, can be critical in predicting content quality.
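The feature-extraction and classifier-training pipeline the abstract describes can be sketched as a toy, self-contained example. The two features here (answer length and question-answer word overlap) and the logistic-regression learner are illustrative assumptions for the sketch, not the 13 criteria, features, or classifiers the paper actually used:

```python
import math

def features(question, answer):
    """Toy feature vector for an answer: scaled length, word overlap
    with the question, and a constant bias term. These features are
    hypothetical stand-ins, not the paper's actual feature set."""
    q_words = set(question.lower().split())
    a_words = answer.lower().split()
    overlap = len(q_words & set(a_words)) / max(len(q_words), 1)
    return [len(a_words) / 100.0, overlap, 1.0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=200, lr=0.5):
    """Fit logistic regression by stochastic gradient descent.
    data is a list of (feature_vector, label) pairs, label in {0, 1}."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            g = p - y  # gradient of log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

def best_answer(question, answers, w):
    """Return the index of the answer with the highest predicted quality."""
    scores = [sigmoid(sum(wi * xi for wi, xi in zip(w, features(question, a))))
              for a in answers]
    return max(range(len(answers)), key=scores.__getitem__)

# Tiny synthetic training set: one good (best) and one poor answer.
question = "how do I sort a list in python"
good = "use the sorted builtin to sort a list in python it returns a new list"
bad = "no idea"
weights = train([(features(question, good), 1), (features(question, bad), 0)])
```

Given the trained `weights`, `best_answer(question, [bad, good], weights)` ranks the longer, more on-topic answer first, mirroring the best-answer selection task, though with far less signal than real user-profile and content features would provide.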
