Author: Richard J. Cook
DOI: 10.1002/9781118445112.STAT05243
Keywords: agreement; association; contingency table; kappa; margins; reliability
Abstract: The kappa statistic is a widely used measure for quantifying agreement between raters using nominal scales. It is advocated on the grounds that it "corrects" the level of agreement for what would be expected by chance alone based on the marginal rates. The operating characteristics of kappa, however, show that it responds in a complicated and unintuitive way to subject-specific agreement. Modifications and supplementary statistics, which have been proposed to help interpret kappa, are discussed. Remarks are made as to why kappa has such problematic features.
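As a brief sketch of the chance correction the abstract refers to (the formula is not part of the original entry): with observed agreement $p_o$ and chance-expected agreement $p_e$ computed from the marginal rates of an $R \times R$ contingency table of two raters' classifications, kappa takes the standard form

$$\kappa = \frac{p_o - p_e}{1 - p_e}, \qquad p_o = \sum_{i=1}^{R} p_{ii}, \qquad p_e = \sum_{i=1}^{R} p_{i\cdot}\, p_{\cdot i},$$

so that $\kappa = 1$ under perfect agreement and $\kappa = 0$ when observed agreement equals the level expected by chance from the margins alone.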