Author: Kilem Li Gwet
Keywords:
Abstract: Pi (π) and kappa (κ) statistics are widely used in the areas of psychiatry and psychological testing to compute the extent of agreement between raters on nominally scaled data. It is a fact that these coefficients occasionally yield unexpected results in situations known as the paradoxes of kappa. This paper explores the origin of these limitations, and introduces an alternative and more stable agreement coefficient referred to as the AC1 coefficient. Also proposed are new variance estimators for the multiple-rater generalized pi and AC1 statistics, whose validity does not depend upon the hypothesis of independence between raters. This is an improvement over existing alternative variances, which depend on that assumption. A Monte-Carlo simulation study demonstrates the validity of these variance estimators for confidence interval construction, and confirms the value of AC1 as an improved alternative to existing inter-rater reliability statistics.
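To make the contrast in the abstract concrete, the sketch below computes the two-rater point estimates of the coefficients being compared: observed agreement, Cohen's kappa, Scott's pi, and Gwet's AC1 (whose chance-agreement term is p_e = (1/(q-1)) Σ_k π_k(1-π_k), with π_k the average marginal proportion of category k). This is a minimal illustration only: it does not implement the paper's multiple-rater generalizations or variance estimators, and the function name and toy data are our own, not from the paper.

```python
import numpy as np

def agreement_coefficients(ratings_a, ratings_b, categories):
    """Observed agreement, Cohen's kappa, Scott's pi and Gwet's AC1
    for two raters classifying n subjects into q nominal categories."""
    n, q = len(ratings_a), len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    # q x q contingency table of joint classifications (rows: rater A)
    table = np.zeros((q, q))
    for a, b in zip(ratings_a, ratings_b):
        table[idx[a], idx[b]] += 1
    p = table / n
    p_a = np.trace(p)                        # observed agreement
    row, col = p.sum(axis=1), p.sum(axis=0)  # marginal proportions
    pi_k = (row + col) / 2                   # average category prevalence
    # Chance-agreement term under each coefficient's model
    pe_kappa = np.sum(row * col)                   # Cohen's kappa
    pe_pi = np.sum(pi_k ** 2)                      # Scott's pi
    pe_ac1 = np.sum(pi_k * (1 - pi_k)) / (q - 1)   # Gwet's AC1
    coef = lambda pe: (p_a - pe) / (1 - pe)
    return {"p_a": p_a, "kappa": coef(pe_kappa),
            "pi": coef(pe_pi), "AC1": coef(pe_ac1)}

# Skewed toy data illustrating the kappa paradox: 95% raw agreement,
# yet kappa and pi come out near zero (slightly negative here),
# while AC1 stays high.
a = [1] * 98 + [2] * 2
b = [1] * 95 + [2] * 3 + [1] * 2
print(agreement_coefficients(a, b, categories=[1, 2]))
# p_a = 0.95, kappa ~ -0.025, pi ~ -0.026, AC1 ~ 0.947
```

The toy data show the paradox the abstract refers to: when one category dominates, kappa's and pi's chance-agreement terms approach the observed agreement and drive the coefficients toward zero despite raters agreeing on 95% of subjects, whereas AC1's chance-agreement term stays small and the coefficient remains stable.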