Authors: Lesley Wiart, KAT Kolaski, Charlene Butler, Laura Vogtle, Lynne Romeiser Logan
DOI: 10.1111/J.1469-8749.2012.04307.X
Keywords:
Abstract: Aim: The aim of this study was to evaluate the interrater reliability and convergent validity of the American Academy for Cerebral Palsy and Developmental Medicine's (AACPDM) methodology for conducting systematic reviews (group design studies). Method: Four clinicians independently rated 24 articles for level of evidence and conduct using the AACPDM methodology. Study quality was also assessed with the Effective Public Health Practice Project scale. Raters were randomly assigned to one of two pairs to resolve discrepancies. Agreement between individual raters was calculated with kappa (α=0.05) and intraclass correlation coefficients (ICCs; α=0.05). Spearman's rank correlation coefficient was used to examine the relationship between raters' categorization of quality categories on the two tools. Results: There was acceptable agreement between individual raters (κ=0.77; p<0.001; ICC=0.90) and between rater pairs (κ=0.83; ICC=0.96) for level of evidence ratings. Agreement was acceptable for four of the seven conduct questions (κ=0.53–0.87). ICCs (all raters) for conduct category ratings (weak, moderate, strong) indicated good agreement (ICC=0.76). Spearman's rho indicated a significant positive relationship for overall quality comparisons between the two tools (0.52; p<0.001). Conclusions: The AACPDM level of evidence rating system has acceptable interrater reliability. Evaluation of its convergent validity demonstrated reasonable agreement when compared with a similar tool.
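The agreement statistic (κ) reported in the abstract is Cohen's kappa: observed agreement corrected for the agreement expected by chance from each rater's marginal frequencies. A minimal sketch of that computation for two raters, using made-up level-of-evidence ratings rather than the study's actual data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters who rated the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of each
    # rater's marginal frequency for that category.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical level-of-evidence ratings (I-V) for eight articles.
a = ["I", "II", "II", "III", "IV", "II", "I", "V"]
b = ["I", "II", "III", "III", "IV", "II", "I", "V"]
print(round(cohens_kappa(a, b), 3))  # → 0.84
```

Kappa near 1 indicates near-perfect agreement beyond chance; values around 0.77–0.83, as reported for the level of evidence ratings, are conventionally read as substantial agreement.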