Authors: Patrick Ramos, Jeremy Montez, Adrian Tripp, Casey K. Ng, Inderbir S. Gill
DOI: 10.1111/BJU.12559
Keywords:
Abstract:
Objectives: To evaluate robotic dry laboratory (dry lab) exercises in terms of their face, content, construct and concurrent validities, and to evaluate the applicability of the Global Evaluative Assessment of Robotic Skills (GEARS) tool for assessing dry lab performance.
Materials and Methods: Participants were prospectively categorized into two groups: novice (no cases as primary surgeon) and expert (≥30 cases). Participants completed three virtual reality (VR) exercises using the da Vinci Simulator (Intuitive Surgical, Sunnyvale, CA, USA), as well as corresponding dry lab versions of each exercise (Mimic Technologies, Seattle, WA, USA) on the da Vinci Surgical System. Simulator performance was assessed by metrics measured on the simulator. Dry lab performance was blindly video-evaluated by expert review using the six-metric GEARS tool. Participants completed a post-study questionnaire (to assess face and content validity). A Wilcoxon non-parametric test was used to compare performance between the groups (construct validity), and Spearman's correlation coefficient was used to compare dry lab with simulation performance (concurrent validity).
Results: The mean number of cases experienced was 0 for novices and 200 (range 30–2000) for experts. Expert surgeons found the exercises both 'realistic' (median [range] score 8 [4–10] out of 10) and 'very useful' for training residents (9 [5–10] out of 10). Overall, experts completed all tasks more efficiently (P < 0.001) and effectively (GEARS total score, P < 0.001) than novices. In addition, experts outperformed novices on each individual GEARS metric (P < 0.001). Finally, comparing dry lab with simulator performance, there was a moderate overall correlation (r = 0.54, P < 0.001), and most exercises correlated moderately to strongly (P < 0.001).
Conclusions: The present study results support the face, content, construct and concurrent validity of dry lab exercises corresponding to VR tasks. Until now, dry lab assessment has been limited to basic metrics (i.e. time to completion and error avoidance). For the first time, we have shown that it is feasible to apply a global assessment tool to dry lab robotic training.
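The two statistical analyses named in the abstract can be sketched in a few lines. This is a minimal illustration only: the scores below are synthetic, not the study's data, and the group sizes and values are invented for demonstration. A Wilcoxon rank-sum test compares the two independent groups (construct validity), and Spearman's coefficient measures the correlation between paired dry lab and simulator scores (concurrent validity).

```python
# Hypothetical sketch of the abstract's analyses on synthetic data.
# All scores below are invented for illustration; they are not study data.
from scipy.stats import ranksums, spearmanr

# Synthetic GEARS total scores (six metrics, so totals range roughly 6-30)
novice_scores = [12, 14, 11, 15, 13, 12, 16, 14]
expert_scores = [26, 28, 27, 25, 29, 26, 27, 28]

# Construct validity: Wilcoxon rank-sum test between independent groups
stat, p_construct = ranksums(novice_scores, expert_scores)
print(f"rank-sum statistic = {stat:.2f}, P = {p_construct:.4g}")

# Concurrent validity: Spearman correlation between paired dry lab
# and simulator scores (synthetic paired values per participant)
dry_lab   = [12, 14, 11, 15, 26, 28, 27, 25]
simulator = [40, 45, 38, 50, 82, 88, 85, 80]
rho, p_corr = spearmanr(dry_lab, simulator)
print(f"Spearman r = {rho:.2f}, P = {p_corr:.4g}")
```

Note that for two independent groups, as here, the rank-sum (Mann-Whitney-type) form of the Wilcoxon test is the appropriate one; the signed-rank form would instead apply to paired measurements.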