Authors: Adam Porter, Lawrence Votta
Keywords: Graduate students, Loss rate, Credibility, Software engineering, Computer science, Software requirements, Population, Statistical hypothesis testing, Fault detection rate, Checklist
Abstract: Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults; these methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses a different, systematic technique to search for a specific class of faults, will have a significantly higher success rate. In previous work we evaluated this hypothesis using 48 graduate students in computer science as subjects. We have now replicated the experiment with 18 professional developers from Lucent Technologies. Our goals were to (1) extend the external credibility of our results by studying professional developers, and (2) compare the performance of the professionals with that of the students to better understand how generalizable the results of the less expensive student experiments were. For each inspection we performed four measurements: (1) individual fault detection rate, (2) team fault detection rate, (3) percentage of faults first identified at the collection meeting (meeting gain rate), and (4) percentage of faults first identified by an individual but never reported at the collection meeting (meeting loss rate). For both experimental populations the Scenario method had a higher fault detection rate than either of the other methods, while Checklist reviewers were no more effective than Ad Hoc reviewers. Collection meetings produced no net improvement in the fault detection rate, because meeting gains were offset by meeting losses. Finally, although the values of the measures differed between the two populations, the outcomes of almost all statistical tests were identical. This suggests that the student subjects provided an adequate model of the professional population, and that the much greater expense of conducting studies with professional subjects may not always be required.
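The abstract defines four per-inspection measures. The sketch below is an illustrative assumption about how such rates could be computed from simple set-based fault records; the function name, data layout, and toy data are hypothetical and not taken from the paper.

```python
def inspection_measures(total_faults, individual_faults, meeting_faults):
    """Compute the four inspection rates described in the abstract.

    total_faults      -- set of all known faults in the SRS
    individual_faults -- dict: reviewer -> set of faults found before the meeting
    meeting_faults    -- set of faults reported at the collection meeting
    """
    n = len(total_faults)
    found_individually = set().union(*individual_faults.values())

    # (1) individual fault detection rate, averaged over reviewers
    individual_rate = sum(len(f) / n for f in individual_faults.values()) / len(individual_faults)

    # (2) team fault detection rate: faults the team reports after the meeting
    team_rate = len(meeting_faults) / n

    # (3) meeting gain rate: faults first identified at the collection meeting
    gain_rate = len(meeting_faults - found_individually) / n

    # (4) meeting loss rate: faults found by some individual but never reported at the meeting
    loss_rate = len(found_individually - meeting_faults) / n

    return individual_rate, team_rate, gain_rate, loss_rate


# Toy example: F3 is "lost" at the meeting, F4 is a meeting "gain".
faults = {"F1", "F2", "F3", "F4", "F5"}
by_reviewer = {"r1": {"F1", "F2"}, "r2": {"F2", "F3"}}
reported = {"F1", "F2", "F4"}
print(inspection_measures(faults, by_reviewer, reported))
```

With this toy data the gain and loss rates are equal (0.2 each), mirroring the abstract's finding that meeting gains can be offset by meeting losses.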