Comparing Detection Methods For Software Requirements Inspections: A Replication Using Professional Subjects

Authors: Adam Porter, Lawrence Votta

DOI: 10.1023/A:1009776104355

Keywords: Graduate students; Loss rate; Credibility; Software engineering; Computer science; Software requirements; Population; Statistical hypothesis testing; Fault detection rate; Checklist

Abstract: Software requirements specifications (SRS) are often validated manually. One such process is inspection, in which several reviewers independently analyze all or part of the specification and search for faults. These faults are then collected at a meeting of the reviewers and author(s). Usually, reviewers use Ad Hoc or Checklist methods to uncover faults. These methods force all reviewers to rely on nonsystematic techniques to search for a wide variety of faults. We hypothesize that a Scenario-based method, in which each reviewer uses different, systematic techniques to search for specific classes of faults, will have a significantly higher success rate. In previous work we evaluated this hypothesis using 48 graduate students in computer science as subjects. We have now replicated the experiment with 18 professional developers from Lucent Technologies. Our goals were to (1) extend the external credibility of our results by studying professional developers, and (2) compare the performance of professionals with that of the students to better understand how generalizable the less expensive student experiments were. For each inspection we performed four measurements: (1) individual fault detection rate, (2) team fault detection rate, (3) percentage of faults first identified at the collection meeting (meeting gain rate), and (4) percentage of faults first identified by an individual but never reported at the collection meeting (meeting loss rate). In both experiments, the Scenario method had a higher fault detection rate than either the Ad Hoc or the Checklist method, Checklist reviewers were no more effective than Ad Hoc reviewers, and collection meetings produced no net improvement in the fault detection rate: meeting gains were offset by meeting losses. Finally, although the performance measures differed between the two populations, the outcomes of almost all statistical tests were identical. This suggests that the students provided an adequate model of the professional population and that the much greater expense of conducting studies with professional subjects may not always be required.
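
To make the four measurements concrete, here is a minimal Python sketch. It is not the authors' instrumentation: the function name, the data structures, and the choice of total known faults as the denominator for every rate are assumptions for illustration.

    # Hypothetical sketch of the four inspection measurements.
    # known_faults: all known faults in the SRS;
    # individual_finds: reviewer -> faults found during preparation;
    # meeting_report: faults reported at the collection meeting.
    def inspection_measurements(known_faults, individual_finds, meeting_report):
        total = len(known_faults)
        # Faults any reviewer found alone, before the collection meeting.
        pre_meeting = set().union(*individual_finds.values())
        # (1) Average individual fault detection rate across reviewers.
        individual_rate = (sum(len(found) for found in individual_finds.values())
                           / (len(individual_finds) * total))
        # (2) Team fault detection rate: faults the team reported after the meeting.
        team_rate = len(meeting_report) / total
        # (3) Meeting gain rate: faults first identified at the collection meeting.
        gain_rate = len(meeting_report - pre_meeting) / total
        # (4) Meeting loss rate: faults an individual found but the team never reported.
        loss_rate = len(pre_meeting - meeting_report) / total
        return individual_rate, team_rate, gain_rate, loss_rate

    # Example: 10 known faults, two reviewers; fault 5 is gained at the
    # meeting and fault 3 is lost (found by r1 but never reported).
    print(inspection_measurements(
        known_faults=set(range(1, 11)),
        individual_finds={"r1": {1, 2, 3}, "r2": {2, 4}},
        meeting_report={1, 2, 4, 5},
    ))  # -> (0.25, 0.4, 0.1, 0.1)

Under these definitions, a collection meeting improves the team rate only when the gain rate exceeds the loss rate, which is the net effect the paper reports as roughly zero.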

References (14)
William G. Wood, "Temporal Logic Case Study," Computer Aided Verification, vol. 407, pp. 257-263, 1990. DOI: 10.1007/3-540-52148-8_21
Sidney Addelman, "Statistics for Experimenters," 1978.
Victor R. Basili, David M. Weiss, "Evaluation of a Software Requirements Document by Analysis of Change Data," International Conference on Software Engineering, pp. 314-323, 1981. DOI: 10.5555/800078.802544
Mark A. Ardis, "Lessons from Using Basic LOTOS," International Conference on Software Engineering, pp. 5-14, 1994. DOI: 10.5555/257734.257736
G. Michael Schneider, Johnny Martin, W. T. Tsai, "An Experimental Study of Fault Detection in User Requirements Documents," ACM Transactions on Software Engineering and Methodology, vol. 1, pp. 188-204, 1992. DOI: 10.1145/128894.128897
S. Gerhart, D. Craigen, T. Ralston, "Experience with Formal Methods in Critical Systems," IEEE Software, vol. 11, pp. 21-28, 1994. DOI: 10.1109/52.251198
Del Scott, "Computation for the Analysis of Designed Experiments," Technometrics, vol. 33, pp. 471-472, 1991. DOI: 10.1080/00401706.1991.10484875
Lawrence G. Votta, "Does Every Inspection Need a Meeting?" Foundations of Software Engineering, vol. 18, pp. 107-114, 1993. DOI: 10.1145/167049.167070
A. A. Porter, L. G. Votta, V. R. Basili, "Comparing Detection Methods for Software Requirements Inspections: A Replicated Experiment," IEEE Transactions on Software Engineering, vol. 21, pp. 563-575, 1995. DOI: 10.1109/32.391380
Stephen G. Eick, Clive R. Loader, M. David Long, Lawrence G. Votta, Scott Vander Wiel, "Estimating Software Fault Content Before Coding," International Conference on Software Engineering, pp. 59-65, 1992. DOI: 10.1145/143062.143090