Author: Carina Andersson
DOI: 10.1007/S10664-006-9018-0
Keywords: Context (language use), Machine learning, Stability (learning theory), System testing, Software reliability testing, Replication (statistics), Functional testing, Computer science, Reliability engineering, Software quality, Artificial intelligence, Empirical research
Abstract: Replications are commonly considered important contributions that investigate the generality of empirical studies. By replicating an original study, it may be shown that its results are either valid or invalid in another context, outside the specific environment in which the original study was launched. The results of the replicated study show how much confidence we can have in the original study. We present a replication of a method for selecting software reliability growth models (SRGMs) to decide whether to stop testing and release the software. The selection method, applied in a study conducted in a different development environment than the original one and with changed values for the stability of the curve fit, works well on the system test data available, i.e., it is applicable in more environments than the original one. The application of SRGMs to failures observed during functional testing resulted in predictions with low relative error, thus providing a useful approach to obtaining good estimates of the total number of failures to expect during testing.
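To make the idea of SRGM-based prediction concrete, the following is a minimal sketch of fitting one common growth model, the Goel-Okumoto model m(t) = a(1 − e^(−bt)), to cumulative failure counts; the parameter a then estimates the total number of failures to expect. This is an illustrative assumption, not the specific selection method or model set used in the paper, and the failure data below are synthetic.

```python
import math

def fit_goel_okumoto(times, cum_failures):
    """Least-squares fit of m(t) = a * (1 - exp(-b*t)).

    For each candidate rate b, the optimal scale a has a closed
    form, so we grid-search b and keep the best (a, b) pair.
    """
    best = None  # (sse, a, b)
    for k in range(1, 1001):
        b = k / 1000.0
        f = [1.0 - math.exp(-b * t) for t in times]
        denom = sum(v * v for v in f)
        if denom == 0.0:
            continue
        a = sum(y * v for y, v in zip(cum_failures, f)) / denom
        sse = sum((a * v - y) ** 2 for v, y in zip(f, cum_failures))
        if best is None or sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]

# Synthetic cumulative failure counts generated from a=100, b=0.3
# (purely illustrative; not data from the study).
times = list(range(1, 11))
observed = [100.0 * (1.0 - math.exp(-0.3 * t)) for t in times]
a_hat, b_hat = fit_goel_okumoto(times, observed)
print(round(a_hat), round(b_hat, 2))  # a_hat ~ predicted total failures
```

In practice, a selection method such as the one replicated in the paper would compare several candidate SRGMs on goodness of fit and the stability of the fitted curve before trusting any one model's estimate of the remaining failures.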