Authors: Sebastian Müller, Thomas Vogel, Lars Grunske
DOI: 10.1007/978-3-030-59762-7_15
Keywords:
Abstract: Anyone working in the technology sector is probably familiar with the question: "Have you tried turning it off and on again?", as this is usually the default question asked by tech support. Similarly, it is known in search-based testing that metaheuristics might get trapped in a plateau during the search. As a human, one can look at the gradient of the fitness curve and decide to restart the search, hopefully improving the results of the optimization in the next run. When trying to automate such a restart, it has to be programmatically decided whether the metaheuristic has encountered a plateau yet, which is an inherently difficult problem. To mitigate this problem in the context of theoretical problems, the Bet and Run strategy was developed, in which multiple algorithm instances are started concurrently, and after some time all but the single most promising instance in terms of fitness values are killed. In this paper, we adopt and evaluate the Bet and Run strategy for test case generation. Our work indicates that its use does not generally lead to gains in quality metrics when instantiated with the best parameters found in the literature.
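To make the strategy concrete, the following is a minimal sketch of the Bet and Run scheme described in the abstract: `k` search instances are run for a short "bet" phase, then only the fittest one receives the remaining budget. The names (`bet_and_run`, `new_instance`, `step`) and the toy (1+1) hill climber minimizing x² are illustrative assumptions, not the paper's implementation or parameters.

```python
import random

def bet_and_run(new_instance, step, fitness, k=4, bet_steps=50,
                total_steps=500, seed=0):
    """Bet phase: advance k independent instances a little.
    Run phase: keep only the most promising one (lowest fitness here)
    and spend the rest of the step budget on it."""
    rng = random.Random(seed)
    instances = [new_instance(rng) for _ in range(k)]
    for inst in instances:              # bet phase
        for _ in range(bet_steps):
            step(inst, rng)
    best = min(instances, key=fitness)  # kill all but the best
    for _ in range(total_steps - k * bet_steps):  # run phase
        step(best, rng)
    return best

# Toy usage: minimize f(x) = x^2 with a simple (1+1) hill climber.
def new_instance(rng):
    return {"x": rng.uniform(-10, 10)}

def step(inst, rng):
    candidate = inst["x"] + rng.gauss(0, 0.5)
    if candidate ** 2 < inst["x"] ** 2:  # accept only improvements
        inst["x"] = candidate

result = bet_and_run(new_instance, step, fitness=lambda i: i["x"] ** 2)
```

The design point is that the bet phase trades a fraction of the total budget for a better starting instance, avoiding the need to detect a plateau programmatically.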