Authors: Michael Sampels, Mark Zlochin, Christian Blum
DOI:
Keywords:
Abstract: Giving positive feedback to good solutions is a common base technique in model-based search algorithms, such as Ant Colony Optimization, Estimation of Distribution Algorithms, and Neural Networks. In particular, reinforcement is a component of algorithms known to be successful in tackling hard combinatorial optimization problems. Using a simple algorithm for the node-weighted k-cardinality tree problem, we show that this strategy does not guarantee steadily increasing performance in general. Rather, for some "problem"-"probabilistic model" combinations it is possible that both the average system performance and the probability of sampling good solutions decrease over time. This result is proven analytically, and its consequences are studied in empirical case studies.