Author: Arnaud Berny
Keywords:
Abstract: The aim of this paper is to extend selection learning, initially designed for the optimization of real functions over fixed-length binary strings, toward strings over an arbitrary finite alphabet. We derive learning algorithms from clear principles. First, we look for product probability measures on d-ary strings or, equivalently, random variables whose components are statistically independent. Second, these distributions are evaluated relative to the expectation of the fitness function. More precisely, we consider the logarithm of the expectation and introduce proportional and Boltzmann selections. Third, we define two kinds of gradient systems to maximize the expectation. The first one drives unbounded parameters, whereas the second one directly drives probabilities, a la PBIL. We also define composite selection, that is, systems which take into account positively as well as negatively selected strings. We propose stochastic approximations of the gradient systems and, finally, apply the three resulting algorithms to the test functions OneMax and BigJump, and draw some conclusions on their relative strengths and weaknesses.
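To make the setting concrete, the following is a minimal sketch of a PBIL-style loop on the OneMax test function mentioned in the abstract, restricted to the binary case (d = 2). It maintains a product probability measure (one independent Bernoulli parameter per position) and shifts it toward the best sampled string, i.e. positive selection only; the function name, parameters, and learning rate are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def onemax(s):
    """OneMax fitness: number of ones in the string."""
    return sum(s)

def pbil_onemax(n=20, pop_size=50, lr=0.1, generations=100, seed=0):
    """Sketch of a PBIL-style loop: keep a product distribution over
    binary strings and move its marginals toward the best sample.
    All parameter values here are illustrative, not from the paper."""
    rng = random.Random(seed)
    p = [0.5] * n  # one independent Bernoulli parameter per position
    best = None
    for _ in range(generations):
        # sample a population from the current product measure
        pop = [[1 if rng.random() < pi else 0 for pi in p]
               for _ in range(pop_size)]
        winner = max(pop, key=onemax)
        if best is None or onemax(winner) > onemax(best):
            best = winner
        # positive selection: pull each marginal toward the winner
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, winner)]
    return best

print(onemax(pbil_onemax()))
```

A composite-selection variant, as described in the abstract, would additionally push the marginals away from the worst sampled string; the d-ary extension would replace each Bernoulli parameter with a probability vector over the alphabet.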