Authors: Lingwei Zhu, Zheng Chen, Matthew Kyle Schlegel, Martha White
DOI:
Keywords:
Abstract: Many policy optimization approaches in reinforcement learning incorporate a Kullback-Leibler (KL) divergence to the previous policy, to prevent the policy from changing too quickly. This idea was initially proposed in a seminal paper on Conservative Policy Iteration, with approximations given by algorithms like TRPO and Munchausen Value Iteration (MVI). We continue this line of work by investigating a generalized KL divergence, called the Tsallis KL divergence. Tsallis KL, defined by the q-logarithm, is a strict generalization: q = 1 corresponds to the standard KL divergence, while q > 1 provides a range of new options. We characterize the types of policies learned under the Tsallis KL, and motivate when q > 1 could be beneficial. To obtain a practical algorithm that incorporates Tsallis KL regularization, we extend MVI, which is one of the simplest approaches to incorporate KL regularization. We show that this generalized MVI(q) obtains significant improvements over the standard MVI (q = 1) across 35 Atari games.
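To make the q-logarithm and Tsallis KL concrete, here is a minimal numerical sketch assuming one common convention, ln_q(x) = (x^(q-1) - 1)/(q - 1); the paper's exact convention and argument ordering may differ. The function names (log_q, tsallis_kl) and the toy policies are illustrative only, not from the paper.

```python
import numpy as np

def log_q(x, q):
    """q-logarithm under one common Tsallis convention (an assumption,
    not necessarily the paper's exact form):
        ln_q(x) = (x^(q-1) - 1) / (q - 1),  with ln_q -> ln as q -> 1."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (q - 1.0) - 1.0) / (q - 1.0)

def tsallis_kl(p, m, q):
    """Tsallis KL between discrete distributions p and m, written here as
    E_p[ln_q(p / m)]; recovers the standard KL divergence as q -> 1."""
    p = np.asarray(p, dtype=float)
    m = np.asarray(m, dtype=float)
    return float(np.sum(p * log_q(p / m, q)))

# Two toy "policies" over three actions (hypothetical values).
pi = np.array([0.7, 0.2, 0.1])
mu = np.array([0.4, 0.4, 0.2])

# As q -> 1, the Tsallis KL values approach the standard KL divergence.
for q in [2.0, 1.5, 1.1, 1.01, 1.0]:
    print(f"q = {q:<5} Tsallis KL = {tsallis_kl(pi, mu, q):.4f}")
print("standard KL =", round(float(np.sum(pi * np.log(pi / mu))), 4))
```

Running this shows the q = 1 case matching the standard KL value exactly, while q > 1 weights large policy ratios more heavily, which is the kind of behavioral difference the abstract refers to when it says q > 1 "provides a range of new options."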