R-IAC: Robust Intrinsically Motivated Active Learning

Authors: Adrien Baranes, Pierre-Yves Oudeyer

DOI:

Keywords:

Abstract: IAC was initially introduced as a developmental mechanism allowing a robot to self-organize trajectories of increasing complexity without pre-programming the particular developmental stages. In this paper, we argue that IAC and other intrinsically motivated learning heuristics can be viewed as active learning algorithms that are particularly suited for learning forward models in unprepared sensorimotor spaces with large unlearnable subspaces. We then introduce a novel formulation of IAC, called R-IAC, and show that its performance as an intrinsically motivated active learning algorithm is far superior to IAC in a complex sensorimotor space where only a small subspace is interesting, i.e., neither unlearnable nor trivial. We also present results in which the learnt forward model is reused in a control scheme.

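The abstract's core heuristic, directing exploration toward regions where a forward model is making learning progress while steering away from unlearnable subspaces, can be illustrated with a minimal sketch. The code below is not the paper's R-IAC algorithm (which additionally splits regions recursively and monitors progress at multiple scales); the Region class, the window size, and the epsilon parameter are illustrative assumptions.

```python
import random


class Region:
    """A region of a 1-D sensorimotor space with a prediction-error history."""

    def __init__(self, low, high, window=20):
        self.low, self.high = low, high   # region bounds (illustrative)
        self.window = window
        self.errors = []                  # recent forward-model prediction errors

    def add_error(self, error):
        """Record the prediction error of an experiment performed in this region."""
        self.errors.append(error)
        self.errors = self.errors[-2 * self.window:]

    def learning_progress(self):
        """Decrease of mean error between the older and newer half of the window."""
        if len(self.errors) < 2 * self.window:
            return 0.0
        older = sum(self.errors[:self.window]) / self.window
        newer = sum(self.errors[-self.window:]) / self.window
        return max(older - newer, 0.0)


def choose_region(regions, epsilon=0.2):
    """Mostly exploit the highest-progress region; sometimes explore at random."""
    if random.random() < epsilon:
        return random.choice(regions)
    return max(regions, key=lambda r: r.learning_progress())


# Toy run: one region where the model improves, one that stays unlearnable.
learnable = Region(0.0, 0.5)
unlearnable = Region(0.5, 1.0)
for t in range(200):
    learnable.add_error(1.0 - 0.004 * t)                 # error shrinks -> progress
    unlearnable.add_error(0.7 + 0.1 * random.random())   # error stays high -> ~no progress
print("progress (learnable)  :", learnable.learning_progress())
print("progress (unlearnable):", unlearnable.learning_progress())
print("greedy choice, bounds :", choose_region([learnable, unlearnable], epsilon=0.0).low)
```

The fixed two-region setup only shows why an unlearnable, noise-dominated subspace yields near-zero measured progress and is therefore avoided; in the paper's formulation, regions are instead created and refined adaptively as data comes in.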