Authors: Emilia Barakova, Rene Ahn, Dieter Vanderelst
DOI:
Keywords: Mechanism (sociology), Individual learning, Trustworthiness, Computer science, Proactive learning, Artificial intelligence, Error-driven learning, Social learning, Machine learning, Risk analysis (engineering)
Abstract: Social learning is a potentially powerful mechanism to use in artificial multi-agent systems. However, findings about how animals learn socially show that it can also be detrimental. By learning socially, agents act on second-hand information that might not be trustworthy. This can lead to the spread of maladaptive behavior throughout populations. Animals employ a number of strategies to learn socially only when it is appropriate. This suggests that artificial agents could learn more successfully if they are able to strike an appropriate balance between social and individual learning. In this paper, we propose a simple mechanism that regulates the extent to which agents rely on social learning. Our agents vary the amount of trust they have in others. This trust is not determined by the performance of others but depends exclusively on the agents' own rating of observed demonstrations. The effectiveness of the mechanism is examined through a series of simulations. We first show that there are various circumstances under which learning in multi-agent systems is indeed seriously hampered by indiscriminate social learning. We then investigate how agents that incorporate the proposed mechanism fare under the same circumstances. The simulations indicate that the mechanism is quite effective in regulating social learning. It causes considerable improvements in learning rate and can, in some circumstances, even improve the eventual performance of the agents. Finally, possible extensions are discussed.
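The abstract describes trust that is updated only from an agent's own rating of observed demonstrations and that regulates how often the agent learns socially rather than individually. The following is a minimal sketch of that idea, not the paper's actual implementation: the `Agent` class, the `lr` update rate, the `rate` method, and the use of trust as a copy probability are all illustrative assumptions.

```python
import random


class Agent:
    """Hedged sketch of trust-regulated social learning.

    Each agent keeps a scalar trust value in [0, 1] that sets the
    probability of copying an observed demonstration instead of
    learning individually. Trust is updated only from the agent's
    OWN rating of demonstrations, never from others' performance,
    mirroring the mechanism described in the abstract.
    """

    def __init__(self, trust=0.5, lr=0.1):
        self.trust = trust  # current reliance on social learning
        self.lr = lr        # trust update rate (assumed parameter)

    def rate(self, demonstration):
        # Placeholder for the agent's own evaluation of a demonstration,
        # in [0, 1]. The paper's concrete rating rule is not given here.
        return demonstration.get("quality", 0.0)

    def update_trust(self, demonstration):
        # Move trust toward the agent's own rating of what it observed.
        rating = self.rate(demonstration)
        self.trust += self.lr * (rating - self.trust)

    def choose_learning_mode(self):
        # Higher trust -> more likely to learn socially.
        return "social" if random.random() < self.trust else "individual"
```

Under this sketch, a stream of demonstrations the agent itself rates poorly drives trust toward zero, so the agent falls back on individual learning, which is the balancing behavior the abstract attributes to the mechanism.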