Authors: Tim Miller, Ana Marasović, Alon Jacovi, Yoav Goldberg
DOI:
Keywords:
Abstract: Trust is a central component of the interaction between people and AI, in that 'incorrect' levels of trust may cause misuse, abuse or disuse of the technology. But what, precisely, is the nature of trust in AI? What are the prerequisites and goals of the cognitive mechanism of trust, and how can we promote them, or assess whether they are being satisfied in a given interaction? This work aims to answer these questions. We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people). This model rests on two key properties: the vulnerability of the user and the ability to anticipate the impact of the AI model's decisions. We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold, and a formalization of 'trustworthiness' (which detaches from the notion of trustworthiness in sociology), and with it concepts of 'warranted' and 'unwarranted' trust. We then present the possible causes of warranted trust as intrinsic reasoning and extrinsic behavior, and discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted. Finally, we elucidate the connection between XAI and trust using our formalization.