Modelling Strategic Conversation: model, annotation design and corpus

Authors: Laure Vieu, Anaïs Cadilhac, Stergos Afantenos, Oliver Lemon, Alex Lascarides

DOI:

Keywords:

Abstract: A Gricean view of cognitive agents holds that they are fully rational and adhere to the maxims of conversation, which entails that speakers adopt shared intentions and aligned preferences — e.g. (Allen & Litman, 1987; Lochbaum, 1998). These assumptions are unwarranted in many conversational settings. In this paper we propose a different model and an annotation scheme for it. We take a game-theoretic approach to conversation. While we assume, like Grice, that agents are rational, they talk to maximize their expected utility (a measure that combines belief and preference). Preferences together with beliefs guide linguistic actions as much as they guide non-linguistic actions. Conversations are dynamic extensive games, which have in principle an unbounded number of possible moves and no mandatory stopping points — you can, in some sense, always say anything, and the conversation can always continue. The moves of each player consist in making a discourse contribution, which we finitely characterize using discourse structure in the sense of (Asher & Lascarides, 2003). Such structures consist of discourse units linked via relations such as Elaboration, Question-Answer-Pair (QAP) and Explanation. In addition, these relations serve to link one participant's contribution to another; for instance, if one agent asks a question, another may respond with an answer, and the two contributions are then linked by the relation QAP. Conversational participants are alternatively senders (S) or receivers (R) of messages. S sends a signal s bearing in mind that the receiver R has to figure out: (a) what is the message m(s)? What is S publicly committed to? (b) Is m(s) credible or not? (c) Given the status of m(s), what signal s′ should R send in return? R now becomes the sender, S the receiver, and goes through the same calculation steps (a)-(c), in which at least part of the conventional meaning is determined by prior play. For (a), R must calculate, using a generalized form of signaling games, the public commitments made — these include not only the fixed semantics of the signal but also the implicatures that relations introduce between contributions. Sometimes this will involve strategic considerations: is the speaker actually replying to the question asked in the previous turn, or is she engaged in some other move? If she is answering, is it with something she cannot plausibly later deny? Asher and Quinley (2011) argue that games of trust provide the right format for computing the optimal move, task (c).
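To make the abstract's two central notions concrete, here is a minimal, purely illustrative sketch: a discourse structure as units linked by relations such as QAP, plus expected utility computed as a belief-weighted sum of preferences. All class names, the example dialogue, and the probability/preference numbers are assumptions for illustration, not the authors' annotation scheme or corpus.

```python
from dataclasses import dataclass, field

@dataclass
class DiscourseUnit:
    speaker: str
    text: str

@dataclass
class DiscourseStructure:
    """Units linked by rhetorical relations (Elaboration, QAP, Explanation, ...)."""
    units: list = field(default_factory=list)
    relations: list = field(default_factory=list)  # (relation, source_idx, target_idx)

    def add_unit(self, unit):
        self.units.append(unit)
        return len(self.units) - 1

    def link(self, relation, src, tgt):
        self.relations.append((relation, src, tgt))

def expected_utility(outcomes):
    """Expected utility of a move: a belief-weighted sum of preferences.
    `outcomes` is a list of (probability, preference_value) pairs."""
    return sum(p * v for p, v in outcomes)

# Example: a question by A answered by B, with the two contributions
# linked by the QAP relation.
d = DiscourseStructure()
q = d.add_unit(DiscourseUnit("A", "Will you trade wheat for sheep?"))
a = d.add_unit(DiscourseUnit("B", "Yes, two wheat for one sheep."))
d.link("QAP", q, a)

# B's expected utility of answering truthfully, under made-up beliefs:
eu = expected_utility([(0.7, 1.0), (0.3, -0.5)])  # ≈ 0.55
```

A full model would of course attach the utility calculation to the choice among alternative moves (answer, evade, deny); the point here is only how beliefs and preferences combine into a single measure.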
(Traum & Allen, 1994) advocate a related view based on cooperativity: social conventions guide conversation by imposing obligations that do not presuppose the adoption of the other's goals (Traum et al., 2008). For us, however, the foundational notions of Traum's account are themselves based on utility. Utility is also the basis for training agents to behave in a certain way via reinforcement learning (Frampton & Lemon, 2009).
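The utility-based training mentioned above can be sketched, under assumed details, as tabular Q-learning over dialogue moves: the state names, move set, reward signal, and learning parameters below are hypothetical and not taken from (Frampton & Lemon, 2009).

```python
from collections import defaultdict

def q_update(Q, state, move, reward, next_state, moves, alpha=0.5, gamma=0.9):
    """One temporal-difference update of the utility estimate Q[state][move]."""
    best_next = max(Q[next_state][m] for m in moves) if moves else 0.0
    Q[state][move] += alpha * (reward + gamma * best_next - Q[state][move])
    return Q

Q = defaultdict(lambda: defaultdict(float))
moves = ["answer", "deny", "ask"]

# Hypothetical reward signal: the agent is rewarded for answering a
# pending question, so the estimated utility of that move increases.
q_update(Q, "question_pending", "answer", 1.0, "resolved", moves)
```

Repeated over many simulated dialogues, such updates make the learned Q-values play the role that expected utility plays in the game-theoretic model: they rank the available moves in each conversational state.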

References (8)
Nicholas Asher, Jason Quinley. Begging Questions, Their Answers and Basic Cooperativity. New Frontiers in Artificial Intelligence, pp. 3-12 (2012). doi:10.1007/978-3-642-32090-3_2
Karen E. Lochbaum. A collaborative planning model of intentional structure. Computational Linguistics, vol. 24, pp. 525-572 (1998). doi:10.5555/972764.972765
Alex Lascarides, Nicholas Asher. Logics of Conversation (2003).
Diane J. Litman, James F. Allen. A plan recognition model for subdialogues in conversations. Cognitive Science, vol. 11, pp. 163-200 (1987). doi:10.1016/S0364-0213(87)80005-8
Matthew Frampton, Oliver Lemon. Recent research advances in Reinforcement Learning in Spoken Dialogue Systems. Knowledge Engineering Review, vol. 24, pp. 375-408 (2009). doi:10.1017/S0269888909990166
David Traum, William Swartout, Jonathan Gratch, Stacy Marsella. A Virtual Human Dialogue Model for Non-Team Interaction. Springer, Dordrecht, pp. 45-67 (2008). doi:10.1007/978-1-4020-6821-8_3
David R. Traum, James F. Allen. Discourse obligations in dialogue processing. Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pp. 1-8 (1994). doi:10.3115/981732.981733
Candace L. Sidner. An artificial discourse language for collaborative negotiation. National Conference on Artificial Intelligence, pp. 814-819 (1994).