Authors: Jingwen Xiang, Zoie Zhao, Mackie Zhou, Megan McKenzie, Alexis Kilayko
DOI:
Keywords:
Abstract: Research in deep learning has recently produced natural language models capable of generating output that, at a glance, closely resembles text written by intelligent humans. On deeper examination, however, the texts produced by deep learning-based large language models (LLMs) reveal the difficulty these models have in maintaining logical coherence. One application area of particular interest for LLMs is fictional narrative generation, in which the model generates a story in response to a prompt indicating the start of a story or the desired style of writing. In this paper we present an initial study of a method for combining an LLM with a symbolic system to perform story generation. The method generates stories by repeatedly prompting a state-of-the-art LLM, interleaving its output with that of a classic story generation system in an attempt to control and shape what the LLM produces. We present a number of stories generated with a prototype interleaving system, and discuss the qualities of the stories and the challenges for future development of the method.