Authors: KEVIN THOMPSON, PAT LANGLEY
DOI: 10.1016/B978-1-4832-0773-5.50011-0
Keywords: Representation language, Concept learning, Matching (graph theory), Mathematics, Artificial intelligence, Simple language, Probabilistic logic, Natural (music), Structure (mathematical logic), Theoretical computer science
Abstract: Publisher Summary. This chapter describes a system that learns concepts in structured domains. Most recent work on unsupervised concept learning has been limited to unstructured domains, in which instances are described by fixed sets of attribute-value pairs. Many domains can be described in this simple language. Frequently, however, instances have some natural structure: objects have components and relations among those components. In such cases, an attribute-value language is inadequate. The chapter presents Labyrinth, an implemented system that induces concepts from structured objects. Labyrinth can be viewed as an approach to incremental concept formation, whose goal is to find concepts that allow useful predictions from partial information. Labyrinth makes effective generalizations using a more powerful representation. It carries out incremental, probabilistic concept formation and uses the resulting concepts to predict missing attribute values, components, and relations. It also decomposes objects into components to constrain matching.
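The abstract contrasts flat attribute-value descriptions with structured objects made of components and relations among them. The sketch below shows, in Python, one plausible way to encode that distinction; the class names, field names, and the arch example are illustrative assumptions for exposition, not the representation used in the Labyrinth system itself.

```python
# Hedged sketch: two instance languages, as contrasted in the abstract.
# All names here are illustrative assumptions, not Labyrinth's data structures.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

# Unstructured instance: a fixed set of attribute-value pairs.
FlatInstance = Dict[str, str]

@dataclass
class Component:
    """A named part of a structured object, itself described by attribute-value pairs."""
    name: str
    attributes: Dict[str, str] = field(default_factory=dict)

@dataclass
class StructuredInstance:
    """A structured object: components plus relations among those components."""
    components: List[Component] = field(default_factory=list)
    # Each relation is (relation_name, component_name_1, component_name_2).
    relations: List[Tuple[str, str, str]] = field(default_factory=list)

# A flat description suffices for some domains...
mushroom: FlatInstance = {"color": "red", "size": "small", "texture": "smooth"}

# ...but an object such as an arch (hypothetical example) has natural structure.
arch = StructuredInstance(
    components=[
        Component("block-1", {"shape": "brick", "orientation": "vertical"}),
        Component("block-2", {"shape": "brick", "orientation": "vertical"}),
        Component("top", {"shape": "wedge", "orientation": "horizontal"}),
    ],
    relations=[
        ("supports", "block-1", "top"),
        ("supports", "block-2", "top"),
        ("left-of", "block-1", "block-2"),
    ],
)

if __name__ == "__main__":
    print(mushroom)
    print(arch)
```

Decomposing such an object into its components, as the abstract notes, lets a learner match and classify parts individually rather than searching over whole-object correspondences, which is what constrains the matching problem.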