Authors: Daniel Lowd, Pedro Domingos
DOI: 10.1007/978-3-540-74976-9_21
Keywords:
Abstract: Markov logic networks (MLNs) combine Markov networks and first-order logic, and are a powerful and increasingly popular representation for statistical relational learning. The state-of-the-art method for discriminative learning of MLN weights is the voted perceptron algorithm, which is essentially gradient descent with an MPE approximation to the expected sufficient statistics (true clause counts). Unfortunately, these counts can vary widely between clauses, causing the learning problem to be highly ill-conditioned and making gradient descent very slow. In this paper, we explore several alternatives, from per-weight learning rates to second-order methods. In particular, we focus on two approaches that avoid computing the partition function: diagonal Newton and scaled conjugate gradient. In experiments on standard SRL datasets, we obtain order-of-magnitude speedups, or more accurate models in comparable learning times.
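To illustrate the ill-conditioning issue the abstract describes, the sketch below shows a generic diagonal-Newton-style update: each gradient component is rescaled by an estimate of the corresponding diagonal Hessian entry, so weights whose clause counts differ by orders of magnitude still take comparably sized steps. This is a minimal illustrative sketch under assumed names (`diagonal_newton_step`, the `step_limit` heuristic), not the paper's actual implementation.

```python
import numpy as np

def diagonal_newton_step(weights, gradient, hessian_diag, step_limit=1.0):
    """One diagonal-Newton update. Scaling by the inverse diagonal
    curvature compensates for clause counts that vary widely in
    magnitude, which is what makes plain gradient descent slow.
    Illustrative sketch only; not the paper's implementation."""
    # Guard against tiny or non-positive curvature estimates.
    safe_diag = np.maximum(hessian_diag, 1e-8)
    step = gradient / safe_diag
    # Cap the overall step size to keep the update stable
    # (an assumed trust-region-like heuristic).
    norm = np.linalg.norm(step)
    if norm > step_limit:
        step *= step_limit / norm
    return weights - step
```

With a plain gradient step, a component whose count is 100x larger would move 100x farther; here both move by roughly the same amount, which is the conditioning benefit the abstract points to.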