Authors: Kai-Wei Chang, Vivek Srikumar, Dan Roth
DOI: 10.1007/978-3-642-40991-2_26
Keywords:
Abstract: Many problems in natural language processing and computer vision can be framed as structured prediction problems. Structural support vector machines (SVM) is a popular approach for training structured predictors, where learning is framed as an optimization problem. Most structural SVM solvers alternate between a model update phase and an inference phase (which predicts structures for all training examples). As structures become more complex, inference becomes a bottleneck and thus slows down learning considerably. In this paper, we propose a new learning algorithm for structural SVMs called DEMIDCD that extends the dual coordinate descent approach by decoupling the model update and inference phases into different threads. We take advantage of multicore hardware to parallelize learning with minimal synchronization between the model update and inference phases. We prove that our algorithm not only converges but also fully utilizes all available processors to speed up learning, and we validate it on two real-world NLP problems: part-of-speech (POS) tagging and relation extraction. In both cases, we show that our algorithm achieves competitive performance. For example, it achieves a relative duality gap of 1% on the POS tagging problem in 192 seconds using 16 threads, while a standard multi-threaded implementation with the same number of threads requires more than 600 seconds to reach a solution of the same quality.
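To make the decoupling idea concrete, below is a minimal sketch (not the authors' implementation) of the thread structure the abstract describes: several inference threads keep refreshing a per-example cache of loss-augmented predictions, while a separate learning thread applies coordinate-descent-style dual updates against that cache, sharing the model vector with minimal locking. All names and the toy candidate-list "structures" (inference_worker, learning_worker, FEATURE_DIM, C, the simplified per-example dual variable, etc.) are illustrative assumptions, and Python threads are used only to show the structure, since the GIL would prevent real multicore speedups here.

import threading

import numpy as np

# Toy problem setup; every size and name below is an illustrative assumption.
RNG = np.random.default_rng(0)
FEATURE_DIM = 8
NUM_EXAMPLES = 200
NUM_CANDIDATES = 6     # candidate "structures" per example (row 0 plays the gold structure)
C = 1.0                # SVM regularization parameter

examples = [RNG.normal(size=(NUM_CANDIDATES, FEATURE_DIM)) for _ in range(NUM_EXAMPLES)]

w = np.zeros(FEATURE_DIM)        # shared model vector
alpha = np.zeros(NUM_EXAMPLES)   # one dual variable per example (simplified bookkeeping)
cache = [None] * NUM_EXAMPLES    # latest structure found by the inference threads
lock = threading.Lock()          # the minimal synchronization point
stop = threading.Event()


def inference_worker(indices):
    """Inference threads: keep refreshing the cached loss-augmented prediction per example."""
    while not stop.is_set():
        for i in indices:
            with lock:
                w_snap = w.copy()
            loss = np.ones(NUM_CANDIDATES)   # Hamming-style loss stand-in: 1 for non-gold rows
            loss[0] = 0.0
            best = int(np.argmax(examples[i] @ w_snap + loss))
            with lock:
                cache[i] = best


def learning_worker(steps=20000):
    """Learning thread: coordinate-descent-style dual updates against the cached structures."""
    for _ in range(steps):
        i = int(RNG.integers(NUM_EXAMPLES))
        with lock:
            y_hat = cache[i]
            if y_hat is None or y_hat == 0:
                continue                                  # no violating structure cached yet
            dphi = examples[i][0] - examples[i][y_hat]    # gold features minus predicted features
            denom = dphi @ dphi
            if denom == 0.0:
                continue
            # Closed-form step on alpha[i], clipped so it stays in [0, C].
            delta = np.clip((1.0 - w @ dphi) / denom, -alpha[i], C - alpha[i])
            alpha[i] += delta
            w[:] += delta * dphi                          # in-place so all threads see the update
    stop.set()


if __name__ == "__main__":
    chunks = np.array_split(np.arange(NUM_EXAMPLES), 4)   # 4 inference threads (assumed)
    threads = [threading.Thread(target=inference_worker, args=(c,)) for c in chunks]
    for t in threads:
        t.start()
    learning_worker()
    for t in threads:
        t.join()
    print("||w|| after training:", np.linalg.norm(w))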