Abstract

The task of natural language parsing can naturally be embedded in the maximum-margin framework for structured output prediction using an appropriate joint feature map and a suitable structured loss function. While there are efficient learning algorithms based on the cutting-plane method for optimizing the resulting quadratic objective with a potentially exponential number of linear constraints, their efficiency crucially depends on the inference algorithm used to find the most violated constraint in the current iteration. In this paper, we derive an extension of the well-known Cocke-Kasami-Younger (CKY) algorithm for parsing with probabilistic context-free grammars to the case of loss-augmented inference, enabling effective training in the cutting-plane approach. The resulting algorithm is guaranteed to find an optimal solution in polynomial time, exceeding the running time of the CKY algorithm only by a term that depends on the number of possible loss values. To demonstrate the feasibility of the presented algorithm, we report a set of experiments on parsing English sentences.
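The abstract does not spell out how the loss-augmented extension of CKY works. The Python sketch below illustrates one common way such an extension can be realized: the chart gets an extra index for the accumulated loss count, so that an arbitrary function of that count can be added at the root. The toy grammar, weights, sentence, gold spans, and span-counting loss are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of loss-augmented CKY: the chart carries an extra
# dimension for the accumulated loss count.  All data below are toy examples.
from collections import defaultdict

# Toy weighted grammar in Chomsky normal form: (parent, left, right) -> weight
binary_rules = {
    ("S", "NP", "VP"): 1.0,
    ("VP", "V", "NP"): 0.8,
    ("NP", "D", "N"): 0.6,
}
# Terminal rules: (tag, word) -> weight
unary_rules = {
    ("D", "the"): 0.2, ("N", "dog"): 0.3, ("N", "cat"): 0.3, ("V", "saw"): 0.4,
}

def loss_augmented_cky(words, gold_spans, loss_of_count):
    """Return max_y [score(y) + loss_of_count(#spans of y not in gold_spans)].

    chart[(i, j, A, l)] holds the best model score of a constituent A over
    words[i:j] whose derivation has accumulated loss count l.  Tracking l as
    an extra index is what keeps a non-decomposable loss tractable in CKY.
    """
    n = len(words)
    chart = defaultdict(lambda: float("-inf"))
    for i, w in enumerate(words):
        for (tag, word), wgt in unary_rules.items():
            if word == w:
                delta = 0 if (i, i + 1, tag) in gold_spans else 1
                chart[(i, i + 1, tag, delta)] = max(
                    chart[(i, i + 1, tag, delta)], wgt)
    max_loss = 2 * n  # a binary tree over n words has at most 2n - 1 spans
    for length in range(2, n + 1):
        for i in range(0, n - length + 1):
            j = i + length
            for k in range(i + 1, j):
                for (a, b, c), wgt in binary_rules.items():
                    for lb in range(max_loss):      # left accumulated loss
                        sb = chart[(i, k, b, lb)]
                        if sb == float("-inf"):
                            continue
                        for lc in range(max_loss):  # right accumulated loss
                            sc = chart[(k, j, c, lc)]
                            if sc == float("-inf"):
                                continue
                            delta = 0 if (i, j, a) in gold_spans else 1
                            l, s = lb + lc + delta, sb + sc + wgt
                            chart[(i, j, a, l)] = max(chart[(i, j, a, l)], s)
    # At the root, add the loss as an arbitrary function of the total count.
    return max(chart[(0, n, "S", l)] + loss_of_count(l) for l in range(max_loss))

words = ["the", "dog", "saw", "the", "cat"]
gold = {(0, 2, "NP"), (3, 5, "NP"), (2, 5, "VP"), (0, 5, "S"),
        (0, 1, "D"), (1, 2, "N"), (2, 3, "V"), (3, 4, "D"), (4, 5, "N")}
print(loss_augmented_cky(words, gold, loss_of_count=lambda l: float(l)))
```

The extra loops over the loss indices multiply the running time of plain CKY by a factor that depends only on the number of possible loss values, which is consistent with the complexity claim in the abstract; the exact bookkeeping in the paper may differ.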

  • Publication date: 2017-1