Abstract

Although the solution of the support vector machine (SVM) is relatively sparse, it makes unnecessarily liberal use of basis functions, since the number of support vectors required typically grows linearly with the size of the training set. In this paper, we present a simple post-processing method to sparsify the solution of support vector regression (SVR). The main idea is as follows: first, we train an SVR machine on the full training set; then another SVR machine is trained only on a subset of the full training set, with modified target values. This process is repeated iteratively several times. Experiments indicate that the proposed method greatly reduces the number of support vectors while maintaining the good generalization capability of SVR.
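To make the iterative scheme concrete, the sketch below (Python with scikit-learn, chosen only for illustration) assumes a specific instantiation of the two unspecified ingredients: the retraining subset is taken to be the previous model's support vectors, and the modified targets are that model's predictions on the subset. The paper's actual subset-selection and target-modification rules may differ, so this is a hypothetical sketch rather than the authors' method.

```python
import numpy as np
from sklearn.svm import SVR


def sparsify_svr(X, y, n_iter=3, **svr_params):
    """Iteratively retrain an SVR on a shrinking subset of the data.

    Illustrative assumptions: the subset is the current model's support
    vectors, and the modified target values are its own predictions there.
    """
    # Step 1: train an SVR machine on the full training set.
    model = SVR(**svr_params).fit(X, y)

    for _ in range(n_iter):
        # Step 2: keep only a subset of the training set
        # (assumed here to be the current support vectors).
        idx = model.support_
        X_sub = X[idx]

        # Step 3: replace the targets on the subset with modified values
        # (assumed here to be the current model's predictions).
        y_sub = model.predict(X_sub)

        # Step 4: retrain on the smaller, relabeled subset and repeat.
        model = SVR(**svr_params).fit(X_sub, y_sub)

    return model


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(500, 1))
    y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(500)

    sparse_model = sparsify_svr(X, y, n_iter=3, kernel="rbf", C=10.0, epsilon=0.05)
    print("support vectors after sparsification:", len(sparse_model.support_))
```

Under this reading, each pass discards points the current machine already fits within the epsilon tube, so the support-vector count can only shrink or stay the same, while the relabeled targets keep the retrained machine close to the original regression function.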