Abstract

Kernel-based methods (KBMs) such as support vector machines (SVMs) are popular data mining tools for solving classification and regression problems. Owing to their high prediction accuracy, KBMs have been successfully applied in a wide range of fields. However, KBMs have three major drawbacks. First, it is difficult to obtain an explicit description of the discrimination (or regression) function in the original input space and, consequently, to make variable selection decisions in that space. Second, depending on the magnitude and numeric range of the given data points, the resulting kernel matrices may be ill-conditioned, so the learning algorithms may suffer from numerical instability; although data scaling can generally be applied to mitigate this and related issues, it is not always effective. Third, selecting an appropriate kernel type and its parameters can be a complex undertaking, and the choice greatly affects the performance of the resulting functions. To overcome these drawbacks, we present the sparse signomial classification and regression (SSCR) model. SSCR seeks a sparse signomial function by solving a linear program that minimizes the weighted sum of the ℓ₁-norm of the coefficient vector of the function and the ℓ₁-norm of the violation (or loss) caused by the function. Because SSCR employs a signomial function of the original variables, it can capture nonlinearity in the data. SSCR is also less sensitive to the numerical values or numeric ranges of the given data and yields a sparse, explicit description of the resulting function in the original input space, which is useful for interpretation: it reveals which original input variables and/or interaction terms are more meaningful than others. We also present column generation techniques to select important signomial terms in the classification and regression processes, and we explore a number of theoretical properties of the proposed formulation. Computational studies demonstrate that SSCR is at least competitive with, and can outperform, other widely used learning methods for classification and regression.
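To make the formulation concrete, the binary classification case can be sketched as a linear program of roughly the following form. The notation below (training pairs (x_i, y_i) with y_i ∈ {−1, +1}, candidate signomial terms g_j(x) = ∏_k x_k^{a_{jk}}, split coefficients w_j^+ and w_j^−, bias b, slacks ξ_i, and trade-off parameter C) is introduced here for illustration and follows the abstract's description rather than the paper's exact formulation:

\[
\begin{aligned}
\min_{w^{+},\, w^{-},\, b,\, \xi}\quad & \sum_{j}\bigl(w_{j}^{+} + w_{j}^{-}\bigr) \;+\; C \sum_{i=1}^{n} \xi_{i} \\
\text{s.t.}\quad & y_{i}\Bigl(\sum_{j}\bigl(w_{j}^{+} - w_{j}^{-}\bigr)\, g_{j}(x_{i}) + b\Bigr) \;\ge\; 1 - \xi_{i}, \qquad i = 1, \dots, n, \\
& w_{j}^{+} \ge 0,\quad w_{j}^{-} \ge 0,\quad \xi_{i} \ge 0 .
\end{aligned}
\]

Splitting each coefficient as w_j = w_j^+ − w_j^− linearizes the ℓ₁-norm of the coefficient vector (at an optimum at most one element of each pair is positive), the slacks ξ_i realize the ℓ₁ loss, and column generation can then price in promising signomial terms g_j instead of enumerating every exponent combination up front.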

  • Publication date: 2014-5
