Abstract

By utilizing kernel functions, support vector machines (SVMs) successfully solve linearly inseparable problems, which has greatly extended their range of application. Using multiple kernels (MKs) to improve SVM classification accuracy has been a hot topic in the SVM research community for several years. However, most MK learning (MKL) methods impose an L1-norm constraint on the kernel combination weights, which yields a sparse yet nonsmooth solution for the kernel weights. Alternatively, an Lp-norm constraint on the kernel weights keeps all of the information in the base kernels, but the resulting solution is nonsparse and sensitive to noise. Recently, an efficient sparse generalized MKL method (the L1- and L2-norms based GMKL) was proposed, in which the L1- and L2-norms jointly establish an elastic constraint on the kernel weights. In this paper, we further extend the GMKL to a more general MKL method by joining the L1-norm with the Lp-norm; consequently, the L1- and L2-norms based GMKL is a special case of our method when p = 2. Experiments demonstrate that our L1- and Lp-norms based MKL achieves higher classification accuracy than the L1- and L2-norms based GMKL, while retaining the desirable properties of the L1- and L2-norms based GMKL.
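To make the kernel-weight constraint concrete, the sketch below shows how base kernels are combined with nonnegative weights and how a mixed L1/Lp penalty on those weights could be evaluated. This is a minimal illustration, not the paper's algorithm: the function names and the penalty form lam * ||d||_1 + (1 - lam) * ||d||_p^p are assumptions for exposition, and the full method would optimize the weights jointly with the SVM objective.

```python
import numpy as np

def combine_kernels(kernel_list, d):
    """Combine base kernel matrices with weights d: K = sum_m d_m * K_m."""
    return sum(w * K for w, K in zip(d, kernel_list))

def elastic_penalty(d, lam=0.5, p=2):
    """Hypothetical mixed-norm penalty on the kernel weights:
    lam * ||d||_1 + (1 - lam) * ||d||_p^p.
    Setting p = 2 mirrors an L1/L2 elastic constraint; other p >= 1
    values trade off sparsity (L1) against smoothness (Lp) differently."""
    d = np.asarray(d)
    return lam * np.abs(d).sum() + (1.0 - lam) * (np.abs(d) ** p).sum()

def rbf_kernel(X, gamma):
    """RBF base kernel K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Toy example: three RBF base kernels with different bandwidths.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
kernels = [rbf_kernel(X, g) for g in (0.1, 1.0, 10.0)]
d = np.array([0.6, 0.3, 0.1])  # kernel weights, constrained in MKL
K = combine_kernels(kernels, d)
print(K.shape, elastic_penalty(d, lam=0.5, p=2))
```

In this toy setup, lam controls the balance between the sparsity-inducing L1 term and the noise-robust Lp term; with lam = 1 the penalty reduces to a pure L1 constraint, and with lam = 0 to a pure Lp constraint.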