Abstract

Regularization plays an important role in learning tasks: it incorporates prior knowledge about a problem and thus improves learning performance. Well-known regularization methods, including $\ell_2$ and $\ell_1$ regularization, have shown great success in a variety of conventional learning tasks, and new types of regularization have also been developed for modern problems such as multi-task learning. In this paper, we introduce the $\ell_2/\ell_1$ regularization for diverse learning tasks. The $\ell_2/\ell_1$ regularizer is a mixed norm defined over the parameters of the diverse learning tasks. It adaptively encourages the diversity of features among the tasks, i.e., when a feature is responsible for some tasks, it is unlikely to be responsible for the rest of the tasks. We consider two applications of the $\ell_2/\ell_1$ regularization framework, i.e., learning a sparse self-representation of a dataset for clustering and learning one-vs.-rest binary classifiers for multi-class classification, both of which confirm the effectiveness of the new regularization framework on benchmark datasets.
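
To make the penalty concrete, here is a minimal sketch of one plausible form of such a mixed norm; the exact definition, as well as the symbols $\Omega$, $W$, $d$, and $T$, are assumptions introduced here for illustration, since the abstract itself does not spell the norm out:

% A minimal sketch of one plausible l2/l1 mixed norm; the exact form is
% an assumption, not the paper's stated definition.
% W in R^{d x T} stacks the T tasks' parameter vectors column-wise,
% and w_j denotes its j-th row (one feature's weights across all tasks).
\[
  \Omega(W)
  \;=\;
  \sum_{j=1}^{d} \Big( \sum_{t=1}^{T} \lvert W_{jt} \rvert \Big)^{2}
  \;=\;
  \sum_{j=1}^{d} \lVert w_j \rVert_1^{2} .
\]

Under this reading, the inner $\ell_1$ sum makes the tasks compete for each feature, while the outer squared, $\ell_2$-type combination heavily penalizes any feature claimed by many tasks at once, which would produce exactly the exclusivity effect described above.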
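
The first application can be sketched in the same spirit. A standard sparse self-representation template is shown below; whether the paper uses exactly this objective is an assumption, and the symbols $X$, $C$, and $\lambda$ are introduced here for illustration:

% A standard sparse self-representation template; using it here is an
% assumption. X in R^{m x n} holds the n data points as columns,
% C in R^{n x n} is the self-representation matrix, and lambda > 0
% trades off reconstruction accuracy against the mixed-norm penalty.
\[
  \min_{C} \; \lVert X - XC \rVert_F^{2} \;+\; \lambda\, \Omega(C)
  \quad \text{s.t.} \quad \operatorname{diag}(C) = 0 ,
\]
% The diag(C) = 0 constraint rules out the trivial solution C = I.

Clusters would then typically be recovered by running spectral clustering on an affinity such as $\lvert C \rvert + \lvert C \rvert^{\top}$, as is common in self-representation-based clustering.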