Abstract

In this study, we propose a dynamic memory strategy to efficiently control the size of the support set in a kernel-based Perceptron learning algorithm. The method consists of two operations: an incremental projection and a decremental projection. In the incremental projection, a newly presented instance is either added to the support set or discarded according to a predefined rule. To reduce information loss, discarded examples are not simply thrown away; their contribution to the discriminative function is preserved by a projection technique that maps the modified discriminative function into the space spanned by the current support set. Whenever a new example is added to the support set, the algorithm proceeds to the decremental projection: it evaluates the minimum information loss incurred by deleting one instance from the support set. If this minimum loss is below a tolerable threshold, the corresponding instance is removed, and its contribution to the discriminative function is again retained through the projection technique. In this way, our method keeps the support set relatively small while maintaining high classification accuracy. We also develop a variant that places a fixed budget on the size of the support set. We test our approaches on four benchmark data sets and find that they outperform existing methods: they achieve higher classification accuracy when the support set sizes are comparable, and smaller support sets when the classification accuracies are similar.
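Since the abstract is compact, the following Python sketch spells out one plausible reading of the two operations, in the spirit of projection-based budget perceptrons such as the Projectron. The kernel choice, the class and method names, and the use of a single threshold `eta` for both operations are illustrative assumptions rather than the paper's notation; the projection coefficients d = K⁻¹k_t and the information loss k(x,x) − k_tᵀd follow from minimizing the RKHS distance between the original and projected discriminative functions.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel; any positive-definite kernel would do."""
    return float(np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

class ProjectionPerceptron:
    """Kernel Perceptron with a projection-controlled support set (sketch)."""

    def __init__(self, kernel=rbf_kernel, eta=0.1):
        self.kernel = kernel
        self.eta = eta              # tolerable information-loss threshold (assumed shared)
        self.support = []           # support vectors x_i
        self.alpha = []             # coefficients alpha_i
        self.K = np.zeros((0, 0))   # kernel matrix of the support set

    def _f(self, x):
        """Discriminative function f(x) = sum_i alpha_i k(x_i, x)."""
        return sum(a * self.kernel(s, x) for a, s in zip(self.alpha, self.support))

    def update(self, x, y):
        """One online round with label y in {-1, +1}."""
        if y * self._f(x) > 0:
            return                  # correct prediction: nothing to do
        k_t = np.array([self.kernel(s, x) for s in self.support])
        k_tt = self.kernel(x, x)
        if len(self.support) > 0:
            # Projection of k(x, .) onto span{k(x_i, .)}: d = K^{-1} k_t;
            # squared RKHS distance (information loss) = k(x,x) - k_t' d.
            d = np.linalg.solve(self.K + 1e-10 * np.eye(len(self.support)), k_t)
            loss = k_tt - k_t @ d
            if loss <= self.eta:
                # Incremental projection: discard x but keep its effect.
                self.alpha = list(np.asarray(self.alpha) + y * d)
                return
        # Otherwise add x to the support set, then try to shrink the set.
        self.support.append(x)
        self.alpha.append(float(y))
        n = len(self.support)
        K_new = np.empty((n, n))
        K_new[:n - 1, :n - 1] = self.K
        K_new[:n - 1, -1] = K_new[-1, :n - 1] = k_t
        K_new[-1, -1] = k_tt
        self.K = K_new
        self._decrement()

    def _decrement(self):
        """Decremental projection: drop the support vector whose removal
        loses the least information, if that loss is below the threshold."""
        n = len(self.support)
        if n < 2:
            return
        best = None  # (loss, index, projection coefficients)
        for j in range(n):
            keep = [i for i in range(n) if i != j]
            K_sub = self.K[np.ix_(keep, keep)]
            k_j = self.K[keep, j]
            d = np.linalg.solve(K_sub + 1e-10 * np.eye(n - 1), k_j)
            # Loss of replacing alpha_j k(x_j, .) by its projection.
            loss = self.alpha[j] ** 2 * (self.K[j, j] - k_j @ d)
            if best is None or loss < best[0]:
                best = (loss, j, d)
        loss, j, d = best
        if loss <= self.eta:
            keep = [i for i in range(n) if i != j]
            a_j = self.alpha[j]
            self.alpha = list(np.asarray([self.alpha[i] for i in keep]) + a_j * d)
            self.support = [self.support[i] for i in keep]
            self.K = self.K[np.ix_(keep, keep)]
```

A data stream would then be processed by calling `update(x, y)` once per example; `eta` trades support-set size against accuracy, and the budgeted variant mentioned in the abstract would additionally force a decremental projection whenever the support set exceeds a fixed size.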

  • Publication date: 2012-01
  • Affiliation: Fujian University of Technology (福建工程学院)
