Abstract

One of the serious challenges in machine learning and pattern recognition is to transfer knowledge from related but different domains to a new unlabeled domain. Feature selection with maximum mean discrepancy (f-MMD) is a novel and effective approach for transferring knowledge from a source domain (training set) to a target domain (test set) when the training and test sets are drawn from different distributions. However, f-MMD struggles with datasets that have a large number of samples and features. Moreover, f-MMD ignores the feature-label relation when finding a reduced representation of the dataset. In this paper, we jointly exploit transfer learning and class discrimination to cope with the domain shift problem in settings where the distribution difference is considerably large. We therefore put forward a novel transfer learning and class discrimination approach, referred to as the RandOm k-samplesets feature Weighting Approach (ROWA). Specifically, ROWA reduces the distribution difference across domains in an unsupervised manner, where no labels are available in the test set. Moreover, ROWA exploits the feature-label relation to separate the classes alongside the domain transfer, and strengthens the relation between the selected features and the source domain labels. In this work, we employ disjoint/overlapping small-sized samplesets to iteratively converge to the final solution. The use of local sets, together with a novel optimization problem, yields a robust and effective reduced representation for adaptation across domains. Extensive experiments on real and synthetic datasets verify that ROWA significantly outperforms state-of-the-art transfer learning approaches.
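To make the role of feature weighting in reducing distribution difference concrete, the following is a minimal sketch of a squared maximum mean discrepancy (MMD) between weighted source and target samples under a linear kernel. The toy data, the weight vector, and the function name `weighted_mmd` are illustrative assumptions; this is not the f-MMD or ROWA optimization itself, only the discrepancy measure such approaches aim to reduce.

```python
import numpy as np

def weighted_mmd(Xs, Xt, w):
    """Squared MMD between feature-weighted source and target samples (linear kernel)."""
    mu_s = (Xs * w).mean(axis=0)   # mean embedding of weighted source samples
    mu_t = (Xt * w).mean(axis=0)   # mean embedding of weighted target samples
    diff = mu_s - mu_t
    return float(diff @ diff)

# Toy example (hypothetical data): the domain shift is concentrated in feature 1.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(200, 3))
Xt = rng.normal(0.0, 1.0, size=(200, 3))
Xt[:, 1] += 2.0                              # shift the second feature in the target domain

uniform = np.ones(3)                         # no feature weighting
downweighted = np.array([1.0, 0.1, 1.0])     # hypothetical weights suppressing the shifted feature

print(weighted_mmd(Xs, Xt, uniform))         # large cross-domain discrepancy
print(weighted_mmd(Xs, Xt, downweighted))    # much smaller discrepancy
```

Down-weighting the feature responsible for the shift drives the discrepancy toward zero, which is the intuition behind learning feature weights that align the two domains while, in ROWA, also preserving the relation between the selected features and the source labels.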

  • Publication date: 2016-2