Abstract

From the viewpoint of classification, linear discriminant analysis (LDA) is a well-suited dimensionality reduction method: it finds an optimal linear transformation that maximizes class separability. However, LDA is difficult to apply to undersampled problems, where the number of data samples is smaller than the dimensionality of the data space, because the high dimensionality renders the scatter matrices singular. To make LDA applicable in this setting, we propose a new dimensionality reduction algorithm called discriminant multidimensional mapping (DMM), which combines the advantages of multidimensional scaling (MDS) and LDA. DMM is effective for small, high-dimensional datasets, and its superiority is justified from a theoretical point of view. We then extend DMM to large datasets and to datasets with nonlinear manifold structure, obtaining two further algorithms: landmark DMM (LDMM) and geodesic-metric discriminant mapping (GDM). The performance of these algorithms is demonstrated by preliminary numerical experiments.
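The singularity that motivates DMM can be seen directly: the within-class scatter matrix of classical LDA has rank at most (number of samples) minus (number of classes), so in the undersampled regime it cannot be inverted. A minimal sketch (the data and dimensions here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Undersampled setting: 10 samples in a 50-dimensional space (n < d).
n, d = 10, 50
X = rng.standard_normal((n, d))
y = np.array([0] * 5 + [1] * 5)  # two classes, 5 samples each

# Within-class scatter matrix: S_w = sum_c sum_{x in c} (x - mu_c)(x - mu_c)^T
Sw = np.zeros((d, d))
for c in np.unique(y):
    Xc = X[y == c]
    diff = Xc - Xc.mean(axis=0)
    Sw += diff.T @ diff

# rank(S_w) <= n - (#classes) = 8 < d = 50, so S_w is singular and the
# classical LDA eigenproblem involving S_w^{-1} cannot be solved directly.
rank = np.linalg.matrix_rank(Sw)
print(rank, rank < d)  # → 8 True
```

This is exactly the obstruction the abstract refers to; DMM sidesteps it by first applying an MDS-style mapping before the discriminant step.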

Full text