Abstract

We consider the problem of classification in non-adaptive dimensionality reduction. Specifically, we give an average-case bound on the classification error of Fisher's linear discriminant classifier when the classifier only has access to randomly projected versions of a given training set. By considering the system of random projection and classifier together as a whole, we are able to take advantage of the simple class structure inherent in the problem, and so derive a non-trivial performance bound without imposing any sparsity or underlying low-dimensional structure restrictions on the data. Our analysis also reveals and quantifies the effect of class 'flipping', a potential issue when randomly projecting a finite sample. Our bound is reasonably tight, and unlike existing bounds on learning from randomly projected data, it becomes tighter as the quantity of training data increases. A preliminary version of this work received an IBM Best Student Paper Award at the 20th International Conference on Pattern Recognition.
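The setting the abstract describes — training Fisher's linear discriminant on randomly projected data — can be illustrated with a minimal sketch. This is not the paper's analysis or bound; the dimensions, Gaussian projection matrix, and synthetic two-class data below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: two Gaussian classes in ambient dimension d,
# projected down to k dimensions before the classifier sees the data.
d, k, n = 50, 10, 200
mu0, mu1 = np.zeros(d), np.ones(d)
X = np.vstack([rng.normal(mu0, 1.0, (n, d)),
               rng.normal(mu1, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Non-adaptive random projection: R is drawn once, independently of the
# data, with i.i.d. N(0, 1/k) entries.
R = rng.normal(0.0, 1.0 / np.sqrt(k), (d, k))
Xp = X @ R  # the classifier only ever sees this projected training set

# Fisher's linear discriminant fitted entirely in the projected space.
m0, m1 = Xp[y == 0].mean(axis=0), Xp[y == 1].mean(axis=0)
S = np.cov(Xp[y == 0], rowvar=False) + np.cov(Xp[y == 1], rowvar=False)
w = np.linalg.solve(S, m1 - m0)   # discriminant direction
b = -0.5 * w @ (m0 + m1)          # threshold at the projected midpoint

preds = (Xp @ w + b > 0).astype(float)
accuracy = (preds == y).mean()
```

Averaging such an experiment over many independent draws of `R` is the average-case quantity the bound addresses; 'flipping' corresponds to draws of `R` under which a point lands on the wrong side of the projected decision boundary.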

  • Publication date: 2012-05-01