Abstract

Multi-view texture classification is a challenging task, since view-point variation often leads to inconsistent local texton patterns. Existing studies focus on extracting scale-, rotation-, or affine-invariant representations through specially designed invariant measurements or local descriptors. In contrast, this paper proposes a different framework for multi-view texture classification. A number of synthetic images are hierarchically created to enlarge the training dataset so that it covers the possible variations across view-points, and a classifier based on random forests is then trained on these synthetic images. In the classification stage, synthetic images are likewise created for each testing image and classified with the pre-trained classifier; the final decision for the testing image is made by majority voting over the classification results of all its synthetic images. Classification performance is evaluated on the UIUC texture dataset, where our method achieves a classification rate of 99.21%, higher than most state-of-the-art methods.
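The train-on-synthetic-views / vote-at-test-time pipeline described above can be sketched as follows. This is only a minimal illustration: `make_synthetic_views` is a hypothetical stand-in that perturbs feature vectors, not the paper's hierarchical synthetic-image generation, and scikit-learn's `RandomForestClassifier` stands in for the paper's random-forest classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def make_synthetic_views(x, n_views=5, rng=None):
    # Hypothetical stand-in for synthetic view generation:
    # small random perturbations of a feature vector x.
    rng = np.random.default_rng(rng)
    return [x + rng.normal(scale=0.05, size=x.shape) for _ in range(n_views)]

def train_with_synthetic_views(train_X, train_y, n_views=5, seed=0):
    # Enlarge the training set with synthetic views of every sample,
    # then fit a random forest on the augmented set.
    X_aug, y_aug = [], []
    for x, y in zip(train_X, train_y):
        for v in make_synthetic_views(x, n_views, rng=seed):
            X_aug.append(v)
            y_aug.append(y)
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    clf.fit(np.array(X_aug), np.array(y_aug))
    return clf

def predict_by_voting(clf, x, n_views=5, seed=0):
    # Classify synthetic views of the test sample and take the
    # majority vote as the final decision.
    views = np.array(make_synthetic_views(x, n_views, rng=seed))
    preds = clf.predict(views)
    labels, counts = np.unique(preds, return_counts=True)
    return labels[np.argmax(counts)]
```

In the paper the synthetic instances are images reflecting view-point changes rather than perturbed feature vectors; the sketch only shows how augmentation at training time and voting at test time fit together.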