Abstract

Finding substantial features for image representation is one of the keys to coping with the challenges of person re-identification in video streams. The features that matter for re-identification can be found through image saliency computation and subject appearance modeling. State-of-the-art models explore this direction by balancing global low-level features against features extracted from local patches. We propose a novel nested patch tree that yields a tree-structured feature representation, which is then used to match a probe image against a gallery image and thereby solve the re-identification problem. The representation is learned in an unsupervised manner, which sets our approach apart from most work in the community on finding similar subjects. Video streams of the same subject typically contain highly repetitive information, and this pseudo-repetitiveness should be useful for a center-learning based method. We further improve prediction accuracy by learning component by component for the same subject and by working in multiple color spaces. We evaluate the proposed method for person re-identification on the VIPeR and GRID datasets. The results show that the proposed method outperforms other state-of-the-art methods.
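The abstract does not spell out how the nested patch tree is built or compared. The sketch below is a minimal, hypothetical reading of the idea, not the authors' implementation: an image is recursively split into 2x2 sub-patches, each node carries a simple color-histogram descriptor, and two trees are compared node by node. The names `PatchNode`, `build_patch_tree`, `tree_distance`, and the depth and bin parameters are all illustrative assumptions.

```python
# Minimal sketch (assumed reading, not the authors' method): a nested patch
# tree where every node stores a color histogram of its patch, and matching
# sums per-node distances over corresponding tree positions.
import numpy as np

class PatchNode:
    def __init__(self, feature, children):
        self.feature = feature      # descriptor for this patch
        self.children = children    # nested sub-patches (empty at the leaves)

def color_histogram(patch, bins=8):
    """Per-channel histogram, L1-normalized, as a stand-in local descriptor."""
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0]
             for c in range(patch.shape[-1])]
    h = np.concatenate(hists).astype(np.float64)
    return h / (h.sum() + 1e-12)

def build_patch_tree(img, depth=2):
    """Recursively nest 2x2 sub-patches until the requested depth is reached."""
    node = PatchNode(color_histogram(img), [])
    if depth > 0:
        h, w = img.shape[0] // 2, img.shape[1] // 2
        for r in range(2):
            for c in range(2):
                sub = img[r * h:(r + 1) * h, c * w:(c + 1) * w]
                node.children.append(build_patch_tree(sub, depth - 1))
    return node

def tree_distance(a, b):
    """Sum of per-node feature distances over matching tree positions."""
    d = np.linalg.norm(a.feature - b.feature)
    return d + sum(tree_distance(ca, cb) for ca, cb in zip(a.children, b.children))

# Usage: rank gallery images by tree distance to the probe (random data here).
probe = np.random.randint(0, 256, (128, 48, 3))
gallery = [np.random.randint(0, 256, (128, 48, 3)) for _ in range(5)]
probe_tree = build_patch_tree(probe)
ranking = sorted(range(len(gallery)),
                 key=lambda i: tree_distance(probe_tree, build_patch_tree(gallery[i])))
print(ranking)
```

In the paper's setting the per-node descriptor would presumably be computed in multiple color spaces and learned per body component, as the abstract indicates; the histogram here only stands in for that richer feature.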

  • Publication date: 2016-12