Abstract

Person re-identification aims to identify the same person across non-overlapping camera views. It remains highly challenging owing to large differences in pose, illumination, and viewpoint between images. To improve robustness to such variations, we develop a joint asymmetric projection and dictionary-learning algorithm that adopts listwise similarity and identity-consistency constraints. By exploiting listwise similarities, the dictionary learning takes into account the similarity list of each pedestrian image, thereby exploiting the abundant discriminative information contained in the samples and endowing the dictionary with discriminative power. In addition, we impose an identity-consistency constraint on the coding coefficients to further improve the discriminative ability of the dictionary. To overcome appearance variability across non-overlapping camera views, two asymmetric projection dictionaries are employed to map the pedestrian features into a unified subspace in which the correlation between data from the same person in different views is maximized. Finally, by integrating the coding coefficients and classification results, we develop a fusion strategy with a modified cosine similarity measure to match pedestrians. Experiments on several challenging datasets demonstrate that our method is effective and outperforms some current state-of-the-art approaches.
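The matching pipeline described above can be sketched at a high level: view-specific features are projected into a shared subspace and ranked by cosine similarity. The sketch below is a minimal illustration only; the projection matrices, dimensions, and function names are hypothetical placeholders, and the paper's learned dictionaries, constraints, and fusion weights are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 64, 16  # feature dimension and shared-subspace dimension (illustrative)

# Two asymmetric projections, one per camera view (random stand-ins for
# the learned projection dictionaries).
P_a = rng.standard_normal((k, d))  # projection for camera view A
P_b = rng.standard_normal((k, d))  # projection for camera view B

def cosine(u, v):
    """Cosine similarity between two projected vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def match(probe_feat, gallery_feats):
    """Project a view-A probe and view-B gallery entries into the shared
    subspace, then rank gallery entries by cosine similarity."""
    z = P_a @ probe_feat
    scores = [cosine(z, P_b @ g) for g in gallery_feats]
    return int(np.argmax(scores)), scores

probe = rng.standard_normal(d)
gallery = rng.standard_normal((5, d))
best, scores = match(probe, gallery)
```

In the actual method, the projections are learned jointly with the dictionaries under the listwise-similarity and identity-consistency constraints, rather than fixed as here.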