Abstract

Multi-label annotation and multi-modality are two prominent characteristics of social images. Multiple labels capture the co-occurrence of objects in an image, while multimodal features describe the image from different viewpoints; together they characterize social images from two complementary aspects. However, integrating multimodal features and multiple labels simultaneously for social image classification remains a considerable challenge. In this paper, we propose a hypergraph learning algorithm that seamlessly integrates multimodal features and multi-label correlations. More specifically, we first propose a new feature fusion strategy that integrates multimodal features into a unified hypergraph, and construct an efficient multimodal hypergraph (EMHG) to address the high computational complexity of the proposed fusion scheme. Second, we construct a multi-label correlation hypergraph (LCHG) to model the complex associations among labels. Moreover, an adaptive learning algorithm combining the two hypergraphs is adopted to learn the label scores and hyperedge weights simultaneously. Experiments on real-world social image datasets demonstrate the superiority of the proposed method over representative transductive baselines.
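To make the fusion strategy concrete, the following is a minimal sketch of building a unified multimodal hypergraph incidence matrix: one k-nearest-neighbour hyperedge per image per modality, with the per-modality incidence matrices concatenated column-wise. The function names, the choice of Euclidean k-NN hyperedges, and the toy data are illustrative assumptions, not the paper's exact EMHG construction.

```python
import numpy as np

def knn_hyperedges(features, k=3):
    # For each sample, form a hyperedge containing the sample and its k
    # nearest neighbours in this modality's feature space (Euclidean).
    # Returns an n x n incidence matrix H with H[v, e] = 1 if vertex v
    # belongs to hyperedge e (hyperedge e is centred on sample e).
    n = features.shape[0]
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    H = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dists[i])[: k + 1]  # includes sample i itself
        H[nbrs, i] = 1.0
    return H

def multimodal_hypergraph(modalities, k=3):
    # Concatenate the per-modality incidence matrices column-wise, so the
    # unified hypergraph carries hyperedges from every modality at once.
    return np.hstack([knn_hyperedges(X, k) for X in modalities])

# Toy example: 6 images with two modalities (e.g. visual and tag features;
# data is random, for shape illustration only).
rng = np.random.default_rng(0)
visual = rng.normal(size=(6, 4))
textual = rng.normal(size=(6, 3))
H = multimodal_hypergraph([visual, textual], k=2)
print(H.shape)  # (6, 12): 6 vertices, one hyperedge per image per modality
```

Each column of `H` then sums to k+1 (the centre sample plus its k neighbours), and downstream hypergraph learning can assign each of these columns its own hyperedge weight.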