Abstract

With the rapid advancement of cyber-physical-social systems (CPSS), large amounts of dynamic multi-modal data are being generated and collected. Analyzing these data effectively and efficiently can promote the development and improve the service quality of CPSS applications. Co-clustering, an important technique for multi-modal data analysis designed to identify groupings of multi-dimensional data based on cross-modality fusion, is often exploited for this purpose. Unfortunately, most existing co-clustering methods mainly focus on static data and are infeasible for fusing the huge volumes of multi-modal data that arise in dynamic CPSS environments. To tackle this problem, this paper proposes a parameter-free incremental co-clustering method that handles multi-modal data dynamically. In the proposed method, the single-modality similarity measure is extended to multiple modalities, and three operations, namely cluster creating, cluster merging, and instance partitioning, are defined to incrementally integrate newly arriving objects into the current clustering patterns without introducing additional parameters. Moreover, an adaptive weight scheme is designed to measure the importance of each feature modality based on its intra-cluster scatter. Extensive experiments on three real-world multi-modal datasets collected from CPSS demonstrate that the proposed method outperforms state-of-the-art methods in both effectiveness and efficiency, making it promising for clustering dynamic multi-modal data in cyber-physical-social systems.
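As a rough illustration of the workflow summarized above, the following Python sketch shows one possible realization of incremental assignment with modality weights derived from intra-cluster scatter. Everything here (the class name, the cosine-similarity choice, the explicit similarity threshold, and the omission of the cluster-merging step) is an assumption for illustration only; in particular, the actual method is parameter-free, whereas this sketch uses a fixed threshold.

```python
# Minimal illustrative sketch (NOT the authors' implementation) of incremental
# multi-modal clustering with adaptive modality weights.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class IncrementalCoCluster:
    """Each instance is a list of per-modality feature vectors."""
    def __init__(self, n_modalities):
        self.n_mod = n_modalities
        self.clusters = []  # each cluster is a list of instances
        self.weights = np.ones(n_modalities) / n_modalities  # adaptive modality weights

    def _centroid(self, cluster, m):
        return np.mean([x[m] for x in cluster], axis=0)

    def _similarity(self, instance, cluster):
        # Weighted fusion of per-modality similarities to the cluster centroid.
        return sum(self.weights[m] * cosine(instance[m], self._centroid(cluster, m))
                   for m in range(self.n_mod))

    def _update_weights(self):
        # Assumed scheme: smaller intra-cluster scatter in a modality -> larger weight.
        scatters = np.zeros(self.n_mod)
        for m in range(self.n_mod):
            s = 0.0
            for c in self.clusters:
                cen = self._centroid(c, m)
                s += sum(np.linalg.norm(x[m] - cen) ** 2 for x in c)
            scatters[m] = s + 1e-12
        inv = 1.0 / scatters
        self.weights = inv / inv.sum()

    def add(self, instance, sim_threshold=0.5):
        """Handle one newly arriving object (cluster creating / instance partitioning).
        The explicit threshold is for illustration; the paper's method avoids such parameters.
        Cluster merging is omitted here for brevity."""
        if not self.clusters:
            self.clusters.append([instance])            # cluster creating
            return 0
        sims = [self._similarity(instance, c) for c in self.clusters]
        best = int(np.argmax(sims))
        if sims[best] >= sim_threshold:
            self.clusters[best].append(instance)        # instance partitioning
        else:
            self.clusters.append([instance])            # cluster creating
            best = len(self.clusters) - 1
        self._update_weights()
        return best
```

In this sketch, each call to `add` processes a single newly arriving object against the current clustering pattern, so the data stream never needs to be re-clustered from scratch; the modality weights are then refreshed from the accumulated intra-cluster scatter.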