Abstract

Outlier detection is an essential problem in a variety of application areas. Many detection methods perform poorly on high-dimensional time series data sets that contain both isolated and assembled outliers. In this paper, we propose an Outlier Detection method based on Cross-correlation Analysis (ODCA). ODCA consists of three key parts: data preprocessing, outlier analysis, and outlier ranking. First, we apply a linear interpolation method to convert assembled outliers into isolated ones. Second, we propose a detection mechanism based on cross-correlation analysis that translates the high-dimensional data sets into a one-dimensional cross-correlation function, from which the isolated outliers are determined. Finally, a multilevel Otsu's method is adopted to select the rank thresholds adaptively and output the abnormal samples at different severity levels. To illustrate the effectiveness of the ODCA algorithm, four experiments are performed on several high-dimensional time series data sets, including two small-scale sets and two large-scale sets. Furthermore, we compare the proposed algorithm with detection methods based on wavelet analysis, bilateral filtering, particle swarm optimization, auto-regression, and extreme learning machine, and we discuss the robustness of the ODCA algorithm. The statistical results show that the ODCA algorithm outperforms existing mainstream methods in both effectiveness and time complexity.
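To make the three-stage pipeline concrete, the following is a minimal sketch, not the paper's actual implementation: it assumes a zero-lag cross-correlation agreement score as the 1-D reduction and uses scikit-image's multilevel Otsu for the adaptive rank thresholds. The function names `odca_scores` and `rank_outliers` are hypothetical; ODCA's exact preprocessing and correlation computation are defined in the body of the paper.

```python
# Hedged sketch of an ODCA-style pipeline; the real ODCA details may differ.
import numpy as np
from skimage.filters import threshold_multiotsu  # multilevel Otsu thresholding

def odca_scores(X):
    """X: (n_samples, n_dims) multivariate time series.
    Returns a 1-D per-sample outlier score derived from the zero-lag
    cross-correlation between each dimension and a 1-D reference signal.
    (Assumption: the paper's cross-correlation function is computed differently.)"""
    Xz = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # z-normalize each dimension
    ref = Xz.mean(axis=1)                                # 1-D reference signal
    # Pointwise product with the reference is the per-sample contribution to
    # the zero-lag cross-correlation; low agreement suggests an outlier.
    agreement = (Xz * ref[:, None]).mean(axis=1)
    return -agreement                                    # higher score = more anomalous

def rank_outliers(scores, classes=3):
    """Multilevel Otsu splits the scores into `classes` severity levels,
    so samples are output at different levels rather than via one fixed cutoff."""
    thresholds = threshold_multiotsu(scores, classes=classes)
    return np.digitize(scores, bins=thresholds)          # 0 = normal, classes-1 = most abnormal

# Usage sketch:
# X = np.random.randn(1000, 8)           # synthetic high-dimensional series
# levels = rank_outliers(odca_scores(X)) # per-sample outlier severity levels
```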