Abstract

This research work focuses on optimizing a deduplication system by tuning the key factors in content-defined chunking (CDC), identified here as the declaration of chunk cut-points and efficient fingerprint lookup through bucket-based index partitioning. For efficient chunking, we propose TTTD-P, a Two Thresholds Two Divisors (TTTD) CDC algorithm optimized with a Differential Evolution (DE) based approach; it significantly reduces the number of computing operations by replacing the multiple divisors of TTTD with a single dynamic optimal divisor D and an optimal threshold value. The proposed DE-based TTTD-P therefore optimizes chunking to maximize chunking throughput while increasing the deduplication ratio (DR), and the bucket-indexing approach reduces the hash-value comparison time needed to identify and declare a redundant chunk, making lookup about 16 times faster than Rabin CDC, 5 times faster than Asymmetric Extremum (AE) CDC, and 1.6 times faster than FastCDC. A comparative analysis of the experimental results reveals that TTTD-P, using the fast BUZ rolling hash function with bucket indexing on the Hadoop Distributed File System (HDFS), provides comparatively maximum redundancy detection with higher throughput, a higher deduplication ratio, less computation time, and very low hash-value comparison time, making it a strong candidate for distributed deduplication in big data storage systems.
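
As a rough, self-contained sketch of the two mechanisms named above (cut-point declaration with a single divisor over a BUZ rolling hash, and bucket-partitioned fingerprint lookup), the Python below is illustrative only. The parameters T_MIN, T_MAX, D, WINDOW, and N_BUCKETS are assumed placeholders, and the paper's DE-based optimization of the divisor and thresholds is not reproduced here.

```python
import hashlib
import random

# Placeholder parameters: the paper tunes the divisor D and the thresholds
# with Differential Evolution; the values below are illustrative only.
T_MIN = 2 * 1024        # minimum chunk size (bytes)
T_MAX = 16 * 1024       # maximum chunk size (bytes), forces a cut
D = 4096                # the single dynamic divisor of TTTD-P
WINDOW = 48             # sliding-window width of the rolling hash

random.seed(0)
BYTE_TABLE = [random.getrandbits(64) for _ in range(256)]  # BUZ byte map

def rotl64(x: int, r: int) -> int:
    """Rotate a 64-bit integer left by r bits."""
    r %= 64
    return ((x << r) | (x >> (64 - r))) & 0xFFFFFFFFFFFFFFFF

def chunks(data: bytes):
    """Yield chunks cut by a BUZ (cyclic-polynomial) rolling hash with a
    single divisor D and two size thresholds, in the spirit of TTTD-P."""
    start, h = 0, 0
    for i, b in enumerate(data):
        h = rotl64(h, 1) ^ BYTE_TABLE[b]           # slide the new byte in
        size = i - start + 1
        if size > WINDOW:                          # slide the old byte out
            h ^= rotl64(BYTE_TABLE[data[i - WINDOW]], WINDOW)
        if size < T_MIN:
            continue                               # suppress tiny chunks
        if h % D == D - 1 or size >= T_MAX:        # declare a cut-point
            yield data[start:i + 1]
            start, h = i + 1, 0
    if start < len(data):
        yield data[start:]                         # trailing partial chunk

# Bucket-based index partitioning: route each chunk fingerprint to one of
# N_BUCKETS buckets so a duplicate lookup scans one small bucket instead of
# the whole index. In-memory sets stand in for what would be separate
# HDFS-side index partitions in the paper's system.
N_BUCKETS = 1024
index = [set() for _ in range(N_BUCKETS)]

def is_duplicate(fp: bytes) -> bool:
    bucket = index[int.from_bytes(fp[:2], "big") % N_BUCKETS]
    if fp in bucket:
        return True
    bucket.add(fp)
    return False

# Usage: chunk 1 MiB of pseudo-random input and count redundant chunks.
data = random.randbytes(1 << 20)
dupes = sum(is_duplicate(hashlib.sha1(c).digest()) for c in chunks(data))
```

Because each fingerprint is routed to one bucket by a prefix of its hash, only that bucket needs to be consulted per lookup; this is the partitioning idea behind the reported reduction in hash-value comparison time.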

  • Publication date: 2018