Abstract

Hadoop HDFS is an open-source project from the Apache Software Foundation for scalable, distributed computing and data storage. HDFS has become a critical component of today's cloud computing environment, and a wide range of applications are built on top of it. However, the initial design of HDFS introduced a single point of failure: HDFS contains only one active namenode, and if this namenode experiences a software or hardware failure, the whole HDFS cluster becomes unusable. This is one reason why people are reluctant to deploy HDFS for applications that require high availability. In this paper, we present a solution that enables high availability for the HDFS namenode through efficient metadata replication. Our solution has three major advantages over existing ones: we utilize multiple active namenodes, instead of one, to build a cluster that serves metadata requests simultaneously; we implement a pub/sub system to handle the metadata replication process across these active namenodes efficiently; and we propose a novel replication algorithm to deal with network delay when the namenodes are deployed in different areas. Based on this solution, we build a prototype called NCluster and integrate it with HDFS. We evaluate NCluster to demonstrate its feasibility and effectiveness. The experimental results show that our solution performs well, with low replication cost, good throughput, and good scalability.

Full Text