Abstract

Various dynamic data driven applications systems (DDDAS), such as hazard management, target tracking, and battlefield monitoring, often leverage multiple heterogeneous sensors and generate huge volumes of data. Not surprisingly, researchers are investigating ways to support such applications on the cloud. However, in such applications the importance of a subset of sensors may change quickly due to changes in the execution environment, which often requires adapting sampling rates accordingly. Moreover, such variations in sampling rates can create significant load imbalance on back-end servers, leading to performance degradation. To address this, we investigate a closed-loop integrated solution as follows. First, we develop a centralized algorithm that attempts to maximize the overall quality of information for the whole network, given the utility functions and the importance rankings of sensor nodes. Next, we present a threshold-based heuristic that prevents omission of highly important nodes at critical times. Finally, a proactive resource optimization framework is investigated that adaptively allocates resources (e.g., servers) in response to changed sampling rates. Extensive evaluation on a cloud platform across various scenarios shows that our approach can quickly adapt sampling rates and reallocate resources in response to the changed importance of sensor nodes, significantly reducing data loss.
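The abstract does not specify the allocation algorithm itself, but the interplay it describes — dividing a sampling-rate budget by importance while a threshold rule keeps critical nodes from being starved — can be illustrated with a minimal sketch. All names (`allocate_rates`, `threshold`, `min_rate`) and the proportional-share policy are illustrative assumptions, not the paper's actual method:

```python
# Illustrative sketch (not the paper's algorithm): split a total
# sampling-rate budget across sensor nodes in proportion to their
# importance scores, with a threshold heuristic that guarantees a
# minimum rate to highly important nodes so they are never omitted.

def allocate_rates(importance, budget, threshold=0.8, min_rate=1.0):
    """importance: dict mapping node id -> importance score in [0, 1].
    budget: total sampling rate available (e.g., samples/sec).
    Returns a dict mapping node id -> allocated sampling rate."""
    rates = {}
    # Threshold heuristic: critical nodes receive min_rate up front.
    critical = {n for n, w in importance.items() if w >= threshold}
    for n in critical:
        rates[n] = min_rate
    remaining = budget - min_rate * len(critical)
    if remaining < 0:
        raise ValueError("budget too small to cover critical nodes")
    # Divide the remaining budget in proportion to importance.
    total_w = sum(importance.values())
    for n, w in importance.items():
        rates[n] = rates.get(n, 0.0) + remaining * w / total_w
    return rates
```

Under this sketch, a node tagged critical keeps at least `min_rate` even when its proportional share alone would be tiny, which mirrors the heuristic's goal of preventing omission of important nodes at critical times.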

  • Publication date: 2015-11