Abstract

When simulating multiscale stochastic differential equations (SDEs) in high dimensions, separation of timescales, stochastic noise, and high dimensionality can make simulations prohibitively expensive. The computational cost is dictated by microscale properties and interactions of many variables, while the behavior of interest often occurs at the macroscale and at large timescales, and is typically characterized by a few important, but unknown, degrees of freedom. For many problems, bridging the gap between the microscale and the macroscale by direct simulation is computationally infeasible. In this work we propose a novel approach to automatically learn a reduced model with an associated fast macroscale simulator. Our unsupervised learning algorithm uses short, parallelizable microscale simulations to learn provably accurate macroscale SDE models, which are continuous in space and time. The learning algorithm takes as input the microscale simulator, a local distance function, and a homogenization scale (spatial or temporal), which is the smallest scale of interest in the reduced system. The learned macroscale model can then be used for fast computation and storage of long simulations. We prove guarantees relating the number of short paths requested from the microscale simulator to the accuracy of the learned macroscale simulator. We discuss various examples, both low- and high-dimensional, as well as results about the accuracy of the fast simulators we construct, and the model's dependence on the number of short paths requested from the microscale simulator.
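The core idea described above, namely estimating an effective macroscale SDE from many short microscale bursts and then simulating the learned model cheaply, can be sketched as follows. This is a minimal illustration, not the authors' construction: all names and parameters are our own, the microscale system is a toy one-dimensional Ornstein-Uhlenbeck process, and we learn a local model at a single point rather than at many landmarks stitched together by the local distance function.

```python
import numpy as np

rng = np.random.default_rng(0)

MICRO_DT = 1e-3   # microscale integration step
MACRO_DT = 0.1    # hypothetical homogenization timescale (smallest scale of interest)
SIGMA = 0.5       # microscale noise amplitude

def micro_simulate(x0, t):
    """Toy microscale simulator: Euler-Maruyama for dX = -X dt + SIGMA dW."""
    x = x0
    for _ in range(int(round(t / MICRO_DT))):
        x += -x * MICRO_DT + SIGMA * np.sqrt(MICRO_DT) * rng.standard_normal()
    return x

def learn_local_model(x, n_paths=200):
    """Estimate effective drift b(x) and diffusion a(x) over one macro step
    from n_paths short, independently simulated microscale bursts."""
    ends = np.array([micro_simulate(x, MACRO_DT) for _ in range(n_paths)])
    drift = (ends.mean() - x) / MACRO_DT
    diffusion = ends.var() / MACRO_DT
    return drift, diffusion

def macro_step(x, drift, diffusion):
    """One cheap Euler-Maruyama step of the learned macroscale SDE."""
    return x + drift * MACRO_DT + np.sqrt(diffusion * MACRO_DT) * rng.standard_normal()

# Learn a local model at x = 1.  For this Ornstein-Uhlenbeck toy the true
# effective drift at x = 1 is close to -1 and the diffusion close to
# SIGMA**2 = 0.25, so the short bursts should recover roughly those values.
b1, a1 = learn_local_model(1.0)
```

Each burst is independent, so the `n_paths` microscale simulations can run in parallel, which is the source of the speedup in this class of methods: the expensive microscale simulator is only ever run for one homogenization timescale at a time, while long trajectories are produced by the cheap `macro_step` loop.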

  • Publication date: 2017