Abstract

In an optimal variance stopping problem the goal is to determine the stopping time at which the variance of a sequentially observed stochastic process is maximized. A solution method for such problems was recently provided by Pedersen (2011). Using the methodology developed by Pedersen and Peskir (2012), our aim is to show that the solution to the original problem can be obtained equivalently by first solving the variance stopping problem under a constraint on the expected size of the stopped process and then maximizing the resulting value over all admissible constraints. An application to a diffusion process used for modeling the dynamics of interest rates illustrates the proposed technique.
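The decomposition can be sketched as follows (our notation, not the paper's: $X$ denotes the observed process, $\tau$ ranges over admissible stopping times, and $c$ over admissible values of the constraint):

\[
\sup_{\tau}\operatorname{Var}(X_\tau)
  \;=\; \sup_{c}\;\sup_{\tau:\,\mathbb{E}[X_\tau]=c}\operatorname{Var}(X_\tau)
  \;=\; \sup_{c}\Bigl(\sup_{\tau:\,\mathbb{E}[X_\tau]=c}\mathbb{E}\bigl[X_\tau^{2}\bigr]-c^{2}\Bigr),
\]

where the second equality uses $\operatorname{Var}(X_\tau)=\mathbb{E}[X_\tau^{2}]-(\mathbb{E}[X_\tau])^{2}$, so that under each fixed constraint $\mathbb{E}[X_\tau]=c$ the inner problem is linear in expectation and amenable to standard optimal stopping methods.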

  • Publication date: 2015-12