Abstract

A Markov chain with memory is no different from a conventional Markov chain on the product state space. Such a Markovianization, however, increases the dimensionality exponentially. Instead, a Markov chain with memory can naturally be represented as a tensor, whence the transitions of the state distribution and the memory distribution can be characterized by specially defined tensor products. In this context, the progression of a Markov chain can be interpreted as a variant of power-like iterations moving toward the limiting probability distributions. What is not clear is the makeup of the "second dominant eigenvalue" that governs the convergence rate of the iteration, if the method converges at all. Casting the power method as a fixed-point iteration, this paper examines the local behavior of the underlying nonlinear map and identifies the cause of convergence or divergence. As an application, it is found that there exists an open set of irreducible and aperiodic transition probability tensors for which the Z-eigenvector type power iteration fails to converge.
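To make the iteration concrete, the following is a minimal Python sketch (an illustration, not the paper's implementation) of the Z-eigenvector type power iteration $x_{t+1} = P x_t^2$, where $(P x^2)_i = \sum_{j,k} p_{ijk}\, x_j x_k$ for a third-order transition probability tensor $P$; the function name, tolerance, iteration cap, and random test tensor are assumptions chosen for illustration.

```python
import numpy as np

def z_power_iteration(P, x0, tol=1e-10, max_iter=5000):
    """Iterate x <- P x^2, where (P x^2)_i = sum_{j,k} P[i,j,k] x[j] x[k].

    P is an order-3 transition probability tensor whose fibers
    P[:, j, k] are probability vectors, so the map P x^2 sends
    probability vectors to probability vectors. Returns (x, converged).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = np.einsum('ijk,j,k->i', P, x, x)  # apply the tensor map
        if np.linalg.norm(x_new - x, 1) < tol:    # fixed point reached
            return x_new, True
        x = x_new
    return x, False  # no fixed point within max_iter: possible divergence or cycling

# Illustrative test on a random transition probability tensor.
n = 4
rng = np.random.default_rng(0)
P = rng.random((n, n, n))
P /= P.sum(axis=0, keepdims=True)  # normalize each fiber P[:, j, k] to sum to one
x, converged = z_power_iteration(P, np.full(n, 1.0 / n))
print(converged, x)
```

As the abstract notes, convergence is not guaranteed even for irreducible and aperiodic tensors: for tensors in the open set identified in the paper, the iterates oscillate rather than settle, which the `converged` flag above would report as `False`.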