Abstract

Motivated by the study of the asymptotic normality of the least-squares estimator in the autoregressive AR(1) model under possibly infinite variance, in this paper we investigate a self-normalized central limit theorem for Markov random walks. That is, let $\{X_n, n \ge 0\}$ be a Markov chain on a general state space $\mathcal{X}$ with transition probability $P$ and invariant measure $\pi$. Suppose that an additive component $S_n$, taking values in the real line $\mathbb{R}$, is adjoined to the chain such that $\{S_n, n \ge 1\}$ is a Markov random walk. Assume that $S_n = \sum_{k=1}^{n} \xi_k$ and that $\{\xi_n, n \ge 1\}$ is a nondegenerate and stationary sequence under $\pi$ that belongs to the domain of attraction of the normal law with zero mean and possibly infinite variance. By making use of an asymptotic variance formula for $S_n/\sqrt{n}$, we prove a self-normalized central limit theorem for $S_n$ under some regularity conditions. An essential idea in our proof is to bound the covariance of the Markov random walk via a sequence of weight functions, which plays a crucial role in determining the moment condition and dependence structure of the Markov random walk. As illustrations, we apply our results to the finite-state Markov chain, the AR(1) model, and the linear state space model.
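In symbols, a result of the type described above can be sketched as follows; this is our paraphrase of the generic self-normalized form, and the paper's exact normalizer, centering, and regularity conditions may differ. Writing $V_n^2 = \sum_{k=1}^{n} \xi_k^2$ for the self-normalizer, the statement reads

```latex
% Sketch of a self-normalized CLT (paraphrase, not the paper's exact theorem).
% Here S_n = \sum_{k=1}^n \xi_k and V_n^2 = \sum_{k=1}^n \xi_k^2.
\[
  \frac{S_n}{V_n}
  \;=\;
  \frac{\sum_{k=1}^{n} \xi_k}{\bigl(\sum_{k=1}^{n} \xi_k^2\bigr)^{1/2}}
  \;\xrightarrow{d}\;
  N(0, \sigma^2), \qquad n \to \infty,
\]
% for some constant \sigma^2 > 0 reflecting the dependence structure of the
% chain (\sigma^2 = 1 in the i.i.d. case), since V_n^2 studentizes by the
% marginal squared increments but does not absorb their covariances.
```

The point of self-normalization is that $V_n$ is computed from the data alone, so the statement remains meaningful even when the increments $\xi_k$ have infinite variance.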
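To make the motivating AR(1) application concrete, here is a minimal, hypothetical illustration (not taken from the paper): we simulate an AR(1) path, compute the least-squares estimator of the autoregressive coefficient, and form a studentized (self-normalized) t-type statistic whose denominator uses only the observed data. The function names, the Gaussian innovations, and the particular studentizer are our own assumptions for the sketch; the paper's setting allows heavy-tailed innovations in the domain of attraction of the normal law.

```python
import math
import random

def simulate_ar1(n, rho, seed=0):
    """Simulate n steps of the AR(1) model x_t = rho * x_{t-1} + eps_t,
    started at 0, with i.i.d. standard Gaussian innovations (an assumption
    made for this sketch; infinite-variance innovations are also covered
    by the theory)."""
    rng = random.Random(seed)
    x = [0.0]
    for _ in range(n):
        x.append(rho * x[-1] + rng.gauss(0.0, 1.0))
    return x

def self_normalized_lse(x, rho0):
    """Least-squares estimate of rho and a self-normalized t-type statistic
    for the null hypothesis rho = rho0.

    rho_hat = sum x_{t-1} x_t / sum x_{t-1}^2.  The studentizer
    sqrt(sum x_{t-1}^2 e_t^2), built from residuals e_t, involves no
    population variance, which is the point of self-normalization.
    """
    pairs = list(zip(x[:-1], x[1:]))
    num = sum(a * b for a, b in pairs)
    den = sum(a * a for a, _ in pairs)
    rho_hat = num / den
    # Residual-based self-normalizer: sum of x_{t-1}^2 * e_t^2.
    v2 = sum((a * (b - rho_hat * a)) ** 2 for a, b in pairs)
    t_stat = (rho_hat - rho0) * den / math.sqrt(v2)
    return rho_hat, t_stat

# Usage: under the true rho, t_stat is approximately standard normal
# for large n, without any variance estimate beyond the data itself.
x = simulate_ar1(5000, 0.5, seed=42)
rho_hat, t_stat = self_normalized_lse(x, 0.5)
```

The design choice here mirrors the abstract's theme: the statistic is normalized by an empirical quantity rather than an unknown (possibly infinite) variance, so the same recipe remains usable under heavy tails.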