Abstract

This paper addresses the iterative learning control problem in random data dropout environments. Recent progress on iterative learning control in the presence of data dropouts is first reviewed from three aspects, namely, the data dropout model, the data dropout position, and the convergence meaning. A general framework is then proposed for the convergence analysis of all three kinds of data dropout models, namely, the stochastic sequence model, the Bernoulli variable model, and the Markov chain model. Both mean square and almost sure convergence of the input sequence to the desired input are rigorously established for noise-free systems and stochastic systems, respectively, where the measurement output suffers from random data dropouts. Illustrative simulations are provided to verify the theoretical results.
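
To make the setting concrete, the sketch below simulates a P-type iterative learning control update on a simple scalar plant whose output measurements are lost according to a Bernoulli variable, i.e., the Bernoulli dropout model mentioned above; when a measurement is dropped, the corresponding input entry is simply held. This is a minimal illustration only: the plant parameters, learning gain, dropout rate, and desired trajectory are assumptions and are not taken from the paper.

```python
# Minimal sketch (assumed parameters, not the paper's exact scheme):
# P-type ILC for x(t+1) = a*x(t) + b*u(t), y(t) = c*x(t), where each
# output measurement is dropped with probability p (Bernoulli model).
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 0.5, 1.0, 1.0                 # illustrative plant parameters
T, iters, p, gain = 20, 50, 0.3, 0.5    # horizon, trials, dropout rate, learning gain

y_d = np.sin(np.linspace(0, np.pi, T + 1))   # assumed desired trajectory, y_d(0) = 0
u = np.zeros(T)                               # initial input
e_hist = []

for k in range(iters):
    # run one trial of the plant from the same initial state
    x = 0.0
    y = np.zeros(T + 1)
    for t in range(T):
        y[t] = c * x
        x = a * x + b * u[t]
    y[T] = c * x

    e = y_d[1:] - y[1:]               # tracking error at t = 1..T
    gamma = rng.random(T) > p         # Bernoulli indicator: True = measurement received
    # update only where the measurement arrived; hold the input otherwise
    u = u + gain * gamma * e
    e_hist.append(np.max(np.abs(e)))

print(e_hist[-1])   # maximum tracking error after the final iteration
```

Under this standard P-type contraction condition (here |1 - gain*c*b| < 1), the held-input update lets the error decay across iterations despite the randomly missing measurements, which is the qualitative behavior the paper's convergence results formalize.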