On Convergence of General Truncation-Augmentation Schemes for Approximating Stationary Distributions of Markov Chains
In the analysis of Markov chains and processes, it is sometimes convenient to replace an unbounded state space with a "truncated" bounded state space. When such a replacement is made, one often wants to know whether the equilibrium behavior of the truncated chain or process is close to that of the untruncated system. For example, such questions arise naturally when considering numerical methods for computing stationary distributions on an unbounded state space. In this paper, we study general truncation-augmentation schemes, in which the substochastic truncated "northwest corner" of the transition matrix or kernel is stochasticized (or augmented) arbitrarily. In the presence of a Lyapunov condition involving a coercive function, we show that such schemes are generally convergent on a countable state space, provided that the truncation is chosen as a sublevel set of the Lyapunov function. For stochastically monotone Markov chains on ℤ_+, we prove that we can always choose the truncation sets to be of the form {0,1,...,n}. We then provide sufficient conditions for weakly continuous Markov chains under which general truncation-augmentation schemes converge weakly on a continuous state space. Finally, we briefly discuss the extension of the theory to continuous-time Markov jump processes.
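To make the truncation-augmentation idea concrete, the following is a minimal illustrative sketch, not taken from the paper, for a positive recurrent birth-death chain on ℤ_+ with up-probability p and down-probability 1-p (p < 1/2), whose exact stationary distribution is geometric. It forms the substochastic northwest corner on the truncation set {0,1,...,n}, augments the missing row mass under two example schemes (returning the escaping mass to the boundary state or to state 0), and compares the resulting stationary distributions with the exact one. All function and variable names are our own choices for illustration.

```python
import numpy as np

p = 0.3  # up-probability; p < 1/2 gives a positive recurrent chain

def northwest_corner(n):
    """Substochastic truncation of the transition matrix to {0, ..., n}."""
    P = np.zeros((n + 1, n + 1))
    P[0, 0] = 1 - p
    for i in range(n + 1):
        if i >= 1:
            P[i, i - 1] = 1 - p
        if i + 1 <= n:
            P[i, i + 1] = p
    return P  # row n sums to 1 - p: mass p escapes the truncation set

def augment(P, scheme):
    """Redistribute the missing row mass to make the truncated matrix stochastic."""
    deficit = 1.0 - P.sum(axis=1)
    Q = P.copy()
    if scheme == "last_column":    # send escaping mass to the boundary state n
        Q[:, -1] += deficit
    elif scheme == "first_column": # send escaping mass back to state 0
        Q[:, 0] += deficit
    return Q

def stationary(Q):
    """Left Perron eigenvector of Q, normalized to a probability distribution."""
    w, v = np.linalg.eig(Q.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

n = 50
rho = p / (1 - p)
pi_true = (1 - rho) * rho ** np.arange(n + 1)  # exact geometric stationary law
for scheme in ("last_column", "first_column"):
    pi_n = stationary(augment(northwest_corner(n), scheme))
    print(scheme, "total-variation error:", 0.5 * np.abs(pi_n - pi_true).sum())
```

In this toy example both augmentation schemes produce small errors for moderate n; the paper's contribution is to give conditions (a Lyapunov condition with a coercive function, sublevel-set truncations, or stochastic monotonicity) under which convergence holds for arbitrary augmentation schemes, not just these two.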