
On the convergence of stochastic approximations under a subgeometric ergodic Markov dynamic

Abstract : In this paper, we extend the framework of the convergence of stochastic approximations. Such a procedure is used in many methods, such as parameter estimation inside a Metropolis-Hastings algorithm, stochastic gradient descent, or the stochastic Expectation Maximization algorithm. It is given by θ_{n+1} = θ_n + Δ_{n+1} H_{θ_n}(X_{n+1}), where (X_n)_{n∈ℕ} is a sequence of random variables following a parametric distribution which depends on (θ_n)_{n∈ℕ}, and (Δ_n)_{n∈ℕ} is a step-size sequence. The convergence of such a stochastic approximation has already been proved under an assumption of geometric ergodicity of the Markov dynamic. However, in many practical situations this hypothesis is not satisfied, for instance for any heavy-tailed target distribution in a Metropolis-Hastings Markov Chain Monte Carlo algorithm. In this paper, we relax this hypothesis and prove the convergence of the stochastic approximation by assuming only a subgeometric ergodicity of the Markov dynamic. This result opens up the possibility of deriving more generic algorithms with proven convergence. As an example, we first study an adaptive Markov Chain Monte Carlo algorithm in which the proposal distribution is adapted by learning the variance of a heavy-tailed target distribution. We then apply our work to Independent Component Analysis, where a positive heavy-tailed noise leads to a subgeometric dynamic in an Expectation Maximization algorithm.
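To make the recursion θ_{n+1} = θ_n + Δ_{n+1} H_{θ_n}(X_{n+1}) concrete, here is a minimal Python sketch (not the authors' algorithm) of a stochastic approximation driven by a Metropolis-Hastings Markov chain. In this toy example the kernel does not depend on θ, the mean field is H_θ(x) = x − θ so the fixed point is the target mean, and all function names are illustrative assumptions:

```python
import math
import random

def mh_step(x, log_target, proposal_std=1.0):
    """One Metropolis-Hastings step with a Gaussian random-walk proposal."""
    prop = x + random.gauss(0.0, proposal_std)
    # Accept with probability min(1, target(prop) / target(x)).
    if math.log(random.random()) < log_target(prop) - log_target(x):
        return prop
    return x

def stochastic_approximation(h, log_target, theta0=0.0, x0=0.0, n_iters=5000):
    """Robbins-Monro-type recursion theta_{n+1} = theta_n + Delta_{n+1} * H_{theta_n}(X_{n+1}),
    where X_{n+1} is produced by a Markov kernel (here a single MH step)."""
    theta, x = theta0, x0
    for n in range(1, n_iters + 1):
        x = mh_step(x, log_target)   # Markov dynamic (independent of theta in this sketch)
        delta = 1.0 / n              # step sizes: sum delta = inf, sum delta^2 < inf
        theta += delta * h(theta, x) # stochastic approximation update
    return theta

if __name__ == "__main__":
    random.seed(0)
    log_target = lambda x: -0.5 * (x - 5.0) ** 2  # unit-variance Gaussian centered at 5
    h = lambda theta, x: x - theta                # fixed point of E[H_theta(X)] = 0 is E[X]
    print(stochastic_approximation(h, log_target))
```

With these choices, Δ_n = 1/n makes θ_n the running average of the chain, so it converges to the target mean. In the adaptive MCMC setting studied in the paper, the kernel itself would additionally depend on θ_n (e.g. the proposal variance), which is exactly where the ergodicity assumptions on the Markov dynamic matter.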
Document type: Preprints, Working Papers, ...
Contributor: Vianney Debavelaere
Submitted on: Tuesday, December 22, 2020 - 6:52:57 PM
Last modification on: Thursday, January 14, 2021 - 3:14:13 PM





Vianney Debavelaere, Stanley Durrleman, Stéphanie Allassonnière. On the convergence of stochastic approximations under a subgeometric ergodic Markov dynamic. 2020. ⟨hal-02549618v2⟩


