Communications in Information and Systems

Volume 12 (2012)

Number 4

Convergence analysis of a randomly perturbed infomax algorithm for blind source separation

Pages: 251–275



Qi He (Department of Mathematics, University of California at Irvine)

Jack Xin (Department of Mathematics, University of California at Irvine)


We present a novel variation of the well-known infomax algorithm for blind source separation. Under natural gradient descent, the infomax algorithm converges to a stationary point of a limiting ordinary differential equation. However, because the corresponding likelihood function has saddle points and local minima, the algorithm may linger near these “bad” stationary points for a long time, especially when the initial data are close to them. To speed up convergence, we add a sequence of random perturbations to the infomax iteration to “shake” the iterating sequence so that it is “captured” by a path descending to a more stable stationary point. We analyze the convergence of the randomly perturbed algorithm and illustrate its fast convergence through numerical examples on blind demixing of stochastic signals. The examples have analytical structure, so the saddle points and local minima of the likelihood functions are explicit. The results may have implications for online learning algorithms in related problems.
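The perturbation idea described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it runs the standard natural-gradient infomax update W ← W + μ(I − φ(y)yᵀ)W on mini-batches, with an added Gaussian perturbation whose size decays over the iterations. The source model (two Laplace sources), the mixing matrix, the tanh score function, and all step-size and decay parameters are assumptions chosen for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy setup: two independent super-Gaussian (Laplace) sources
# mixed by a fixed 2x2 matrix A; X holds the observed mixtures.
n, T = 2, 20000
S = rng.laplace(size=(n, T))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S

def perturbed_infomax(X, mu=0.05, sigma0=0.1, iters=1000, batch=500):
    """Natural-gradient infomax with decaying random perturbations (sketch)."""
    n = X.shape[0]
    W = np.eye(n)          # demixing matrix, initialized at the identity
    I = np.eye(n)
    for k in range(iters):
        idx = rng.integers(0, X.shape[1], size=batch)
        Y = W @ X[:, idx]
        phi = np.tanh(Y)   # score function (an assumed choice for this sketch)
        # natural-gradient infomax update on the mini-batch
        W = W + mu * (I - phi @ Y.T / batch) @ W
        # random perturbation "shaking" the iterate; variance decays in k
        W = W + (sigma0 / (k + 1)) * rng.standard_normal((n, n))
    return W

W = perturbed_infomax(X)
P = W @ A  # near a scaled permutation matrix if separation succeeded
```

The decay schedule sigma0/(k+1) is one simple choice: early perturbations are large enough to push the iterate away from a saddle point, while late perturbations vanish so the iteration settles at a stable stationary point.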


Keywords: blind source separation, unstable equilibria, randomly perturbed infomax method

2010 Mathematics Subject Classification

34A05, 39A50, 65M12
