Statistics and Its Interface

Volume 9 (2016)

Number 4

Special Issue on Statistical and Computational Theory and Methodology for Big Data

Guest Editors: Ming-Hui Chen (University of Connecticut); Radu V. Craiu (University of Toronto); Faming Liang (University of Florida); and Chuanhai Liu (Purdue University)

Embarrassingly parallel sequential Markov-chain Monte Carlo for large sets of time series

Pages: 497–508



Roberto Casarin (Department of Economics, Ca’ Foscari University of Venice, Italy)

Radu V. Craiu (Department of Statistical Sciences, University of Toronto, Ontario, Canada)

Fabrizio Leisen (School of Mathematics, Statistics and Actuarial Science, University of Kent, Canterbury, Kent, United Kingdom)


Bayesian computation crucially relies on Markov chain Monte Carlo (MCMC) algorithms. For massive data sets, running the Metropolis-Hastings sampler to draw from the posterior distribution becomes prohibitively expensive because of the large number of likelihood terms that must be evaluated at each iteration. To perform Bayesian inference for a large set of time series, we consider an algorithm that combines “divide and conquer” ideas previously used to design MCMC algorithms for big data with a sequential MCMC strategy. The performance of the method is illustrated using a large set of financial data.
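To make the “divide and conquer” idea concrete, the following minimal sketch (not the authors’ algorithm; a generic embarrassingly parallel scheme under illustrative assumptions) splits the data into shards, runs an independent random-walk Metropolis sampler on each shard’s subposterior (shard likelihood times a fractional prior), and combines the subposterior draws by precision-weighted consensus averaging. The model, shard count, and all function names here are hypothetical choices for illustration.

```python
import numpy as np

def subposterior_logpdf(mu, shard, k):
    """Log of shard likelihood N(x | mu, 1) times prior N(mu | 0, 10^2)^(1/k)."""
    loglik = -0.5 * np.sum((shard - mu) ** 2)
    logprior = -0.5 * mu ** 2 / 100.0
    return loglik + logprior / k

def metropolis(shard, k, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis on one shard's subposterior; returns post-burn-in draws."""
    rng = np.random.default_rng(seed)
    mu, lp = 0.0, subposterior_logpdf(0.0, shard, k)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = mu + step * rng.standard_normal()
        lp_prop = subposterior_logpdf(prop, shard, k)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept step
            mu, lp = prop, lp_prop
        draws[i] = mu
    return draws[n_iter // 2:]                    # discard first half as burn-in

def consensus_combine(all_draws):
    """Combine subposterior draws, weighting each shard by its inverse variance."""
    w = np.array([1.0 / np.var(d) for d in all_draws])
    stacked = np.vstack(all_draws)                # shape (K, n_draws)
    return (w[:, None] * stacked).sum(axis=0) / w.sum()

# Simulated data: 1200 observations from N(2, 1), split across K = 4 shards.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=1200)
K = 4
shards = np.array_split(data, K)
# The K samplers are independent, so in practice they run in parallel.
draws = [metropolis(s, K, seed=i) for i, s in enumerate(shards)]
combined = consensus_combine(draws)
```

Because the shard-level chains never communicate, the scheme is embarrassingly parallel: only the final draws are gathered and averaged, and the combined posterior mean recovers the full-data posterior mean in this Gaussian setting.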


Keywords: big data, panel of time series, parallel Monte Carlo, sequential Markov-chain Monte Carlo

2010 Mathematics Subject Classification

Primary 37M10, 62Cxx. Secondary 62M99.


Published 14 September 2016