Communications in Information and Systems

Volume 16 (2016)

Number 4

Multiview conversion of 2D cartoon images

Pages: 229 – 254

DOI: http://dx.doi.org/10.4310/CIS.2016.v16.n4.a2

Authors

Shao-Ping Lu (Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Belgium)

Sibo Feng (Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Belgium)

Beerend Ceulemans (Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Belgium)

Miao Wang (Department of Computer Science, Tsinghua University, Beijing, China)

Rui Zhong (Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Belgium)

Adrian Munteanu (Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Belgium)

Abstract

Multiview images offer great potential for immersive autostereoscopic displays, since multiple perspectives of a dynamic 3D scene can be presented to a viewer simultaneously. Traditional 2D cartoons, however, contain no depth information, and their painting styles usually differ markedly from images captured in the real world. This renders existing 2D-to-3D conversion techniques inapplicable, owing to the difficulty of geometry recovery or the lack of sufficient data. This paper introduces an interactive scheme for converting a single 2D cartoon image into multiview images. The proposed approach consists mainly of depth assignment and view synthesis. An interactive depth-assignment approach treats a cartoon image as a composition of ordered depth layers, to which depth can be easily assigned. A depth-smoothing procedure is then introduced by solving a Laplace equation with boundary conditions, and further depth refinement is performed to produce a complete depth map. Finally, an interactive image-inpainting method is proposed to perform multiview image synthesis. Experimental results demonstrate the effectiveness and efficiency of the proposed approach.
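The depth-smoothing step described in the abstract — solving a Laplace equation with boundary conditions — can be illustrated with a generic sketch. The snippet below is not the authors' implementation; it shows the standard technique of harmonically interpolating depth inside a region (Jacobi iteration on the discrete Laplacian) while the surrounding known depth values act as Dirichlet boundary conditions. The function name, grid size, and layer depths are illustrative assumptions.

```python
import numpy as np

def smooth_depth_laplace(depth, mask, iters=500):
    """Smooth/fill depth inside `mask` by solving the discrete Laplace
    equation (Jacobi iteration), with the depth values outside `mask`
    held fixed as Dirichlet boundary conditions.

    depth : 2D float array of per-pixel depth (known outside mask)
    mask  : 2D bool array; True where depth should be recomputed
    """
    d = depth.astype(float).copy()
    for _ in range(iters):
        # Average of the 4-neighbours; pixels outside the mask stay fixed.
        avg = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0)
                      + np.roll(d, 1, 1) + np.roll(d, -1, 1))
        d[mask] = avg[mask]
    return d

# Toy example: unknown depth between a near layer (1.0) and a far layer (5.0).
depth = np.zeros((8, 8))
depth[:, 0] = 1.0    # near boundary layer
depth[:, -1] = 5.0   # far boundary layer
mask = np.zeros_like(depth, dtype=bool)
mask[:, 1:-1] = True # interior region to be smoothed
smoothed = smooth_depth_laplace(depth, mask)
```

In this 1D-like toy case the harmonic solution is simply a linear ramp between the two boundary depths; in a real cartoon layer map the same iteration produces a smooth depth transition that respects the layer boundaries.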


Research supported by the 3DLicornea project funded by the Brussels Region (Brussels Institute for Research and Innovation – Innoviris). Miao Wang was supported by the China Postdoctoral Science Foundation (Project Number 2016M601032), the National Key Technology R&D Program (Project Number 2016YFB1001402), the Joint NSFC-ISF Research Program (Project Number 61561146393), a Research Grant of the Beijing Higher Institution Engineering Research Center, and the Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology.