Title: Temporally-coherent novel video synthesis using texture-based priors
Authors: Ali Shahrokni, Oliver Woodford, Ian Reid
Citation: IPSJ Transactions on Computer Vision and Applications, 2009; 1:72-81
Publisher: Information Processing Society of Japan
Abstract: In this paper we propose a method to construct a virtual sequence for a camera moving through a static environment, given an input sequence from a different camera trajectory. Existing image-based rendering techniques can generate photorealistic images given a set of input views, though the output images almost unavoidably contain small regions where the colour has been incorrectly chosen. In a single image these artifacts are often hard to spot, but they become more obvious when viewing a real image with its virtual stereo pair, and even more so when a sequence of novel views is generated, since the artifacts are rarely temporally consistent. To address this problem of consistency, we propose a new spatio-temporal approach to novel video synthesis. Our method exploits epipolar geometry to impose constraints on the temporal coherence of the rendered views. The pixels in the output video sequence are modelled as nodes of a 3-D graph. We define an MRF on the graph which encodes photoconsistency of pixels as well as texture priors in both space and time. Unlike methods based on scene geometry, which yield highly connected graphs, our approach results in a graph whose degree is independent of scene structure. The MRF energy is therefore tractable and we solve it for the whole sequence using a state-of-the-art message passing optimisation algorithm. We demonstrate the effectiveness of our approach in reducing temporal artifacts.
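The abstract describes labelling pixels by minimising an MRF energy (unary photoconsistency costs plus pairwise coherence priors) with a message passing algorithm. As a rough illustration of the idea, and not the paper's actual method, the sketch below runs exact min-sum message passing on a 1-D chain of pixels with two candidate colours; the function name, the toy costs, and the reduction to a chain are all invented for this example, whereas the paper optimises a full 3-D spatio-temporal graph with a state-of-the-art solver.

```python
import numpy as np

def min_sum_chain(unary, pairwise):
    """Exact min-sum message passing on a chain MRF.

    unary:    (n, k) array of per-pixel label costs (photoconsistency stand-in).
    pairwise: (k, k) array of neighbour label costs (coherence prior stand-in).
    Returns the cost-minimising label assignment.
    """
    n, k = unary.shape
    msg = np.zeros((n, k))            # msg[i][l]: best cost of pixels 0..i ending in label l
    back = np.zeros((n, k), dtype=int)
    msg[0] = unary[0]
    for i in range(1, n):
        # previous-label cost plus transition cost, minimised over the previous label
        total = msg[i - 1][:, None] + pairwise    # shape (k_prev, k_cur)
        back[i] = np.argmin(total, axis=0)
        msg[i] = unary[i] + np.min(total, axis=0)
    # Trace back the minimising labels from the last pixel.
    labels = np.empty(n, dtype=int)
    labels[-1] = int(np.argmin(msg[-1]))
    for i in range(n - 2, -1, -1):
        labels[i] = back[i + 1][labels[i + 1]]
    return labels.tolist()

# Toy example: 4 pixels, 2 candidate colours; the pairwise term penalises
# label changes between neighbours, so the cheap-but-inconsistent choice at
# pixel 1 is overridden in favour of a coherent assignment.
unary = np.array([[0.0, 2.0], [0.5, 1.5], [2.0, 0.0], [2.0, 0.0]])
pairwise = np.array([[0.0, 1.0], [1.0, 0.0]])
print(min_sum_chain(unary, pairwise))  # → [0, 0, 1, 1]
```

On a chain this dynamic-programming recursion is exact; on the loopy 3-D graphs the abstract describes, message passing is iterative and approximate, which is why keeping the graph degree low (independent of scene structure, as the authors note) matters for tractability.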
Rights: Copyright © 2009 by the Information Processing Society of Japan
Appears in Collections: Computer Science publications