Please use this identifier to cite or link to this item:
https://hdl.handle.net/2440/44932
Type: Conference paper
Title: A decoupled algorithm for vision parameter estimation with application to the trifocal tensor
Author: Scoleri, T.; Chojnacki, W.; Brooks, M.
Citation: Proceedings of DICTA 2007, pp. 138-143
Publisher: IEEE
Publisher Place: CDROM
Issue Date: 2007
ISBN: 0769530672; 9780769530673
Conference Name: Biennial Conference of the Australian Pattern Recognition Society on Digital Image Computing Techniques and Applications (9th : 2007 : Glenelg, Australia)
Editor: Bottema, M.
Statement of Responsibility: Scoleri, Tony, Chojnacki, Wojciech and Brooks, Michael J.
Abstract: We consider the problem of estimating parameters of a model described by a system of equations which underlies a wide class of computer vision applications. One method to solve such a problem is the fundamental numerical scheme (FNS) previously proposed by some of the authors. In this paper, a more stable version of FNS is developed, with better convergence properties than the original version. The improvement in performance is achieved by reducing the original estimation problem to a couple of problems of lower dimension. By way of example, the new algorithm has been applied to the problem of estimating the trifocal tensor relating three views of a scene. Experiments carried out with both synthetic and real images reveal the new estimator to be more stable than the original FNS method, and commensurate in accuracy with the Gold Standard maximum likelihood estimator.
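The fundamental numerical scheme mentioned in the abstract iterates an eigenvector computation until a fixed point is reached. The sketch below is an illustrative reconstruction based on the published FNS literature, not code from this paper; the `fns` function name, the line-fitting example, and the carrier matrices are all assumptions made for the demonstration:

```python
import numpy as np

def fns(As, Bs, theta0, n_iter=100, tol=1e-10):
    """Sketch of the fundamental numerical scheme (FNS).

    At each step, form the matrix
        X(theta) = sum_i A_i / (theta' B_i theta)
                 - sum_i (theta' A_i theta) / (theta' B_i theta)^2 * B_i
    and replace theta by the eigenvector of X(theta) whose eigenvalue
    has smallest magnitude; a fixed point satisfies X(theta) theta = 0.
    """
    theta = theta0 / np.linalg.norm(theta0)
    for _ in range(n_iter):
        X = np.zeros_like(As[0], dtype=float)
        for A, B in zip(As, Bs):
            d = theta @ B @ theta
            X += A / d - ((theta @ A @ theta) / d**2) * B
        w, V = np.linalg.eigh(X)
        new = V[:, np.argmin(np.abs(w))]
        if new @ theta < 0:          # eigenvector sign is arbitrary
            new = -new
        if np.linalg.norm(new - theta) < tol:
            return new
        theta = new
    return theta

# Hypothetical toy use: fit a line ax + by + c = 0 to noisy points.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
pts = np.stack([t, 1.0 - t], axis=1) + 0.01 * rng.standard_normal((50, 2))
us = [np.array([x, y, 1.0]) for x, y in pts]         # carrier vectors
As = [np.outer(u, u) for u in us]
Bs = [np.diag([1.0, 1.0, 0.0])] * len(us)            # noise in x, y only
theta0 = np.linalg.eigh(sum(As))[1][:, 0]            # algebraic LS initialisation
theta = fns(As, Bs, theta0)
```

The paper's contribution, per the abstract, is a decoupled variant that splits this single iteration over the full parameter vector into problems of lower dimension, improving stability; the sketch shows only the original scheme that the variant builds on.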
Description: © Copyright 2008 IEEE – All Rights Reserved
DOI: 10.1109/DICTA.2007.4426788
Published version: http://dx.doi.org/10.1109/dicta.2007.4426788
Appears in Collections: Aurora harvest 6; Computer Science publications
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.