|Title:||Blind image deconvolution by automatic gradient activation|
|Citation:||Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016), 2016 / vol.2016-December, pp.1827-1836|
|Series/Report no.:||IEEE Conference on Computer Vision and Pattern Recognition|
|Conference Name:||29th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2016) (26 Jun 2016 - 01 Jul 2016 : Las Vegas, NV)|
|Author(s):||Dong Gong, Mingkui Tan, Yanning Zhang, Anton van den Hengel, Qinfeng Shi|
|Abstract:||Blind image deconvolution is an ill-posed inverse problem that is often addressed through the application of an appropriate prior. Although some priors are informative in general, many images do not strictly conform to them, leading to degraded performance in kernel estimation. More critically, real images may be contaminated by nonuniform noise such as saturation and outliers. Methods that remove specific image areas based on such priors have been proposed, but they operate either manually or with fixed criteria. We show here that a subset of the image gradients is adequate to estimate the blur kernel robustly, regardless of whether the gradient image is sparse. We thus introduce a gradient activation method that automatically selects a subset of gradients of the latent image within a cutting-plane-based optimization scheme for kernel estimation. Our model requires no extra assumptions, which greatly improves its accuracy and flexibility. More importantly, the proposed method affords great convenience for handling noise and outliers. Experiments on both synthetic data and real-world images demonstrate the effectiveness and robustness of the proposed method in comparison with state-of-the-art methods.|
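The abstract's central idea, selecting a subset of informative image gradients rather than using all of them for kernel estimation, can be illustrated with a toy sketch. This is not the paper's algorithm (which chooses the subset adaptively inside a cutting-plane optimization); it is a minimal magnitude-thresholding illustration, and the function name and the `keep_ratio` knob are hypothetical.

```python
import numpy as np

def activate_gradients(img, keep_ratio=0.1):
    """Toy illustration of gradient selection: keep only the
    strongest-magnitude gradients and zero out the rest.
    `keep_ratio` (a made-up parameter) is the rough fraction of
    pixels whose gradients stay active."""
    # Forward-difference gradients, padded so shapes match the image.
    gx = np.diff(img, axis=1, append=img[:, -1:])  # horizontal gradient
    gy = np.diff(img, axis=0, append=img[-1:, :])  # vertical gradient
    mag = np.hypot(gx, gy)
    # Threshold so that roughly `keep_ratio` of pixels remain active.
    thresh = np.quantile(mag, 1.0 - keep_ratio)
    mask = mag >= thresh
    return gx * mask, gy * mask, mask

# Example: a synthetic image with a single sharp vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
gx_a, gy_a, mask = activate_gradients(img, keep_ratio=0.1)
# Only the gradients at the edge survive the activation step.
```

In the paper's setting, only the activated gradients would feed the kernel-estimation subproblem, which is what makes the approach robust to flat or noise-dominated regions.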
|Rights:||© 2016 IEEE|
|Appears in Collections:||Computer Science publications|
Files in This Item:
|RA_hdl_105522.pdf||Restricted access||2.93 MB||Adobe PDF|
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.