Aided Blind Deblurring of Images Degraded by Motion Blur
Motion blur is a common phenomenon in photographs, arising when relative motion between the camera and the captured scene occurs during the exposure time. It significantly degrades image quality and strongly affects the performance of subsequent applications. Image deblurring aims to recover a sharp image from blurred observations and involves two subproblems: blur kernel estimation and image deconvolution. The accuracy of the blur kernel is critical to deblurring performance.
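For orientation, the standard uniform (spatially invariant) blur model underlying these two subproblems can be written as follows; the notation here is generic rather than taken from the dissertation, which ultimately treats the harder spatially varying case.

```latex
% Generic uniform blur model: blurred observation B, latent sharp image I,
% blur kernel (point spread function) K, additive noise N.
B = K \ast I + N
% Kernel estimation recovers K from B; image deconvolution then recovers I
% given B and the estimated kernel K.
```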
This work explores the possibility of using inertial sensors to facilitate kernel estimation. The key lies in addressing the noise amplification issue that arises when inertial sensors are used for motion estimation. A single-image deblurring approach is first proposed, in which an alternating optimization scheme gradually moves the drifted camera motion trajectory toward the correct position. By replacing the camera pose space in the geometric model with a set of motion trajectories, an explicit trajectory can be generated, bridging the gap between kernel estimation and motion estimation. This also allows us to tackle the blur variation caused by the electronic rolling shutter mechanism, which imposes temporal constraints on camera motion. To improve image restoration quality, a joint deblurring and demosaicing method is developed that leverages both spatial and spectral priors.
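The sketch below illustrates the general flavor of such an alternating scheme, assuming a simple in-plane translation trajectory, a splatting kernel renderer, a Wiener deconvolution step, and a coordinate-descent trajectory correction; these choices are illustrative assumptions, not the formulation used in the dissertation.

```python
# Hedged sketch: an inertial-sensor-derived (drifted) trajectory renders a blur
# kernel, the image is deconvolved with it, and the trajectory is then locally
# perturbed so that re-blurring the latent image better explains the observation.
import numpy as np
from numpy.fft import fft2, ifft2

def kernel_from_trajectory(traj, size=15):
    """Splat a 2D in-plane translation trajectory (N x 2, pixels) into a PSF."""
    k = np.zeros((size, size))
    c = size // 2
    for dx, dy in traj:
        x, y = int(round(c + dx)), int(round(c + dy))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] += 1.0
    return k / max(k.sum(), 1e-8)

def deconvolve(blurred, kernel, snr=100.0):
    """Non-blind Wiener deconvolution with the current kernel estimate."""
    K = fft2(kernel, s=blurred.shape)
    H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(ifft2(fft2(blurred) * H))

def reblur_error(latent, kernel, blurred):
    """Squared error between the re-blurred latent image and the observation."""
    K = fft2(kernel, s=latent.shape)
    return np.sum((np.real(ifft2(fft2(latent) * K)) - blurred) ** 2)

def refine_trajectory(blurred, traj0, iters=5, step=1.0):
    """Alternate between image estimation and local trajectory correction."""
    traj = np.array(traj0, dtype=float)
    for _ in range(iters):
        latent = deconvolve(blurred, kernel_from_trajectory(traj))      # image step
        for i in range(len(traj)):                                      # motion step
            best = reblur_error(latent, kernel_from_trajectory(traj), blurred)
            for d in np.array([[step, 0], [-step, 0], [0, step], [0, -step]]):
                cand = traj.copy()
                cand[i] += d
                e = reblur_error(latent, kernel_from_trajectory(cand), blurred)
                if e < best:
                    best, traj = e, cand
    return traj, deconvolve(blurred, kernel_from_trajectory(traj))
```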
In practice, camera shake is not restricted to in-plane translation. The presence of camera rotation leads to spatially varying blur. A multi-image deblurring framework is proposed to mitigate the difficulty arising from the large number of unknowns. Exposure times are varied among three captured images so that a long-exposed blurred image is bracketed by two short-exposed noisy but sharp images, which are related through simultaneously recorded inertial measurements. This special exposure arrangement effectively addresses the problem inherent in reconstructing camera motion from inertial measurements. Experimental evaluations show its robustness to longer exposure times and larger measurement noise. Additionally, the proposed framework enables the incorporation of depth, which also contributes to blur variation. The short-exposed images provide the constraints needed for initial depth map inference, and an MRF framework is formulated to further refine the depth map by exploiting both stereo cues and motion blur cues.
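As an illustration of how the inertial measurements relate the three exposures, the sketch below integrates gyroscope readings recorded during the long exposure into per-sample camera rotations and maps each rotation to a rotation-only image warp; the first-order integration, the intrinsics matrix, and the function names are assumptions for illustration only.

```python
# Illustrative integration of gyroscope data recorded during the long exposure;
# the two short-exposed sharp frames that bracket it can then anchor the
# endpoints of the recovered motion.  First-order integration and the
# rotation-only homography are simplifying assumptions.
import numpy as np

def rotations_from_gyro(omegas, dt):
    """Integrate angular velocities (N x 3, rad/s) into camera rotations."""
    Rs, R = [], np.eye(3)
    for wx, wy, wz in np.asarray(omegas) * dt:
        W = np.array([[0.0, -wz,  wy],
                      [ wz, 0.0, -wx],
                      [-wy,  wx, 0.0]])   # skew-symmetric matrix of the increment
        R = R @ (np.eye(3) + W)           # small-angle (first-order) update
        Rs.append(R.copy())
    return Rs

def rotation_homography(K, R):
    """Rotation-only image warp H = K R K^{-1} for camera intrinsics K."""
    return K @ R @ np.linalg.inv(K)
```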
As a third contribution, a dynamic scene deblurring method, which handles both camera shake blur and object motion blur, is developed by augmenting the previous framework with a layered blur model. In contrast to previous methods that parametrize the motion blur of each layer individually, we exploit the fact that camera shake has a global influence to decompose the motion of the foreground layer, establishing a tighter constraint between the motions of the layers. The problem is formulated as a regularized energy function whose minimization jointly estimates the motion, layer appearance, segmentation, and depth map. Compared to previous methods, our approach is automatic, requiring neither user interaction nor special hardware.
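A regularized energy of the kind described above might take the following schematic form; the specific data term, blur operators, and total-variation priors are illustrative assumptions rather than the dissertation's actual objective.

```latex
% Schematic layered blur energy (assumed form).  I_f, I_b: foreground and
% background appearance; \alpha: segmentation mask; d: depth map;
% \theta_c: shared camera motion; \theta_o: residual object motion;
% \mathcal{K}: motion-dependent blur operator applied per layer.
E(I_f, I_b, \alpha, d, \theta_c, \theta_o) =
    \Big\| B - \Big[ \mathcal{K}_{\theta_c \circ \theta_o}\!\big(\alpha \odot I_f\big)
          + \mathcal{K}_{\theta_c,\, d}\!\big((1-\alpha) \odot I_b\big) \Big] \Big\|_2^2
  + \lambda_1 \|\nabla I_f\|_1 + \lambda_2 \|\nabla I_b\|_1
  + \lambda_3 \|\nabla \alpha\|_1 + \lambda_4 \|\nabla d\|_1
```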
This work improves deblurring performance by incorporating more of the factors that influence blur kernel estimation, and it provides insights into how to efficiently fuse image data with inertial information. The proposed methods are tested on a variety of images, and comparative experiments demonstrate their superior performance.
History
Date Modified
- 2017-06-05
Defense Date
- 2017-02-15
Research Director(s)
- Robert L. Stevenson
Committee Members
- Ken Sauer
- Peter Bauer
- Scott Howard
Degree
- Doctor of Philosophy
Degree Level
- Doctoral Dissertation
Program Name
- Electrical Engineering