Learning to See Through Turbulent Water

Department: Computer Science & Engineering
Research Institute Affiliation: Center for Visual Computing
Faculty Advisor(s): Manmohan K. Chandraker | Ravi Ramamoorthi | David Kriegman

Primary Student
Name: Zhengqin Li
Email: zhl378@ucsd.edu
Phone: 555-555-5555
Grad Year: 2021

Imaging through dynamic refractive media, such as looking into turbulent water or through hot air, is challenging because light rays are bent by unknown amounts, leading to complex geometric distortions. Inverting these distortions to recover high-quality images is an inherently ill-posed problem, so previous works require extra information such as high frame-rate video or a template image, which limits their applicability in practice. This paper proposes training a deep convolutional neural network to undistort dynamic refractive effects using only a single image. The network solves this ill-posed problem by learning both image priors and distortion priors. It consists of two parts: a warping net that removes geometric distortion and a color predictor net that further refines the restoration. An adversarial loss improves visual quality and helps the network hallucinate missing and blurred information. To train the network, we collect a large set of images distorted by a turbulent water surface. Unlike prior work on water undistortion, our method is trained end-to-end, requires only a single image, and does not use a ground-truth template at test time. Experiments show that by exploiting the structure of the problem, our network outperforms state-of-the-art deep image-to-image translation methods.
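The geometric-undistortion step of the warping net amounts to resampling the input at per-pixel displaced coordinates. Below is a minimal NumPy sketch of such a bilinear warp; the function name, array shapes, and displacement convention are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def bilinear_warp(image, flow):
    """Resample an H x W x C image at coordinates displaced by a
    per-pixel flow field (H x W x 2, ordered as dy, dx).

    Illustrative sketch: a warping net would predict `flow` so that
    resampling the distorted input at identity-grid + flow positions
    approximately undoes the refractive distortion.
    """
    h, w = image.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sample coordinates = identity grid + predicted displacement,
    # clipped to stay inside the image.
    sy = np.clip(ys + flow[..., 0], 0, h - 1)
    sx = np.clip(xs + flow[..., 1], 0, w - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (sy - y0)[..., None]
    wx = (sx - x0)[..., None]
    # Bilinear interpolation of the four neighbouring pixels.
    top = image[y0, x0] * (1 - wx) + image[y0, x1] * wx
    bot = image[y1, x0] * (1 - wx) + image[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

In the paper's pipeline this resampling is differentiable, so the warping net can be trained end-to-end; the color predictor net then refines the resampled result.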

Industry Application Area(s)
Image processing
