Image to Image Translation for Domain Adaptation

Department: Computer Science & Engineering
Research Institute Affiliation: Center for Visual Computing
Faculty Advisor(s): David Kriegman | Ravi Ramamoorthi | Manmohan K. Chandraker

Primary Student
Name: Zachary Paul Murez
Phone: 310-913-1490
Grad Year: 2018

We propose a general framework for unsupervised domain adaptation, which allows deep neural networks trained on a source domain to be tested on a different target domain without requiring any training annotations in the target domain. This is achieved by adding extra networks and losses that help regularize the features extracted by the backbone encoder network. To this end, we make novel use of the recently proposed unpaired image-to-image translation framework to constrain the features extracted by the encoder network. Specifically, we require that the extracted features be able to reconstruct the images in both domains. In addition, we require that the distributions of features extracted from images in the two domains be indistinguishable. Many recent works can be seen as special cases of our general framework. We apply our method for domain adaptation between the MNIST, USPS, and SVHN datasets and between the Amazon, Webcam, and DSLR Office datasets for classification tasks, as well as between the GTA5 and Cityscapes datasets for a segmentation task. We demonstrate state-of-the-art performance on each of these datasets.
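The two constraints above can be sketched as a pair of losses on a shared encoder: a reconstruction loss in both domains and an adversarial loss that makes source and target feature distributions indistinguishable. The sketch below uses toy linear "networks" in numpy; all names, shapes, and the specific loss forms (L2 reconstruction, binary cross-entropy adversarial) are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the networks; shapes are illustrative assumptions.
D_IMG, D_FEAT = 16, 8
W_enc = rng.normal(0, 0.1, (D_FEAT, D_IMG))   # shared encoder for both domains
W_dec = rng.normal(0, 0.1, (D_IMG, D_FEAT))   # decoder back to image space
w_disc = rng.normal(0, 0.1, D_FEAT)           # domain discriminator on features

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def losses(x_src, x_tgt):
    # Encode an image from each domain with the shared encoder.
    f_src, f_tgt = W_enc @ x_src, W_enc @ x_tgt
    # (1) Reconstruction loss: features must reconstruct images in BOTH domains.
    rec = (np.mean((W_dec @ f_src - x_src) ** 2)
           + np.mean((W_dec @ f_tgt - x_tgt) ** 2))
    # (2) Adversarial loss: the discriminator labels source features 1 and
    # target features 0, while the encoder is trained to fool it, pushing the
    # two feature distributions to be indistinguishable.
    p_src, p_tgt = sigmoid(w_disc @ f_src), sigmoid(w_disc @ f_tgt)
    disc = -(np.log(p_src + 1e-8) + np.log(1.0 - p_tgt + 1e-8))  # discriminator
    adv_enc = -np.log(p_tgt + 1e-8)                              # encoder side
    return rec, disc, adv_enc

x_src = rng.normal(size=D_IMG)  # image from the labeled source domain
x_tgt = rng.normal(size=D_IMG)  # image from the unlabeled target domain
rec, disc, adv = losses(x_src, x_tgt)
print(rec, disc, adv)
```

In training, the discriminator and encoder losses would be minimized in alternation (or via a gradient-reversal layer), with the reconstruction loss regularizing the shared features throughout.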

Industry Application Area(s)
Aerospace, Defense, Security | Life Sciences/Medical Devices & Instruments | Software, Analytics
