Google Selects Center for Visual Computing Director for Research Award
Professor Ravi Ramamoorthi received a Faculty Research Award from Google.
San Diego, Calif., Feb. 18, 2016 -- Google has selected computer scientist Ravi Ramamoorthi, director of the Center for Visual Computing at the University of California, San Diego, to receive one of its Faculty Research Awards in 2016. It is Ramamoorthi's second such award; he received the first in 2014 while still at UC Berkeley, just months before joining the faculty of the Department of Computer Science and Engineering at UC San Diego's Jacobs School of Engineering.
The program received 950 proposals in the company's open call, which aims to "identify and support world-class, permanent faculty pursuing cutting-edge research in areas of mutual interest." The program allows Google to maintain strong ties with academic institutions pursuing innovative research in core areas relevant to its mission, notably computer science and related topics such as machine learning, speech recognition, natural language processing, and computational neuroscience. This year's proposals came from over 350 universities across 55 countries.
From that pool, the selection committee chose 151 winning projects, including Professor Ramamoorthi's.
Ramamoorthi's proposal was submitted in the Physical Interfaces and Immersive Experiences category, a relatively new area for the Faculty Research Awards (with proposals in this category up 19 percent over last year). Specifically, Google will support the computer science professor's research on "Unified Multi-Cue 3D Depth Estimation from Light Field Images."
"We are seeing a revolution in imaging technology, with the conventional camera being replaced with a light-field sensor," said Ramamoorthi. "Instead of capturing a simple 2D image, these new cameras use micro-lenses or multi-camera arrays to capture a full 4D light field." He also believes the next revolution in light-field cameras will involve bringing the technology to smartphones.
"Light-field cameras are a single-shot 3D depth estimation and shape capture device," he said, noting that the passive devices can overtake active devices such as the Kinect sensor, which usually cannot be used outdoors. "Indeed, we believe that light-field cameras will revolutionize and democratize 3D capture, bringing it to the masses."