UCSD Computer Scientists Develop Ubiquitous Video Application for 3D Environments
"Instead of watching all the feeds simultaneously on a bank of monitors, the viewer can navigate an integrated, interactive environment as if it were a video game," said UCSD computer science and engineering professor Bill Griswold, who is working on the project with Ph.D. candidate Neil McCurdy. "RealityFlythrough creates the illusion of complete live camera coverage in a physical space. It's a new form of situational awareness, and we designed a system that can work in unforgiving environments with intermittent network connectivity."
On June 6 at MobiSys 2005 in Seattle, McCurdy presented a joint paper with Griswold about RealityFlythrough and a "systems architecture for ubiquitous video." The Third International Conference on Mobile Systems, Applications and Services brings together academic and industry researchers in the area of mobile and wireless systems.
Griswold and McCurdy are testing their new system as part of the WIISARD (Wireless Internet Information System for Medical Response in Disasters) project, which is funded by NIH's National Library of Medicine. During a May 12 disaster drill organized by San Diego's Metropolitan Medical Strike Team, the researchers shadowed a hazmat team responding to a simulated terrorist attack. They wore cameras mounted on their hardhats, tilt sensors with magnetic compasses, and Global Positioning System (GPS) devices. Walking through the simulated disaster scene at the city's Cruise Ship Terminal, McCurdy and Griswold captured continuous video that was fed over an ad hoc wireless network to a makeshift command post nearby.
The UCSD researchers say the biggest research challenge was to overcome the limitation of incomplete coverage of live video streams. "Every square meter of a space cannot be viewed from every angle with a live video stream at any given moment," said Griswold, an academic participant in the California Institute for Telecommunications and Information Technology (Calit2). "We had to find a way to fill in the empty space that would give the user a sense of how the video streams relate to one another spatially."
The fundamental research finding to date, according to McCurdy, is that some of the processing can be offloaded to the human. "We take advantage of a principle called closure, which allows our brains to make sense of incomplete information. The visual cortex does this all the time when it corrects for blind spots in our vision, for example," explained the graduate student. "RealityFlythrough supplies as much information as possible to the human operator, and the operator can easily fill in the blanks."
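The paper describes how RealityFlythrough bridges gaps between camera views with rendered transitions, letting the operator's sense of closure do the rest. A minimal sketch of that idea, assuming a simplified 2D pose of (x, y, compass heading) rather than the system's actual camera model, is to interpolate between the last known pose of one camera and the pose of the next:

```python
def interpolate_pose(p0, p1, t):
    """Blend two camera poses (x, y, heading_deg) at fraction t in [0, 1].

    Heading is blended along the shortest arc, so a turn from 350 deg to
    10 deg rotates through 0 deg rather than sweeping backward 340 deg.
    """
    x = p0[0] + (p1[0] - p0[0]) * t
    y = p0[1] + (p1[1] - p0[1]) * t
    d = (p1[2] - p0[2] + 180) % 360 - 180  # shortest signed angular difference
    heading = (p0[2] + d * t) % 360
    return (x, y, heading)

# A transition is a sequence of intermediate poses; as the view slides from
# camera A toward camera B, the viewer mentally "closes" the unseen gap.
transition = [interpolate_pose((0.0, 0.0, 350.0), (4.0, 2.0, 10.0), i / 4)
              for i in range(5)]
```

The shortest-arc handling of heading is the one subtlety: a naive linear blend of compass angles would spin the virtual camera the long way around whenever the two headings straddle north.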
Since dead-reckoning systems lose accuracy over time, the researchers implemented a system that allows the camera operators to periodically correct their locations. "We created a Wizard-of-Oz approach to correcting inadequate location information," explained McCurdy. "Since we're combining this self-reporting technology with GPS or dead reckoning, it only has to be done occasionally. From all the footage we got from the May 12 drill, I only had to put in four corrections, and that was sufficient to give us pretty good accuracy indoors."
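The correction scheme McCurdy describes can be sketched as a tracker that integrates relative movement and accepts occasional absolute fixes. This is a hypothetical illustration, not WIISARD's implementation; the class and method names are invented for the example:

```python
class PositionEstimator:
    """Toy dead-reckoning tracker with occasional absolute corrections.

    Relative steps accumulate error over time; a sparse ground-truth fix
    (e.g., an operator marking their true location) resets the drift.
    """

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

    def step(self, dx, dy):
        # Integrate a relative movement estimate; error grows every step.
        self.x += dx
        self.y += dy

    def correct(self, x, y):
        # An occasional absolute fix discards the accumulated drift.
        self.x, self.y = x, y


est = PositionEstimator()
for _ in range(10):
    est.step(1.0, 0.1)   # each step drifts 0.1 m sideways
est.correct(10.0, 0.0)   # one manual fix cancels the accumulated 1 m of drift
```

Because the fix replaces the estimate outright, a handful of corrections over an entire drill, four in the May 12 footage, is enough to keep indoor positions usable between GPS coverage.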
McCurdy will work on refining the system for his dissertation. And if consumers start to show interest in RealityFlythrough, he holds open the possibility of starting up a company to commercialize the technology -- but only after finishing his Ph.D. in 2006.
- McCurdy discusses potential commercial applications of the technology (Length: 1:49)
- 'Tele-Reality in the Wild' research video produced by Neil McCurdy (Length: 6:53)
- Neil McCurdy explains why RealityFlythrough is a hybrid of tele-presence, tele-reality and ubiquitous computing (Length: 3:26)