
NEWS RELEASE

June 26, 2002

Media Contacts:

   Doug Ramsey (858) 822-5825 dramsey@ucsd.edu

ECE AUTO SAFETY RESEARCH IS PROFILED IN SUMMER ISSUE OF "UCSD PERSPECTIVES"

-- Mohan Trivedi, Professor
Electrical & Computer Engineering

In its Summer 2002 issue, the magazine "UCSD Perspectives" profiles cutting-edge research into the use of computer vision and other technologies for auto safety. The article focuses on several projects underway in the Computer Vision and Robotics Research lab of ECE professor Mohan Trivedi. The full text follows:

"Safety in Digital Numbers"

A car phone system that will notify a caller if your driving conditions are hazardous, or if you're angry or distracted.

Airbags that 'refuse' to deploy if a passenger is leaning forward at the time of impact.

A steering wheel that begins vibrating if you’re drowsy or take your eyes too often off the road.

Sound out-there? Way out? Perhaps, but those and other innovations that could take automobile safety to the next level are under active development at the Jacobs School of Engineering’s Computer Vision and Robotics Research (CVRR) laboratory. Located inside the SERF building on campus, the lab is the brainchild of professor Mohan Trivedi. "We do research on a wide range of non-transportation projects as well," says the Electrical and Computer Engineering professor. "But cars are uniquely suited as platforms for the next generation of electronics, and because of the benefit to society, there is compelling interest among funding agencies and automakers to see if we can harness the new technology to enhance auto safety."

CVRR auto research grew out of work Trivedi and his team did on a 'smart' room, filled with cameras, microphones and other sensors. The intelligent environment in Trivedi's lab is constantly 'aware' of who is inside and who is talking, and it logs the activity. From smart rooms, it was only a short jump to research on smart cars. "In a way, cars are easier because inside the passenger compartment, riders are stationary rather than moving about, and they generally look in one direction," says Trivedi. "On the other hand, cars are mobile environments, and some of the functions required for safety demand split-second timing—so from that viewpoint, it's an entirely new challenge."

At its core, the auto safety research relies on a series of sensors inside and outside the car to keep track of who's driving, noise levels, passenger positions, and driving conditions such as weather and traffic. The information is derived from arrays of video cameras and other sensors, a local area network, on-board computer processing, plus wireless technology to send and receive data.

Inside the CVRR lab on the ground floor of the SERF building, graduate student Ofer Achler sits in the driver's seat of a "vehicle" that looks like something out of a Mad Max movie. There are no panels, just steel tubing that traces the bare-bones outline of a car. Four seats. A steering wheel and column. Instead of wheels, the contraption sits on cement blocks. On second glance, some unusual accoutrements become apparent. Two omni-directional cameras provide 360-degree coverage inside the car. A thermal camera. So-called trinocular stereo cameras that capture depth. Plus external and internal cameras that provide a driver's-eye view of the road, and a view of the driver looking in. There are also laser range-finders to monitor the car's distance from other vehicles—or, in the case of this lab, from some old equipment. All told, more than a dozen cameras and other sensors. Trivedi has a word for them: DIVA—distributed interactive video arrays.

It doesn't look like the car of the future, but in many ways, that's what it is. Already, the average American car comes equipped with more than 25 sensors and microprocessors to gauge everything from engine temperature and speed, to fuel levels, engine diagnostics and even the car's location. But while today's car sensors monitor the state of the car, tomorrow's sensors will monitor the car's occupants, as well as other conditions inside and outside the vehicle.

Car Phones: Safe At Any Speed?

One of Trivedi’s research projects began with a call from a research director at DaimlerChrysler, the giant U.S.-German automaker that makes Mercedes Benz and Chrysler vehicles. He said he wanted to discuss how to make it safer for drivers to use cell phones. "I assumed he must want us to devise a system for easier no-hands operation of the phone, but he had something much more revolutionary in mind," recalls Trivedi. "He wanted us to re-think what it would take to make driving and talking on a cell phone safe, whether you’re using your hands or not."

The real safety problem, Trivedi concluded, is that a caller on the other end of the telephone call doesn't know the context within which the driver is speaking. After all, if the family is in the car, and someone asks a question from the back seat, and the driver answers the question, the interaction is generally considered safe. The reason: The person in the back seat is aware of the context: whether there's a lot of traffic; whether the driver just spilled coffee on himself; whether his eyes are off the road; and so on. Obviously, if the situation seems hazardous, the passenger may not even ask the question, or will use language and demeanor appropriate to the situation.

So, Trivedi wondered, is there a way to give the person on the other end of the cell-phone call the same context they would have if they were a passenger in the car? With $250,000 in funding from DaimlerChrysler over two years, Trivedi began an ambitious program to explore "visual context capture and televiewing for enhanced driver safety and convenience." Not yet halfway through the project, he says capturing the context is the easy part. "Our sensors can detect in minute detail everything that is happening inside the car, including the driver's body language, and it's fairly easy to send a video feed from multiple cameras," he explains. "But to be useful, this technology has to result in easy-to-understand cues to the caller that require as little bandwidth as possible."

To deal with the bandwidth issue, and counter concerns that supplying video from the car may infringe on privacy, Trivedi's team is working on the use of "avatars"—basically, cartoon-type characters that reflect real-time visual changes. For the driver's face, Trivedi's researchers have designed a yellow circle with black dots for eyes, and lines for eyebrows and mouth. As a student smiles or frowns, the video image is processed through a program and is reflected in real time on the avatar's 'face.' A tired driver's drowsy eyes would show up as squinting eyes in the cartoon. "The great benefit of this is that sending the avatar's information over the air takes as little as 8 bits, while sending the full picture gobbles up between 1 and 5 million bits," notes Trivedi. "The important issue is: can the avatar provide the caller with a good sense of the affect of the driver? We think it can, although we still need to develop the software so it more clearly reflects a range of emotions and positions." To do that, grad student Joel McCall is working on gaze detection—analyzing video of the driver's eye movement to track not just which way the driver is looking, but also his emotional or affective state based on changes in the dilation of his pupils (including reaction to alcohol).
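To make the bandwidth arithmetic above concrete, here is a minimal sketch, in Python, of how an avatar update could be packed into a single byte rather than a streamed video frame. The field names, bit widths and state labels are illustrative assumptions, not the CVRR lab's actual encoding.

    # Hypothetical sketch: packing a driver-state "avatar" update into one
    # byte, in the spirit of the 8-bit figure cited above.
    from dataclasses import dataclass

    @dataclass
    class AvatarState:
        mouth: int  # 0 = neutral, 1 = smile, 2 = frown, 3 = open (2 bits)
        eyes: int   # 0 = open, 1 = drowsy/squinting, 2 = closed (2 bits)
        brows: int  # 0 = neutral, 1 = raised, 2 = furrowed (2 bits)
        gaze: int   # 0 = road, 1 = left, 2 = right, 3 = down (2 bits)

    def pack(state: AvatarState) -> int:
        """Pack the four 2-bit fields into a single byte for transmission."""
        return (state.mouth & 0b11) | ((state.eyes & 0b11) << 2) \
            | ((state.brows & 0b11) << 4) | ((state.gaze & 0b11) << 6)

    def unpack(byte: int) -> AvatarState:
        """Recover the avatar fields on the caller's handset."""
        return AvatarState(byte & 0b11, (byte >> 2) & 0b11,
                           (byte >> 4) & 0b11, (byte >> 6) & 0b11)

    # A drowsy driver glancing down: one byte on the wire, versus the
    # millions of bits a full video picture would consume.
    update = pack(AvatarState(mouth=0, eyes=1, brows=0, gaze=3))
    assert unpack(update).eyes == 1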

While facial avatars are one way to go, Trivedi is also exploring other ways to convey context to the caller. Traffic information observed by the car's external cameras could send a signal to the caller's phone (a red light for heavy traffic, for instance, or a pre-set audio signal). One of his researchers, Roscoe Cook, is also successfully capturing speed limit and other road signs detected in the driver's field of vision, and turning them instantly into icons. From there, it would be easy to display the latest speed-limit or stop sign on the driver’s own dashboard, or trigger an audio cue to 'please slow down' if the car is traveling above the posted speed limit.
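A minimal sketch of that dashboard cue, assuming a sign detector has already read the posted limit; the function name and the two-mph tolerance are hypothetical choices, not details from Cook's system.

    # Hypothetical sketch: turning a detected speed-limit sign into either a
    # dashboard icon or the 'please slow down' audio cue described above.
    def speed_cue(posted_limit_mph: float, current_speed_mph: float,
                  tolerance_mph: float = 2.0) -> str:
        """Return the cue to present to the driver."""
        if current_speed_mph > posted_limit_mph + tolerance_mph:
            return "audio: please slow down"
        return f"icon: speed limit {posted_limit_mph:.0f}"

    print(speed_cue(posted_limit_mph=65, current_speed_mph=72))  # audio cue
    print(speed_cue(posted_limit_mph=65, current_speed_mph=60))  # icon only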

'Smart' Airbags

Trivedi and his colleagues are also using the camera arrays and other sensors to capture and then analyze the posture of car passengers. With $600,000 in funding from the University of California’s Digital Media Innovation (DiMI) program and Volkswagen of America, Trivedi is exploring how to use the posture and body information to enhance the safe deployment of airbags.

Since 1990 in the U.S. alone, more than 200 people have lost their lives as a direct result of airbag deployment. Most were children, or small adults.

Even average-size passengers are at risk if they are leaning forward when the airbag deploys. On the other hand, if the car "knew" the size and posture of the passenger, it could process that information and make a decision to a) deploy as usual, b) deploy with less force, or c) not deploy at all.

Using the camera array in the lab vehicle, Trivedi's researchers are devising algorithms to classify crucial posture information. They’re also using camera input as well as pressure sensors under the seats to determine the weight and mass of the passenger. Assuming all the information can be correlated into a foolproof assessment of each passenger's vulnerability to an exploding airbag, next comes the hard part. "Time," explains Trivedi. "All of the vision, capture, pattern recognition, analysis, decision-making and transmission of that decision to the airbag mechanism—it all must be done in less than 25 milliseconds. That's the time it takes between impact and airbag deployment." Information on the bulk and weight of the passenger is available when he or she enters the car, but posture information has to be monitored in real time. Will the system be fast enough to save lives? "We just don't know," admits Trivedi, "but we hope so. It's a two-year project, and it just started in March."
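As a rough illustration of the three-way deployment decision described above, here is a sketch that assumes the weight and posture estimates are already in hand; the thresholds and labels are invented for illustration, not taken from the project.

    # Hypothetical sketch: mapping occupant weight and posture to one of the
    # three deployment modes (full, reduced force, suppressed). The whole
    # sense-classify-decide loop would have to fit inside the ~25 ms window
    # between impact and deployment, so these inputs must already be current.
    from enum import Enum

    class Deploy(Enum):
        FULL = "deploy as usual"
        REDUCED = "deploy with less force"
        SUPPRESS = "do not deploy"

    def airbag_decision(weight_kg: float, leaning_forward: bool) -> Deploy:
        if weight_kg < 30:  # a child or very small occupant (assumed cutoff)
            return Deploy.SUPPRESS
        if leaning_forward or weight_kg < 50:  # near the bag, or a small adult
            return Deploy.REDUCED
        return Deploy.FULL

    print(airbag_decision(weight_kg=75, leaning_forward=True))  # Deploy.REDUCED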

A New "Driving Ecology"

Even newer and more wide-ranging, is a multi-disciplinary, $3 million research project that got the green light in May from the UC DiMI program and Nissan Research. In June, the CVRR group will take delivery of an Infiniti Q45 sedan that will be instrumented with an array of cameras and sensors, as well as actuators for the steering wheel and pedals to test sending vibrational signals to the driver. The goal: a new interface to reinforce the driver’s attention and enhance safety.

In the year 2002, there will be about one million injuries and over 8,000 deaths directly or indirectly due to crashes resulting from driver distraction on American roads, according to figures compiled by the National Highway Traffic Safety Administration.

Driver distraction ranges from picking something up off the floor or searching for something in the glove compartment, to drowsiness from lack of sleep. Given the advent of in-car telematic devices, Trivedi wondered, wouldn't it be possible to design a system that could support the driver in attention management, perception, decision-making and control? And do so without shifting too much of the monitoring burden to the car, thereby making the driver feel he or she could pay even less attention?

According to Trivedi, the goal of the three-year project is to develop a "human-centered" system that allows the vehicle to act as an extension of the driver's cognition, and that is aware of the driver’s inherent attention limitations. The system would also evaluate the state of the environment and the driver in a manner consistent with the driver's perception of criticality and performance. "We are abandoning the notion of binary warnings to communicate a problem," says the CVRR director. "We want to explore a new role of communications between vehicle and driver that places the driver central in the monitoring and control loop at all times."

To do so, Trivedi put together a multi-disciplinary team of professors, including Psychology’s Harold Pashler and Jim Hollan of Cognitive Science, as well as Bhaskar Rao, a colleague in the Jacobs School’s Electrical and Computer Engineering department. The researchers will focus on ways to assess when there’s a problem, and how best to make the driver aware of it. Initially, they will explore sensory channels capable of processing information even if the driver’s visual and verbal attention is overloaded. The obvious one is touch. “The steering wheel could begin vibrating to signal that the driver isn’t paying attention, and the strength of the vibration would escalate to indicate the problem is getting worse,” says Trivedi. “We will also explore other modalities for intelligent driver-vehicle interfaces."
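A minimal sketch of such an escalating haptic cue, assuming an inattention estimate already normalized to a 0 (attentive) to 1 (fully distracted) scale; the threshold and the linear ramp are illustrative choices.

    # Hypothetical sketch: mapping an inattention estimate to a steering-wheel
    # actuator intensity that escalates as the problem gets worse.
    def wheel_vibration(inattention: float, threshold: float = 0.3) -> float:
        """Silent below the threshold, then ramp linearly to full strength."""
        if inattention <= threshold:
            return 0.0
        return min(1.0, (inattention - threshold) / (1.0 - threshold))

    for level in (0.2, 0.5, 0.9):
        print(f"inattention {level:.1f} -> vibration {wheel_vibration(level):.2f}")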

"Our quest is to create what we call a new 'driving ecology' that manages a driver’s attention rather than controls it."
