SIGGRAPH Los Angeles, CA, August 12, 2008 -- The images of rocks, clouds, marble and other textures that serve as background images and details for 3D video games are often hand painted and thus costly to generate. A breakthrough from a UC San Diego computer science undergraduate now offers video game developers the possibility of high quality yet lightweight images for 3D video games that are generated “on the fly” and are free of stretch marks, flickering and other artifacts.
The advance is being presented this week at one of the most prestigious computer graphics conferences in the world, ACM SIGGRAPH 2008.
|The stretched out graphics in the top image disappear when the new algorithms from UC San Diego are used to generate high quality images for 3D video games.|
“People are looking for ways to get rid of these distortions, preferably without having to pay artists to generate background and detail images by hand. We have come up with a way to do this, and we are planning to provide code for download soon,” explained Alexander Goldberg, who recently graduated from UC San Diego and is now working for San Diego video game studio PixelActive Inc.
The 2008 SIGGRAPH paper marks an important improvement over Perlin noise, an established technique in which small computer programs create many layers of noise that are piled on top of each other. The layers are then manipulated -- like layers of paint on a canvas -- in order to develop detailed and realistic textures such as rock, soil, cloud, water and marble that serve as background images and details for 3D video games.
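The layering idea behind Perlin-style noise can be sketched in a few lines of code. The toy version below is an illustration of the general octave-summing technique, not the anisotropic noise method from the paper: each layer ("octave") of smooth noise is added at double the frequency and half the amplitude of the one before it.

```python
import math
import random


def value_noise_1d(x, seed=0):
    """Smooth 1D value noise: interpolate between pseudo-random
    values placed at the integer lattice points."""
    x0 = math.floor(x)

    def lattice(i):
        # Deterministic pseudo-random value per lattice point.
        random.seed(i * 1000003 + seed)
        return random.uniform(-1.0, 1.0)

    t = x - x0
    t = t * t * (3.0 - 2.0 * t)  # smoothstep easing for a continuous result
    return lattice(x0) * (1.0 - t) + lattice(x0 + 1) * t


def fractal_noise(x, octaves=4):
    """Pile up layers of noise: each octave doubles the frequency
    and halves the amplitude, giving detail at multiple scales."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise_1d(x * frequency)
        amplitude *= 0.5
        frequency *= 2.0
    return total
```

Manipulating and combining several such layers (in 2D or 3D rather than the 1D shown here) is what produces the rock, cloud, and marble textures the article describes.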
“The existing methods for using computer generated noise to make images for backgrounds and details for 3D video games are fast, but the images that you get don’t look very good. Our work gives you the full computational benefit of noise but without many of the tradeoffs such as distortion and flickering,” said Goldberg.
The new approach also eliminates the need to store the textures as huge images that take up valuable memory. Instead the textures are generated by computer programs on the fly every time an image is rendered, explained computer science professor Matthias Zwicker from UC San Diego’s Jacobs School of Engineering.
“The graphics generated from the procedural approach that we explored in this project are very small. Illustrating video games with small images is going to be increasingly important in the future as more and more games are downloadable,” said Zwicker.
|Alex Goldberg's breakthrough will be used to make the images in 3D video games look better. This UC San Diego undergraduate student presented his work as a full paper at SIGGRAPH in August 2008.|
“Getting a paper into SIGGRAPH is an accomplishment for any senior researcher in computer graphics. Presenting a paper in SIGGRAPH based on work done as an undergraduate is astonishing. We’re very proud of Alex and his work,” said Keith Marzullo, professor and chair of the Department of Computer Science and Engineering at UC San Diego’s Jacobs School of Engineering.
“I’ve never given a talk in front of more than 30 or 40 people. At SIGGRAPH, the audience will be 300 or 400,” Goldberg said. “But I’m excited. These are exactly the people I want to show my work to.”
|A June 2007 demo at Calit2 of the game Super Hurtball, created from scratch by Goldberg and five other undergrads as part of CSE125. Length: 13:22|
|An example of the kind of 3D video game images that could be improved by the new computer graphics research pioneered by an undergraduate student from UC San Diego.|
“When one pixel covers a large area in a 3D video game landscape…what color should that pixel be? It can only be one color, but the area it covers may contain many different colors,” UC San Diego computer science professor Matthias Zwicker explained.
Color averaging is one solution. For example, if a pixel covers a patch of tiny black bumps on a piece of armor on a soldier far in the distance, and those bumps are partially lit with white light, then averaging the colors and rendering the pixel gray is often the right answer. But before you can average colors, you have to determine the exact region of the scene that must be squeezed into that one pixel. A simple approach is to map circular areas of the scene to circle-shaped pixels. But when mapping areas of a 3D scene back to 2D pixels, circular regions of the background are not the best choice, even though the pixels themselves are circles, according to the computer graphics researchers.
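The averaging step described above can be sketched with a toy black-and-white texture. This is a simple box-filter illustration under assumed names (`checker`, `average_over_footprint` are hypothetical helpers), not the filtering used in the paper: many texture samples inside one pixel's footprint are averaged into a single color.

```python
def checker(u, v):
    """A toy texture: alternating black (0.0) and white (1.0) squares."""
    return float((int(u) + int(v)) % 2)


def average_over_footprint(u0, v0, width, height, n=8):
    """Approximate a pixel's color by averaging an n-by-n grid of
    texture samples inside its footprint (a simple box filter)."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            u = u0 + (i + 0.5) / n * width
            v = v0 + (j + 0.5) / n * height
            total += checker(u, v)
    return total / (n * n)
```

A distant pixel whose footprint covers many black and white squares averages out to gray (0.5), just as the armor example suggests.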
In the SIGGRAPH paper, the computer scientists mapped elliptical areas of background images back to circular pixels and found that their technique yielded higher quality background images with less stretching and other distortions.
The reason elliptical shapes are a better fit for circular pixels in backgrounds for 3D video games goes back to basic geometry: when a cone that extends from a circular pixel intersects with the background of a 3D video game scene, the region of the cone that hits the background is an ellipse rather than a circle.
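The geometry above can be made concrete with a small sketch. Assuming the usual approximation that a surface tilted by angle theta away from the view direction stretches a circular pixel footprint by a factor of 1/cos(theta) along one axis, the footprint's two ellipse axes can be estimated as follows (this is a geometric illustration, not the paper's actual filtering math):

```python
import math


def footprint_axes(pixel_radius, tilt_angle_deg):
    """Estimate the elliptical footprint of a circular pixel on a tilted
    surface: the minor axis stays near the pixel radius, while the major
    axis stretches by 1 / cos(angle between view ray and surface normal)."""
    theta = math.radians(tilt_angle_deg)
    minor = pixel_radius
    major = pixel_radius / math.cos(theta)
    return minor, major
```

At a 60-degree tilt the major axis is already twice the minor axis, which is why a circular filter region either blurs along one direction or misses detail along the other, while an elliptical one fits the footprint.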
"Anisotropic Noise,” by Alexander Goldberg and Matthias Zwicker from the Department of Computer Science and Engineering at UC San Diego’s Jacobs School of Engineering and Frédo Durand from the Massachusetts Institute of Technology.
SIGGRAPH 2008 paper »
Matthias Zwicker's UC San Diego Web page »
PixelActive Inc. »
UC San Diego Computer Graphics Recent Publications »
UC San Diego video game class »
Hair Photobooth: Another Hot SIGGRAPH paper from UC San Diego »