San Diego, CA, March 19, 2007 -- Imagine an Internet design where dropping packets is part of the plan.
This “science fiction” version of the Internet is not all fiction: computer scientists and electrical engineers at UC San Diego are at work determining how feasible an Internet with a planned-packet-dropping protocol might be.
|UC San Diego Center for Networked Systems|
“This rethinking of the Internet was just one of the research projects that CNS-affiliated researchers presented at our January 2007 Research Review,” said Amin Vahdat, CNS Director and Professor of Computer Science and Engineering at the UCSD Jacobs School of Engineering. The CNS offers research collaboration between UC San Diego and industry. Research projects are developed jointly between academic scientists and industry partners and focus on technologies and foundations for robust, secure and open networked systems.
The two-day event included a student poster session, talks from CNS faculty affiliated with UCSD, Calit2 and the San Diego Supercomputer Center, as well as talks from researchers at Qualcomm, Symantec, Hewlett-Packard and Sun Microsystems.
The Research Review focused on a wide range of topics, including resource management for heterogeneous wireless environments and wireless sensor networks, resource configuration and scheduling with virtual clusters, the generation of realistic network traffic and topologies, and the automated cross-layer diagnosis of enterprise wireless networks.
The remainder of this article outlines some of what computer science professor Alex Snoeren and electrical and computer engineering professor Bill Lin discussed at the CNS Research Review.
The Promise of Packet Dropping
|Alex Snoeren, a professor in the Department of Computer Science and Engineering.|
“If you run a knife across a CD – scratch the heck out of it – it still plays because there is information redundancy. It almost doesn’t matter what part you lose – pieces of every song are stored in multiple places on the disk,” explained Snoeren.
With funding from the CNS and the National Science Foundation, Snoeren, Lin and their collaborators are exploring the possibilities of a radically remade Internet in which the principles behind erasure coding help to make packet dropping okay. Even if some packets are dropped, all of the information makes it from sender to receiver, assuming the information is properly coded. The way the information is coded depends on the real-time packet drop rate.
“For example, if I need to send 5 packets, then I could encode the information in such a way that if I send 6 packets and if any 5 make it to the receiver, all the information will be transmitted,” said Snoeren.
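One simple way to get Snoeren's "any 5 of 6" property is a single XOR parity packet. The sketch below is illustrative only, with made-up packet payloads; real erasure codes (such as Reed-Solomon codes) generalize this idea to more than one recoverable loss.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data_packets):
    """Append one parity packet: the XOR of all 5 data packets."""
    parity = data_packets[0]
    for pkt in data_packets[1:]:
        parity = xor_bytes(parity, pkt)
    return data_packets + [parity]

def decode(received):
    """Recover the 5 data packets from any 5 of the 6 coded packets.

    `received` maps packet index (0-5) to payload; index 5 is the parity.
    """
    k = 5
    if all(i in received for i in range(k)):
        return [received[i] for i in range(k)]
    # One data packet was dropped: XOR the parity with the survivors.
    recovered = received[k]
    for i, pkt in received.items():
        if i != k:
            recovered = xor_bytes(recovered, pkt)
    return [received.get(i, recovered) for i in range(k)]

# Five 4-byte packets; packet 2 is dropped in transit.
data = [bytes([i] * 4) for i in range(5)]
coded = encode(data)
arrived = {i: coded[i] for i in range(6) if i != 2}
assert decode(arrived) == data  # all information still gets through
```

Because XOR is its own inverse, the parity packet combined with any four surviving data packets reconstructs the fifth; which packet is lost genuinely does not matter.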
If the rate at which packets are being dropped increases, you change the way you encode the information so that a smaller fraction of the packets need to make it to the other end in order to transmit all the information. In other words, you might need to send ten packets to encode 5 packets’ worth of information, but up to five of them could be lost without harm. When the packet drop rate decreases, you turn “the information coding dial” in the other direction.
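Turning "the information coding dial" can be sketched as a back-of-the-envelope calculation: if each packet is dropped independently with probability p, sending roughly k/(1 - p) coded packets means about k arrive on average. The function below is a hypothetical illustration of that arithmetic, not the researchers' actual scheme.

```python
import math

def coded_packet_count(k: int, loss_rate: float) -> int:
    """How many coded packets to send so that, on average, at least k
    arrive when each packet is dropped independently with `loss_rate`.
    Illustrative only; a real protocol would add headroom for variance."""
    assert 0 <= loss_rate < 1
    return math.ceil(k / (1 - loss_rate))

print(coded_packet_count(5, 0.1))   # mild loss: 6 packets suffice
print(coded_packet_count(5, 0.5))   # heavy loss: 10 packets for 5 packets' worth of data
```

As the measured drop rate falls, the same formula dials the redundancy back down, so the sender pays for extra packets only when the network is actually losing them.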
If it is okay to lose packets, then Internet users may not have to be as “friendly” to others who are trying to send information. Not having to be nice means that everyone can send packets as fast as they want, possibly improving overall throughput rates. This raises the possibility of redesigning the Internet’s congestion control protocols such as TCP, which probes for available bandwidth and backs off to avoid forcing other people’s packets to be lost. Currently, the Internet only survives because end hosts are polite; greedy senders that disobey TCP’s rules can overrun other, more timid hosts. Erasure coding has the potential to empower the well-behaved hosts to fight back.
Congestion control protocols like TCP were created in the early days of the Internet in an attempt to prevent congestion and share the thin long-distance pipes fairly among many users whose machines could send far more data than the network could carry. Now that bandwidth is not the primary limiting factor, replacing these kinds of protocols with protocols that better reflect the current state of the Internet would open up a wide range of new possibilities.
For example, if everyone is sending as fast as they can and packets are dropped fairly, then Internet traffic demands would stabilize, explained Snoeren.
|Bill Lin, a professor in the Department of Electrical and Computer Engineering.|
Today’s routers rely on expensive, power-hungry high-speed memory to buffer packets during periods of congestion. “But if you were to create a different version of the Internet in which you can drop packets, then you wouldn’t have to spend the money and energy on the high speed memory for routers, and the complex work of scheduling could be simplified,” Snoeren explained.
“And the really cool thing is,” Snoeren continued, “getting rid of traditional buffering at routers means you can start to think about an entirely optical backbone with no electronics.”
With current technologies, you cannot buffer light pulses like you can buffer the 1s and 0s that run through today’s routers. “When you eliminate the need for buffers, you can start to think about going optical,” Snoeren mused.
Related work: “Decongestion Control,” B. Raghavan and A. Snoeren, University of California, San Diego. Proceedings of the 5th ACM Workshop on Hot Topics in Networks (HotNets-V), Irvine, CA, November 2006.