To borrow from John Lennon: Imagine there's no latency, no spam or phishing, a community of trust. Imagine all the people, able to get online.
This is the kind of utopian network architecture that leading Internet engineers are dreaming about today.
As they imagine the Internet of 2020, computer scientists across the country are starting from scratch and rethinking everything, from IP addresses to DNS to routing tables to Internet security in general. They're envisioning how the Internet might work without some of the most fundamental features of today's ISP and enterprise networks.
Their goal is audacious: To create an Internet without so many security breaches, with better trust and built-in identity management. Researchers are trying to build an Internet that's more reliable, higher performing and better able to manage exabytes of content. And they're hoping to build an Internet that extends connectivity to the most remote regions of the world, perhaps to other planets.
This high-risk, long-range Internet research will kick into high gear in 2010, as the U.S. federal government ramps up funding to allow a handful of projects to move out of the lab and into prototype. Indeed, the United States is building the world's largest virtual network lab across 14 college campuses and two nationwide backbone networks so that it can engage thousands -- perhaps millions -- of end users in its experiments.
"We're constantly trying to push research 20 years out," says Darleen Fisher, program director of the National Science Foundation's Network Technology and Systems (NeTS) program. "My job is to get people to think creatively potentially with high risk but high payoff. They need to think about how their ideas get implemented, and if implemented how it's going to [affect] the marketplace of ideas and economics."
The stakes are high. Some experts fear that unless a new network architecture is developed, the Internet will collapse under the weight of ever-increasing cyber attacks, growing demand for multimedia content and the requirements of new mobile applications.
The research comes at a critical juncture for the Internet, which is now so closely intertwined with the global economy that its failure is inconceivable. As more critical infrastructure -- such as the banking system, the electric grid and government-to-citizen communications -- migrates to the Internet, there's a consensus that the network needs an overhaul.
At the heart of all of this research is a desire to make the Internet more secure.
"The security is so utterly broken that it's time to wake up now and do it a better way," says Van Jacobson, a Research Fellow at PARC who is pitching a novel approach dubbed content-centric networking. "The model we're using today is just wrong. It can't be made to work. We need a much more information-oriented view of security, where the context of information and the trust of information have to be much more central."
NSF ramps up research
Futuristic Internet research will reach a major milestone as it moves from theory to prototype in 2010.
NSF plans to select anywhere from two to four large-scale research projects to receive grants worth as much as $9 million each to prototype future Internet architectures. Bids will be due in the first quarter of 2010, with awards expected in June.
"We would like to see over-arching, full-scale network architectures," Fisher says. "The proposals can be fairly simple with small, but profound changes from the current Internet, or they can be really radical changes.''
NSF is challenging researchers to come up with ideas for creating an Internet that's more secure and more available than today's. They've asked researchers to develop more efficient ways to disseminate information and manage users' identities while taking into account emerging wireless and optical technologies. Researchers also must consider the societal impacts of changing the Internet's architecture.
NSF wants bidders to consider "economic viability and demonstrate a deep understanding of the social values that are preserved or enabled by whatever future architecture people propose so they don't just think as technicians," Fisher says. "They need to think about the intended and unintended consequences of their design."
Key to these proposals is how researchers address Internet security problems.
"One of the things we're really concerned about is trustworthiness because all of our critical infrastructure is on the Internet," Fisher says. "The telephone systems are moving from circuits to IP. Our banking system is dependent on IP. And the Internet is vulnerable."
NSF says it won't repeat the mistake made when the Internet was invented, when security was bolted onto the architecture after the fact instead of being designed in from the beginning.
"We are not going to fund any proposals that don't have security expertise on their teams because we think security is so important," Fisher says. "Typically, network architects design and security people say after the fact how to secure the design. We're trying to get both of these communities to stretch the way they do things and to become better team players."
The latest NSF funding is a follow-on to the NSF's Future Internet Design (FIND) effort, which challenged researchers to work as if they were designing the Internet from scratch. Launched in 2006, the FIND program has funded around 50 research projects, each receiving $500,000 to $1 million over three to four years. Now the NSF is narrowing those 50 projects down to a handful of leading contenders.
World's largest Internet testbed
The Internet research projects chosen for prototyping will run on a new virtual networking lab being built by BBN Technologies. The lab is dubbed GENI for the Global Environment for Network Innovations.
The GENI program has developed experimental network infrastructure that's being installed in U.S. universities. This infrastructure will allow researchers to run large-scale experiments of new Internet architectures in parallel with -- but separated from -- the day-to-day traffic running on today's Internet.
"One of the key goals of GENI is to let researchers program very deep into the network," says Chip Elliott, GENI Project Director. "When we use today's Internet, you and I can buy any application program that we want and run it....GENI takes this idea several steps further. It allows you to install any software you want deep into the network anywhere you want. You can program switches and routers."
BBN was chosen to lead the GENI program in 2007 and has received $45 million from the NSF to build it. BBN received an $11.5 million grant in October to install GENI-enabled platforms on 14 U.S. college campuses and on two research backbone networks: Internet2 and National LambdaRail. These installations will be done by October 2010.
"GENI won't be in a little lab on campus. We'd like to take the whole campus network and allow it to run experimental research in addition to the Internet traffic," Elliott says. "Nobody has done this before. It'll take about a year."
The GENI project involves enabling three types of network infrastructure to handle large-scale experiments. One type uses the OpenFlow protocol developed by Stanford University to allow deep programming of Ethernet switches from vendors such as HP, Arista, Juniper and Cisco. Another is the Internet2 backbone, which has highly programmable Juniper routers. And the third is a WiMAX network for testing mobile and wireless services.
Once these GENI-enabled infrastructures are up and running, researchers will begin running large-scale experiments on them. The first four experiments have been selected for the GENI platforms, and they will test novel approaches to cloud computing, first responder networks, social networking services and inter-planetary communications.
"All of these experiments are beyond the next-generation Internet," Elliott says. "All of these efforts are targeting the Internet in 10 to 15 years."
The benefit of GENI for these projects is that researchers can test them on a very large-scale network instead of on a typical testbed. That's why BBN and its partners are GENI-enabling entire campus networks, including dorm rooms.
"What's distinctive about GENI is its emphasis on having lots and lots of real people involved in the experiments," Elliott says. "Other countries tend to use traffic generators....We're looking at hundreds or thousands or millions of people engaged in these experiments."
Another key aspect of GENI is that it will be used to test new security paradigms. Elliott says the GENI program will fund 10 security-related efforts between now and October 2010.
"If I were rank ordering the experiments we are doing, security is the most important," Elliott says. "We need strong authentication of people, forensics and audit trails and automated tools to notice if [performance] is going south."
Elliott says GENI will be the best platform for large-scale network research that's been available in 20 years.
"You could argue that the Arpanet back in the '70s and early '80s was like this. People simultaneously did research and used the network," Elliott says. "But at some point it got impossible to do experimentation. For the past 20 years or so we have not had an infrastructure like this."
Stanford protocol drives GENI platform
One idea that GENI is testing is software-defined networking, a concept that is the opposite of today's hardware-driven Internet architecture.
Today's routers and switches come with software written by the vendor, and customers can't modify the code. Researchers at Stanford University's Clean Slate Project are proposing -- and the GENI program is trialing -- an open system that will allow users to program deep into network devices.
"The people that buy large amounts of networking equipment want less cost and more control in their networks. They want to be able to program networking devices directly," says Guido Appenzeller, head of the Clean Slate Lab.
Stanford's answer to this problem is an alternative architecture that removes the intelligence from switches and routers and places these smarts in an external controller. Users can program the central controller using Stanford's OpenFlow, which was developed with NSF funding.
"Juniper and Cisco are struggling with lots of customer demand for more flexibility in networks," Appenzeller says. "Juniper has done some steps in that direction with its SDK on top of their switches and routers...But it's harder to do that because of the issues of real-time control. It's easier to do this in an external controller.''
If software-defined networking were to become widespread, enterprises would have more choice in terms of how they buy networking devices. Instead of buying hardware and software from the same vendor, they'd be able to mix and match hardware and software from different vendors.
Stanford has demonstrated the OpenFlow protocol running on switches from Cisco, Juniper, HP and NEC. With OpenFlow, an external controller manages these switches and makes all the high-level decisions.
Appenzeller says the OpenFlow architecture has several advantages from an Internet security perspective because the external controller can view which computers are communicating with each other and make decisions about access control.
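The controller model described above can be sketched in a few lines. This is a minimal illustration of the idea, not the actual OpenFlow API: all class and method names here are invented. The point is that the policy lives in one external controller, and "switches" only forward flows the controller has explicitly approved.

```python
# Illustrative sketch of an SDN-style external controller making
# access-control decisions per flow. Names are hypothetical, not
# the real OpenFlow interface.

class Controller:
    def __init__(self, policy):
        # policy: set of (src_host, dst_host) pairs allowed to communicate
        self.policy = policy
        self.flow_table = {}  # installed rules: (src, dst) -> action

    def packet_in(self, src, dst):
        """Called by a switch when it sees a flow with no matching rule
        (analogous to OpenFlow's packet-in event)."""
        action = "forward" if (src, dst) in self.policy else "drop"
        self.flow_table[(src, dst)] = action
        return action

class Switch:
    def __init__(self, controller):
        self.controller = controller

    def handle_packet(self, src, dst):
        rule = self.controller.flow_table.get((src, dst))
        if rule is None:
            # No installed rule: punt the decision to the controller
            rule = self.controller.packet_in(src, dst)
        return rule

ctrl = Controller(policy={("alice", "fileserver")})
sw = Switch(ctrl)
print(sw.handle_packet("alice", "fileserver"))    # forward
print(sw.handle_packet("mallory", "fileserver"))  # drop
```

Because every new flow passes through the controller, it has exactly the global view of who is talking to whom that Appenzeller describes.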
"OpenFlow is about changing how you innovate in your network," Appenzeller says. "We have several large Internet companies looking at this. We're pretty optimistic that we'll see some deployments."
Stanford anticipates publishing Version 1.1 of OpenFlow by early 2010. Already deployed in Stanford's computer science buildings, OpenFlow will be installed in seven universities and two research backbone networks through the GENI program build-out in 2010.
Tackling routing table growth
Among the Internet architectures that will run on the GENI infrastructure is a research project out of Rochester Institute of Technology that is trying to address the issue of routing table growth.
Dubbed Floating Cloud Tiered Internet Architecture, the RIT project is one of the few NSF-funded future Internet research projects that has software up and running as well as a corporate sponsor.
RIT's Floating Cloud concept was designed to address the problem of routing scalability. At issue are the 300,000 routing table entries that keep growing as more enterprises use multiple carriers to support their network infrastructure. As the routing table grows, the Internet's core routers need more computational power and memory.
With the Floating Cloud approach, ISPs would not have to keep buying larger routers to handle ever-growing routing tables. Instead, ISPs would use a new technique to forward packets within their own network clouds.
RIT is proposing a flexible peering structure that would be overlaid on the Internet. The architecture uses forwarding across network clouds, and the clouds are associated with tiers that have number values. When packets are sent across the cloud, only their tier values are used for forwarding, which eliminates the need for global routing within a cloud.
"There will be no routers containing the whole routing table. The routing table is going to be residing within the cloud. To forward information across the cloud, you just use the tier value and send it across," explains Dr. Nirmala Shenoy, a professor in the network security and systems administration department at RIT.
The Floating Cloud approach runs over Multi-Protocol Label Switching (MPLS). Shenoy says it completely bypasses current routing protocols within a particular cloud, which is why she refers to it as "Layer 2.5."
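Based on the article's description, the forwarding decision can be reduced to a comparison of tier values, with no global routing table involved. The sketch below is a hypothetical illustration of that idea (the field names and tier conventions are invented, not taken from RIT's implementation): lower tier numbers are assumed to sit closer to the Internet core.

```python
# Hypothetical sketch of tier-value forwarding in a Floating Cloud-style
# overlay: a packet carries only a destination tier label, and each cloud
# decides direction from tier values alone -- no global routing table.
# Tier numbering convention is assumed: Tier 1 = core, higher = edge.

def next_hop(current_tier, dest_tier):
    """Decide the forwarding direction purely from tier values."""
    if current_tier == dest_tier:
        return "deliver-within-cloud"   # local cloud handles delivery
    elif current_tier > dest_tier:
        return "forward-up"             # toward the core (lower tier number)
    else:
        return "forward-down"           # toward the destination's access cloud

# A Tier-3 access cloud sending toward the Tier-1 core:
print(next_hop(3, 1))  # forward-up
# The core handing off toward a Tier-3 destination:
print(next_hop(1, 3))  # forward-down
print(next_hop(2, 2))  # deliver-within-cloud
```

Notice that the decision needs no per-destination state, which is exactly the scalability property Shenoy describes: the full routing table never leaves its home cloud.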
RIT has been running its Floating Cloud software on a testbed of 12 Linux systems. Shenoy is excited about testing the software on the GENI-enabled platforms operated by Internet2.
"Twelve systems is not the Internet," Shenoy says. "We've been talking to the GENI project people about a more realistic set up."
RIT also is collaborating with Level 3 Communications, an ISP that plans to test the Floating Cloud architecture in its backbone network.
Shenoy sees many benefits for enterprise network managers in the Floating Cloud approach.
"This architecture is affording the flexibility of a defined network cloud, which can be defined to any network granularity," Shenoy says. "What happens in the cloud, is [the responsibility] of the network manager. This cloud structure introduces more economy, or you can make it more granular if you want better control and management."
Shenoy says the Floating Cloud approach has some security advantages.
"The very fact that I'm going to have control on the cloud size is going to give me more control and management and should positively impact security," Shenoy says. "Also, the fact that I don't have these huge global routing tables, and my packets shouldn't get shunted all over the place. Instead I will have more structured forwarding, and that should impact security."
Sometimes-on mobile wireless networks
Researchers from Howard University in Washington, D.C., will be experimenting with a new type of mobile wireless network on the GENI platform. The group's research is focused on networks that aren't connected all the time -- so-called opportunistic networks, which have intermittent connectivity.
"In this kind of opportunistic network...sometimes you are out of the signal range and you cannot talk to the Internet or talk to other mobile devices," explains Jiang Li, an associate professor in the Department of Systems and Computer Science at Howard University. "One example is driving a car on a highway in a remote area."
Opportunistic networks would use peer-to-peer communications to relay messages when the network is unavailable. For example, you may want to send an e-mail from a car in a remote location without network access. With an opportunistic wireless network, your PDA might send that message to a device inside a passing vehicle, which might carry the message to a nearby cell tower.
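The car-to-passing-vehicle scenario is a classic store-and-forward pattern. The sketch below illustrates it with an epidemic-style copying scheme; it is an invented example of the general technique, not the Howard team's protocol, and all names are hypothetical.

```python
# Illustrative store-and-forward relaying in an opportunistic network:
# a node buffers messages while disconnected and hands them to any peer
# it encounters, until some carrier reaches infrastructure.

class Node:
    def __init__(self, name, has_uplink=False):
        self.name = name
        self.has_uplink = has_uplink  # e.g. currently in range of a cell tower
        self.buffer = []              # messages carried while disconnected

    def send(self, message):
        self.buffer.append(message)

    def encounter(self, peer):
        """When two nodes come into radio range, hand over buffered mail."""
        if peer.has_uplink:
            delivered = list(self.buffer)
            self.buffer.clear()
            return delivered  # the peer uploads these to the Internet
        # Otherwise, copy messages so the peer can carry them onward
        peer.buffer.extend(m for m in self.buffer if m not in peer.buffer)
        return []

car = Node("car-in-remote-area")
passing = Node("passing-vehicle")
near_tower = Node("vehicle-near-tower", has_uplink=True)

car.send("email-1")
car.encounter(passing)               # the passing vehicle now carries the mail
print(passing.encounter(near_tower)) # ['email-1'] uploaded at the tower
```

The delay between `send` and delivery is unbounded, which is why Li says most Internet protocols, built on the assumption of an always-on connection, would break here.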
Li sees this type of opportunistic network architecture as useful for data transmission and as a complement to cellular networks.
"The most fundamental difference about this architecture is that the network has intermittent connections, as compared to the Internet which assumes you are connected all of the time," Li says.
Li says opportunistic networks involve rethinking "everything" about the Internet's architecture.
"Seventy to eighty percent of the protocols may have to be redesigned because the current Internet assumes that a connection is always there," Li says. "If the connection is gone for a minute, all of the protocols will be broken."
Li says opportunistic mobile networks are useful for emergency response if the network infrastructure is wiped out by a disaster or is unavailable for a period of time.
Li's research team also has an NSF grant to study the network management aspects of opportunistic networks, which may have long delays between when a message is sent and when it is received.
Li says these types of delay-tolerant network management schemes would be useful in developing countries such as India, which aren't fully covered by traditional wireless infrastructure such as cell towers.
"We're trying to extend the current networks to a much broader geographic area," Li says. "Now, if you want to get access to the Internet, you have to have infrastructure, at least a cell tower. If you look at the map to see what's covered by cell towers...we still have lots of red areas that aren't covered."
The Facebook-style Internet
Davis Social Links uses the format of Facebook -- with its friends-based ripple effect of connectivity -- to propagate connections on the Internet. That's how it creates connections based on trust and true identities, according to S. Felix Wu, a professor in the Computer Science Department at UC Davis.
"If somebody sends you an e-mail, the only information you have about whether this e-mail is valuable is to look at the sender's e-mail which can be faked and then look at the content," Wu says. "If you could provide the receiver of the e-mail with the social relationship with the sender, this will actually help the receiver to set up certain policies about whether the message should be higher or lower priority."
Davis Social Links creates an extra layer in the Internet architecture: on top of the network control layer, it creates a social control layer, which explains the social relationship between the sender and the receiver.
"Our social network represents our trust and our interest with other parties," Wu explains. "That information should be combined together with the packets we are sending each other."
Davis Social Links currently runs on Facebook, but researchers are porting it to the GENI platform.
Although based on the popular Facebook application, Davis Social Links represents a radical change over today's Internet. The current Internet is built upon the idea of users being globally addressable. Davis Social Links replaces that idea with social rather than network connectivity.
"This is revolutionary change," Wu says. "One of the fundamental principles of today's Internet is that it provides global connectivity. If you have an IP address, you by default can connect to any other IP address. In our architecture, we abandon that concept. We think it's not only unnecessary but also harmful. We see [distributed denial-of-service] attacks as well as some of the spamming activity as a result of global connectivity."
Davis Social Links also re-thinks DNS. While it still uses DNS for name resolution, Davis Social Links doesn't require the result of resolution to be an IP address or any unique routable identity. Instead, the result is a social path toward a potential target.
"The social control layer interface under Davis Social Links is like a social version of Google. You type some keywords...and the social Google will give you a list of pointers to some of the social content matching the keywords and the social path to that content," Wu explains.
Wu suggests that it's better and safer to have connectivity in the application layer than in the network layer. Instead of today's sender-oriented architecture -- where a person can communicate with anyone whose IP address or e-mail address is known -- Davis Social Links uses a social networking system that requires both sides to have a trust relationship and to be willing to communicate with each other.
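The "social path" at the heart of this design can be illustrated as a search over a friendship graph: a message is deliverable only if a chain of trust relationships connects sender to receiver. The sketch below is an invented breadth-first-search illustration of that idea, not the Davis Social Links code; the names and graph are hypothetical.

```python
# Illustrative sketch of social-path routing: instead of addressing a
# receiver by IP, the sender must find a chain of trusted friends.
# No path means no connectivity -- the property Wu argues blocks
# spam and DDoS traffic from strangers.

from collections import deque

friends = {
    "alice":   {"bob"},
    "bob":     {"alice", "carol"},
    "carol":   {"bob", "dave"},
    "dave":    {"carol"},
    "mallory": set(),  # no trust relationships: unreachable
}

def social_path(src, dst):
    """Breadth-first search over the friendship graph; returns the
    chain of friends a message would traverse, or None if none exists."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for friend in friends.get(path[-1], ()):
            if friend not in seen:
                seen.add(friend)
                queue.append(path + [friend])
    return None

print(social_path("alice", "dave"))     # ['alice', 'bob', 'carol', 'dave']
print(social_path("mallory", "alice"))  # None: no trust path, no delivery
```

Wu's six-degrees-of-separation point is visible here: in a realistic social graph, short paths like alice-bob-carol-dave almost always exist between legitimate parties, while an unknown spammer has none.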
"As humans, we have very robust social networks. With the idea of six degrees of separation, it's very realistic that you will be able to find a way communicate with another," Wu says.
Content-centric networking

Another radical proposal to change the Internet infrastructure is content-centric networking, which is being developed at PARC. This research aims to address the problem of the massive amounts of content -- increasingly multimedia -- that exist on the Internet.
Instead of using IP addresses to identify the machines that store content, content-centric networking uses file names and URLs to identify the content itself. The underlying idea is that knowing the content users want to access is more important than knowing the location of the machines used to store it.
"There are many exabytes of content floating around the 'Net...but IP wasn't designed for content," Jacobson explains. "We're trying to work around the fact that machines-talking-to-machines isn't important anymore. Moving content is really important. Peer-to-peer networks, content distribution networks, virtual servers and storage are all trying to get around this fact."
Jacobson proposes that content -- such as a movie, a document or an e-mail message -- would receive a structured name that users can search for and retrieve. The data has a name, but not a location, so that end users can find the nearest copy.
In this model, trust comes from the data itself, not from the machine it's stored on. Jacobson says this approach is more secure because end users decide what content they want to receive rather than having lots of unwanted content and e-mail messages pushed at them.
"Lots of relay attacks and man-in-the-middle attacks are impossible with our approach. You can get rid of spam," Jacobson says. "This is because we're securing the content itself and not the wrapper it's in."
Jacobson says content-centric networking is a better fit for today's applications, which require layers of complicated middleware to run on the Internet's host-oriented networking model. He also says this approach scales better when it comes to having millions of people watching multimedia content because it uses broadcast, multi-point communications instead of the point-to-point communications built into today's Internet.
More than anything, content-centric networking hopes to improve the Internet's security posture, Jacobson says.
"TCP was designed so it didn't know what it was carrying. It didn't know what the bits were in the pipe," Jacobson explains. "We came up with a security model that we'll armor the pipe, or we'll wrap the bits in SSL, but we still don't know the bits. The attacks are on the bits, not the pipes carrying them. In general, we know that perimeter security doesn't work. We need to move to models where the security and trust come from the data and not from the wrappers or the pipes."
PARC has an initial implementation of content-centric networking up and running, and released early code to the Internet engineering community in September. Jacobson says he hopes content-centric networking will be one of the handful of proposals selected by the NSF for a large-scale experiment on the GENI platform.
Jacobson says the evolution to content-centric networking would be fairly painless because it would be like middleware, mapping between connection-oriented IP below and the content above. The approach uses multi-point communications and can run over anything: Ethernet, IP, optical or radio.
Will the Internet of 2020 include content-centric networking? Jacobson says he isn't sure. But he does believe that the Internet needs a radically different architecture by then, if for no other reason than to improve security.
"Security should be coming out of the Web of interactions between information," Jacobson says. "Just like we're using the Web to get information, we should be using it to build up our trust. You can make very usable, very robust security that way, but we keep trying to patch up the current 'Net."