The Internet is Broken

In his office within the gleaming stainless-steel and orange-brick jumble of MIT's Stata Center, Internet elder statesman and onetime chief protocol architect David D. Clark prints out an old PowerPoint talk. Dated July 1992, it ranges over technical issues like domain naming and scalability. But in one slide, Clark points to the Internet's dark side: its lack of built-in security.
In others, he observes that sometimes the worst disasters are caused not by sudden events but by slow, incremental processes -- and that humans are good at ignoring problems. "Things get worse slowly. People adjust," Clark noted in his presentation. "The problem is assigning the correct degree of fear to distant elephants."
Today, Clark believes the elephants are upon us. Yes, the Internet has wrought wonders: e-commerce has flourished, and e-mail has become a ubiquitous means of communication. Almost one billion people now use the Internet, and critical industries like banking increasingly rely on it.
At the same time, the Internet's shortcomings have resulted in plunging security and a decreased ability to accommodate new technologies. "We are at an inflection point, a revolution point," Clark now argues. And he delivers a strikingly pessimistic assessment of where the Internet will end up without dramatic intervention. "We might just be at the point where the utility of the Internet stalls -- and perhaps turns downward."
Indeed, for the average user, the Internet these days all too often resembles New York's Times Square in the 1980s. It was exciting and vibrant, but you made sure to keep your head down, lest you be offered drugs, robbed, or harangued by the insane. Times Square has been cleaned up, but the Internet keeps getting worse, both at the user's level, and -- in the view of Clark and others -- deep within its architecture.
Over the years, as Internet applications proliferated -- wireless devices, peer-to-peer file-sharing, telephony -- companies and network engineers came up with ingenious and expedient patches, plugs, and workarounds. The result is that the originally simple communications technology has become a complex and convoluted affair. For all of the Internet's wonders, it is also difficult to manage and more fragile with each passing day.
That's why Clark argues that it's time to rethink the Internet's basic architecture, to potentially start over with a fresh design -- and equally important, with a plausible strategy for proving the design's viability, so that it stands a chance of implementation. "It's not as if there is some killer technology at the protocol or network level that we somehow failed to include," says Clark. "We need to take all the technologies we already know and fit them together so that we get a different overall system. This is not about building a technology innovation that changes the world but about architecture -- pulling the pieces together in a different way to achieve high-level objectives."
Just such an approach is now gaining momentum, spurred on by the National Science Foundation. NSF managers are working to forge a five-to-seven-year plan estimated to cost $200 million to $300 million in research funding to develop clean-slate architectures that provide security, accommodate new technologies, and are easier to manage.
They also hope to develop an infrastructure that can be used to prove that the new system is really better than the current one. "If we succeed in what we are trying to do, this is bigger than anything we, as a research community, have done in computer science so far," says Guru Parulkar, an NSF program manager involved with the effort. "In terms of its mission and vision, it is a very big deal. But now we are just at the beginning. It has the potential to change the game. It could take it to the next level in realizing what the Internet could be that has not been possible because of the challenges and problems."
When AOL updates its software, the new version bears a number: 7.0, 8.0, 9.0. The most recent version is called AOL 9.0 Security Edition. These days, improving the utility of the Internet is not so much about delivering the latest cool application; it's about survival.
In August, IBM released a study reporting that "virus-laden e-mails and criminal driven security attacks" leapt by 50 percent in the first half of 2005, with government and the financial-services, manufacturing, and health-care industries in the crosshairs. In July, the Pew Internet and American Life Project reported that 43 percent of U.S. Internet users -- 59 million adults -- reported having spyware or adware on their computers, thanks merely to visiting websites. (In many cases, they learned this from the sudden proliferation of error messages or freeze-ups.) Fully 91 percent had adopted some defensive behavior -- avoiding certain kinds of websites, say, or not downloading software. "Go to a neighborhood bar, and people are talking about firewalls. That was just not true three years ago," says Susannah Fox, associate director of the Pew project.
Then there is spam. One leading online security company, Symantec, says that between July 1 and December 31, 2004, spam surged 77 percent at companies that Symantec monitored. The raw numbers are staggering: weekly spam totals on average rose from 800 million to more than 1.2 billion messages, and 60 percent of all e-mail was spam, according to Symantec.
But perhaps most menacing of all are "botnets" -- collections of computers hijacked by hackers to do remote-control tasks like sending spam or attacking websites. This kind of wholesale hijacking -- made more potent by wide adoption of always-on broadband connections -- has spawned hard-core crime: digital extortion. Hackers are threatening destructive attacks against companies that don't meet their financial demands. According to a study by a Carnegie Mellon University researcher, 17 of 100 companies surveyed had been threatened with such attacks.
Simply put, the Internet has no inherent security architecture -- nothing to stop viruses or spam or anything else. Protections like firewalls and antispam software are add-ons, security patches in a digital arms race.
The President's Information Technology Advisory Committee, a group stocked with a who's who of infotech CEOs and academic researchers, says the situation is bad and getting worse. "Today, the threat clearly is growing," the council wrote in a report issued in early 2005. "Most indicators and studies of the frequency, impact, scope, and cost of cyber security incidents -- among both organizations and individuals -- point to continuously increasing levels and varieties of attacks."
And we haven't even seen a real act of cyberterror, the "digital Pearl Harbor" memorably predicted by former White House counterterrorism czar Richard Clarke in 2000 (see "A Tangle of Wires"). Consider the nation's electrical grid: it relies on continuous network-based communications between power plants and grid managers to maintain a balance between production and demand. A well-placed attack could trigger a costly blackout that would cripple part of the country.
The conclusion of the advisory council's report could not have been starker: "The IT infrastructure is highly vulnerable to premeditated attacks with potentially catastrophic effects."
The system functions as well as it does only because of "the forbearance of the virus authors themselves," says Jonathan Zittrain, who cofounded the Berkman Center for Internet and Society at Harvard Law School and holds the Chair in Internet Governance and Regulation at the University of Oxford. "With one or two additional lines of code...the viruses could wipe their hosts' hard drives clean or quietly insinuate false data into spreadsheets or documents. Take any of the top ten viruses and add a bit of poison to them, and most of the world wakes up on a Tuesday morning unable to surf the Net -- or finding much less there if it can."
 

The Internet's original protocols, forged in the late 1960s, were designed to do one thing very well: facilitate communication between a few hundred academic and government users. The protocols efficiently break digital data into simple units called packets and send the packets to their destinations through a series of network routers. Both the routers and PCs, also called nodes, have unique digital addresses known as Internet Protocol or IP addresses. That's basically it. The system assumed that all users on the network could be trusted and that the computers linked by the Internet were mostly fixed objects.
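To make that design concrete, here is a minimal Python sketch of the idea described above: a message is chopped into packets, each stamped with source and destination IP addresses, and the receiving node reassembles them in order. The addresses and packet size are invented for illustration; the point is that nothing in the process ever examines what the payload contains.

```python
# Illustrative sketch (not real networking code): data is chopped into
# fixed-size packets, each stamped with source and destination addresses,
# and forwarded to the receiver, which reassembles them. The network
# never inspects the payload. Addresses and sizes are invented.
from dataclasses import dataclass

PACKET_SIZE = 512  # bytes per packet, arbitrary for this sketch

@dataclass
class Packet:
    src: str        # sender's IP address
    dst: str        # destination IP address
    seq: int        # sequence number, so the receiver can reassemble
    payload: bytes  # the data itself -- love letter or virus, the network can't tell

def packetize(data: bytes, src: str, dst: str) -> list[Packet]:
    """Split a message into addressed packets."""
    return [
        Packet(src, dst, i, data[offset:offset + PACKET_SIZE])
        for i, offset in enumerate(range(0, len(data), PACKET_SIZE))
    ]

def reassemble(packets: list[Packet]) -> bytes:
    """The receiving node puts the packets back in order."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))

if __name__ == "__main__":
    message = b"Hello from one node to another. " * 100
    packets = packetize(message, src="18.7.22.69", dst="128.112.136.10")
    assert reassemble(packets) == message
    print(f"{len(packets)} packets delivered, payload never inspected")
```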
The Internet's design was indifferent to whether the information packets added up to a malicious virus or a love letter; it had no provisions for doing much besides getting the data to its destination. Nor did it accommodate nodes that moved -- such as PDAs that could connect to the Internet at any of myriad locations. Over the years, a slew of patches arose: firewalls, antivirus software, spam filters, and the like. One patch assigns each mobile node a new IP address every time it moves to a new point in the network.
Clearly, security patches aren't keeping pace. That's partly because different people use different patches and not everyone updates them religiously; some people don't have any installed. And the most common mobility patch -- the IP addresses that constantly change as you move around -- has downsides. When your mobile computer has a new identity every time it connects to the Internet, the websites you deal with regularly won't know it's you. This means, for example, that your favorite airline's Web page might not cough up a reservation form with your name and frequent-flyer number already filled out. The constantly changing address also means you can expect breaks in service if you are using the Internet to, say, listen to a streaming radio broadcast on your PDA. It also means that someone who commits a crime online using a mobile device will be harder to track down.
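A toy sketch of that first downside, using invented addresses and a hypothetical server: if a website recognizes returning visitors only by their IP address, a mobile device that receives a fresh address at each new attachment point looks like a stranger every time it reconnects.

```python
# Sketch of the downside described above: a server that keys its "memory"
# on the client's IP address loses track of a mobile user whose address
# changes at every attachment point. All names and addresses are hypothetical.

sessions_by_ip: dict[str, dict] = {}  # server-side state keyed on client IP

def handle_request(client_ip: str) -> str:
    profile = sessions_by_ip.get(client_ip)
    if profile is None:
        sessions_by_ip[client_ip] = {"frequent_flyer": None}
        return "Welcome, new visitor -- please fill out the reservation form."
    return f"Welcome back, frequent flyer {profile['frequent_flyer']}!"

# At home, the PDA connects with one address...
print(handle_request("66.31.5.20"))     # treated as new
sessions_by_ip["66.31.5.20"]["frequent_flyer"] = "AB12345"
print(handle_request("66.31.5.20"))     # recognized on the next visit

# ...then moves to a hotspot and is assigned a new address.
print(handle_request("208.54.87.9"))    # same person, treated as a stranger again
```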
In the view of many experts in the field, there are even more fundamental reasons to be concerned. Patches create an ever more complicated system, one that becomes harder to manage, understand, and improve upon. "We've been on a track for 30 years of incrementally making improvements to the Internet and fixing problems that we see," says Larry Peterson, a computer scientist at Princeton University. "We see vulnerability, we try to patch it. That approach is one that has worked for 30 years. But there is reason to be concerned. Without a long-term plan, if you are just patching the next problem you see, you end up with an increasingly complex and brittle system. It makes new services difficult to employ. It makes it much harder to manage because of the added complexity of all these point solutions that have been added. At the same time, there is concern that we will hit a dead end at some point. There will be problems we can't sufficiently patch."
The patchwork approach draws complaints even from the founder of a business that is essentially an elaborate and ingenious patch for some of the Internet's shortcomings. Tom Leighton is cofounder and chief scientist of Akamai, a company that ensures that its clients' Web pages and applications are always available, even if huge numbers of customers try to log on to them or a key fiber-optic cable is severed. Akamai closely monitors network problems, strategically stores copies of a client's website at servers around the world, and accesses those servers as needed. But while his company makes its money from patching the Net, Leighton says the whole system needs fundamental architectural change. "We are in the mode of trying to plug holes in the dike," says Leighton, an MIT mathematician who is also a member of the President's Information Technology Advisory Committee and chair of its Cyber Security Subcommittee. "There are more and more holes, and more resources are going to plugging the holes, and there are less resources being devoted to fundamentally changing the game, to changing the Internet."
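The general idea behind such a service can be sketched in a few lines, with made-up server names and latencies; this is not Akamai's actual selection algorithm, only the concept of monitoring replicas scattered around the world and steering each request to the best one that is still reachable.

```python
# Toy sketch of the general content-delivery idea: keep replicas of a site
# on servers around the world, monitor their health and latency, and send
# each request to the best replica that is up. Servers, latencies, and the
# selection rule are invented for illustration.

replicas = [
    {"host": "replica-us-east.example.net", "up": True,  "latency_ms": 40},
    {"host": "replica-eu-west.example.net", "up": True,  "latency_ms": 95},
    {"host": "replica-asia.example.net",    "up": False, "latency_ms": 30},  # down: cable cut
]

def pick_replica(replicas: list[dict]) -> str:
    """Return the lowest-latency replica that is currently reachable."""
    healthy = [r for r in replicas if r["up"]]
    if not healthy:
        raise RuntimeError("no replica available -- the origin server must absorb the load")
    return min(healthy, key=lambda r: r["latency_ms"])["host"]

print(pick_replica(replicas))  # -> replica-us-east.example.net
```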
When Leighton says "resources," he's talking about billions of dollars. Take Microsoft, for example. Its software mediates between the Internet and the PC. These days, of the $6 billion that Microsoft spends annually on research and development, approximately one-third, or $2 billion, is directly spent on security efforts. "The evolution of the Internet, the development of threats from the Internet that could attempt to intrude on systems -- whether Web servers, Web browsers, or e-mail-based threats -- really changed the equation," says Steve Lipner, Microsoft's director of security engineering strategy. "Ten years ago, I think people here in the industry were designing software for new features, new performance, ease of use, what have you. Today, we train everybody for security." Not only does this focus on security siphon resources from other research, but it can even hamper research that does get funded. Some innovations have been kept in the lab, Lipner says, because Microsoft couldn't be sure they met security standards.
Of course, some would argue that Microsoft is now scrambling to make up for years of selling insecure products. But the Microsoft example has parallels elsewhere. Eric Brewer, director of Intel's Berkeley, CA, research lab, notes that expenditures on security are like a "tax" and are "costing the nation billions and billions of dollars." This tax shows up as increased product prices, as companies' expenditures on security services and damage repair, as the portion of processor speed and storage devoted to running defensive programs, as the network capacity consumed by spam, and as the cost to the average person who tries to dodge the online minefield by buying the latest firewalls. "We absolutely can leave things alone. But it has this continuous 30 percent tax, and the tax might go up," Brewer says. "The penalty for not [fixing] it isn't immediately fatal. But things will slowly get worse and might get so bad that people won't use the Internet as much as they might like."
The existing Internet architecture also stands in the way of new technologies. Networks of intelligent sensors that collectively monitor and interpret things like factory conditions, the weather, or video images could change computing as much as cheap PCs did 20 years ago. But they have entirely different communication requirements. "Future networks aren't going to be PCs docking to mainframes. It's going to be about some car contacting the car next to it. All of this is happening in an embedded context. Everything is machine to machine rather than people to people," says Dipankar Raychaudhuri, director of the Wireless Information Network Laboratory (Winlab) at Rutgers University. With today's architecture, making such a vision reality would require more and more patches.
Architectural Digest
When Clark talks about creating a new architecture, he says the job must start with the setting of goals. First, give the medium a basic security architecture -- the ability to authenticate whom you are communicating with and prevent things like spam and viruses from ever reaching your PC. Better security is "the most important motivation for this redesign," Clark says. Second, make the new architecture practical by devising protocols that allow Internet service providers to better route traffic and collaborate to offer advanced services without compromising their businesses. Third, allow future computing devices of any size to connect to the Internet -- not just PCs but sensors and embedded processors. Fourth, add technology that makes the network easier to manage and more resilient. For example, a new design should allow all pieces of the network to detect and report emerging problems -- whether technical breakdowns, traffic jams, or replicating worms -- to network administrators.
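As a purely hypothetical illustration of that last goal, a network element might emit structured problem reports when traffic deviates sharply from its baseline. The report format, names, and thresholds below are invented; nothing like this is standardized in today's architecture.

```python
# Hypothetical sketch of the fourth goal above: network elements that notice
# anomalies and report them to administrators in a structured way. Router
# names, thresholds, and the report format are invented for illustration.
import json
import time
from typing import Optional

BASELINE_PPS = 10_000   # expected packets per second on this link
ANOMALY_FACTOR = 5      # how far above baseline counts as suspicious

def check_link(router_id: str, observed_pps: int) -> Optional[str]:
    """Return a JSON problem report if traffic looks like a worm or a traffic jam."""
    if observed_pps > BASELINE_PPS * ANOMALY_FACTOR:
        return json.dumps({
            "router": router_id,
            "time": time.time(),
            "observed_pps": observed_pps,
            "baseline_pps": BASELINE_PPS,
            "suspected": "replicating worm or traffic surge",
        })
    return None

report = check_link("core-router-7", observed_pps=120_000)
if report:
    print("alert for network administrators:", report)
```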
The good news is that some of these goals are not so far off. NSF has, over the past few years, spent more than $30 million supporting and planning such research. Academic and corporate research labs have generated a number of promising technologies: ways to authenticate who's online; ways to identify criminals while protecting the privacy of others; ways to add wireless devices and sensors. While nobody is saying that any single one of these technologies will be included in a new architecture, they provide a starting point for understanding what a "new" Internet might actually look like and how it would differ from the old one.
 

Right now, Shibboleth, an attribute-based authentication system developed within the Internet2 university consortium, is used by universities to mediate access to online libraries and other resources; when you log on, the resource you're accessing learns only your "attribute" -- that you are an enrolled student, say -- and not your name or other personal information. This basic concept can be expanded: your employment status could open the gates to your company's servers; your birth date could allow you to buy wine online. A similar scheme could give a bank confidence that online account access is legitimate and, conversely, give a bank customer confidence that banking communications are really from the bank.
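The attribute-release idea can be sketched conceptually as follows. This is not Shibboleth's actual protocol, and the directory, attributes, and services here are invented; it simply shows how a service can make its access decision without ever learning the user's name.

```python
# Conceptual sketch of attribute-based access as described above: an identity
# provider releases only the attribute a service needs, never the user's name.
# This illustrates the idea behind systems like Shibboleth, not their real
# protocol; all names and data are hypothetical.

USER_DIRECTORY = {
    "alice": {"name": "Alice Example", "enrolled_student": True, "birth_year": 1980},
}

def release_attribute(username: str, attribute: str) -> dict:
    """Identity provider: hand out one attribute, withholding everything else."""
    return {attribute: USER_DIRECTORY[username][attribute]}

def library_access(assertion: dict) -> bool:
    """Online library: only needs to know the visitor is an enrolled student."""
    return assertion.get("enrolled_student", False)

def wine_shop_access(assertion: dict, current_year: int = 2005) -> bool:
    """Wine merchant: only needs to know the buyer is of legal age."""
    return current_year - assertion.get("birth_year", current_year) >= 21

print(library_access(release_attribute("alice", "enrolled_student")))  # True, name never sent
print(wine_shop_access(release_attribute("alice", "birth_year")))      # True, name never sent
```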
Shibboleth and similar technologies in development can, and do, work as patches. But some of their basic elements could also be built into a replacement Internet architecture. "Most people look at the Internet as such a dominant force, they only think how they can make it a little better," Clark says. "I'm saying, 'Hey, think about the future differently. What should our communications environment of 10 to 15 years from now look like? What is your goal?'"
It's worth remembering that despite all of its flaws, all of its architectural kluginess and insecurity and the costs associated with patching it, the Internet still gets the job done. Any effort to implement a better version faces enormous practical problems: all Internet service providers would have to agree to change all their routers and software, and someone would have to foot the bill, which will likely come to many billions of dollars. But NSF isn't proposing to abandon the old network or to forcibly impose something new on the world. Rather, it essentially wants to build a better mousetrap, show that it's better, and allow a changeover to take place in response to user demand.
To that end, the NSF effort envisions the construction of a sprawling infrastructure that could cost approximately $300 million. It would include research labs across the United States and perhaps link with research efforts abroad, where new architectures can be given a full workout. With a high-speed optical backbone and smart routers, this test bed would be far more elaborate and representative than the smaller, more limited test beds in use today. The idea is that new architectures would be battle tested with real-world Internet traffic. "You hope that provides enough value added that people are slowly and selectively willing to switch, and maybe it gets enough traction that people will switch over," Parulkar says. But he acknowledges, "Ten years from now, how things play out is anyone's guess. It could be a parallel infrastructure that people could use for selective applications."
 

Still, skeptics claim that a smarter network could be even more complicated and thus failure-prone than the original bare-bones Internet. Conventional wisdom holds that the network should remain dumb, but that the smart devices at its ends should become smarter. "I'm not happy with the current state of affairs. I'm not happy with spam; I'm not happy with the amount of vulnerability to various forms of attack," says Vinton Cerf, one of the inventors of the Internet's basic protocols, who recently joined Google with a job title created just for him: chief Internet evangelist. "I do want to distinguish that the primary vectors causing a lot of trouble are penetrating holes in operating systems. It's more like the operating systems don't protect themselves very well. An argument could be made, 'Why does the network have to do that?'"

According to Cerf, the more you ask the network to examine data -- to authenticate a person's identity, say, or search for viruses -- the less efficiently it will move the data around. "It's really hard to have a network-level thing do this stuff, which means you have to assemble the packets into something bigger and thus violate all the protocols," Cerf says. "That takes a heck of a lot of resources." Still, Cerf sees value in the new NSF initiative. "If Dave Clark...sees some notions and ideas that would be dramatically better than what we have, I think that's important and healthy," Cerf says. "I sort of wonder about something, though. The collapse of the Net, or a major security disaster, has been predicted for a decade now." And of course no such disaster has occurred -- at least not by the time this issue of Technology Review went to press.
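Cerf's efficiency argument can be illustrated with a small sketch, using an invented "signature" and packet size: a pattern that straddles a packet boundary is invisible to per-packet inspection, so a network that wants to scan for it must buffer and reassemble the stream first, at a real cost in memory and processing.

```python
# Sketch of the point above: a virus signature can straddle a packet boundary,
# so scanning packets one at a time misses it; the network would have to hold
# and reassemble the stream before scanning. Signature and packet size invented.

SIGNATURE = b"EVILCODE"
PACKET_SIZE = 6

stream = b"hello EVILCODE world"
packets = [stream[i:i + PACKET_SIZE] for i in range(0, len(stream), PACKET_SIZE)]

# Per-packet scanning: the signature is split across packets, so nothing matches.
per_packet_hit = any(SIGNATURE in p for p in packets)

# Stream reassembly: the network must buffer and join the packets before scanning.
reassembled_hit = SIGNATURE in b"".join(packets)

print(f"per-packet scan found it: {per_packet_hit}")    # False
print(f"after reassembly found it: {reassembled_hit}")  # True
```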
The NSF effort to make the medium smarter also runs up against the libertarian culture of the Internet, says Harvard's Zittrain. "The NSF program is a worthy one in the first instance because it begins with the premise that the current Net has outgrown some of its initial foundations and associated tenets," Zittrain says. "But there is a risk, too, that any attempt to rewrite the Net's technical constitution will be so much more fraught, so much more self-conscious of the nontechnical matters at stake, that the cure could be worse than the problem."

Still, Zittrain sees hazards ahead if some sensible action isn't taken. He posits that the Internet's security problems, and the theft of intellectual property, could produce a counterreaction that would amount to a clampdown on the medium -- everything from the tightening of software makers' control over their operating systems to security lockdowns by businesses. And of course, if a "digital Pearl Harbor" does occur, the federal government is liable to respond reflexively with heavy-handed reforms and controls. If such tightenings happen, Zittrain believes we're bound to get an Internet that is, in his words, "more secure -- and less interesting."

But what all sides agree on is that the Internet's perennial problems are getting worse, at the same time that society's dependence on it is deepening. Just a few years ago, the work of researchers like Peterson didn't garner wide interest outside the networking community. But these days, Clark and Peterson are giving briefings to Washington policymakers. "There is recognition that some of these problems are potentially quite serious. You could argue that they have always been there," Peterson says. "But there is a wider recognition in the highest level of the government that this is true. We are getting to the point where we are briefing people in the president's Office of Science and Technology Policy. I specifically did, and other people are doing that as well. As far as I know, that's pretty new."

Outside the door to Clark's office at MIT, a nametag placed by a prankster colleague announces it to be the office of Albus Dumbledore -- the wise headmaster of the Hogwarts School of Witchcraft and Wizardry, a central figure in the Harry Potter books. But while Clark in earlier years may have wrought some magic, helping transform the original Internet protocols into a robust communications technology that changed the world, he no longer has much control over what happens next.

But "because we don't have power, there is a greater chance that we will be left alone to try," he says. And so Clark, like Dumbledore, clucks over new generations of technical wizards. "My goal in calling for a fresh design is to free our minds from the current constraints, so we can envision a different future," he says. "The reason I stress this is that the Internet is so big, and so successful, that it seems like a fool's errand to send someone off to invent a different one." Whether the end result is a whole new architecture -- or just an effective set of changes to the existing one -- may not matter in the end. Given how entrenched the Internet is, the effort will have succeeded, he says, if it at least gets the research community working toward common goals, and helps "impose creep in the right direction."
 

Foundations for a New Infrastructure
The NSF's emerging effort to forge a clean-slate Internet architecture will draw on a wide body of existing research. Below is a sampling of major efforts aimed at improving everything from security to wireless communications.

PLANETLAB
Princeton University
Princeton, NJ
Focus: Creating an Internet "overlay network" of hardware and software -- currently 630 machines in 25 countries -- that performs functions ranging from searching for worms to optimizing traffic.

EMULAB
University of Utah
Salt Lake City, UT
Focus: A software and hardware test bed that provides researchers a simple, practical way to emulate the Internet for a wide variety of research goals.

DETER
University of Southern California Information Sciences Institute
Marina del Rey, CA
Focus: A test bed where researchers can safely launch simulated cyberattacks, analyze them, and develop defensive strategies, especially for critical infrastructure.

WINLAB (Wireless Information Network Laboratory)
Rutgers University
New Brunswick, NJ
Focus: Develops wireless networking architectures and protocols, aimed at deploying the mobile Internet. Performs research on everything from high-speed modems to spectrum management.
 