1969 - Birth of a Network
The Internet as we know it today, in the mid-1990s, traces its origins back to a Defense Department project in 1969. The subject of the project was wartime digital communications. At that time the telephone system was about the only theater-scale communications system in use. A major problem had been identified in its design - its dependence on switching stations that could be targeted during an attack. Would it be possible to design a network that could quickly reroute digital traffic around failed nodes? A possible solution had been identified in theory: build a "web" of datagram networks, called a "catenet", and use dynamic routing protocols to constantly adjust the flow of traffic through the catenet. To pursue it, the Defense Advanced Research Projects Agency (DARPA) launched the DARPA Internet Program.

1970s - Infancy
DARPA Internet, largely the plaything of academic and military researchers, spent more than a decade in relative obscurity. As Vietnam, Watergate, the Oil Crisis, and the Iranian Hostage Crisis rolled over the nation, several Internet research teams proceeded through a gradual evolution of protocols. In 1975, DARPA declared the project a success and handed its management over to the Defense Communications Agency. Several of today's key protocols (including IP and TCP) were stable by 1980, and adopted throughout ARPANET by 1983.

Mid 1980s - The Research Net
Let's outline key features, circa 1983, of what was then called ARPANET. A small computer was a PDP-11/45, and a PDP-11/45 does not fit on your desk. Some sites had a hundred computers attached to the Internet; most had a dozen or so, probably with something like a VAX doing most of the work - mail, news, EGP routing. Users did their work on DEC VT-100 terminals. FORTRAN was the word of the day. Few companies had Internet access, relying instead on SNA and IBM mainframes; the Internet community was dominated by universities and military research sites. Its most popular service was the rapid email it made possible with distant colleagues. In August 1983, there were 562 registered ARPANET hosts (RFC 1296).

UNIX deserves at least an honorable mention, since almost all the initial Internet protocols were developed first for UNIX, largely due to the availability of kernel source (for a price) and the relative ease of implementation (relative to things like VMS or MVS). The University of California at Berkeley (UCB) deserves special mention, because its Computer Science Research Group (CSRG) developed the BSD variants of AT&T's UNIX operating system. BSD UNIX and its derivatives would become the most common Internet programming platform.
Many key features of the Internet were already in place, including the IP and TCP protocols. ARPANET was fundamentally unreliable in nature, as the Internet still is today. This principle of unreliable delivery means that the Internet only makes a best-effort attempt to deliver packets. The network can drop a packet without any notification to sender or receiver. Remember, the Internet was designed for military survivability. The software running on either end must be prepared to recognize data loss, retransmitting data as often as necessary to achieve ultimate delivery.
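This end-to-end recovery is easy to see in miniature. Below is a minimal stop-and-wait sketch in Python, sending over UDP (a best-effort datagram service, much like raw IP); the receiver address, the "ACK" convention, and the retry limit are illustrative assumptions, not any particular protocol's specification.

```python
import socket

# Hypothetical receiver address and retry policy, chosen for illustration.
DEST = ("127.0.0.1", 9999)
TIMEOUT = 1.0        # seconds to wait for an acknowledgment
MAX_RETRIES = 5

def send_reliably(payload: bytes) -> bool:
    """Send one datagram over an unreliable service, retransmitting as needed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT)
    try:
        for _ in range(MAX_RETRIES):
            sock.sendto(payload, DEST)        # the network may silently drop this
            try:
                ack, _ = sock.recvfrom(1024)  # wait for the receiver's ACK
                if ack == b"ACK":
                    return True               # delivery confirmed end-to-end
            except socket.timeout:
                pass                          # lost packet or lost ACK: retransmit
        return False                          # give up after MAX_RETRIES attempts
    finally:
        sock.close()

if __name__ == "__main__":
    print("delivered:", send_reliably(b"hello"))
```

TCP performs essentially this bookkeeping, with far more sophistication, so that applications see a reliable stream on top of an unreliable network.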
Late 1980s - The PC Revolution
Driven largely by the development of the PC and LAN technology, subnetting was standardized in 1985 when RFC 950 was released. LAN technology made the idea of a "catenet" feasible - an internetwork of networks. Subnetting opened the possibility of interconnecting LANs with WANs (a small sketch follows the next paragraph).

The National Science Foundation (NSF) started the Supercomputer Centers program in 1986. Until then, supercomputers such as Crays were largely the playthings of large, well-funded universities and military research centers. NSF's idea was to make supercomputer resources available to those of more modest means by constructing five supercomputer centers around the country and building a network linking them with potential users. NSF decided to base its network on the Internet protocols, and NSFNET was born. For the next decade, NSFNET would be the core of the U.S. Internet, until its privatization and ultimate retirement in 1995.
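To make the subnetting idea concrete, here is a small Python sketch of the RFC 950 mechanism; the class B block and the mask width are arbitrary examples, not addresses from any real site.

```python
import ipaddress

# One classful "class B" network, as a 1985-era site might have been assigned.
net = ipaddress.ip_network("130.10.0.0/16")

# Borrow 8 host bits as a subnet field: the single /16 becomes 256 /24 subnets,
# so each LAN can get its own prefix behind one externally visible network.
subnets = list(net.subnets(new_prefix=24))
print(len(subnets), "subnets, e.g.", subnets[0], "and", subnets[1])

# A router applies the subnet mask to a destination address to pick the LAN.
print(ipaddress.ip_address("130.10.5.20") in ipaddress.ip_network("130.10.5.0/24"))
```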
Domain naming was stable by 1987 when RFC 1034 was released. Until then, hostnames were mapped to IP addresses using static tables, but the Internet's exponential growth had made this practice infeasible.
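The shift is easy to picture from a single host's point of view: before DNS, each host looked names up in a locally maintained copy of a master table; with DNS, it asks the distributed name system instead. A minimal sketch, with invented table entries:

```python
import socket

# Pre-DNS style: a static table, in the spirit of the NIC's HOSTS.TXT.
# These entries are invented; a real table had to be re-fetched as hosts changed.
STATIC_TABLE = {
    "host-a.example": "192.0.2.10",
    "host-b.example": "192.0.2.20",
}

def resolve_static(name: str) -> str:
    return STATIC_TABLE[name]           # fails for any host not in the table

def resolve_dns(name: str) -> str:
    return socket.gethostbyname(name)   # delegates to the resolver and DNS

if __name__ == "__main__":
    print(resolve_static("host-a.example"))
    print(resolve_dns("localhost"))
```

With DNS, no central table needs redistributing as the Internet grows; authority for each zone is delegated to the people who run it.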
In the late 1980s, important advances traced poor network performance to poor TCP behavior, and a string of papers by the likes of Nagle and Van Jacobson (RFC 896, RFC 1072, RFC 1144, RFC 1323) presented key insights into TCP performance.
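One of those insights is still visible in every socket API: Nagle's algorithm (RFC 896) coalesces small writes into fewer segments to keep tiny packets from swamping the network. A short sketch of toggling it on a standard TCP socket:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Default: Nagle's algorithm is on, so small sends may be delayed and batched.
print("TCP_NODELAY:", sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))

# Latency-sensitive, small-message applications can turn it off,
# trading some network efficiency for immediacy.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print("TCP_NODELAY:", sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
sock.close()
```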
The 1988 Internet Worm was the largest security failure in the history of the Internet. More information can be found in RFC 1135. All things considered, it could happen again.
Early 1990s - Address Exhaustion and the Web
In the early 90s, the first address exhaustion crisis hit the Internet technical community. The present solution, CIDR, will sustain the Internet for a few more years by making more efficient use of IP's existing 32-bit address space (the sketch at the end of this section shows the idea). For a more lasting solution, the IETF is looking at IPv6 and its 128-bit address space, but CIDR is here to stay.

Crisis aside, the World Wide Web (WWW) has been one of the Internet's most exciting recent developments. The idea of hypertext has been around for more than a decade, but in 1989 a team at the European Center for Particle Research (CERN) in Switzerland developed a set of protocols for transferring hypertext via the Internet. In the early 1990s it was enhanced by a team at the National Center for Supercomputing Applications (NCSA) at the University of Illinois - one of NSF's supercomputer centers. The result was NCSA Mosaic, a graphical, point-and-click hypertext browser that made the Internet easy. The resulting explosion in "Web sites" drove the Internet into the public eye.
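As for the CIDR sketch promised above: CIDR's efficiency comes from aggregation, letting one short prefix stand in for many classful networks in backbone routing tables. The blocks below are arbitrary examples, worked with Python's ipaddress module.

```python
import ipaddress

# Eight contiguous "class C" (/24) networks, as one provider might hold.
nets = [ipaddress.ip_network(f"200.25.{i}.0/24") for i in range(8)]

# CIDR collapses them into a single supernet with a shorter prefix,
# so the backbone carries one route instead of eight.
print(list(ipaddress.collapse_addresses(nets)))   # [IPv4Network('200.25.0.0/21')]

# The aggregate still covers every address in the original blocks.
print(ipaddress.ip_address("200.25.5.9") in ipaddress.ip_network("200.25.0.0/21"))
```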
Mid 1990s - The New Internet
Of at least as much interest as Internet's technical progress in the 1990s has been its sociological progress. It has already become part of the national vocabulary, and seems headed for even greater prominence. It has been accepted by the business community, with a resulting explosion of service providers, consultants, books, and TV coverage. It has given birth to the Free Software Movement.

The Free Software Movement owes much to bulletin board systems, but really came into its own on the Internet, due to a combination of forces. The public nature of the Internet's early funding ensured that much of its networking software was non-proprietary. The emergence of anonymous FTP sites provided a distribution mechanism that almost anyone could use. Network newsgroups and mailing lists offered an open communication medium. Last but not least were individualists like Richard Stallman, who wrote EMACS, launched the GNU Project, and founded the Free Software Foundation. In the 1990s, Linus Torvalds wrote Linux, the popular (and free) UNIX clone operating system.
The explosion of capitalist conservatism, combined with a growing awareness of the Internet's business value, has led to major changes in the Internet community. Many of them have not been for the good. First, there seems to be a growing departure from the Internet's history of open protocols, published as RFCs. Many new protocols are being developed in an increasingly proprietary manner. IGRP, a trademark of Cisco Systems, has the dubious distinction of being the most successful proprietary Internet routing protocol, capable of operation only between Cisco routers. Other protocols, such as BGP, are published as RFCs, but with important operational details omitted. The notoriously mis-named Open Software Foundation has introduced a whole suite of "open" protocols whose specifications are available - for a price - and not on the net. I am forced to wonder: 1) why do we need a new RPC? and 2) why won't OSF tell us how it works?

People forget that businesses have tried to run digital communications networks in the past. IBM and DEC both developed proprietary networking schemes that only ran on their hardware. Several information providers did very well for themselves in the 80s, including LEXIS/NEXIS, Dialog, and Dow Jones. Public data networks were constructed by companies like Tymnet and run into every major US city. CompuServe and others built large bulletin board-like systems. Many of these services still offer a quality and depth of coverage unparalleled on the Internet (examine Dialog if you are skeptical of this claim). But none of them offered nudie GIFs that anyone could download. None of them let you read through the RFCs and then write a Perl script to tweak the one little thing you needed to adjust. None of them gave birth to a Free Software Movement. None of them caught people's imagination.

The very existence of the Free Software Movement is part of the Internet saga, because free software would not exist without the net. "Movements" tend to arise when progress offers us new freedoms and we find new ways to explore and, sometimes, to exploit them. The Free Software Movement has offered what would have been unimaginable when the Internet was formed - games, editors, windowing systems, compilers, networking software, and even entire operating systems, available to anyone who wants them, without licensing fees, with complete source code, and all you need is Internet access. It also offers challenges, forcing us to ask what changes are needed in our society to support these new freedoms that have touched so many people. And it offers chances at exploitation, from businesses using free software development platforms for commercial code, to the Internet Worm and the security risks of open systems.

People wonder whether progress is better served through government funding or private industry. The Internet defies the popular wisdom of "business is better". Both business and government tried to build large data communication networks in the 1980s. Business depended on good market decisions; the government researchers based their system on openness, imagination, and freedom. Business failed; the Internet succeeded. Our reward has been its commercialization.