The History Of The Internet
In the 1960s, physicists realized that the electromagnetic pulse from a high-altitude nuclear explosion would disrupt, and quite possibly destroy, electrical systems over a large area. Any centralized communication network, such as the telephone system used by the United States (and the rest of the world), would be in trouble. RAND researcher Paul Baran went to work on the problem. Since the frequencies used by AM radio stations would not be disrupted by the blast, Baran realized the stations could relay messages, and his initial design used a dozen of them.
Meanwhile, Donald Davies of the United Kingdom's National Physical Laboratory found another solution to the problem, this one using digital networks. Davies solved Baran's problem while trying to address a completely different question: he was interested in transmitting large data files across networked computers. That problem is different from voice communication. Data traffic is bursty: lots of data for a short time, then nothing, then lots again.
Dedicating a telephone circuit to a data transfer did not make a lot of sense; the line would simply not be used to its full extent, and, unlike with voice, a small delay is not a major issue when transmitting data files. Both Baran and Davies hit upon the same solution. Redundancy in the network (multiple distinct paths between the sender and the recipient) was the key. Such redundancy is surprisingly cheap to obtain.
Say a network has redundancy 1 if there are exactly enough links connecting the nodes that there is one path between any two of them; if there are twice as many links, call that redundancy 2, and so on. Running simulations of such networks, Baran discovered that with a redundancy level of about 3, "The enemy could destroy 50, 60, 70% of the targets or more and [the network] would still work." The design resulted in a highly robust system.
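Baran's finding is easy to reproduce in miniature. The sketch below is not his actual simulation; the network model (a random spanning tree plus extra random links), the node counts, and the failure model are all illustrative assumptions. It builds a network at a given redundancy level, destroys a random fraction of the nodes, and measures how much of the surviving network can still communicate.

```python
import random
from collections import defaultdict, deque

def surviving_fraction(n_nodes, redundancy, fail_frac, seed=0):
    """Build a network with roughly `redundancy * (n_nodes - 1)` links,
    destroy a random fraction of the nodes, and return the share of
    surviving nodes that one surviving node can still reach."""
    rng = random.Random(seed)
    nodes = list(range(n_nodes))
    # Redundancy 1: a spanning tree, exactly one path between any two nodes.
    edges = {(rng.randrange(v), v) for v in nodes[1:]}
    # Add random extra links until the requested redundancy level is reached.
    target = int(redundancy * (n_nodes - 1))
    while len(edges) < target:
        a, b = rng.sample(nodes, 2)
        edges.add((min(a, b), max(a, b)))
    # "The enemy" destroys fail_frac of the nodes at random.
    survivors = set(rng.sample(nodes, round(n_nodes * (1 - fail_frac))))
    adj = defaultdict(list)
    for a, b in edges:
        if a in survivors and b in survivors:
            adj[a].append(b)
            adj[b].append(a)
    # Breadth-first search from one survivor to see whom it can still reach.
    start = next(iter(survivors))
    seen, queue = {start}, deque([start])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) / len(survivors)
```

Averaged over a few random networks, destroying half the nodes shatters a redundancy-1 network into small fragments, while a redundancy-3 network keeps most survivors connected, which is the effect Baran observed.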
The distributed networks that Davies and Baran had independently invented were even more decentralized than the networks of earlier efforts. Network redundancy meant that paths might be much longer than a typical call on the public switched telephone network, so the communications signal had to be digital, not analog. That turned out to be a tremendous advantage. There was no need for the entire data transfer to occur in one large message; indeed, efficiency and reliability argued that the message should be split into small packets. When a packet was received, the recipient's machine sent a message back to the sender saying, in effect, "OK; got it." If no acknowledgment arrived within a short period, the sender's machine would resend the packet. Of course, because the packets traveled by varied routes, they might arrive out of order.
But the packets could be numbered, and the receiving end could simply sort them back into order. One of the striking things about this proposed network was that while the network itself was to be extremely reliable, individual components need not achieve that same level of reliability. Instead the network depended on "structural reliability, rather than component reliability."
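The numbering-and-sorting scheme can be sketched in a few lines. This is a toy illustration, not any historical protocol; the message text and packet size are arbitrary. Each packet carries a sequence number, so the receiver can restore the original order no matter how the packets arrive.

```python
import random

def make_packets(message, size):
    """Split a message into small, sequence-numbered packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort arriving packets by sequence number and rejoin the payloads."""
    return "".join(payload for _, payload in sorted(packets))

message = "a distributed network has no single point of failure"
packets = make_packets(message, 8)
random.shuffle(packets)   # simulate packets travelling by varied routes
restored = reassemble(packets)
```

However the shuffle scrambles the arrival order, sorting by sequence number recovers the original message exactly.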
The Beginnings of Peer-to-Peer Networks
Small amounts of redundancy led to vastly increased reliability, a result that surprised the engineers. The Internet's decentralized control meant that all machines on the network were, more or less, peers. No one computer was in charge; the machines were roughly equal, and each was capable of any of the communication tasks. A computer could be the initiator or the recipient of a communication, or could simply pass a message through to a different machine. This is the essence of a peer-to-peer network, and very much the antithesis of the telephone company's hierarchical model of network communication.
In Britain, the telecommunications establishment supported Davies, but in the United States Baran received a chilly reception from AT&T. Baran was turning upside down all the ideas that AT&T had used to manage its system. While scientists at the research arm of AT&T were quite excited by Baran's work, corporate headquarters viewed that approbation as the reaction of head-in-the-clouds scientists and refused to have anything to do with Baran's packet-switched network. The odd thing about all this was that the new network was not actually a new network at all. Baran had built his network on top of the existing telephone network built by Alexander Graham Bell and his successors. It "hooked itself together as a mesh, simply connecting everything in new ways."
Scientific and technological ideas often emerge when the time is ripe, and Baran and Davies were not the only ones considering packet-switched networks. (The term packet is due to Davies, who wanted to convey the idea of a small package.) In 1961, Leonard Kleinrock, then a graduate student at MIT, published the first of a series of papers analyzing the mathematical behavior of messages traveling on one-way links in a network. This analysis was critical for building a large-scale packet-switched network.
One could say, only partially tongue in cheek, that the Internet is due to Sputnik, the Soviet satellite that in 1957 startled the United States out of its scientific complacency. In response the U.S. government founded the Defense Advanced Research Projects Agency (DARPA), a Department of Defense agency devoted to developing advanced technology for military use.
The Internet grew out of the ARPANET project and was perhaps the most important civilian application to come from DARPA. In 1966 DARPA hired MIT's Lawrence Roberts to build a network of different computers that could all communicate with one another. This would be a resource-sharing network. Each individual system would follow its own design; the only requirement was that the various networks be able to "internetwork" with one another through Interface Message Processors (IMPs). Designing the IMPs fell to a Cambridge, Massachusetts, consulting company, Bolt Beranek and Newman (BBN), one of whose researchers, Robert Kahn, later moved to DARPA. Kahn realized that only the IMPs would need a common language to communicate, which greatly simplified the entire scheme. The other machines within the individual networks would not need to be changed in any way in order to communicate with the rest of the system.
These principles made so much sense that forty years later they still govern the Internet:
• Each individual network would stand on its own and would not need internal changes in order to connect to the internetwork.
• Communications were on a “best-effort” basis. If a communication did not go through, it would be retransmitted.
• The IMPs would connect the networks. These gateway machines (now known as routers and switches) did not store information about the packets that flowed through them, but simply directed the packets.
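The "best-effort" principle above can be sketched as simple stop-and-wait retransmission. This is an illustration only: the loss probability, retry limit, and coin-flip model of the network are assumptions for the sketch, not part of the original design.

```python
import random

def send_with_retransmit(loss_prob, rng, max_tries=50):
    """Best-effort delivery: transmit, wait for an acknowledgment,
    and retransmit if none arrives. A coin flip stands in for the
    lossy network; returns how many attempts the transfer took."""
    for attempt in range(1, max_tries + 1):
        if rng.random() >= loss_prob:   # packet and its ack got through
            return attempt
    raise RuntimeError("no acknowledgment after %d attempts" % max_tries)

rng = random.Random(7)
attempts = [send_with_retransmit(0.3, rng) for _ in range(1000)]
# With 30% loss, a packet needs on average 1 / (1 - 0.3), about 1.4 sends.
```

The sender needs no knowledge of why a packet was lost, and the gateways in between keep no record of it; reliability emerges from the endpoints simply trying again.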
Amit Sen, a commercial pilot by training, has over 15 years of experience in corporate investigations, handling copyright and trademark infringement cases, pre-employment verification, industrial espionage investigations, asset and net-worth assessment assignments, and vendor/supplier verification cases, among others. Co-founder of Alliance One Detectives, a private investigation agency in Mumbai, Amit has successfully completed assignments in a wide range of sectors, including the machine tools industry, the pharmaceutical industry, the hospitality sector, specialized equipment (the oil and natural gas sector, the aviation industry, etc.), the telecom industry, and the IT and ITeS sectors. These cases have all involved both offline and online investigations.