The History of Networking
The Aloha Network – The Beginning of Ethernet
In 1968, Norman Abramson at the University of Hawaii developed a network to communicate between campuses of the university, which are located on different islands. The network employed radio to transmit data. In this system, which has been dubbed the “Aloha” system, a station wishing to send data first listened to determine whether the channel was in use. If it was not, the station transmitted a frame and then monitored the channel to see whether any other station had transmitted a frame at the same moment. If one had, a collision had occurred; the station then waited a random period of time and retransmitted the frame.
Because traffic on this network was light, collisions were rare and were not a serious problem. The Aloha system pioneered the basic precepts of Ethernet: collision detection and recovery.
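As a rough illustration only, the listen, transmit, detect-collision, and random-backoff cycle described above can be sketched in a few lines of Python. The simulated channel, the probabilities, and the timing values below are invented for the example and are not drawn from the original Aloha implementation.

```python
import random
import time

class SimulatedChannel:
    """Toy stand-in for the shared radio channel; collisions occur with a
    fixed probability purely for illustration."""
    def is_busy(self):
        return random.random() < 0.1           # channel occasionally in use

    def transmit(self, frame):
        pass                                    # pretend to send the frame

    def collision_detected(self):
        return random.random() < 0.2           # another station sent at the same moment

def send_frame(channel, frame, max_attempts=10):
    """Listen, transmit, check for a collision, and wait a random period
    before retransmitting: the cycle described above."""
    for attempt in range(1, max_attempts + 1):
        while channel.is_busy():                # listen before transmitting
            time.sleep(0.001)
        channel.transmit(frame)
        if not channel.collision_detected():    # no one else sent at the same moment
            return attempt                      # delivered on this attempt
        time.sleep(random.uniform(0.0, 0.05))   # random backoff, then retry
    return None                                 # gave up after repeated collisions

if __name__ == "__main__":
    print("delivered on attempt", send_frame(SimulatedChannel(), b"hello"))
```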
PARC Research Contribution
In the early 1970s, engineers at Xerox’s Palo Alto Research Center (PARC) envisioned a network with no central components. Each computer connected to the network over a single shared wire, using the same simple method. Because the medium was just a wire, all of the computers could “hear” all of the transmissions of all of the other computers. A network topology designed so that all of the computers connect to the same wire segment is called a “bus topology.” Computer networks based on a bus topology have several advantages:
- Outstanding modularity because every connection to the network is identical
- Simplicity, since every transmission is automatically delivered to every user
- Low cost, because of the modularity and simplicity
The disadvantage of the bus topology is that all of the users must share the same total wire bandwidth.
In 1973, Bob Metcalfe and David Boggs succeeded in building a bus-topology network that transmitted data between users at PARC at 2.94 Mbps. Metcalfe had conceived of the network as a ubiquitous medium flowing through a building and connecting every computer. The experimental network was dubbed “Ethernet,” after the luminiferous ether through which electromagnetic radiation was once thought to propagate.
The DIX Consortium – Ethernet Becomes a De Facto Standard
Xerox never marketed 3 Mbps Ethernet, although a system was installed in the White House, where it was used for word processing during the Jimmy Carter presidency. In 1976, Ethernet was upgraded to 10 Mbps by Ron Crane, Bob Garner, and Roy Ogus. This faster design quickly became a de facto networking standard and was published in September 1980 by DIX, an industry consortium consisting of DEC, Intel, and Xerox.
After its launch by DIX, Ethernet’s acceptance grew rapidly. In 1982, 3Com introduced the first Ethernet adapter for the IBM PC, an ISA card selling for $950. In June 1983, the IEEE approved the first 802.3 standard, which, aside from some minor modifications, was identical to the DIX standard.
As networking continued to grow, CPUs became faster and applications more sophisticated. The “single wire” of the bus topology was moved inside a small box called a hub, and all of the computers on an Ethernet network “plugged in” to the hub to connect to the network. Soon the shared 10 Mbps of bandwidth became insufficient to handle the requirements of growing LANs.
The obvious solution was to break up the LAN into multiple segments, each with the full 10 Mbps of bandwidth. The problem with this approach was that users on one segment still needed data stored on other segments.
The industry’s answer was to build routers to carry traffic between LAN segments. As the technology continued to develop, these “enterprise” networks became more and more difficult to manage. Equipment became expensive, the training required to manage a network extensive, and throughput across multiple routers sluggish. The real need was for more bandwidth.
Fast Ethernet
In 1992, Grand Junction Networks recognized the need for increased bandwidth and launched a 100 Mbps version of Ethernet. It was Ethernet, but at ten times the bit rate, and it was called “Fast Ethernet.” Fast Ethernet emerged as an IEEE standard in 1995, once again after having first been accepted as a de facto standard. However, Fast Ethernet still carried the burden of collision detection and recovery. A station must be able to detect a collision before it finishes transmitting a minimum-size (512-bit) frame; because the bit rate increased tenfold while the minimum frame size stayed the same, the time available for a collision to propagate back shrank correspondingly, and a much smaller collision domain diameter of 205 meters was adopted. Fast Ethernet was nonetheless accepted very quickly by the market once affordable combo boards, adapters capable of auto-detecting either 10 or 100 Mbps operation, became available.
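To make that reasoning concrete, the short Python sketch below compares the slot time and the resulting cable-only distance budget at the two bit rates. It is a simplified, illustrative calculation, not the actual 802.3 timing budget; the propagation speed is an assumed round figure, and repeater and adapter delays are ignored.

```python
# Back-of-the-envelope sketch of why a tenfold bit-rate increase forces a
# smaller collision domain. Assumes the 802.3 minimum frame of 512 bits and
# a rough signal speed in copper; device delays are deliberately left out.

MIN_FRAME_BITS = 512            # 64-byte minimum Ethernet frame
PROPAGATION_M_PER_S = 2e8       # rough signal speed in copper cabling (~2/3 c)

for name, bit_rate in [("10 Mbps Ethernet", 10e6), ("100 Mbps Fast Ethernet", 100e6)]:
    slot_time = MIN_FRAME_BITS / bit_rate             # time to send the minimum frame
    round_trip = slot_time * PROPAGATION_M_PER_S      # distance a signal travels in that time
    diameter_limit = round_trip / 2                   # a collision must echo back within the slot
    print(f"{name}: slot time {slot_time * 1e6:.2f} us, "
          f"cable-only diameter limit ~{diameter_limit:.0f} m")
```

At 10 Mbps the cable-only limit works out to a few kilometers, while at 100 Mbps it drops to a few hundred meters; once repeater and adapter delays are budgeted in, the Fast Ethernet figure shrinks to the 205 meters cited above.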
It is interesting to note that early sales of Fast Ethernet repeaters and switches did not keep pace with adapter sales, so a significant percentage of the 100 Mbps combo adapters installed early on actually operated at 10 Mbps. This was partly because Fast Ethernet switches were relatively expensive, and large investments in Fast Ethernet equipment often provided little relief from network congestion. It was not until Ethernet switches began to use Full Duplex transmission that widespread, real-world use of Fast Ethernet began. With Full Duplex, Fast Ethernet links doubled their usable bandwidth, were no longer troubled by collisions, and began to take over the market.
But even at ten times the speed of the original Ethernet, Fast Ethernet has its limits. While the increase in bandwidth was welcomed by the industry and Fast Ethernet came to command the networking market, the industry has continued to develop the next-generation Ethernet technology: Gigabit Ethernet (GbE).
Gigabit Ethernet
Among the many complexities of computer networking, perhaps the single issue upon which everyone can agree is the ever-increasing need for more bandwidth.
Since its inception, Gigabit Ethernet has been a dominant factor in improving network performance. Originally coming to market as a fiber-only technology, GbE has mainly been deployed in the backbone. In recent months, however, the copper version of GbE has begun to make a significant impact on the market. Incorporating Full Duplex transmission and Flow Control to limit packet loss, GbE has become a viable desktop solution. According to recent market research, Gigabit Ethernet usage is expected to follow the same rapid growth trend that Fast Ethernet did during the last decade.
The expansion of GbE to the edge of the network will, in many cases, provide measurable improvements in performance for a variety of applications and users. The cost of GbE has fallen so dramatically that it is no longer such a significant factor in a deployment decision.
In the foreseeable future, it is conceivable that GbE equipment will actually be more affordable than its 10/100 predecessors.