Internetworking In The 21st Century

Andrew McRae
cisco Systems



The 21st century is closer than the year 1990. The Internet has (finally) spawned a killer application that has triggered unexpected and unprecedented growth, and while most users are besotted with the Web, the demand is for more and more bandwidth to feed the hungry child. Where is this bandwidth going to come from? What are the technologies that are going to form the foundation for internetworking in the 21st Century?
This paper attempts to paint a picture of the underlying technologies that will enable the successful deployment of the Internet applications currently being planned, designed and built.


Recently a Chicago newspaper described one of the creators of the Netscape Navigator as `one of the founders of the Internet', in spite of the fact that the individual was born around the same time as the Internet itself. Such is the overwhelming and somewhat unexpected explosion brought about by the first Internet killer application, the Web. Even though we had been talking about it for years, the sudden acceptance and use of the Web took even the most optimistic of pundits by surprise.

Many sceptics have predicted the Imminent Death of the Net, variously attributed to government censorship, the "about to be totally accepted by everybody" OSI protocols, total and irretrievable collapse of the Internet backbone, the exhaustion of the IP address space, the unveiling of private networks Much Better(tm) than the Internet (e.g. MSN), and the obvious fact that ordinary people wouldn't be able to use it without a computer science degree. In spite of all this, the Internet has not only failed to roll over and die, but has become something far greater than its proponents ever expected. How has it done so? The primary reason is the fundamentally solid technical foundations and engineering instilled into it from the early days. The Internet was not designed, it was engineered; and in fact re-engineered at key times. The critical factor is that the Internet infrastructure can take advantage of new technology without disruption, and as new technologies become available, they are used to regenerate older portions.

The Internet faces a grand challenge in maintaining this record of growth. What technology will be used in the future to allow the Internet to sustain the combined load of all the applications that are being mooted? What should network planners be installing to allow their organisations to actively take advantage of the killer applications being designed and deployed now and in the future?

Internet Growth

Traditionally the Internet backbone has been a US government supported entity, with organisations such as the NSF playing an active role in funding and maintaining the core of the Internet.

In April 1995 the NSFnet disbanded in order to allow commercialisation of the core to take place.

The following graph shows the traffic figures and extrapolations from that time:

At present we are well off the scale, an indication of the exponential nature of the growth.

A number of factors are driving this growth.

It is impossible to predict the true requirements of the Internet infrastructure, as the introduction of faster and faster links makes new applications possible. This is a market that is expanding into a vacuum, and has not reached any bounding factor except cost of bandwidth. The challenge facing the organisations dealing with the core of the Internet is knowing what technologies need to be in place to deal with this growth.

New Infrastructure Technology

There are a number of areas of new technology that will have a major impact on the Net. These fall broadly into three categories: enterprise local and campus area networking, wide area networking (including Internet backbone requirements), and user access.

Enterprise Networking

It is surprising to discover that the biggest growth area for networking equipment such as routers and switches is not the Internet itself, but equipment within organisations wishing to network the entire enterprise, e.g. government departments, medium and large companies, etc.

A key concept is the emergence of the Intranet, or an Internet within an enterprise. These Intranets use the widely available tools and applications originally designed for the Internet (such as routers and web browsers) to provide easily accessible data to all levels of an organisation.

cisco Systems itself is a prime example, where a large amount of data is available via internal web sites. Nearly all data that needs to be accessed, such as documentation, project information, and database records, is obtainable via web pages. This trend will grow rapidly as more gateways become available to proprietary database systems and other former islands of information. Once organisations start to internally rearrange their data to allow business to be performed via the web, it is a simple step to open this up to allow external business to be conducted via the same interface.

The requirement this brings about is for a stable and common enterprise network that can interoperate with many different hosts, legacy systems, and desktop systems. The need is not only for stability, but for speed. Most organisations cannot accept new technology that carries a large installation cost, so existing infrastructures must be incorporated seamlessly.

The use of twisted pair Ethernet (10BaseT) has been very widespread, and continues to grow. The deployment of Fast Ethernet (100BaseT) has been made practical through the use of Ethernet switches, which provide traffic isolation; since 100BaseT uses the same packet framing as 10BaseT, it is simple to integrate the two. The fact that 100BaseT can run on the same twisted pair means that organisations have a protected investment and a smooth upgrade path to faster networks.

This has led to the use of 100BaseT as a backbone interconnect, allowing 10BaseT to be used to the desktop, and 100BaseT to servers and as backbones within a local area such as a building. The use of virtual LANs within switches allows workgroups to be kept separate, and protocols such as ISL (Inter-Switch Link) across Fast Ethernet may be used to route between virtual LANs.

The next step will be the introduction of 100BaseT to the desktop; the fact that the same wiring is used is a huge advantage. Fast Ethernet has the bandwidth to bring a new range of applications, such as full motion video, to the desktop. Since each 100BaseT link is a point-to-point link between the end station and the switch, it may operate full duplex without suffering collisions the way a collapsed backbone hub would, allowing full use of the 100 Mbits/sec in both directions. Fast Ethernet will spell doom for more complex and expensive backbone networks such as FDDI.

Once the desktop obtains 100 Mbits/sec, the backbone interconnects will be stressed if they remain at 100 Mbits/sec. In this arena it was expected that ATM would start to make a major impact. For a number of reasons, this has not eventuated; ATM is still considered a technology with a low penetration of the enterprise backbone market. A driving factor was the integration of voice, data and video, but cost and complexity have been major inhibitors. The 1996 Global IT Survey by IDC indicates that only about 5% of users are considering any use of ATM.

The emergence of the next generation of Ethernet, known as Gigabit Ethernet, will likely place a few more nails in the coffin of ATM as a local enterprise backbone. The first Gigabit Ethernet solutions will arrive over the next 12 months, and it is expected that this technology (in conjunction with Fast Ethernet) will become the backbone interconnect of choice over FDDI and ATM.

WAN Technologies

Wide area networking infrastructures are much more reliant upon the provision of physical link capacity by the relevant telecommunication organisations, e.g. Telstra in Australia, MCI/Sprint etc. in the USA. It is clear that the internal technologies used in these organisations will increasingly be made available as raw bandwidth interconnecting points of presence for Internet providers and other private networks. Examples of these technologies are SONET (Synchronous Optical Network), ATM etc. Currently, the major backbone providers make extensive use of OC-3 (155 Mbits/sec) connections, with some OC-12 (622 Mbits/sec) starting to be deployed. It is interesting to see a demand for Packet over SONET, which uses the raw SONET channel as a simple IP packet transfer mechanism, eliminating the overhead of fragmenting packets into ATM cells.

Within the next twelve to eighteen months, it is expected that OC-48 technology (2.4 Gbits/sec) will be deployed within the core of the Internet. This places a large demand on the actual switching and routing equipment used within this network. For example, an OC-3 link from Stockholm to Sydney (using a bandwidth by round-trip-time calculation) would require 6 Mbytes of buffering, and requires packet switch times in the order of 4 microseconds. OC-48 has 16 times the bandwidth of OC-3, with a corresponding increase in support requirements.
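The buffering figure quoted above follows from a simple bandwidth-delay product calculation, sketched below; the 300 millisecond round trip time is an assumption, chosen as a plausible value for a Stockholm to Sydney path and consistent with the figure in the text:

```python
# Buffering needed on a long-haul link is roughly the bandwidth
# multiplied by the round-trip time (the amount of data "in flight").
# The 300 ms RTT for Stockholm-Sydney is an assumed value.
OC3_BPS = 155_000_000   # OC-3 line rate, bits per second
RTT_SECONDS = 0.3       # assumed Stockholm-Sydney round trip time

buffer_bytes = OC3_BPS * RTT_SECONDS / 8
print(f"Required buffering: {buffer_bytes / 1e6:.1f} Mbytes")  # ~5.8 Mbytes
```

At OC-48 rates the same calculation yields roughly 16 times as much buffering, which is why the round-trip factor dominates equipment design for long links.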

I don't expect many organisations have to deal with the issues that are facing the Internet core providers, or the vendors that are working with these providers to deliver working solutions. The problems facing ordinary mortals are usually related to the lack of digital capability within the current telephone network. It is interesting to see that Telstra are experiencing major problems with users tying up phone circuits with modem calls; had Telstra had the forethought to provide a reasonably priced digital network using technology such as frame relay or ISDN, these problems would not exist. I suspect there are few sympathisers, just people installing POTS lines and running multi-link PPP over multiple modems to try and get decent bandwidth without paying extraordinary sums of money.

The availability of reasonably priced digital capacity is key to developing the local Internet infrastructure successfully, and telecommunication authorities must be willing to provide these capabilities.

User Access

In the USA, there is a race on. The prize is lucrative: access to millions of homes to provide voice, data and, most importantly, services. In the contest are the cable TV providers, who are using their existing cabling to deliver high speed Internet access directly to homes. Once achieved, the user will have access to a complete range of services; the user will be able to make phone calls via the Net, order pizza, or surf the Web. The phone companies are (and rightly so) most concerned that their traditional customer base could be wiped out by a bunch of TV moguls; they are attempting to deliver their own high speed service via a number of Digital Subscriber Line systems (xDSL). There can only be one eventual winner in this race, even if both deliver a product. The real winner is the user, who gains the equivalent of an Ethernet delivered to the home, connected to the Internet.

The ones who are getting worried are the backbone providers, who are considering what it means to have their end users connected via a 6 Mbit link instead of a 28.8 Kbit modem.

It is clear that the Internet will survive for a long time to come, even if it evolves into something quite different to what we are used to.

Protocol Changes

Whilst there is massive change occurring within the physical hardware that the Internet operates on, what changes are required at the protocol or organisational levels to support the Internet growth?

The first observation is that, in spite of the Chicken Littles, the sky hasn't fallen, at least not in the manner predicted. A valid concern was raised several years ago about the rapid exhaustion of the IP address space. An ancillary concern was the explosion of IP routes, and whether the backbone routers could cope with the exponential addition of new routes.

In the time-honoured Internet tradition, the problem was examined from an engineering perspective, and several actions were set in place to alleviate the problems.

The observation is that CIDR has slowed the headlong rush of address space exhaustion, and has certainly eased the problem of route explosion. Currently, a typical backbone router would maintain around 40,000 IP address prefixes, but the growth in routes has slowed to less than exponential rates.
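The route aggregation at the heart of CIDR can be illustrated with Python's standard ipaddress module (a modern convenience used purely for illustration; the prefixes shown are hypothetical):

```python
import ipaddress

# Four contiguous /24 networks, as might be allocated to a single
# organisation under CIDR (hypothetical prefixes).
routes = [ipaddress.ip_network(n) for n in (
    "203.0.112.0/24", "203.0.113.0/24",
    "203.0.114.0/24", "203.0.115.0/24",
)]

# A backbone router need only carry the single aggregate /22,
# advertised by the provider on the organisation's behalf.
aggregates = list(ipaddress.collapse_addresses(routes))
print(aggregates)  # [IPv4Network('203.0.112.0/22')]
```

Applied across thousands of provider allocations, this collapsing of contiguous prefixes is what keeps the backbone route tables growing at less than exponential rates.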

Another factor has been the development of Network Address Translation (NAT) applications, where hosts behind a firewall or network gateway have their (presumably not officially allocated) IP addresses translated into official addresses from a small pool available to the gateway. In this way, a large number of hosts can share a relatively small number of official IP addresses.
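The core of the translation mechanism can be sketched as follows (a minimal, illustrative model: a real NAT also rewrites checksums, tracks ports, and times out idle mappings; all addresses are hypothetical):

```python
# Minimal NAT sketch: map private internal addresses onto a small
# pool of official addresses on demand. Illustrative only.
class NatGateway:
    def __init__(self, official_pool):
        self.free = list(official_pool)   # unused official addresses
        self.table = {}                   # internal -> official mappings

    def translate(self, internal_addr):
        """Return the official address for an internal host,
        allocating one from the pool on first use."""
        if internal_addr not in self.table:
            if not self.free:
                raise RuntimeError("official address pool exhausted")
            self.table[internal_addr] = self.free.pop(0)
        return self.table[internal_addr]

gw = NatGateway(["198.51.100.1", "198.51.100.2"])
print(gw.translate("10.0.0.5"))   # 198.51.100.1
print(gw.translate("10.0.0.9"))   # 198.51.100.2
print(gw.translate("10.0.0.5"))   # 198.51.100.1 (existing mapping reused)
```

Because only hosts with active external conversations need an official address at any one time, the pool can be far smaller than the internal host population.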

Both CIDR and NAT have provided breathing space to allow the development and the deployment of the next generation of IP, IPv6. It will take some time for IPv6 to penetrate the Internet mainstream, but there are some compelling technical advantages over the current version of IP that may mean rapid acceptance.

The deployment of IPv6 will be of little interest to most users of the Internet; depending on the need, IPv6 may start to operate on the Internet backbones over the next two years with the current IP protocol being used at the edges for some time to come.


The challenges that are facing the Internet are major. These challenges fall into the broad areas of new switching infrastructures and new routing techniques.

Switching Infrastructures

Years ago, a common Internet gateway was a host such as a Sun workstation. The most common gateway now is a specialised router. This change came about because the router could do the job faster and support more network connections. Ethernet switches are commonly used within organisations, often with a `router on a stick' connected to provide routing capability.

This trend could be summarised as one of increasing specialisation. To handle the performance of the networks being deployed, routers and switches need to contain custom hardware and software quite distinct from hosts. This trend will undoubtedly continue as technologies such as Gigabit Ethernet become common, and higher end switches are expected to fuse routing and switching to provide seamless network operation. Certainly as the demand for higher and higher speeds continues, the technology demands on interconnecting devices will be major.

Another infrastructure change is the high level of aggregation through channelised high speed connections such as OC-3, allowing many distinct circuits to be carried over a single physical connection. This has already been in place for some time with the ISDN Primary Rate Interface, but the use of higher speed channelised connections has been limited.

The integration of other services such as voice and video has been touted for a long time, without a great deal of realisation. The fundamental problem has been one of perspective: the telecommunications industry sees data as just another service to be provided alongside voice and video, while the internetworking world sees a common data networking platform as capable of supporting other services such as voice or video. Until this dichotomy is resolved, there will always be a mismatch of protocol layering, e.g. trying to provide video/voice quality of service using ATM cells, while also trying to efficiently run protocols over it with their own ideas about quality of service.

New Routing Techniques

The essential problem with high speed routing is that the forwarding node has only a very small window of time in which to examine the packet header and decide what to do with it. Specialised hardware can be used to help the router do this, but there is still a finite amount of work that must be done when forwarding an IP packet.
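The central step of that work is the longest-prefix match against the forwarding table, sketched below as a naive linear search for clarity; real routers use specialised data structures and hardware assistance, and the prefixes and next hops here are hypothetical:

```python
import ipaddress

# Hypothetical forwarding table: (prefix, next hop).
TABLE = [
    (ipaddress.ip_network("0.0.0.0/0"),      "upstream"),   # default route
    (ipaddress.ip_network("203.0.112.0/22"), "serial0"),
    (ipaddress.ip_network("203.0.113.0/24"), "ethernet1"),
]

def next_hop(dest):
    """Longest-prefix match: of all prefixes containing the
    destination, choose the most specific (longest) one."""
    dest = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in TABLE if dest in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(next_hop("203.0.113.7"))   # ethernet1 (the /24 beats the /22)
print(next_hop("203.0.114.1"))   # serial0
print(next_hop("192.0.2.1"))     # upstream (default route only)
```

Performing this lookup, plus header validation and rewriting, within the times shown below is the crux of the scaling problem.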

The table below provides some indication of the problem. This shows the required switching times for a minimum sized (64 byte) packet.
Packet Switching Times

Technology            Bandwidth        Switching Time
10BaseT               10 Mbits/sec     67 microseconds
100BaseT              100 Mbits/sec    6.7 microseconds
OC-3                  155 Mbits/sec    4.3 microseconds
Gigabit Ethernet      1 Gbits/sec      670 nanoseconds
OC-48                 2.4 Gbits/sec    280 nanoseconds
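The figures in the table can be reproduced with a short calculation. They are consistent with 672 bits per packet, i.e. a minimum 64 byte frame plus the 8 byte Ethernet preamble and 12 byte inter-frame gap; this on-the-wire overhead is an inference from the numbers, not something stated in the table:

```python
# Time available to switch one minimum-sized packet at each line rate.
# 64 byte frame + 8 byte preamble + 12 byte inter-frame gap = 84 bytes.
BITS_PER_PACKET = 84 * 8   # 672 bits on the wire (inferred assumption)

RATES = {
    "10BaseT":          10e6,
    "100BaseT":         100e6,
    "OC-3":             155e6,
    "Gigabit Ethernet": 1e9,
    "OC-48":            2.4e9,
}

for name, bps in RATES.items():
    microseconds = BITS_PER_PACKET / bps * 1e6
    print(f"{name:18s} {microseconds:8.3f} microseconds")
```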

Given the above times, it is hard to see how current mechanisms will scale to perform the task at hand.

Techniques are now being developed that use new routing technology to reduce the routing overhead considerably and speed up the switching of packets. Without these techniques, the Internet will not scale to cope with the expected demand.

Routing protocols have developed greatly over the last few years, with the advent of link state protocols, border protocols and route aggregation at demarcation routers. Routing protocols have evolved with the Internet, and periodically new generations of routing protocols have had to be introduced that are capable of scaling a thousand fold. It is expected that as the Internet grows, routing protocols will have design goals of network stability and policy management as well as fast convergence. Particular attention will be paid to controlling the dampening of the system so that route flaps or changes do not cause major network reconvergence or disruption.
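The dampening idea can be sketched as a penalty that accumulates on each route flap and decays exponentially with time, with the route suppressed while the penalty sits above a threshold. The constants below are arbitrary illustrative values, not taken from any real protocol:

```python
# Route flap dampening sketch: each flap adds a fixed penalty, the
# penalty halves every HALF_LIFE seconds, and the route is suppressed
# while the penalty exceeds SUPPRESS_THRESHOLD. Constants are
# illustrative only.
FLAP_PENALTY = 1000.0
SUPPRESS_THRESHOLD = 2000.0
REUSE_THRESHOLD = 750.0
HALF_LIFE = 15.0  # seconds

def penalty_at(flap_times, now):
    """Total penalty at time `now`: each past flap contributes
    FLAP_PENALTY, halved once per HALF_LIFE elapsed since it occurred."""
    return sum(FLAP_PENALTY * 0.5 ** ((now - t) / HALF_LIFE)
               for t in flap_times if t <= now)

flaps = [0.0, 1.0, 2.0]  # three flaps in quick succession

# Just after the third flap the route would be suppressed...
print(penalty_at(flaps, 2.0) > SUPPRESS_THRESHOLD)   # True
# ...but a minute of stability decays the penalty below the reuse level.
print(penalty_at(flaps, 62.0) < REUSE_THRESHOLD)     # True
```

The effect is that an unstable route is withheld from the rest of the network until it has demonstrated a period of stability, so its flapping does not trigger repeated global reconvergence.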


Looking back over the years, it is a miracle that the Internet has survived (and grown!) given how little the fundamental protocols have changed. It is truly a testament to the designers that it has stood the test of time.

Nevertheless, the future growth of the Internet will increasingly highlight the necessity to migrate to new switching techniques; the core of the Internet will change radically as very high speed backbone lines come into play, and the advent of new user access devices such as cable modems or xDSL will open up the Internet to a much wider audience. This in turn will fuel new opportunities for applications. Underlying all this, the Internet infrastructure will require continual engineering to allow it to scale as needed.

One thing is sure: we live in interesting times.