Why Understanding the Evolution of Internet Traffic Management is Important
When a network approaches link saturation, subscribers quickly notice poor internet performance. As Internet Service Providers (ISPs) work to bring their subscribers the best internet experience, traffic management has become an important part of the process. To trace the evolution of internet traffic management, this blog will delve into the rise of application-aware architectures, Edward Snowden’s role in accelerating encryption, trends in application-based traffic management, the problem of bufferbloat, the present-day internet traffic management experience, and more.
Prefer watching a video? If so, scroll to the end of this article to watch Dan Siemon, Chief Product Officer at Preseem, walk you through the evolution of internet traffic management in under 15 minutes.
Where Internet Traffic Management Started
In 2006, the internet was still a relatively new technology that many people were eager to access. ISPs raced to meet this growing demand, and the industry as a whole was booming. Around the same time that ISPs began signing up an abundance of customers, BitTorrent became popular. The volume of BitTorrent traffic quickly reached a point where it disrupted other network traffic: links, cable modems, and backhauls filled up, creating a poor experience for BitTorrent and non-BitTorrent users alike.
Noticing the poor performance of their networks, network managers decided that they needed to control subscribers’ use of BitTorrent. Accordingly, they sought a way to control the applications used across their networks. In doing so, they spawned an entirely new industry around application-specific traffic management. While application management initially consisted of limiting the number of connections across a network, the notion of exerting control over applications would later feed into modern-day debates such as the Comcast Corp. v. FCC case and the net neutrality movement.
The Move to Application-Aware Architectures
Poor reception to connection limiting led to the development of inline, traffic-shaping architectures. This gave rise to complex inspection engines built to identify the application associated with each packet. Deployed in conjunction with traffic management policies, these inspection engines helped decide which applications to prioritize.
In the beginning, it was relatively easy to match traffic accurately. Over time, however, as control was exerted over an ever-larger number of applications, accuracy declined and hard questions piled up: which applications should be prioritized, and for which use cases? Answering those questions made traffic-shaping architectures increasingly intricate, and this growing complexity steadily eroded the usefulness of traffic matching. Revelations brought about by Edward Snowden would soon make matching traffic across networks even more difficult.
A Turning Point with Snowden
In 2013, Edward Snowden released thousands of secret documents from the US government, revealing pervasive surveillance of internet traffic. Among other things, these documents revealed that traffic transmitted between data centers was not encrypted, allowing the fibers connecting them to be tapped. Snowden’s leak galvanized technology companies and the internet community to pursue new protocols and to encrypt everything, no matter how trivial the data. As a result, accurately identifying the application associated with each packet has become continually more difficult.
New Trends in Application-Based Traffic Management
Today, we’re at the point where the share of identifiable internet traffic is decreasing, while the complexity required to identify it is increasing. As noted earlier, when network links are utilized beyond roughly 80% of their capacity, latency can skyrocket, leading to a poor quality of experience for subscribers. In the past, the focus has consistently been on trying to control network traffic so that it stays below link saturation. However, what if, instead of instituting a set of increasingly complex policies to try to control network traffic, we were to simply improve the subscriber experience when links are nearing saturation? This is the new direction of application-based traffic management.
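The link between utilization and latency can be made concrete with a back-of-the-envelope queueing model. The snippet below uses the textbook M/M/1 formula W = S / (1 − ρ), an illustrative assumption rather than anything specific to real ISP links (which are burstier), to show how mean delay explodes as utilization approaches 100%:

```python
def mm1_delay(utilization, service_time_ms=1.0):
    """Mean time a packet spends in an M/M/1 queue: W = S / (1 - rho),
    where S is the mean service time and rho the link utilization."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

# Delay grows slowly at first, then blows up past ~80% utilization.
for rho in (0.5, 0.8, 0.95, 0.99):
    print(f"utilization {rho:.0%}: mean delay {mm1_delay(rho):.1f} ms")
```

At 50% utilization the mean delay is only twice the service time, but at 95% it is twenty times higher, which is why a link running "hot" feels so much worse than one with headroom.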
Bufferbloat and Active Queue Management (AQM)
In 2010, Jim Gettys coined the term “bufferbloat” to describe the high latency and poor network behavior that occur when the oversized buffers attached to saturated links fill up. Growing awareness of bufferbloat sparked a renaissance in queue management techniques known as Active Queue Management (AQM): algorithms applied to packet buffers that allow for an optimal network experience even under heavy load.
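CoDel (Controlled Delay) is one of the best-known AQM algorithms to come out of the bufferbloat work. Its core idea is to judge a queue by how long packets wait in it (their sojourn time) rather than by how many bytes are buffered, and to drop a packet when delay stays too high for too long. The sketch below is a heavily simplified, illustrative version under assumed constants; a real CoDel implementation also shortens its drop interval as consecutive drops accumulate:

```python
from collections import deque

TARGET = 0.005    # 5 ms: acceptable standing queue delay (CoDel's default)
INTERVAL = 0.100  # 100 ms: how long delay must persist before dropping

class CoDelQueue:
    """Simplified CoDel-style AQM sketch: drop based on packet
    sojourn time, not queue length. Timestamps are passed in
    explicitly so the behavior is easy to simulate and test."""

    def __init__(self):
        self.q = deque()
        self.first_above = None  # when sojourn time first exceeded TARGET

    def enqueue(self, packet, now):
        self.q.append((packet, now))

    def dequeue(self, now):
        while self.q:
            packet, arrived = self.q.popleft()
            sojourn = now - arrived
            if sojourn < TARGET:
                self.first_above = None  # queue is draining fine
                return packet
            if self.first_above is None:
                self.first_above = now   # start the grace interval
                return packet
            if now - self.first_above < INTERVAL:
                return packet            # still within the grace interval
            # Delay has stayed above TARGET for a full INTERVAL: drop this
            # packet (implicitly signalling senders to slow down), restart
            # the interval, and deliver the next packet instead.
            self.first_above = now

# Five packets arrive at t=0 and drain slowly; by t=0.2s the standing
# delay has persisted past INTERVAL, so one packet gets dropped.
q = CoDelQueue()
for i in range(5):
    q.enqueue(f"pkt{i}", 0.0)
print(q.dequeue(0.001), q.dequeue(0.010), q.dequeue(0.050), q.dequeue(0.200))
```

The key design point is that a short burst that drains quickly is left alone, while a *standing* queue, the signature of bufferbloat, triggers drops, which causes TCP senders to back off and keeps latency low.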
Find out how Preseem uses AQM to enforce bandwidth limits and manage high-bandwidth applications, while improving each subscriber’s quality of experience (QoE).
The Present-Day Experience of Internet Traffic Management
As a result of AQM, the saturated-link problem of the past has been solved, and complex policies and application awareness are no longer required to deliver a good subscriber experience. Because Preseem uses AQM, we can ensure an optimal network experience while also measuring and providing QoE analytics for access points, towers, and individual subscribers. For a more in-depth explanation of the evolution of internet traffic management, check out the video below!
Using AQM techniques, Preseem enforces bandwidth limits and manages high-bandwidth applications to ensure a positive subscriber QoE. Click the button below to sign up for a 30-day free trial.