Bandwidth management is how Internet Service Providers (ISPs) control traffic on their networks and enforce subscriber plan rates.
Traffic shaping is a commonly used bandwidth control mechanism that’s intended to improve network performance, decrease latency, and make better use of available bandwidth. The ultimate goal is to improve the subscriber quality of experience (QoE), though it’s worth noting that traffic management can only help QoE if the underlying access network itself is healthy. If you have an overloaded access point, for example, it doesn’t matter how smart your traffic management system is—your subscribers are still likely to have a poor experience.
With improved QoE, internet users enjoy a smooth online experience that always “feels fast.” This in turn helps reduce support costs for ISPs, while improving their reputation and bottom line.
Think of bandwidth control as the place where the science of network management and the art of customer success meet.
There are many different traffic management techniques used by ISPs to control bandwidth, each with its own pros and cons. Of course, here at Preseem we have definite thoughts on this 🙂 That’s why we’ve decided to explain how we approach bandwidth shaping and look at which methods we don’t use, and why.
Bandwidth Control – Preseem Best Practices
Our bandwidth management philosophy is that ISPs shouldn’t have to manually manipulate subscriber traffic to improve QoE.
For bandwidth enforcement and traffic management, we use active queue management (AQM) techniques based on the FQ-CoDel algorithm. FQ-CoDel looks at all flows and separates them into two categories: bulk (e.g. Netflix, system updates) and interactive (e.g. online gaming, Zoom calls).
This automatically prioritizes the interactive flows. We then leverage AQM to set a latency target, rather than a size target, for the bulk flows. This way, no packet stays in the queue for longer than 10–20 milliseconds. The result is lower and more consistent latency, lower jitter, and most importantly, a better experience for your subscribers.
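To make the two ideas above concrete, here’s a deliberately simplified Python sketch: packets are sorted into per-flow queues (the flow-queueing part), flows are served round-robin so a bulk download can’t starve a gaming flow, and any packet that has waited longer than the latency target is dropped (the CoDel-style part). Real FQ-CoDel (RFC 8290) is far more sophisticated—it uses DRR scheduling, a sparse-flow optimization, and an interval-based drop law—so treat this purely as an illustration of the concept, not as Preseem’s implementation.

```python
from collections import OrderedDict, deque

TARGET_MS = 15.0  # latency target, in the article's 10-20 ms range

class SimpleFQ:
    """Toy flow-queueing AQM: per-flow FIFO queues, round-robin service,
    head drop when a packet's queue sojourn time exceeds TARGET_MS."""

    def __init__(self):
        self.queues = OrderedDict()  # flow_id -> deque of (enqueue_time_ms, payload)

    def enqueue(self, flow_id, payload, now_ms):
        self.queues.setdefault(flow_id, deque()).append((now_ms, payload))

    def dequeue(self, now_ms):
        # Serve flows round-robin so a bulk flow cannot starve an interactive one.
        for flow_id in list(self.queues):
            q = self.queues[flow_id]
            # CoDel-ish: drop packets that have already waited past the target.
            while q and now_ms - q[0][0] > TARGET_MS:
                q.popleft()
            if q:
                _, payload = q.popleft()
                self.queues.move_to_end(flow_id)  # rotate served flow to the back
                if not self.queues[flow_id]:
                    del self.queues[flow_id]
                return flow_id, payload
            del self.queues[flow_id]
        return None

fq = SimpleFQ()
fq.enqueue("bulk", "b1", now_ms=0)
fq.enqueue("game", "g1", now_ms=0)
print(fq.dequeue(now_ms=5))   # ('bulk', 'b1') -- both fresh, served in turn
print(fq.dequeue(now_ms=5))   # ('game', 'g1')
fq.enqueue("bulk", "stale", now_ms=10)
print(fq.dequeue(now_ms=40))  # None -- packet waited 30 ms, past the target, so dropped
```

The key property this illustrates is that the queue is bounded by *time*, not by packet count: no matter how fast a bulk sender pushes, nothing sits in the queue longer than the target.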
For ISPs, this translates directly to fewer subscriber tickets and support calls, more efficient troubleshooting, happier customers, and reduced churn.
Just as crucially, this is a set-it-and-forget-it, hands-off approach that “just works.” You don’t need to worry about setting complex rules, or keeping up with application changes. You also don’t have to monkey with your subscriber traffic to temporarily boost or limit rates.
Our QoE optimized plan enforcement ensures a subscriber hitting the limit of their plan continues to get a good experience. In practice, this solves problems like video calls dropping when other members of the household are streaming video.
How DM-Tech Improved Throughput and Subscriber QoE with Preseem
Let’s look at an example to see how this translates in the real world. DM-Tech is an ISP in Northern California that’s been serving residents there since 1994. As Tyler Casey, co-owner and Network Engineer at DM-Tech explained during a webinar at our WISP Virtual Summit, the company installed Preseem at a time when they were overwhelmed with subscriber complaints about internet speed.
After installing Preseem, they immediately identified hundreds of problematic APs, backhauls, and subscribers. To fix this, they froze all scheduled installs and invested in the upgrades their network needed.
Impact: Within eight months, DM-Tech’s subscriber throughput increased from 2.5 Gbps to over 4 Gbps. Because of this, Tyler said he could “infer that we were operating at a massive bandwidth deficit, which we were blissfully unaware of. Fortunately, once we got Preseem integrated, Preseem’s shaping engine kept the latency as low as possible for us while we were upgrading the network to provide more throughput.”
As a result, Tyler said “our subscriber retention and satisfaction is at an all-time high. This is evident in both social media reviews and just the word-of-mouth increase that we’ve been getting.” Watch the full video below to learn more about DM-Tech’s success.
What We Don’t Do and Why
Bandwidth Bursting
Bandwidth bursting gives a subscriber a set base plan rate plus the ability to exceed it for a short amount of time. For example, a plan may have a 5 Mbps rate with the ability to burst to 10 Mbps for 15 seconds.
The idea is that ISPs can temporarily boost available bandwidth without having to upgrade to support those speeds permanently. For example, throughput may be doubled for the first 15 seconds of a subscriber’s online activity. Theoretically, this also means that subscribers can use more available bandwidth without having to upgrade to a higher plan.
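Bursting like this is typically implemented with a token-bucket shaper: tokens refill at the base plan rate, and the bucket’s depth determines how long a burst can run. A quick back-of-the-envelope sketch, using the hypothetical 5/10 Mbps plan from the example above:

```python
# Hedged sketch: sizing a token bucket for the example 5 Mbps plan
# that may burst to 10 Mbps for 15 seconds (values from the text above).
BASE_MBPS = 5.0     # tokens refill at the base plan rate
BURST_MBPS = 10.0   # short-term burst rate
BURST_SECONDS = 15.0

# While bursting, tokens drain at (burst - base) Mbps, so the bucket
# must be deep enough to cover that deficit for the full burst window.
bucket_megabits = (BURST_MBPS - BASE_MBPS) * BURST_SECONDS
print(bucket_megabits)      # 75.0 megabits of tokens
print(bucket_megabits / 8)  # 9.375 megabytes of bucket depth
```

Note what this means on a shared AP: for those 15 seconds, each bursting subscriber can pull an extra 75 megabits through the network beyond their provisioned rate, which has to come from somewhere.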
Bandwidth bursting drawbacks include:
- Many applications, like Netflix, adjust their behavior to the available bandwidth. With a higher-rate burst at the start, for example, the player may select a higher video bitrate. The key thing to remember about Netflix is that it’s not a continuous streaming protocol; it’s a chunk download protocol. It downloads each chunk at the fastest rate it can and then goes idle. This can mean more data to transfer and more impact on other subscribers sharing the access point.
- When the burst ends and the rate suddenly drops, flows that are actively moving data still have enough packets in flight to fill the larger pipe. Those packets queue up and add latency until the queue drains and the sender backs off.
- The biggest problem is that giving a burst to one subscriber doesn’t come for free when the AP is busy. Instead, it comes at the cost of lower throughput, and higher latency and loss, for other subscribers, making their QoE worse.
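The in-flight-packet effect described above is easy to quantify. A hedged back-of-the-envelope calculation, assuming a 50 ms round-trip time (an illustrative figure, not a measured one) and a burst that drops from 10 Mbps back to 5 Mbps:

```python
# Hedged back-of-the-envelope: extra queueing delay when a 10 Mbps
# burst ends and the rate drops back to 5 Mbps. The 50 ms RTT is an
# assumption for illustration, not a measured value.
BURST_MBPS = 10.0
BASE_MBPS = 5.0
RTT_S = 0.050  # assumed round-trip time

# Bits in flight needed to keep the 10 Mbps pipe full (bandwidth-delay product).
in_flight_bits = BURST_MBPS * 1e6 * RTT_S
# At 5 Mbps only half of that fits in flight; the excess sits in the
# shaper's queue until the sender's congestion control backs off.
excess_bits = in_flight_bits - BASE_MBPS * 1e6 * RTT_S
added_latency_ms = excess_bits / (BASE_MBPS * 1e6) * 1000
print(round(added_latency_ms, 1))  # 50.0 ms of extra queueing delay
```

In other words, under these assumptions the rate drop alone injects tens of milliseconds of queueing delay—exactly the kind of latency spike an AQM-based approach is designed to prevent.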
ISPs who engage in bandwidth bursting are often trying to improve QoE for their subscribers by making their plan “feel faster.” While this is a commendable goal, it’s better achieved by reducing latency and isolating flows, rather than through a short-term, unreliable boost in throughput.
Deep Packet Inspection
One way to think about Deep Packet Inspection (DPI) is as a classification function. DPI looks at packets and decides, where possible, which application generated the packet. Classification is achieved with complex techniques such as signatures, heuristics, and machine learning.
The challenge with this is that internet applications are constantly changing. New applications are always being developed (e.g. new games, sites) and existing ones change. Keeping up-to-date is a constant game of cat and mouse. This is getting more challenging over time as applications add encryption and move to common infrastructure at a small number of cloud providers (Google Cloud, AWS, Azure, etc.). This chase means that the cost of keeping up with what users are doing will never end.
Here are a few reasons why FQ-CoDel and AQM are a better option for bandwidth management:
- FQ-CoDel isolates bulk and interactive traffic flows to reduce latency, even under heavy usage—this means significantly fewer “slow internet” calls. DPI can help explain which applications a customer was using when their internet was slow, but isn’t it better to just solve the slow internet problem in the first place?
- AQM manages high-bandwidth applications and enforces plan speed limits automatically without manual configuration, giving you time back in your day and peace of mind that your network is running smoothly. Unlike with DPI, there are no uber-technical “geek knobs.” This means you can diagnose and proactively fix problem areas easily.
- DPI looks at packets and decides, where possible, which application generated them, then attempts to classify them for queue placement. With FQ-CoDel, you just set it and forget it. The algorithm classifies packets by flow behavior and assigns them to bulk or interactive queues automatically, with no application signatures required. That leads to low latency, happier customers, and an easier life for you and your staff!
Application-Specific Rate Limiting
Application-specific limiting involves setting artificial bandwidth constraints on specific internet applications to address subscriber QoE issues. This is actually related to DPI, as that’s how traffic for each individual application is identified.
Let’s take Netflix as a common example. As an ISP, you may receive complaints from subscribers that Netflix is buffering or displaying sub-HD quality. With application-aware rate limiting, the “solution” might be to limit Netflix throughput to a rate the customer’s plan can support, thereby eliminating the buffering.
However, why sell a plan to a customer that you can’t deliver? And who’d want to buy a plan that deliberately delivers less than the customer thinks they’re buying? Application-specific rate limiting is really just a misleading band-aid that doesn’t address the underlying network issues. It’s also going to cost you time and involve a great deal of manual work.
For example, in the above scenario, setting a rate limit for Netflix works around the problem for that particular application. Similar rules would then be required for every other video streaming service, game, or application a subscriber might use. This adds major complexity including:
- Determining the appropriate rate for each application (this in turn may be household-specific, e.g. based on the number of devices)
- Determining inter-application priorities (or implicitly letting them be equal)
- Having up-to-date application signatures for each application. This is possible for the major applications, but there are many more (e.g. new games) that impact an individual subscriber’s experience.
We believe strongly in application-agnostic QoE measurement, analysis, and optimization. Preseem provides visibility to identify problems in the network that cause poor subscriber QoE in the first place. Once those underlying problems have been addressed, Preseem’s QoE optimization ensures that each customer gets a good experience even if they fully load their connection.
TCP Acceleration
The Transmission Control Protocol (TCP) is used to move data reliably between two devices on the internet. Many online applications rely on it, from streaming video to web browsing.
TCP acceleration aims to improve certain aspects of network performance, such as throughput, by modifying TCP congestion control and retransmission behavior. It does this by actively intercepting the TCP flow between the two endpoints. There are a few issues with this method of bandwidth management, however:
- With the emergence of the QUIC protocol, TCP traffic is declining. As a result, so is the usefulness of optimizing it.
- TCP acceleration comes at the expense of other traffic. There’s no free lunch on a busy access point, subscriber plan, or any congestion point.
- Similarly, a TCP flow accelerated toward a low-modulation subscriber can consume more of the AP’s airtime and affect other subscribers.
- TCP Accelerators create another stateful point in the network. This has complexity and reliability implications.
Think of it this way. A properly provisioned access network configured to deliver good QoE doesn’t need TCP Acceleration. In a “clean” network, TCP Acceleration and techniques like application-specific rate limiting have no real benefit and only bring unnecessary complexity.
Our Bandwidth Management Summary
Consumers today rightly expect access to a reliable internet service that doesn’t slow down or experience buffering issues at any time of day. This need has become even more critical as the number of online devices and interactive applications increases.
Band-aid solutions that temporarily solve bandwidth issues until the next crisis comes along are not ideal. At Preseem, we build tools that enable ISPs to understand and proactively improve subscriber QoE. We use AQM to reduce latency and make the internet feel faster for your subscribers, even under heavy usage.
If you’d like to learn more about our bandwidth management philosophy, or if you’re ready to remove the band-aids and improve your network’s health permanently, contact us to schedule a demo and start your free 30-day trial.