SD-WAN (Software-Defined WAN) is the trending buzzword in the network communications world. At its core, SD-WAN uses centralized control to direct traffic over the strongest and fastest available path. It takes the major benefits of SDN, cost efficiency and flexibility, and reproduces them across the WAN, allowing users to take advantage of every available pathway, including the public internet, to get the most value and best performance for their dollars. In doing so, it eliminates the need for expensive MPLS links, freeing up more of your budget for other, more impactful business activities. It’s the next wave in efficient network traffic flow, but what exactly is the difference?
An SD-WAN expert outlines five key points of differentiation.*
The premise here is that traffic traversing the SD-WAN network can be aggregated across all available links, realizing their combined bandwidth. Some vendors can also move traffic from path to path based on the characteristics of the links and their performance measured against an application profile. For example, latency-sensitive traffic can be steered to the lowest-latency line, while high-bandwidth, latency-tolerant traffic can leverage the higher-bandwidth lines, and all of this happens dynamically, on the fly. The lines all operate in an active/active state, so there is no failover event when a line fails; traffic simply continues to flow over the remaining active lines.
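The steering logic above can be sketched in a few lines. This is a simplified illustration, not any vendor's implementation; the link metrics, application profiles, and the `pick_link` function are all hypothetical.

```python
# Hypothetical per-application path steering across active/active links.
# Link metrics and application profile names are illustrative only.

links = [
    {"name": "mpls",  "latency_ms": 12, "bandwidth_mbps": 50,  "up": True},
    {"name": "cable", "latency_ms": 35, "bandwidth_mbps": 300, "up": True},
    {"name": "lte",   "latency_ms": 60, "bandwidth_mbps": 40,  "up": True},
]

def pick_link(app_profile, links):
    """Choose a link for an application based on its sensitivity."""
    active = [l for l in links if l["up"]]  # active/active: no failover event
    if app_profile == "latency_sensitive":      # e.g. voice traffic
        return min(active, key=lambda l: l["latency_ms"])
    if app_profile == "high_bandwidth":         # e.g. bulk backups
        return max(active, key=lambda l: l["bandwidth_mbps"])
    return active[0]                            # default: first active link

print(pick_link("latency_sensitive", links)["name"])  # mpls
links[0]["up"] = False   # the lowest-latency line fails...
print(pick_link("latency_sensitive", links)["name"])  # cable
```

Because every link stays in the active pool, losing one simply shrinks the set of candidates; traffic keeps flowing with no failover event.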
For years, redundancy and failover have been handled by dynamic routing protocols such as BGP or OSPF. These protocols use a path (or, in the case of equal-cost multi-path, a few paths) and switch to a backup path if the primary goes down. What these protocols DO NOT have is the ability to measure the quality of a path. So if a line is saturated and dropping packets, or is flapping but still passing just enough packets to keep the routing-protocol session alive, network performance will be critically impacted. Most SD-WAN solutions embed information into the tunnel overlay packets to measure the performance of each path; if one is underperforming, it can be taken out of service and then restored once its performance comes back into tolerance.
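The out-of-service / restore-to-service behavior can be sketched as a simple health check fed by per-probe measurements. The `PathMonitor` class and its thresholds are assumptions for illustration, not a real product API.

```python
# Illustrative sketch: take a path out of service when measured loss or
# latency exceeds a threshold, restore it when measurements return to
# tolerance. Thresholds and the PathMonitor class are assumptions.

class PathMonitor:
    def __init__(self, max_loss_pct=2.0, max_latency_ms=150):
        self.max_loss_pct = max_loss_pct
        self.max_latency_ms = max_latency_ms
        self.in_service = True

    def update(self, loss_pct, latency_ms):
        """Feed measurements derived from tunnel-overlay probe packets."""
        self.in_service = (loss_pct <= self.max_loss_pct
                           and latency_ms <= self.max_latency_ms)
        return self.in_service

mon = PathMonitor()
print(mon.update(loss_pct=0.1, latency_ms=20))  # True: path in service
print(mon.update(loss_pct=5.0, latency_ms=20))  # False: taken out of service
print(mon.update(loss_pct=0.2, latency_ms=30))  # True: restored to service
```

Note the contrast with BGP/OSPF: the decision is driven by measured loss and latency, not merely by whether the session stays up.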
As mentioned before, applications can be steered to the paths most appropriate for best performance. Because of this, these platforms offer a great deal of visibility and control at the application level. Capabilities vary from platform to platform, but all provide some degree of application classification and treatment.
The “software defined” revolution has introduced a host of ways to provide centralized, standardized control over large infrastructures. In a controller-based model, you can manage many different devices from a single pane of glass instead of logging into each device individually. Imagine creating one profile for device configuration (VLANs, IP addresses, interfaces), business policy (classification, prioritization, rate limiting), and security policy (firewall rules, application control) that applies to all of your locations; when you change those policies, the configuration updates automatically at every location within minutes. This is not new and not specific to SD-WAN, but coupled with the other characteristics it is very powerful.
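A minimal sketch of the single-profile idea: one central policy object copied out to every site by the controller. The profile fields and site names below are hypothetical placeholders, not a real controller schema.

```python
# Sketch of one centralized profile applied to many locations.
# Profile contents and site names are illustrative assumptions.

profile = {
    "vlans": [10, 20],
    "qos": {"voip": "high", "bulk": "low"},
    "firewall": ["deny inbound any", "permit outbound established"],
}

sites = {"branch-1": {}, "branch-2": {}, "hq": {}}

def push_profile(profile, sites):
    """Copy the central profile to every site's config (controller push)."""
    for name in sites:
        sites[name] = dict(profile)
    return sites

push_profile(profile, sites)
print(all(cfg["qos"]["voip"] == "high" for cfg in sites.values()))  # True
```

A change made once to `profile` and re-pushed reaches every location, which is the "minutes, not site-by-site logins" point made above.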
Using the application-recognition features mentioned earlier, traffic can be prioritized at the application level. No longer do you need to worry about keeping QoS/CoS tags intact across the internet (let’s face it, this never works) or about building complicated QoS policies at each location. You define the applications to prioritize for your organization, set up classification rules, assign those rules their appropriate priority level, and you’re done. Most platforms can also rate-limit applications to a percentage of available bandwidth: choking down Windows Update or Netflix traffic, for example, so it can still operate without saturating the whole pipe.
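The percentage-of-bandwidth cap can be illustrated with a toy limiter. The `make_limiter` helper and the 10% figure are assumptions chosen for the example, not defaults from any platform.

```python
# Toy sketch of capping one application at a fraction of link capacity,
# e.g. letting Windows Update run without saturating the whole pipe.
# make_limiter and the numbers below are illustrative assumptions.

def make_limiter(link_mbps, app_share):
    """Allow an app at most app_share of the link's capacity."""
    budget_mbps = link_mbps * app_share
    def allow(app_demand_mbps):
        # Grant up to the budget; excess is queued or dropped by the shaper.
        return min(app_demand_mbps, budget_mbps)
    return allow

limit_updates = make_limiter(link_mbps=100, app_share=0.10)  # 10% cap
print(limit_updates(50))  # 10.0: choked down to 10 Mbps
print(limit_updates(5))   # 5: under the cap, passes untouched
```

The key property is exactly the one described above: the application keeps working, but it can never consume more than its configured share of the link.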
*from an article by Jason Gilbert