We live in an age where the volume of traffic passing through networks is at an all-time high.
It is not just the volume, though: in response to the higher incidence of cyberattacks, we are seeing increasing amounts of encrypted traffic, with up to 50 percent of all Internet traffic now encrypted.
All of this places networks under real strain. It is not just sensitive data being encrypted either; even Netflix now encrypts its movies. This presents a real challenge for network managers and application architects, who must keep their applications secure and responsive for end users whilst dealing with this computationally expensive encrypted traffic.
The increasing amount of encrypted traffic places enormous strain on load balancers. One of the core functions of load balancers is to offload, or decrypt, these Secure Sockets Layer (SSL) and Transport Layer Security (TLS) encrypted packets before they reach the application.
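To make the offload concrete, here is a minimal sketch (in Python, with illustrative names and addresses that are assumptions, not taken from any product) of what a software load balancer's data path does: terminate the client's TLS session once, at the balancer, then distribute the decrypted requests across back-end servers, for example round-robin.

```python
import itertools
import ssl

# Hypothetical back-end pool; the addresses are illustrative only.
BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]
_rotation = itertools.cycle(BACKENDS)

def make_tls_context(cert_file: str, key_file: str) -> ssl.SSLContext:
    """TLS termination: the load balancer, not the application server,
    holds the certificate and does the CPU-heavy decryption."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return ctx

def pick_backend() -> str:
    """Simple round-robin selection for the decrypted request stream."""
    return next(_rotation)
```

A real data plane would wrap each accepted connection with the context's `wrap_socket` and proxy the bytes to the chosen back end; the point here is simply that the expensive decryption happens once, at the balancer, so the application servers never see it.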
The traditional way of dealing with this is to deploy hardware load balancers: purpose-built appliances with processors and chips designed to meet throughput or transactions-per-second requirements. Putting this in place, however, requires a large up-front investment to ensure there is enough hardware to cover unforeseen incidents and spikes in traffic. Much of that capital investment therefore lies dormant most of the time, coming into play only to absorb peak traffic.
At a time when expenditure and budgets are under more scrutiny than ever, having large amounts of capital tied up in technology that is infrequently used is a luxury that has long been treated as necessary.
Hardware has, until recently, been regarded as the only option robust enough to handle high-volume network traffic. The likes of F5 and Citrix have been pushing this message for years, and for much of that time it has absolutely been the case.
The appliance vendors have previously tried to position their virtual appliances, deployed on virtual machines instead of custom hardware, as the answer for enterprises considering software-based alternatives. However, these virtual appliances are seen as a poor relation to their hardware cousins.
They inherit many of the architectural limitations of hardware appliances and have been seen, rightly so, as too brittle to cope with sudden increases in traffic or to handle high-volume encrypted traffic. Unfortunately, this has served to cement the belief that software load balancers are simply not good enough for the most demanding networking needs.
This is no longer the case. The last few years have seen a real departure from reliance on ‘boxes’. Organisations have moved their data away from tangible servers locked away in IT departments to flexible, cost-effective cloud solutions. Advances in Intel-architecture servers with faster processors and memory, hardware and software improvements in network cards, and software-defined datacentre architectures have allowed network technology providers to take a new look at what is possible, how much it should cost and who is able to provide it.
A thorough architectural redesign of enterprise-grade load balancing along software-defined principles is enabling businesses to reimagine what load balancers can do. Imagine a load balancer agile and flexible enough to scale up and down according to demand, so that you only pay for the capacity you actually use. Imagine all of that flexibility with no impact on the performance of your network.
This is no longer a fantasy. The rethinking of the architecture starts with separating the control or management plane (the layer that makes configuration and orchestration decisions) from the data plane (the layer that provides the actual load balancing and other L4–L7 services).
With this separation, it is possible to have central control over a distributed pool of software load balancers that run on any physical server, virtual machine, container, or the public cloud. Recent testing has proved that such software-defined load balancers can elastically scale applications from zero to one million SSL transactions per second with no impact on performance.
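The separation described above can be sketched in a few lines; every class and method name here is hypothetical, not taken from any vendor's product. The central control plane owns the desired configuration and pushes it to whatever data-plane instances are currently registered, wherever they happen to run.

```python
from dataclasses import dataclass, field

@dataclass
class DataPlaneNode:
    """One software load-balancer instance: bare metal, VM, container, or cloud."""
    name: str
    config: dict = field(default_factory=dict)

    def apply(self, config: dict) -> None:
        # In a real system this would reprogram the local L4-L7 services.
        self.config = dict(config)

@dataclass
class ControlPlane:
    """Central brain: makes configuration and orchestration decisions."""
    nodes: list = field(default_factory=list)
    desired: dict = field(default_factory=dict)

    def register(self, node: DataPlaneNode) -> None:
        self.nodes.append(node)
        node.apply(self.desired)      # new capacity picks up the config at once

    def push(self, config: dict) -> None:
        self.desired = config
        for node in self.nodes:       # one decision, many enforcement points
            node.apply(config)
```

The design choice this illustrates is that capacity and policy are decoupled: adding a node anywhere enlarges the pool without any per-box configuration, which is what makes elastic scaling possible.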
They can automatically scale load balancing services up from zero to peak traffic and back down to normal levels without any issue. This proves that the same robust service can now be expected from software as from hardware, except, of course, at a fraction of the price, since you pay only for what you need, when you need it.
This is a huge step forward for the industry. Given the levels of encrypted traffic and the sudden spikes in activity that many organisations experience, a service elastic and yet robust enough to handle these scenarios with no impact on performance is a game changer. It also gives organisations a tool for many adverse scenarios, such as effectively handling distributed denial-of-service (DDoS) attacks.
Elastic autoscaling allows networks to manage the massive traffic surge such an attack brings until it has been mitigated, scaling beyond the traditional datacentre by automatically spilling the extra traffic into private or public clouds.
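One way to picture that elasticity is as a simple capacity plan; the per-instance throughput and on-premises limit below are assumed figures for illustration, not measurements from the testing mentioned above. The pool sizes itself from observed transactions per second, scales to zero when idle, and places any overflow beyond on-premises capacity in a cloud.

```python
import math

def plan_capacity(observed_tps: int,
                  per_instance_tps: int = 50_000,   # assumed per-instance SSL TPS
                  on_prem_limit: int = 8) -> dict:   # assumed on-premises ceiling
    """Return how many load-balancer instances to run, and where.

    Scales to zero when idle, up to the surge, and bursts any overflow
    beyond the on-premises limit into a private or public cloud.
    """
    needed = math.ceil(observed_tps / per_instance_tps) if observed_tps > 0 else 0
    on_prem = min(needed, on_prem_limit)
    return {"on_prem": on_prem, "cloud": needed - on_prem}
```

With these assumed numbers, a surge to 1,000,000 TPS yields eight on-premises instances plus twelve in the cloud; when traffic returns to normal, the cloud instances are simply released and stop costing anything.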
The over-provisioning of hardware load balancers has been accepted by many as a necessary cost of ensuring enough capacity to handle sudden increases in traffic and the pressure that encrypted traffic places on networks. The cost of this excess capacity is huge, and increasingly unacceptable.
Software-defined architecture for L4–L7 services has come of age, offering a flexible, analytics-driven load balancing solution that can replace legacy hardware at a fraction of the cost and bring true elasticity and automation to application services. There has been a shift in the balance of power in this sector, and that can only be a good thing for customers and end users.