Guest Column | January 11, 2016

Rack, Stack, And Go Back: Why Hardware Appliance Strategies Don't Work Anymore

By Amit Pandey, CEO, Avi Networks

During a recent cab ride, a colleague and I chatted about the combination of external factors that made companies like Uber possible. To technologize the mundane cab ride, Uber needed ubiquitous cellular networks, mobile devices and apps, GPS and mapping services, and simplified credit card transactions to come together. A similar set of synergistic technology changes and market trends is driving fundamental shifts in enterprise computing.

Application Services In Data Centers

Enterprise infrastructure consists of bare metal servers or virtual machines in the compute layer, connected by an underlying network of switches and routers that manage access controls and traffic flows. Applications are then deployed with essential services such as storage, security, and load balancing provided by service-specific appliances. This operational model for the enterprise data center has remained the same for a long time. While virtualization drove significant gains in utilization of the physical compute layer, the rest of the data center services did not change much. Application services provided by dedicated hardware appliances became the norm upon which applications were deployed. Appliance vendors focused on extracting the best possible performance and throughput from the hardware with custom FPGAs (field programmable gate arrays) and ASICs (application-specific integrated circuits). Higher performance requirements demanded better hardware, and therefore higher costs.

From Network-Centric To Application-Centric Thinking

Internet usage on mobile devices exceeded that on traditional desktops and laptops by early 2014. The overall number of client access points for applications has grown dramatically, and enterprises are rolling out new applications and updates to existing ones at a much faster pace than ever before. With goading as well as support from the executive team, lines of business are demanding shorter deployment timelines for app rollouts, with high expectations for performance and uptime. This is forcing application-centric thinking on enterprise IT teams. Fortunately, they now have at their disposal efficient and cost-effective tools: better computing power, software-based application services, faster deployment models, and choices in infrastructure ownership. The need for specialized hardware to meet performance requirements is fading away, as commodity x86 servers prove they can satisfy the transactions-per-second requirements of almost all categories of application services in modern data centers. Deployment choices for applications range from bare metal servers and VMs to Linux containers, with companion orchestration choices ranging from Chef and Puppet to OpenStack and Mesos.
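
To make the shift from appliance to software concrete, here is a minimal sketch, assuming Docker and the Docker SDK for Python are available on a commodity x86 host: it starts a stock nginx container in the role of a software load balancer. The image tag, container name, and port mapping are illustrative assumptions rather than a reference to any particular vendor's product, and a real deployment would still supply a proxy configuration.

```python
# Minimal sketch: stand up a software load balancer as a container on a
# commodity x86 host instead of provisioning a hardware appliance.
# Assumes Docker is running and the Docker SDK for Python ("docker" package)
# is installed; the image tag, name, and port mapping are illustrative.
import docker

client = docker.from_env()

# Start a stock nginx container; in practice you would mount a proxy/upstream
# configuration so it load balances across your application servers.
lb = client.containers.run(
    "nginx:1.25",             # illustrative image tag
    name="software-lb-demo",  # hypothetical container name for this example
    detach=True,
    ports={"80/tcp": 8080},   # map container port 80 to host port 8080
)

print(f"Started {lb.name} ({lb.short_id}) on host port 8080")
```

The same step can be expressed as a recipe or template in the orchestration tools named above, so that standing up or replacing an application service becomes a repeatable, automated action rather than a hardware purchase.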

Data Center Inefficiencies — Rack, Stack, And Go Back

In this new ecosystem, traditional appliance models, whether implemented as proprietary hardware or as their virtual appliance cousins, have prevented enterprises from taking advantage of these dynamic computing capabilities. Such solutions require IT or networking teams to provision hardware appliances to account for new applications, perform the necessary networking or security changes, and then repeat the process when things change. I call it the “rack, stack, and go back” model of handling change through capacity guessing, CapEx-based purchasing, and manual provisioning. The underutilization, overhead, and cost inefficiencies of this approach are well understood, but enterprises have not been able to do much to address them, and that is just half of the story. On the other side of the coin, the internal customers and lines of business that requested new applications or updates to existing ones have to deal with delays of several weeks as IT change control processes are set in motion to service their requests. The net result is that the delivery of applications and related services is bogged down by infrastructure and manual effort.

Bringing The Goodness Of The Cloud In House

The public cloud has quickly become an exemplar of the agility and elasticity that enterprises want from their computing. It is no surprise that DevOps teams are addicted to the cloud and, judging by Amazon’s results for AWS, cloud adoption continues to accelerate. However, it is also true that most enterprises will continue to operate in a hybrid model across traditional data centers and private and public clouds for the foreseeable future. Enterprises will also want to deploy in a multicloud model where they are not tied to any single cloud service provider. With such deployments, architectural uniformity for application services becomes important, eliminating the need to make environment-specific choices every time an application is deployed. Virtualization and the public cloud were the initial catalysts for a more agile model of enterprise computing, but similar agility has eluded private data centers. There are, however, encouraging options that bring the simplicity and flexibility of the cloud into the data center for services ranging from application delivery and provisioning to automatic scaling and security. The common traits among these solutions are that they are software-based, deployed on commodity hardware, data-driven, and oriented toward automation and self-service. They can enable IT teams to take concrete steps to serve internal customers faster and avoid the unenviable tradeoffs among responsiveness to application change requests, capacity utilization, and reuse of existing investments. The hegemony of proprietary hardware for application services in data centers is slowly but surely ending. Let us look to a software-driven future that is elastic, flexible, and automated, not just in the cloud but also in our data centers.
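
As a closing illustration of what automation- and self-service-oriented application services can look like, here is a minimal, hypothetical sketch: a Python script that asks a software load-balancing controller’s REST API to create a virtual service with an autoscaling policy. The controller URL, endpoint path, payload fields, and token handling are all assumptions made for illustration, not any specific product’s API.

```python
# Hypothetical sketch of self-service provisioning: an application team asks a
# software controller's REST API for a virtual service (VIP, server pool, and
# autoscaling policy) instead of filing a ticket for an appliance change.
# The controller URL, endpoint, payload fields, and token are illustrative.
import os

import requests

CONTROLLER = os.environ.get("CONTROLLER_URL", "https://controller.example.com")
TOKEN = os.environ["CONTROLLER_TOKEN"]  # assumed bearer-token authentication

virtual_service = {
    "name": "orders-api-vs",                       # hypothetical service name
    "port": 443,
    "pool": ["10.0.1.11:8443", "10.0.1.12:8443"],  # example back-end servers
    "autoscale": {"min_servers": 2, "max_servers": 10, "cpu_threshold": 70},
}

resp = requests.post(
    f"{CONTROLLER}/api/virtual-services",          # hypothetical endpoint
    json=virtual_service,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
print("Virtual service created:", resp.json().get("uuid"))
```

Folded into a CI/CD pipeline or an orchestration template, a request like this turns a weeks-long change-control cycle into a provisioning step that completes in minutes.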