In 1999, Bill Gates published a book called Business at the Speed of Thought: Succeeding in the Digital Economy.
Microsoft and Gates were just about at the peak of their respective powers and the first dotcom boom had yet to become a bubble, so the book garnered a lot of attention.
Everybody wanted to know how to speed up their operations, to gain a first-mover advantage and to respond in real time to opportunities and challenges. The consensus was clear: the old, slow-moving business world was dying, and from its ashes would rise the real-time business.
With the benefit of hindsight, it’s clear that the marketing had overtaken reality. In 1999, bandwidth was horribly crunched, companies were spending millions on their own servers, there was no cloud, and the mobile world was mostly of the laptop and dumbphone variety. Shopping cart abandonment rates were stratospherically high and clickstream analytics were primitive. A promotional sales day would bring most sites to a juddering halt.
IT and the web were very different then, but today we still recognise many of those earlier challenges. Even now, ‘real-time’ is genuinely rare: we have all become accustomed to looking out for event triggers, but we can’t always act on them right away.
This challenge goes all the way back to the fundamental building blocks of service and application development. Datacentre developers building a new service today will usually be operating in a highly virtualised environment, but they will still need to have a virtual machine provisioned, be granted database access permissions and have memory allocation approved and, behind the scenes, load balancing will have to be in place. All of this might take three to five days – hardly business at the speed of thought.
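To put a rough number on that turnaround, the sequential, ticket-driven pipeline above can be sketched as a simple model. The individual lead times here are illustrative assumptions, not measured figures; the point is only that serialised approval steps accumulate into the multi-day total the article describes.

```python
from dataclasses import dataclass

@dataclass
class ProvisioningStep:
    """One manual step in the provisioning pipeline: a ticket plus an approval wait."""
    name: str
    lead_time_days: float  # assumed typical wait, for illustration only

# The four steps described above, with hypothetical lead times
steps = [
    ProvisioningStep("provision virtual machine", 1.0),
    ProvisioningStep("grant database access permissions", 1.0),
    ProvisioningStep("approve memory allocation", 0.5),
    ProvisioningStep("configure load balancing", 1.5),
]

# Because each step waits on the previous one, the lead times simply add up
total_days = sum(step.lead_time_days for step in steps)
print(f"Total turnaround: {total_days} days")  # → Total turnaround: 4.0 days
```

With these assumed waits, the serial pipeline lands squarely in the three-to-five-day range; automating any one step only helps the total as much as that step’s share of the queue.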
Need it be this way? Everyone wants the chance to leap on opportunities (or to put out fires), but the world we live in today – despite the rise of cloud services and virtualisation – is only partly automated and still requires layers of approval processes. Security remains a critical concern, as do dependencies that can affect other services, and root-cause analysis is still difficult. Got an issue on your rented cloud space? Most cloud platforms today provide only very blunt green/red light information on outages.
Are hybrid and multi-cloud systems the answer?
Many organisations today try to get around these challenges by adopting a hybrid or multi-cloud approach. For example, they might use a public cloud to test a service or to get something up and running quickly, before bringing it back to a more controlled private cloud or on-premises deployment. In these environments, everyone wants to deploy, govern and scale applications the same way they do in their own datacentres. But it’s like having a sports car and a track to race it on… with no idea how to steer.
Hybrid cloud (where you marry a virtualisation system such as VMware with a public cloud such as AWS or Microsoft Azure) and multi-cloud (where you combine multiple public clouds) sound great: a best-of-both-worlds solution that goes together like “eggs and bacon fried”, in the words of the old Sinatra song. But really, that combination is more apples and oranges than eggs and bacon. It’s like trying to build a mansion by combining a penthouse in the city with a country ranch. Security, provisioning, scalability and governance are very different in these parallel worlds.
Fix the people and process issues that stop you from moving fast and you’re still not all the way there. You still need to work out how to make the clouds and the on-premises infrastructure run together as one slick machine. Diversity is good, but you need to be application-centric, not infrastructure-centric, and that means changing a dynamic and culture that has persisted for decades. You need to stop caring about the microprocessor, the middleware, the server brand and the rest of the detail, abstract it all away, and treat the application as the centre of everything.
Software-defined load balancing makes no assumptions about infrastructure; it can run on-premises or in any cloud. All it needs is raw compute: give it a cluster and it will run on it. Use the application as the foundation stone and pull in modular infrastructure and application services. Think of your infrastructure (on-premises and cloud) as a pool of compute resources. Your applications should pull resources from that pool without conditions forced upon them by opinionated infrastructure or appliances. It’s a paradigm change where you ask yourself ‘what is my real application need?’ and go from there. Business at the speed of thought? It’s possible if you put the app at the front of your thinking.
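The pool-of-compute idea above can be made concrete with a minimal sketch. Everything here is hypothetical (the node names, capacities and the first-fit placement rule are assumptions, not any vendor’s API); the point it illustrates is that the application states only what it needs, and the pool decides where that lands, whether on-premises or in a cloud.

```python
from dataclasses import dataclass

@dataclass
class ComputeNode:
    name: str
    location: str      # "on-prem", "aws", "azure" - the application never asks which
    free_cpus: int
    free_mem_gb: int

class ComputePool:
    """A uniform pool spanning on-premises and cloud capacity."""

    def __init__(self, nodes):
        self.nodes = nodes

    def allocate(self, cpus, mem_gb):
        # First-fit placement: the app declares its need; the pool picks the node.
        for node in self.nodes:
            if node.free_cpus >= cpus and node.free_mem_gb >= mem_gb:
                node.free_cpus -= cpus
                node.free_mem_gb -= mem_gb
                return node.name
        raise RuntimeError("no capacity left in pool")

# Hypothetical pool mixing an on-premises rack with rented cloud capacity
pool = ComputePool([
    ComputeNode("rack-07", "on-prem", free_cpus=4, free_mem_gb=16),
    ComputeNode("ec2-a1", "aws", free_cpus=16, free_mem_gb=64),
])

# The application asks for resources, not for a particular cloud or server brand
placement = pool.allocate(cpus=8, mem_gb=32)
print(placement)  # → ec2-a1 (the first node with enough headroom)
```

A smaller follow-up request, say `pool.allocate(cpus=2, mem_gb=8)`, would land on `rack-07`: the caller’s code is identical either way, which is exactly the infrastructure-agnostic posture the paragraph argues for.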