Serverless computing is a relatively new computational model that is quickly gaining adoption in public cloud environments. However, as serverless technology is new and fundamentally disruptive, there is considerable ambiguity about what it is and what it can do, even amongst the technology c-suite, writes Arun Chandrasekaran, Distinguished VP Analyst, Gartner.
In this short guide we aim to define “serverless”, highlight its value to organisations, look at its limitations, and demonstrate some best practice use cases that are already generating business and technological dividends.
What is Serverless Computing?
Serverless computing is a way to build and/or run applications and services without having to manage the infrastructure they need. The most prominent manifestation of serverless computing is serverless functions, also known as Function as a Service (FaaS). With FaaS, application code is packaged into units called “functions”, with the execution of these functions delivered as a managed service by a third-party vendor.
From a user perspective, serverless functions allow developers to build and run applications and services without thinking about servers, letting them focus instead on design and configuration. Examples of serverless function platforms (fPaaS) include Amazon Web Services Lambda, Azure Functions, Google Cloud Functions and IBM Cloud Functions.
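As a minimal illustration, a serverless function is typically just a handler the platform invokes per event. The sketch below follows the style of AWS Lambda's Python runtime; the event shape and field names are illustrative assumptions, not any platform's required schema:

```python
import json

def handler(event, context):
    """Entry point the platform invokes once per triggering event.

    'event' carries the trigger payload and 'context' exposes runtime
    metadata; provisioning, scaling and teardown of the underlying
    compute are entirely the platform's concern, not the developer's.
    """
    name = event.get("name", "world")  # hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The developer ships only this function and its configuration; there is no server process, port binding or process supervision in the code at all.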
In the past few years, serverless computing as a term has evolved to include much more than just application code being executed in the cloud; it also refers to an operational model where all provisioning, scaling, monitoring and configuration of the compute infrastructure are delegated to a vendor platform too.
Hence, FaaS – and its offshoot fPaaS – is no longer the only form of serverless services, but in this guide, it will be our focus because of its role as the bedrock of serverless computing.
What are the Benefits of Serverless?
> Operational Simplicity
Most serverless computing leverages containers and virtual machines (VMs) in the underlying architecture. However, by removing the need for infrastructure setup and management, serverless computing architectures have lower operational overheads when compared to those in which developers target the VMs or containers directly.
> “Built-in” Scalability
In serverless functions, infrastructure scaling is automated and elastic, which makes it very appealing for unpredictable, spiky workloads. Hence, application scalability is most often limited by poor application design rather than any inherent limitation in the underlying infrastructure.
> Pay-per-use Pricing
In public cloud-based serverless environments, you only pay for infrastructure resources when the application code is running, following the standard “pay as you go” model of the cloud. Cost is therefore a direct function of application design and code efficiency, which directly rewards well-designed, efficient applications.
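The pay-per-use model lends itself to back-of-the-envelope arithmetic: billing is typically execution duration multiplied by allocated memory (GB-seconds), plus a flat per-request fee. The rates in this sketch are illustrative assumptions, not any provider's current pricing:

```python
def monthly_cost(invocations, avg_duration_s, memory_gb,
                 price_per_gb_s=0.0000167, price_per_request=0.0000002):
    """Rough monthly serverless compute cost under assumed rates.

    Compute is billed as GB-seconds (duration * allocated memory),
    plus a small flat fee per invocation; idle time costs nothing.
    """
    gb_seconds = invocations * avg_duration_s * memory_gb
    return gb_seconds * price_per_gb_s + invocations * price_per_request

# One million short, small-memory invocations per month stays very cheap.
estimate = monthly_cost(1_000_000, avg_duration_s=0.2, memory_gb=0.128)
```

Because duration and memory are the billed quantities, trimming either one translates directly into a lower bill, which is why code efficiency is competitively rewarded.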
> Developer Productivity and Business Agility
Serverless architectures abstract away most infrastructure concerns, allowing developers to focus on what they should be doing: writing code and designing applications. This also enables business agility, as the time to market for new digital projects can be significantly shortened while simultaneously allowing for rapid experimentation.
What are the Limitations of Serverless Computing?
Execution Constraints
While serverless architecture delivers several benefits, it imposes some unexpected limits on the execution environment. Some are inherent, like cold starts (initialisation latency), but others are artificial, such as limits on function runtime, imposed to ensure developers use the platform as intended. The lack of server-side state support (variables that persist in memory from one function call to the next) may make fPaaS less suitable for some workloads than containers or VMs.
Refactoring and Code Sprawl
With fPaaS, application logic must be packaged as discrete functions, which are executed when triggered by events. This means existing applications must be significantly refactored to fit this packaging model, or new applications need to be written to fit these patterns. This is well-suited to design patterns such as microservices architecture, but could just as easily lead to code sprawl, in which an app becomes a large set of hard-to-manage functions.
Skills Gap
Serverless programming, such as with fPaaS, requires a major shift in application architecture skills and best practices. These skills do not exist in abundance in the market, so teams typically pick up the knowledge as part of adopting fPaaS. Early attempts at building serverless applications usually see developers making many mistakes as they gain experience with the new model. Although this may eventually lead to success, it may disillusion some early in the adoption phase and delay delivery of projects.
Vendor Lock-in
The leading fPaaS implementations are proprietary to a specific cloud provider, where organisations take advantage of native integration and tooling. While this is beneficial for shortening time to market and keeping things simple, if the application has to move from one cloud platform to another, then it will have to be significantly reengineered.
Low Degree of Control
The managed service model and runtime virtualisation of serverless technologies like FaaS bestow huge benefits, but at the cost of little to no control over the service. The environment is a “black box” that must be used as-is.
What are the Primary Use Cases for Serverless Computing?
The most natural use case is to harness FaaS to execute operational code that manages the cloud environment.
Many cloud management objectives can be achieved by creating a serverless function that is triggered when the platform signals an infrastructure event. For example, a function could be triggered whenever an object is placed into a bucket in the object store. It would be notified of the object’s handle, and then could investigate the object to act on it, such as looking for uploaded photos to then place into an image catalogue, along with a thumbnail.
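The object-store example above can be sketched as a handler. The event layout here follows the style of Amazon S3's event notification format (a `Records` list with nested bucket and object fields); the thumbnailing step is deliberately left as a comment since it would depend on the image library and catalogue in use:

```python
def handler(event, context):
    """Triggered when objects land in a bucket; catalogue any images."""
    catalogued = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]   # the object's handle
        if key.lower().endswith((".jpg", ".jpeg", ".png")):
            # In a real function: fetch the object, render a thumbnail,
            # and write both entries into the image catalogue.
            catalogued.append({"bucket": bucket, "image": key})
    return catalogued
```

Non-image uploads simply fall through, so the same trigger can safely fire for every object placed in the bucket.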
Microservices architecture is a variant of service-oriented architecture (SOA) that emphasises small, well-defined, independent services that are combined to create an application or suite of applications. fPaaS is a good first choice to investigate for a microservice; however, not all microservices are good fits for fPaaS. Those that must persist data between calls, or are called very frequently, may be better implemented in a container or VM, which is why many microservices solutions combine services backed by a mix of VMs, containers and serverless functions.
Another strong use case is Internet of Things (IoT) and edge applications, in which data is transmitted from the edge into the central cloud. In the simplest case, this is telemetry data from sensors, either sampled at regular intervals or triggered by a physical event. Each payload is very small, and the interarrival time distribution of the payloads can be highly variable, unknown and “bursty”. Serverless functions are a natural ingress point for this data: each function captures the incoming data and processes it in some manner, usually aggregating it, storing it, triggering a new event, or generating a control signal directly back toward the edge.
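A telemetry ingress function along these lines might look like the sketch below. The batched event shape, payload fields and sensor names are assumptions for illustration; a real deployment would match whatever the upstream event source delivers:

```python
import json
import statistics

def handler(event, context):
    """Aggregate a burst of small sensor payloads into one summary record."""
    readings = [json.loads(r["body"]) for r in event["records"]]
    temps = [r["temperature_c"] for r in readings]
    summary = {
        "sensor_count": len({r["sensor_id"] for r in readings}),
        "mean_temperature_c": statistics.mean(temps),
        "max_temperature_c": max(temps),
    }
    # In production: write the summary to a time-series store, or emit
    # a new event / control signal back toward the edge.
    return summary
```

Because instances scale out automatically with the incoming event rate, the same code handles both a trickle of readings and a sudden burst without any capacity planning.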
CTOs should focus on “greenfield” use cases for serverless functions – start with event-driven applications that are inherently “bursty” to maximise the benefits of the pricing model and explore the rapid scalability of the service.
When planning a cloud computing strategy around serverless elements, you should encourage a product ownership mindset and a DevOps approach, since there is no longer a clear line between cloud operations engineers and developers.
CTOs should also already be consuming the native services from their cloud provider before adopting its serverless computing solution, to fully benefit from the system. However, regardless of platform experience, you must first run a thorough proof of concept (POC) to validate assumptions about application scalability, performance and cost of ownership.
Finally, it is best to prioritise serverless security early in the adoption cycle through a combination of process, tooling and culture changes, starting in development and extending into production, to minimise risks further down the line.