Serverless computing is still slightly shrouded in mystery, but it is certain to become one of the most valuable tools in IT’s pocket.

The potentially game-changing technology isn’t brand new, but like technologies before it, such as containers, there are a few myths and misconceptions about it.

To clear up this lack of insight into what to use serverless computing for, when not to use it, and how much it costs, CBR’s James Nunns teamed up with Ian Massingham, Worldwide Lead, AWS Technical Evangelism at Amazon Web Services, to answer the big questions about the technology.

What is serverless computing?

Serverless computing allows you to build and run applications and services without thinking about servers. Serverless applications don’t require you to provision, scale, and manage any servers and can be built for virtually any type of application or backend service. Everything required to run and scale a high availability application is handled by the cloud service provider.

Serverless applications provide four main benefits:

  • No server management – there is no need to provision or maintain any servers. There is no software or runtime to install, maintain, or administer.
  • Flexible scaling – applications can be scaled automatically or by adjusting their capacity through toggling the units of consumption (e.g. throughput, memory) rather than units of individual servers.
  • High availability – serverless applications have built-in availability and fault tolerance. There is no need to architect for these capabilities since the services running the application provide them by default.
  • No idle capacity – there is no need to pre- or over-provision capacity for things like compute and storage. For example, there is no charge when your code is not running.

Building serverless applications means that developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises.

How does it work?

Rather than consuming virtual machines or other low-level primitives to deploy and operate applications, serverless computing provides services that are delivered at a high level of abstraction.

Services provided vary as they have different abstractions and sets of ‘triggers’.  In the case of computing, the abstraction has a specific function and the trigger for the abstraction is usually an event.  In the case of databases, the abstraction may be, for example, a table and the trigger would be a query or search against that table – or alternately an event generated by doing something within the table.
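The event-to-function trigger model described above can be sketched in a few lines of code. Everything here is illustrative – the registry and event names are invented for the sketch, and real platforms such as AWS Lambda wire triggers to functions for you through service integrations:

```python
# Minimal sketch of the trigger model: events of a given type
# invoke the function registered for that type, and nothing runs
# until an event actually arrives.

# Registry mapping event types ("triggers") to handler functions.
handlers = {}

def on(event_type):
    """Register a function to run when an event of this type fires."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("table.query")
def handle_query(event):
    # In a real system this would run a query against the table.
    return f"querying table {event['table']}"

@on("file.uploaded")
def handle_upload(event):
    return f"processing new file {event['key']}"

def dispatch(event):
    """The platform's job: invoke the matching function only when an event fires."""
    return handlers[event["type"]](event)

print(dispatch({"type": "table.query", "table": "HighScores"}))
```

The key point the sketch captures is that no handler code executes between events; the platform holds only the mapping.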

Ian Massingham, Worldwide Lead, AWS Technical Evangelism at Amazon Web Services.

A mobile game, for example, might allow users to access a high-score table for top players worldwide across different platforms (iOS, Android, web browser). When this information is requested, the request goes from the application to an API endpoint. The API endpoint might trigger an AWS Lambda function, or another serverless function, which in turn reads the available data from inside your table (this could be Amazon DynamoDB, for example). It then returns the data to the user in a fixed format, normally as an object containing those top five high scores.
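A function for that high-score lookup might look roughly like the sketch below. The handler signature follows the AWS Lambda Python convention, but everything else is hypothetical: the in-memory `SCORES` list stands in for the DynamoDB table read that a real function would perform.

```python
# Sketch of a serverless function returning the top five high scores.
# SCORES stands in for a table read; a real AWS Lambda function would
# query Amazon DynamoDB here instead of using a local list.
SCORES = [
    {"player": "ana", "score": 9100},
    {"player": "bo", "score": 12400},
    {"player": "cy", "score": 7300},
    {"player": "di", "score": 15800},
    {"player": "ed", "score": 8800},
    {"player": "fi", "score": 11050},
]

def lambda_handler(event, context):
    """Entry point the platform invokes when the API endpoint is hit."""
    top_five = sorted(SCORES, key=lambda s: s["score"], reverse=True)[:5]
    # Return a fixed-format object, as described above.
    return {"statusCode": 200, "body": {"high_scores": top_five}}
```

Because the same function sits behind one API endpoint, the iOS, Android and web clients all reuse it unchanged.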

Once built, the application functionality can be reused across the mobile and web-based versions of the game.

This differs from a server-based setup: rather than having an Amazon EC2 instance or virtual machine sitting there waiting for requests, the environment is triggered by an event, and the logic required to respond is executed only in response to that event. The resources to run that logic are created only at that moment, which makes this a very resource-efficient way to build applications.

What use cases is it good for?

Serverless computing is good for a wide variety of use cases – anything that is event driven, including IoT, mobile applications, web-based applications and chat bots. Events may be generated by the actions of human beings (pressing buttons on an interface), by sensors, or by data flowing through the system.

One example of this is Thomson Reuters’ use of AWS Lambda to load and process streaming data without the need to provision or manage any servers. Thomson Reuters has built a solution that enables it to capture, analyse, and visualise analytics data generated by its offerings, providing insights to help product teams continuously improve the user experience. AWS Lambda runs code only when triggered by new data arriving via integrations with other AWS services such as Amazon Kinesis and Amazon S3.
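The shape of such a stream-processing function can be sketched as follows. The event structure loosely mirrors what a Kinesis trigger delivers to a Lambda function (base64-encoded record payloads under `Records`), but the record contents and the hand-built sample event are invented for illustration:

```python
import base64
import json

def lambda_handler(event, context):
    """Invoked only when new records arrive on the stream - no idle servers."""
    processed = []
    for record in event["Records"]:
        # Kinesis delivers each record's payload base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        processed.append(payload)
    return {"records_processed": len(processed)}

# A minimal hand-built event in the shape a Kinesis trigger delivers.
sample_event = {
    "Records": [
        {"kinesis": {"data": base64.b64encode(
            json.dumps({"product": "eikon", "clicks": 3}).encode()).decode()}},
        {"kinesis": {"data": base64.b64encode(
            json.dumps({"product": "elektron", "clicks": 5}).encode()).decode()}},
    ]
}
print(lambda_handler(sample_event, None))  # prints {'records_processed': 2}
```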

Being event-driven means the company is only charged for compute processing when the code is running, so it is very cost efficient.

What is it not good for?

It’s not necessarily a drop-in solution for legacy applications that have already been built. If you already have an application that’s been built as a monolith, or built with the operating system as the level of abstraction that the application runs on, this might exclude you from immediately running the application inside a serverless platform. This doesn’t mean you can’t fulfil those use cases with serverless architectures – it just means you may need to rebuild the applications to do so.

A good example is a web application where you may begin by running it as a large monolith inside an application server like Tomcat. If you decide that you want to break the application up into a composite set of functions, you can implement all of the new functions using a serverless model. Over time, the level of usage for the old version of the application shrinks while usage of the new serverless components ramps up. For customers wanting to do this, there is a transitional model they can follow to move traditional machine-based application architectures over to function-based architectures.
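One way to picture that transitional model is a routing layer that shifts paths from the monolith to new functions one at a time. The sketch below is entirely illustrative – in practice this routing would live in an API gateway or load balancer rather than application code:

```python
# Illustrative gradual-migration router: routes move one at a time
# from the old monolith to new serverless functions.

def monolith(path):
    """The existing application, still handling everything not yet migrated."""
    return f"monolith handled {path}"

def scores_function(path):
    """A newly carved-out serverless function for one route."""
    return f"serverless function handled {path}"

# Start with everything on the monolith, then carve out routes over time.
routes = {"/scores": scores_function}

def route(path):
    # Anything without a dedicated function falls back to the monolith.
    handler = routes.get(path, monolith)
    return handler(path)
```

As more entries are added to `routes`, traffic to the monolith shrinks until it can be retired.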

Is serverless computing expensive?

No, serverless computing is not expensive. There are no upfront costs associated with serverless computing and you only pay for what you use. It is therefore very cost effective, particularly for small use cases, and for companies whose application usage varies significantly over time.
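The pay-for-what-you-use model is easy to sketch numerically. The rates below are made up for illustration and are not actual AWS pricing; the point is only that cost scales with invocations and compute time actually consumed, with nothing charged while idle:

```python
# Illustrative pay-per-use cost model: billed per invocation and per
# unit of compute time actually used - zero cost when the code is idle.
PRICE_PER_REQUEST = 0.0000002       # hypothetical rate, not real pricing
PRICE_PER_GB_SECOND = 0.0000166667  # hypothetical rate, not real pricing

def monthly_cost(requests, avg_duration_s, memory_gb):
    """Cost for a month of usage under the illustrative rates above."""
    compute_gb_seconds = requests * avg_duration_s * memory_gb
    return requests * PRICE_PER_REQUEST + compute_gb_seconds * PRICE_PER_GB_SECOND

# A small, spiky workload: 1 million requests, 200 ms each, 128 MB memory.
cost = monthly_cost(1_000_000, 0.2, 0.125)
```

Under these assumed rates the whole month comes to well under a pound – and a workload that runs only occasionally pays nothing the rest of the time.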

It can also be very cost effective for customers who want to streamline workloads and operations, as it enables them to avoid costs such as capacity planning and deployment tooling. Many AWS customers are now experimenting with serverless as a way to increase agility but also save costs. Graze, the healthy snack company, has a number of uses for AWS Lambda, including real-time uploads of analytical data to Amazon Redshift, managing backups, and checking GitHub pull requests, but is looking to increase its usage two to three times in the coming months.

Why is there so much industry hype about serverless computing?

Serverless computing has had a hugely positive response from developers thanks to the outcomes they are experiencing with the technology. It gives them options and a broader set of possibilities when it comes to delivering applications in a resource-efficient way. It puts power in the hands of the developer.

We are now seeing large companies, such as Netflix, explore how serverless computing can improve their services and free up developers’ time. In Netflix’s case, it is planning to use AWS Lambda to build rule-based self-managing infrastructure and replace inefficient processes to reduce the rate of errors and save valuable time for its developers.

Previously, cloud developers had to work with machines, which was labour intensive and time consuming. Serverless allows developers to run tests and get to production within minutes. The developer is in direct control of when and how they choose to deploy, as well as of the application architecture through modelling frameworks. It also allows them to release their own products and experience the outcomes first-hand.