Just how big is Amazon Web Services? The cloud computing platform from online retail giant Amazon.com is just one of a multitude of web service offerings to choose from. We went to look for some of its customers and explore what they used it for.
NASA’s Jet Propulsion Laboratory (JPL), a hub for the robotic exploration of space, has sent a robot to every planet in the solar system. One of its most famous interplanetary missions, the Mars Curiosity Rover, landed on Mars on 5 August 2012.
For this very public event, NASA said that AWS delivered the images and video of Curiosity’s landing from NASA/JPL’s Pasadena headquarters to the rest of the world.
Amazon Route 53 and Elastic Load Balancers (ELB) enabled NASA/JPL to balance the load across AWS regions.
NASA/JPL serviced hundreds of gigabits/second of traffic for hundreds of thousands of concurrent viewers.
NASA used Amazon Simple Workflow Service (Amazon SWF) to copy the images from Mars to Amazon S3. Metadata is stored in Amazon SimpleDB, and Amazon SWF triggers the provisioning of Amazon EC2 instances to process the images as each transmission from Curiosity is relayed to Earth.
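The shape of that pipeline can be sketched in a few lines. This is purely illustrative (the function, key layout and field names are hypothetical, not NASA’s actual code): given one image from a Curiosity downlink, it works out the Amazon S3 key to store the image under and the metadata record that would go into Amazon SimpleDB; in the real pipeline, Amazon SWF would schedule these steps and provision the EC2 workers.

```python
from datetime import datetime, timezone

def plan_image_ingest(sol, camera, image_id, received_at=None):
    """Decide where an incoming rover image goes (S3 key) and what
    metadata describes it (SimpleDB record). Hypothetical sketch."""
    received_at = received_at or datetime.now(timezone.utc)
    # Partition the S3 key by Martian day (sol) and camera -- a common layout choice.
    s3_key = f"curiosity/sol-{sol:05d}/{camera}/{image_id}.img"
    metadata = {
        "image_id": image_id,
        "sol": str(sol),
        "camera": camera,
        "received_utc": received_at.isoformat(),
        "s3_key": s3_key,
    }
    return s3_key, metadata

key, meta = plan_image_ingest(sol=50, camera="MASTCAM", image_id="0050ML0002")
```

In the described architecture, an SWF decider would hand each such record to an activity worker running on a freshly provisioned EC2 instance for processing.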
Reddit, the extremely popular social network-cum-news aggregator, uses AWS to help it scale to 4 billion page views per month. During President Obama’s live Q&A session in 2012, Reddit engineers provisioned additional volumes and doubled capacity for the event in "less than 10 minutes" using AWS.
Keith Mitchell, a programmer at Reddit, said: "Reddit was running on 10-15 physical servers. We had five technical staff and we were looking to grow as quickly as our user base was growing.
"We moved from physical servers on the east coast to a data centre in the cloud. Our application servers and databases run on Amazon EC2. We do our traffic analytics using Amazon EMR. Our search functions are run through Amazon cloud search."
Hailo, founded in London by some taxi drivers who wanted to let people use their smartphones to hail cabs, uses AWS to manage over 32,000 drivers and half a million customers. Using a cloud computing platform allows the company to use only the computing resources that it needs rather than deploying excess capacity up front.
Hailo’s servers run on the Ubuntu operating system. The company uses the Cassandra NoSQL database as its main data store, a MySQL relational database, and Amazon Relational Database Service (Amazon RDS) for reporting.
Hailo is migrating to the Amazon Virtual Private Cloud (Amazon VPC), because, as system administrator Stephen Tan explains, "there are a lot of boxes within the PCI compliance requirements that we need to tick, and so we are migrating to Amazon VPC, a service that has been validated as being compliant with PCI standards."
Guardian News & Media, the publisher of The Guardian and The Observer newspapers, uses Amazon Elastic Compute Cloud (Amazon EC2) for a whole range of projects.
The group has automated the launching of its servers in the cloud using shell scripts and Puppet, a tool for configuring new Amazon EC2 instances. The group has two Amazon Machine Images (AMIs) — 32- and 64-bit — and provides user data when creating each instance in order to determine which Puppet manifests to download and apply to create the right type of server.
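The user-data pattern described above can be sketched as follows. This is a hypothetical illustration of the technique, not the Guardian’s actual scripts (the AMI identifiers, key names and manifest paths are invented): simple key=value user data passed at launch time selects both the AMI architecture and the Puppet manifest the new instance should download and apply.

```python
def choose_launch_config(user_data):
    """Map EC2 user data (key=value lines) to an AMI and a Puppet
    manifest for the new instance. Hypothetical sketch."""
    params = dict(line.split("=", 1) for line in user_data.strip().splitlines())
    arch = params.get("arch", "64")           # which of the two AMIs to use
    role = params["role"]                     # e.g. "webserver", "database"
    ami = {"32": "ami-32bit-base", "64": "ami-64bit-base"}[arch]
    manifest = f"puppet/manifests/{role}.pp"  # manifest applied at first boot
    return ami, manifest

ami, manifest = choose_launch_config("arch=64\nrole=webserver")
```

On a real instance, a first-boot script would read this user data from the EC2 instance metadata service and then run Puppet against the selected manifest.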
Nokia’s telecommunications arm used its Xpress Internet Services platform to deliver mobile Internet services for emerging markets. The platform ran on 2,200 servers and collected 800 GB of log data daily. The volume of data became too large for its traditional relational database, and Nokia could no longer scale the database or generate reports.
Nokia moved to AWS and used Amazon Redshift as a data warehouse. The company claims that by using AWS, it is able to run queries twice as fast as its previous solution and can use business intelligence tools to mine and analyse big data at a 50% cost saving.
All case studies courtesy of Amazon Web Services.
Gartner analyst Lydia Leong famously shot down Amazon’s Service Level Agreement (SLA) in 2012.
A service level agreement (SLA) is a contract between a cloud provider and a customer that specifies the level of service the provider guarantees, and the compensation due if it falls short.
Leong said: "Unfortunately, cloud IaaS SLAs can readily be structured to make it unlikely that you’ll ever see a penny of money back — greatly reducing the provider’s financial risks in the event of an outage.
"Amazon Web Services (AWS) is the poster-child for cloud IaaS, but the AWS SLA also has the dubious status of "worst SLA of any major cloud IaaS provider". (It’s notable that, in several major outages, AWS did voluntary givebacks — for some outages, there were no applicable SLAs.)"
Amazon Web Services’ most recent (2013) SLA guarantees an uptime of 99.95%, promising penalties if the service falls below this mark in any month or quarter.
However, it’s not always as clear-cut as that. The latest version of Amazon’s SLA promises only "commercially reasonable efforts" to reach an uptime of 99.95%, a qualifier that leaves the interpretation of any downtime in Amazon’s favour.
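To put the 99.95% figure in context, a quick back-of-the-envelope calculation shows how much downtime that guarantee actually permits:

```python
def monthly_downtime_budget(uptime_pct, days_in_month=30):
    """Minutes of downtime per month allowed by a given uptime guarantee."""
    minutes_in_month = days_in_month * 24 * 60  # 43,200 for a 30-day month
    return minutes_in_month * (1 - uptime_pct / 100)

budget = monthly_downtime_budget(99.95)  # about 21.6 minutes in a 30-day month
```

In other words, a provider can be down for over 20 minutes a month and still be within a 99.95% SLA — before any "commercially reasonable efforts" wording is even considered.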
Furthermore, a customer’s workloads on AWS must be completely unavailable before any service credits are paid out.
Amazon also relies on conditions such as requiring more than one Availability Zone to be unavailable before a customer qualifies for repayment. Amazon’s best-practice guide tells customers that they should run a backup in a second Availability Zone. If everything goes down in one zone, customers won’t receive any compensation unless their workloads were also running in another zone and that zone went down too.
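That multi-AZ condition can be expressed as a simple rule. The sketch below is a simplification for illustration, not the actual SLA terms: a customer only qualifies for a credit when more than one of the Availability Zones they deploy into is down at the same time.

```python
def eligible_for_credit(zones_used, zones_down):
    """Simplified model of the multi-AZ credit condition: more than one
    of the customer's zones must be unavailable simultaneously."""
    affected = set(zones_used) & set(zones_down)
    return len(affected) > 1

# A single-AZ customer gets nothing even if their whole deployment is down:
eligible_for_credit(["us-east-1a"], ["us-east-1a"])  # False

# A multi-AZ customer qualifies only when both of their zones fail together:
eligible_for_credit(["us-east-1a", "us-east-1b"],
                    ["us-east-1a", "us-east-1b"])    # True
```

This is why following Amazon’s own best-practice advice — spreading workloads across zones — is also a precondition for ever seeing an SLA payout.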