AWS customers were left sweating over the weekend after many reported unexpectedly high bills following what appears to have been a widespread invoicing issue at the cloud provider, which has millions of customers globally.
Many complained that charges had doubled or even tripled overnight in what appeared to be a billing bug. It is unclear how many customers were affected, but social media posts suggest the issue hit numerous AWS users in the UK.
“We just added one small EC2 instance yet my AWS bill is showing at 3x higher than last month”, tweeted Jason Cavett, CTO of webCemetries.
Another customer, commenting on a Hacker News thread about the issue, said: “I opened a service ticket. My EC2 instances have somehow been running for over 950 hours in one month.” [There are, on average, 730 hours in a month.]
They added: “Amazon invented more hours than actually exist in a month for billing purposes. Bezos is a genius! I should have thought of that!”
This is affecting Chipper's AWS bill. Probably causing minor heart attacks across the really big AWS users tonight. https://t.co/h8VcyPdMeb
— Chris Fidao (@fideloper) September 22, 2019
Many customers have now received an email from the company, which appears to attribute the issue to invoices being sent early. (Computer Business Review has contacted AWS for comment and will update this story if and when we receive a response.)
Joe Dixon, CTO of the UK’s Ubisend, an AI-powered chatbot specialist, also tweeted: “Glad it’s not just our account. Pretty much ruined my Friday night when I logged in to check!”
AWS Billing Issue: Company Starts Firing Out Emails
According to a thread about the AWS billing issue on the Hacker News forum, the email reads: “We are notifying you that we incorrectly issued an early invoice for your September AWS usage on September 18th. As your card was charged successfully for this invoice, we are currently processing a refund for the unexpected charge.”
It adds: “Full monthly usage will be invoiced through our normal billing process on or around the 3rd of October. We will send another notification when your correct September invoice is available. We apologize for any inconvenience.”
G4 Instances
Those AWS users less worried about their bill and more about how to handle computationally demanding workloads on EC2 instances will, meanwhile, be pleased to learn that Amazon has announced general availability today of G4 instances.
These are a GPU-based Elastic Compute Cloud (EC2) offering that it hopes will attract customers with demanding machine learning, graphics, or streaming application workloads off-premises and into the cloud.
G4 instances are underpinned by NVIDIA’s T4 GPUs and Intel Cascade Lake processors. The service is currently only available on virtual machines running on shared servers, but AWS says a bare metal instance will be available in the coming months.
Customers can rent up to 64 vCPUs, up to 4 NVIDIA T4 GPUs, and up to 256 GB of host memory. With the cost of machine learning inference often amounting to 90 percent of overall operational costs in the cloud, AWS is touting G4 instances as an “ideal solution for businesses or institutions looking for a more cost-effective platform for ML inference as well as a solution for machine learning inference applications that need direct access to GPU libraries such as CUDA, CuDNN, and TensorRT.”
(Inference is the process of using a trained machine learning model to make predictions, which typically involves processing many small compute jobs simultaneously. It can rapidly become energy-intensive and expensive.)
Initial costs range from $0.526 to $4.352 per hour. (Table below).
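For a rough sense of scale, a back-of-the-envelope calculation (using the 730-hour average month cited above, and assuming a single instance running on-demand for a full month with no discounts) looks something like this:

```python
# Back-of-the-envelope monthly cost at the quoted hourly rates.
# Assumes a 730-hour average month, a single instance running
# continuously, and no reserved or spot discounts.
HOURS_PER_MONTH = 730

for label, hourly_rate in [("low end of the range", 0.526),
                           ("high end of the range", 4.352)]:
    print(f"{label}: ${hourly_rate * HOURS_PER_MONTH:,.2f} per month")

# low end of the range: $383.98 per month
# high end of the range: $3,176.96 per month
```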
The public cloud heavyweight describes the service as “ideal for machine learning inferencing, computer vision, video processing, and real-time speech and natural language processing”, with games developer Electronic Arts among early users.
G4 instances will also support Amazon Elastic Inference in the coming weeks, a service that lets customers attach just the right amount of GPU acceleration for their machine learning workloads; AWS claims it can help cut inference costs by up to 75 percent.
Matt Garman, AWS’s VP of compute services, said: “AWS offers the most comprehensive portfolio to build, train, and deploy machine learning models powered by Amazon EC2’s broad selection of instance types optimized for different machine learning use cases.”
He added in a release shared today: “With new G4 instances, we’re making it more affordable to put machine learning in the hands of every developer.”
Customers with machine learning workloads can launch G4 instances using Amazon SageMaker or AWS Deep Learning AMIs, which include machine learning frameworks such as TensorFlow, TensorRT, MXNet, PyTorch, Caffe2, CNTK, and Chainer.
Customers with graphics and streaming applications can launch G4 instances using Windows, Linux, or AWS Marketplace AMIs from NVIDIA.
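For readers who want to see what that looks like in practice, the sketch below shows one way to launch a single G4 instance programmatically using boto3, the AWS SDK for Python. The AMI ID and key pair name are placeholders rather than values from the announcement, and the g4dn.xlarge size shown is the smallest in the family.

```python
# Illustrative sketch: launching one G4 (g4dn.xlarge) instance with boto3.
# The AMI ID and key pair name below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",  # e.g. an AWS Deep Learning AMI (placeholder ID)
    InstanceType="g4dn.xlarge",       # smallest G4 size: 4 vCPUs, 1 NVIDIA T4 GPU
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",            # placeholder key pair name
)

print(response["Instances"][0]["InstanceId"])
```

Terminating the instance once the workload is done (via ec2.terminate_instances) is, of course, what keeps the hourly billing above from running away.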