Over the past week Amazon Web Services held its re:Invent conference in Las Vegas where it made a number of key announcements regarding updates to its portfolio of products and services.

Along with seven new compute instances, an addition to Aurora, new artificial intelligence offerings and a serverless query service, it also revealed the next generation of its Snowball product, a new open source project and security products aimed at mitigating DDoS attacks.

CBR runs through the major releases from the event. 

 

Amazon Lightsail – access to virtual private servers

The Amazon Lightsail service is a way for developers to gain access to virtual private servers quickly and cheaply.

The company aims to remove the hassle of provisioning storage, security groups, or identity and access management and make it easy to get started on AWS.

The company said on its blog: “With a couple of clicks you can choose a configuration from a menu and launch a virtual machine preconfigured with SSD-based storage, DNS management, and a static IP address.

“You can launch your favorite operating system (Amazon Linux AMI or Ubuntu), developer stack (LAMP, LEMP, MEAN, or Node.js), or application (Drupal, Joomla, Redmine, GitLab, and many others), with flat-rate pricing plans that start at $5 per month including a generous allowance for data transfer.”
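Behind the menu, a launch maps onto a single API call. As a hedged sketch with boto3 (the blueprint and bundle IDs below are illustrative assumptions, not confirmed values), it might look like this:

```python
# Hypothetical sketch of launching a Lightsail instance with boto3.
# The blueprint and bundle IDs are illustrative assumptions.
params = {
    "instanceNames": ["my-blog-1"],
    "availabilityZone": "us-east-1a",
    "blueprintId": "lamp_5_6",   # a LAMP developer stack (ID assumed)
    "bundleId": "nano_1_0",      # the $5/month flat-rate plan (ID assumed)
}

# With AWS credentials configured, the actual call would be:
# import boto3
# boto3.client("lightsail").create_instances(**params)

print(params["instanceNames"][0])
```

The point of the service is that this one request replaces separately provisioning storage, security groups and IAM.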


 AWS Greengrass

AWS wants to make Internet of Things devices smarter by giving them compute capabilities.

Users will be able to run IoT applications across both its cloud and local devices thanks to the application of AWS Lambda and AWS IoT.

The ability to build these in to IoT devices is said to make them more intelligent by giving them the ability to do things like respond to local events while operating with intermittent connections.

Currently in preview, the company said that devices running Linux on x86 or ARM architectures will be able to host Greengrass Core, which enables local execution of Lambda code, security, caching and messaging.

Once this is running, a device acts as a hub that communicates with other devices that have the AWS IoT Device SDK installed.
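Because Greengrass Core runs ordinary Lambda functions locally, a device-side handler is just regular Lambda code. A minimal sketch, assuming an invented event shape and threshold purely for illustration:

```python
# Minimal sketch of a Lambda-style handler that Greengrass Core could
# execute locally on a device. The event shape (a temperature reading)
# and the threshold are hypothetical, made up for illustration.
def handler(event, context=None):
    reading = event.get("temperature")
    if reading is not None and reading > 30:
        # Respond to the local event immediately, with no round trip
        # to the cloud - the point of local execution on Greengrass.
        return {"action": "fan_on"}
    return {"action": "noop"}

# Example local invocation:
print(handler({"temperature": 35}))  # {'action': 'fan_on'}
```

The same function could keep responding to local events while the device's cloud connection is intermittent, which is the behaviour the service is pitched on.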


 

AWS Snowball Edge and Snowmobile

The first Snowball appeared in 2015 and offered a 50 TB data transfer appliance. A year later and there is Snowball Edge.

The appliance has more connectivity, storage, horizontal scalability via clustering, Lambda powered local processing and new storage endpoints that can be accessed from existing S3 and NFS clients.

That's no normal lorry, that's basically a big portable hard drive.

Offering 100 TB of storage, the appliance gives customers a range of network options: “10GBase-T, 10 or 25 Gb SFP28, or 40 Gb QSFP+. Your IoT devices can upload data using 3G cellular or Wi-Fi. If that’s not enough, there’s also a PCIe expansion port,” the company said.

For some companies, those dealing with exabytes of data, 100 TB of storage isn’t enough so AWS introduced Snowmobile.

This is a lorry carrying a secure container that is capable of moving 100 Petabytes of data.

The company said: “Physically, Snowmobile is a ruggedized, tamper-resistant shipping container 45 feet long, 9.6 feet high, and 8 feet wide. It is water-proof, climate-controlled, and can be parked in a covered or uncovered area adjacent to your existing data center. Each Snowmobile consumes about 350 kW of AC power; if you don’t have sufficient capacity on site we can arrange for a generator.”


 

New instances

In total, seven new instance types were announced. The first, F1, is a compute instance with field-programmable gate arrays (FPGAs) that customers can program themselves.

Amazon EC2 Elastic GPUs are said to allow customers to attach low-cost, professional-grade graphics acceleration to EC2 instances.

The company revealed two larger sizes of T2 Burstable Performance Instances: “t2.xlarge offers 16 GiB of memory and 4 vCPU, and the new t2.2xlarge offers 32 GiB of memory and 8 vCPU. Customers with existing T2 workloads can now scale up to the larger T2 sizes if desired.”

R4 instances are designed for high performance databases, distributed memory caches, in-memory analytics, and genome assembly and analysis.

“They feature a larger L3 cache that is twice the size of the previous generation (R3), a new 16xlarge size that offers twice the memory as the previous generation with 488 GiB of fast, DDR4 memory, and 64 vCPUs (two times as many as the largest R3) – all for 20 percent less per GiB of RAM than the previous generation R3 instances.”

C5 instances will be coming in early 2017 as will I3 instances.


 

Artificial Intelligence

Amazon Rekognition is designed to allow developers to add image analysis to applications using deep learning-based image and face recognition.

Amazon Polly will turn text into lifelike speech and let apps talk in 47 different voices across 24 languages.
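With Polly, text-to-speech is a single API request. A hedged sketch of the parameters with boto3 (treat the exact voice ID as an assumption):

```python
# Sketch of a Polly text-to-speech request. "Joanna" is one of the
# English voices; treat the exact voice ID as an assumption.
request = {
    "Text": "Hello from Amazon Polly.",
    "OutputFormat": "mp3",
    "VoiceId": "Joanna",
}

# With AWS credentials configured, the audio would come back from:
# import boto3
# response = boto3.client("polly").synthesize_speech(**request)
# audio_bytes = response["AudioStream"].read()

print(request["OutputFormat"])
```

The response is a streamed audio payload an app can play or save directly.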

CEO Andy Jassy made 14 announcements on day one.

Amazon Lex, the technology that powers Amazon Alexa, has been released so that developers will be able to build conversational user experiences across web, mobile, and connected device apps.

Raju Gulabani, VP, Databases, Analytics, and AI, AWS, said: “The combination of better algorithms and broad access to massive amounts of data and cost-effective computing power provided by the cloud is making AI a reality for application developers. AWS is home to some of the most innovative and creative AI applications in use today.”

Lex is described as a service that helps to build conversational interfaces using voice and text. It is built on the same automatic speech recognition and natural language understanding technology used in Amazon Alexa.


AWS Glue

AWS Glue is a fully managed data catalogue and extract, transform and load (ETL) service capable of connecting to databases in the cloud and on premises.

Glue, along with several of AWS’s other products, is designed to build out the company’s data-architecture offering, according to Amazon.com CTO Werner Vogels.

Vogels said: “It reads the metadata. Glue then allows transforming data and prepare it into a format that your analytics engine needs. And it allows scheduling and running jobs. If data changes, it will make adjustments.”

Glue is essentially a tool for automatically running jobs that clean data from multiple sources; that data is then ready to be analysed in other tools.


 

AWS Step Functions

Aimed at the growing use of microservices and the challenge of coordinating them, Step Functions helps developers arrange the components with a visual workflow tool that should make it easier to track each step and handle retries when error conditions occur.

In total 24 announcements would be made over day one and two.

The service includes an editor that maps out the desired relationships among Lambda functions.

Vogels said: “This is really going to change the way you build distributed applications.”
The company said that it wants to make it easier for developers to build complex, distributed applications by connecting multiple web and microservices.
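Under the visual editor, a workflow is declared in the JSON-based Amazon States Language. A sketch of a two-step definition (the Lambda ARNs and state names are hypothetical placeholders):

```python
import json

# Sketch of an Amazon States Language definition chaining two Lambda
# functions. The function ARNs and state names are hypothetical
# placeholders for illustration.
definition = {
    "Comment": "Two-step workflow sketch",
    "StartAt": "ProcessOrder",
    "States": {
        "ProcessOrder": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:process",
            "Next": "SendReceipt",
        },
        "SendReceipt": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:111122223333:function:receipt",
            "End": True,
        },
    },
}

# The console renders each state as a box, with arrows following the
# "Next" transitions - the visual workflow described above.
print(json.dumps(definition, indent=2)[:30])
```

Each state names the Lambda function it invokes and the state that follows, which is what lets the service track and retry individual steps.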


 

AWS Batch

Providing batch processing services that integrate with EC2, Spot Instances and Lambda, Batch was described by Vogels as a fully managed batch processing service that will dynamically handle batch processing at any scale.

The tool will allow users to run apps and container images on EC2 instances.

The company said that this tool comes in response to customers stringing together EC2 instances, containers, notifications, and CloudWatch monitoring, so with Batch the process should become easier.
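Once a job queue and job definition exist, submitting work is a short API call. A hedged sketch with boto3 (the queue and job-definition names are invented for illustration):

```python
# Sketch of submitting a job to AWS Batch. The job, queue and
# job-definition names are hypothetical; the queue and definition
# would be created beforehand.
job = {
    "jobName": "nightly-render-42",
    "jobQueue": "default-queue",
    "jobDefinition": "render-frames:1",
}

# With AWS credentials configured, the submission would be:
# import boto3
# boto3.client("batch").submit_job(**job)

print(job["jobName"])
```

This is the step that replaces the hand-rolled pipelines of EC2 instances, containers, notifications and CloudWatch monitoring the company describes.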

AWS won’t be charging any additional fee for the tool on top of the costs of the other resources being used.

Currently available in preview for some customers in the US, the company said it will both extend the tool with deeper Lambda integration and open it up to additional regions in the future.


 

AWS Shield

Coming in response to a growing concern among customers regarding distributed denial of service (DDoS) attacks, AWS Shield comes in two levels, Standard and Advanced.

DDoS attacks have been in the news a lot this year, most recently the Dyn attack.

The Standard version will come with basic integrated DDoS protection, which will be made available to all AWS customers by default.

Advanced, the premium version, is aimed at customers who consider themselves likely targets of larger, more sophisticated attacks.

Vogels said: “I think this will really help you protect yourselves even against the largest and most sophisticated attacks that we’ve seen out there.”


 

AWS Blox

Blox is a collection of open source projects that the company says will help developers to build schedulers and to integrate third-party schedulers on top of ECS, which can be used to manage and scale clusters at the same time.

Netflix is said to already be using it widely, and the project is a significant step towards AWS making more of a name for itself in the open source community with projects aimed at container management and orchestration.

The company wrote on its blog: “This new open source project includes a service that consumes the event stream, uses it to track the state of the cluster, and makes the state accessible via a set of REST APIs.

“The package also includes a daemon scheduler that runs one copy of a task on each container instance in a cluster. This one-per-container model supports workloads that process logs and collect metrics.”