The Amazon Web Services Snowball is rolling back to customers and it’s bringing data with it.

The Snowball, which launched last year, gives customers a way to send large volumes of data to Amazon data centres on a physical storage appliance. Now, the public cloud provider is making the same model available for delivering data back to customers.

As with the original service, the data will come back on the 50TB Snowball appliance, and the goal is the same: to get around the challenges posed by slow network connections.

The challenge Snowball is trying to overcome, or circumvent, is that uploading or downloading data from cloud services can be an extremely lengthy process: even over a 20-30 Mbps connection, a transfer measured in terabytes can take days, weeks or even months to complete.
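To illustrate the arithmetic behind that claim, here is a rough calculation (hypothetical figures, assuming an ideal, fully saturated connection with no protocol overhead):

```python
# Rough illustration of why multi-terabyte transfers over a typical
# WAN link take so long (hypothetical figures, ideal conditions assumed).

def transfer_days(terabytes: float, mbps: float) -> float:
    """Days needed to move `terabytes` of data at `mbps` megabits per second."""
    bits = terabytes * 1e12 * 8        # decimal terabytes to bits
    seconds = bits / (mbps * 1e6)      # megabits per second to bits per second
    return seconds / 86_400            # seconds to days

for size_tb in (1, 10, 50):
    print(f"{size_tb} TB at 25 Mbps: ~{transfer_days(size_tb, 25):.1f} days")
# 1 TB -> ~3.7 days, 10 TB -> ~37.0 days, 50 TB -> ~185.2 days
```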

So Amazon gets around this by shipping a storage appliance to the customer, for both importing and exporting data.

Customers can log in to the AWS Management Console, create an export request and specify the data to be exported; a single request can span one or more Amazon Simple Storage Service (S3) buckets.
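For those scripting the workflow rather than clicking through the console, export jobs can also be created through the Snowball job management API. The sketch below uses Python and boto3; the bucket ARNs, address ID, IAM role and KMS key are placeholder values, not real resources.

```python
# Illustrative sketch: creating a Snowball export job with boto3.
# All ARNs and IDs below are placeholders for a customer's own resources.
import boto3

snowball = boto3.client("snowball", region_name="us-east-1")

response = snowball.create_job(
    JobType="EXPORT",
    Resources={
        "S3Resources": [
            # A single request can span one or more S3 buckets.
            {"BucketArn": "arn:aws:s3:::example-archive-bucket"},
            {"BucketArn": "arn:aws:s3:::example-media-bucket"},
        ]
    },
    Description="Export archived data back on-premises",
    AddressId="ADID-EXAMPLE",   # shipping address previously registered with AWS
    RoleARN="arn:aws:iam::123456789012:role/snowball-export-role",
    KmsKeyARN="arn:aws:kms:us-east-1:123456789012:key/example-key-id",
    SnowballCapacityPreference="T50",   # the 50TB appliance
    ShippingOption="SECOND_DAY",        # the default two-day delivery
)

print("Created export job:", response["JobId"])
```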

The service will then determine how many appliances are needed and create export jobs accordingly. As with the existing import service, the data stored on the appliance is encrypted using keys that the user specifies, and the keys themselves are never stored on the device.
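The appliance count follows straightforwardly from the size of the export relative to the 50TB capacity of each Snowball; a back-of-the-envelope sketch:

```python
import math

SNOWBALL_CAPACITY_TB = 50  # capacity of the appliance described in this article

def appliances_needed(export_size_tb: float) -> int:
    """Number of 50TB Snowballs a given export would be split across."""
    return math.ceil(export_size_tb / SNOWBALL_CAPACITY_TB)

print(appliances_needed(120))  # a 120TB export needs 3 appliances
```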

Data archived in Amazon Glacier cannot be exported directly: users will first need to restore it to S3 using the Lifecycle Restore feature. Once the appliance is delivered, the data can be downloaded from it and the device returned to AWS.
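The API equivalent of that restore step is the S3 RestoreObject operation; a minimal sketch, with placeholder bucket and key names, might look like this:

```python
# Illustrative sketch: restoring a Glacier-archived object back into S3
# so it can be included in a Snowball export. Bucket and key are placeholders.
import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="example-archive-bucket",
    Key="backups/2015/full-backup.tar",
    RestoreRequest={
        "Days": 10,                                    # keep the restored copy available
        "GlacierJobParameters": {"Tier": "Standard"},  # standard retrieval tier
    },
)
```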

The default delivery option is two-day shipping and there will be a fee of $200 for each Snowball job, with users also paying $0.03 per GB to transfer data out of AWS. The Snowball is expected to be kept for no more than ten days; beyond that there is an additional charge of $15 a day, and users pay for shipping.
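Putting those figures together gives a rough cost model for a single export job (shipping charges excluded, as they vary by destination):

```python
# Back-of-the-envelope cost of a Snowball export job using the prices above.
# Shipping charges are excluded because they vary.

JOB_FEE = 200.00          # flat fee per Snowball job
PER_GB_OUT = 0.03         # data transfer out of AWS, per GB
LATE_FEE_PER_DAY = 15.00  # applies after the ten-day period
GRACE_DAYS = 10

def export_cost(gigabytes: float, days_kept: int) -> float:
    late_days = max(0, days_kept - GRACE_DAYS)
    return JOB_FEE + gigabytes * PER_GB_OUT + late_days * LATE_FEE_PER_DAY

# e.g. exporting a full 50TB (50,000 GB) appliance and returning it on day 12:
print(f"${export_cost(50_000, 12):,.2f}")  # $200 + $1,500 + $30 = $1,730.00
```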