The biggest names in cloud computing have agreed a new set of guidelines which they hope will reassure customers that data stored in the cloud is safe, secure and protected from government interception. The launch of the Trusted Cloud Principles follows reports that Amazon plans to take a more proactive stance on policing the content it allows on its AWS platform, which the cloud market leader denied. But privacy campaigners have warned that increasing levels of censorship from AWS and its rivals could lead to innocent businesses getting caught in the crossfire.
Nine tech companies including AWS, Amazon’s cloud division, Microsoft and Google have signed up to the Trusted Cloud Principles. The signatories are calling on governments to work with them to build transparent processes around how authorities can request data, and what happens if a request is denied. They are also seeking commitments from ministers to facilitate cross-border data flows in a secure manner.
But while the cloud providers are looking to the public sector to help build a trusted cloud environment for businesses, their own behaviour is also being put in the spotlight. A recent report from Reuters stated that Amazon was expanding the AWS trust and safety team to help determine what types of content violate its terms of service and should be removed. AWS plans to “hire a small group of people to develop expertise and work with outside researchers to monitor for future threats,” the report said, citing two sources familiar with the plans.
AWS hit back with a statement denying the report and saying: “AWS Trust & Safety has no plans to change its policies or processes, and the team has always existed.” But the company has shown in the past 12 months it is not afraid to take action to remove content it deems unsuitable.
In January, AWS suspended the social network Parler, which had been hosted on its servers, stating it violated the company’s terms of service. The app was allegedly used to coordinate the attacks on the Capitol Building in Washington, and featured a host of death threats targeted at well-known figures and posts inciting violence. And in August, AWS took down a propaganda site for terror group ISIS having discovered it had been hosting the page since April.
While removing terror-related material is a legal requirement for AWS, infrastructure as a service (IaaS) providers are under growing scrutiny for the organisations they work with and the values they promote, says security specialist Bruce Schneier, a fellow at the Harvard Kennedy School, the university's public policy school. "Companies like AWS are increasingly being judged on who they provide services for," he says.
This may force them to take a more aggressive approach to content moderation. Last month, internet hosting firm GoDaddy decided to take down a tip site that allowed users to blow the whistle on anyone breaking Texas’s controversial new abortion laws. GoDaddy said the site violated its terms and conditions, but Schneier says IaaS providers are increasingly required to make value judgements regarding their clients’ content. “Ideally you want these [infrastructure] companies to provide services to as broad a range of clients as possible, but the politics of those decisions is becoming increasingly important,” he says. “This is not about individual pieces of content as much as it’s about who [AWS and others] are doing business with and who they will do business with in future.”
AWS enjoys a dominant position when it comes to the IaaS market. “Though it hasn’t quite cornered the market, AWS is a very powerful intermediary [in internet infrastructure],” says Corynne McSherry, legal director of the Electronic Frontier Foundation (EFF), a non-profit organisation that campaigns for digital privacy and free speech online. “It can’t quite decide whether or not you can be on the internet, but without it your options are narrowed.”
Indeed, AWS and Microsoft’s cloud platform, Azure, together accounted for more than 50% of enterprise IaaS spending in the first quarter of 2021, and no other single company holds more than 10% of the market.
"Usually when we think about content moderation decisions, we think of it in terms of the social media companies, and generally they don't do a very good job at it, because it's an almost impossible task," McSherry continues. "To have any kind of infrastructure provider getting involved in this too would be very worrisome for a number of reasons, not least because of the failures we have seen in the social media space. There's no doubt they will make mistakes, and at the infrastructure level those mistakes can have serious consequences."
McSherry agrees that AWS and its rivals are becoming more conscious of the content they host for political reasons: "Big Tech companies are under a lot of pressure right now and regularly get called to Washington to answer questions on their moderation policies," she says. "The problem is a lot of users, civil rights groups and politicians are not able to differentiate between the layers of the stack. So while the problem [of content moderation] may predominantly be in the social media space, the [infrastructure providers] feel under pressure to be seen to be doing the 'right thing', but without the appropriate level of nuance and thought about the consequences."
Is cloud censorship an issue for 'normal' businesses?
Most businesses deploying services in the cloud do not face the same challenges as social networks when it comes to dealing with inappropriate content. However, they do need to be aware of the risks they face if their cloud provider takes a dislike to their organisation's work. "Executive leaders should ensure that suspension and contract nonrenewal risks are properly assessed and that cloud contracts are negotiated to minimise such risks," wrote Gartner analyst Lydia Leong in a research note to clients earlier this year. "[Gartner] believes that many C-level executives may not have been previously aware of these risks," she added.
Greater scrutiny of their client base may lead IaaS providers to deny service to certain businesses on the grounds that they pose too great a risk. Leong uses the example of companies that provide safety- or life-critical applications. "Some providers may be concerned about potential civil or criminal liability if they were to host certain businesses, activities or content on their platform," she writes. "It is possible that providers could alter their stance on hosting content or applications that represent too much liability risk."
Schneier says greater policing of the cloud could lead to businesses being targeted because of the behaviour of their customers. "I would be concerned if I was a company providing services to others on a regular basis," he says. "If you're providing credit card, or tax, or real estate services, you might be so small it doesn't occur to you to check the list [of what's permitted]."
EFF's McSherry believes there will be collateral damage if more moderation of the cloud becomes the norm. "When you engage in content moderation, you are inevitably oversensitive – you take down more content than is 'harmful'," she says. "We've seen on social media that the take-down decisions never stop, and you may or may not have a mechanism for appealing the decision in a timely way and getting it reversed."
She adds: "There is no reason to think that the situation would be any different at an infrastructure level. Moderation is using a sledgehammer when you need a scalpel, and there will be all sorts of collateral damage to legitimate businesses."