
Google Crypto Expert Exposes Trio of AWS Encryption Bugs

Amazon has updated its S3 encryption client after a cryptographic expert at Google identified three security vulnerabilities in how it secures content in S3 buckets. Two of the three were bugs in its software development kit (SDK), earning the researcher a brace of rare CVEs against the hyperscaler: CVE-2020-8912 and CVE-2020-8911.

Among Dr Sophie Schmieg’s trio of finds was one dubbed by security colleague Thai Duong as “one of the coolest crypto exploits in recent memory”. 

AWS acknowledged the vulnerabilities more coolly in an August 7 developer blog as “interesting”. The cloud provider played down the severity of the bugs, saying they “do not impact S3 server-side encryption” and require write access to the target S3 bucket. Schmieg meanwhile said they result in potential “loss of confidentiality and message forgery”, and expose users to “insider risks/privilege escalation risks”.

Two of the bugs have now been fixed in the latest version of the AWS encryption SDK, the cloud giant’s client-side encryption library. The third, apparently the only one not allocated a CVE, was meanwhile patched server-side by AWS on August 5.


It allowed an attacker with read access to an encrypted S3 bucket to recover the plaintext without accessing the encryption key. As Dr Schmieg noted this week: “The S3 crypto library tries to store an unencrypted hash of the plaintext alongside the ciphertext as a metadata field. This hash can be used to brute force the plaintext in an offline attack, if the hash is readable to the attacker.”*

AWS said the issue “owes its history to the S3 ‘ETag,’ which is a content fingerprint used by HTTP servers and caches to determine if some content has changed.”

The company added: “Maintaining a hash of the plaintext allowed synchronization tools to confirm that the content had not changed as it was encrypted. [We’ve removed this] capability in the updated S3 Encryption Client, [and] also removed the custom hashes generated by older versions of the S3 Encryption Client from S3 object read responses.”
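A minimal sketch of the offline attack Schmieg describes, assuming the attacker can read the exposed hash metadata and enumerate candidate plaintexts (the candidate list and secret below are illustrative, not taken from the AWS SDK):

```python
# Offline brute force against an unencrypted plaintext hash: if the hash
# is readable metadata, no encryption key is ever needed.
import hashlib

# What the attacker sees: a readable MD5 of the plaintext (hypothetical value).
leaked_md5 = hashlib.md5(b"s3cret-launch-code").hexdigest()

# Attacker's guesses, e.g. from a dictionary or rainbow table.
candidates = [b"password123", b"hunter2", b"s3cret-launch-code", b"letmein"]

recovered = next(
    (c for c in candidates if hashlib.md5(c).hexdigest() == leaked_md5), None
)
assert recovered == b"s3cret-launch-code"  # plaintext recovered, key untouched
```

As the footnote below notes, this is only practical when the plaintext is short or otherwise guessable; the point is that the hash leaks everything a brute-force attacker needs.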

AWS Encryption Bugs: The CVEs

CVE-2020-8911 was detailed by Dr Schmieg on GitHub on Monday.

It involves a bug in how AWS’s SDK implements AES-CBC, a mechanism used for encryption and decryption as well as key wrapping and unwrapping. As she notes: “V1 of the S3 crypto SDK allows users to encrypt files with AES-CBC, without computing a MAC [message authentication code that checks the ciphertext prior to decryption] on the data.”

“This exposes a padding oracle vulnerability.**

“If the attacker has write access to the S3 bucket… they can reconstruct the plaintext with (on average) 128*length(plaintext) queries to the endpoint, by exploiting CBC’s ability to manipulate the bytes of the next block and PKCS5 padding errors.”

This issue is fixed in V2 of the SDK, which disables CBC-mode encryption for new files. Old files that were encrypted with CBC mode remain vulnerable until they are re-encrypted with AES-GCM.

Amazon downplayed the bug (which is rated “medium”) saying: “To use this issue as part of a security attack, an attacker would need the ability to upload or modify objects, and also to observe whether or not a target has successfully decrypted an object. By observing those attempts, an attacker could gradually learn the value of encrypted content, one byte at a time and at a cost of 128 attempts per byte.”
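The byte-at-a-time mechanism can be sketched end to end. This is a toy demonstration, not AWS SDK code: a small Feistel construction built from SHA-256 stands in for AES so the example needs only the standard library, but the CBC/PKCS#7 oracle logic, and the roughly 128-guesses-per-byte cost, are the ones described above.

```python
# Toy padding-oracle attack: recover CBC-encrypted plaintext using only
# "was the padding valid?" responses, never the key.
import hashlib
import os

BLOCK = 16

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def _round(key, half, rnd):
    # Feistel round function: keyed hash truncated to half-block size.
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:8]

def enc_block(key, block):
    l, r = block[:8], block[8:]
    for rnd in range(4):
        l, r = r, _xor(l, _round(key, r, rnd))
    return l + r

def dec_block(key, block):
    l, r = block[:8], block[8:]
    for rnd in reversed(range(4)):
        l, r = _xor(r, _round(key, l, rnd)), l
    return l + r

def cbc_encrypt(key, iv, msg):
    pad = BLOCK - len(msg) % BLOCK          # PKCS#7 padding
    msg += bytes([pad]) * pad
    out, prev = b"", iv
    for i in range(0, len(msg), BLOCK):
        prev = enc_block(key, _xor(msg[i:i + BLOCK], prev))
        out += prev
    return out

def padding_ok(key, prev, block):
    # The "oracle": leaks only whether decryption produced valid padding.
    pt = _xor(dec_block(key, block), prev)
    return 1 <= pt[-1] <= BLOCK and pt.endswith(bytes([pt[-1]]) * pt[-1])

def attack_block(oracle, target):
    # Recover D(target) byte by byte by forging the "previous" block.
    inter = bytearray(BLOCK)
    for padval in range(1, BLOCK + 1):
        pos = BLOCK - padval
        for guess in range(256):
            forged = bytearray(BLOCK)
            forged[pos] = guess
            for j in range(pos + 1, BLOCK):
                forged[j] = inter[j] ^ padval   # force known tail to padval
            if oracle(bytes(forged), target):
                if padval == 1:                 # rule out accidental longer padding
                    forged[pos - 1] ^= 0xFF
                    if not oracle(bytes(forged), target):
                        continue
                inter[pos] = guess ^ padval
                break
    return bytes(inter)

key, iv = os.urandom(16), os.urandom(16)
msg = b"bucket secret!"                         # hypothetical object contents
ct = cbc_encrypt(key, iv, msg)

oracle = lambda prev, blk: padding_ok(key, prev, blk)
pt = _xor(attack_block(oracle, ct), iv)         # attacker never touches the key
pt = pt[:-pt[-1]]                               # strip padding
assert pt == msg
```

The inner loop averages 128 guesses per byte, matching both Schmieg's and Amazon's descriptions of the cost.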

The company is retiring AES-CBC as an option for encrypting new objects, it said, in favour of AES-GCM (which is “now supported and performant in all modern runtimes and languages”).


CVE-2020-8912 was also detailed with a proof-of-concept by Dr Schmieg this week.

The bug is in the Golang AWS S3 Crypto SDK (“with a similar issue in the non ‘strict’ versions of C++ and Java S3 Crypto SDKs”).

V1 of the S3 crypto SDK does not authenticate the algorithm parameters for the data encryption key, she explained. “An attacker with write access to the bucket can use this in order to change the encryption algorithm of an object in the bucket…”

“For example, a switch from AES-GCM to AES-CTR in combination with a decryption oracle can reveal the authentication key used by AES-GCM, as decrypting the GMAC tag leaves the authentication key recoverable as an algebraic equation.”

Until now, the only algorithms available in the AWS SDK have been AES-GCM and AES-CBC. By switching an object’s algorithm from AES-GCM to AES-CBC, an attacker can reconstruct the plaintext through an “oracle endpoint revealing decryption failures, by brute forcing 16 byte chunks of the plaintext”.
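The GCM-to-CTR switch works because AES-GCM’s ciphertext body is ordinary AES-CTR output; only the tag adds integrity. A sketch using the `cryptography` package (illustrative, not the AWS SDK’s code):

```python
# Demonstrate that AES-GCM ciphertext decrypts as plain AES-CTR, so an
# unauthenticated "which algorithm?" field lets an attacker bypass the tag.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(12)
msg = b"same bytes, two 'different' algorithms!!"

enc = Cipher(algorithms.AES(key), modes.GCM(nonce)).encryptor()
ct = enc.update(msg) + enc.finalize()   # GCM ciphertext body (tag in enc.tag)

# With a 96-bit nonce, GCM encrypts the body with the counter starting at
# nonce || 0x00000002, so AES-CTR from that counter recovers the plaintext;
# the tag, and any integrity it provided, is simply never consulted.
ctr_start = nonce + (2).to_bytes(4, "big")
dec = Cipher(algorithms.AES(key), modes.CTR(ctr_start)).decryptor()
recovered = dec.update(ct) + dec.finalize()
assert recovered == msg
```

This is why authenticating the algorithm parameters alongside the data, as V2 of the SDK does, matters: the ciphertext alone cannot tell the two modes apart.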

More details of this attack are in Dr Schmieg’s GitHub proof-of-concept.

The issue is fixed in version 2 of the S3 crypto SDK.

AWS Response

AWS said: “We’re making updates to the Amazon S3 Encryption Client in the AWS SDKs. The updates add fixes for two issues in the AWS C++ SDK that the AWS Cryptography team discovered, and for three issues that were discovered and reported by Sophie Schmieg, from Google’s ISE team. The issues are interesting finds, and they mirror issues that have been discovered in other cryptographic designs (including SSL!), but they also all require a privileged level of access, such as write access to an S3 bucket and the ability to observe whether a decryption operation has succeeded or not.

“These issues do not impact S3 server-side encryption, or S3’s SSL/TLS encryption, which also protects these issues from any network threats”.

Amazon also made a series of updates that fixed bugs found internally.

The company added: “We’ve updated the AWS C++ SDK’s implementation of the AES-GCM encryption algorithm to correctly validate the GCM tag. Prior to this update, someone with sufficient access to modify the encrypted data could corrupt or alter the plaintext data, and that the change would survive decryption. This would succeed if the C++ SDK is being used to decrypt data; our other SDKs would detect the alteration. This sort of issue was one of the design considerations behind “SCRAM”, an encryption mode we released earlier this year that cryptographically prevents errors like this. We may use SCRAM in future versions of our encryption formats, but for now we’ve made the backwards-compatible change to have the AWS C++ SDK detect any alterations.”
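What “correctly validate the GCM tag” means in practice: a one-bit change to the ciphertext should make decryption fail rather than return silently corrupted plaintext. A sketch with the `cryptography` package (again illustrative, not the C++ SDK’s code):

```python
# Show correct GCM tag validation rejecting tampered ciphertext.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(12)
enc = Cipher(algorithms.AES(key), modes.GCM(nonce)).encryptor()
ct = enc.update(b"object contents") + enc.finalize()

tampered = bytes([ct[0] ^ 0x01]) + ct[1:]      # attacker flips one bit
dec = Cipher(algorithms.AES(key), modes.GCM(nonce, enc.tag)).decryptor()
try:
    dec.update(tampered)
    dec.finalize()                              # tag check happens here
    verified = False                            # alteration slipped through
except InvalidTag:
    verified = True                             # alteration detected
assert verified
```

Before the fix, the C++ SDK skipped this check and the corrupted plaintext would have been returned to the caller.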

AWS has also added new alerts to “identify attempts to use encryption without robust integrity checks. We have also added additional interoperability testing, regression tests, and validation to all updated S3 Encryption Client implementations.”

Schmieg noted on Twitter: “This issue demonstrates nicely how software engineers and cryptographers have a completely different idea about what a hash function does. For many software engineers, a hash function is a “one-way” function, with the output being essentially meaningless. For cryptographers on the other hand, the hash of anything that isn’t a cryptographic key itself is basically the same as the input, so e.g. a digital signature is seen as revealing the signed data, even though the signature only contains a hash of this data. The truth lies somewhere between these two viewpoints, but in general, the “except by brute force” part of “a hash function cannot be inverted except by brute force” being very important and often neglected.”

* As Dr Schmieg puts it: “The S3 crypto library tries to store an unencrypted hash of the plaintext alongside the ciphertext as a metadata field. This hash can be used to brute force the plaintext in an offline attack, if the hash is readable to the attacker. In order to be impacted by this issue, the attacker has to be able to guess the plaintext as a whole. The attack is theoretically valid if the plaintext entropy is below the key size, i.e. if it is easier to brute force the plaintext instead of the key itself, but practically feasible only for short plaintexts or plaintexts otherwise accessible to the attacker in order to create a rainbow table. The issue has been fixed server-side by AWS as of Aug 5th, by blocking the related metadata field. No S3 objects are affected anymore.”

** Ed: Crudely, an oracle is a system that performs cryptographic operations on an attacker’s behalf, or leaks information about them, such as whether a decryption succeeded. A padding oracle reveals only whether decrypted data carried valid padding, which is enough to recover plaintext byte by byte. Nothing to do with “Oracle” the company.



This article is from the CBROnline archive: some formatting and images may not be present.

CBR Staff Writer

CBR Online legacy content.