The Great iPhone vs Android Debate® has been raging for years among consumers: opinions are strident on which is slicker, sexier and more secure.
The attractions of a crudely drawn conflict extend – no surprise – to the security research community: few recent incidents have thrown up quite as much starkly divisive opining as Google’s research into the exploitation of iPhone vulnerabilities, and Apple’s less than effusive reaction to those revelations.
Computer Business Review dug a little deeper into the debate.
iPhone vs Android Security
Security researchers at Google lit a fire under backsides in Cupertino when, on August 30, they published a series of blog posts detailing five unique iOS exploit chains.
The research, by Google’s Project Zero and Threat Analysis Group, detailed a series of “watering hole” websites. These were being used to indiscriminately attack iPhone-using visitors from a specific demographic, using a total of 14 vulnerabilities: seven in Safari, five in the kernel and two separate sandbox escapes.
Google claimed the attack stretched over a two-year period “at least”, spanning almost every iOS version “through to the latest version of iOS 12.”
Despite the apparent involvement of a sophisticated, state-level attacker, the incident dented the iPhone’s reputation for security (“we’ve heard from customers who were concerned by some of the claims,” Apple admitted), and Google’s research made pointed reference to perceived poor quality in iPhone development processes.
(One exploit chain made use of code which Google said it can “only assume slipped through code review, testing and QA”, including a feature apparently abandoned mid-development: any Apple developer attempting to use the feature over the past four years would have caused a kernel panic; i.e. crashed their own phone.)
When, days later, zero-day broker Zerodium cut the prices it pays for iOS exploits and hiked them for Android exploits, it was just salt in the wound.
“It was China” (Said No One)
Curiously, Google – despite emphasising that the attack represented “a sustained effort to hack the users of iPhones in certain communities” and clearly knowing what that community was – declined to identify the targeted demographic in its report on the incident. It took a terse and unhappy response from Apple a week later to make that clear: the group targeted was China’s persecuted Uyghur minority.
(Cybereason CISO Israel Barak told Computer Business Review he was sympathetic to the challenges of attribution. He said: “This kind of activity? It’s highly likely to be something that’s outsourced. It’s probably not a core nation state espionage activity. Especially in the Chinese ecosystem… actual nation state agencies focus on high value or high risk activities and outsource activities like this.”)
In the short statement, published on September 6, Apple played down the incident, saying: “All evidence indicates that these website attacks were only operational for a brief period, roughly two months, not ‘two years’ as Google implies”.
The company said it worked “extremely quickly” to fix the vulnerabilities, adding (arguably churlishly) “when Google approached us, we were already in the process of fixing the exploited bugs.”
The September 6 blog post was widely panned by the security community for breaking with unwritten industry decorum: it failed to thank Google, downplayed the severity of the incident – “iOS security is unmatched”, it claimed – and offered little openness about ongoing steps being taken to shore up security.
Former Facebook CISO Alex Stamos was among those who stuck a boot in, taking to Twitter to draft a replacement Apple statement.
Security researchers say Apple has long tried to enforce control over security research by making devices hard to access by third-party researchers.
Debugging work requires using specialist cables, developer-fused iPhones, and other equipment. (A Motherboard investigation puts the price for these cables at $2,000 on the grey market and a dev-fused iPhone XR at a chunky $20,000.)
Critics argue that Apple’s reputation for sub-par engagement with the security research community (many gripe that they’ve won scant thanks for efforts to highlight security shortcomings) and litigious approach to tools that make reverse engineering of its software more viable, have done it no favours.
They point to Apple’s action against SektionEins and Corellium (both tools used by security researchers to identify security weaknesses in iOS) to make the point, saying that in the 21st century, security has to be a team sport.
Many add that Apple is vulnerable because once its tough skin is peeled, it’s all juicy middle: as a monoculture (unlike Android’s plethora of devices and the heterogeneous hardware and software they run) you bag one iOS, you bag all.
Thomas Reed, director of Mac and mobile at security firm Malwarebytes, told Computer Business Review: “The majority of the bugs used to infect iOS devices [in this particular incident] were actually already fixed, iOS is more at risk when users don’t update it, and the two zero-day bugs were fixed extremely fast.”
He added: “Apple has been rightfully criticised for the opaqueness of iOS and its response to threats. This malware would have been spotted earlier if it were possible to inspect processes running on iOS, but iOS is a black box that cannot be easily inspected, so when iOS malware does happen, it’s extremely hard to recognise.
“Many have demanded that Apple provide some capability for doing this, and I absolutely agree with that.”
Dear Apple: you can still take e2e responsibility & let us help. You can't fight malware alone. No one can. #FreeTheSandbox, *at least* when users have physical connection to the device + pincode. Android should implement the same mechanism. #TrustButVerify
Despite these claims of “security by opacity”, Apple says it has been working to engage with the broader security community, not least through a radically overhauled bug bounty programme. At Black Hat in August the company expanded the programme to include macOS, tvOS, watchOS and iCloud, and sharply increased rewards to $1 million for a zero-click, full-chain attack.
Many emphasise that framing this as an Android vs iPhone debate is a strawman, intriguing though the recent security research by Google was.
As Will LaSala, director of security services at OneSpan, notes: “Ultimately, mobile devices are untrusted and potentially hostile environments, regardless of whether the platform is iOS or Android. [And] even though the new Android 10 comes with updated security features, many consumers won’t be able to access them because their carrier or device manufacturer will not distribute the updated OS.”
He added: “Data shows slow adoption of Android Pie (9), which only 10.4 percent of Android users had installed in May – nine months after its release.”
Android’s long-held reputation for being a bug-ridden security nightmare is fading, however, as Google and others finesse improvements.
Security veteran the Grugq, who sells hardened Android phones for the uber-security conscious, told Computer Business Review: “Android has significantly improved its platform security, starting with Android 8 and now 9. As a result, modern Android devices are quite robust compared to the state of previous versions.”
He added: “The wide range of target devices means each device requires modifications of the exploit, which is an added expense. Sometimes a vulnerability on a chipset will not be exploitable on a different chipset. Phones can ship under the same model name but have different boards and chipsets. Fragmentation makes it more expensive to get exploits that work across a large number of devices. Even the diverse patch levels just increase the complexity of the possible targets…”
Those improvements are ongoing.
As Jeff Vander Stoep from Android’s security and privacy team noted in a blog earlier this year, most of Android’s vulnerabilities occur in its media and Bluetooth components.
(Use-after-free (UAF), integer overflows, and out-of-bounds (OOB) reads and writes comprise 90 percent of those vulnerabilities.)
He said: “[In Android Q/10] we moved software codecs out of the main mediacodec service into a constrained sandbox. This is a big step forward in our effort to improve security by isolating various media components.”
His team uses a range of open source tools to build and secure Android and, he says, actively contributes back to them. To developers it’s a winning approach: sunlight, most hold, is the best disinfectant. To Apple, its black boxes are quite sterile enough, thank you; the Uyghurs are now safe to carry on using them; and Google can stop splashing its bottle of disinfectant around outside its own doors. Now. Please.