There are growing questions over the extent to which mobile apps can eavesdrop on their users. And if they can, what can they do with that information? Could a conversation recorded by a voice-activated app get us into trouble? (Writes Andrew Kelly, Principal Consultant – Cyber Security, QinetiQ).
The feeling that we are constantly being listened to is scary enough. But what if the app were hijacked by criminals or terrorists, or the recorded conversation stolen when the app’s creator is hacked? What if that app were on a phone belonging to a police officer or a doctor? What could the hacker do with that conversation?
Recording personal conversations may send particular chills down the spines of privacy advocates, but it is far from the only personal information that legitimate mobile apps collect. Mobile apps track our movements, follow our interests, and increasingly – via connected IoT devices – control our world.
This has not gone unnoticed by organised cybercriminals, who see opportunities to steal sensitive information, cause chaos, and extort money. For individuals, this could mean financial loss or embarrassment. For government departments or companies, it could mean big fines and public humiliation. For safety-critical industries – police, defence, hospitals – it could compromise national security and put lives at risk.
A particular worry is that apps create weak points in highly secure environments. Many safety-critical organisations used to run purpose-built, locked-down systems, but mobile devices are so ubiquitous and practical that it makes sense to use them. For example, the emergency services are currently migrating from voice-only TETRA systems to modified smartphones communicating over 4G networks. Whilst this project is taking app security seriously, conducting ongoing testing of any app before it is allowed onto devices, most organisations have not fully understood the associated risks.
What do Apps Know?
Most people are surprised at how much data seemingly innocuous apps collect, and where they share it.
We recently ran tests on widely used apps and found the following permissions were commonly demanded upon installation:
- To access the phone number, device ID, SIM ID (so it knows when the user moves device) and numbers called
- To make changes to device configuration and install shortcuts
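Checks like this can be automated as part of an app-vetting pipeline. The sketch below is a minimal, illustrative example: the permission identifiers are real Android permission names, but the set of permissions treated as sensitive, and the function itself, are assumptions for illustration rather than any official risk list.

```python
# Hypothetical sketch: flag sensitive permissions requested by an app.
# The categorisation below is illustrative, not an official list.
SENSITIVE_PERMISSIONS = {
    "android.permission.READ_PHONE_STATE": "device ID, SIM ID, phone number",
    "android.permission.READ_CALL_LOG": "numbers called",
    "android.permission.RECORD_AUDIO": "microphone access",
    "android.permission.ACCESS_FINE_LOCATION": "precise location tracking",
}

def flag_sensitive(requested: list[str]) -> dict[str, str]:
    """Return the subset of requested permissions deemed sensitive."""
    return {p: SENSITIVE_PERMISSIONS[p]
            for p in requested if p in SENSITIVE_PERMISSIONS}

# Example: an app requesting internet, phone state and call log access
flags = flag_sensitive([
    "android.permission.INTERNET",
    "android.permission.READ_PHONE_STATE",
    "android.permission.READ_CALL_LOG",
])
for perm, reason in flags.items():
    print(f"REVIEW: {perm} -> {reason}")
```

In practice the requested permissions would be extracted from the app's manifest rather than hard-coded, but the principle – comparing what an app asks for against what your policy allows – is the same.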
If the company collecting this data is hacked, the bad guys get a lot of personal information. More worryingly, if the device is compromised, e.g. via a rogue app or an attack which inserts malicious code into a legitimate app, the bad guys have this information in real time. And those are just the legitimate apps; there are plenty of rogue apps out there designed specifically to take control of your phone.
What is notable is the potential for targeted attacks. Hacking has tended to focus on financial fraud, but mobile apps could be used for ideological or disruptive ends. Gaining control of a phone belonging to a police officer, doctor or politician could expose information which damages reputations or threatens national security. Taking control of heating and lighting could disrupt daily operations.
Minimising App Risk
There is no doubt that apps present a growing threat and cybercriminals are putting more resources into targeting them. But employees carrying phones around is a fact of modern life. Any response must balance security with employee freedoms.
Organisations need to start with a clear view of what apps in their ecosystem are doing so they can make informed decisions about them. This starts with testing regimes for apps as they are downloaded.
Most fundamentally, these should check for malware, which should be blocked and reported. But a thorough assessment of what legitimate apps are doing must also be conducted. Look for device permissions and trackers within apps to assess how much control each app exerts over the device. Carry out network traffic analysis to see whether data is being sent to a location or company that does not meet your security standards. Analyse the app’s code and APIs to identify zero-day issues that could be exploited.
Some apps update weekly, changing what data they collect, so any app testing programme should be ongoing and include flagging when updates or new versions alter permissions.
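Flagging permission changes between versions is a straightforward set comparison. The sketch below is an assumed, minimal implementation of that idea; the permission names used in the example are real Android identifiers, but the scenario is invented for illustration.

```python
def permission_diff(old: set[str], new: set[str]) -> dict[str, set[str]]:
    """Compare the permission sets of two versions of the same app."""
    return {"added": new - old, "removed": old - new}

# Example: an update that quietly adds microphone access
v1 = {"android.permission.INTERNET"}
v2 = {"android.permission.INTERNET", "android.permission.RECORD_AUDIO"}

diff = permission_diff(v1, v2)
if diff["added"]:
    print("Update requests new permissions:", sorted(diff["added"]))
```

A testing programme would run a comparison like this automatically on every new version, holding the update back for review whenever the "added" set is non-empty.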
Armed with this information, organisations can understand risk and make informed security decisions.
If an app represents an undue risk, there are a few options. Some risks, such as tracker URLs, can be blocked with a firewall on the device or at the endpoint of a VPN. The app’s software could be modified to meet your security criteria. Larger organisations may have the clout to get developers to make changes. If all else fails, apps can be blacklisted, with the app testing analysis you have conducted used to explain why.
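Blocking tracker URLs ultimately comes down to matching a request's host against a blocklist, including any subdomains. The following is a minimal sketch of that matching logic; the blocklisted domains are placeholders, not real tracker hosts.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of tracker domains (placeholders for illustration)
BLOCKED_DOMAINS = {"tracker.example.com", "analytics.example.net"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host, or any parent domain, is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host itself and every parent domain, so that
    # "cdn.tracker.example.com" matches the "tracker.example.com" entry.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS
               for i in range(len(parts)))

print(is_blocked("https://cdn.tracker.example.com/pixel.gif"))  # blocked
print(is_blocked("https://example.org/news"))                   # allowed
```

A device firewall or VPN endpoint applies the same test at the DNS or connection level rather than in application code, but the matching rule is equivalent.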
Like anything in security, the solution requires a technical analysis of threats, combined with informed human decisions, which balance security with allowing a business to function effectively. As apps grow as a threat vector, organisations need to expand testing regimes to adequately cover the risk they present, so that they can make those informed decisions.
This article is from the CBROnline archive: some formatting and images may not be present.