Microsoft’s contentious new feature, Recall AI, will not be launched alongside the much-anticipated Copilot Plus PCs next week due to security and privacy concerns.
Recall AI was unveiled last month as the flagship feature of the new Copilot Plus computers, the first-of-their-kind PCs with built-in AI hardware and an operating system suited to AI-powered services. The feature was supposed to create “an explorable visual timeline” of users’ activity by taking constant screenshots of everything appearing on the screen.
Recall AI’s heavy tracking of activity, including voice chats and web browsing, prompted concern and criticism from other tech players and cybersecurity analysts. Several users and experts went so far as to label the new feature a potential security “disaster”, warning the public about the risks of spying and data privacy breaches. Elon Musk, who criticised Apple’s partnership with OpenAI earlier this week, wrote on X that Recall AI was a “Black Mirror episode”, likening the feature to the dystopian surveillance depicted in the series.
The Copilot Plus laptops will still be launched on 18 June without the additional AI feature. Instead, Microsoft will conduct further tests of Recall AI with the Windows Insider Program, a community of millions of Windows enthusiasts who regularly preview features and give feedback.
“We are adjusting the release model for Recall to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security,” head of Windows and devices Pavan Davuluri said in a blog post. He added: “This decision is rooted in our commitment to providing a trusted, secure and robust experience for all customers and to seek additional feedback prior to making the feature available to all Copilot+ PC users.”
What are Microsoft’s measures for AI safety?
In May 2024, the UK’s Information Commissioner’s Office (ICO) launched an inquiry into Microsoft’s new AI feature to assess the potential risks attached to its deployment. An ICO spokesperson said they “expect organisations to be transparent with users about how their data is being used and only process personal data to the extent that it is necessary to achieve a specific purpose,” and stressed that the industry must “consider data protection from the outset and rigorously assess and mitigate risks to people’s rights and freedoms before bringing products to market.”
“We are making enquiries with Microsoft to understand the safeguards in place to protect user privacy,” the ICO spokesperson added.
Last week, Microsoft announced updates to Recall AI in response to the concerns raised about the controversial feature. Modifications include making Recall AI opt-in rather than enabled by default, and ensuring the screenshot database is encrypted and requires user authentication to be accessed. However, some experts have noted that cybersecurity risks remain even after these updates.
The decision to postpone Recall AI’s launch comes only hours after Microsoft president Brad Smith testified yesterday before the House Homeland Security Committee at a hearing over the company’s recent cybersecurity failures.
The tech giant recently drew criticism from US government officials after a US Cyber Safety Review Board (CSRB) report blamed a “preventable” hack on a “cascade of Microsoft’s avoidable errors” that enabled the Chinese state-backed group Storm-0558 to break into the email accounts of senior US officials. The report criticised the company for multiple cybersecurity lapses and a lack of transparency surrounding the resolution of vulnerabilities.