Elon Musk’s public pursuit of Twitter looks set to pay off after the social network’s board of directors accepted a $44bn takeover bid from the world’s richest man. Musk has already laid out big plans for Twitter, including a pledge to make the company’s content policing algorithm open-source in the name of transparency. But experts say this may amount to little more than a “token gesture” and could present a cybersecurity risk for Twitter and its users.
Musk first bid for Twitter two weeks ago after taking a 9% stake in the company. The social network initially rebuffed his advances, but yesterday announced a deal had been struck which will see the Tesla CEO pay $54.20 per share for the company.
"Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated," said Musk, who will take Twitter private after the deal is completed.
Is an open-source Twitter algorithm realistic?
Twitter has come under fire for its inability to deal with hate speech and misinformation, problems which also plague other social media platforms. Both the EU and UK have brought forward legislation, the Digital Services Act and the Online Safety Bill respectively, which aim to put more responsibility on social media platforms to control dangerous content. Musk has been a vocal critic of the platform's content moderation policies for different reasons, chiefly because he believes they stymie free speech.
Given that Twitter serves as the de facto public town square, failing to adhere to free speech principles fundamentally undermines democracy. What should be done? https://t.co/aPS9ycji37
— Elon Musk (@elonmusk) March 26, 2022
Speaking of his plans for the platform, Musk said: "I also want to make Twitter better than ever by enhancing the product with new features, making the algorithms open source to increase trust, defeating the spam bots, and authenticating all humans. Twitter has tremendous potential – I look forward to working with the company and the community of users to unlock it."
Musk has yet to elaborate on how he might make the company's algorithms, which determine which content is seen by which users, available for greater scrutiny. It may be easier said than done. "Moves to increase transparency by disclosing information about how such algorithms work are generally a good thing, but open sourcing the actual code comes with its own risks," says Michael Rovatsos, professor of artificial intelligence and director of the Bayes Centre for data science and AI at Edinburgh University.
The mechanisms that determine what appears on a social media platform timeline are typically "extremely complex", Professor Rovatsos says, and "involve things like moderation and filtering, promotion of paid content, and user profiling."
He adds: "When we talk about 'the algorithm', it’s actually a complex combination of data processing and human intervention steps, plus algorithmic models that have been trained with using historical data – an open source version of core algorithms will likely not tell us very much about how the content on Twitter is actually shaped.
"Having the code is also of course not sufficient to really understand how the platform works, as its actual behaviour depends on the data fed into it. I think it’s unlikely that Twitter would disclose substantial amounts of this data, for obvious commercial reasons, and, in the case of tweets that are not public such sharing would violate privacy rules in many cases."
Dr Daan Kolkman, senior researcher in decision making at the Jheronimus Academy of Data Science in the Netherlands, agrees that open sourcing the algorithm "seems like a good move". But, he says, "it may well amount to little more than a token gesture in practice. It all depends on how exactly it will be open sourced."
He explains: "Just having access to the algorithm is not enough to ensure fairness. To do a solid algorithmic audit you want, amongst other things, access to data used to train the model and insight into the development process. Twitter’s algorithms are likely updated frequently, so just having a snapshot isn’t all that useful."
Opening Twitter's algorithm could pose a security risk
Even if Musk succeeds in making the Twitter algorithm open source, he may create more problems for himself and his newly acquired company when it comes to security.
If made freely available, the Twitter algorithm would likely be adopted by other social platforms, advertisers, and others who "are looking to hone their user targeting", says Jamie Moles of security company ExtraHop. That could give cybercriminals a powerful incentive to compromise it, he argues.
"As we've seen with Log4Shell and Spring4Shell, vulnerabilities in widely used open-source applications are exponentially more valuable," says Moles. "Making its code open source may increase transparency for Twitter users, but it may also make Twitter a much bigger target for attackers."
Musk's pledge to eradicate bots, however, is likely to be of interest to the security community. "If he's successful, the methods used by Twitter to eliminate bots from the platform may generate new techniques that improve the detection and identification of spam emails, spam posts, and other malicious intrusion attempts," says Moles. "If Musk and his team can train AI to be more effective in combating this, it may well be a boon to security practitioners everywhere."
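As a flavour of what such detection techniques look like, a deliberately crude heuristic is sketched below; every threshold and field name is invented, and production systems rely on trained models over far richer behavioural signals:

```python
# Deliberately simplified bot heuristic. Real detectors are trained models
# over far richer behavioural data; every threshold here is invented.

def looks_like_bot(account):
    signals = 0
    if account["tweets_per_day"] > 200:          # inhumanly high posting rate
        signals += 1
    if account["account_age_days"] < 7:          # very new account
        signals += 1
    if account["followers"] < 5 and account["following"] > 1000:  # lopsided graph
        signals += 1
    if account["duplicate_tweet_ratio"] > 0.8:   # mostly repeated content
        signals += 1
    return signals >= 2  # flag only when multiple weak signals agree

suspect = {"tweets_per_day": 450, "account_age_days": 3,
           "followers": 2, "following": 1500, "duplicate_tweet_ratio": 0.9}
print(looks_like_bot(suspect))  # True
```

The techniques Moles describes would, in effect, replace hand-tuned rules like these with models trained to spot such patterns at scale.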
Can Elon Musk solve Twitter's problems?
Professor Rovatsos believes taking action on Twitter's algorithm will not get to the root of issues with the platform. "There is an implicit assumption in Musk’s statement that solving problems around bias is a matter of getting the algorithm right," he says. "This is simply not true. Algorithms don’t solve ethical problems, people and organisations do, and that requires putting solid risk management, governance and oversight mechanisms in place that keep users safe, safeguard key societal values, and protect fundamental rights."
He adds that Musk's often-stated commitment to “free speech absolutism” could prove problematic. "I am concerned that reducing content moderation will exacerbate existing problems around hate speech, misinformation, and other online harms, while at the same time disclosing some technical details might create a semblance of transparency," Professor Rovatsos adds.