US President Donald Trump today deleted two tweets that slammed Google for allegedly favouring certain media outlets, saying “96% of results on ‘Trump News’ are from National Left-Wing Media” – then reposted them five hours later.

The reasons for the initial deletion were unclear.

In an unlikely one-two punch at the search giant, Human Rights Watch (HRW) followed the president’s early morning tirade with its own criticism of Google’s plans to launch a censored search engine in China, “Dragonfly”.

That criticism came in an open letter published today, also signed by Amnesty International and Reporters Without Borders, along with 10 other organisations.

Trump Attacks Google: Amnesty, HRW Wade In

Google in an emailed statement denied its algorithms favour media outlets with certain political persuasions, saying in response to Trump’s tweets: “When users type queries into the Google Search bar, our goal is to make sure they receive the most relevant answers in a matter of seconds”.

“Search is not used to set a political agenda and we don’t bias our results toward any political ideology. Every year, we issue hundreds of improvements to our algorithms to ensure they surface high-quality content in response to users’ queries. We continually work to improve Google Search and we never rank search results to manipulate political sentiment.”

This appears, however, to be precisely what Google is planning to do in China. As Human Rights Watch notes in its open letter: “According to confidential Google documents obtained by The Intercept, the new search app being developed under Project Dragonfly would comply with China’s draconian rules by automatically identifying and filtering websites blocked in China, and ‘blacklisting sensitive queries’.”

Black Boxes Raise Concern

Trump is not the first leader to raise the issue of algorithmic opacity, even if his 96 percent claim appears to lack all credibility.

In 2016, German chancellor Angela Merkel said: “The big internet platforms, via their algorithms, have become an eye of a needle, which diverse media must pass through to reach users … These algorithms, when they are not transparent, can lead to a distortion of our perception, they narrow our breadth of information.”

Google’s algorithmic engines are proprietary black boxes, and independent regulatory scrutiny of the fairness of its information shaping is essentially non-existent.

A Council of Europe expert committee on automated processing and different forms of artificial intelligence (MSI-AUT) is developing “detailed guidelines for member states to curb the negative human rights impacts of algorithms in the public and private sector and to enhance their benefits for society”, but they are unlikely to be binding.

The Council says: “Institutions that use algorithmic processes should be encouraged to provide easily accessible explanations with respect to the data that is used by the algorithm, the procedures followed and the criteria based on which decisions are proposed. Moreover, industries that develop the analytical systems used in algorithmic decision-making and data collection processes should create awareness and understanding regarding the possible biases that may be embedded in the design of algorithms.”