Erica the robot being exhibited in Madrid last year: are we being told lies about AI? Photograph: Gabriel Bouys/AFP/Getty Images

Don’t believe the hype: the media are unwittingly selling us an AI fantasy

John Naughton

Journalists need to stop parroting the industry line when it comes to artificial intelligence

Artificial intelligence (AI) is a term that is now widely used (and abused), loosely defined and mostly misunderstood. Much the same might be said of, say, quantum physics. But there is one important difference, for whereas quantum phenomena are not likely to have much of a direct impact on the lives of most people, one particular manifestation of AI – machine-learning – is already having a measurable impact on most of us.

The tech giants that own and control the technology have plans to exponentially increase that impact and to that end have crafted a distinctive narrative. Crudely summarised, it goes like this: “While there may be odd glitches and the occasional regrettable downside on the way to a glorious future, on balance AI will be good for humanity. Oh – and by the way – its progress is unstoppable, so don’t worry your silly little heads fretting about it because we take ethics very seriously.”

Critical analysis of this narrative suggests that the formula for creating it involves mixing one part fact with three parts self-serving corporate cant and one part tech-fantasy emitted by geeks who regularly inhale their own exhaust. The truly extraordinary thing, therefore, is how many apparently sane people seem to take the narrative as a credible version of humanity’s future.

Chief among them is our own dear prime minister, who in recent speeches has identified AI as a major growth area for both British industry and healthcare. But she is by no means the only politician to have drunk that particular Kool-Aid.

Why do people believe so much nonsense about AI? The obvious answer is that they are influenced by what they see, hear and read in mainstream media. But until now that was just an anecdotal conjecture. The good news is that we now have some empirical support for it, in the shape of a remarkable investigation by the Reuters Institute for the Study of Journalism at Oxford University into how UK media cover artificial intelligence.

The researchers conducted a systematic examination of 760 articles published in the first eight months of 2018 by six mainstream UK news outlets, chosen to represent a variety of political leanings – the Telegraph, Mail Online (and the Daily Mail), the Guardian, HuffPost, the BBC and the UK edition of Wired magazine. The main conclusion of the study is that media coverage of AI is dominated by the industry itself. Nearly 60% of articles were focused on new products, announcements and initiatives supposedly involving AI; a third were based on industry sources; and 12% explicitly mentioned Elon Musk, the would-be colonist of Mars.

Critically, AI products were often portrayed as relevant and competent solutions to a range of public problems. Journalists rarely questioned whether AI was likely to be the best answer to these problems, nor did they acknowledge debates about the technology’s public effects.

“By amplifying industry’s self-interested claims about AI,” said one of the researchers, “media coverage presents AI as a solution to a range of problems that will disrupt nearly all areas of our lives, often without acknowledging ongoing debates concerning AI’s potential effects. In this way, coverage also positions AI mostly as a private commercial concern and undercuts the role and potential of public action in addressing this emerging public issue.”

This research reveals why so many people seem oblivious to, or complacent about, the challenges that AI technology poses to fundamental rights and the rule of law. The tech industry narrative is explicitly designed to make sure that societies don’t twig this until it’s too late to do anything about it. (In the same way that it’s now too late to do anything about fake news.) The Oxford research suggests that the strategy is succeeding and that mainstream journalism is unwittingly aiding and abetting it.

Another plank in the industry’s strategy is to pretend that all the important issues about AI are about ethics, and accordingly the companies have banded together to finance numerous initiatives to study ethical issues in the hope of earning brownie points from gullible politicians and potential regulators. This is what is known in rugby circles as “getting your retaliation in first”, and the result is what can only be described as “ethics theatre”, much like the security theatre that goes on at airports.

Nobody should be taken in by this kind of deception. There are ethical issues in the development and deployment of any technology, but in the end it’s law, not ethics, that should decide what happens, as Paul Nemitz, principal adviser to the European commission, points out in a terrific article just published by the Royal Society. Just as architects have to think about building codes when designing a house, he writes, tech companies “will have to think from the outset… about how their future program could affect democracy, fundamental rights and the rule of law and how to ensure that the program does not undermine or disregard… these basic tenets of constitutional democracy”.

Yep. So let’s have no more “soft” coverage of artificial intelligence and some real, sceptical journalism instead.

What I’m reading

Music to my ears
Who said analogue nostalgia doesn’t have a future? According to a new BuzzAngle report, vinyl and cassette sales saw double-digit growth last year!

Rise of the machines
One giant step for a chess-playing machine… Science publishes Garry Kasparov’s thoughtful reflections on the Deep Blue supercomputer.

The search engineer
Overlooked no more. The New York Times’s long-overdue obituary of Karen Spärck Jones, the British computer scientist who laid the foundation for search engines.
