Ignoring AI is the wrong response to ethical and social questions about it

The tech sector’s free pass must be cancelled


The writer is the author of ‘Uncharted: How to Navigate the Future’
How has the tech sector got away with exceptionalism for so long? Whenever the harmful effects of a new technology emerge, the loudest clamour opposes regulation, arguing that it would strangle innovation and that the engineering is too complex for legislation.
The costs may indeed be high — loss of trust, the degradation of democracy, the rising death toll of adolescents — but regulation would be a sacrilege against the crusade for knowledge and economic growth. Or so the argument goes.
Even when apostate pioneers in artificial intelligence warn vociferously against its dangers, an equal and opposite cohort (sometimes even including the same voices) argues that the technology is too young to be constrained, that there isn’t yet sufficient evidence of harm and that business can be trusted to do the right thing.
But no other industry gets such a free pass. Electrical appliances are tested to ensure they don’t explode or catch fire. Cars aren’t allowed on the road if they don’t meet safety standards. Pharmaceutical businesses must prove their products are safe before they can go on sale. If harms emerge in any of these, regulation piles on top of regulation. Developing and enforcing standards is a cornerstone of the social contract: citizens expect their governments to strive to keep them safe.
So why is technology the exception? When Facebook was found to have experimented on users without their consent, and when social media was shown to harm young and vulnerable users, calls for regulation resounded and then went quiet. AI is only the latest technology to be shielded by the argument that it is too precious an economic opportunity to constrain. Its evangelists argue that, so far, it hasn’t shown any signs of harm, and that the engineering is too abstruse for legislators to understand. The first point is debatable; the second is often correct. I have had many conversations with MPs and chief executives who privately acknowledge feeling out of their depth when it comes to tech. It’s less embarrassing to avoid the gnarly problems, so they find common cause with the companies that also benefit from ignoring them.
Such wilful blindness is not the only option. In 1982, the Warnock Committee was established in the UK “to consider recent and potential developments in medicine and science related to human fertilisation and embryology”. Keen to maintain its lead in the field, the British government recognised that the science also posed profound ethical and social questions that had to be answered if the new technology was to be acceptable.
To lead the committee the government appointed Mary Warnock, a moral philosopher who had no expertise in the field. That philosophers have no subject, she told me once, was a gift. It meant that people trusted her and that she had no authority or perspective to defend. Her skill lay in thinking through hard problems.
To do so required convening scientists, lawyers, GPs and theologians “to have”, Warnock told me, “the conversation we are not having”. The scientists weren’t ethicists, the ethicists weren’t scientific experts. Understanding the painful first-hand experience of infertility was critical. Everyone had to learn a great deal.
Divisions ran deep. At one point, Warnock said, she learnt that the wife of the then Bishop of Ely had held secret meetings to undermine the committee. But she persevered, hugely helped by the scientist Anne McLaren, who was credited with a gift for clarifying complexity without simplifying it.
The recommendations published in 1984 met a gold standard for regulation: even the people who didn’t like them could understand and live with them. Although many bishops in the House of Lords stood against the resulting bill, it did become law. Innovation wasn’t throttled, Britain maintained its leading position and the technology benefited hundreds of thousands who had endured the agony and grief of infertility.
That AI is also complex is no reason to avoid the challenge of containing it. AI systems optimise for specific outcomes, but risk generating solutions that are worse than the problems they were designed to solve. (Solving climate change by preventing procreation is a favourite example of mine.)
Formulating standards around the technology isn’t impossible; it will simply require agreeing principles and limits of the kind that all other industries have to live with and work around. Doing that demands that people from a variety of disciplines and backgrounds listen, learn and have the informed, thoughtful and non-polarised conversations that, right now, are missing. That AI is so powerful means we have no choice but to try.
This story originally appeared in the Financial Times. Author: Margaret Heffernan