The perils of open-source AI

Experts are waking up to the threat posed by artificial intelligence programmes if they fall into malevolent hands

During the height of the Covid-19 pandemic, some well-meaning American officials floated a novel idea: why not put the details of known zoonotic viral threats online, so that scientists around the world could predict which variants might emerge next and, hopefully, find antidotes?
In theory, it sounded attractive. Covid had shown the cost of ignoring pandemic threats. It had also revealed the astonishing breakthroughs that can occur when governments finally throw resources into finding vaccines at speed.
[Illustration: a giant head in profile, a circuit board exposed behind its face, as two people in lab coats take measurements and notes below]
One lesson is that scientific expertise needs a stronger voice in politics. In 2016, a campaign body called 314 Action was created to support scientists who want to run for public office. It has already had some success: its website claims that “In 2018, we played a pivotal role in flipping the United States House of Representatives by electing nine first-time science candidates.” It will also be supporting pro-science candidates in next year’s race. But there is still a long way to go and, given how rapidly technologies such as AI are developing, that is cause for alarm.
The second lesson is that policymakers need to handle the idea of transparency carefully – not just with pathogens but with AI too. Until now, some western AI experts have chosen to publish their cutting-edge research on open-source platforms to advance the cause of science and win accolades. But just as biotech experts realised that publishing pathogen details could be risky, so AI researchers are waking up to the threat posed by these tools if they fall into malevolent hands.
This story originally appeared in the Financial Times. Author: Gillian Tett