AI could threaten humans within two years, warns UK task force adviser
An artificial intelligence task force adviser to the UK prime minister has a stark warning: AI could threaten humans within as little as two years.
The adviser, Matt Clifford, said in an interview with TalkTV that humans have a narrow window of two years to control and regulate AI before it becomes too powerful.
“The near-term risks are actually pretty scary. You can use AI today to create new recipes for bioweapons or to launch large-scale cyber attacks,” said Clifford.
“You can have really very dangerous threats to humans that could kill many humans – not all humans – simply from where we would expect models to be in two years’ time.”
Clifford, who also chairs the government’s Advanced Research and Invention Agency (ARIA), emphasised the need for a framework that addresses the safety and regulation of AI systems.
In the interview, Clifford highlighted the growing capabilities of AI systems and the urgent need to consider the risks associated with them. He warned that, without safety measures and regulation, these systems could become extremely powerful within two years and pose significant risks in both the short and long term.
He referenced an open letter signed by 350 AI experts, including OpenAI CEO Sam Altman, which called for treating AI as an existential threat akin to nuclear weapons and pandemics.
“The kind of existential risk that I think the letter writers were talking about is … about what happens once we effectively create a new species, an intelligence that is greater than humans,” explains Clifford.
Clifford went on to emphasise the importance of understanding and controlling AI models, stating that the lack of comprehension regarding their behaviour is a significant concern. He stressed the need for an audit and evaluation process before the deployment of powerful models, a sentiment shared by many AI development leaders.
“I think the thing to focus on now is how do we make sure that we know how to control these models because right now we don’t,” says Clifford.
Around the world, regulators are grappling with AI’s rapid advancement and the complexities and implications it introduces, aiming to strike a balance between protecting users and fostering innovation.
In the United Kingdom, a member of the opposition Labour Party echoed the concerns raised in the Center for AI Safety’s letter, calling for technology to be regulated on par with medicine and nuclear power.
During a US visit, UK Prime Minister Rishi Sunak is expected to pitch for a London-based global AI watchdog. Sunak has said he is “looking very carefully” at the risk of extinction posed by AI.
The EU, meanwhile, has proposed mandatory labelling of all AI-generated content to combat disinformation.
With a limited timeframe to act, policymakers, researchers, and developers must collaborate to ensure that AI systems are developed and deployed responsibly, taking into account the risks posed by their rapid advancement.