Elon Musk has warned that AI poses an “existential risk” to humanity (Image: Leon Neal/PA)

Elon Musk said he believes AI is “one of the biggest threats” to humanity and that the UK’s AI Safety Summit is “timely” given the scale of the threat.

The billionaire said AI is an “existential risk” because for the first time humans will be confronted with something “that will be much smarter than us.”

Musk was speaking on the first day of the AI Safety Summit at Bletchley Park, which the government is using to hold talks with world leaders, technology companies and scientists about the risks of advancing AI technology.

At the summit, Musk said: “I think AI is one of the biggest threats [to humanity].

“For the first time, we have a situation where there is something that is much smarter than the smartest person.

“We are not stronger or faster than other creatures, but we are more intelligent, and here we are, for the first time really in human history, with something that will be much more intelligent than us.”

“It’s not clear to me that we can control something like this, but I think we can strive to steer it in a direction that is beneficial to humanity.”

“But I think it is one of the existential risks we face, and perhaps the most urgent given the timescale and pace of progress – the summit is timely and I applaud the Prime Minister for that.”

Mr Musk added that he hoped the two-day summit could be used to build an “international consensus” on understanding AI, so that there could be an “external arbiter” in the sector “who can observe what leading AI companies are… and certainly raise the alarm if they have any concerns”.

The SpaceX and Tesla boss was originally a board member of OpenAI, the company behind one of the world’s most powerful AI chatbots, ChatGPT.

He later revealed that he left the company because he “didn’t agree with some of the intentions of the OpenAI team.”

Speaking on the Joe Rogan Experience podcast ahead of the summit, Musk said he feared environmentalists could use AI to wipe out humanity.

“If you start thinking that humans are bad, then the logical conclusion is that humans should become extinct,” he said.

“If AI is programmed by extinctionists, its utility function will be the extinction of humanity… they won’t even think it’s bad.”

The theory is widely shared on social media channels popular with climate change deniers.