Artificial intelligence is advancing rapidly, so it’s no surprise that it’s becoming increasingly difficult to spot – whether in scams, AI-generated content or phone calls.
And it’s only getting smarter, with scientists recently warning that AI has crossed a “red line” with its newfound ability to replicate itself.
Researchers from Fudan University in China found that two widely used large language models (LLMs) – a type of AI that can understand, predict and generate human-like text – could clone themselves.
The researchers tested LLMs from Meta and Alibaba across 10 trials to see whether the AIs could go rogue. Both models proved able to create working replicas of themselves – in 50 per cent and 90 per cent of trials, respectively.
The results suggest that AIs may already have the ability to go rogue.

“Successful self-replication under no human assistance is the essential step for AI to outsmart [humans], and is an early signal for rogue AIs,” they wrote in the study, posted to the preprint server arXiv.
They hope their discovery will “serve as a timely alert” to “put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible.”
The warning follows a recent ChatGPT outage that highlighted just how reliant people have already become on AI chatbots.
“ChatGPT is down again??? During the work day? So you’re telling me I have to… THINK?!” one social media user responded, while another feared they were about to get fired as they were no longer capable of carrying out their job.