Experts have raised concerns that next year’s elections in the UK and the US may be plagued by a surge of AI-driven disinformation. They warn that the proliferation of AI-generated images, text, and deepfake videos, spread by swarms of AI-powered propaganda bots, could have a significant impact.
In recent days, concerns about AI have soared, particularly following breakthroughs in generative AI technology.
Why is AI a concern now?
According to the Guardian, while earlier waves of “propaganda bots” relied on simple pre-written messages or paid trolls, tools like ChatGPT and Midjourney can produce realistic text, images, and even voice on command. This has raised concerns about interactive election interference on a large scale.
A study by NewsGuard, an organisation that monitors misinformation, tested ChatGPT and Google’s Bard chatbot and found that they were capable of generating false news narratives when prompted. This highlights the potential for AI tools to mass-produce variations of fake stories, which raises concerns about the deliberate dissemination of false information.
On Friday, NewsGuard also announced that the number of AI-generated news and information websites it tracks had more than doubled to 125 in two weeks.
Highlighting the need for regulation, public education, and transparency regarding the use of AI, OpenAI chief executive Sam Altman told US Senators, “The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern.”
“Regulation would be quite wise: people need to know if they’re talking to an AI, or if content that they’re looking at is generated or not. The ability to really model … to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education,” he added.
AI-powered disinformation
Professor Michael Wooldridge of the Alan Turing Institute identified AI-powered disinformation as his primary concern about the technology, noting that generative AI can produce disinformation on an “industrial scale”.
“Right now in terms of my worries for AI, it is number one on the list. We have elections coming up in the UK and the US and we know social media is an incredibly powerful conduit for misinformation. But we now know that generative AI can produce disinformation on an industrial scale.”
He said chatbots like ChatGPT can produce tailored misinformation targeted at specific political groups or demographics, adding that “it’s an afternoon’s work for somebody with a bit of programming experience to create fake identities and just start generating these fake news stories.”
AI-generated images
AI-generated imagery is another problem. Remember the pictures of Donald Trump getting arrested and the Pope in a ‘dope’ puffer jacket? That’s the work of artificial intelligence.
Earlier this year, as these images went viral, many people raised concerns that generated imagery could be used to sow confusion and spread misinformation.
However, Sam Altman, while addressing US Senators, suggested that these concerns might be blown out of proportion, comparing AI-generated images to the earlier adoption of Photoshop.
He said that people eventually developed an understanding that images could be manipulated.
“Photoshop came on to the scene a long time ago and for a while people were really quite fooled by Photoshopped images – then pretty quickly developed an understanding that images might be Photoshopped.”
Despite Altman’s reassurance, there are concerns that as AI capabilities advance it is becoming increasingly difficult to trust what appears on social media, and that distinguishing misinformation from deliberate disinformation will only get harder.
AI and voice cloning
In January, voice cloning gained significant attention when a manipulated video of US President Joe Biden surfaced. The original footage, in which he discussed sending tanks to Ukraine, was altered using voice simulation technology to make it appear as though he was attacking transgender people. The doctored video was widely circulated on social media platforms.
The growing availability of voice cloning services, including the cloning of corporate executives and public figures, raises concerns about its potential misuse.
Recorded Future, a US cybersecurity firm, warned that rogue actors could sell voice cloning services online.
Alexander Leslie, an analyst with the firm, said that the technology is improving and poses a heightened threat as the US presidential election approaches.
“Without widespread education and awareness, this could become a real threat vector as we head into the presidential election,” he said.
Steven Brill, the co-CEO of NewsGuard, expressed concern about the potential misuse of chatbot technology by malicious individuals to create varied versions of fake stories. He said, “the danger is someone using it deliberately to pump out these false narratives”.
(With inputs from agencies)