Facebook parent Meta has said that it took down five co-ordinated influence networks originating from China, consisting of thousands of accounts aimed at swaying online opinion about the 2024 US elections and other topics. Discussing the action it took this year, the tech giant warned that efforts were underway to influence public opinion ahead of the 2024 US Presidential Election.
“Foreign threat actors are attempting to reach people across the Internet ahead of next year’s elections, and we need to remain alert,” Meta global threat intelligence lead Ben Nimmo said during a briefing about its latest security report, as quoted by AFP.
The company said that 4,789 fake Facebook accounts were found to be involved in one of the campaigns it took action against. These accounts, which posted content about politics and US-China relations, have now been removed.
These accounts, said Meta, praised China and lashed out at its critics. They even copy-pasted real content posted by US politicians, potentially to sow divisions.
“As election campaigns ramp up, we should expect foreign influence operations to try and leverage authentic parties and debate rather than creating original content themselves,” Nimmo said.
“We anticipate that if relations with China become an election topic in a particular country, we may see China-based influence operations pivot to target those debates.”
Meta has found that these networks originated from China, but has not attributed the efforts to the Chinese government.
Russia at the top
The company said that the largest number of such networks continues to originate from Russia. These operations are based inside the country and are mainly focused on information surrounding the Ukraine war.
The security report said that websites linked to such Russia-based campaigns have, of late, also been using the Israel-Hamas war to tarnish the image of the US.
The security team at Meta expects that efforts will be made in the coming days and months to sway elections.
“We hope that people will try to be deliberate when engaging with political content across the internet,” Nimmo said.
“For political groups, it’s important to be aware that heightened partisan tensions can play into the hands of foreign threat actors.”
Nathaniel Gleicher, Meta's head of security policy, said during the briefing that generative artificial intelligence (AI) has introduced a new element into these efforts. Such tools, like ChatGPT, are being used to churn out bogus content that looks convincing.
“Threat actors can use generative AI to create larger volumes of convincing content, even if they don’t have the cultural or language skills to speak to their audiences,” Gleicher said.
(With inputs from agencies)